Using SSL for Email and Internet Protocols

If you want to increase the security of your email messaging there are several routes you can take. First of all, you should look at digitally signing and encrypting all your email messages. There are several applications that can do this, or you could move your email to the cloud and look at a server-based email system. Most of the major suppliers of web-based secure mail are extremely resistant to interception and provide good endpoint security, but obviously you have to trust your email to a third party.

Many companies won’t be happy with outsourcing their messaging like this, as it’s often the most crucial part of a company’s digital communications. So what are the options if you want to operate a secure, digitally advanced email messaging service within your corporation? The first place to investigate is increasing the security of authentication and data transmission. There are plenty of RFCs (Requests for Comments) on these subjects, particularly relating to email and its associated protocols.

Here are a few of the RFC-based protocols related to email:

  • Post Office Protocol 3 (POP3) – a simple but effective protocol used to retrieve email messages from an inbox on a dedicated email server.
  • Internet Message Access Protocol 4 (IMAP4) – usually used to retrieve any messages stored on an email server, including those in inboxes and other types of message box such as drafts, sent items and public folders.
  • Simple Mail Transfer Protocol (SMTP) – a very popular and ubiquitous email protocol, generally used just to send email messages to recipients.
  • Network News Transfer Protocol (NNTP) – not specifically an email protocol, but it can be used as one if required! It’s normally used to post and download newsgroup messages from news servers. Perhaps slightly outdated now, but a seriously efficient protocol that can be used for distributing email.

The big security issue with all these protocols, however, is that by default most of them send their messages in plain text. You can counteract this by encrypting at the client level; the easiest method is simply to use a VPN. Many people already use a VPN to access things like various media channels – read this post about BBC iPlayer VPN, which is not exclusively about security but more about bypassing region blocks.

Remember, however, that when an email message is transmitted in clear text it can be intercepted at various levels. Anyone with a decent network sniffer and access to the data could read the message content. The solution is in some ways obvious, and implied in the title of this post – implement SSL. Using this extra security layer you can protect all the simple RFC-based email protocols, and better still, it slots in simply alongside standard email systems like Exchange.

It works and is easy to implement: when SSL is enabled, the server accepts connections on the SSL port rather than the standard port that the email protocol normally uses. If you have only one or two users who need a high level of email security then using a virtual private network might be sufficient. There are many sophisticated services that come with support – for instance this BBC Live VPN is based in Prague and has some high-level security experts working in support.
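
To illustrate, here’s a minimal sketch using Python’s standard library, assuming a hypothetical mail server at mail.example.com listening on the usual SSL ports (995 for POP3S, 465 for SMTPS; the plain-text equivalents would be 110 and 25):

    import poplib
    import smtplib

    MAIL_HOST = "mail.example.com"   # hypothetical server name

    # POP3 over SSL: the server listens on port 995 instead of the plain-text 110.
    pop = poplib.POP3_SSL(MAIL_HOST, 995)
    pop.user("alice")
    pop.pass_("secret")        # credentials now travel inside the encrypted tunnel
    count, size = pop.stat()   # number of messages and total mailbox size
    pop.quit()

    # SMTP over SSL: port 465 (SMTPS) rather than the plain-text 25.
    smtp = smtplib.SMTP_SSL(MAIL_HOST, 465)
    smtp.login("alice", "secret")
    smtp.sendmail("alice@example.com", ["bob@example.com"],
                  "Subject: test\r\n\r\nHello over an encrypted connection.")
    smtp.quit()

The same pattern applies to IMAP4 (imaplib.IMAP4_SSL, port 993); in each case the protocol itself is unchanged, it is simply wrapped in an encrypted channel.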


Digital Certificate Authorities

A digital certificate essentially associates specific identity information with a public key, linking the two together in a trusted package. It is important to realise that the certificate is always signed by the certificate issuer, so we can easily verify that the information has not been changed or modified in any way. However, it is more difficult to determine whether the identity and the public key have been associated together correctly.

Remember there are no real restrictions on who can issue certificates; indeed, using OpenSSL virtually anyone with some limited technical experience can. There are a large number of certificate programming APIs and they get easier to use every day. These should be distinguished, however, from trusted certificate issuers, known as certificate authorities or CAs. The role of the certificate authority is to accept and process requests for certificates which come from organisations and individual entities. Larger organisations who require high levels of security, like the BBC for their VPNs for example, would use only the tier-one certificate authorities, which provide a high level of assurance. The CA must authenticate the information received from these entities, issue the certificates and maintain a repository of information about both the certificates and the subjects.
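
You can see this binding in action from any machine with Python installed. Here’s a minimal sketch, assuming an HTTPS server at www.example.com: the default context loads the system’s trusted CA bundle, so the handshake only succeeds if the server’s certificate chains up to a known authority.

    import socket
    import ssl

    HOST = "www.example.com"   # any HTTPS server will do

    # The default context enforces verification against the trusted CA bundle.
    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()

    # The certificate binds identity fields to a public key, signed by the issuer.
    print("subject:", dict(field[0] for field in cert["subject"]))
    print("issuer: ", dict(field[0] for field in cert["issuer"]))
    print("expires:", cert["notAfter"])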

Here’s a brief summary of the roles and responsibilities of a Certificate Authority.

  • Certificate Enrollment Process – simply the process which details how an entity must apply for a digital certificate.
  • Authentication of Subject – the certificate authority must ensure that the applicant is indeed who they claim to be. There are different levels to this, directly linked to the level of assurance the CA gives to the certificate.
  • Certificate Generation – once the identity has been assured, the certificate must be generated and released. Generating the certificate is relatively simple, but the process and delivery mechanism must be completely secure. Any issues at this stage can compromise the security and validity of the certificate.
  • Certificate Distribution – as mentioned above, the certificates and associated private keys must be distributed to the applicant.
  • Revocation of Certificate – when there is a question over the integrity of a released certificate, there must be a defined procedure to revoke it. This should be done securely, and the revoked certificate should be added to a list of invalid certificates.

The certificate authority would usually publish the standards and processes that underpin the above activities in something called a CPS (certification practice statement). In secure applications these would be included in the security benchmarks, for example for authentication of something like an IP cloaker or VPN system. These are not meant to be legalese-filled documents but practical, readable guides which detail the exact processes and the underpinning activities. Although usually designed to be straightforward, they are often fairly lengthy documents running to many pages.

X Windows System

The X Windows system, commonly abbreviated to just X, is a client/server application which allows multiple clients to use the same display managed by a server. The server in this instance manages the display, mouse and keyboard. The client is any remote application which runs on a different host (or on the same one). In most configurations the standard protocol used is TCP, because it’s the one most commonly understood by both client and host. Twenty years ago, though, many other protocols were used by X Windows – DECnet was a typical choice in large Unix and Ultrix environments.

Sometimes the X Windows system could be a dedicated piece of hardware, although this is becoming less common. Most of the time the client and server run on the same host, with inbound connections from remote clients allowed when required. In some specialised support environments you’ll even find the processes running on a workstation to support X Windows access. In some sense where the application is installed is irrelevant; what is more important is that a reliable bi-directional protocol is available for communication. To support increased security, particularly in certain sensitive environments, access may be restricted and controlled via an online IP changer.

X Windows running over something like UDP is never going to work very well; the ideal, as mentioned above, is something like TCP. The main communication mechanism relies on 8-bit bytes transferred across the connection between the client and server. On a Unix system where the client and server are installed on the same host, the system will default to Unix domain protocols instead, because these are more efficient on the same host and minimise the IP processing involved in the communication stream.

It is when multiple connections are being used that communication can get more complex. This is not unusual, as X Windows is often used to allow multiple connections to an application running on a Unix system. Sometimes these applications have specific requirements for full functionality, for example special graphics commands which affect the screen. It is important to remember, though, that all X Windows does is give these clients access to the keyboard, display and mouse. Although it might seem similar, it is not the same as a remote access protocol like Telnet, which allows logging in to a remote host but no direct control of the hardware.

The X Windows system is normally there to allow access to important applications, so it will usually be bootstrapped at start-up. The server will create a TCP endpoint and do a passive open on a port (by default 6000+n, where n is the number of the display; it is incremented to allow multiple concurrent connections). On a Unix server this will usually be a domain socket, likewise incremented by the display number. Sometimes configuration files will be needed to support particular applications, especially ones with graphical requirements like the BBC iPlayer; these must be downloaded before the session is established.
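
As a quick illustration of that addressing scheme, here’s a minimal Python sketch assuming the usual conventions (display n on TCP port 6000+n, or the Unix domain socket /tmp/.X11-unix/Xn when client and server share a host):

    import os
    import socket

    # DISPLAY is usually "hostname:n" or ":n"; a ":0.0" form adds a screen number.
    display = os.environ.get("DISPLAY", ":0")
    host, _, number = display.rpartition(":")
    n = int(number.split(".")[0])          # strip any screen suffix

    if host in ("", "unix"):
        # Local display: the server listens on a Unix domain socket instead.
        print("local display, socket:", f"/tmp/.X11-unix/X{n}")
    else:
        # Remote display: probe the passive-open TCP endpoint (port 6000+n).
        port = 6000 + n
        with socket.create_connection((host, port), timeout=2):
            print(f"X server reachable at {host}:{port}")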

Introduction to IP Routing

Conceptually IP routing is pretty straightforward, especially when you look at it from the host’s point of view. If the destination is directly connected, such as over a direct link or on the same Ethernet network, then the IP datagram is simply forwarded to its destination. If it’s not directly connected then the host simply sends the datagram to its default router and lets that handle the next stage of the delivery. This simple example covers most scenarios, for example an IP packet being routed through a proxy to allow access to the BBC iPlayer – like this situation.

The basis of IP routing is that it is done on a hop-by-hop basis. The Internet Protocol does not know the complete route to any destination except those directly connected to it. IP routing relies on sending the datagram to the next-hop router – assuming this host is closer to the destination – until it reaches a router which is directly connected to the destination.

IP routing performs the following lookups, in order –

  • Search the routing table for an entry matching both the network and host ID. If there is a match, the packet can be forwarded straight to the destination.
  • Search the routing table for an entry that matches the network ID. Only one entry is needed for an entire network, and the packet can then be sent to the indicated next hop.
  • If both searches fail, look for the entry marked ’default’. The packet is then sent to the next-hop router associated with this entry.

If all these searches fail then the datagram is undeliverable. Even if it has a custom address, perhaps an IP address for Netflix routing, it still will not matter. In reality most lookups fail the first two searches and are passed to the default gateway, which could be a router or even a proxy site which forwards traffic to the internet.
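
Here’s a minimal sketch of that three-step lookup using Python’s ipaddress module; the table entries and next-hop addresses are made up for illustration. Note how the host route, network route and default route collapse neatly into a single longest-prefix match:

    import ipaddress

    # Toy routing table: a /32 entry is a host route, /24 a network route,
    # and 0.0.0.0/0 the default route of last resort.
    ROUTES = [
        (ipaddress.ip_network("192.168.1.7/32"), "direct"),
        (ipaddress.ip_network("192.168.1.0/24"), "10.0.0.2"),
        (ipaddress.ip_network("0.0.0.0/0"), "10.0.0.1"),   # default gateway
    ]

    def next_hop(destination: str) -> str:
        addr = ipaddress.ip_address(destination)
        matches = [(net, hop) for net, hop in ROUTES if addr in net]
        if not matches:
            # A real stack would generate an ICMP unreachable here.
            raise ValueError("destination unreachable")
        # The most specific (longest-prefix) match wins.
        net, hop = max(matches, key=lambda m: m[0].prefixlen)
        return hop

    print(next_hop("192.168.1.7"))    # direct (host route)
    print(next_hop("192.168.1.99"))   # 10.0.0.2 (network route)
    print(next_hop("8.8.8.8"))        # 10.0.0.1 (default)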

If the packet cannot be delivered (usually down to some fault or configuration error) then an error message is generated and sent back to the originating host. The two key points to remember are that a default route can be specified for all packets, even when the destination and network ID are not known, and that the ability to specify routes to networks without having to specify exact hosts is what makes the whole system work – routing tables thus contain a few thousand destinations instead of several million!

Some Notes on Firewall Implementations

For anyone considering implementing a new firewall on a network, here are a few notes to help you through the process. Before you get started there’s a very important first step that you should always take when implementing on medium to large networks: establish a firewall change control board, consisting of users, system administrators and technical managers from throughout your organisation. Failing to establish proper change control and implementation processes can be very dangerous on a firewall. A badly thought-out rule could create huge security issues or operational problems – that ‘deny all’ rule might look safe, but if it ends up blocking mission-critical applications you won’t be popular.

Hardware firewalls are generally very secure and not too expensive. The first reported type of network firewall was the packet filter. Establishing a firewall for your infrastructure is an excellent way to provide some basic security for your services.

Firewalls frequently include functionality to hide the real addresses of the computers connected to the network. You can install most firewall products on a customised network and have their protection in place almost immediately. A host-based firewall might be a daemon or service that is part of the operating system, or an agent application such as endpoint security software; these often arrive in conjunction with antivirus programs. Alternatively, a software firewall can be set up on a computer in your home that has an internet connection, or you may add an extra software component to your existing firewall. If you are primarily responsible for your company’s firewall it’s best to have some secure remote access in case of emergencies. Be careful with rules which allow your own access, though – you don’t want to let through users streaming UK TV through a VPN service.

If a connection is controlled by NetworkManager, you can also use nm-connection-editor to modify its zone. Once a secure SSH tunnel is established, you can launch tools such as vncviewer so that they run through the encrypted tunnel. Be especially careful if you allow connections from anywhere on the internet on the standard SSH port (22).
Once you have a server to test from and the targets you want to evaluate, you can continue. You may also want to locate a package repository closer to your server, and by applying a DNS forwarder you can override the DNS servers supplied by your ISP in favour of fast, higher-performance servers. Repeat this for each domain that you would like the server to manage. Be aware that many servers block dynamic DNS hosts, so you could find your server gets rejected. At this point you have a simple mail server!

A firewall’s behaviour shouldn’t be confused with that of malware, although some antivirus applications may ask you to switch off the firewall and disable the antivirus in order to install. Before you install any software, the first important step is to check the configuration of your computer against the system prerequisites of the program. Update the local package index and install the software if it’s not already available.

The configuration of your computer must match the demands of the software to be installed. If you are pleased with your present configuration, and have tested that it’s functional once you restart the service, you can safely enable the service permanently. The only configuration change that actually affects the functionality of the service will probably be the port definition, where you specify the port number and protocol you wish to open. If all your interfaces can best be managed by a single zone, it’s probably simpler to just pick the best default zone and use that for your configuration. You can then modify your network interfaces to automatically choose the right zones. Whenever you transition an interface to a different zone, be conscious that you are most likely modifying which services are operational. Opening up an entire interface to incoming packets may not be restrictive enough, and you may want more control over what to allow and what to reject.


ATM – Routing IP

There are of course many different network architectures, many of which have been around for years. One of them is ATM (Asynchronous Transfer Mode), which in the 1990s was considered to be the ultimate network architecture design. The belief was that in the future every computer or device would be fitted with an ATM network adapter rather than the alternatives, which at the time were Token Ring or Ethernet.

The reality has turned out somewhat different, of course, and it’s unlikely that we will ever see extensive use of ATM-based networks. However, many corporations did install ATM backbone switches for one important reason: they can handle network traffic at extremely high speeds.

There is a difficulty in using these switches, though: ATM is a virtual-circuit, cell-based networking scheme which is primarily connection-oriented. Compare this with Ethernet, which powers the majority of commercial networks and is a connectionless, frame-based networking scheme. To integrate the two systems you need to use one of the overlays which have been developed to allow Ethernet to be connected to ATM backbones and switches.

These normally work by using layer 3 routing algorithms to discover the initial routes through the network; layer 2 virtual circuits can then be established through the ATM fabric, delivering data without actually passing through the routers. This technique is normally known as ‘shortcut routing’, although you will often hear it described by other terms. If you need more detailed information check your usual networking references or search online using terms like ‘IP routing over ATM’.

There are difficulties with these improvised techniques; one of the most common is knowing when to route and when to switch the traffic at layer 2. Long data transmissions such as Netflix video streams should be switched, as that is the more efficient method of transport, whereas for shorter transmissions the router is normally the best option.

Layer 3 traffic will not, under normal circumstances, identify the length of the transmission, so it may or may not be suitable for switching. There are ways of identifying the length of a transmission, normally by inspecting the content of the datagrams themselves. There are many different methods of identifying the flow, mostly developed by different networking companies; some are no longer commonly used, but you will find others being developed or used extensively in various environments. See the references below for some examples that can be researched for more information.
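
To make the route-versus-switch decision concrete, here’s a toy Python sketch in the spirit of Ipsilon-style IP switching; the flow key and the ten-packet promotion threshold are assumptions for illustration, not any particular vendor’s scheme:

    from collections import defaultdict

    SHORTCUT_THRESHOLD = 10      # packets seen before a flow is promoted
    flow_counts = defaultdict(int)
    shortcut_flows = set()

    def handle_packet(src, dst, proto):
        flow = (src, dst, proto)         # a crude flow identifier
        if flow in shortcut_flows:
            return "switch"              # already mapped to a layer-2 circuit
        flow_counts[flow] += 1
        if flow_counts[flow] >= SHORTCUT_THRESHOLD:
            shortcut_flows.add(flow)     # long-lived flow: set up the shortcut
            return "switch"
        return "route"                   # short flows stay on the layer-3 path

    # A long transfer starts on the routed path and is then switched:
    for _ in range(12):
        decision = handle_packet("10.1.1.5", "10.2.2.9", "tcp")
    print(decision)                      # "switch"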

References:
3Com Fast IP
Ipsilon IP Switching
Switch IP Address – Watch UK TV in USA


Creating a Proxy Hierarchy

Although most networks and organisations would benefit from implementing proxy servers in their environment, it can be a difficult task to decide on the location and hierarchy of these servers. It is an important decision, and there are some questions which can aid the decision-making process.

Flat or Hierarchical Proxy Structure?

This decision will largely depend on both the size and the geographical dispersion of the network. The two main options are a standard single flat level of proxies, or something larger: a hierarchy based on a tree structure, much like the Active Directory forest structure used in complex Windows environments.

Indeed, in such environments it may be suitable to mirror the Active Directory design with the proxy server structure. Many technical staff use the following rule of thumb: each branch office requires an individual proxy server. Again, this may map onto an AD design where each office exists within its own Organisational Unit (OU). This has other benefits, because you can apply custom security and configuration options based on that OU, for example allowing the sales OU more access through the proxy than administrative teams.

This of course needs to be carefully planned in line with whatever physical infrastructure is in place. You cannot install heavy-duty proxy hardware at the end of a small ISDN line, for example. The proxy servers should be installed in line with both the organisational configuration and the network infrastructure. Larger organisations can arrange these along broader geographical lines, for example a separate hierarchy in each country, so you would have a top-level UK proxy server above regional proxies further down in the organisation.

If the organisation is fairly centralised you’ll certainly find a single level of proxies a better solution. It is much easier to manage, and latency is minimised because requests do not tunnel through multiple layers of servers and networks.

Single Proxies or Proxy Arrays?

A standard rule of thumb for proxy servers is something like one proxy for every 3,000 potential users. This is of course only an estimate and can vary widely depending on the users and their geographic spread. It doesn’t mean that the proxies need to be independent of each other; they can indeed be installed chained together.

For example, you can set up four proxies in parallel to support 12,000 users using the Cache Array Routing Protocol (CARP). These could be set up across different boundaries, even across a flat proxy structure. Remember that the servers will have different IP address ranges if they sit across national borders. Make sure that your proxy with the Irish IP address can speak to all the other European sites; ideally most proxies should be multihomed to help with routing.

Using a caching array allows multiple physical proxies to be combined into a single logical device. This is normally a good idea, as it increases the effective cache size and eliminates redundancy between individual proxy caches.

It’s normally best to run proxies in parallel whenever the opportunity exists. Sometimes, however, this will not be possible, and specific network configurations may rule this method out, meaning you’ll have to run proxies individually in a flat mode. Even if you have to split proxy resources into individual machines, be careful about creating network bottlenecks. Individual proxies should not all point at a single gateway or machine; even an overworked firewall can have a significant impact on a network’s performance and latency.
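
The key idea behind CARP is that every client or downstream proxy independently computes which array member owns a given URL, so no query traffic is needed between the proxies. CARP defines its own specific hash function; the Python sketch below illustrates the same highest-random-weight (rendezvous hashing) idea with made-up member names:

    import hashlib

    PROXIES = ["proxy-a", "proxy-b", "proxy-c", "proxy-d"]   # hypothetical array

    def pick_proxy(url: str) -> str:
        # Hash the URL together with each member's name and pick the
        # highest score; every client computes the same answer.
        def score(member: str) -> int:
            digest = hashlib.md5((member + url).encode()).digest()
            return int.from_bytes(digest[:8], "big")
        return max(PROXIES, key=score)

    print(pick_proxy("http://example.com/index.html"))

Because each URL consistently maps to one member, the array behaves as a single logical cache with no duplicated objects, and adding or removing a member only redistributes that member’s share of the URLs.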

Digital Interface Testing – Cisco

If you need to check the physical layer status and the quality of digital circuits, there are two tools you are likely to need. The first is a breakout box, which can be used to determine the connection integrity between the DTE and the DCE. This box (also known as a ‘BOB’) has two external connectors which attach inline between the DTE and the DCE.

The box supplies status information on the digital circuit and will also display any data being transmitted at the time. The device will normally display real-time status information about data, clocking, space and activity; on most breakout boxes this information is shown using status LEDs. It is normally quite a compact device, powered by batteries to increase its portability. The box contains buffered electrical circuitry which does not interfere with the actual line signal during testing, and most are also capable of verifying the electrical resistance and line voltage.

These tools are focused primarily on physical problems on a network, although errors can of course occur for other reasons. If you’re looking at other issues, perhaps conflicts on a proxy IP address or an application error, then you should look at other tools.

The second piece of equipment you’ll need goes by a variety of names but is most commonly known as a BERT. This stands for bit-error-rate tester, and it is a much more sophisticated piece of kit. It can effectively measure the error rate in a digital signal, either across the end-to-end circuit or on a portion of a circuit in order to isolate individual faults. The bit error rate is often measured during installation and commissioning so that it can be used as a baseline.

The BERT is also used to measure error rates on the variety of different bit patterns that it can generate. You can use this information to diagnose timing or noise issues on the circuit. It does take time, but it allows a line to be monitored accurately so that traffic and error analysis can be performed.
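
The arithmetic a BERT performs is simple: bits received in error divided by total bits transmitted. A minimal Python sketch, using a made-up alternating test pattern:

    # Compare the known transmitted pattern with what came back.
    def bit_error_rate(sent: bytes, received: bytes) -> float:
        assert len(sent) == len(received)
        errors = sum(bin(a ^ b).count("1") for a, b in zip(sent, received))
        return errors / (len(sent) * 8)

    pattern = bytes([0b10101010] * 1000)                  # 8,000-bit test pattern
    corrupted = bytes([pattern[0] ^ 0x01]) + pattern[1:]  # one flipped bit
    print(bit_error_rate(pattern, corrupted))             # 1/8000 = 0.000125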


Internet Control Message Protocol – ICMP

The Internet Control Message Protocol has a wide variety of message types, many of which are extremely useful for managing and troubleshooting an IP network. Most of us are familiar with the command ‘ping’, which at its core uses both ICMP echo and echo reply. Another well-used ICMP tool is traceroute, which is useful for monitoring TTLs (time to live) and hop counts.

There are, however, quite a number of ICMP messages beyond the ones used by these tools, and most are extremely useful for anyone managing a complex IP-based network. Here are some of the most useful ones:

ICMP unreachable – an IP host will produce an ICMP unreachable message if there is no valid path to the requested host, network, protocol or port. There are several of these messages, which are often grouped together for convenience. They are often generated by routers and switches, for example when local access lists are restricting access to the requested resource. You should be careful about allowing these messages to propagate, as they contain source addresses – particularly if the connection is being used externally, perhaps through an external connection like a BBC VPN for instance. The messages can be blocked using the no ip unreachables command on Cisco hardware.

ICMP redirects – a router will produce a redirect message if it receives a packet on a given interface and the best route back out is via that same interface, indicating there is a better first hop on the local network. These can be used to help update local routing tables with the correct information. There is an interesting protocol from Cisco which can be configured to help in these situations, called the Hot Standby Router Protocol (HSRP).

ICMP mask request and reply – some hosts do not have their subnet masks statically defined and have no way of learning them. They can use an ICMP mask request, which the router responds to with an ICMP mask reply.

ICMP source quench – these messages provide an important function within ICMP: congestion control on the network. If a network device such as a router detects congestion, perhaps because of dropped packets or overflowing buffers on its interfaces, it will send an ICMP source quench message to the source of the packets.

ICMP fragmentation needed – this type of message is sent when an IP packet is received which is larger than the MTU specified within the LAN or WAN environment, yet has the DF (do not fragment) flag set. The packet cannot be forwarded, but the ICMP message can at least pass back some information on the issue. There are actually quite a few scenarios where the DF bit is set automatically by devices as a packet is distributed.
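
To see what ping actually puts on the wire, here’s a minimal Python sketch that builds an ICMP echo request (type 8, code 0) with the standard RFC 1071 checksum; transmitting it requires a raw socket and the elevated privileges that go with one:

    import struct

    def rfc1071_checksum(data: bytes) -> int:
        # Ones-complement sum of 16-bit words, as used throughout ICMP.
        if len(data) % 2:
            data += b"\x00"
        total = sum(int.from_bytes(data[i:i + 2], "big")
                    for i in range(0, len(data), 2))
        while total >> 16:
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
        # Type 8 (echo request), code 0; checksum is computed over the whole
        # message with the checksum field zeroed first.
        header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
        csum = rfc1071_checksum(header + payload)
        return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

    packet = echo_request(ident=0x1234, seq=1)
    # To send: socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)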


HTTP (Hypertext Transfer Protocol)

For many of us a network is either our little home setup, consisting of perhaps a modem, a wireless access point and a few connected devices, or that huge global network – the internet. Whatever the size, all networks need to allow communication between the various devices connected to them. Just as human beings need languages to communicate, so do networks, only in this context we call them ‘protocols’.

The internet is built primarily on the TCP/IP protocols, which are used to transport information between ‘web clients’ and ‘web servers’. Transport alone is not enough to deliver the media-rich content our web browsers expect, though, so a host of secondary protocols sit above the main transport protocol – the most important one, which enables the world wide web, is called HTTP.

This provides a method for web browsers to access content stored on web servers, which is created using HTML (Hypertext Markup Language).  HTML documents contain text, graphics and video but also hyperlinks to other locations on the world wide web.   HTTP is responsible for processing these links and enabling the client/server communication which results.

Without HTTP the world wide web simply wouldn’t exist, and if you want to see its origins search for RFC 1945, where you’ll find HTTP defined as an application-level protocol with the lightness and speed necessary for distributed, collaborative, hypermedia information systems. It is a stateless, generic, object-oriented protocol which can be used for a huge variety of tasks – crucially, it works across platforms, which means it doesn’t matter which platform your computer is on (Linux, Windows or Mac for instance), you can still access web content via HTTP.

So what happens? When someone types a web name or address into the address field of their browser, the browser attempts to locate the address on the network it is connected to. This can either be a local address or, more commonly, the browser will look out onto the internet for the designated web server. HTTP is the command and control protocol which enables communication between the client and the web server, allowing commands to be passed between the two of them. HTML is the formatting language of the web pages which are transferred when you access a web site.
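
You can watch this exchange at its most basic with a few lines of Python; a sketch assuming any public web server (example.com here) listening on the standard port 80. Note that both the request and the response cross the network as plain, readable text:

    import socket

    HOST = "example.com"   # any public web server

    request = (
        "GET / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    with socket.create_connection((HOST, 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    # Print just the status line and headers.
    print(response.split(b"\r\n\r\n", 1)[0].decode())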

The HTTP connection between the client and server can be secured in two specific ways – using Secure HTTP (S-HTTP) or Secure Sockets Layer (SSL) – both of which allow the information transmitted to be encrypted and thus protected. It should be noted, though, that the vast majority of communication is standard HTTP, transmitted insecurely in clear text, which is why so many people use proxies and VPNs like this to protect their connections.