If you want to increase the security of your email messaging there are several routes you can take. First, you should look at digitally signing and encrypting all your email messages. There are several applications that can do this, or you could move your email to the cloud and look at a server-based system. Most of the major suppliers of web-based secure mail are extremely strong on interception and endpoint security; the obvious trade-off is that you have to trust a third party with your email.
Many companies won't be happy outsourcing their messaging like this, as it's often the most crucial part of a company's digital communications. So what are the options if you want to operate a secure and digitally advanced email messaging service within your own corporation? The first place to investigate is increasing the security of authentication and data transmission. There are plenty of RFCs (Requests for Comments) on these subjects, particularly relating to email and its associated protocols.
Here are a few of the RFC-based protocols related to email:
- Post Office Protocol 3 (POP3) – a simple but effective protocol used to retrieve email messages from an inbox on a dedicated email server.
- Internet Message Access Protocol 4 (IMAP4) – usually used to retrieve any messages stored on an email server, including those in inboxes and other message stores such as drafts, sent items and public folders.
- Simple Mail Transfer Protocol (SMTP) – the ubiquitous email protocol, generally used just to send email messages to recipients.
- Network News Transfer Protocol (NNTP) – not specifically an email protocol, but it can be pressed into service as one. It's normally used to post and download newsgroup messages from news servers. Perhaps slightly dated now, but a seriously efficient protocol that can be used for distributing email.
The big security issue with all these protocols is that, by default, most of them send their messages in plain text. You can counteract this by encrypting at the client level; the easiest method is simply to use a VPN, which many people already run for other reasons.
Remember, though, that when an email message is transmitted in clear text it can be intercepted at various points along its path. Anyone with a decent network sniffer and access to the data stream could read the message content. The solution is in some ways obvious and implied in the title of this post – implement SSL. Using this extra security layer you can protect all the simple RFC-based email protocols, and better still it slots in simply alongside standard email systems like Exchange.
It works and is easy to implement: when SSL is enabled, the server accepts connections on the SSL port rather than the standard port the email protocol normally uses. If you have only one or two users who need a high level of email security, then a virtual private network might be sufficient on its own.
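As a sketch of what this looks like in practice, Python's standard library exposes SSL-wrapped variants of these protocols directly; the standard ports are 995 for POP3-over-SSL and 465 for SMTP-over-SSL. The host name and credentials in the function below are placeholders, not a real service:

```python
import poplib
import smtplib
import ssl

# Standard ports: plain POP3 uses 110 and POP3-over-SSL uses 995;
# plain SMTP uses 25 and SMTP-over-SSL (SMTPS) uses 465.
PLAIN_POP3_PORT = poplib.POP3_PORT        # 110
SECURE_POP3_PORT = poplib.POP3_SSL_PORT   # 995
SECURE_SMTP_PORT = smtplib.SMTP_SSL_PORT  # 465

def fetch_message_count(host: str, user: str, password: str) -> int:
    """Log in over POP3-over-SSL and return the number of waiting messages."""
    context = ssl.create_default_context()  # verifies the server certificate
    mailbox = poplib.POP3_SSL(host, SECURE_POP3_PORT, context=context)
    try:
        mailbox.user(user)
        mailbox.pass_(password)
        count, _total_size = mailbox.stat()
        return count
    finally:
        mailbox.quit()

# Usage (requires a real mail server, so not run here):
# fetch_message_count("mail.example.com", "alice", "secret")
```

The key point is that nothing changes in the protocol conversation itself; the SSL layer simply wraps the socket, which is why it slots in so easily alongside existing mail systems.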
Any automated identity system needs one thing – the ability to create and distribute the authentication of users' credentials and the rights that they assert. Many people look first to the long-established leader, Kerberos, but there are other systems which are just as capable. In recent years SAML (Security Assertion Markup Language) has become increasingly popular and is now something of an industry standard. There are good, practical reasons for this, including its ability to use XML to represent various security credentials. SAML defines a protocol to request and receive credential data from a SAML authority service.
In reality, although SAML can look quite complicated at first glance, it is relatively straightforward to use, and it is ideally positioned to deal with security and authentication issues online. Note that security assertions will normally apply only to a particular domain, which means the user's wider identity can be protected to some extent.
A SAML authority is a service, usually online, which responds to specific SAML requests. The responses are known as assertions, and they come in three distinct types:
Authentication: a SAML authority receives a request about a specific user's credentials. The reply stipulates that authentication was completed, and at what time.
Attribute: once an authentication assertion has been returned, a SAML attribute authority can be asked for the attributes associated with the subject. These are returned as attribute assertions.
Authorization: a SAML authorization assertion is returned in response to a request about permissions on specified resources. It is referenced against an access control list holding the relevant permissions, which could even be dynamically updated. The response is typically quite simple – i.e. that subject A has been granted access to resource Z.
Although these assertions are quite distinct, it is very likely they will all be handled by a single authority. In highly secure or distributed systems, however, they may be spread across separate servers in a domain.
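As an illustrative sketch (not a schema-complete SAML document), the XML structure of an authentication assertion can be mocked up with Python's standard library. The issuer URL and subject below are invented placeholders, and a real assertion would also carry IDs, validity conditions and an XML digital signature:

```python
import datetime
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_authn_assertion(issuer: str, subject: str) -> ET.Element:
    """Build a stripped-down SAML-style authentication assertion element."""
    assertion = ET.Element(f"{{{SAML_NS}}}Assertion")
    # Who issued the assertion (the SAML authority).
    ET.SubElement(assertion, f"{{{SAML_NS}}}Issuer").text = issuer
    # Whose credentials were authenticated.
    subj = ET.SubElement(assertion, f"{{{SAML_NS}}}Subject")
    ET.SubElement(subj, f"{{{SAML_NS}}}NameID").text = subject
    # When the authentication took place.
    authn = ET.SubElement(assertion, f"{{{SAML_NS}}}AuthnStatement")
    authn.set("AuthnInstant",
              datetime.datetime.now(datetime.timezone.utc).isoformat())
    return assertion

doc = build_authn_assertion("https://idp.example.com", "alice@example.com")
xml_text = ET.tostring(doc, encoding="unicode")
```

The XML representation is exactly what makes SAML attractive for web systems: the same document can be passed through SOAP envelopes or HTTP redirects without any binary protocol support.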
SAML has become popular because it is ideal for web-based and distributed systems, whereas Kerberos is not as flexible. For example, it could be used to grant users download permissions based on the rights assigned to their subscription. Its permissions can be integrated with all sorts of web services and functions, including SOAP – the protocol often used for exchanging structured information across computer networks.
A digital certificate essentially associates specific identity information with a public key, linking the two together in a trusted package. It is important to realise that the certificate is always signed by the certificate issuer, so we can easily verify that the information has not been changed or modified in any way. It is more difficult, however, to determine whether the identity and the public key were associated correctly in the first place.
Remember there are no real restrictions on who can issue certificates; indeed, using OpenSSL virtually anyone with some limited technical experience can. There are a large number of certificate programming APIs, and they get easier to use every day. These should be distinguished, however, from trusted certificate issuers known as certificate authorities (CAs). The role of a certificate authority is to accept and process requests for certificates which come from organisations and individual entities. Larger organisations requiring high levels of assurance would use only the tier-one certificate authorities. A CA must authenticate the information received from these entities, issue the certificates and maintain a repository of information about both the certificates and their subjects.
Here’s a brief summary of the roles and responsibilities of a Certificate Authority.
- Certificate Enrollment Process – simply the process which details how an entity must apply for a digital certificate.
- Authentication of Subject – the Certificate Authority must ensure that the applicant is indeed who they claim to be. There are different levels of checking, directly linked to the level of assurance the CA attaches to the certificate.
- Certificate Generation – once the identity has been verified, the certificate must be generated and released. Generating the certificate is relatively simple, but the process and delivery mechanism must be completely secure; any issues at this stage can compromise the security and validity of the certificate.
- Certificate Distribution – as mentioned above, the certificates and associated private keys must be distributed to the applicant.
- Revocation of Certificate – when there is doubt about the integrity of a released certificate, there must be a defined procedure to revoke it. This should be done securely, and the revoked certificate should be added to a list of invalid certificates (a certificate revocation list).
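The tamper-evidence property that issuer signing provides can be shown with a toy sketch. Note the large assumption up front: a real CA signs with an asymmetric private key (RSA or ECDSA), whereas here an HMAC secret stands in for the issuer's key so the example can run on the standard library alone. The subject names and key strings are invented:

```python
import hashlib
import hmac
import json

# Stand-in for the issuer's signing key. A real CA would use an
# asymmetric private key, never a shared secret like this.
ISSUER_KEY = b"demo-issuer-secret"

def issue_certificate(subject: str, public_key: str) -> dict:
    """Bind an identity to a public key and sign the binding."""
    body = {"subject": subject, "public_key": public_key, "issuer": "Demo CA"}
    payload = json.dumps(body, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "signature": signature}

def verify_certificate(cert: dict) -> bool:
    """Recompute the signature; any change to the body makes this fail."""
    payload = json.dumps(cert["body"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

cert = issue_certificate("alice@example.com", "PUBKEY123")
assert verify_certificate(cert)            # untouched certificate verifies
cert["body"]["subject"] = "mallory"        # tamper with the identity...
assert not verify_certificate(cert)        # ...and verification now fails
```

This is exactly the distinction made above: the signature proves the package was not modified, but it says nothing about whether the identity and key were correctly associated when the certificate was issued.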
The Certificate Authority will usually publish the standards and processes that underpin these activities in a CPS (certification practice statement). In secure applications the CPS would be included in the security benchmarks used when evaluating an authentication provider. These are not meant to be dense legal documents but practical, readable guides detailing the exact processes and underpinning activities – though even so they often run to many pages.
The X Window System, commonly abbreviated to just X, is a client/server application which allows multiple clients to use the same display managed by a server. The server in this instance manages the display, mouse and keyboard. The client is any remote application which runs on a different host (or on the same one). In most configurations the standard transport is TCP, because it is commonly understood by both client and host. Twenty years ago, though, many other protocols were used by X – DECnet was a typical choice in large Unix and Ultrix environments.
Sometimes the X server ran on a dedicated piece of hardware, although this is now uncommon. Most of the time the client and server run on the same host, with inbound connections from remote clients allowed when required. In some specialised support environments you'll even find dedicated processes running on a workstation just to support X access. In a sense, where the application is installed is irrelevant; what matters is that a reliable bi-directional protocol is available for communication. In sensitive environments, access may be further restricted and controlled.
X running over something like UDP is never going to work well; the ideal, as mentioned above, is a reliable protocol such as TCP. Communication relies on a stream of 8-bit bytes transferred across the connection between client and server. On a Unix system where the client and server are on the same host, the system will default to Unix domain sockets instead, because these are more efficient on the same host and avoid the IP processing otherwise involved in the communication stream.
Communication gets more complex when multiple connections are in use, which is not unusual – X is often used to allow multiple clients to connect to an application running on a Unix system. Sometimes these applications have specific requirements for full functionality, for example special graphics commands which affect the screen. Remember, though, that all X does is give these clients access to the keyboard, display and mouse. Although it might seem similar, it is not the same as a remote access protocol like Telnet, which allows logging in to a remote host but no direct control of the display hardware.
The X server is normally there to provide access to important applications, so it is usually bootstrapped at start-up. The server creates a TCP end point and does a passive open on a port (by default 6000 + n, where n is the display number), so the port is incremented to allow multiple concurrent displays. On a Unix host there will usually also be a domain socket named with the display number in the same way.
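The port and socket naming convention is simple enough to sketch directly; the socket path below follows the common `/tmp/.X11-unix/Xn` convention on Unix systems:

```python
# Base TCP port for the X server; display :n listens on 6000 + n.
X_BASE_PORT = 6000

def display_tcp_port(display_number: int) -> int:
    """TCP port the X server listens on for display :n."""
    return X_BASE_PORT + display_number

def display_unix_socket(display_number: int) -> str:
    """Conventional Unix domain socket path for display :n on the same host."""
    return f"/tmp/.X11-unix/X{display_number}"

# Display :0 -> TCP 6000, display :1 -> TCP 6001, and so on.
```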
Conceptually IP routing is pretty straightforward, especially when you look at it from the host's point of view. If the destination is directly connected, such as over a point-to-point link or on the same Ethernet network, the IP datagram is simply forwarded to its destination. If not, the host sends the datagram to its default router and lets that handle the next stage of delivery. This simple example covers most scenarios.
The basis of IP routing is that it is done on a hop-by-hop basis. The Internet Protocol does not know the complete route to any destination except those directly connected. IP routing relies on sending the datagram to the next-hop router – assumed to be closer to the destination – until it reaches a router which is directly connected to the destination network.
IP routing performs the following steps:
- Searches the routing table for an entry matching both the network and host ID. If one exists, the packet can be sent directly to the destination.
- Searches the routing table for an entry that matches the network ID. Only one entry is needed for an entire network, and the packet is sent to the indicated next hop.
- If both searches fail, looks for the entry marked 'default'. The packet is then sent to the next-hop router associated with this entry.
If all these searches fail then the datagram is marked undeliverable. In practice most packets fail the first two searches and are transferred via the default gateway, which could be a router or even a proxy that forwards traffic to the internet.
If the packet cannot be delivered (usually down to some fault or configuration error) then an error message is generated and sent back to the originating host. Two key points to remember: default routes can be specified for all packets even when the destination host and network ID are not known, and the ability to specify routes to networks without listing every host is what makes the whole system work – routing tables thus contain a few thousand destinations instead of several million.
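The three-step lookup above can be sketched as a longest-prefix match over a toy routing table; the addresses and next-hop labels are invented for illustration:

```python
import ipaddress

# Simplified host routing table: (destination network, next hop).
# 0.0.0.0/0 is the 'default' entry, which matches every address.
ROUTING_TABLE = [
    (ipaddress.ip_network("192.168.1.42/32"), "direct:192.168.1.42"),  # host route
    (ipaddress.ip_network("192.168.1.0/24"), "direct"),                # connected network
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.1"),                # default router
]

def next_hop(destination: str) -> str:
    """Return the next hop, preferring the most specific (longest-prefix) match."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTING_TABLE if addr in net]
    if not matches:
        raise ValueError("destination unreachable")
    _best_net, best_hop = max(matches, key=lambda item: item[0].prefixlen)
    return best_hop
```

A host route (/32) beats the connected-network entry, which in turn beats the default; everything else falls through to the default router, exactly as described above.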
For anyone considering implementing a new firewall on a network, here are a few notes to help you through the process. Before you get started there is a very important first step you should always take when implementing on medium to large networks: establish a firewall change control board, consisting of users, system administrators and technical managers from throughout your organisation. Failing to establish proper change control and implementation processes is dangerous on a firewall. A badly thought-out rule can create huge security or operational problems – that 'deny all' rule might look safe, but if it ends up blocking mission-critical applications you won't be popular.
Hardware firewalls are very secure and not too expensive. The earliest type of network firewall was the packet filter. Establishing a firewall for your infrastructure is an excellent way to provide some basic security for your services.
Firewalls frequently also have the ability to hide the real address of a computer connected to the network. You can install most firewall products on a network and have protection almost immediately. A host-based firewall might be a daemon or service that is part of the operating system, or an agent application such as endpoint security software; these often arrive in conjunction with antivirus programs. Alternatively, a software firewall can be set up on a home computer with an internet connection, or you may add an extra software component to your existing firewall. If you are primarily responsible for your company's firewall it is sensible to have secure remote access in case of emergencies – but be careful with the rules that allow your access, as you don't want them abused by other users to tunnel traffic through.
If the connection is controlled by NetworkManager, you can also use nm-connection-editor to modify the zone. Once the secure connection is established, launch vncviewer so that it uses the secure tunnel; the same goes for SSH connections. Take particular care if you allow connections from anywhere on the internet on the standard SSH port (22).
Once you have a server to test from and the targets you want to evaluate, you can continue with this guide. As stated previously, you may also want to locate a repository closer to your server. By using a forwarder you can override the DNS servers supplied by your ISP and use fast, higher-performance servers instead; repeat this for each domain you would like the server to manage. Note also that many servers block dynamic DNS hosts, so you may find your server gets rejected. At this point you have a basic mail server!
Legitimate firewall behaviour shouldn't be confused with malware behaviour. Some antivirus applications may ask you to switch off the firewall, or to disable the antivirus itself, in order to install software. Before you install anything, the first important step is to check your computer's configuration against the system prerequisites of the program. Update the local package index and install the software if it is not already present.
The configuration of your computer must match the requirements of the software to be installed. If you are happy with your present configuration and have tested that it is functional once you restart the service, you can safely enable the service. The only configuration that actually affects the functionality of the service will probably be the port definition, where you set the port number and protocol you wish to open. If all your interfaces can best be managed by a single zone, it is probably simpler to pick the best default zone and use that for your configuration; you can then set your network interfaces to choose the right zones automatically. Whenever you move an interface to a different zone, be conscious that you are most likely changing which services are operational. Opening up an entire interface to incoming packets may not be restrictive enough, and you may want more control over what to allow and what to reject.
There are two basic schemes which have been adopted to encapsulate and transmit IP packets over serial point-to-point links. The older protocol is called SLIP (Serial Line Internet Protocol) and the newer one is known as PPP (Point-to-Point Protocol). Although SLIP is the original protocol, PPP is more popular because it can carry other protocols as well – crucially including IPX (Internetwork Packet Exchange). PPP is defined in RFCs 1661–1663.
So what does PPP provide? Its core functions include router-to-router and host-to-host connections. PPP was also very commonly used on dial-up modem connections for home users connecting to their ISP, and it is still used in that context with modern cable and data modems and routers. When the modem has connected to the ISP, a connection is made between the user's hardware and the ISP's gateway. The setup of the connection includes authentication and the assignment of an IP address.
Once this connection is established, the user's computer effectively becomes an extension of the ISP's network, and the physical port has the same functionality as any other serial or network interface on that network. It is important that the IP address is assigned correctly, as it is essential for communicating over the internet.
It is useful to understand how PPP encapsulates higher-level protocol packets for transmission. It uses a pre-defined framing format, with fields for delimiters, address, control, protocol and of course the data itself. There is also a checksum included in each frame, called the Frame Check Sequence (FCS).
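The 16-bit Frame Check Sequence used by PPP is the CRC defined in RFC 1662; a minimal bitwise implementation looks like this:

```python
def ppp_fcs16(data: bytes) -> int:
    """Compute the 16-bit PPP Frame Check Sequence (RFC 1662).

    Initial value 0xFFFF, reflected polynomial 0x8408, final ones-complement.
    """
    fcs = 0xFFFF
    for byte in data:
        fcs ^= byte
        for _ in range(8):
            # Shift one bit at a time, folding in the polynomial on carry-out.
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF  # transmitted FCS is the ones-complement

# Standard check value for the test string "123456789" is 0x906E.
```

In a real frame the FCS is computed over the address, control, protocol and data fields and appended ahead of the closing flag, so the receiver can detect corruption on the line.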
The physical layer of PPP supports a range of transmission media, including asynchronous and synchronous lines, typically in conjunction with interface standards such as EIA-232 and CCITT V.24.
The data link layer of PPP takes its structure from HDLC (High-Level Data Link Control). Using an additional Link Control Protocol it establishes and manages links between endpoints. This protocol also negotiates packet sizes and the method of encapsulation, and it can manage authentication if required, as well as options such as the compression methods often used on physical device connections.
There are of course many different network architectures, many of which have been around for years. One of them is ATM (Asynchronous Transfer Mode), considered in the 1990s to be the ultimate network architecture. The belief was that in the future every computer or device would be fitted with an ATM network adapter rather than the alternatives of the time, token ring or Ethernet.
The reality turned out somewhat different, of course, and it is unlikely we will ever see extensive use of ATM-based networks. However, many corporations installed ATM backbone switches for one important reason: their ability to handle network traffic at extremely high speeds.
There is a difficulty in using these switches, though: ATM is a virtual-circuit, cell-based networking scheme which is primarily connection-oriented. Compare this with Ethernet, which powers the majority of commercial networks and is a connectionless, frame-based scheme. To integrate the two you need one of the overlays developed to allow Ethernet traffic to be carried over ATM backbones and switches.
These normally work by using layer 3 routing algorithms to discover the initial routes through the network; layer 2 virtual circuits can then be established through the ATM fabric, delivering data without passing through the routers directly. This technique is normally known as 'shortcut routing', although you will often hear it described by other terms. If you need more detail, check your usual networking references or search online for 'IP routing over ATM'.
There are difficulties with these techniques; one of the most common is knowing when to route and when to switch traffic at layer 2. Long data transmissions, such as video streams, should be switched as the more efficient method of transport, whereas for shorter transmissions routing is normally the better option.
Layer 3 traffic will not, under normal circumstances, identify the length of a transmission, so it may or may not be suitable for switching. There are ways of estimating the length of a transmission, normally by inspecting the content of the datagrams themselves. Many different methods of identifying flows were developed by different networking companies; some are no longer commonly used, but you will find others being developed or used extensively in various environments. See the references below for some examples that can be researched further.
- 3Com Fast IP
- Ipsilon IP Switching
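A toy sketch of the route-versus-switch decision might promote a flow to a layer-2 shortcut once its packet count passes a threshold. The threshold and the (source, destination) flow key here are illustrative assumptions, not any vendor's actual heuristic; real products also inspected ports, protocols and byte counts:

```python
from collections import defaultdict

# Illustrative assumption: promote a flow after this many packets.
SWITCH_THRESHOLD = 10

class FlowClassifier:
    """Toy flow detector: route short exchanges, shortcut-switch long ones."""

    def __init__(self):
        self.packet_counts = defaultdict(int)

    def handle_packet(self, src: str, dst: str) -> str:
        """Return 'route' for a young flow, 'switch' once it looks long-lived."""
        flow = (src, dst)
        self.packet_counts[flow] += 1
        if self.packet_counts[flow] >= SWITCH_THRESHOLD:
            return "switch"  # long transmission: cut through the ATM fabric
        return "route"       # short transmission: stay on the routed path
```

The first few packets of every flow still cross the routers, which is why short transactions gain nothing from the shortcut; only sustained streams amortise the cost of setting up the virtual circuit.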
On shared network topologies like Ethernet there is a need to control access to the medium. One of the most common methods is CSMA (carrier sense multiple access), which ensures all devices get equal access to the available bandwidth.
Devices attached to the network listen to other traffic before transmitting; this is the 'carrier sense' part. A device waits until the channel is free before transmitting on the shared cable. The 'multiple access' (MA) part means that many devices can communicate over the same network cable, with every device having equal rights to transmit. It is therefore inevitable, especially on larger networks, that two stations will occasionally attempt to transmit at the same time. The resulting collisions are handled using a technique called collision detection.
CD (collision detection) defines what happens when two devices see a clear channel and both attempt to use it at the same time. When a collision occurs, both devices stop transmitting and each waits a random interval before attempting to retransmit. This happens often on busy networks with many users, and a few clients streaming video can generate similar contention.
This method is used on most Ethernet networks and is surprisingly effective on a standard IEEE 802.3 channel. Note, though, that it only handles collisions as they occur; it does not actively prevent them. If there are too many collisions, network performance can degrade badly – indeed, to keep collisions to a minimum it is often suggested that only around 40% of the bus capacity be used, which is very difficult for busy corporate networks.
A more advanced approach is CSMA/CA, where CA stands for collision avoidance. This attempts to avoid collisions by having each node announce its intention before transmitting. It can be effective, but it is not widely used on wired networks because the avoidance traffic generates overhead similar to that of the collisions themselves.
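The random wait after a collision is, in IEEE 802.3, a truncated binary exponential backoff measured in slot times rather than wall-clock seconds. A minimal sketch, using the classic 10 Mbit/s Ethernet slot time:

```python
import random

SLOT_TIME_MICROSECONDS = 51.2  # slot time for classic 10 Mbit/s Ethernet

def backoff_slots(attempt: int) -> int:
    """Pick a random backoff for the nth consecutive collision.

    Truncated binary exponential backoff: after n collisions, wait a random
    number of slots in [0, 2**min(n, 10) - 1], as used by IEEE 802.3 CSMA/CD.
    """
    exponent = min(attempt, 10)  # the standard caps the range at 2**10 slots
    return random.randint(0, 2 ** exponent - 1)

def backoff_delay_us(attempt: int) -> float:
    """Backoff converted to microseconds for this slot time."""
    return backoff_slots(attempt) * SLOT_TIME_MICROSECONDS
```

Because the range doubles with each successive collision, repeated collisions between the same pair of stations become rapidly less likely, which is what keeps the scheme stable under moderate load.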
Where proxies and network performance are concerned there are many influencing factors. One of them is content filtering, which in most networks forms an important part of perimeter and internal security. Nowadays most employees have internet access from their corporate PCs, which in itself necessitates some content filtering. URL filtering is one such process, and intensive checking of every request against block patterns carries a performance cost.
There are significant risks in allowing access to the internet, so they must be mitigated in some way. Users can be made aware of codes of conduct, and a robust internet usage policy is essential. However, there will always be some users who ignore these rules and even some who actively seek to bypass them. It is not uncommon to analyse outbound connections and find constant media streams running all day, which is obviously not good for your network.
Other examples of content filtering are HTML tag filtering and screening for viruses and malware. HTML tag filtering allows certain tags to be removed from transferred HTML documents, usually for security purposes; many organisations routinely strip all Java applets or ActiveX controls from content. Blocking any content which contains viruses or malware is of course a sensible option in today's security environment.
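A minimal sketch of the URL filtering mentioned above might check each request's host and path against block patterns. The patterns below are invented examples; real deployments match against large, regularly updated category databases:

```python
import fnmatch
from urllib.parse import urlparse

# Invented example patterns; a production filter would load thousands
# of categorised entries from a vendor database.
BLOCKED_HOST_PATTERNS = ["*.badsite.example", "tracker.example"]
BLOCKED_PATH_KEYWORDS = ["/malware/", "/phishing/"]

def is_blocked(url: str) -> bool:
    """Return True if the URL's host or path matches a block pattern."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    # Wildcard match on the host name (e.g. any subdomain of badsite.example).
    if any(fnmatch.fnmatch(host, pattern) for pattern in BLOCKED_HOST_PATTERNS):
        return True
    # Simple substring match on the request path.
    return any(keyword in parsed.path for keyword in BLOCKED_PATH_KEYWORDS)
```

Even a check this simple has to run on every single request passing through the proxy, which is why intensive pattern matching shows up as measurable latency on busy networks.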
When these objects are transferred and cached through a proxy server, there is an opportunity to filter the content – it is the logical place to implement virus-screening plugins, for example. The problem is that most of these plugins require the whole object to be retrieved before it can be scanned. This leads to the undesirable situation where the proxy server is caching a potentially dangerous file, and it can add significant latency from the user's perspective, as the entire object is downloaded and scanned before the user sees anything on screen.
There have been technological developments improving this situation, with more sophisticated scanners that can operate on streaming files and content. Filtering applications can handle HTML tag filtering in the same streaming fashion, so the data can be forwarded almost immediately, avoiding the large lag at the client's side.