The X Window System

The X Window System, commonly abbreviated to just X, is a client/server system which allows multiple clients to use the same display managed by a server. The server in this arrangement manages the display, mouse and keyboard. The client is any application, which may run on a remote host or on the same one. In most configurations the transport protocol used is TCP, because it is understood by virtually every client and host. Twenty years ago, though, many other protocols were used with X – DECnet was a typical choice in large Unix and Ultrix environments.

Sometimes the X server is a dedicated piece of hardware (an X terminal), although this is becoming less common. Most of the time the client and server run on the same host, with inbound connections from remote clients allowed when required. In some specialised support environments you'll even find dedicated processes running on a workstation purely to support X access. In a sense, where the application is installed is irrelevant; what matters is that a reliable bi-directional protocol is available for communication. In sensitive environments access may be restricted and controlled, for example with host-based (xhost) or cookie-based (xauth) access control.

Running X over an unreliable transport such as UDP would work poorly; as mentioned above, TCP is the usual choice. The protocol itself is a stream of 8-bit bytes transferred across the connection between the client and server. On a Unix system where client and server are installed on the same host, the system defaults to Unix domain sockets instead, because they are more efficient for local communication and avoid the IP processing overhead in the communication stream.

Communication gets more complex when multiple connections are in use. This is not unusual: X is often used to allow many clients to connect to applications running on a Unix system. Some of these applications have specific requirements for full functionality, for example special graphics commands which affect the screen. It is important to remember, though, that all X does is give these clients access to the keyboard, display and mouse. Although it might seem similar, it is not the same as a remote-access protocol like Telnet, which allows logging in to a remote host but gives no direct control of the display hardware.

The X server normally exists to support important applications, so it will usually be bootstrapped at start-up. The server creates a TCP endpoint and does a passive open on a port (by default 6000 + n, where n is the display number), so the port is incremented to allow multiple concurrent displays. On a Unix host there will usually also be a Unix domain socket, named with the display number in the same way.
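The display-number-to-endpoint mapping described above can be sketched in a few lines. This is an illustrative helper (the function name is ours), assuming the conventional port base of 6000 and the usual `/tmp/.X11-unix` socket directory found on most Unix systems:

```python
def display_to_endpoint(display):
    """Map an X DISPLAY string to the endpoint the server listens on.

    "host:n" -> TCP port 6000 + n on host
    ":n"     -> Unix domain socket /tmp/.X11-unix/Xn (local server)
    """
    host, _, rest = display.partition(":")
    number = int(rest.split(".")[0])      # drop any ".screen" suffix
    if host:
        return ("tcp", host, 6000 + number)
    return ("unix", "/tmp/.X11-unix/X%d" % number, None)

print(display_to_endpoint("remote.example.com:1"))  # ('tcp', 'remote.example.com', 6001)
print(display_to_endpoint(":0"))                    # ('unix', '/tmp/.X11-unix/X0', None)
```

So DISPLAY=:0 talks to the local server over a domain socket, while DISPLAY=remote.example.com:1 would mean a TCP connection to port 6001 on that host.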

Some Notes on Firewall Implementations

For anyone considering implementing a new firewall on a network, here are a few notes to help you through the process. Before you get started there is a very important first step that you should always take when implementing on medium to large networks: establish a firewall change control board, consisting of users, system administrators and technical managers from throughout your organisation. Failing to establish proper change control and implementation processes can be very dangerous on a firewall. A badly thought-out rule could create huge security or operational problems – that 'deny all' rule might look safe, but if it ends up blocking mission-critical applications you won't be popular.

Hardware firewalls can be very secure and are not too expensive. The earliest reported type of network firewall is the packet filter. Establishing a firewall for your infrastructure is an excellent way to provide some basic security for your services.


Firewalls frequently include functionality to hide the real addresses of the computers linked to the network. You can install most firewall products on a network and have protection almost immediately. A host-based firewall may be a daemon or service that is part of the operating system, or an agent application such as endpoint-protection software; these often arrive bundled with antivirus programs. Alternatively, a software firewall can be set up on a home computer with an internet connection, or added as an extra software component in front of one. If you are primarily responsible for your company's firewall, it is best to have some form of secure remote access in case of emergencies – but be careful with the rules that allow your access, and audit what traffic they let through.

If the connection is controlled by NetworkManager, you can also use nm-connection-editor to modify the zone. Once the secure connection is established, you can launch vncviewer so that it uses the secure tunnel; the same applies to an SSH connection. Be especially careful if you allow connections from anywhere on the internet on the standard SSH port (22).
Once you have a server to test from and the targets you want to evaluate, you can continue with this guide. As noted in the previous edition, you may also want to locate a package repository closer to your server. By using a forwarder you can override the DNS servers supplied by your ISP and use fast, higher-performance servers instead; repeat this for each domain that you would like the server to manage. This is necessary for a standard server. Note also that many mail servers block dynamic DNS hosts, so you may find your server gets rejected. At this point you have a simple mail server!

The application shouldn't be confused with malware behaviour. Some antivirus applications may ask you to switch off the firewall and disable the antivirus itself in order to install. Before you install any software, the first important step is to check the configuration of your computer against the system requirements of the program. Update the local package index and install the software if it is not already available.

The configuration of your computer must match the demands of the software to be installed. If you are happy with your present configuration and have tested that it is functional when you restart the service, you can safely enable the service. The only configuration that actually affects the functionality of the service will probably be the port definition, where you set the port number and protocol you wish to open. If all your interfaces can best be managed by a single zone, it is probably simpler to just pick the best default zone and use that for your configuration. You can then set your network interfaces to automatically choose the right zones. When you transition an interface to a different zone, be aware that you are most likely changing which services are operational. Opening up an entire interface to incoming packets may not be restrictive enough, and you may want more control over what to allow and what to reject.

James Hellings

Author of IP Cloaker



IP Transmission – Point to Point Protocols

There are two basic schemes which have been adopted to encapsulate and transmit IP packets over serial point-to-point links. The older protocol is called SLIP (Serial Line Internet Protocol) and the newer one is known as PPP (Point-to-Point Protocol). Although SLIP is the original protocol, you'll find PPP is far more popular because it can carry other protocols as well. This crucially includes IPX (Internetwork Packet Exchange). The PPP protocol is defined in RFCs 1661–1663.

So what does PPP provide? It is important in many ways, its core function being to provide router-to-router and host-to-host connections. PPP was also very commonly used over old dial-up modem connections for home users connecting to their ISPs, and in fact it is still used in that context with more modern cable and DSL modems and routers. When the modem has connected to the ISP, a link is made between the user's hardware and the ISP's gateway. The setup of the connection includes authentication and the assignment of an IP address.

When this connection is established, the user's computer is effectively an extension of the ISP's network, and the physical port has the same functionality as any other serial or network interface connected to that network. It is important that the IP address is assigned correctly, as it is essential for communicating over the internet. The address also determines the country the host appears to be in, which matters wherever region-based blocking is applied.

It is useful to understand how PPP encapsulates higher-level protocol packets in order to transmit them. It uses a pre-defined framing format, with fields for delimiters (flags), address, control, protocol and, of course, the data itself. A checksum is also included in each frame, called the Frame Check Sequence (FCS).
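The Frame Check Sequence mentioned above is a 16-bit CRC defined in RFC 1662. A minimal, table-free sketch of the computation (the function name is ours; real implementations normally use a 256-entry lookup table for speed):

```python
def ppp_fcs16(data: bytes) -> int:
    """16-bit PPP Frame Check Sequence (RFC 1662): CRC-16/CCITT,
    bit-reflected, polynomial 0x8408, initial value 0xFFFF; the
    transmitted FCS is the one's complement of the running value."""
    fcs = 0xFFFF
    for byte in data:
        fcs ^= byte
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF

# Illustrative bytes only: in a real frame the FCS covers the address,
# control, protocol and information fields between the flag bytes.
fcs = ppp_fcs16(b"\xff\x03\x00\x21")
```

The receiver runs the same calculation over the incoming frame and discards it on a mismatch, which is how PPP detects line corruption.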

The physical layer of PPP supports a range of transmission types, including those over asynchronous and synchronous lines. These involve additional standards such as EIA-232 and CCITT V.24.
The data link layer of PPP takes its structure from HDLC (High-Level Data Link Control). Using an additional Link Control Protocol it establishes and manages links between endpoints. This protocol also negotiates packet sizes and the methods of encapsulation. It can manage authentication if required, and options such as the compression methods often used over physical device connections.


ATM – Routing IP

There are of course many different network architectures, many of which have been around for years. One of them, ATM (Asynchronous Transfer Mode), was considered in the 1990s to be the ultimate network architecture design. The belief was that in the future every computer or device would be fitted with an ATM network adapter rather than the alternatives of the time, token ring or Ethernet.

The reality has turned out somewhat differently, of course, and it is unlikely we will ever see extensive use of ATM-based networks. However, many corporations installed ATM backbone switches for one important reason: they can handle network traffic at extremely high speeds.

There is a difficulty in using these switches, though: ATM is a virtual-circuit, cell-based networking scheme which is primarily connection-oriented. Compare this with Ethernet, which powers the majority of commercial networks and is a connectionless, frame-based scheme. To integrate the two systems, you need to use one of the available overlays which have been developed to allow Ethernet to be connected to ATM backbones and switches.

These normally work by using layer 3 routing algorithms to discover the initial routes through the network. Layer 2 virtual circuits can then be established through the ATM fabric, delivering data without actually passing through the routers. This technique is normally known as 'shortcut routing', although you will often hear it described by other terms. If you need more detailed information, check your usual networking references or search online using terms like 'IP routing over ATM'.

There are difficulties with these improvised techniques; one of the most common is knowing when to route and when to switch the traffic at layer 2. Long data transmissions, such as video streams, should be switched as the more efficient method of transport, while for shorter transmissions the router is normally the better option.

Layer 3 traffic will not, under normal circumstances, identify the length of a transmission, so it may or may not be suitable for switching. There are ways of identifying the length of a transmission, normally by inspecting the content of the datagrams themselves. Many different methods of identifying flows have been developed by different networking companies; some are no longer in common use, while others are still being developed or are used extensively in various environments. See the references below for some examples that can be researched for more information.

3Com Fast IP
Ipsilon IP Switching
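The switch-or-route decision described above can be sketched as a simple packet-count heuristic. Everything here is illustrative (the names and threshold are ours, not taken from any vendor's flow-detection scheme): a flow that keeps sending gets promoted from layer 3 routing to a layer 2 shortcut.

```python
from collections import defaultdict

SWITCH_THRESHOLD = 10   # packets seen before a flow is cut through at layer 2

class FlowDetector:
    def __init__(self, threshold=SWITCH_THRESHOLD):
        self.threshold = threshold
        self.counts = defaultdict(int)

    def classify(self, src, dst):
        """Return 'route' for short flows, 'switch' once a flow looks long-lived."""
        key = (src, dst)
        self.counts[key] += 1
        return "switch" if self.counts[key] > self.threshold else "route"

fd = FlowDetector(threshold=3)
decisions = [fd.classify("10.0.0.1", "10.0.0.2") for _ in range(5)]
print(decisions)  # ['route', 'route', 'route', 'switch', 'switch']
```

Short exchanges never cross the threshold and stay on the routed path; a long transfer is detected after a few packets and moved onto a virtual circuit.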


CSMA/CD (Carrier Sense Multiple Access/Collision Detection)

On shared network topologies like Ethernet there is a need to control access to the medium. One of the most common methods is CSMA, which ensures that all devices get fair access to the available bandwidth.

Devices attached to the network listen to other traffic before transmitting; this is called 'carrier sense'. A device waits until the channel is free before transmitting on the shared cable. There is also the ability for many devices to use the same network, known as MA (multiple access), so multiple devices communicate over the same cable. Multiple access is a necessity because all devices on a CSMA network have equal rights to transmit, so it is inevitable that two stations will occasionally attempt to transmit at the same time, especially on larger networks. Collisions are therefore possible, and they are handled using a technique called collision detection.

CD (collision detection) defines what happens when two devices see a clear network channel and both attempt to use it at the same time. When a collision occurs, both devices stop transmitting and wait a random interval (a random number of slot times, not seconds) before attempting to retransmit. This happens often on busy networks with many users or computers; even a few clients streaming video from a single source can generate a similar load.

This method is used on most classic Ethernet networks and is surprisingly effective on a standard IEEE 802.3 channel. Note, though, that it only handles collisions as they occur; it does not actively prevent them happening in the first place. If there are too many collisions, network performance can be impacted greatly. Indeed, a common rule of thumb is that to keep collisions low, only around 40% of the bus capacity should be used, which is very difficult on a busy corporate network.
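The random wait after a collision follows the truncated binary exponential backoff scheme defined for IEEE 802.3. A sketch of the delay calculation (the function name and the injectable rng parameter are ours, added to make the sketch testable):

```python
import random

SLOT_TIME_US = 51.2   # one slot = 512 bit times on 10 Mbit/s Ethernet

def backoff_delay(attempt, rng=random.random):
    """Truncated binary exponential backoff (IEEE 802.3 style).

    After the n-th collision in a row, wait k slot times, with k drawn
    uniformly from 0 .. 2**min(n, 10) - 1. A station gives up and
    discards the frame after 16 failed attempts.
    """
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame discarded")
    slots = int(rng() * (2 ** min(attempt, 10)))
    return slots * SLOT_TIME_US

# After the first collision a station waits 0 or 1 slots; by the tenth
# collision the window has grown to 0 .. 1023 slots.
print(backoff_delay(1))
```

The doubling window is why a lightly loaded network recovers almost instantly, while a heavily loaded one degrades: repeated collisions push stations into ever longer waits.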

A more advanced method of dealing with collisions is CSMA/CA, where CA stands for collision avoidance. This attempts to avoid collisions by having each node announce its intention before transmitting. It is usually very effective, but it is not widely used on wired networks because the avoidance traffic generates overhead similar to that of the collisions themselves.

Further Reading:
IEEE 802.3 Ethernet
MAC (Medium Access Control)


Performance Issues – Content Filtering

Where proxies and network performance are concerned there are obviously many components that can be influencing factors. One of them is content filtering, which in most networks forms an important part of perimeter and internal security. Nowadays most employees enjoy internet access from their corporate PCs, which in itself necessitates some content filtering. URL filtering is one such process, where every requested URL is checked against a set of patterns to block; this intensive checking has a measurable performance impact.

There are significant risks in allowing access to the internet, so it is essential that these risks are mitigated in some way. Users can obviously be made aware of codes of conduct, and a robust internet usage policy is essential. However, there will always be some users who ignore these policies, and even some who actively seek to bypass them. It is not uncommon to analyse outbound connections and find many people running constant media streams, which is obviously not good for your network.

Other examples of content filtering include HTML tag filtering and screening for viruses and malware. HTML tag filtering allows certain tags to be removed from transferred HTML documents, usually for security purposes; many organisations, for example, routinely strip all Java applets or ActiveX controls from content. Blocking any content that contains viruses or malware is of course a sensible option in today's security environment.

When these objects are transferred and cached through a proxy server, there is an opportunity to filter the content; it is the logical place, for example, to implement virus-screening plugins. The problem is that most of these plugins require the whole object to be retrieved before it can be scanned. This leads to the undesirable situation where the proxy server is caching a potentially dangerous file. It can also add considerable latency from the user's perspective, since the entire object is downloaded and cached before the user sees anything on screen.

There have been some technological developments improving this situation, with more sophisticated scanners that can operate on streaming files and content. Other filtering applications can handle HTML tag filtering in the same way, so that the data can be sent on almost immediately, avoiding that large lag on the client's side.

John ITV Stevens

Creating a Proxy Hierarchy

Although most networks and organisations would benefit from implementing proxy servers in their environment, it can be difficult to decide on the location and hierarchy of those servers. The decision is important, and there are some questions which can aid the decision-making process.

Flat or Hierarchical Proxy Structure?

This decision will largely depend on both the size and the geographical dispersion of the network. The two main options are a standard single flat level of proxies, or something larger: a hierarchy based on a tree structure, much like the Active Directory forest structure used in complex Windows environments.

Indeed, in such environments it may be sensible to mirror the Active Directory design with the proxy server structure. Many technical staff use the following rule of thumb: each branch office requires its own proxy server. Again this may map onto an AD design where each office has its own Organisational Unit (OU). This has other benefits, because you can apply custom security and configuration options based on that OU – for example, allowing the sales OU more access through the proxy than the administrative teams.

This of course needs to be carefully planned in line with whatever physical infrastructure is in place. You cannot install heavy-duty proxy hardware at the end of a small ISDN line, for example. Proxy servers should be installed in line with both the organisational structure and the network infrastructure. Larger organisations can arrange these along geographical lines, for example a separate hierarchy in each country, so you might have a top-level UK proxy server above regional proxies further down the organisation.

If the organisation is fairly centralised, you'll almost certainly find a single level of proxies the better solution. It is much easier to manage, and latency is minimised by not tunnelling through multiple layers of servers and networks.

Single Proxies or Proxy Arrays?

A standard rule of thumb for proxy servers is roughly one proxy for every 3,000 potential users. This is of course only an estimate and can vary widely depending on the users and their geographic spread. It doesn't mean the proxies need to be independent; they can instead be installed together in an array.

For example, you can set up four proxies in parallel to support 12,000 users using the Cache Array Routing Protocol (CARP). These could be set up across different boundaries, even across a flat proxy structure. Remember that the servers will have different IP address ranges if they sit across national borders; make sure each proxy can reach all the other sites, and ideally multihome the proxies to help with routing.

Using a caching array allows multiple physical proxies to be combined into a single logical device. This is normally a good idea, as it increases the effective cache size and eliminates duplication between the individual proxy caches.
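The way CARP maps each URL to exactly one member of the array can be sketched as follows. This is a simplification: real CARP defines its own hash function and supports load-factor weighting, whereas this sketch (the member names are ours) uses MD5 purely to show the idea of deterministic, stateless member selection.

```python
import hashlib

PROXIES = ["proxy-a", "proxy-b", "proxy-c", "proxy-d"]   # hypothetical member names

def carp_member(url, members=PROXIES):
    """Pick the array member responsible for a URL, CARP-style: score
    every member by hashing URL + member name and take the highest
    score. Deterministic, so clients and peers agree on the owner of
    each URL without any shared state."""
    def score(member):
        return hashlib.md5((url + member).encode()).digest()
    return max(members, key=score)

# The same URL always lands on the same proxy, so each object is cached
# exactly once across the array.
print(carp_member("http://example.com/index.html"))
```

Because ownership is computed rather than looked up, adding or removing a member only remaps the URLs whose top score changes, rather than reshuffling the whole cache.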

It's normally best to run proxies in parallel whenever the opportunity exists. Sometimes this will not be possible, and specific network configurations may rule it out, meaning you'll have to run the proxies individually in a flat mode. Even if you have to split proxy resources into individual machines, be careful about creating network bottlenecks: individual proxies should not all point at a single gateway or machine, as even one overworked firewall can significantly impact a network's performance and latency.

Cisco Pushes Firewall into Next Generation

In October 2013, Cisco closed its roughly $2.7 billion purchase of Sourcefire, and it has been integrating Sourcefire's technology ever since. Now Cisco has fully embraced the Sourcefire technology in the firm's brand-new Cisco Firepower NGFW, quite literally the next generation of Cisco's network defence technology.

Scott Harrell, Vice President of Product Management, Security Business Group at Cisco, explained that the Cisco Firepower NGFW is a fully integrated platform which includes firewall, IPS and URL-filtering capabilities, in addition to integration out to secure endpoints. Furthermore, Cisco's threat telemetry data is incorporated into the Firepower NGFW, and the management of threat information and the security workflow is also enhanced.

“When we purchased Sourcefire two years ago, we knew it would be a journey to get to this point,” Harrell told Enterprise Networking Planet. “Many industry analysts were doubtful of Cisco's ability to bring Sourcefire's technology together with technologies such as our classic ASA firewall, and with this launch we are saying we got it.”

Over the previous two years, Cisco has been incorporating Firepower features into the ASA product line. In September 2014, Cisco added Firepower services from Sourcefire to Cisco ASA firewalls; at the time, Harrell explained, the Sourcefire Firepower services could be used to replace an existing Cisco IPS service running on the ASA.

With the new Firepower NGFW, Harrell explained, an existing ASA 5500 can be updated via software to the new image, and a number of the older Firepower appliances can also be updated. Historically, ASA was largely just a firewall and Firepower largely just an IPS, but with the Firepower NGFW the two worlds are coming together. There are now many implementations in organisations across the world handling complex, high-volume traffic such as video streaming.

At the heart of the Firepower NGFW is a brand-new Linux operating system distribution. Harrell explained that Cisco is calling its new Linux-powered operating system FXOS (Firepower eXtensible Operating System). The new FXOS introduces service-chaining capabilities that help enable a security review and remediation workflow.

Chaining and understanding context is further improved through integration with the Cisco Identity Services Engine (ISE). Harrell explained that Firepower is now able to consume ISE information about users and policy. The integration of ISE and Firepower also allows rapid threat containment, in which an alert from Firepower can be extended through ISE to keep a threat or malicious endpoint off the network.

“So you are not only blocking threats at the firewall, you can actually force the infected user into a quarantine zone of some kind until the threat is remediated,” Harrell said.

While firewall and IPS devices were once thought of as two distinct technologies, with the Firepower NGFW that is no longer true.

New Security Partners for Cisco

Networking giant Cisco will set up cyber security centres in Gurgaon and Pune to help track threats in real time, as well as train individuals, including government officials, to fight these challenges.

The US-based firm has inked a pact with CERT-In for strategic cyber security collaboration, which will focus on skills development and the sharing of data and best practices to boost awareness and digital security readiness. He further added that cooperation with Cisco can help improve the security of India's digital infrastructure and accelerate the digitalisation of India.

“I am quite delighted to know they are setting up funds to encourage start-ups working in the area of cyber security. We are encouraging digital payment in a big way. We must all work together to plug the cyber security gaps,” he explained.

Cisco President, India and SAARC, Dinesh Malkani said these efforts are part of the company's USD 100 million investment commitment to India.

“These efforts are all the more significant in the light of the government's drive towards digital transactions. By 2020, India's digital payments sector is anticipated to grow 10X to reach USD 500 billion,” he explained.

Cisco will establish a Security Operations Centre (SOC) in Pune to offer a wide selection of services, including threat monitoring and end-to-end management for business requirements. It will be connected to other Cisco SOCs around the globe.

The Security and Trust Office (STO) in Gurgaon will advise on and assist the Indian government in shaping the national cyber security plan and initiatives. This is the third such STO for Cisco, following France and Germany.

Cisco and CERT-In will work together on threat intelligence sharing, with staff from Cisco and CERT-In collaborating to tackle cyber security threats and incidents, identify emerging security trends, discuss leading practices, and learn new strategies to improve cyber security.

The US-based firm will also establish a Cyber Range Lab at its Gurgaon centre, which will offer specialised technical training workshops to help security staff build the skills and experience necessary to fight new-age cyber threats.

It will simulate an environment that enables staff to play the part of both attacker and defender, to learn the most recent methods of vulnerability exploitation and the use of advanced tools and techniques to mitigate and eliminate threats.

These centres will be fully operational within the coming few weeks. Cisco has over 1,000 people working in the area of security for its global operations.

Joe Simmons

Blogger, Author of Watch BBC TV abroad.

Introducing Fog Computing

Fog computing refers to a specific extension of the standard cloud computing model. It specifies a more decentralised architecture which collaborates with one or more edge node devices. This provides the subsequent control and configuration of end devices, something that is difficult for standard cloud computing models, where data must be held centrally. The fog computing model offers cloud-based services the chance to expand their reach and increases the speed of access to such devices.

There are two distinct planes: the control plane and the data plane, which is often known as the forwarding plane. The data plane is responsible for the forwarding and delivery of data packets. It allows specific computing resources to be placed anywhere on the network, unlike traditional cloud computing, which has to be focused on central servers. The control plane provides an overview of the network and works with all the routing protocols specified in the architecture.

This fog model allows data from devices in the Internet of Things to be processed in hardware nearer the origin of the data. It is important to remember that the client-side architecture is becoming increasingly complex too; for example, many of our devices are connected through VPNs or specialist DNS servers.

Cloud computing relies on the existence of, and a connection to, that central server, which means you have to provision connectivity and bandwidth to accommodate it. Not so with the fog computing model: data can easily be accessed between local devices, with no dependency on the cloud. This model improves the accessibility and availability of device data, and the idea also promotes collaboration between devices and data centres.

The model copes better with the capacity requirements of the IoT, which is growing exponentially. This rise is partly due to the increase in smartphones and other devices that need access to data handling and computation, often in real time. With the conventional cloud, even the smallest piece of data must be transmitted from the edge devices up to the central cloud, which of course slows the whole network down.

Here’s a Quick Summary of the Advantages

  1. Globally distributed network helps minimise downtime
  2. Load balancing
  3. Maximize network bandwidth utilization
  4. Optimal operational expense
  5. Business Agility
  6. Better Interconnectivity
  7. Enhanced QoS
  8. Latency Reduction