ATM – Routing IP

There are of course many different network architectures, many of which have been around for years. One of them, ATM (Asynchronous Transfer Mode), was considered in the 1990s to be the ultimate network architecture design. The belief was that in the future every computer or device would be fitted with an ATM network adapter rather than the alternatives of the time, token ring or Ethernet.

The reality has turned out somewhat different, of course, and it is unlikely that we will ever see extensive use of ATM-based networks. However, many corporations installed ATM backbone switches for one important reason: their ability to handle network traffic at extremely high speeds.

There is a difficulty in using these switches though: ATM is a virtual-circuit, cell-based networking scheme which is primarily connection oriented. Compare this with Ethernet, which powers the majority of commercial networks and is a connectionless, frame-based networking scheme. To integrate the two systems you need to use one of the available overlays which have been developed to allow Ethernet to be connected to ATM backbones and switches.

These normally work by using layer 3 routing algorithms to discover the initial routes through the network; layer 2 virtual circuits can then be established through the ATM fabric, delivering data without passing through the routers directly. This technique is normally known as 'shortcut routing', although you will often hear it described by other terms. If you need more detailed information, check your usual networking references or search online using terms like 'IP routing over ATM'.

There are difficulties with these improvised techniques. One of the most common is knowing when to route and when to switch the traffic at layer 2. Long data transmissions, such as video streams, should be switched as the more efficient method of transport, while for shorter transmissions the router is normally the better option.

Layer 3 traffic will not, under normal circumstances, identify the length of the transmission, so it may or may not be suitable for switching. There are ways of estimating the length of a transmission, normally by inspecting the content of the datagrams themselves. Many different methods of identifying a flow have been developed, mostly by different networking companies; some are no longer in common use, while others are still being developed or used extensively. See the references below for some examples that can be researched for more information.
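
As a rough sketch of the idea (the packet-count threshold and the 5-tuple flow key below are invented for illustration, not taken from any vendor's implementation), a flow classifier might count packets per flow and promote long-lived flows to the switched path:

```python
# Sketch of the route-vs-switch decision in flow-based shortcut routing.
# SWITCH_THRESHOLD and the flow key are illustrative assumptions.

SWITCH_THRESHOLD = 10  # packets seen before a flow earns a layer 2 shortcut

flow_counts = {}

def classify(packet):
    """Return 'route' for short flows, 'switch' once a flow proves long-lived."""
    key = (packet["src_ip"], packet["dst_ip"],
           packet["proto"], packet["src_port"], packet["dst_port"])
    flow_counts[key] = flow_counts.get(key, 0) + 1
    # Short transmissions stay on the layer 3 routed path; long ones
    # justify the cost of setting up an ATM virtual circuit.
    return "switch" if flow_counts[key] >= SWITCH_THRESHOLD else "route"
```

The threshold embodies the trade-off discussed above: promoting a flow too early wastes virtual-circuit setup on short transfers, promoting it too late routes traffic that should have been switched.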

References:
3Com Fast IP
Ipsilon IP Switching


CSMA/CD (Carrier Sense Multiple Access/Collision Detection)

On shared network topologies like Ethernet there is a need to control access to the medium. One of the most common methods is CSMA, which ensures that all devices get fair access to the available bandwidth.

Devices attached to the network listen to other traffic before transmitting; this is called 'carrier sense'. A device will wait until the channel is free before transmitting on the shared cable. There is also the ability for many devices to use the same network, known as MA (multiple access), so multiple devices communicate over the same network cable. Multiple access is a necessity because all the devices on a CSMA network have equal rights to transmit, so it is inevitable that two stations will sometimes attempt to transmit at the same time, especially on larger networks. In this case collisions can occur, and these are handled using a technique called collision detection.

CD (collision detection) defines what happens when two devices see a clear channel and both attempt to use it at the same time. When a collision occurs, both devices stop transmitting and wait for a random backoff interval (a random number of slot times, not seconds) before attempting to retransmit. This is likely to happen often on busy networks with many users, and even a few clients streaming video can generate enough traffic to cause similar issues.
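
The retransmission wait is a truncated binary exponential backoff. The sketch below follows the 10 Mbps Ethernet convention (51.2 microsecond slot time, 16 attempts, contention window capped at 2^10 slots), but it is purely illustrative:

```python
# Illustrative simulation of truncated binary exponential backoff as used
# by CSMA/CD after a collision. Constants follow the classic 802.3 values.
import random

SLOT_TIME_US = 51.2   # slot time for 10 Mbps Ethernet, in microseconds
MAX_ATTEMPTS = 16

def backoff_delay(attempt):
    """Return the random wait (in microseconds) after the nth collision."""
    if attempt > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    # The contention window doubles with each collision, capped at 2**10 slots,
    # so repeated collisions spread stations further apart in time.
    window = 2 ** min(attempt, 10)
    return random.randrange(window) * SLOT_TIME_US
```

After the first collision a station waits 0 or 1 slot; after the third it waits anywhere from 0 to 7 slots, which is why collisions on a lightly loaded network resolve quickly.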

This method is used on most Ethernet networks and is surprisingly effective on a standard IEEE 802.3 Ethernet channel. It should be noted though that this method only handles collisions as they occur; it does not actively prevent them happening in the first place. If there are too many collisions on a network then performance can be impacted greatly. Indeed, a common rule of thumb is to keep utilisation to around 40% of the bus capacity to keep collisions manageable, which is very difficult for most busy corporate networks.

A more advanced method of dealing with collisions is CSMA/CA, where CA stands for collision avoidance. This attempts to avoid collisions by having each node signal its intention before transmitting. On wired networks it is rarely used, because the avoidance traffic generates overhead comparable to the collisions themselves; it is, however, the basis of medium access in 802.11 wireless networks, where collision detection is not practical.

Further Reading:
Ethernet IEEE 802.3
MAC Medium Access Control


Performance Issues – Content Filtering

When it comes to proxies and network performance there are obviously many components which can be influencing factors. One of those is content filtering, which in most networks forms an important part of perimeter and internal security. Nowadays most employees enjoy access to the internet from their corporate PCs, which in itself necessitates some content filtering. URL filtering is one such process, and the intensive checking of every request against lists of blocked patterns carries a performance cost.
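
The per-request check is simple in principle but expensive at scale, which is where the performance cost comes from. A minimal sketch (the blocked patterns here are invented examples; real deployments match against very large category lists):

```python
# Minimal sketch of URL pattern filtering as a proxy might perform it.
# BLOCKED_PATTERNS is a toy list; production filters hold many thousands
# of entries, which is why this per-request scan affects throughput.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\.example-gambling\.com$"),  # block a whole domain
    re.compile(r"^streaming\."),              # block hosts by prefix
]

def is_blocked(host):
    """Return True if any blocked pattern matches the request's hostname."""
    return any(p.search(host) for p in BLOCKED_PATTERNS)
```

Each incoming request pays the cost of every pattern that fails to match, so the check scales with list size as well as traffic volume.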

There are real risks in allowing access to the internet, so it is essential that these risks are mitigated in some way. Users can of course be made aware of codes of conduct, and a robust internet usage policy is essential. However there will always be some users who ignore these rules, and even some who actively seek to bypass them. It is not uncommon to analyse outbound connections and find many people running constant media streams, which is obviously not good for your network.

Other examples of content filtering are HTML tag filtering and screening for viruses and malware. HTML tag filtering allows certain tags to be removed from transferred HTML documents, usually for security purposes; many organisations, for example, routinely strip all Java or ActiveX controls from content. Blocking any content which contains viruses or malware is of course a sensible option in today's security environment.

When these objects are being transferred and cached through a proxy server, there is an opportunity to filter the content, and the proxy is the logical place, for example, to implement virus-screening plugins. The problem is that most of these plugins require the whole object to be retrieved before it can be scanned. This leads to the undesirable situation where the proxy server is caching a potentially dangerous file. It can also introduce considerable latency from the user's perspective, as the entire content is downloaded and cached before the user sees anything on screen.

There have been technological developments which are improving this situation, with more sophisticated scanners that can operate on streaming files and content. Other filtering applications handle HTML tag filtering in the same way, so that data can be sent on almost immediately and the large lag at the client's side is avoided.
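
The core trick of a streaming scanner can be sketched as follows. The signature list is a toy stand-in for a real malware database, and the overlap buffer is what lets the scanner forward data as it arrives without missing a signature split across two chunks:

```python
# Sketch of chunk-wise content scanning so a proxy can forward data as it
# streams rather than caching the whole object first. SIGNATURES is an
# invented stand-in for a real scanner's database.
SIGNATURES = [b"EICAR-TEST"]

def scan_stream(chunks):
    """Yield each chunk onward, raising if a signature appears.

    A small tail of the previous chunk is kept so that signatures split
    across chunk boundaries are still detected."""
    overlap = max(len(s) for s in SIGNATURES) - 1
    tail = b""
    for chunk in chunks:
        window = tail + chunk
        if any(sig in window for sig in SIGNATURES):
            raise ValueError("malicious content detected")
        tail = window[-overlap:]
        yield chunk
```

Note the trade-off this illustrates: earlier chunks may already have been forwarded by the time a signature is found later in the stream, which is exactly why whole-object scanning was the conservative default.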

John Stevens

Creating a Proxy Hierarchy

Although most networks and organisations would benefit from implementing proxy servers, it can be a difficult task to decide on the location and hierarchy of these servers. The decision is important, and there are some questions which can aid the decision-making process.

Flat or Hierarchical Proxy Structure?

This decision will largely depend on both the size and the geographical dispersion of the network. The two main options are a standard single, flat level of proxies, or something larger: a hierarchy based on a tree structure, much like the Active Directory forest structure used in complex Windows environments.

Indeed, in such environments it may be suitable to mirror the Active Directory design with the proxy server structure. Many technical staff use the following rule of thumb: each branch office requires an individual proxy server. Again this may map onto an AD design where each office exists in its own Organisational Unit (OU). This has other benefits, because you can apply custom security and configuration options based on that OU, for example allowing the sales OU more access through the proxy than administrative teams.

This of course needs to be carefully planned in line with whatever physical infrastructure is in place. You cannot install heavy-duty proxy hardware at the end of a small ISDN line, for example. The proxy servers should be installed in line with both the organisational structure and the network infrastructure. Larger organisations can base these on larger geographical areas, for example a separate hierarchy in each country, so you might have a top-level UK proxy server above regional proxies further down in the organisation.

If the organisation is fairly centralised you'll almost certainly find a single level of proxies the better solution. It is much easier to manage, and latency is minimised without tunnelling through multiple layers of servers and networks.

Single Proxies or Proxy Arrays?

A standard rule of thumb for proxy servers is something like one proxy for every 3,000 potential users. This is of course only an estimate and can vary widely depending on the users and their geographic spread. It doesn't mean the proxies need to be independent; they can instead be installed together in a chain or array.

For example you can set up four proxies in parallel to support 12,000 users using the Cache Array Routing Protocol (CARP). These could be set up across different boundaries, even across a flat proxy structure. Remember that the servers will have different IP address ranges if they sit across national borders, so make sure that each regional proxy can reach all the other sites; ideally proxies should be multihomed to help with routing.

Using a caching array allows multiple physical proxies to be combined into a single logical device. This is normally a good idea, as it increases the effective cache size and eliminates duplication between individual proxy caches.
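
The reason a CARP array avoids duplication is that every member, and every client, computes the same deterministic mapping from URL to proxy. Real CARP defines a specific hash function and per-member load factors; the sketch below uses an ordinary hash purely to illustrate the highest-score-wins idea:

```python
# Sketch of CARP-style request routing: each array member is scored by
# hashing (member name + URL), and the highest score wins, so every
# client agrees on which cache owns a given URL. hashlib.md5 stands in
# for CARP's own hash function.
import hashlib

PROXIES = ["proxy1", "proxy2", "proxy3", "proxy4"]

def pick_proxy(url):
    """Deterministically map a URL to one member of the array."""
    def score(proxy):
        digest = hashlib.md5((proxy + url).encode()).hexdigest()
        return int(digest, 16)
    return max(PROXIES, key=score)
```

A useful property of this scheme is that adding or removing a member only remaps the URLs that member would have won, rather than reshuffling the whole cache as naive modulo hashing would.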

It's normally best to run proxies in parallel whenever the opportunity exists. Sometimes this will not be possible, and specific network configurations may rule this method out, meaning you'll have to run proxies individually in a flat mode. Even if you have to split proxy resources into individual machines, be careful about creating network bottlenecks. Individual proxies should not all point at a single gateway or machine; even an overworked firewall can have a significant impact on a network's performance and latency.

Cisco Pushes Firewall into Next Generation

In October 2013, Cisco closed its $2.7 billion purchase of Sourcefire, and it has been integrating Sourcefire's technology ever since. Now Cisco is fully embracing the Sourcefire technologies in the firm's brand-new Cisco Firepower NGFW, quite literally the next generation of Cisco's network defence technology.

Scott Harrell, Vice President of Product Management, Security Business Group at Cisco, explained that the Cisco Firepower NGFW is a fully integrated platform which includes firewall, IPS and URL filtering capabilities, in addition to integration out to secure endpoints. Furthermore, Cisco's threat telemetry data is incorporated into the Firepower NGFW, and the management of threat information and the security workflow is also enhanced.

"When we purchased Sourcefire two years back, we knew it'd be a journey to get to this point," Harrell told Enterprise Networking Planet. "Many industry analysts were doubtful of Cisco's ability to bring Sourcefire's technology together with technologies such as our classic ASA firewall, and with this launch we are saying we got it."

Over the previous two years, Cisco has been incorporating Firepower features into the ASA product line. In September 2014, Cisco added Firepower services from Sourcefire to Cisco ASA firewalls. At the time, Harrell explained that the Sourcefire Firepower services could be used to replace an existing Cisco IPS service running on the ASA.

With the newest Firepower NGFW, Harrell explained that an existing ASA 5500 can be updated via software to the new image, and a number of the older Firepower appliances can also be updated to it. Historically, ASA was largely just a firewall and Firepower was largely just an IPS, but with Firepower NGFW the two worlds are coming together. There are now many implementations working in organisations across the world, handling demanding traffic such as high-volume video streaming.

At the core of the Firepower NGFW is a brand-new Linux operating system distribution. Harrell explained that Cisco is calling its new Linux-powered operating system FXOS (Firepower eXtensible Operating System). The new FXOS introduces service-chaining capabilities which help enable a security review and remediation workflow.

Chaining and understanding context is further improved through integration with the Cisco Identity Services Engine (ISE). Harrell explained that Firepower is now able to consume ISE information about users and policy. The integration of ISE and Firepower also allows rapid threat containment, in which an alert from Firepower can be extended through ISE to keep a compromised or malicious endpoint off the network.

"So you are not only blocking threats at the firewall, you can actually force the infected endpoint into a quarantine zone of some sort until the threat is remediated," Harrell said.

While firewall and IPS devices were once thought of as two distinct technologies, with the Firepower NGFW that is no longer true.

New Security Partners for Cisco

Networking giant Cisco will set up cyber security centres in Gurgaon and Pune to help track threats in real time, as well as to train individuals, including government officials, to fight these challenges.

The US-based firm has signed a pact with CERT-In for strategic cyber security collaboration, which will focus on skills development and the sharing of information and best practices to boost awareness and digital security readiness. A government official added that cooperation with Cisco would help improve the security of India's digital infrastructure and accelerate the digitalisation of India.

"I am quite delighted to learn they're setting up a fund to support start-ups working in the area of cyber security. We're encouraging digital payment in a big way, and we must all work together to plug the cyber security gaps," he explained.

Cisco's President for India and SAARC, Dinesh Malkani, said these efforts are part of the business's USD 100 million investment commitment to India.

"These efforts are all the more significant in light of the government's drive towards digital transactions. By 2020, India's digital payments industry is anticipated to grow 10X to reach USD 500 billion," he explained.

Cisco will establish a Security Operations Centre (SOC) in Pune to offer a wide selection of services, such as threat monitoring and end-to-end management for business requirements. It will be connected to other Cisco SOCs around the globe.

The Security and Trust Office (STO) in Gurgaon will advise on and assist the Indian government in shaping the national cyber security plan and initiatives. This is the third such STO for Cisco, following France and Germany.

Cisco and CERT-In will work together on threat intelligence sharing; staff from both organisations will collaborate to tackle cyber security threats and incidents, identify and shape emerging security industry trends, discuss leading practices, and learn new strategies to improve cyber security.

The US-based firm will also establish a Cyber Range Lab at its Gurgaon centre, which will offer specialised technical training workshops to help security staff build the skills and expertise essential to fight new-age cyber threats.

It will simulate an environment which enables staff to play the part of both attacker and defender, learning the most recent methods of vulnerability exploitation and the use of advanced tools and techniques to mitigate and eliminate threats.

These centres will be fully operational within the coming weeks. Cisco has over 1,000 people working in the area of security across its global operations.

Joe Simmons

Blogger, Author of Watch BBC TV abroad.

Introducing Fog Computing

Fog computing refers to an extension of the standard cloud computing model. It specifies a more decentralised architecture which collaborates with one or more edge node devices, providing control and configuration of end devices close to where they sit, something that is difficult for standard cloud computing models where data must be held centrally. The fog computing model offers the chance for cloud-based services to expand their reach and increases the speed of access to such devices.

There are two distinct planes: the control plane and the data plane, often known as the forwarding plane. The forwarding of data packets to their destinations is the responsibility of the data plane. This allows specific computing resources to be placed anywhere on the network, unlike traditional cloud computing, which has to be focused on central servers. An overview of the network is provided by the control plane, which works with all the routing protocols specified in the architecture.

The fog model allows data from devices in the Internet of Things to be processed in hardware nearer the origin of the data. It's important to remember that the client-side architecture is becoming increasingly complex too; for example, many of our devices are actually connected through VPNs or specialist DNS servers, as covered in this article – Smart DNS vs VPN.

Cloud computing relies on the existence of, and a connection to, a central server, which means you have to provision connectivity and bandwidth to accommodate this. Not so with the fog computing model: data can easily be accessed between local devices, with no dependency on the cloud. This improves the accessibility and availability of device data, and the idea also promotes collaboration between devices and data centres.

The model is better suited to managing the capacity requirements of the IoT, which is growing exponentially. This rise is partly due to the increase in smartphones and other devices which need access to data handling and computation power, often in real time. With the conventional cloud, even the smallest piece of data must be transmitted from edge devices up to the central cloud, which slows the whole network down.
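
A minimal sketch of what a fog node does instead (the function name and the summary fields are invented for illustration): raw readings are reduced locally and only a small aggregate would be forwarded to the central cloud, rather than every individual sample.

```python
# Illustrative fog-node aggregation: a batch of edge sensor readings is
# summarised locally so only one compact record, not every sample, needs
# to travel up to the cloud. Names and fields are illustrative.
def aggregate_readings(readings):
    """Reduce a batch of numeric sensor readings to one summary record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }
```

Even this toy reduction turns an arbitrary number of per-sample uploads into a single record per batch, which is the bandwidth saving the fog model is after.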

Here’s a Quick Summary of the Advantages

  1. Globally distributed network helps minimise downtime
  2. Load balancing
  3. Maximize network bandwidth utilization
  4. Optimal operational expense
  5. Business Agility
  6. Better Interconnectivity
  7. Enhanced QoS
  8. Latency Reduction

Digital Interface Testing – Cisco

If you need to check the physical-layer status and the quality of digital circuits, there are two tools you are likely to need. The first is a breakout box, which can be used to determine the connection integrity between the DTE and the DCE. The box (also known as a 'BOB') has two external connectors which attach to the DTE and the DCE.

The box supplies status information on the digital circuit and will also display any data being transmitted at the time. The device will normally display real-time status information about data, clocking, space and activity; on most breakout boxes this information is displayed using status LEDs. It is normally quite a compact device, powered by batteries to increase its portability. The box contains buffered circuitry which does not interfere with the actual line signal during testing, and most are also capable of verifying the electrical resistance and line voltage.

These tools are primarily focused on physical problems on a network, although errors can of course occur for other reasons. If you're looking at other issues, perhaps an IP address conflict or an application error, then you should look at other tools.

The second piece of equipment you'll need goes by a variety of names but is most commonly known as a BERT. This stands for bit-error-rate tester, and it is a much more sophisticated piece of kit. It can measure the error rate in a digital signal, either across an end-to-end circuit or on a portion of a circuit in order to isolate individual faults. The bit error rate is often measured during installation and commissioning so that it can be used as a baseline.

The BERT is also used to measure error rates on a variety of different bit patterns that it can generate, and this information can reveal timing or noise issues on the circuit. It does take time, but it allows a line to be monitored accurately so that traffic and error analysis can be performed.
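
The calculation at the heart of a BERT is straightforward: compare the known transmitted pattern with what came back and report the fraction of bits that differ. A sketch:

```python
# Sketch of the core calculation a bit-error-rate tester performs:
# compare a known transmitted pattern against the received copy and
# report the fraction of differing bits.
def bit_error_rate(sent, received):
    """sent and received are equal-length strings of '0'/'1' characters."""
    if len(sent) != len(received):
        raise ValueError("patterns must be the same length")
    errors = sum(1 for a, b in zip(sent, received) if a != b)
    return errors / len(sent)
```

A real tester does this continuously over pseudo-random patterns at line rate, which is why a meaningful BER (typically quoted as something like 10^-9) takes time to accumulate.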

John Williams


Dynamic Host Configuration Protocol – DHCP

There are several popular mechanisms which can allocate an IP address to a computer or network device, but DHCP is probably the most advanced method in common use. It's a robust and efficient protocol which uses UDP as its transport mechanism. It exists largely as a result of the shortcomings of its predecessor, BOOTP, over which DHCP offers a host of enhancements.

One of the biggest improvements was that DHCP allows the inclusion of a client's subnet mask, which lets clients be configured much more easily, particularly on large networks with many subnets. Another addition was the ability to lease IP addresses for a specified period. In large networks this is crucial for several reasons, but primarily it made managing IP addresses much simpler and ensured that addresses weren't locked to computers which weren't even switched on. It enabled a network administrator to work with a much smaller pool of usable IP addresses than the number of potential network-enabled clients.
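
The leasing behaviour can be sketched roughly as below. The pool size, lease time and client identifiers are illustrative, and a real DHCP server is considerably more involved (the DISCOVER/OFFER/REQUEST/ACK exchange, lease persistence, and so on); the point here is only how expiry lets a small pool serve a larger client population:

```python
# Minimal sketch of DHCP-style lease allocation: a small pool of
# addresses serves a larger set of clients because expired leases are
# reclaimed and handed out again.
import time

class LeasePool:
    def __init__(self, addresses, lease_seconds):
        self.free = list(addresses)
        self.lease_seconds = lease_seconds
        self.leases = {}  # client_id -> (address, expiry timestamp)

    def allocate(self, client_id, now=None):
        now = time.time() if now is None else now
        # Reclaim expired leases first, as a DHCP server does.
        for cid, (addr, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[cid]
                self.free.append(addr)
        if client_id in self.leases:       # renewing client keeps its address
            addr, _ = self.leases[client_id]
        elif self.free:
            addr = self.free.pop(0)
        else:
            return None                    # pool exhausted
        self.leases[client_id] = (addr, now + self.lease_seconds)
        return addr
```

Note how a third client is refused while both leases are live, but succeeds once the earlier leases have expired, which is exactly the "smaller pool than potential clients" property described above.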

Although DHCP is a huge improvement on the IP address allocation systems that preceded it, there are still some situations which can cause problems. It's worth considering DHCP if there are network connectivity problems with your clients.

Typically DHCP-related problems are to do with configuration or connectivity. One of the simplest issues is that DHCP hasn't actually been enabled on a client; although most later versions of Windows attempt to use DHCP by default, some older versions need the IP addressing mechanism configured first.

A DHCP server will often be on a different network segment than the client it is attempting to serve, and any connectivity issues between the two segments will prevent IP addresses being allocated by the DHCP server. Remember the protocol uses UDP as its transport mechanism, which does not provide delivery checking. A client will also broadcast to find the nearest reachable DHCP server, which causes issues if those broadcasts are not forwarded by intervening network hardware.

If you do have problems on larger networks with DHCP broadcasts not being forwarded, then you should configure IP helper addresses on routers within the network to solve this. It can also get confusing with multiple DHCP servers on different networks, so it's important to keep clear records of which servers serve which segments to ensure connectivity across them.

If you've ruled out connectivity problems, make sure the DHCP server is configured properly and has plenty of available IP addresses to allocate. Sometimes the problem is not that the DHCP server can't be contacted, but simply that it has run out of addresses to allocate to clients.

With Thanks

Raphael Silvano – Italian Networks, Rai Streaming Estero, 2017 Haver Press

Internet Control Message Protocol – ICMP

The Internet Control Message Protocol has a wide variety of message types, many of which are extremely useful for managing and troubleshooting an IP network. Most of us are familiar with the command 'ping', which uses at its core both ICMP echo and echo reply. Another well-used ICMP-based tool is traceroute, which manipulates the TTL (time to live) field to map the hops along a path.

There are however quite a number of ICMP messages beyond the ones used by these tools, and most are extremely useful for anyone managing a complex IP-based network. Here are some of the most useful:

ICMP unreachable – an IP host will produce an ICMP unreachable message if there is no valid path to the requested host, network, protocol or port. There are several of these messages, which are often grouped together for convenience. They are often generated by routers and switches, for example when local access lists are restricting access to the requested resource. You should be careful about allowing these messages to propagate, as they contain source addresses, particularly on externally facing connections. The messages can be blocked by using the no ip unreachables command on Cisco hardware.

ICMP redirects – a router will produce a redirect message if it receives a packet on a given interface and the best route back out is via that same interface, indicating that the sender should use a different gateway on the local segment. These can be used to help update local routing tables with the correct information. There is an interesting protocol from Cisco which can be configured to help in these situations, called the Hot Standby Router Protocol (HSRP).

ICMP mask request and reply – some hosts do not have their subnet masks statically defined and have no other way of learning them. Here they can use an ICMP mask request, which can be responded to by the router with an ICMP mask reply.

ICMP source quench – these messages were intended to provide an important function within ICMP: congestion control on the network. If a network device such as a router detects congestion, perhaps because of dropped packets or overflowing buffers on its interfaces, it can send an ICMP source quench message to the source of those packets. Note that source quench has since been formally deprecated (RFC 6633), and modern hosts should ignore it.

ICMP fragmentation needed – this type of message is sent when an IP packet is received which is larger than the MTU of the next link yet has the DF (do not fragment) flag set. The packet cannot be forwarded, but the ICMP message can at least pass back some information on the issue. There are actually quite a few scenarios where the DF bit is set automatically by devices, most notably during path MTU discovery.
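
The echo machinery that ping relies on is simple enough to sketch by hand. The following builds an ICMP echo request with the standard Internet checksum (RFC 1071); actually sending it requires a raw socket and elevated privileges, so only construction is shown:

```python
# Sketch of building an ICMP echo request ("ping") datagram by hand.
import struct

def internet_checksum(data):
    """RFC 1071 ones'-complement sum over 16-bit words, network byte order."""
    if len(data) % 2:
        data += b"\x00"            # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:             # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(identifier, sequence, payload=b"ping"):
    """Return the raw bytes of an ICMP type 8 (echo request) message."""
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload
```

A handy property of this checksum is that recomputing it over a correctly checksummed packet yields zero, which is exactly how receivers validate incoming ICMP messages.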
