Performance Issues – Content Filtering

Many components influence proxy and network performance.  One of them is content filtering, which in most networks forms an important part of perimeter and internal security.  Nowadays most employees enjoy internet access from their corporate PCs, which in itself creates the need for some content filtering.  URL filtering is one such process, where the performance cost comes from intensively checking each request against lists of patterns to block.

There are huge risks in allowing access to the internet, so it is essential that these risks are mitigated in some way.  Users can obviously be made aware of codes of conduct, and a robust internet usage policy is essential.  However there will always be some users who will ignore these rules, and even some who will actively seek to bypass them.  It is not uncommon to analyse outbound connections and see many users running constant media streams, such as overseas TV, which is obviously not good for your network.

Other examples of content filtering include HTML tag filtering and screening for viruses and malware. HTML tag filtering allows certain tags to be removed from transferred HTML documents, usually for security purposes. Many organisations, for example, will routinely screen out all Java or ActiveX controls from content. Blocking any content which contains viruses or malware is of course a sensible option in today's security environment.

When these objects are being transferred and cached through a proxy server, there is an opportunity to filter the content. The proxy is the logical place, for example, to implement virus screening plugins. The problem is that most of these plugins require the whole object to be retrieved before it can be scanned. This leads to the undesirable situation where the proxy server is caching a potentially dangerous file. It can also introduce a large amount of latency from the user's perspective, since the entire object is downloaded and cached before the user sees anything on screen.

There have been some technological developments which are improving this situation, with more sophisticated scanners that can operate on streaming files and content. Other filtering applications can handle HTML tag filtering in this way too, so that the data can be sent on almost immediately, avoiding that large data lag on the client side.
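As a sketch of that streaming approach, the filter below strips script elements from HTML as it arrives in network-sized chunks, forwarding each chunk as soon as it has been screened rather than buffering the whole document first. The class name and chunk contents are illustrative, not from any particular product; attribute handling is deliberately simplified.

```python
from html.parser import HTMLParser

class ScriptStripper(HTMLParser):
    """Removes <script> elements from an HTML stream incrementally.

    Each call to filter_chunk() returns whatever has been screened so
    far, so a proxy could forward content to the client chunk by chunk
    instead of buffering the whole document.  (Attribute handling is
    simplified for the sketch.)"""

    def __init__(self):
        super().__init__(convert_charrefs=False)
        self.out = []
        self.in_script = 0

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script += 1
        elif not self.in_script:
            text = "".join(f' {k}="{v}"' for k, v in attrs)
            self.out.append(f"<{tag}{text}>")

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = max(0, self.in_script - 1)
        elif not self.in_script:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.in_script:
            self.out.append(data)

    def filter_chunk(self, chunk):
        """Feed one network chunk; return the data safe to forward now."""
        self.out = []
        self.feed(chunk)  # HTMLParser buffers tags split across chunks
        return "".join(self.out)

stripper = ScriptStripper()
# A tag split across two chunks is handled by the parser's own buffering.
chunks = ["<p>hello</p><scr", "ipt>evil()</script><p>world</p>"]
filtered = "".join(stripper.filter_chunk(c) for c in chunks)
assert filtered == "<p>hello</p><p>world</p>"
```

Note how the first chunk can be forwarded before the second has even arrived, which is exactly what removes the data lag described above.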

John ITV Stevens

Creating a Proxy Hierarchy

Although most networks and organisations would benefit from implementing proxy servers, it can be a difficult task to decide on the location and hierarchy of these servers.  The decision is an important one, and there are some questions which can aid the process.

Flat or Hierarchical Proxy Structure?

This decision will largely depend on both the size and the geographical dispersion of the network.  There are two main options: either a standard single flat level of proxies will be sufficient, or something larger is required.  The larger option is a hierarchy based on a tree structure, much like the Active Directory forest structure used in complex Windows environments.

Indeed, in such environments it may be suitable to mirror the Active Directory design with the proxy server structure.   Many technical staff use the following rule of thumb – each branch office requires an individual proxy server.  Again this may map onto an AD design where each office exists within its own Organisational Unit (OU).  This has other benefits, because you can apply custom security and configuration options based on that OU, for example allowing the sales OU more access through the proxy than administrative teams.

This of course needs to be carefully planned in line with whatever physical infrastructure is in place.   You cannot install heavy-duty proxy hardware at the end of a small ISDN line, for example.  The proxy servers should be installed in line with both the organisational structure and the network infrastructure.    Larger organisations can arrange these along broader geographical lines, for example a separate hierarchy in each country, so you would have a top-level UK proxy server above regional proxies further down the organisation.

If the organisation is fairly centralised, you'll almost certainly find a single level of proxies a better solution.  It is much easier to manage, and latency is minimised by not tunnelling through multiple layers of servers and networks.

Single Proxies or Proxy Arrays?

A standard rule of thumb for proxy servers is usually something like one proxy for every 3000 potential users.   This is of course only an estimate and can vary widely depending on the users and their geographic spread.  This doesn't mean that the proxies need to be independent; they can indeed be installed in a chain together.

For example, you can set up four proxies in parallel to support 12,000 users using the Cache Array Routing Protocol (CARP).  These could be set up across different boundaries, even across a flat proxy structure.   Remember that the servers will have different IP address ranges if they sit across national borders; make sure that your proxy with the Irish IP address can reach all the other European sites.  Most proxies should ideally be multihomed to help with routing.

Using a caching array allows multiple physical proxies to be combined into a single logical device.    This is normally a good idea, as it increases the effective cache size and eliminates duplication between the individual proxy caches.
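The array behaviour can be illustrated with a simplified, hypothetical version of CARP's hash-based routing: every member scores each URL by hashing its own name together with the URL, and the highest score wins, so all members agree on ownership without exchanging any state. Real CARP specifies an exact hash function and member weighting; the MD5 scoring and proxy names below are just stand-ins.

```python
import hashlib

def carp_member(url, proxies):
    """Pick the array member responsible for a URL, CARP-style:
    score each member by hashing (member name + URL); highest wins.
    Every member computes the same answer independently."""
    return max(proxies, key=lambda p: hashlib.md5((p + url).encode()).hexdigest())

# Four parallel proxies serving one logical cache, as in the example above.
proxies = ["proxy-dub1", "proxy-dub2", "proxy-lon1", "proxy-lon2"]

# The mapping is deterministic: every member routes a given URL the same way.
owner = carp_member("http://example.com/index.html", proxies)

# URLs spread across all members, so cached objects are not duplicated.
counts = {p: 0 for p in proxies}
for i in range(1000):
    counts[carp_member(f"http://example.com/page{i}", proxies)] += 1
```

Because ownership is computed rather than looked up, adding or removing a member only remaps the URLs that hashed to it, which is why the array behaves as one logical cache.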

It's normally best to run proxies in parallel whenever the opportunity exists. However, sometimes this will not be possible: specific network configurations may prevent it, meaning you'll have to run proxies individually in a flat mode.   Even if you have to split proxy resources into individual machines, be careful about creating network bottlenecks.  Individual proxies should not all point at a single gateway or machine; even an overworked firewall can have a significant impact on a network's performance and latency.

Cisco Pushes Firewall into Next Generation

In October 2013, Cisco closed its $2.7 billion purchase of Sourcefire. Ever since, Cisco has been integrating Sourcefire's technology. Now Cisco has fully embraced the Sourcefire technologies with the firm's brand new Cisco Firepower NGFW, quite literally the next generation of Cisco's network security technology.

Scott Harrell, Vice President of Product Management, Security Business Group at Cisco, explained that the Cisco Firepower NGFW is a fully integrated platform which includes firewall, IPS and URL filtering capabilities, in addition to integration outward to secure endpoints. Furthermore, Cisco's threat telemetry data is incorporated into the Firepower NGFW, and the management of threat information and the security workflow is also enhanced.

“When we purchased Sourcefire two years ago, we knew it would be a journey to get to this point,” Harrell told Enterprise Networking Planet. “Many industry analysts were doubtful of Cisco's ability to bring Sourcefire's technology together with technologies such as our classic ASA firewall, and with this launch we are saying we got it.”

Over the previous two years, Cisco has been incorporating Firepower features into the ASA product line. In September 2014, Cisco added Firepower services from Sourcefire to Cisco ASA firewalls. At the time, Harrell explained that the Sourcefire Firepower services could be used to replace an existing Cisco IPS service running on the ASA.

With the newest Firepower NGFW, Harrell explained that an existing ASA 5500 can be upgraded via software to the new image, and a number of the older Firepower appliances can also be upgraded to it. Historically, ASA was largely just a firewall and Firepower was largely just an IPS, but with the Firepower NGFW the two worlds are coming together.

At the heart of the Firepower NGFW is a brand new Linux operating system distribution. Harrell explained that Cisco is calling its new Linux-powered operating system FXOS (Firepower eXtensible Operating System). The new FXOS introduces service-chaining capabilities which can help enable a security review and remediation workflow.

Chaining and understanding context is further improved through the integration of the Cisco Identity Services Engine (ISE). Harrell explained that Firepower is now able to consume ISE information about users and policy. The integration of ISE and Firepower also enables rapid threat containment, in which an alert from Firepower can be extended through ISE to keep a threat or malicious endpoint off the network.

“So you are not only blocking threats at the firewall, you can actually force the infected user into a quarantine zone of some sort until the threat is remediated,” Harrell said.

While firewall and IPS devices were once thought of as two distinct technologies, with the Firepower NGFW that is no longer true.

New Security Partners for Cisco

Networking giant Cisco will set up cyber security centres in Gurgaon and Pune to help track threats in real time, as well as to train individuals, including government officials, to fight these challenges.

The US-based firm has inked a pact with CERT-In for strategic cyber security collaboration, which will focus on skilling and the sharing of data and best practices to boost awareness and digital security readiness.  Cooperation with Cisco, it was added, will help improve the security of India's digital infrastructure and accelerate the digitalisation of India.

“I am quite delighted to know they're setting up funding to encourage start-ups that are working in the area of cyber security. We're encouraging cyber payments in a big way. All of us must work together to plug the gaps in cyber security,” he explained.

Cisco President, India and SAARC, Dinesh Malkani said these efforts are part of the company's USD 100 million investment commitment to India.

“These efforts are all the more significant in the light of the government's drive towards digital transactions. By 2020, India's digital payments industry is anticipated to grow 10X to reach USD 500 billion,” he explained.

Cisco will establish a Security Operations Centre (SOC) in Pune to offer a wide selection of services, such as the tracking of threats and their end-to-end management for business requirements. It will be connected to the other Cisco SOCs around the globe.

The Security and Trust Office (STO) in Gurgaon will advise on and assist the Indian government in shaping the national cyber security plan and initiatives. This is the third STO for Cisco, following France and Germany.

Cisco and CERT-In will work together on threat intelligence sharing, and employees from Cisco and CERT-In will collaborate to tackle cyber security threats and events, identify and shape emerging security market trends, discuss leading practices, and learn new strategies to improve cyber security.

The US-based firm will also establish a Cyber Range Lab at its Gurgaon centre, which will offer specialised technical training workshops to help security employees build the skills and expertise essential to fight new-age cyber threats.

It will simulate an environment which enables employees to play the part of both attacker and defender, to learn the most recent methods of vulnerability exploitation and the use of innovative tools and techniques to mitigate and eliminate threats.

These centres will be fully operational within the next few weeks. Cisco has over 1,000 individuals working in the area of security for its international operations.

Joe Simmons

Blogger.

Introducing Fog Computing

Fog computing refers to a specific extension of the standard cloud computing model. It specifies a more decentralised architecture in which the cloud collaborates with one or more edge node devices. This enables control and configuration of end devices close to where they sit, something that is difficult for standard cloud computing models where data must be held centrally. The fog computing model offers the chance for cloud-based services to expand their reach and increases the speed of access to such devices.

There are two distinct planes: the control plane and the data plane, the latter often known as the forwarding plane. The forwarding of data packets to their destinations is the responsibility of the data plane. This allows specific computing resources to be placed anywhere on the network, unlike traditional cloud computing which has to be focused on central servers. An overview of the network is provided by the control plane, which works with all the routing protocols specified in the architecture.

This fog model allows data from devices in the Internet of Things to be processed in hardware nearer the origin of the data.  It's important to remember that the client-side architecture is becoming increasingly complex too; many devices are connected through VPNs or specialist DNS services, for example.

Cloud computing relies on the existence of, and a connection to, a central server, which means you have to provision connectivity and bandwidth to accommodate this. Not so with the fog computing model: data can easily be accessed between local devices, with no dependency on the cloud. This improves accessibility and the availability of device data. The idea also promotes collaboration between devices and data centres.

The model is better at managing the capacity requirements of the IoT, which is growing exponentially. This rise is partly due to the increase in smartphones and other devices which need access to data handling and computational power, often in real time. With the conventional cloud, even the smallest piece of data needs to be transmitted from edge devices up to the central cloud, which of course slows the whole network down.

Here’s a Quick Summary of the Advantages

  1. Globally distributed network helps minimise downtime
  2. Load balancing
  3. Better network bandwidth utilisation
  4. Reduced operational expense
  5. Business agility
  6. Better interconnectivity
  7. Enhanced QoS
  8. Reduced latency

Digital Interface Testing – Cisco

If you need to check the physical layer status and the quality of digital circuits, there are two tools which you are likely to need.   The first is a breakout box, which can be used to determine the connection integrity between the DTE and the DCE. This box (also known as a 'BOB') has two external connectors, one attached to the DTE and one to the DCE.

The box supplies status information on the digital circuit and will also display any data being transmitted at the time.   The device will normally display real-time status information about data, clocking, space and activity.  On most breakout boxes this information is displayed using status LEDs.  It is normally quite a compact device, powered by batteries to increase its portability.   The box contains buffered electrical circuitry which does not interfere with the actual line signal during testing.  Most are also capable of verifying the electrical resistance and line voltage too.

These devices are primarily focused on physical problems on a network, although errors can occur for other reasons.  If you're looking at other issues, perhaps an IP address conflict on a proxy or an application error, then you should look at other tools.

The second piece of equipment you'll need has a variety of names but is most commonly known as a BERT.  This stands for bit-error-rate tester, and it is a much more sophisticated piece of kit.   It can effectively measure the error rate in a digital signal, either end to end or on a portion of a circuit in order to isolate individual faults.  The bit error rate is often measured during installation and commissioning so that it can be used as a baseline.

The BERT is also used to measure error rates on the variety of different bit patterns it can generate; you can use this information to diagnose timing or noise issues on the circuit.  It does take time, but it allows a line to be monitored accurately so that traffic and error analysis can be performed.
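The core calculation a BERT performs is easy to sketch: compare the transmitted pattern with what came back and report the fraction of differing bits. The pseudo-random pattern and the simulated line noise below are illustrative stand-ins for a real PRBS generator and a real circuit.

```python
import random

def bit_error_rate(sent, received):
    """Return the fraction of differing bit positions -- the figure a
    bit-error-rate tester reports for a circuit."""
    if len(sent) != len(received):
        raise ValueError("bit sequences must be the same length")
    errors = sum(1 for a, b in zip(sent, received) if a != b)
    return errors / len(sent)

random.seed(7)
# A pseudo-random test pattern, standing in for a BERT's PRBS patterns.
pattern = [random.randint(0, 1) for _ in range(10_000)]
# Simulate a noisy line that flips roughly one bit in a thousand.
received = [b ^ (1 if random.random() < 0.001 else 0) for b in pattern]
ber = bit_error_rate(pattern, received)
```

Measured on a healthy line during commissioning, this figure becomes the baseline the text mentions; a later re-run that shows a much higher rate points at degradation.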

John Williams


Dynamic Host Configuration Protocol – DHCP

There are several popular mechanisms which can allocate an IP address to a computer or network device; however, DHCP is probably the most advanced method in common use.    It's a robust and efficient protocol which uses UDP as its transport mechanism.   It exists largely as a result of the shortcomings of its predecessor BOOTP, over which DHCP offers a host of enhancements.

One of the biggest improvements was that DHCP allows the inclusion of a client's subnet mask, which lets clients be configured much more easily, particularly on large networks with many subnets.  The other addition was the ability to lease IP addresses for a specified period.  In large networks this is crucial for several reasons, but primarily it made managing IP addresses much simpler and ensured that addresses weren't locked to computers which weren't even switched on.   It enabled a network administrator to work with a much smaller pool of usable IP addresses than the number of 'potential' network-enabled clients.
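The leasing idea can be sketched as follows: a small address pool serves a larger set of potential clients because expired leases return to the pool. This is a simplified model rather than the DHCP wire protocol; the addresses, client names and lease times are arbitrary.

```python
import time

class LeasePool:
    """Minimal sketch of DHCP-style leasing: addresses are handed out
    for a fixed period and reclaimed once the lease expires, so the
    pool can be smaller than the number of potential clients."""

    def __init__(self, addresses, lease_seconds):
        self.free = list(addresses)
        self.lease_seconds = lease_seconds
        self.leases = {}  # client id -> (address, expiry time)

    def request(self, client_id, now=None):
        now = time.time() if now is None else now
        # Reclaim expired leases first.
        for cid, (addr, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[cid]
                self.free.append(addr)
        # Renew an existing lease, or hand out a free address.
        if client_id in self.leases:
            addr, _ = self.leases[client_id]
        elif self.free:
            addr = self.free.pop(0)
        else:
            return None  # pool exhausted -- a classic DHCP failure mode
        self.leases[client_id] = (addr, now + self.lease_seconds)
        return addr

pool = LeasePool(["10.0.0.10", "10.0.0.11"], lease_seconds=3600)
a = pool.request("pc-1", now=0)     # -> "10.0.0.10"
b = pool.request("pc-2", now=0)     # -> "10.0.0.11"
c = pool.request("pc-3", now=0)     # -> None, pool exhausted
d = pool.request("pc-3", now=4000)  # earlier leases have expired, so pc-3 succeeds
```

Two addresses end up serving three clients over time, which is exactly the administrator's benefit described above.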

Although DHCP is a huge improvement on the IP address allocation systems that preceded it, there are still some situations which can cause problems.  It's worth considering issues with DHCP if there are network connectivity problems with your clients.

Typically, DHCP-related problems are to do with configuration or connectivity.  One of the simplest issues is that DHCP hasn't actually been configured on a client; although most later versions of Windows use DHCP by default, some older versions need the IP addressing mechanism configured first.

A DHCP server will often be on a different network segment than the client it is attempting to serve, and any connectivity issues between the two segments can prevent IP addresses being allocated by the DHCP server.   Remember the protocol uses UDP as its transport mechanism, which does not have any delivery checking.   A client will also broadcast to find the nearest reachable DHCP server, which can cause issues if these broadcasts are not relayed by the network hardware in between.

If you do have problems on larger networks with DHCP broadcasts not being relayed, you should configure IP helper addresses on routers within the network to solve this.  It can also get confusing with multiple DHCP servers on different network segments, so make sure you know which server is supposed to serve which segment.
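On Cisco routers the fix is usually an ip helper-address statement on the interface facing the clients; the interface name and server address below are placeholders, not from any particular network:

```
interface GigabitEthernet0/1
 ! Relay DHCP (and other forwarded UDP) broadcasts arriving on this
 ! segment to the central DHCP server
 ip helper-address 10.1.2.3
```

With this in place, the router converts the client's broadcast into a unicast to the server, so the DISCOVER can cross segment boundaries.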

If you've ruled out connectivity problems, make sure the DHCP server is configured properly and has plenty of available IP addresses to allocate.  Sometimes the problem is not that the DHCP server can't be contacted, but simply that it has run out of addresses to allocate to clients.


Internet Control Message Protocol – ICMP

The Internet Control Message Protocol has a wide variety of message types, many of which are extremely useful for managing and troubleshooting an IP network.   Most of us are familiar with the command ping, which uses at its core the ICMP echo and echo reply messages.   Another well-used ICMP tool is traceroute, which manipulates TTL (time to live) values to map out hop counts.
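To make the echo mechanics concrete, the sketch below builds the ICMP echo request packet that ping transmits: a type 8, code 0 header protected by the standard Internet checksum. Actually sending it would need a raw socket (and usually root privileges), so this only constructs and verifies the packet; the identifier, sequence and payload values are arbitrary.

```python
import struct

def icmp_checksum(data):
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(identifier, sequence, payload=b"ping"):
    """Build an ICMP echo request (type 8, code 0); the matching
    reply comes back as type 0."""
    # Checksum is computed over the packet with the checksum field zeroed.
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    checksum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

packet = build_echo_request(identifier=1, sequence=1)
# A receiver validates the packet by checksumming the whole thing:
# a correct packet sums to zero.
assert icmp_checksum(packet) == 0
```

The same checksum routine validates every ICMP message type discussed below, not just echoes.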

There are however quite a number of ICMP messages beyond the ones used by these tools, and most are extremely useful for anyone managing a complex IP-based network.   Here are some of the most useful ones:

ICMP unreachable – an IP host will produce an ICMP unreachable message if there is no valid path to the requested host, network, protocol or port.  There are several variants of this message, which are often grouped together for convenience.  They are often generated by routers and switches, for example when local access lists restrict access to the requested resource.   You should be careful about allowing these messages to propagate outside your network, as they contain source addresses.    The messages can be blocked using the no ip unreachables command on Cisco hardware.
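On IOS that suppression is configured per interface; the interface name below is a placeholder:

```
interface GigabitEthernet0/0
 ! Do not generate ICMP unreachables for traffic arriving here,
 ! so internal addressing is not leaked to outside probes
 no ip unreachables
```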

ICMP redirects – a router will produce a redirect message if it receives a packet on a given interface and the best route back out is via that same interface.   These can be used to help update local routing tables with the correct information.   There is an interesting protocol from Cisco which can be configured to help in these situations, called the Hot Standby Router Protocol (HSRP).

ICMP mask request and reply – some hosts do not have their subnet masks statically defined and have no other way of learning them.  They can send an ICMP mask request, which a router can answer with an ICMP mask reply.

ICMP source quench – these messages provide an important function within ICMP: congestion control on the network.   If a network device such as a router detects congestion, perhaps because of dropped packets or buffer overflows on its interfaces, it will send an ICMP source quench message to the source of the packets.

ICMP fragmentation needed – this type of message is sent when an IP packet is received which is larger than the MTU of the LAN or WAN segment, yet has the DF (do not fragment) flag set. The packet cannot be forwarded, but the ICMP message can at least pass back some information about the problem.  There are actually quite a few scenarios where the DF bit is set automatically by devices as the packet is distributed.


WAN Connectivity Issues

Troubleshooting applications which operate across WANs (wide area networks) can be especially difficult.  When a PC can communicate with servers and other workstations across different IP networks and subnets, there will almost always be complications.  The PC could be using various methods and protocols to communicate, and there's inevitably the difficulty of identifying whether your own network hardware or the remote network is causing the problem.

It's important, before looking for complex solutions, to start with the basics.  A computer that needs to communicate across a wide area network will normally be configured to route its traffic through a default gateway.    Although it sounds unlikely, misconfiguration of this very basic setting is quite often the root cause of a network connectivity issue.  Broken IP configuration on the workstation will break most connectivity, and remember it may be an external change that has caused the problem: if a router or gateway is removed or updated, then any static configuration must be updated too.

The error could be a simple incorrect IP address for the default gateway, or more commonly something like an incorrect subnet mask.   Always remember that many operating systems require a reboot to enforce changes in IP configuration, another simple mistake to make, especially when diagnosing remotely.  If you can talk to a user or have command access on the workstation, the first checks should be basic connectivity ones.  If a workstation can ping hosts on the same subnet but not on other subnets, your next step should be to check connectivity to the default gateway.
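The forwarding decision behind that symptom can be sketched in a few lines: a host sends on-subnet traffic directly and everything else via the default gateway, so a wrong mask or gateway only breaks off-subnet traffic. The addresses below are illustrative.

```python
import ipaddress

def next_hop(src_ip, netmask, dst_ip, default_gateway):
    """Mirror the host's forwarding decision: destinations on the local
    subnet are reached directly; everything else goes via the default
    gateway.  A wrong mask or gateway therefore only breaks
    off-subnet traffic."""
    local_net = ipaddress.ip_network(f"{src_ip}/{netmask}", strict=False)
    if ipaddress.ip_address(dst_ip) in local_net:
        return "direct"
    return default_gateway

# Correct mask: on-subnet traffic is direct, off-subnet goes to the gateway.
assert next_hop("192.168.1.20", "255.255.255.0", "192.168.1.99", "192.168.1.1") == "direct"
assert next_hop("192.168.1.20", "255.255.255.0", "10.0.0.5", "192.168.1.1") == "192.168.1.1"
# A too-wide mask makes remote hosts look local, so the gateway is never used.
assert next_hop("192.168.1.20", "255.0.0.0", "192.77.0.5", "192.168.1.1") == "direct"
```

The last case is the classic bad-subnet-mask symptom: local pings work while anything beyond the subnet silently fails.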

Other errors can simply be down to incorrect name resolution.  If all network configuration and operation is fine, it may just be that the machine is being directed to the wrong address.  Static name resolution information can unfortunately be stored in all sorts of places, some difficult to locate.  There are files on the host PC which should be checked; both hosts and lmhosts can cause connectivity issues if they contain an incorrect address.  Many devices also cache addresses to help with speed.

Checking IP connectivity might not tell the whole story, particularly if you're trying to troubleshoot an application.  Many applications ship with their own connectivity and configuration information, and a configuration file with incorrect connection details could override things like the default gateway and cause issues.   Many applications work through web browsers and can also pick up connection details from them: users will often specify a proxy in their browser settings for various reasons, perhaps for accessing a popular web site like BBC iPlayer from abroad, and this causes the application to be routed through the proxy too.  It may work depending on the configuration of the proxy server (many just pass data like this straight through), but an extra step has been added to the route.


Cisco Discovery Protocol – CDP

The Cisco Discovery Protocol is of use in most networked environments, but it is probably most useful in a Cisco switched environment. One of the difficulties in troubleshooting an environment of mostly switches is that they usually provide a lot less information than routers, and switched networks are not always clearly segmented, which can make them confusing. CDP provides a useful tool for identifying and detecting Cisco switches and routers and for building up a picture of the complete network topology of all the Cisco hardware.

The Cisco Discovery Protocol is a data link layer multicast protocol which uses a standard multicast MAC address. Its advertisements are sent as SNAP-encapsulated frames (Cisco protocol type 0x2000) and as such have no layer 3 component. CDP must be enabled on a Cisco router or switch before it can be used to detect other Cisco devices on its interfaces. Note that CDP is not like extended network management protocols such as SNMP: it can only detect directly attached devices. Even if two routers were plugged into a single Cisco switch, CDP would allow each router to see the switch but not the other router. The routers appear to the switch as CDP neighbours; the reason is that the protocol is not designed to forward frames between devices.
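The neighbour table is read with show cdp neighbors. The output below is an illustrative sample rather than a capture from a real device, but the columns are the ones IOS prints:

```
Router# show cdp neighbors
Capability Codes: R - Router, S - Switch, I - IGMP ...
Device ID     Local Intrfce   Holdtme   Capability   Platform   Port ID
SW-CORE-1     Gig 0/1         155       S I          WS-C2960   Gig 1/0/24
```

Note that only the directly attached switch appears, exactly as described above; show cdp neighbors detail adds software versions and management addresses for each entry.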

Although this sounds of limited use, the information that CDP provides about directly attached devices is extensive and extremely useful for troubleshooting network problems. You can, for example, use the protocol to completely document whole wiring closets, detailing the exact models and versions of routers and switches. The information is extensive, and you'll soon know even the software versions of the Catalyst switches that are connected and on which port/interface. Other protocols will normally go little further than identifying whether the hardware is a router or a switch.

There is a host of commands which can be used to provide extensive information on Cisco hardware. The most used are the show and clear commands, which can bring up general information on a device plus details of the software version, installed modules, port configuration and any error messages. The show commands can also be used to display the full routing and switching tables, plus much more information.

The show config command is used to display the configuration information stored in the NVRAM of the device. In the Catalyst 5000 series of switches, for example, the entire configuration is stored automatically in this memory; the command write terminal can also be used to display the same information. The clear config command will erase the configuration information on a specified module. The displayed information can be categorised into the following broad categories:

  • Switch management parameters
  • IP configuration
  • Virtual LANs
  • Bridging parameters

There is further information which can be accessed, including extensive module-specific detail. You can also use the show cam command, which refers to the content-addressable memory on the switches, a special memory with a very low access time. The information stored here is normally the bridging and switching tables; it needs a fast response time because it can be updated very quickly in fast, dynamic environments.
