If you need to check the physical-layer status and quality of digital circuits, there are two tools you are likely to need. The first is a breakout box, which can be used to determine the connection integrity between the DTE and the DCE. This box (also known as a ‘BOB’) has two external connectors, allowing it to be inserted in line between the DTE and the DCE.
The box supplies status information on the digital circuit and will also display any data being transmitted at the time. The device will normally display real-time status information about data, clocking, mark/space states and line activity. On most breakout boxes this information is displayed using status LEDs. It is normally quite a compact, battery-powered device, which increases its portability. The box contains buffered electrical circuitry which does not interfere with the actual line signal during testing, and most are also capable of verifying electrical resistance and line voltage.
Breakout boxes are focused primarily on physical problems on a network, although errors can occur for other reasons. If you’re looking at other issues, perhaps an IP address conflict or an application error, then you should look to other tools.
The second piece of equipment you’ll need has a variety of names but is most commonly known as a BERT. This stands for bit-error-rate tester and it is a much more sophisticated piece of kit, which can effectively measure the error rate in a digital signal. This error rate can be measured either end to end or on a portion of the circuit in order to isolate individual faults. The bit error rate is often measured during installation and commissioning so that it can be used as a baseline.
A BERT can also measure error rates using the variety of different bit patterns it can generate. You can use this information to diagnose timing or noise issues on the circuit. It does take time, but it allows a line to be monitored accurately, and traffic and error analysis can be performed.
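The core calculation a BERT performs is simple: compare a known transmitted pattern against the received bits and count the mismatches. A minimal sketch in Python, where the test pattern and the flipped bit are invented for illustration:

```python
# Sketch of a bit-error-rate calculation as a BERT performs it:
# compare a known transmitted pattern against the received bits.

def bit_error_rate(sent: bytes, received: bytes) -> float:
    """Return the fraction of bits that differ between two equal-length streams."""
    if len(sent) != len(received):
        raise ValueError("streams must be the same length")
    errors = sum(bin(a ^ b).count("1") for a, b in zip(sent, received))
    return errors / (len(sent) * 8)

# A 4-byte alternating pattern with a single flipped bit: BER = 1/32
sent = bytes([0b10101010] * 4)
received = bytes([0b10101010, 0b10101011, 0b10101010, 0b10101010])
print(bit_error_rate(sent, received))  # 0.03125
```

A real tester does this continuously against pseudo-random patterns over hours or days; the principle is the same.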
There are several popular mechanisms which can allocate an IP address to a computer or network device, but DHCP is probably the most advanced method in common use. It’s a robust and efficient protocol which uses UDP as its transport mechanism. It exists largely as a result of the shortcomings of its predecessor, BOOTP, over which DHCP offers a host of enhancements.
One of the biggest improvements was that DHCP allows the inclusion of a client’s subnet mask, which lets clients be configured much more easily, particularly on large networks with many subnets. The other addition was the ability to lease IP addresses for a specified period. In large networks this is crucial for several reasons, but primarily it made managing IP addresses much simpler and ensured that addresses weren’t locked to computers which weren’t even switched on. It enabled a network administrator to work with a much smaller pool of usable IP addresses than the number of ‘potential’ network-enabled clients.
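The relationship between the subnet mask and the size of the usable pool can be shown with Python's standard `ipaddress` module; the network used is an arbitrary example:

```python
# Illustrative: the subnet mask a DHCP server hands out determines how
# many usable addresses the pool can contain. Example network only.
import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")
print(net.netmask)             # 255.255.255.0

# Exclude the network and broadcast addresses from the usable pool
usable = net.num_addresses - 2
print(usable)                  # 254
```

With leasing, those 254 addresses can serve far more than 254 occasional clients, because expired leases are returned to the pool.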
Although DHCP is a huge improvement on the IP address allocation systems that preceded it, there are still some situations which can cause problems. It’s worth considering issues with DHCP if your clients are having network connectivity problems.
Typically, DHCP-related problems are to do with configuration or connectivity. One of the simplest issues is that DHCP hasn’t actually been enabled on a client; although most later versions of Windows use DHCP by default, some older versions need the IP addressing mechanism configured first.
A DHCP server will often be on a different network segment than the client it is attempting to configure, and any connectivity issues between the two segments will be made worse if IP addresses are not being allocated by the DHCP server. Remember the protocol uses UDP as its transport mechanism, which has no delivery checking. A client will also broadcast in an attempt to find the nearest reachable DHCP server, which can cause issues if these broadcasts are not forwarded by some of the network hardware.
If you do have problems on larger networks with DHCP broadcasts not being forwarded, you should configure IP helper addresses on the routers within the network to solve this. Things can also get confusing with multiple DHCP servers on different networks, so make sure you know which server is supposed to serve which segment.
If you’ve ruled out connectivity problems, make sure the DHCP server is configured properly and has plenty of available IP addresses to allocate. Sometimes the problem is not that the DHCP server can’t be contacted, but that it has simply run out of addresses to allocate to clients.
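The leasing behaviour described above can be sketched with a toy lease table: expired leases are reclaimed, so switched-off machines don't hold addresses indefinitely, and exhaustion only bites when every lease is live at once. All addresses, MACs and times here are invented:

```python
# A toy DHCP-style lease table, illustrating pool exhaustion and
# reclaiming expired leases. Not a real DHCP implementation.
import time

class LeaseTable:
    def __init__(self, pool, lease_seconds):
        self.free = list(pool)
        self.lease_seconds = lease_seconds
        self.leases = {}  # mac -> (ip, expiry time)

    def allocate(self, mac, now=None):
        now = time.time() if now is None else now
        # Reclaim any expired leases first
        for m, (ip, expiry) in list(self.leases.items()):
            if expiry <= now:
                self.free.append(ip)
                del self.leases[m]
        if mac in self.leases:           # existing client: renew its lease
            ip, _ = self.leases[mac]
        elif self.free:
            ip = self.free.pop(0)
        else:
            return None                  # pool exhausted
        self.leases[mac] = (ip, now + self.lease_seconds)
        return ip
```

With a two-address pool and 60-second leases, a third client is refused at t=30 but succeeds at t=61, once the earlier leases have expired and been reclaimed.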
The Internet Control Message Protocol has a wide variety of message types, many of which are extremely useful for managing and troubleshooting an IP network. Most of us are familiar with the command ‘ping’, which uses at its core the ICMP echo and echo reply messages. Another well-used ICMP-based tool is traceroute, which maps the path to a host by manipulating TTLs (time to live) and observing the responses hop by hop.
There are, however, quite a number of ICMP messages beyond the ones used by these tools, and most are extremely useful for anyone managing a complex IP-based network. Here are some of the most useful:
ICMP unreachable – an IP host will produce an ICMP unreachable message if there is no valid path to the requested host, network, protocol or port. There are several of these messages, which are often grouped together for convenience. They are often generated by routers, for example when local access lists are restricting access to the requested resource. You should be careful about allowing these messages to propagate beyond your own network, as they reveal internal addressing information. The messages can be blocked using the no ip unreachables command on Cisco hardware.
ICMP redirects – a router will produce a redirect message if it receives a packet on an interface and the best route to the destination is back out of that same interface, meaning a better first hop exists on the local segment. These can be used to update hosts’ local routing tables with the correct information. There is an interesting protocol from Cisco which can be configured to help in these situations, called the Hot Standby Router Protocol (HSRP).
ICMP mask request and reply – some hosts do not have their subnet mask statically defined and have no other way of learning it. Such a host can send an ICMP mask request, which a router responds to with an ICMP mask reply.
ICMP source quench – these messages provide an important function within ICMP: congestion control on the network. If a network device such as a router detects congestion, perhaps because of dropped packets or buffer overflows on its interfaces, it will send an ICMP source quench message to the source of those packets.
ICMP fragmentation needed – this type of message is sent when an IP packet is received which is larger than the MTU of the LAN or WAN environment, yet has the DF (do not fragment) flag set. The packet cannot be forwarded, but the ICMP message at least passes some information about the issue back to the sender. There are actually quite a few scenarios where the DF bit is set automatically by devices as the packet travels.
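To make the messages above concrete, here is what the packet behind 'ping' looks like on the wire: an 8-byte header (type 8 for echo request) followed by a payload, protected by the standard Internet checksum. Sending it would need a raw socket and root privileges, so this sketch only constructs the bytes:

```python
# Build an ICMP echo request (the packet behind 'ping').
# Construction only; no raw socket is opened.
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones-complement sum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    # Type 8 (echo request), code 0, checksum initially zero
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

packet = icmp_echo_request(ident=0x1234, seq=1)
# Recomputing the checksum over a correctly checksummed packet yields 0
print(internet_checksum(packet))  # 0
```

The echo reply differs only in having type 0; the receiver echoes the identifier, sequence number and payload back, which is how ping matches replies to requests.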
Troubleshooting applications which operate across WANs (wide area networks) can be especially difficult. When a PC has the potential to communicate both with servers and with other workstations across different IP networks and subnets, there will almost always be complications. The PC could be using various methods and protocols to communicate, and there’s inevitably the difficulty of identifying whether your own network hardware or the far-end network is causing the problem.
It’s important, before looking for complex solutions, to start with the basics. A computer that needs to communicate across a wide area network will normally be configured to route its traffic through a default gateway. Although it sounds unlikely, misconfiguration of this very basic setting is quite often the root cause of a network connectivity issue. A mistake in the basic IP configuration of the workstation will break most connectivity, and remember it may be an external change that has caused the problem: if a router or gateway is removed or updated, then any static configurations must be updated too.
The error could be a simple incorrect IP address for the default gateway, or more commonly something like an incorrect subnet mask. Always remember that many operating systems require a reboot to enforce changes in IP configuration, another simple mistake to make, especially when diagnosing remotely. If you can talk to a user or have command access on the workstation, the first checks should be basic connectivity ones. If a workstation can ping hosts on the same subnet but not on other subnets, your next step should be to check connectivity to the default gateway.
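The gateway/mask sanity check described above can be automated: the default gateway must lie inside the subnet implied by the client's address and mask, or off-subnet traffic has nowhere to go. A sketch using Python's `ipaddress` module, with example addresses:

```python
# Check that a default gateway is actually reachable from a client's
# configured address and subnet mask. Addresses are examples only.
import ipaddress

def gateway_reachable(ip: str, mask: str, gateway: str) -> bool:
    network = ipaddress.ip_network(f"{ip}/{mask}", strict=False)
    return ipaddress.ip_address(gateway) in network

# Correct configuration: gateway is on the client's subnet
print(gateway_reachable("192.168.1.20", "255.255.255.0", "192.168.1.1"))    # True

# A wrong subnet mask (/28) puts the gateway outside the client's subnet
print(gateway_reachable("192.168.1.20", "255.255.255.240", "192.168.1.1"))  # False
```

The second case is exactly the "incorrect subnet mask" failure: pings to local hosts in 192.168.1.16–31 still work, but everything off-subnet silently fails.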
Other errors can simply be down to incorrect name resolution. If all network configuration and operation is fine, then it may just be that the machine is being directed to the wrong address. Static name resolution information can unfortunately be stored in all sorts of places, some of them difficult to locate. There are files on the host PC which should be checked: both the hosts and lmhosts files can cause connectivity issues if they contain an incorrect address. Many devices also cache addresses to improve speed, and a stale cache can have the same effect.
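Because a stale hosts-file entry silently overrides DNS for that name, it's worth being able to inspect those entries quickly. A small illustrative parser (the file content here is a made-up example):

```python
# Parse hosts-file entries into a name -> IP map, ignoring comments
# and blank lines. The sample content is invented for illustration.

def parse_hosts(text: str) -> dict:
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            table[name.lower()] = ip           # hostnames are case-insensitive
    return table

example = """
127.0.0.1   localhost
10.1.2.3    appserver appserver.corp.example   # possibly stale entry
"""
print(parse_hosts(example)["appserver"])  # 10.1.2.3
```

If `appserver` was long ago moved off 10.1.2.3, every connection attempt from this machine will quietly go to the wrong host while DNS looks perfectly healthy.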
Checking IP connectivity might not tell the whole story, though, particularly if you’re trying to troubleshoot an application. Many applications have their own connectivity and configuration information pre-installed, in configuration files which may contain incorrect connection details. These can potentially override things like the default gateway and cause issues. Many applications work through web browsers and can also pick up connection details from them; users will often specify a proxy in their browser settings for various reasons, perhaps for accessing a geo-restricted website from abroad, and this would also cause the application’s traffic to be routed through the proxy. It may work fine depending on the configuration of the proxy server (many simply pass such traffic through), but it does add an extra step to the route.
The Cisco Discovery Protocol (CDP) is of use in most networked environments, but it is probably most useful in a Cisco switched environment. This is because one of the difficulties of troubleshooting an environment built mostly from switches is that they usually provide much less information than routers. Switched networks can also pose further difficulties because they are not always clearly segmented, which adds to the confusion. CDP provides a useful tool for identifying and detecting Cisco switches and routers and for building up a picture of the complete topology of the Cisco hardware on the network.
The Cisco Discovery Protocol is a data-link multicast protocol which uses a standard multicast MAC address. The frames are SNAP-encapsulated (protocol type 0x2000) and as such have no layer-3 component. CDP must be enabled on a Cisco router or switch before the device can detect other Cisco devices on its interfaces. It should be noted that CDP is not like extended network management protocols such as SNMP: it can only detect directly attached devices. Even if two routers were plugged into a single Cisco switch, CDP would allow each router to see the switch but not the other router. The routers appear to the switch as CDP neighbours; the primary reason for this is that the protocol is not designed to forward frames between devices.
Although this sounds of limited use, the information CDP provides about directly attached devices is extensive and extremely useful for troubleshooting network problems. You can, for example, use the protocol to completely map whole wiring closets, detailing the exact models and software versions of the routers and switches. The information is detailed enough that you’ll soon know even which Catalyst switches are connected and on which port/interface. Other protocols will normally go little further than identifying whether the hardware is a router or a switch.
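The detail CDP carries is encoded as TLVs (type-length-value fields) after a short header. A sketch of decoding them, where the payload is hand-built for illustration rather than captured from a real device:

```python
# Decode the TLVs inside a CDP payload (the part after the SNAP header):
# each TLV is type (2 bytes), length (2 bytes, including the 4-byte TLV
# header), then the value. The sample payload below is hand-built.
import struct

TLV_NAMES = {0x0001: "Device-ID", 0x0003: "Port-ID", 0x0006: "Platform"}

def parse_cdp_tlvs(payload: bytes) -> dict:
    tlvs = {}
    offset = 4  # skip version (1 byte), TTL (1 byte), checksum (2 bytes)
    while offset + 4 <= len(payload):
        tlv_type, tlv_len = struct.unpack_from("!HH", payload, offset)
        value = payload[offset + 4 : offset + tlv_len]
        tlvs[TLV_NAMES.get(tlv_type, hex(tlv_type))] = value.decode("ascii", "replace")
        offset += tlv_len
    return tlvs

def make_tlv(tlv_type: int, value: bytes) -> bytes:
    return struct.pack("!HH", tlv_type, 4 + len(value)) + value

# Header (version 2, TTL 180, zeroed checksum) plus two sample TLVs
sample = b"\x02\xb4\x00\x00" + make_tlv(0x0001, b"switch1") + make_tlv(0x0003, b"Gi0/1")
print(parse_cdp_tlvs(sample))  # {'Device-ID': 'switch1', 'Port-ID': 'Gi0/1'}
```

The Device-ID and Port-ID TLVs are exactly what lets you map which switch port each neighbour hangs off, as described above.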
There is a host of commands which can be used to provide extensive information on Cisco hardware. The most used are the show and clear commands, which can bring up general information on the devices plus details of software versions, installed modules, port configuration and any error messages. The show command can also display the full routing and switching tables, plus much more.
The show config command displays the configuration information stored in the NVRAM of the device. In the Catalyst 5000 series of switches, for example, the entire configuration is stored automatically in this memory; the command write terminal can also be used to display the same information. To clear the configuration, the clear config command will erase the information for a specified module. The displayed information falls into the following broad categories:
- Switch management parameters
- IP configuration
- Virtual LANs
- Bridging parameters
There is further information which can be accessed, including extensive module-specific detail. You can also use the show cam command, which refers to the content-addressable memory (CAM) on the switches, special memory with a very low access time. The information stored here is normally the bridging and switching tables; it needs a fast response time because it can be updated very rapidly in dynamic environments.
It’s often the more complex IP routing protocols which are the most difficult to diagnose and troubleshoot, and BGP (Border Gateway Protocol) is no exception. Like many such protocols, BGP has a fairly specialised application: it is used specifically for routing between different routing domains and autonomous systems. You’ll normally find BGP in advanced or specialised network environments such as Internet Service Providers (ISPs) or global corporate networks with advanced routing requirements.
Another situation where you may encounter BGP is when companies have merged; it is ideally suited to bringing disparate networks together without starting from scratch. At the end of the last century there was a huge number of such corporate mergers, and huge networks needed to be joined together. BGP provided the optimum solution in many of these situations and is still commonly used today. Many a network administrator will have spent hours trying to untangle the complexities of a long-established BGP routing table.
When troubleshooting issues that may be related to BGP, it’s important to understand the fundamental characteristics of the protocol. Without knowing these core concepts it can be very difficult to analyse a complex and specialised protocol like BGP:
Neighbour Formation : Like many routing protocols, BGP creates a neighbour adjacency between routers before it starts exchanging routing information. These neighbours, though, are almost always defined statically rather than discovered dynamically by the protocol. Their formation normally amounts to the setting up of a simple TCP connection (on port 179); the command for listing BGP neighbours and their status is as follows:
show ip bgp neighbors
Most of the important data is found in the first few lines of the output of this show command. The most useful parameter for troubleshooting is the BGP state, which moves through Idle, Connect, Active, OpenSent and OpenConfirm before reaching Established as the neighbour relationship forms. Remember this process can take a little time to complete, especially compared to some modern routing protocols, so give it time; however, if the state settles on anything other than Established then the formation has not completed successfully.
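The state progression can be sketched as a tiny state machine. This is a simplified model for illustration: the real BGP FSM has more events, timers and error handling, but the legal forward path to Established is the one shown:

```python
# Simplified model of the BGP neighbour state machine. Real BGP has
# more events and timers; this encodes only the legal transitions.
TRANSITIONS = {
    "Idle":        ["Connect"],
    "Connect":     ["Active", "OpenSent", "Idle"],
    "Active":      ["Connect", "OpenSent", "Idle"],
    "OpenSent":    ["OpenConfirm", "Idle"],
    "OpenConfirm": ["Established", "Idle"],
    "Established": ["Idle"],
}

def advance(state: str, target: str) -> str:
    """Move to target if the FSM allows it; any illegal event drops to Idle."""
    return target if target in TRANSITIONS.get(state, []) else "Idle"

state = "Idle"
for step in ["Connect", "OpenSent", "OpenConfirm", "Established"]:
    state = advance(state, step)
print(state)  # Established
```

Note that any error drops the session back to Idle, which is why a neighbour stuck cycling between Idle and Active in the show output usually means the TCP connection itself is failing.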
Another relevant piece of information is the BGP version. There are quite a few different versions of BGP in the wild, and two peers will always settle on the lowest common version when establishing a connection. If you see the version constantly changing and switching, it usually indicates some fundamental configuration problem.
External BGP : This is usually run between two different autonomous systems, and the neighbours are normally required to be directly connected. The neighbours are established by specifying the address of the link; for example, you could configure them by naming the addresses of a serial link between routers in the two networks. You may have to use the ebgp-multihop parameter where the interfaces are not directly connected, for instance when peering between loopback addresses. To ensure a loop-free topology, BGP will ignore any route whose AS path already contains its own autonomous system (AS) number.
Moving voice and video over any data network can be a challenge; if you’ve ever sat through a stuttering video conference you’ll appreciate how important it is to do it well. Fortunately it’s becoming more of a reality nowadays with efficient compression techniques, high-bandwidth networks and of course QoS. Compression is probably the most important factor, as it radically reduces the volume of traffic that needs to be transmitted over network links.
Genuine multimedia networks are rarer than you might think, and indeed some of the best, which have integrated ATM (Asynchronous Transfer Mode), can be extremely fast. One of the most important factors, apart from the increased speed ATM can bring to both WAN and LAN networks, is its support for QoS. This guarantees certain bandwidth and performance levels for multimedia connections, although capacity has to be reserved for this to be effective. Not only can administrators reserve their multimedia requirements, but they can also set up virtual circuits to separate their video conference, multimedia or voice calls. It should be noted that this requires either ATM-compatible applications, ATM adapters fitted to the workstations, or software that emulates ATM on standard network interface cards.
Whatever technology is incorporated, the main issue with adding multimedia applications to a network is simply the traffic load. It’s pointless letting users have access to real-time multimedia applications without a very fast data network and some sort of QoS guarantee. The network also needs the capacity to provide these guarantees without affecting the rest of the normal data traffic. Capacity planning is crucial, and until it is carried out you will have little idea how even a modest set of multimedia applications will affect your network speeds.
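A back-of-envelope version of the capacity planning described above: check whether a link can carry a number of concurrent streams while still leaving headroom for ordinary data traffic. All the figures here are illustrative assumptions, not recommendations:

```python
# Rough capacity check: does multimedia traffic leave enough of the
# link free for ordinary data? All figures are illustrative.

def link_ok(link_mbps: float, streams: int, stream_mbps: float,
            data_headroom: float = 0.5) -> bool:
    """True if multimedia traffic leaves at least data_headroom of the link free."""
    multimedia = streams * stream_mbps
    return multimedia <= link_mbps * (1 - data_headroom)

# 20 conference streams at 1.5 Mbit/s each on a 100 Mbit/s link,
# reserving half the link for normal data traffic:
print(link_ok(100, 20, 1.5))  # True  (30 Mbit/s <= 50 Mbit/s)
print(link_ok(100, 40, 1.5))  # False (60 Mbit/s >  50 Mbit/s)
```

Real planning also has to account for burstiness, protocol overhead and the worst-case number of simultaneous sessions, but even this crude check shows how quickly a modest deployment consumes a link.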
For any long-term use there are a variety of techniques which can radically boost network performance for multimedia. Core switched networks which connect to existing departmental hubs are a start, and these can be upgraded to provide switched services to different departments as required. Any videoconferencing equipment should be connected directly to high-performance switches; on no account should the traffic be allowed to broadcast throughout the network via a simple hub or repeater. Most high-performance networks now try to standardise on Gigabit Ethernet, although this can often be slowed by legacy network hardware. Iso-Ethernet is an emerging technology which can carry voice and standard 10 Mbit/s Ethernet on the same cable.
There are a variety of methods and technologies which will provide quality of service over existing networks if you don’t have access to ATM. In fact it is often easier to use one of these methods, as ATM requires modification and support in all applications, transports and software. One such technology, RSVP (Resource Reservation Protocol), has been developed by the IETF (Internet Engineering Task Force); it allows any IP host to request a specified amount of bandwidth on the network directly.
Microsoft’s component technology started off being known as COM, the Component Object Model. To build software components that can communicate with each other both locally and across networks, you need a standard framework. ActiveX provides that standard, together with an associated technology called DCOM (Distributed Component Object Model) which allows the components to communicate across the internet and other networks.
ActiveX has been with us for many years and has been updated consistently. It is promoted as a tool for building both dynamic web pages and sophisticated distributed object applications. Every time a client visits a web site which uses ActiveX components, a version check is performed and the latest controls are downloaded to the browser. These are not deleted when the browser navigates away but are kept and updated; this is necessary to keep the browser’s controls as up to date as possible. Obviously, some browsers have configuration or security options enabled which prevent this.
You may have seen ActiveX controls run in all sorts of situations, perhaps driving graphical banners or multimedia applications on a web page. ActiveX controls can also run complicated real-time information systems on pages: temperature readings, financial tickers or simply news feeds actively updating themselves. ActiveX controls have the facility to directly access data servers using protocols much more sophisticated than anything standard HTTP can handle. It’s an important concept to understand in the development of distributed object computing.
ActiveX looks at a browser in a different way than you might imagine: it simply considers it a container which can hold and display ActiveX controls. Many of the internet’s most impressive interactive objects are in fact ActiveX controls, and they represent a way for developers to push beyond the static, simple pages supported by the Hypertext Transfer Protocol. One of the downsides is obviously cross-compatibility, which relies heavily on the client browser’s ability to download the required components locally and keep them up to date. When a user first visits an ActiveX site there can be a significant delay while core components are downloaded; however, updates and additional installations are usually performed very quickly in the background.
The controls have the additional advantage of combining well with the user interfaces of most common platforms. The simplest fit, of course, is with traditional Windows systems, as ActiveX is based on the COM technology already incorporated within MS Windows. Microsoft has been very proactive in supporting cross-platform use, though, and the Active Platform technology has also been extended to other operating systems such as Macintosh, Unix and Linux. There is also a scripting technology called Active Scripting which can be used on all these platforms to control and integrate ActiveX objects from the server or the client.
Microsoft has attempted to prevent technological conflicts by also allowing ActiveX components to interact and work alongside its main competitor, Java, to some extent. Remember, though, that Java applets run, for security reasons, within their own virtual machine on the user’s computer. ActiveX requires greater access to the operating system and so cannot operate within this sandbox; although calls can be made between components, their interaction is limited to some extent.
For many of us a network is either our little home setup, consisting of perhaps a modem, a wireless access point and a few connected devices, or that huge global network, the internet. Whatever the size, all networks need to allow communication between the various devices connected to them. Just as human beings need languages to communicate, so do networks, only in this context we call them ‘protocols’.
The internet is built primarily on the TCP/IP protocols, which are used to transport information between ‘web clients’ and ‘web servers’. On its own, though, that is not enough to deliver the media-rich content we see in our web browsers, and a host of secondary protocols sit above the main transport protocol; the most important one, which enables the world wide web, is called HTTP.
This provides a method for web browsers to access content stored on web servers, which is created using HTML (Hypertext Markup Language). HTML documents contain text, graphics and video but also hyperlinks to other locations on the world wide web. HTTP is responsible for processing these links and enabling the client/server communication which results.
Without HTTP the world wide web simply wouldn’t exist, and if you want to see its origins, search for RFC 1945, where you’ll find HTTP defined as an application-level protocol designed with the lightness and speed necessary for distributed, collaborative, hypermedia information systems. It is a stateless, generic, object-oriented protocol which can be used for a huge variety of tasks. Crucially, it works across platforms, which means it doesn’t matter whether your computer runs Linux, Windows or Mac: you can still access web content via HTTP.
So what happens? When someone types a web address into the address field of their browser, the browser attempts to locate the address on the network it is connected to. This can be a local address, or more commonly the browser will look out onto the internet for the designated web server. HTTP is the command-and-control protocol which enables communication between the client and the web server, allowing commands to be passed between the two. HTML is the formatting language of the web pages which are transferred when you access a web site.
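What the browser actually sends is plain text. A sketch of building a minimal HTTP GET request and parsing a response status line, without touching the network (the host name is just an example):

```python
# Build a minimal HTTP/1.1 GET request and parse a response status line.
# Construction and parsing only; no network connection is made.

def build_get(host: str, path: str = "/") -> bytes:
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n").encode("ascii")

def parse_status_line(response: bytes) -> int:
    """Return the numeric status code from the first line of a response."""
    first_line = response.split(b"\r\n", 1)[0]   # e.g. b"HTTP/1.1 200 OK"
    return int(first_line.split()[1])

request = build_get("www.example.com")
print(request.decode().splitlines()[0])              # GET / HTTP/1.1
print(parse_status_line(b"HTTP/1.1 200 OK\r\n"))     # 200
```

Everything else the browser does, headers, cookies, content negotiation, is layered on top of this same request/response exchange.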
The HTTP connection between the client and server can be secured in two main ways: using Secure HTTP (S-HTTP) or Secure Sockets Layer (SSL), both of which allow the transmitted information to be encrypted and thus protected. It should be noted, though, that the vast majority of web traffic is standard HTTP, transmitted insecurely in clear text, which is why so many people use proxies and VPNs to protect their connections.