Tips on Debugging with telnet

It’s rather old school and can seem time consuming in a world of automated and visual debugging tools, but sometimes the older tools are extremely effective. Telnet has long since fallen out of use as a proper terminal emulator simply because it is so insecure, yet it remains extremely useful as a troubleshooting tool because it engages at such a simple level. It should be noted that it can be used reasonably safely over a VPN connection, which will at least encrypt the traffic.


One of the biggest benefits of HTTP being an ASCII protocol is that it can be debugged using the telnet program. A binary protocol would be much harder to debug, as the binary data would have to be translated into a human-readable format. Debugging with telnet is done by establishing a telnet connection to the port that the proxy server is running on.

On UNIX, the port number can be specified as a second parameter to the telnet program:

telnet hostname port

For example, let’s say the proxy server’s hostname is step, and it is listening to port 8080. To establish a telnet session, type this at the UNIX shell prompt:

telnet step 8080

The telnet program will attempt to connect to the proxy server; you will first see a line of the form

Trying 10.0.0.1...

(where the address shown is the server’s IP address).

If the server is up and running without problems, you will immediately get the connection, and telnet will display
Connected to servername.com
Escape character is '^]'.

After that, any characters you type will be forwarded to the server, and the server’s response will be displayed on your terminal. You will need to type in a legitimate HTTP request.

In short, the request consists of the actual request line containing the method, URL, and the protocol version; the header section; and a single empty line terminating the header section.
With POST and PUT requests, the empty line is followed by the request body. This section contains the HTML form field values, the file that is being uploaded, or other data that is being posted to the server. A minimal example is shown below.
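For instance, a form post sent through the proxy might look like the following (the host and field values here are purely illustrative; the Content-Length header tells the server how many bytes of body follow the empty line):

POST http://www.example.com/login HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 27

username=alice&password=***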

The simplest HTTP request is one that has just the request line and no header section. Remember the empty line at the end! That is, press RETURN twice after typing in the request line.

GET http://www.google.com/index.html HTTP/1.0

(remember to hit RETURN twice)

Note that HTTP/1.1 requires a Host header, so if you use HTTP/1.1 in the request line you will also need to type Host: www.google.com before the empty line.

The response will come back, something like:

HTTP/1.1 200 OK
Server: Google-Enterprise/3.0
Date: Mon, 30 Jun 1997 22:37:25 GMT
Content-Type: text/html
Connection: close

This can then be used for further troubleshooting: simply type individual requests into the terminal and you can see the direct response. You should, of course, have permission to perform these functions on the server you are using. Typically this technique is used to troubleshoot connections, but it can equally form part of a remote attack, and attackers using this method will often route through something like a proxy or online IP changer in order to hide their true location.
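If you want to script the same check rather than type it interactively, a few lines of Python reproduce the telnet session exactly. This is only a sketch, reusing the proxy name step and port 8080 assumed in the example above:

import socket

# Connect to the proxy exactly as telnet would
with socket.create_connection(("step", 8080)) as sock:
    # The blank line (\r\n\r\n) terminates the header section
    request = b"GET http://www.google.com/index.html HTTP/1.0\r\n\r\n"
    sock.sendall(request)
    # Read until the server closes the connection
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.decode(errors="replace"))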

Components of a Web Proxy Cache

There are several important components in the standard cache architecture of a typical web proxy server. In order to implement a fully functional web proxy cache, an architecture requires:

  • A storage mechanism for the cache data.
  • A mapping mechanism to establish the relationship between URLs and their respective cached copies.
  • A format for the cached object content and its metadata.

These components may vary from implementation to implementation, and certain architectures can do away with some of them.

Storage

The main web cache storage type is persistent disk storage. However, it is common to have a combination of disk and in-memory caches, so that frequently accessed documents remain in the main memory of the proxy server and don’t have to be constantly reread from the disk.

The disk storage may be deployed in different ways:

  • The disk may be used as a raw partition, with the proxy performing all space management, data addressing, and lookup-related tasks.
  • The cache may be in a single file, or a few large files, which contain an internal structure capable of storing any number of cached documents. The proxy again deals with the issues of space management and addressing.
  • The filesystem provided by the operating system may be used to create a hierarchical structure (a directory tree); data is then stored in filesystem files and addressed by filesystem paths. The operating system does the work of locating the file(s).
  • An object database may be used. The database may internally use the disk as a raw partition and perform all space management tasks, or it may create a single file, or a set of files, and build its own “filesystem” within those files.

Mapping

In order to cache a document, a mapping has to be established such that, given the URL, the cached document can be looked up fast. The mapping may be a straightforward mapping to a filesystem path, although this can also be stored internally as a static mapping.

Typically a proxy will store any resource that is accessed frequently. For example, in many UK proxies the BBC website is extremely popular, so it’s essential that it is cached. Even satellite offices can then reach the BBC through the company’s internal network: the page is requested and cached by the proxy, which is based in the UK, so instead of the BBC being blocked outside the UK it is still accessible.

Indeed, many large multinational corporations sometimes inadvertently offer these facilities. Employees with the technical know-how can connect their remote access clients to specific servers in order to reach normally blocked resources. They might connect through the British proxy to access the BBC, and then switch to a French proxy to reach a media site like M6 Replay, which only allows French IP addresses.

It is also important to remember that direct mappings are normally reversible: if you have the correct cache file name, you can use it to reproduce the unique URL for that document. Many applications make use of this and include some sort of mapping function based on hashes, as in the sketch below.
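As a rough illustration of a hash-based mapping, this sketch derives a cache path from a URL; the two-level directory split is a common trick to stop any single directory growing too large, and the details here are assumptions rather than any particular proxy’s scheme. Note that a pure hash mapping like this one is not reversible, which is why real caches also record the original URL in the cached object’s metadata:

import hashlib

def cache_path(url: str) -> str:
    # Hash the URL to get a fixed-length, filesystem-safe name
    digest = hashlib.sha1(url.encode()).hexdigest()
    # Split into subdirectories so no single directory gets huge
    return f"cache/{digest[:2]}/{digest[2:4]}/{digest[4:]}"

print(cache_path("http://www.bbc.co.uk/index.html"))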

Intrusion Detection – Post Attack Phase

If you’re protecting a network, understanding the options and various phases of an attack can be crucial. When you detect an intrusion, it’s important to quickly assess what stage the attack is at and what developments are likely. Whether it’s a skilled attacker or some opportunist kid with a little technical skill makes a huge difference to the possible outcomes.

Even regular, normal traffic in suspicious or unusual situations can indicate a possible intrusion. If you suddenly notice TCP three-way handshakes completing on TCP ports 20 and 21 on a home web server, but you know that you do not run an FTP server at home, it is safe to assume that something suspicious is going on.

Post-Attack Phase

After an attacker has successfully penetrated a host on your network, the further actions he takes mostly follow no predictable pattern. Obviously the danger is much greater if the attacker is both skilled and plans to further exploit your network, while many will simply deface a few pages or use the host as a VPN to watch US or UK TV channels from abroad.

This phase is where the attacker carries out his plan and makes use of any information resources as he sees fit. Some of the different options available to the attacker at this point include the following:

  • Covering tracks
  • Penetrating deeper into network infrastructure
  • Using the host to attack other networks
  • Gathering, manipulating, or destroying data
  • Handing over the host to a friend or hacker group
  • Walking or running away

If the attacker is even somewhat skilled, he is likely to attempt to cover his tracks. There are several methods; most involve the removal of evidence and the replacement of system files with modified versions. The replaced versions of system files are designed to hide the presence of the intruder. On a Linux box, netstat might be modified to hide a Trojan listening on a particular port. Attackers can also cover their tracks by destroying system or security log files that would alert an administrator to their presence. Removing logs can also disable an HIDS that relies on them to detect malicious activity. There are automated scripts available that can perform all these actions with a single command; these scripts are commonly referred to as rootkits.

Externally facing servers in large network topologies usually contain very little in the way of useful data for the attacker. Application logic and data are usually stored in subsequent tiers separated by firewalls. The attacker may use the compromised host to cycle through the first three attack phases again and penetrate deeper into the system infrastructure. Another possibility for the black hat is to use the host as an attack or scanning box. When skilled hackers want to penetrate a high-profile network, they often compromise a chain of hosts to hide their tracks. It’s not unusual for attackers to relay their connections through multiple servers, bouncing through, say, a Russian, a Czech, and a German proxy before attacking the target network.

The most obvious possibilities for the attacker are to gather, manipulate, or destroy data. The attacker may steal credit card numbers and then format the server. The cracker could subtract money from a transactional database. The possibilities are endless. Sometimes the attacker’s motivation is solely to intrude into vulnerable hosts to see whether he can. Skilled hackers take pride in pulling off complicated hacks and may have no desire to cause damage. An attacker may turn the compromised system over to a friend to play with, or to a hacker group he belongs to. Or the cracker may realize that he has gotten in over his head by attacking a highly visible host, such as a military or major financial institution’s system, and want to walk away from it, praying he isn’t later discovered.

Cryptographic Methods and Authentication

Cryptography used to be the domain of mathematicians and spies, but now it plays an important part in all our lives. It is essential if we want to continue using the internet for commerce and financial transactions of any sort. Basic web traffic travels in the clear across a myriad of shared network equipment, which means virtually anything can be intercepted and read unless we protect it in some way; the most accessible option is encryption.

Cryptographic methods are used by software to keep computing and data resources safe, effectively shielding them with a secret code, or ‘key’. It’s not always necessary, of course; the requirements depend heavily on what the connection is being used for. For example, there’s usually little point encrypting already compressed streams like audio and video; in normal circumstances no one is at risk if they’re intercepted while streaming UK TV abroad from a computer. The key holder is the only individual who has access to the secured information, and might share the key with others to permit them access too. In a digital world, and especially in the envisaged world of electronic commerce, the demand for security backed by cryptographic systems is paramount. In the future, a person’s first approach to most electronic devices, and especially to networked electronic devices, will involve cryptography working in the background. Whenever security is necessary, the first point of the human-to-machine interface is authentication.

The electronic system should know with whom it’s dealing. But just how is this done?  Strong authentication is based on three characteristics which a user needs to have:

  • What the user knows.
  • What the user has.
  • Who the user is.

Today, a typical authentication routine is to present what you have, a token like an identification card, and then to reveal what you know, a PIN or password. Before long, the ‘who you are’ kind of identification will become common, first on computers and then on a whole range of products, progressively phasing out the need for us to memorize numbers and passwords. Indeed, many entertainment websites are watching developments in this field with a view to incorporating identity checks in a seamless way, for example to allow access to UK TV licence fee payers who want to watch the BBC from Ireland.

But where does the cryptography come into the equation? At the simplest level, you might offer a system, such as a PC terminal, a password. The system checks your password and you are logged on. In this example of quite weak authentication, cryptographic methods are used to protect the password stored inside the system. If your password were held in clear text rather than cipher text, then anyone with an aptitude for programming could soon find it inside the system and begin to usurp your identity, gaining access to all of the information and system resources you’re permitted to use.
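The usual modern way to avoid storing the clear-text password is to keep only a salted one-way hash of it, so even someone who reads the password store learns nothing directly useful. A minimal sketch of the idea follows; a real system would use a deliberately slow function such as PBKDF2, bcrypt, or scrypt rather than plain SHA-256:

import hashlib, hmac, os

def store_password(password: str) -> tuple[bytes, bytes]:
    # A random salt ensures identical passwords hash differently
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.sha256(salt + password.encode()).digest()
    # Constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("s3cret")
print(check_password("s3cret", salt, digest))   # True
print(check_password("guess", salt, digest))    # False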

Cryptography does its best to defend the secret, which is your password. Now consider a system that requires stronger authentication. The automated teller machine is a good example. To perform transactions at an ATM terminal you need an ATM card and a PIN. Inside the terminal, information is encrypted, and the information the terminal transmits to the bank is encrypted too. Security is better, but not perfect, since the system will authenticate an individual who isn’t the owner of the card and PIN. The person might be a relative using your card with permission, or a thief who has just relieved you of your wallet and is about to relieve you of your life savings. Time, you might think, for stronger authentication. Systems currently in field tests require an additional attribute based on who you are to strengthen the authentication procedure.

Using SSL for Email and Internet Protocols

If you want to increase the security of your email messaging, there are several routes you can take. First of all, you should look at digitally signing and encrypting all your email messages. There are several applications that can do this, or you could move your email to the cloud and look at a server-based email system. Most of the major suppliers of web-based secure mail are extremely secure with regard to interception and endpoint security, but you obviously have to trust your email to a third party.

Many companies won’t be happy outsourcing their messaging like this, as it’s often the most crucial part of a company’s digital communications. So what are the options if you want to operate a secure and digitally advanced email messaging service within your corporation? The first place to investigate is strengthening the security of authentication and data transmission. There are plenty of RFCs (Requests for Comments) on these subjects, particularly relating to email and its related protocols.

Here are a few of the RFC-based protocols related to email:

  • Post Office Protocol 3 (POP3) – the simple but effective protocol used to retrieve email messages from an inbox on a dedicated email server.
  • Internet Message Access Protocol 4 (IMAP4) – usually used to retrieve any messages stored on an email server, including those in inboxes and other message boxes such as drafts, sent items and public folders.
  • Simple Mail Transfer Protocol (SMTP) – the popular and ubiquitous email protocol, generally just used to send email messages to recipients.
  • Network News Transfer Protocol (NNTP) – not specifically an email protocol, but it can be used as one if required! It’s normally used to post and download newsgroup messages from news servers. Perhaps slightly outdated now, but a seriously efficient protocol that can be used for distributing email.

The big security issue with all these protocols, however, is that by default most of them send their messages in plain text. You can counteract this by encrypting at the client level; the easiest method is simply to use a VPN. Many people already use a VPN to access things like various media channels – read this post about BBC iPlayer VPN, which is not exclusively about security, more about bypassing region blocks.

Remember that when an email message is transmitted in clear text it can be intercepted at various levels; anyone with a decent network sniffer and access to the data path could read the message content. The solution is in some ways obvious and implied in the title of this post: implement SSL. Using this extra security layer you can protect all the simple RFC-based email protocols, and better still, it slots in simply alongside standard email systems like Exchange.

It works and is easy to implement. Note that when SSL is enabled, the server accepts connections on the SSL port rather than the standard port the email protocol normally uses (for example 995 instead of 110 for POP3, and 993 instead of 143 for IMAP). If you have only one or two users who need a high level of email security, then using a virtual private network might be sufficient. There are many sophisticated services that come with support – for instance this BBC Live VPN is based in Prague and has some high-level security experts who work in support.
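To see what this looks like in practice, here is a short sketch that sends a message over an SSL-protected SMTP session using Python’s standard library (SMTPS on port 465); the server name, addresses and credentials are placeholders you would replace with your own:

import smtplib, ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "SSL-protected test"
msg.set_content("This message travelled over an encrypted channel.")

context = ssl.create_default_context()
# Port 465 is the registered port for SMTP over SSL
with smtplib.SMTP_SSL("mail.example.com", 465, context=context) as server:
    server.login("alice@example.com", "app-password")
    server.send_message(msg)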


Digital Certificate Authorities

A digital certificate essentially associates specific identity information with a public key, linking the two together in a trusted package. It is important to realise that the certificate is always signed by the certificate issuer, so we can easily verify that the information has not been changed or modified in any way. It is more difficult, however, to determine whether the identity and the public key have been associated together correctly.

Remember there are no real restrictions on who can issue certificates; indeed, with OpenSSL virtually anyone with a little technical experience can, and there are a large number of certificate programming APIs which get easier to use every day. These should be distinguished, however, from trusted certificate issuers, known as certificate authorities or CAs. The role of the certificate authority is to accept and process requests for certificates which come from organisations and individual entities. Larger organisations who require high levels of security, like the BBC for their VPNs for example, would use only the tier one certificate authorities, who provide a high level of assurance. The CA must authenticate the information received from these entities, issue the certificates, and maintain a repository of information about both the certificates and the subjects.
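The client-side half of this trust model is easy to demonstrate. The sketch below opens a TLS connection with Python’s standard library, which validates the server’s certificate against the system’s trusted CA roots, then prints who the certificate was issued to and by (the hostname is just an example):

import socket, ssl

host = "www.bbc.co.uk"  # example host only
context = ssl.create_default_context()  # loads the system's trusted CA roots

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        # Each field arrives as a tuple of ((name, value),) pairs
        print("Subject:", dict(item[0] for item in cert["subject"]))
        print("Issuer: ", dict(item[0] for item in cert["issuer"]))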

Here’s a brief summary of the roles and responsibilities of a Certificate Authority.

  • Certificate Enrollment Process – simply the process which details how an entity must apply for a digital certificate.
  • Authentication of Subject – the Certificate Authority must ensure that the applicant is indeed who they claim to be. There are different levels of this, directly linked to the level of assurance the CA gives to the certificate.
  • Certificate Generation – once the identity has been assured, the certificate must be generated and released. Generating the certificate is relatively simple, but the CA must ensure that the process and delivery mechanism are completely secure; any issues at this stage can compromise the security and validity of the certificate.
  • Certificate Distribution – as mentioned above, the certificates and associated private keys must be distributed to the applicant.
  • Revocation of Certificate – when there is a question over the integrity of a released certificate, there must be a defined procedure to revoke it. This should be done securely, and the revoked certificate should be added to a list of invalid certificates.

The Certificate Authority will usually publish the standards and processes that underpin the above activities in something called a CPS (certification practice statement). In secure applications these would be included in the security benchmarks, for example for authentication of something like an IP cloaker or VPN system. A CPS is not meant to be a long, legalese-filled document but a practical and readable guide detailing the exact processes and the underpinning activities; even so, although designed to be straightforward, they are usually fairly lengthy documents, often many pages long.

The X Window System

The X Window System, commonly abbreviated to just X, is a client/server application which allows multiple clients to use the same display managed by a server. The server in this instance manages the display, mouse and keyboard. The client is any remote application running on a different host (or on the same one). In most configurations the standard protocol used is TCP, because it’s the one most commonly understood by both client and host. Twenty years ago, though, many other protocols were used by X – DECnet was a typical choice in large Unix and Ultrix environments.

Sometimes the X server could be a dedicated piece of hardware (an X terminal), although this is becoming less common. Most of the time the client and server run on the same host, with inbound connections from remote clients allowed when required. In some specialised support environments you’ll even find the processes running on a workstation purely to support X access. In a sense, where the application is installed is irrelevant; what matters is that a reliable bi-directional protocol is available for communication. To increase security, particularly in certain sensitive environments, access may be restricted and controlled via an online IP changer.

X running over something like UDP is never going to work very well; the ideal, as mentioned above, is something like TCP. The protocol is carried as a stream of 8-bit bytes across the connection between the client and server. On a Unix system where the client and server are on the same host, the system will default to Unix domain sockets instead, because these are more efficient on the same host and minimise the IP processing involved in the communication stream.

It is when multiple connections are in use that communication gets more complex. This is not unusual; X is often used to allow multiple connections to an application running on a Unix system. Sometimes these applications have specific requirements to allow full functionality, for example special graphics commands which affect the screen. It is important to remember, though, that all X does is give these clients access to the keyboard, display and mouse. Although it might seem similar, it is not the same as a remote access protocol like Telnet, which allows logging in to a remote host but no direct control of the hardware.

The X server normally provides access to important applications, so it will usually be bootstrapped at start-up. The server creates a TCP endpoint and does a passive open on a port, by default 6000 + n, where n is the number of the display; incrementing n allows multiple displays to run concurrently. Sometimes configuration files will be needed to support particular applications, especially ones with graphical requirements like the BBC iPlayer; these must be downloaded before the session is established. On a Unix server the endpoint will usually be a domain socket, likewise incremented by the display number.
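The display-to-port arithmetic is simple enough to check from a script. This sketch assumes display numbers 0 to 2 and a plain TCP probe; it just tries to connect to port 6000 + n and reports which displays have a listening X server:

import socket

# X servers listen on TCP port 6000 + display number
for display in range(3):
    port = 6000 + display
    try:
        with socket.create_connection(("localhost", port), timeout=1):
            print(f"display :{display} -> port {port}: server listening")
    except OSError:
        print(f"display :{display} -> port {port}: no server")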

Some Notes on Firewall Implementations

For anyone considering implementing a new firewall on a network, here are a few notes to help you through the process. Before you get started, there’s a very important first step that you should always take when implementing on medium to large networks: establish a firewall change control board, consisting of users, system administrators and technical managers from throughout your organisation. Failing to establish proper change control and implementation processes can be very dangerous with a firewall. A badly thought-out rule could create huge security issues or operational problems – that ‘deny all’ rule might look safe, but if it ends up blocking mission-critical applications you won’t be popular.

Hardware firewalls are impressively secure and not too expensive. The very first reported type of network firewall was the packet filter. Establishing a firewall for your infrastructure is an excellent way to provide some basic security for your services.

Firewalls frequently include functionality to hide the real address of a computer that is linked to the network. You can install most firewall products on a customised network and have their protection almost immediately. A host-based firewall might be a daemon or service that is part of the operating system, or an agent application such as endpoint security or protection software; these often arrive in conjunction with an antivirus program. Otherwise, a software firewall can be set up on the computer in your house that has the internet connection, or you may add an extra software component to your existing firewall. If you are primarily responsible for your company’s firewall, it’s best to have some secure remote access in case of emergencies. Be careful with the rules which allow that access, though; you don’t want to let users stream UK TV through a VPN service.

If the connection is controlled by NetworkManager, you can also use nm-connection-editor to modify the zone. Once the secure connection is established, you can launch vncviewer so that it uses the secure tunnel; the same applies to SSH connections, especially if you allow connections from anywhere on the internet on the normal SSH port (22).

Once you have a server to test from and the targets you want to evaluate, you can continue with this guide. You may also want to locate a package repository closer to your server. By applying a DNS forwarder you can override the DNS servers supplied by your ISP and use fast, higher-performance servers instead; repeat this for each domain that you would like the server to manage, as it is necessary for a standard server. Note that many servers block dynamic DNS hosts, so you may find your server gets rejected. At this point you have a basic mail server!

The application shouldn’t be confused with malware behaviour. Some antivirus software may ask you to switch off the firewall and disable the antivirus in order to install it. Before you install any software, the first important step is to check the configuration of your computer against the system prerequisites of the program. Update the local package index and install the software if it’s not already present.

The configuration of your computer must match the demands of the software to be installed. If you are pleased with your present configuration and have tested that it’s functional once you restart the service, you can safely enable the service. The only configuration that actually impacts the functionality of the service will probably be the port definition, where you determine the port number and protocol you wish to open.

If all your interfaces can best be managed by a single zone, it’s probably simpler to just pick the best default zone and use that for your configuration. You can then modify your network interfaces to automatically choose the right zones. Whenever you transition an interface to a different zone, be conscious that you are most likely modifying which services are operational. Opening up an entire interface to incoming packets may not be restrictive enough, and you may want more control over what to allow and what to reject.


ATM – Routing IP

There are many different network architectures, many of which have been around for years. One of them is ATM (Asynchronous Transfer Mode), which in the 1990s was considered to be the ultimate network architecture. The belief was that in the future every computer or device would be fitted with an ATM network adapter rather than the alternatives, which at the time were Token Ring or Ethernet.

The reality has turned out somewhat different, of course, and it’s unlikely that we will ever see extensive use of ATM-based networks. However, many corporations installed ATM backbone switches for one important reason: they can handle network traffic at extremely high speeds.

There is a difficulty with using these switches, though: ATM is a virtual circuit based, cell-based networking scheme which is primarily connection-oriented. Compare this with Ethernet, which powers the majority of commercial networks and is a connectionless, frame-based networking scheme. To integrate the two systems you need to use one of the overlays which have been developed to allow Ethernet to be connected to ATM backbones and switches.

These normally work by using layer 3 routing algorithms to discover the initial routes through the network; layer 2 virtual circuits can then be established through the ATM fabric, delivering data without actually passing through the routers. This technique is normally known as ‘shortcut routing’, although you will often hear it described by other terms. If you need more detailed information, check your usual networking references or search online using terms like ‘IP routing over ATM’.

There are difficulties with these improvised techniques; one of the most common is knowing when to route and when to switch the traffic at layer 2. Long data transmissions, such as Netflix video streams, should be switched, as that is the more efficient method of transport, while for shorter transmissions the router is normally the better option.

Layer 3 traffic will not, under normal circumstances, identify the length of the transmission, so it may or may not be suitable for switching. There are ways of estimating the length of a transmission, normally by inspecting the content of the datagrams themselves. Many different methods of identifying a flow have been developed by different networking companies; some are no longer commonly used, but you will find others being developed or utilised extensively in various environments. See the references below for some examples that can be researched for more information.
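As a toy illustration of the route-versus-switch decision, the sketch below counts bytes per flow and flags a flow as worth switching once it crosses a threshold. The five-tuple key and the 1 MB threshold are assumptions for illustration, not any vendor’s actual heuristic:

from collections import defaultdict

SWITCH_THRESHOLD = 1_000_000  # bytes; assumed cut-off for 'long' flows
flow_bytes = defaultdict(int)

def observe_packet(src, dst, sport, dport, proto, size):
    # Identify the flow by its classic five-tuple
    key = (src, dst, sport, dport, proto)
    flow_bytes[key] += size
    if flow_bytes[key] > SWITCH_THRESHOLD:
        return "switch"   # long flow: cut through at layer 2
    return "route"        # short flow: keep routing at layer 3

print(observe_packet("10.0.0.1", "10.0.0.2", 40000, 443, "tcp", 1500))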

References:
3Com Fast IP
Ipsilon IP Switching
Switch IP Address – Watch UK TV in USA


Creating a Proxy Hierarchy

Although most networks and organisations would benefit from implementing proxy servers in their environment, it can be a difficult task to decide on the location and hierarchy of these servers. It is an important decision, and there are some questions which can aid the decision-making process.

Flat or Hierarchical Proxy Structure?

This decision will largely depend on both the size and the geographical dispersion of the network. The two main options are a standard single flat level of proxies, or something larger: a hierarchy based on a tree structure, much like the Active Directory forest structure used in complex Windows environments.

Indeed, in such environments it may be suitable to mirror the Active Directory design with the proxy server structure. Many technical staff use the following rule of thumb: each branch office requires an individual proxy server. Again this may map onto an AD design where each office has its own Organisational Unit (OU). This has other benefits, because you can apply custom security and configuration options based on that OU, for example allowing the sales OU more access through the proxy than administrative teams.

This of course needs to be carefully planned in line with whatever physical infrastructure is in place. You cannot install heavy-duty proxy hardware at the end of a small ISDN line, for example. The proxy servers should be installed in line with both the organisational configuration and the network infrastructure. Larger organisations can arrange these along broader geographical lines, for example a separate hierarchy in each country, so you would have a top-level UK proxy server above regional proxies further down the organisation.

If the organisation is fairly centralised, you’ll almost certainly find a single level of proxies a better solution. It is much easier to manage, and latency is minimised because requests don’t tunnel through multiple layers of servers and networks.

Single Proxies or Proxy Arrays?

A standard rule of thumb for proxy sizing is something like one proxy for every 3,000 potential users. This is of course only an estimate, and can vary widely depending on the users and their geographic spread. It doesn’t mean the proxies need to be independent of one another; they can indeed be installed together in an array.

For example, you can set up four proxies in parallel to support 12,000 users using the Cache Array Routing Protocol (CARP). These could be set up across different boundaries, even across a flat proxy structure. Remember that the servers will have different IP address ranges if they sit across national borders; make sure that your proxy with the Irish IP address can speak to all the other European sites, and ideally most proxies should be multihomed to help with routing.

Using a caching array allows multiple physical proxies to be combined into a single logical device. This is normally a good idea, as it increases the effective cache size and eliminates redundancy between individual proxy caches; a sketch of the idea behind CARP’s member selection follows below.
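CARP’s core idea is that every client (or downstream proxy) can compute, from the URL alone, which array member should hold a given object, so the caches never need to query each other. Here is a minimal sketch of that deterministic selection using rendezvous-style hashing; the member names are illustrative, and real CARP additionally weights members by capacity:

import hashlib

members = ["proxy1.example.net", "proxy2.example.net",
           "proxy3.example.net", "proxy4.example.net"]  # illustrative names

def pick_member(url: str) -> str:
    # Score every member against the URL; the highest score wins.
    # Every client computes the same answer, so no inter-cache queries occur.
    def score(member: str) -> int:
        return int.from_bytes(
            hashlib.md5((member + url).encode()).digest(), "big")
    return max(members, key=score)

print(pick_member("http://www.bbc.co.uk/news"))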

It’s normally best to run proxies in parallel whenever the opportunity exists. Sometimes this won’t be possible, and specific network configurations may rule the method out, meaning you’ll have to run proxies individually in a flat mode. Even if you have to split proxy resources into individual machines, be careful about creating network bottlenecks: individual proxies should not all point at a single gateway or machine, as even an overworked firewall can have a significant impact on a network’s performance and latency.