How to Network your Home Using HPNA

Using Home Phone Line Connections to Connect Your Computers – commonly referred to as HPNA in the networking world.

This is a guide to setting up a computer network using the existing telephone wiring that is running through your home’s walls.
You can connect your computers to a network using your phone jacks and existing wiring –
In other words, instead of running CAT-5 cable to Ethernet cards on your computers, you can use a special HPNA network card and a regular telephone cable plugged into the phone jack.

Not all your PCs need to be connected using this technology; in fact, this is usually reserved for “remote” computers that are in a room too far away or too inconvenient to reach with an Ethernet cable or a wireless signal.
As with an Ethernet network, each computer that connects to the network using a telephone cable and phone jack will need its own network card installed – this will most likely be a PCI card, and it must be HPNA capable.

If you have purchased an HPNA network card (it can also be a USB device) and need help installing it, you can use the instructions for installing an Ethernet network card (it’s basically the same technique).

It’s very rare for a computer to already have an HPNA network card in it, so if you see a place where you can plug in a telephone-size cable, it is most likely a dial-up modem and will not be capable of connecting your computer to the network. Also keep in mind that simply installing an HPNA card and plugging a telephone cable into the phone jack – and possibly doing the same with another computer in another room – is not necessarily going to create a computer network.

You are still going to need some kind of “central” device like a Router that “talks” to each computer and decides which traffic goes to which pc.  You’ll need one of these devices too if you want to route or prioritize traffic from your network onto the internet.  You can also use it to set up VPNs or even connect to residential proxies for hiding your IP address when you’re online.

The benefits of building a network using HPNA are the convenience of being able to plug your computer into the telephone jack in the wall and have it connect to your network, to SHARE THE SAME INTERNET CONNECTION as your other PCs, and to SHARE FILES between the computers.
All this without running your own CAT-5 cabling or purchasing wireless equipment.

DO keep in mind, though, that there are some drawbacks to using this type of networking technology.

First off, you must understand that the quality of the communication between your computers using HPNA will be entirely dependent on the quality of your home’s phone wiring.
Also remember telephone wiring was not originally intended for this type of data transfer. So if your house is old or you have doubts about the quality of either the wiring or the phone jacks themselves, then you must keep that in mind if and when you are troubleshooting any issues that may arise during and after the install.
As a side note: sometimes the phone company doesn’t have your house wired correctly to allow this type of communication – you may want to contact them or your ISP for more information.

What You’ll need –

  • Telephone cabling – the minimum amount necessary to reach from your computer to the phone jack; you want this cable as short as possible
  • HPNA-enabled Network Interface Card/Adapter – one for each computer that connects through the phone wiring
  • Router/Switch – one to act as the central connector of all the computers; it must be HPNA capable

* It’s best to have at least Windows Vista or a newer version of Windows on your computer * No need to worry about requirements for CPU speed, RAM, or hard drive space.

The easiest way to create a computer network is to connect all your PCs to a central switch or router, which they will all use to communicate with one another. Look at the diagram below to see an example of what the finished network will look like.
So let’s get started –
First, decide whether you are going to use a Router or a Switch. You could use a Hub in this same type of setup, but with today’s prices you’re much better off sticking with either a Switch or a Router. But which one? Both a switch and a router act as the “decision maker” on the network, deciding which data goes to which PC. That said, a Router is much smarter, does a much better job of this, and makes the number of decisions you have to make much simpler. A switch also will not be able to send out an HPNA signal for the PCs to detect, so if you use a Switch there will be some additional settings you will have to configure on your computers so they all have addresses on the network.

Also, if you are planning on sharing one internet connection, then you are going to need a Router so that it can dish out IP addresses to each of the computers. So it’s probably best to just go with the Router right off the bat, because it will be more useful down the road as your network expands and becomes more complex. For example, if you need some online privacy and want to try out a rotating proxies trial, then it’s much, much easier if you have a decent router to route your traffic through.

Not all routers do Home Phoneline Networking: FOR HPNA, YOUR ROUTER MUST BE CAPABLE OF TRANSMITTING AN HPNA SIGNAL down your telephone wiring.

USE A ROUTER – YOU’LL BE HAPPIER in the end.

Now let’s take a look at your computers –
Each computer on the network needs to have some type of network card installed, and if your computer is fairly new there’s a good chance it already has an Ethernet card or a dial-up modem installed.
You can check by simply looking at the back of your PC for a port that looks like a phone jack – if it is the same size as a telephone cable end, then it is either a dial-up modem or an HPNA card.
Test this by trying to insert your telephone cable into it – if the cable is too small for the port, then it is an Ethernet card that is installed.
The other way to tell what type of hardware is installed on your PC is to check in your Device Manager.

Alright, so now we’re ready to plug some things in –
If you have a DSL internet connection, it may be necessary for you to also use a Filter or Splitter to separate the different signals traveling through the phone wiring.
If this is the case and your ISP has provided your hardware, I would strongly recommend contacting them for more specific directions on how to correctly use the filter. Basically, it is the filter’s job to keep the two signals (telephone and internet) from interfering with one another.

Connect one end of your telephone cable to the HPNA adapter you’ve purchased and the other end to the filter or directly into the phone jack.

Now, assuming you have the HPNA adapter installed correctly and the computer has the drivers installed properly, the PC should be able to detect the HPNA signal being transmitted by the router.

Then verify that each of your computers is able to “pull an IP address” from the router. The router should give each PC its own unique address, and they will all start with the same numbers.
All IP addresses are in the form of 4 octets – xxx.xxx.xxx.xxx. Some examples of an IP address are: 192.168.x.x, 172.16.x.x, or 10.x.x.x.
The x’s are numbers that can change from one network to another, so don’t worry as long as the beginnings are the same.
Think of the IP address as a street address – all your PCs need to live on the same street in the same town.
There will also be a Subnet Mask – don’t worry about this number too much either; your router will handle this for you.
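To make the “same street” idea concrete, here is a small sketch using Python’s standard ipaddress module. The network and addresses below are made-up examples for illustration, not values your particular router will necessarily hand out.

```python
import ipaddress

# Hypothetical addresses handed out by a home router (illustrative values)
network = ipaddress.ip_network("192.168.1.0/24")  # a common home-router default
pcs = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]

def same_street(addresses, net):
    """Return True if every PC 'lives on the same street' (same subnet)."""
    return all(ipaddress.ip_address(a) in net for a in addresses)

print(same_street(pcs, network))           # True: all on the same network
print(same_street(["10.0.0.5"], network))  # False: a different 'street'
```

If one machine reports an address that fails this kind of check, it usually didn’t get its address from the router at all.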

Finally, be sure all your computers are set up to automatically get an IP address from your router – this is called DHCP, and it is the default setting on most operating systems.

Causes of Network Latency – TCP Proxies

On any sort of internet connection, speed is of course important.  The fastest response will be direct connections when the two computers are physically connected.  Of course the internet enables connections over thousands of miles but obviously this involves many more hops in the route.   If you start to use proxy servers or VPNs then you add an additional hop in the route which will almost always slow down your connection even more.

Overall speed is obviously one issue, but depending on what you’re doing online there’s another that may be even more important. Latency can cause a real problem with all sorts of online applications, especially for people playing games online. If there is a long delay on the connection, playing any sort of online action game can be virtually impossible. If you don’t believe me, try playing Call of Duty over a satellite internet connection! If you combine that with a slow VPN or even rotating residential proxies, then you can seriously impact the performance of your link.

TCP Hybla is an experimental TCP enhancement developed with the principal objective of combating the performance decline triggered by the prolonged RTTs typical of satellite links. It consists of a set of procedures that includes, among others:

  • an enhancement of the standard congestion control algorithm (to grant long RTT connections the exact same instantaneous segment transmission rate of a comparatively fast reference connection).
  • the compulsory adoption of the SACK policy.
  • the use of timestamps.
  • the adoption of Hoe’s channel bandwidth estimate.
  • the application and compulsory use of packet spacing methods (also known as “pacing”).

TCP Hybla includes only sender-side modifications of TCP. As such, it is fully compatible with standard receivers.
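As a rough illustration of the first bullet, here is a sketch of Hybla’s congestion-window rules as described in the original proposal: the per-ACK increase is scaled by a normalized RTT ρ = RTT/RTT₀ (RTT₀ is typically 25 ms), so long-RTT connections grow their window as fast per unit of time as a fast reference connection. Treat this as a simplified model, not a drop-in implementation.

```python
RTT0 = 0.025  # reference round-trip time (25 ms) from the Hybla proposal

def rho(rtt):
    # Normalized RTT; clamped at 1 so short-RTT flows behave like standard TCP
    return max(rtt / RTT0, 1.0)

def cwnd_increment(cwnd, rtt, slow_start):
    """Per-ACK congestion window increase under Hybla's rules."""
    r = rho(rtt)
    if slow_start:
        return 2 ** r - 1      # standard TCP adds 1 segment per ACK here
    return r * r / cwnd        # standard TCP adds 1/cwnd per ACK here

# At the reference RTT, Hybla behaves exactly like standard TCP...
print(cwnd_increment(10, 0.025, True))   # 1.0
# ...while a 500 ms satellite link (rho = 20) gets a far larger per-ACK boost,
# compensating for the far fewer ACKs it receives per unit of time.
print(cwnd_increment(10, 0.5, False))    # 40.0
```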

For a full description of goals and characteristics of TCP Hybla refer to the publications section.

Performance.
TCP Hybla offers a pretty impressive efficiency improvement on long-RTT satellite links compared with TCP NewReno. It may be adopted either as an end-to-end protocol, or as the satellite-segment transport protocol in PEP (Performance Enhancing Proxy) designs based upon the TCP splitting principle. It can also be used as the transport protocol in DTN architectures. See the performance section for further information.

Linux implementation.
Starting from kernel 2.6.13, Hybla has been included in the official Linux kernel. This implementation, based on Linux’s module mechanism, does not include the last two Hybla components: Hoe’s channel bandwidth estimate and packet spacing. Both are needed to benefit fully from Hybla’s performance improvement. To this end, it is enough to patch the official kernel with the MultiTCP package, downloadable from the downloads section.

NS-2 implementation.
A TCP Hybla module has been developed for the widely adopted NS-2 simulation platform. This element can be downloaded from the downloads section. At the time of writing it has yet to be tested extensively, but it should work on all platforms – even alongside proxies designed for Instagram, for instance.

TATPA testbed.
TATPA stands for Testbed for Advanced Transport Protocols and Architecture. It is a testbed developed by Hybla’s publishers to carry out comparative efficiency assessments of new TCP variants (including Hybla) and alternative architectures, such as PEPs (Performance Enhancing Proxies) and DTNs (Delay Tolerant Networks). It can be fully managed remotely through a powerful web interface. For further information see the TATPA testbed and publications sections.

Projects.
TCP Hybla development is supported by the European Satellite Network of Excellence (SatNEx) project.

Using Proxy Servers for Privacy and Profit

Everyone online has a digital address. It’s nothing complicated, but it’s usually directly linked to your internet protocol address – IP address for short. Although this number does vary over time, at the moment you connect to the internet it’s completely unique to you and you alone. This number can be used to track your online activity to a surprising degree – it is the primary way that careless online criminals are tracked down. There are of course huge privacy issues in having this address recorded, and technology exists to hide your location from the websites you visit and from your ISP. At the heart of it are tools like VPNs and proxy servers, which we’ll cover briefly in this article.

Most of us have probably made use of a proxy server in some environment. If you use the internet at the workplace, college, or university, there’s a strong probability that you connect through a proxy server. They are frequently deployed to regulate access in and out of a company network from the world wide web. The idea is that, as opposed to examining a wide variety of individual connections, the proxy channels web traffic through a single point, which makes it less complicated to monitor and check for things like viruses.

To enforce use of the proxy server, network administrators apply a range of techniques. On the client computer, use of the proxy can be made mandatory by hard-coding the settings into the browser. For instance, Internet Explorer could be deployed with the settings pre-configured using something like the Internet Explorer Administration Kit. The settings can also be delivered to the client through Group Policy from Active Directory.

In addition, the system administrator may deploy configurations on the external firewall to control access across the network perimeter. This is achieved by defining the IP address of the proxy and ensuring all other addresses are blocked from leaving the network. If there are numerous proxies, or they are set up in an array, then multiple addresses would be configured. This stops anyone bypassing the client-side settings, or installing an additional browser and trying to access the internet directly. If the address isn’t specified, the access is blocked.

Proxies on the internet are normally used in a slightly different context, although the functionality is much the same. They are mostly used to provide a level of privacy and hide your internet address from web servers. The idea is that rather than seeing the IP address of your client, the web server (and your ISP) will only observe the IP address of the proxy. This also allows you to circumvent some of the many geo-blocks that exist on the web: route your connection through a proxy located in the right country and you can bypass the block. Countless people use these to view things like the BBC from Spain or anywhere else outside the UK, though it can be challenging to find a UK proxy fast enough to stream video, at least without paying for one. This has become more complicated over the last handful of years, though, as websites have begun to detect the use of proxies and block them automatically. Nowadays you normally need a VPN to watch video from one of the primary media sites, because proxies won’t work any longer.

There are other common uses of proxies online, and they usually come down to making money. Countless individuals and companies use proxies to create more electronic identities. Instead of being restricted to one connection, you can effectively make use of hundreds at the same time. This is especially useful for performing online research, posting adverts, internet marketing, and even using e-commerce sites to buy stock to resell. A common use is automated software that buys things like sneakers or tickets to popular concerts: normally you’ll only be allowed one purchase attempt, but using proxies you can make many. This is why people employ software to speed up these methods and buy the best rotating proxies to facilitate these purchases. There are many individuals making thousands from simple software programs, a few of the best rotating proxy networks, and an ordinary home computer, buying and selling limited-availability items such as these.

Don’t Expect Internet Privacy by Default

When the internet was first conceived back in the 1980s – the date varies depending on your definition; I prefer 1983, when TCP/IP was adopted by ARPANET – there was little thought about security. Whatever the date, the lack of security is a matter of fact. It was a form of communication allowing disparate devices and people to talk to each other, and no one expected it to end up where it is. Unfortunately, to allow cross-compatibility, compromises had to be made, and the security of your data is one of them.

However, there are methods to add some security: web sites try with SSL, and the end user can help too. Most users who have security concerns or have experienced cyber crime will have come across VPN software. A VPN is a virtual private network, which encrypts your data as it travels across the internet. These come in all shapes and sizes, from basic personal security products to advanced residential IP rotating proxies like these ones.

For lots of people there is a pervasive image of a VPN user: something like a young person in a hoodie, hunched over a laptop in a coffee shop. They’re possibly attempting to hack into some government computers and are on the run from the authorities. Because a VPN conceals your geographic location and your web traffic, there’s a common idea that the individual is up to no good and certainly has something to hide.

The reality is a very long way from this picture, and although many hackers do indeed use VPNs, so do an awful lot of ordinary individuals. Most large corporations have been using VPNs for decades to support inbound connections from remote users. If a salesman needs access to the product database on the company’s network, it’s much simpler to let him connect through the internet and view the latest version. This is far more secure than travelling around with DVDs, and it obviously ensures that he or she has the most recent version.

If you make any normal connection over the internet, all your web traffic is pretty much viewable – anyone so minded can intercept and read it. If you’re logging in and connecting to a secured share, this would include usernames and passwords. So, to protect these connections, you would commonly install a VPN client on the laptop and make certain it’s used to encrypt the connection back to the company network. It is completely legitimate and, indeed, intelligent business practice.

Regular home users make use of VPNs for very similar reasons. Essentially the internet is insecure, with minimal provision for security built in by default. Sure, you can access secure sites through SSL when you have to enter a credit card or payment information. However, this is the exception, not the rule: most websites are not secure, and the vast majority of information flies across the wires in clear text.

In addition to the general insecurity of the web, there’s the additional issue of privacy. Your surfing data is easily available from a variety of sources. For a start, your ISP holds a complete record of everything you do online, and depending on where you live this can be routinely and easily accessed. Using a VPN stops this, transforming your internet activity into an encrypted stream that is unreadable without your permission. Are VPNs used by cyber criminals and terrorists? Sure, but also by millions of people who think that what they do online shouldn’t be part of the public record.

VPN systems are becoming more and more sophisticated, driven by demand and the risk of detection. There are all sorts of variations, including different setups and ports to dodge detection. You can even have them use home-based IP addresses through specific residential IP providers.

In a large number of countries VPNs are not illegal at all, but simply a standard business and personal security tool. In some countries, however, this is not the case, and you can get into trouble if caught using one. Countries that ban the use of VPNs include China, Iraq, Belarus and Turkey. Other countries allow only authorized services, which usually means those that can be compromised if required. People still use VPNs in most of these nations – indeed, in Turkey almost all expats use one to watch things like British and American TV online. It’s actually quite difficult to detect a VPN in use, but that doesn’t stop it technically being illegal in those locations.

Source: http://www.onlineanonymity.org/proxies/residential-vpn-ip-address/

Network Attacks : Denial of Service

A few years ago, being a network administrator was a much easier job. Sure, you probably had fewer resources, and technology choices for running applications were limited, but there was one crucial difference – the internet. As soon as even one computer on your network is connected to the internet, the game changes completely: you have internet access from the network, but it works the other way around too. Any server or PC in your network is potentially accessible from the internet as well.

A Denial of Service (DoS) attack is any kind of attack that interferes with the function of a computer so that legitimate users can no longer access it. DoS attacks are possible against the majority of network equipment, including switches, servers, firewalls, remote access computers, and just about every other network resource. A DoS attack may be specific to a service, as in an FTP attack, or target an entire machine. The kinds of DoS attack are diverse and wide-ranging, but they can be split into two distinct categories that relate to intrusion detection: resource depletion and malicious packet attacks.

Malicious packet DoS attacks work by transmitting abnormal traffic to a host in order to cause the service, or the host itself, to crash. Crafted packet DoS attacks occur whenever software is not properly coded to deal with abnormal or unusual traffic. Commonly, out-of-spec traffic can cause software to react unexpectedly and crash. Attackers can utilize crafted packet DoS attacks to bring down IDSs, even Snort. A specially crafted tiny ICMP packet with a size of 1 was discovered to cause Snort v1.8.3 to core dump. That version of Snort did not correctly define the minimum ICMP header size, which made the DoS possible.

One of the reasons denial of service attacks are so common is that the attacker is extremely difficult to trace. The most obvious factor is that most of the attacks don’t require valid responses to complete, so it’s very hard to identify the source. On top of that are the huge number of anonymous resources available online, including VPNs, anonymous proxies, and providers of residential IP address networks like these.

Along with out-of-spec traffic, malicious packets can contain payloads that cause a system to crash. A packet’s payload is taken as input to a service; if the input is not properly checked, the program can be DoSed. The Microsoft FTP DoS attack demonstrates the comprehensive assortment of DoS attacks available to black hats in the wild. The first step in the attack is to start a legitimate FTP connection. The attacker would then issue a command with a wildcard pattern (such as * or ?). Within the FTP server, a function which processes wildcard sequences in FTP commands does not allocate adequate memory when executing pattern matching, so it is feasible for the attacker’s command containing a wildcard pattern to cause the FTP service to crash. This DoS, and the Snort ICMP DoS, are two instances of the many thousands of potential DoS attacks out there.

The other method of denying service is resource depletion. A resource depletion DoS attack functions by flooding a service with so much normal traffic that legitimate users cannot access the service. An attacker overrunning a service with typical traffic can exhaust finite resources such as bandwidth, memory, and processor cycles.

A classic memory-exhaustion DoS is a SYN flood, which abuses the TCP three-way handshake. The handshake starts with the client sending a TCP SYN packet. The host then sends a SYN ACK in response, and the handshake is concluded when the client responds with an ACK. If the host does not receive the final ACK, it sits idle and waits with the session open. Every open session consumes a certain amount of memory, and if enough three-way handshakes are initiated, the host consumes all of the available memory waiting for ACKs. The traffic generated by a SYN flood is normal in appearance, and most servers these days are configured to leave only a certain number of TCP connections open. Another classic resource depletion attack is the Smurf attack.
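The memory-exhaustion mechanic can be sketched with a toy model. The per-session cost and memory budget below are invented for illustration; they are not real TCP stack numbers.

```python
# Toy model: each half-open (SYN_RECEIVED) session costs some memory on the host.
SESSION_COST = 1280          # assumed bytes of state per pending connection
MEMORY_BUDGET = 128 * 1024   # assumed memory the host reserves for the backlog

def syn_flood(packets):
    """Count how many spoofed SYNs it takes to exhaust the backlog memory."""
    used = 0
    for n in range(1, packets + 1):
        used += SESSION_COST       # host allocates state, waits for an ACK...
        if used > MEMORY_BUDGET:   # ...that never arrives
            return n               # host can no longer accept new connections
    return None                    # budget survived this many packets

print(syn_flood(1000))  # 103 spoofed SYNs exhaust this toy budget
```

The point of the model is how cheap the attack is: each spoofed packet costs the attacker almost nothing, while the victim must hold state for every one.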

A Smurf attack works by capitalizing on open network broadcast addresses. A broadcast address forwards all packets to every host on the destination subnet, and every host on that subnet replies to the source address listed in the traffic sent to the broadcast address. An attacker sends a stream of ICMP echo requests (pings) to a broadcast address, which has the effect of magnifying a single ICMP echo request up to 250 times.

Furthermore, the attacker spoofs the source address so that the target receives all the ICMP echo reply traffic. An attacker with a 128 Kb/s DSL connection can create a 32 Mb/s Smurf flood. DoS attacks commonly use spoofed IP addresses because the attack succeeds even if the response is misdirected: the attacker needs no response, and in cases like the Smurf attack wants at all costs to avoid one. This makes DoS attacks difficult to defend against, and even harder to trace.
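The amplification arithmetic from the paragraph above works out like this:

```python
# Smurf amplification: one echo request to a broadcast address is answered
# by every host on the subnet, all aimed at the spoofed victim address.
attacker_kbps = 128      # attacker's upstream (128 Kb/s DSL)
amplification = 250      # hosts replying on the broadcast subnet

flood_kbps = attacker_kbps * amplification
print(flood_kbps / 1000, "Mb/s")  # 32.0 Mb/s hitting the victim
```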

Further Reading: http://www.changeipaddress.net/us-ip-address-for-netflix/

Proxy Selection Using Hash Based Function

One of the difficulties in running a large-scale proxy infrastructure is choosing which proxy to use. This is not as straightforward as it sounds, and there are various methods commonly used to select the best proxy.

In hash-function-based proxy selection, a hash value is calculated from some information in the URL, and the resulting hash value is used to pick the proxy. One approach could be to use the entire URL as input to the hash function. However, as we’ve seen before, it is harmful to make proxy selection completely random: some applications expect a given client to contact a given origin server using the same proxy chain.

For this reason, it makes more sense to use the DNS host or domain name in the URL as the basis for the hash function. This way, every URL from a certain origin server host, or domain, will always go through the same proxy server (chain). In practice, it is even safer to use the domain name instead of the full host name (that is, drop the first part of the hostname) – this avoids any cookie problems where a cookie is shared across several servers in the same domain.
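Here is a minimal sketch of domain-based selection in Python. The proxy names are placeholders, md5 is used only as a stable (non-security) hash, and the domain extraction is deliberately naive – real code would consult the public-suffix list.

```python
import hashlib

# Hypothetical proxy pool (placeholder names)
PROXIES = ["proxy0.example", "proxy1.example", "proxy2.example"]

def registered_domain(url):
    """Drop the scheme and the first hostname label: www.bbc.co.uk -> bbc.co.uk.
    (A simplification; a real implementation would use the public-suffix list.)"""
    host = url.split("//", 1)[-1].split("/", 1)[0]
    parts = host.split(".")
    return ".".join(parts[1:]) if len(parts) > 2 else host

def pick_proxy(url):
    domain = registered_domain(url)
    h = int(hashlib.md5(domain.encode()).hexdigest(), 16)  # stable across runs
    return PROXIES[h % len(PROXIES)]

# Every URL on the same domain goes through the same proxy, so server-side
# cookies and sessions keep working.
print(pick_proxy("http://www.example.com/a"))
print(pick_proxy("http://images.example.com/b"))  # same proxy as above
```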

It’s also useful when large amounts of data are involved and can indeed be used to switch proxies even during the same connection.  For example if someone is using a proxy to stream video – such as in this article – BBC iPlayer France, then the connection will be live for a considerable time with a significant amount of data.  In these situations, there is also limited requirement for any caching facilities particularly with live video streams.

This approach may be subject to “hot spots” – that is, sites that are very well known and have a tremendous number of requests. However, while the high load may indeed be tremendous at those sites’ servers, the hot spots are considerably scaled down in each proxy server. There are several smaller hot spots from the proxy’s point of view, and they start to balance each other out. Hash-function-based load balancing in the client can be accomplished by using the client proxy auto-configuration feature (page 322). In proxy servers, this is done through the proxy server’s configuration file, or its API.

Cache Array Routing Protocol (CARP) is an advanced hash-function-based proxy selection mechanism. It allows proxies to be added and removed from the proxy array without relocating more than a single proxy’s share of documents. More simplistic hash functions use the modulo of the URL hash to determine which proxy the URL belongs to; if a proxy gets added or deleted, most of the documents get relocated – that is, the storage place assigned to them by the hash function changes.

With simplistic hash-function-based proxy allocation, using the modulo of the hash to determine which proxy to use, most of the documents in a three-proxy scenario end up on a differently numbered proxy in a four-proxy scenario: when a fourth proxy server is added, many of the proxy assignments change. Note that the proxies are numbered starting from zero so that the hash modulo can be used directly.
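You can see the relocation effect in a few lines of Python. With a plain modulo assignment, going from three to four proxies moves roughly three quarters of the documents, whereas CARP would move only about one new proxy’s share (a quarter). The URLs and hash choice are illustrative.

```python
import hashlib

def assign(url, n_proxies):
    # Simplistic modulo-based assignment (md5 for a stable non-security hash)
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return h % n_proxies

urls = [f"http://host{i}.example/page" for i in range(1000)]

# Count documents whose proxy changes when a fourth proxy is added
moved = sum(assign(u, 3) != assign(u, 4) for u in urls)
print(f"{moved / len(urls):.0%} of documents relocated")  # roughly 3/4 move
```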

John Ferris:


Tips on Debugging with telnet

It’s rather old school and can seem very time-consuming in a world of automated, visual debugging tools, but sometimes the older tools can be extremely effective. It’s been a long time since telnet was used as a proper terminal emulator, simply because it is so insecure, yet it’s still extremely useful as a troubleshooting tool because it operates at such a simple level. It should be noted that it can be used securely over a VPN connection, which will at least encrypt the traffic.


One of the biggest benefits of HTTP being an ASCII protocol is that it is possible to debug it using the telnet program. A binary protocol would be much harder to debug, as the binary data would have to be translated into a human-readable format. Debugging with telnet is done by establishing a telnet connection to the port that the proxy server is running on.

On UNIX, the host and port number can be specified as parameters to the telnet program:

telnet hostname port

For example, let’s say the proxy server’s hostname is step, and it is listening to port 8080. To establish a telnet session, type this at the UNIX shell prompt:

telnet step 8080

The telnet program will attempt to connect to the proxy server; you will see the line

Trying ...

If the server is up and running without problems, you will immediately get the connection, and telnet will display
Connected to servername.com
Escape character is '^]'.

After that, any characters you type will be forwarded to the server, and the server’s response will be displayed on your terminal. You will need to type in a legitimate HTTP request.

In short, the request consists of the actual request line containing the method, URL, and the protocol version; the header section; and a single empty line terminating the header section.
With POST and PUT requests, the empty line is followed by the request body. This section contains the HTML form field values, the file that is being uploaded, or other data that is being posted to the server.

The simplest HTTP request is one that has just the request line and no header section. Remember the empty line at the end! That is, press RETURN twice after typing in the request line.

GET http://www.google.com/index.html HTTP/1.0

(remember to hit RETURN twice)

Note that a header-less request must use HTTP/1.0: HTTP/1.1 requires a Host header, so a 1.1 request needs at least that one header line before the terminating empty line.

The response will come back, such as:

HTTP/1.1 200 OK
Server: Google-Enterprise/3.0
Date: Mon, 30 Jun 1997 22:37:25 GMT
Content-Type: text/html
Connection: close

This can then be used to perform further troubleshooting: simply type individual commands into the terminal and you can see the direct response. You should have permission to perform these functions on the server you are using. Typically this is done to troubleshoot connections, but the same technique can be used in a remote attack, and many attacks using this method route through something like a proxy or online IP changer in order to hide the attacker’s true location.
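The manual telnet session above can also be reproduced programmatically. Here is a minimal Python sketch of the same idea: open a raw TCP socket to the proxy, send the request line plus the terminating empty line, and read back the status line. The helper names (`build_request`, `raw_http_get`) are invented for this example; the host `step` and port 8080 are the hypothetical values used earlier.

```python
import socket

def build_request(url):
    """Build the minimal proxy-style request: the request line plus the
    empty line that terminates the (empty) header section."""
    return "GET {} HTTP/1.0\r\n\r\n".format(url)

def raw_http_get(proxy_host, proxy_port, url):
    """Send the request over a raw socket, exactly as a telnet session
    would, and return the server's status line."""
    with socket.create_connection((proxy_host, proxy_port), timeout=10) as sock:
        sock.sendall(build_request(url).encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    # The status line is everything up to the first CRLF.
    return response.split(b"\r\n", 1)[0].decode("ascii", "replace")

# Hypothetical usage, matching the telnet example above:
# print(raw_http_get("step", 8080, "http://www.google.com/index.html"))
```

Because the whole exchange is plain ASCII, the bytes sent here are byte-for-byte what you would have typed into telnet.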

Components of a Web Proxy Cache

In order to implement a fully functional Web proxy cache, a cache architecture requires several components:

  • A storage mechanism for storing the cache data.
  • A mapping mechanism to establish the relationship between URLs and their respective cached copies.
  • A format for the cached object content and its metadata.

These components may vary from implementation to implementation, and certain architectures can do away with some components.

Storage

The main Web cache storage type is persistent disk storage. However, it is common to have a combination of disk and in-memory caches, so that frequently accessed documents remain in the main memory of the proxy server and don’t have to be constantly reread from the disk.

The disk storage may be deployed in different ways:

  • The disk may be used as a raw partition, with the proxy performing all space management, data addressing, and lookup-related tasks.
  • The cache may be in a single file or a few large files which contain an internal structure capable of storing any number of cached documents. The proxy deals with the issues of space management and addressing.
  • The filesystem provided by the operating system may be used to create a hierarchical structure (a directory tree); data is then stored in filesystem files and addressed by filesystem paths. The operating system will do the work of locating the file(s).
  • An object database may be used.

Again, the database may internally use the disk as a raw partition and perform all space management tasks, or it may create a single file, or a set of files, and create its own “filesystem” within those files.

Mapping

In order to cache a document, a mapping has to be established such that, given the URL, the cached document can be looked up fast. The mapping may be a straightforward mapping to a filesystem path.

Typically a proxy will store any resource that is accessed frequently. For example, on many UK proxies the BBC website is extremely popular, so it is essential that it is cached. Even for satellite offices, this allows people to reach the BBC through the company’s internal network: the page is requested and cached by the proxy based in the UK, so instead of the BBC being blocked outside the UK it is still accessible.

Indeed, many large multinational corporations sometimes inadvertently offer these facilities. Employees with the technical know-how can connect their remote access clients to specific servers in order to reach normally blocked resources. They might connect through the British proxy to access the BBC, then switch to a French proxy to reach a media site like M6 Replay, which only allows French IP addresses.

It is also important to remember that direct mappings are normally reversible: if you have the cache file name, you can use it to reproduce the unique URL for that document. Many applications make use of this property, while others use a mapping function based on hashes instead.
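A hash-based mapping is the common non-reversible alternative: the URL is hashed into a cache path, so the URL can no longer be recovered from the file name and must be kept in the object’s metadata. A minimal Python sketch (the cache directory layout here is invented for the example):

```python
import hashlib
import os

def cache_path(url, cache_root="/var/cache/proxy"):
    """Map a URL to a cache file path via a hash of the URL.

    Unlike a direct filesystem mapping, this is NOT reversible: the
    original URL cannot be recovered from the path, so it must be
    stored in the cached object's metadata."""
    digest = hashlib.sha1(url.encode("utf-8")).hexdigest()
    # Spread files over subdirectories so no single directory gets huge.
    return os.path.join(cache_root, digest[:2], digest[2:])
```

The mapping is deterministic, so the same URL always lands on the same file, which is all the lookup path needs.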

Programming Terms: Garbage Collection

There are lots of IT terms thrown about which can be quite confusing for even the experienced IT worker. Particularly in the world of network programming and proxies, sometimes similar words have completely different meanings depending on where you are in the world.

Let’s step back for a minute and look at what garbage collection means in the programming language world. Though not strictly relevant to the subject of this blog, it is a good way to illustrate the benefits and drawbacks of garbage-collection-style memory management, whether on disk or in memory. Compiled programming languages, such as C or Pascal, typically do not have run-time garbage collection, which is one reason manual memory management bugs are so costly in heavy-duty network services, such as the BBC servers that stream live TV like Match of the Day to millions of users on VPNs and home connections.

Instead, those languages require program authors to explicitly manage dynamic memory: memory is allocated by a call to malloc(), and the allocated memory must be freed by a call to free() once it is no longer needed. Otherwise, the memory space gets cluttered and may run out. Other programming languages, such as Lisp, use an easier memory management style: dynamic memory that gets allocated does not have to be explicitly freed. Instead, the run-time system will periodically inspect its dynamic memory pool and figure out which chunks of memory are still used, and which are no longer needed and can be marked free.

Usually programming languages that are interpreted or object oriented (Lisp, Java, Smalltalk) use garbage collection techniques for their dynamic memory management. The determination of what is still used is done by checking whether the memory area is still referenced somewhere, that is, whether there is still a pointer pointing to that area. If all references are lost, for example because they have been thrown away by the program, the memory can no longer be accessed and therefore can be freed.

There are several different approaches to doing this reference detection. One approach is to make each memory block contain an explicit reference counter which gets incremented when a new reference is created and decremented when the reference is deleted or changed to point somewhere else. This requires more work from the run-time system when managing memory references. Another approach is simply to use brute force periodically and traverse the entire memory arena of the program looking for memory references and determine which chunks still get referenced.

This makes it easier and faster to manage memory references as reference counters don’t have to be updated constantly. However, at the same time it introduces a rather heavyweight operation of having to traverse the entire memory scanning for references.
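The first approach, explicit reference counting, can be sketched in a few lines. This toy Python class (the names are invented for the example) mimics what a run-time system does when references to a memory block are created and dropped:

```python
class RefCounted:
    """Toy memory block with an explicit reference counter, as in the
    first garbage-collection approach described above."""

    def __init__(self):
        self.refcount = 0
        self.freed = False

    def incref(self):
        """A new reference to this block was created."""
        self.refcount += 1

    def decref(self):
        """A reference was deleted or changed to point somewhere else."""
        self.refcount -= 1
        if self.refcount == 0:   # no pointers left, so the block can be freed
            self.freed = True
```

Incidentally, CPython itself manages objects this way, supplemented by a periodic collector that catches reference cycles, which pure counting cannot free.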

John Williams



Subroutine – Passing Parameters

Passing parameters into subroutines; the following examples are from Perl scripts.

Parameters are passed into subroutines in a list with a special name — it’s called @_ and it doesn’t conform to the usual rules of variable naming. This name isn’t descriptive, so it’s usual to copy the incoming variables into other variables within the subroutine.

Here’s what we did at the start of the getplayer subroutine:

$angle = $_[0];

If multiple parameters are going to be passed, you’ll write something like:

($angle,$units) = @_;

Or if a list is passed to a subroutine:

@pqr = @_;

In each of these examples, you’ve taken a copy of each of the incoming parameters; this means that if you alter the value held in the variable, that will not alter the value of any variable in the calling code.

This copying is a wise thing to do; later on, when other people use your subroutines, they may get a little annoyed if you change the value of an incoming variable! The same subroutine techniques also turn up in less benign contexts, such as scripts that divert video streams to bypass geo-blocking, for example to watch BBC News outside the UK.

Returning values

Our first example concludes the subroutine with a return statement:

return ($response);

which very clearly states that the value of $response is to be returned as the result of running the subroutine. Note that if you execute a return statement earlier in your subroutine, the rest of the code in the subroutine will be skipped over.

For example:

sub flines {
    $fnrd = $_[0];
    open (FH,$fnrd) or return (-1);
    @tda = <FH>;
    close FH;
    return (scalar (@tda));
}

will return a -1 value if the file requested couldn’t be opened.

Writing subroutines in a separate file
Subroutines are often reused between programs. You really won’t want to rewrite the same code many times, and you’ll certainly not want to maintain the same thing many times over. Here’s a simple technique and checklist that you can use in your own programs. It comes from a Perl coding lesson, but can be used in any high-level programming language which supports subroutines.

Plan of action:
a) Place the subroutines in a separate file, using a file extension .pm
b) Add a use statement at the top of your main program, calling in that file of subroutines
c) Add a 1; at the end of the file of subroutines. This is necessary since use executes any code that’s not included in subroutine blocks as the file is loaded, and that code must return a true value: a safety feature to prevent people using files that weren’t designed to be used.
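The same plan works in most high-level languages. As an illustration, here is the checklist transposed to Python, where an importable module plays the role of the .pm file (and no trailing 1; is needed, since Python modules don’t have Perl’s true-value requirement). The file and function names are invented, and the sketch writes the “separate file” itself so it stays self-contained:

```python
import importlib.util
import os
import tempfile

# Step (a): the subroutines live in a separate file (here, mysubs.py).
module_source = '''
def flines(filename):
    """Return the number of lines in a file, or -1 if it can't be opened."""
    try:
        with open(filename) as fh:
            return len(fh.readlines())
    except OSError:
        return -1
'''

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "mysubs.py")
with open(path, "w") as f:
    f.write(module_source)

# Step (b): the main program pulls the file in. Normally this is just
# "import mysubs"; loading by explicit path keeps the sketch runnable.
spec = importlib.util.spec_from_file_location("mysubs", path)
mysubs = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mysubs)

print(mysubs.flines(path))   # line count of mysubs.py itself
```

In everyday use, the explicit loader machinery disappears: you place mysubs.py next to your program and write import mysubs at the top, the direct analogue of Perl’s use statement.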
