Do We Need Encryption for S3?

There are lots of people who hesitate about using cloud-based services like S3, and with very good reason. For one, it’s important to remember the old saying about the cloud – it’s simply someone else’s computer. The cloud is not some super-secure remote service; it’s just a bunch of hard disks controlled and serviced by someone else. In many ways it’s no different from accessing any other web-based service. So if you worry about what information and details you’re putting on Craigslist’s servers, you should worry about the cloud too!





Using a VPN is always an Option

With a VPN server’s IP address attached to your encrypted traffic, you can browse the internet safely and securely. Changing your IP address and using a web proxy can also help protect your identity, which can come in handy if you’re worried that someone might be snooping on you. If you are looking for a dependable security solution that also provides secure web connections, then a VPN is a must-have application for every one of your PCs and mobile devices.

It doesn’t guarantee security, of course, but it does add an important layer which is extremely useful. For a start, the encryption protects all the data in transit, which means that those important sales videos you have created don’t get intercepted while you’re emailing them to your marketing manager over an insecure link. They may not be safe if she leaves her laptop somewhere, but at least it won’t be your fault!

However, be aware that this will only fool the simplest of IP detectors, as your real IP address may still appear in other parts of the HTTP header sent to the target webpage. The ability to change your IP address frequently also increases privacy. Each configuration change should be logged and referenced with a timestamp, the username of the administrator and the action taken, and you should always take time to test the system in full after making changes. Ideally the target site only ever sees the IP location of the proxy, so that it looks as if the proxy server is the client.
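To make the header-leakage point above concrete, here is a small sketch in Python showing how proxy-related HTTP headers can betray a client’s real address. The header names are real, commonly used ones, but whether any given proxy actually sets them depends entirely on how it is configured, and the addresses below are just illustrative.

```python
# Sketch: proxy-related HTTP headers that can leak a client's real IP.
LEAKY_HEADERS = ("X-Forwarded-For", "Via", "Forwarded", "X-Real-IP")

def find_ip_leaks(headers):
    """Return the subset of headers that may expose the original client IP."""
    leaks = {}
    for name, value in headers.items():
        # Normalise casing ("x-forwarded-for" -> "X-Forwarded-For") before checking
        if name.title() in LEAKY_HEADERS or name in LEAKY_HEADERS:
            leaks[name] = value
    return leaks

# Example: a non-anonymous ("transparent") proxy appends the client's address.
request_headers = {
    "Host": "example.com",
    "User-Agent": "Mozilla/5.0",
    "X-Forwarded-For": "203.0.113.7",   # the real client IP, added by the proxy
    "Via": "1.1 proxy.example.net",
}
print(find_ip_leaks(request_headers))
```

An anonymising proxy or VPN endpoint simply strips or never adds these fields, which is exactly why it defeats this kind of detection.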


Software Testing: Static Analysis

There are several phases to a proper test analysis; the initial stage is normally the static review. This is the process of examining the static code to check for simple errors such as syntax problems or fundamental flaws in both design and application. It’s not normally a long, exhaustive check, unless of course some obvious or major issues are identified at this stage.

Just like reviews, static analysis looks for problems without executing the code. However, as opposed to reviews, static analysis is undertaken once the code has actually been written. Its objective is to find flaws in software source code and software models. Source code is any series of statements recorded in some human-readable programming language which can then be translated to equivalent computer-executable code; it is normally produced by the developer. A software model is a representation of the final approach developed using techniques such as the Unified Modeling Language (UML); it is commonly produced by a software designer. Both should be accessed and stored securely, with restrictions on who can alter them.

Static analysis can find issues that are hard to find during test execution by analysing the program code, e.g. representing instructions to the computer as control flow graphs (how control passes between modules) and data flows (making certain data is identified and accurately used). The value of static analysis is:

Early discovery of defects just before test execution. Just like reviews, the sooner the issue is located, the cheaper and simpler it is to fix.

Early warning regarding suspect aspects of the code or design, through the calculation of metrics such as a high-complexity measure. If code is too complex it can be more prone to error, or less likely to receive the focus it needs from programmers. If they recognise that the code has to be complex, then they are more likely to check and double-check that it is correct; however, if it is unexpectedly complex there is a higher chance that it will contain a defect.

Identification of defects not easily found by dynamic testing, such as non-compliance with development standards, as well as detection of dependencies and inconsistencies in software models, such as links or interfaces that were either incorrect or unknown before static analysis was carried out.

Enhanced maintainability of code and design. By performing static analysis, defects are removed that would otherwise have increased the volume of maintenance needed after go-live. It can also identify complex code which, if fixed, will make the code more understandable and consequently easier to maintain.

Prevention of defects. By pinpointing the defect early in the life cycle it is actually a great deal easier to identify why it was there in the first place (root cause analysis) than during test execution, therefore providing information on possible process improvement that could be made to prevent the same defect appearing again.
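As a small illustration of the metric-based early warning described above, here is a rough sketch that approximates a cyclomatic-complexity count by walking a function’s abstract syntax tree. Real static-analysis tools are far more thorough; the node selection here is a simplification for the example.

```python
import ast

# Branch points that each add a decision path through the code.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp, ast.ExceptHandler)

def complexity(source):
    """Return 1 + the number of branch points found in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

simple = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(x):\n"
    "    if x > 0:\n"
    "        for i in range(x):\n"
    "            if i % 2:\n"
    "                x -= 1\n"
    "    return x\n"
)

print(complexity(simple))   # 1
print(complexity(branchy))  # 4
```

A tool like this would flag the second function for extra review, which is precisely the “early warning” use of static analysis: nothing is executed, yet the suspect code is identified before test execution begins.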

Source: Finding Residential Proxies, James Williams

Don’t Expect Internet Privacy by Default

When the internet was first conceived back in the 1980s – the date varies depending on your definition – there was little thought about security. The date is of course disputed, but I prefer 1983, when TCP/IP was adopted by ARPANET; the lack of security, however, is a matter of fact. It was a form of communication allowing disparate devices and people to talk to each other, and no-one expected it to end up where it is today. Unfortunately, to allow cross-compatibility, compromises need to be made, and the security of your data is one of them.

However, there are methods to add some security: websites try with SSL implementations, but the end user can assist too. Most users who have security concerns or have experienced cyber crime will have come across VPN software. This is a virtual private network, which can be created to encrypt your data as it travels across the internet. These come in all shapes and sizes, from basic personal security services to advanced residential IP rotating proxies like these ones.

For lots of people there is a pervasive image of a VPN user: something similar to a young person in a hoodie, hunched over a laptop in a coffee shop. They’re possibly attempting to hack into some government computers and are on the run from the authorities. As a VPN conceals your geographic location and your web traffic, there’s a common idea that the individual is up to no good and certainly has something to hide.

The reality is a very long way from this viewpoint, and even though numerous hackers do indeed use VPNs, so do an awful lot of ordinary individuals. Most large corporations have been using VPNs for decades to support inbound connections from remote users. If a salesman needs access to the product database on the company’s network, it’s much simpler to allow him to connect through the internet and view the latest version. This is much more secure than travelling around with DVDs and obviously ensures that he or she has the most recent version.

If you make any normal connection over the internet, all your web traffic is pretty much viewable – anyone with a mind to can intercept and read it. If you’re logging in and connecting to a secured share, that would certainly include usernames and passwords. So to protect these connections, you would commonly install a VPN client on the laptop and make certain it’s used to encrypt the connection back to the company network. It is completely legitimate and indeed intelligent business practice.

Regular home users make use of VPNs for very similar reasons. Essentially the internet is insecure, with minimal provision for security built in by default. Sure, you can access secure sites through things like SSL when you have to enter a credit card or payment information. However, this is the exception, not the rule; most websites are not secure, and the vast majority of information flies across the wires in clear text.

In addition to the general insecurity of the web, there’s the further issue of privacy. Your surfing data is easily available from a variety of sources. For a start, your ISP keeps a complete record of everything you do online, and depending on where you live this can be routinely and easily accessed. Using a VPN stops this, transforming your internet activity into an encrypted stream which is unreadable without your permission. Are VPNs used by cyber criminals and terrorists? Sure, but also by millions of people who think that what they do online shouldn’t be part of the public record.

VPN systems are becoming more and more sophisticated, driven by demand and the risk of detection. There are all sorts of variations, including different setups and ports to dodge detection. You can also have them use home-based IP addresses through specific residential IP providers.

In a large number of countries VPNs are not illegal, but simply a standard business and personal security tool. However, in some countries this is not the case, and you can get into trouble if caught using one. Countries that ban the use of VPNs include China, Iraq, Belarus and Turkey. Other countries merely allow authorized services, which usually means those that can be compromised if required. Individuals still use VPNs in the majority of these nations; indeed, in Turkey almost all expats use one to view things like British and American TV online. It’s actually quite difficult to detect a VPN in use, but that doesn’t stop it technically being illegal in those locations.

Source: http://www.onlineanonymity.org/proxies/residential-vpn-ip-address/

Components of a Web Proxy Cache

There are several important components to the standard cache architecture of a typical web proxy server. In order to implement a fully functional web proxy cache, an architecture requires:

  • A storage mechanism for the cached data.
  • A mapping mechanism to establish the relationship between URLs and their respective cached copies.
  • A format for the cached object content and its metadata.

These components may vary from implementation to implementation, and certain architectures can do away with some of them.

Storage

The main web cache storage type is persistent disk storage. However, it is common to have a combination of disk and in-memory caches, so that frequently accessed documents remain in the main memory of the proxy server and don’t have to be constantly reread from the disk.

The disk storage may be deployed in different ways:

  • The disk may be used as a raw partition, with the proxy performing all space management, data addressing, and lookup-related tasks.
  • The cache may be in a single or a few large files which contain an internal structure capable of storing any number of cached documents.

    In this case the proxy still deals with the issues of space management and addressing.
  • The filesystem provided by the operating system may be used to create a hierarchical structure (a directory tree); data is then stored in filesystem files and addressed by filesystem paths. The operating system does the work of locating the file(s).
  • An object database may be used.

Again, the database may internally use the disk as a raw partition and perform all space management tasks, or it may create a single file, or a set of files, and build its own “filesystem” within those files.

Mapping

In order to cache a document, a mapping has to be established such that, given the URL, the cached document can be looked up fast. The mapping may be a straightforward mapping to a filesystem path, although it can also be stored internally.
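One common way to implement the mapping just described is to hash the URL into a deterministic filesystem path. The sketch below assumes a two-level directory fan-out (a detail chosen for the example, simply to keep any single directory from growing too large) and an illustrative cache root.

```python
import hashlib
import os

def cache_path(url, cache_root="/var/cache/proxy"):
    """Map a URL to a deterministic filesystem path for its cached copy."""
    digest = hashlib.sha1(url.encode("utf-8")).hexdigest()
    # e.g. ab/cd/abcdef123... -- fan the entries out across subdirectories
    return os.path.join(cache_root, digest[:2], digest[2:4], digest)

p1 = cache_path("http://www.bbc.co.uk/news")
p2 = cache_path("http://www.bbc.co.uk/news")
print(p1 == p2)  # True: the same URL always maps to the same file
```

Note that a hash-based mapping like this is one-way; the metadata stored with the cached object would typically record the original URL.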

Typically a proxy will store any resource that is accessed frequently. For example, in many UK proxies the BBC website is extremely popular, so it’s essential that it is cached. Even for satellite offices, this allows people to reach the BBC through the company’s internal network: the page is requested and cached by the UK-based proxy, so instead of the BBC being blocked outside the UK it remains accessible.

Indeed, many large multinational corporations sometimes inadvertently offer these facilities. Employees with the technical know-how can connect their remote access clients to specific servers in order to obtain access to normally blocked resources. They might connect through the British proxy to access the BBC, then switch to a French proxy in order to access a media site like M6 Replay which only allows French IP addresses.

It is also important to remember that direct mappings are normally reversible – that is, if you have the correct cache file name you can use it to reproduce the unique URL for that document. Many implementations instead use a mapping function based on hashes, which does not have this property.

Subroutine – Passing Parameters

Passing parameters into subroutines – the following examples are from Perl scripts.

Parameters are passed into subroutines in a list with a special name — it’s called @_ and it doesn’t conform to the usual rules of variable naming. This name isn’t descriptive, so it’s usual to copy the incoming variables into other variables within the subroutine.

Here’s what we did at the start of the getplayer subroutine:

    $angle = $_[0];

If multiple parameters are going to be passed, you’ll write something like:

    ($angle, $units) = @_;

Or if a list is passed to a subroutine:

    @pqr = @_;

In each of these examples, you’ve taken a copy of each of the incoming parameters; this means that if you alter the value held in the variable, that will not alter the value of any variable in the calling code.

This copying is a wise thing to do; later on, when other people use your subroutines, they may get a little annoyed if you change the value of an incoming variable!

Returning values

Our first example concludes the subroutine with a return statement:

    return ($response);

which very clearly states that the value of $response is to be returned as the result of running the subroutine. Note that if you execute a return statement earlier in your subroutine, the rest of the code in the subroutine will be skipped over.

For example:

    sub flines {
        $fnrd = $_[0];
        open (FH, $fnrd) or return (-1);
        @tda = <FH>;
        close FH;
        return (scalar (@tda));
    }

will return a -1 value if the file requested couldn’t be opened.

Writing subroutines in a separate file
Subroutines are often reused between programs. You really won’t want to rewrite the same code many times, and you’ll certainly not want to have to maintain the same thing many times over. Here’s a simple technique and checklist that you can use in your own programs. This is from a Perl coding lesson, but can be used in any high-level programming language which supports subroutines.

Plan of action:
a) Place the subroutines in a separate file, using a file extension .pm
b) Add a use statement at the top of your main program, calling in that file of subroutines
c) Add a 1; at the end of the file of subroutines. This is necessary since use executes any code that’s not included in subroutine blocks as the file is loaded, and that code must return a true value – a safety feature to prevent people using files that weren’t designed to be used.


Data Encapsulation and the OSI Model

When a client needs to transmit data across the network to another device, an important process happens.  This process is called encapsulation and involves adding protocol information from each layer of the OSI model.  Every layer in the model communicates only with its peer layer on the receiving device.

In order to communicate and exchange information, each layer uses something called a PDU, or Protocol Data Unit.  These are extremely important and contain the control information attached to the data at each layer of the model.  The PDU is normally attached as a header to the data field; however, it can also be attached as a trailer at the end of the data.

The encapsulation process is how the PDU is attached to the data at each layer of the OSI model.  Every PDU has a specific name which is dependent on the information contained in each header.   The PDU is only read by the peer layer on the receiving device at which point it is stripped off and the data handed to the next layer.

Only upper-layer information is passed on to the next level and then transmitted onto the network.  The data is then handed down to the Transport layer, which sets up a virtual circuit to the receiving device by sending a synch packet.  In most cases the data needs to be broken up into smaller segments, with a Transport layer PDU attached to the header of each.

Network addressing and routing through the internetwork happen at the Network layer for each data segment.  Logical addressing, for example IP, is used to transport every data segment to its destination network.  When the Network layer protocol adds the control header to the data received from the Transport layer, the result is described as a packet or datagram.  This addressing information is essential to ensure the data reaches its destination; it allows data to traverse all sorts of networks and devices, with the right delivery information added to subsequent PDUs on its journey.
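The layering described above can be sketched in a few lines: each layer simply prepends its own header to whatever it received from the layer above, and the receiving peer strips those headers off in reverse order. The field layouts below are invented for the illustration – they are not real TCP or IP header formats.

```python
import struct

def transport_segment(data, src_port, dst_port):
    # Transport-layer PDU: a toy segment header of two 16-bit ports
    header = struct.pack("!HH", src_port, dst_port)
    return header + data

def network_packet(segment, src_ip, dst_ip):
    # Network-layer PDU: a toy packet header of two 4-byte addresses
    header = struct.pack("!4s4s", src_ip, dst_ip)
    return header + segment

payload = b"hello"
segment = transport_segment(payload, 5000, 80)
packet = network_packet(segment, b"\xc0\xa8\x00\x01", b"\xc0\xa8\x00\x02")

# The receiving device reads and removes each header in turn...
src_ip, dst_ip = struct.unpack("!4s4s", packet[:8])
src_port, dst_port = struct.unpack("!HH", packet[8:12])
print(packet[12:])  # b'hello' -- the original payload survives intact
```

This mirrors the key point in the text: each header is only meaningful to its peer layer, and once it has been read it is stripped off before the data is handed upward.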

One aspect that often causes confusion is the layer where packets are taken from the Network layer and placed on the actual delivery medium (cable or wireless, for example). This can be even more confusing when complications such as VPNs are involved, which route the data through a specified path – for example, when people route through a VPN server in order to access BBC iPlayer abroad, additional processing is applied to the data.  This stage is handled by the Data Link layer, which encapsulates the data into a frame and adds to the header the hardware addresses of both the source and the destination.

Remember, for this data to be transmitted over a physical network it must be converted into a digital signal.  A frame is therefore simply a logical group of binary digits – 1s and 0s – which is read only by devices on the local network.  Receiving devices synchronize to the digital signal and extract all the 1s and 0s.  They then rebuild the frames and run a CRC (Cyclic Redundancy Check) to ensure the result matches the transmitted frame.
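The frame-check idea can be illustrated with a short sketch: the sender appends a checksum over the frame contents, and the receiver recomputes it and compares. Here zlib’s CRC-32 stands in for the Ethernet FCS (real Ethernet also uses a 32-bit CRC, though it is computed in hardware over specific frame fields).

```python
import zlib

def add_fcs(frame: bytes) -> bytes:
    """Append a 4-byte CRC-32 frame check sequence to the frame."""
    return frame + zlib.crc32(frame).to_bytes(4, "big")

def check_fcs(frame_with_fcs: bytes) -> bool:
    """Recompute the CRC over the frame body and compare with the trailer."""
    frame, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(frame).to_bytes(4, "big") == fcs

sent = add_fcs(b"\x01\x02\x03 payload bits")
print(check_fcs(sent))                        # True: frame arrived intact
corrupted = bytes([sent[0] ^ 0xFF]) + sent[1:]
print(check_fcs(corrupted))                   # False: CRC mismatch detected
```

A frame that fails this check is simply discarded by the receiver; recovery, if any, is left to the higher layers.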



Network Topology: Ethernet at Physical Layer

Ethernet is commonly implemented in a shared hub/switch environment where, if one station broadcasts a frame, all devices must synchronize to the digital signal to extract the data from the physical wire.  All the devices sharing the physical medium need to listen to each frame, as they are considered to be on the same collision domain.  The downside of this is that only one device can transmit at a time, while all devices must synchronize to and extract every frame.

If two devices try to transmit at the same time – and this is very possible – a collision will occur.  Many years ago, in 1984 to be precise, the IEEE Ethernet committee released a method of dealing with this situation: a protocol called Carrier Sense Multiple Access with Collision Detect, or CSMA/CD for short.  This protocol tells all stations to listen for devices trying to transmit, and to stop and wait if they detect any activity.  The length of the wait varies randomly, the idea being that when a collision is detected it won’t simply be repeated on the retry.
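The random wait mentioned above is classically a binary exponential backoff: after the n-th collision a station waits a random number of slot times in the range 0 to 2ⁿ − 1, with the exponent capped after ten doublings. This sketch illustrates the idea; the 51.2 µs slot time is the classic 10 Mbps Ethernet figure.

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds

def backoff_delay(collision_count, rng=random):
    """Return the random wait (in microseconds) after the given collision."""
    exponent = min(collision_count, 10)       # cap the doubling at 2^10 slots
    slots = rng.randint(0, 2 ** exponent - 1) # pick a random slot count
    return slots * SLOT_TIME_US

delay = backoff_delay(3)
print(0 <= delay <= 7 * SLOT_TIME_US)  # True: after 3 collisions, 0..7 slots
```

Because each station picks its slot count independently, the chance that the same two stations collide again halves (roughly) with every retry – which is exactly why the protocol scales on a shared segment.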

It’s important to remember that Ethernet uses a bus topology.   This means that whenever a device transmits, the signal must run from one end of the segment to the other.   The standard also specifies a baseband technology, which means that when a station does transmit it is allowed to use all the potential bandwidth on the wire; there is no allowance for other devices to use the remaining bandwidth at the same time.

Over the years the original IEEE 802.3 standards have been updated but here are the initial settings:

  • 10Base2: 10 Mbps baseband technology with up to 185 meters of cable length.  Also known as thinnet, capable of supporting up to 30 workstations in one segment.  Not often seen now.
  • 10Base5: 10 Mbps baseband technology allowing up to 500 meters of cable length. Known as thicknet.
  • 10BaseT: 10 Mbps using Category 3 twisted-pair cables. Here every device must connect directly into a network hub or switch, which also means that there can only be one device per network segment.

Both the speeds and topologies have changed greatly over the years, and of course 10 Mbps is no longer adequate for most applications.  In fact most networks now run on gigabit switches in order to meet the increasing demands of network-enabled applications.    Remember that allowing access to the internet means that bandwidth requirements will rocket.

Each of the 802.3 standards defines an Attachment Unit Interface (AUI) that allows one-bit-at-a-time transfer from the data link media access method to the Physical layer.  This means that the Physical layer becomes adaptable and can support any emerging or newer technologies which operate in a different way.  There is one notable exception, though: the AUI interface cannot support 100 Mbps Ethernet, for one specific reason – it cannot cope with the high frequencies involved.   The same is true of even faster technologies such as Gigabit Ethernet.

John Smith

Author and Network VPN Blogger.


Using SSL for Email and Internet Protocols

If you want to increase the security of your email messaging, there are several routes you can take.  First of all, you should look at digitally signing and encrypting all your email messages.  There are several applications that can do this, or you could move your email to the cloud and look at a server-based email system.    Most of the major suppliers of web-based secure mail are extremely secure with regard to interception and endpoint security; however, you obviously have to trust your email to a third party.

Many companies won’t be happy outsourcing their messaging like this, as it’s often the most crucial part of a company’s digital communications.   So what are the options if you want to operate a secure, digitally advanced email messaging service within your corporation?  The first place to investigate is increasing the security of authentication and data transmission.   There are plenty of RFCs (Requests for Comments) on these subjects, particularly relating to email and its related protocols.

Here are a few of the RFC-based protocols related to email:

  • Post Office Protocol 3 (POP3) – the simple but effective protocol used to retrieve email messages from an inbox on a dedicated email server.
  • Internet Message Access Protocol 4 (IMAP4) – this is usually used to retrieve any messages stored on an email server. It includes those stored in inboxes, and other types of message boxes such as drafts, sent items and public folders.
  • Simple Mail Transfer Protocol (SMTP) – very popular and ubiquitous email protocol, generally just used to send email messages to recipients.
  • Network News Transfer Protocol (NNTP) – Not specifically an email protocol, however can be used as such if required! It’s normally used to post and download newsgroup messages from news servers.  Perhaps slightly outdated now, but a seriously efficient protocol that can be used for distributing emails.

The big security issue with all these protocols, however, is that by default the majority send their messages in plain text. You can counteract this by encrypting at the client level; the easiest method is simply to use a VPN. Many people already use a VPN to access things like various media channels – read this post about BBC iPlayer VPN, which is not exclusively about security but more about bypassing region blocks.

Remember, however, that when an email message is transmitted in clear text it can be intercepted at various levels. Anyone with a decent network sniffer and access to the data path could read the message content. The solution is in some ways obvious and implied in the title of this post: implement SSL. Using this extra security layer you can protect all the simple RFC-based email protocols, and better still, it slots in simply to interact with standard email systems like Exchange.
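As a hedged sketch of the client side of this, here is how a mail client might submit a message over an encrypted channel using Python’s standard smtplib. The host name, port and credentials are placeholders; STARTTLS upgrades an initially plain-text SMTP session, whereas port 465 (SMTPS) would instead use TLS from the first byte.

```python
import smtplib
import ssl

def send_secure(host, sender, recipient, message, user, password):
    """Submit a message over SMTP upgraded to TLS via STARTTLS."""
    context = ssl.create_default_context()   # verifies server certificates
    with smtplib.SMTP(host, 587) as server:  # 587 = SMTP submission port
        server.starttls(context=context)     # upgrade to an encrypted session
        server.login(user, password)         # credentials now travel encrypted
        server.sendmail(sender, recipient, message)

# Example call (placeholder values -- would need a real server and account):
# send_secure("mail.example.com", "me@example.com", "you@example.com",
#             "Subject: hi\r\n\r\nHello!", "me@example.com", "secret")
```

Without the STARTTLS step, everything in this exchange – including the login – would cross the wire in clear text, which is exactly the interception risk described above.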

It works and is easy to implement; note that when SSL is enabled the server will accept connections on the SSL port rather than the standard port that the email protocol normally uses. If you have only one or two users who need a high level of email security, then using a virtual private network might be sufficient, and there are many sophisticated services that come with dedicated support.


Authentication of Anonymous Sessions

Any automated identity system needs one thing: the ability to create and distribute users’ authentication credentials and the rights that they assert.  Many people look first to the world leader, Kerberos, but there are other systems which are just as capable.   In recent years SAML (Security Assertion Markup Language) has become increasingly popular and is turning into something of an industry standard.  There are good, practical reasons why SAML has become popular, including its ability to use XML to represent various security credentials.    It defines a protocol to request and receive credential data from a SAML authority service.

In reality, although SAML can look quite complicated at first glance, it is relatively straightforward to use.    It’s ideally positioned to deal with security and authentication issues online, including the many users who protect their privacy and surf anonymously.  Remember that the security assertions will normally only apply to a particular domain, which means that the user’s identity can be protected to some extent.

A SAML authority can be described as a service, usually online, which responds to specific SAML requests.  The responses to these requests are known as assertions, and they come in three distinct types:

Authentication: a SAML authority receives a request about a specific user's credentials. The reply stipulates that the authentication was completed and at what time.

Attribute: once an authentication assertion has been returned, a SAML attribute authority can be asked for the attributes associated with the subject.  These are returned as attribute assertions.

Authorization: a SAML authorization assertion is returned in response to a request about permissions on specified resources.  This is referenced against an access control list holding the relevant permissions, which could even be dynamically referenced and updated.  The response is typically quite simple – i.e. that subject A has been granted permission for access to resource Z.
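As a rough illustration of the first assertion type, the sketch below builds a simplified SAML-style authentication assertion using only the standard library. The element and attribute names are deliberately abbreviated for clarity – the real SAML 2.0 schema (namespaces, issuer, conditions, signatures) is considerably richer.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def build_authn_assertion(subject: str) -> str:
    """Build a simplified, illustrative authentication assertion."""
    assertion = ET.Element("Assertion", {"Version": "2.0"})
    ET.SubElement(assertion, "Subject").text = subject
    # The authority stipulates when authentication was completed.
    ET.SubElement(assertion, "AuthnStatement", {
        "AuthnInstant": datetime.now(timezone.utc).isoformat(),
    })
    return ET.tostring(assertion, encoding="unicode")

print(build_authn_assertion("alice@example.com"))
```

In a real deployment the assertion would be generated and signed by the SAML authority, never by the client itself.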

Although all these assertions are quite distinct, it is very likely that they will all be handled by a single authority.  However, in highly secure or distributed systems they may be spread across separate servers in a domain.

SAML has become more popular because it is ideal for use in web-based and distributed systems, whereas Kerberos is not as flexible.   For example it could be used to allocate permissions for users to download videos like this based on permissions assigned to a subscriber.   This means that the permissions can be integrated with all sorts of web services and functions, including SOAP – an advanced protocol often used for exchanging structured information across computer networks.


X Windows System

The X Windows system, commonly abbreviated to just X, is a client/server application which allows multiple clients to use the same display managed by a server.  The server in this instance manages the display, mouse and keyboard.   The client is any remote application which runs on a different host (or on the same one).    In most configurations the standard protocol used is TCP, because it's the most widely understood by both client and host.  Twenty years ago, though, many other protocols were used by X Windows – DECNET was a typical choice in large Unix and Ultrix environments.

Sometimes the X Windows system ran on a dedicated piece of hardware, although this is becoming less common. Most of the time the client and server run on the same host, allowing inbound connections from remote clients when required.  In some specialised support environments you'll even find the processes running on a workstation to support X Windows access.   In some sense where the application is installed is irrelevant; what matters is that a reliable bi-directional protocol is available for communication.  To support increased security, particularly in certain sensitive environments, access may be restricted and controlled via an online IP changer.

X Windows running over something like UDP would never work very well; the ideal, as mentioned above, is something like TCP.  The communication relies on a stream of 8-bit bytes transferred across the connection between the client and server.   On a Unix system, when the client and server are installed on the same host, the system defaults to Unix domain sockets instead. This is because these domain protocols are more efficient on the same host and minimise the IP processing involved in the communication stream.
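The transport choice above can be sketched as a small function that picks an endpoint from a DISPLAY string. This is a simplified model, assuming the usual defaults (a Unix domain socket at `/tmp/.X11-unix/Xn` on the local host, TCP port 6000 + n otherwise); `example-host` is a placeholder.

```python
def x_endpoint(display: str):
    """Pick a transport for an X display string like ':0' or 'host:1'."""
    host, _, rest = display.partition(":")
    number = int(rest.split(".")[0] or 0)
    if host in ("", "unix"):
        # Same host: prefer the Unix domain socket, avoiding IP processing.
        return ("unix", f"/tmp/.X11-unix/X{number}")
    # Remote host: fall back to TCP on the conventional port.
    return ("tcp", (host, 6000 + number))

print(x_endpoint(":0"))              # local Unix domain socket
print(x_endpoint("example-host:1"))  # TCP to port 6001
```

Real X libraries perform essentially this selection internally when they parse the DISPLAY environment variable.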

Communication gets more complex when multiple connections are in use.  This is not unusual: X Windows is often used to allow multiple connections to an application running on a Unix system.    Sometimes these applications have specific requirements for full functionality, for example special graphics commands which affect the screen.   It is important to remember, though, that all X Windows does is give these clients access to the keyboard, display and mouse.  Although it might seem similar, it is not the same as a remote access protocol like Telnet, which allows logging in to a remote host but gives no direct control of the hardware.

The X Windows system is normally there to allow access to important applications, so it will usually be bootstrapped at start-up.  The server creates a TCP end point and does a passive open on a port (by default 6000 + n, where n is the number of the display – incremented to allow multiple concurrent connections).    Sometimes configuration files will be needed to support particular applications, especially those with graphical requirements like the BBC iPlayer; these must be downloaded before the session is established.  On a Unix server there will usually also be a domain socket whose name is incremented with the display number.
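The server side of that passive open can be sketched with the standard socket API. This is only a model of the listen step, assuming the conventional 6000 + n port scheme; a real X server does much more before accepting clients.

```python
import socket

def listen_for_display(n: int) -> socket.socket:
    """Passive open on the conventional TCP port for display number n."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", 6000 + n))  # display n listens on port 6000 + n
    srv.listen()
    return srv
```

Each additional display simply increments n, which is how multiple concurrent displays coexist on one host.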