Choosing the Right Proxies for Instagram

In the technology world, not all proxies are created equal.  There is huge variation among servers and their respective IP addresses, which can often be confusing.  It is almost 30 years since the first proxies were created in the CERN labs, and in the intervening time they have evolved into many different forms.  The very first proxies were simple gateways, and in many environments they remain exactly that.  However, the global network that is the internet means proxies now have to handle a far wider range of protocols and communication patterns.

The use of proxies has developed greatly too, especially as they have migrated from the server room out onto the internet.  Not only is the server configuration important, so are the IP address ranges assigned to it.  The operating system obviously has some impact; an outdated copy of IIS running on Windows NT is going to have a serious number of vulnerabilities built into it.  Yet for many purposes this matters less than the actual IP addresses assigned to the server.

Take, for example, people who want to use proxies to run multiple accounts on social media platforms like Instagram.  This is a hugely popular site focused on uploading and sharing photos.  Over the past few years it has developed from a simple photo album site into a genuine rival to sites like Facebook.  With millions of users, there are plenty of people who now make money from running multiple accounts.  The issue is that Instagram only wants people to run a single account and takes many steps to block multiple or concurrent access.

To bypass these blocks there's only one real solution: hiding your real IP address when accessing multiple accounts.  Most entrepreneurs use proxies to achieve this.  The key is to minimise the footprint so that each account appears to be accessed by a completely different person.  Picking up a free proxy address from the internet is obviously possible, but it's an extremely bad idea for a variety of reasons.  The main one is that these addresses are often already being used to access Instagram, so you risk your account being flagged too.  This is one of the reasons that people using the site for business purposes will almost always use private proxies for Instagram whenever they can.

These private servers have two crucial attributes.  Firstly, they are not used by anyone else, so you know some spammer isn't using them for fake likes or account management.  Secondly, the very best ones have residential IP addresses assigned to them.  This matters because most proxies currently have commercial IP address ranges assigned, as they generally reside in datacentres.  However, Instagram and many other sites know that their genuine home customers will never connect from these sorts of IP addresses.

So commercial addresses are often flagged as suspicious and commonly blocked from accessing the social media site.  It's a growing practice: last year Netflix blocked all access from commercial IP address ranges, which instantly stopped people using commercial VPNs to reach the service.  So if you want to buy proxies for Instagram, you should ensure they have residential IP addresses and definitely aren't already blocked by the site.  Sure, they're likely to cost a bit more, but they will ensure your Instagram accounts are safe and not put at risk.  Obviously these are only needed if you're trying to run multiple accounts or some Instagram promotion software; otherwise just continue to use your own IP address to access the site.


Causes of Network Latency – TCP Proxies

On any sort of internet connection, speed is of course important.  The fastest response comes from direct connections, where the two computers are physically linked.  The internet enables connections over thousands of miles, but this obviously involves many more hops in the route.  If you start to use proxy servers or VPNs then you add an additional hop, which will almost always slow your connection down even more.

Overall speed is obviously one issue, but depending on what you're doing online there's another that may be even more important.  Latency can cause real problems for all sorts of online applications, especially for people playing games online.  With a long delay on the connection, playing any sort of online action game can be virtually impossible.  If you don't believe me, try playing Call of Duty over a satellite internet connection!  Combine that with a slow VPN or rotating residential proxies and you can seriously impact the performance of your link.
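
To put rough numbers on this, here's a quick back-of-the-envelope sketch (assuming the classic 64 KB TCP window without window scaling) showing why the same window that saturates a LAN crawls over a satellite link:

```python
# Rough illustration of why long RTTs hurt TCP throughput: with a fixed
# receive window, a sender can have at most one window of data in flight
# per round trip, so throughput is bounded by window / RTT.

def max_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on TCP throughput for a given window and RTT."""
    return window_bytes * 8 / rtt_seconds

WINDOW = 65_535  # classic TCP window without window scaling

lan = max_throughput_bps(WINDOW, 0.001)        # ~1 ms LAN round trip
satellite = max_throughput_bps(WINDOW, 0.600)  # ~600 ms geostationary link

print(f"LAN:       {lan / 1e6:.1f} Mbit/s")    # hundreds of Mbit/s
print(f"Satellite: {satellite / 1e6:.2f} Mbit/s")  # under 1 Mbit/s
```

The same window that gives hundreds of megabits on a 1 ms LAN delivers well under a megabit at 600 ms, which is exactly the problem the next section's TCP Hybla sets out to address.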

TCP Hybla is an experimental TCP enhancement developed with the principal objective of counteracting the performance degradation caused by the long RTTs typical of satellite links. It consists of a set of procedures that includes, among others:

  • an enhancement of the standard congestion control algorithm (to grant long-RTT connections the same instantaneous segment transmission rate as a comparatively fast reference connection);
  • the compulsory adoption of the SACK policy;
  • the use of timestamps;
  • the adoption of Hoe’s channel bandwidth estimate;
  • the application and compulsory use of packet spacing techniques (also known as “pacing”).

TCP Hybla requires only sender-side modifications to TCP. As such, it is fully compatible with standard receivers.
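
To make the congestion control change concrete, here is a small Python sketch of Hybla's per-ACK window update as described in the original publications. The structure is simplified for illustration and is not the kernel implementation:

```python
# Sketch of the Hybla congestion-window update rules (applied per ACK):
# rho = RTT / RTT0 normalises the connection against a fast reference
# round-trip time RTT0 (typically 25 ms), so long-RTT links grow their
# window correspondingly faster.

RTT0 = 0.025  # reference round-trip time in seconds

def rho(rtt: float) -> float:
    # rho is clamped at 1 so short-RTT connections behave like NewReno
    return max(rtt / RTT0, 1.0)

def on_ack(cwnd: float, rtt: float, ssthresh: float) -> float:
    r = rho(rtt)
    if cwnd < ssthresh:             # slow start
        return cwnd + 2 ** r - 1
    return cwnd + r ** 2 / cwnd     # congestion avoidance

# With rtt == RTT0 (rho == 1) the rules reduce to standard NewReno:
print(on_ack(1.0, 0.025, 64))    # slow start: window 1 becomes 2
print(on_ack(10.0, 0.025, 8))    # congestion avoidance: 10 becomes 10.1
# A 600 ms satellite link (rho == 24) grows far more aggressively:
print(on_ack(10.0, 0.600, 8))    # roughly 10 + 24*24/10
```

Note how the satellite connection recovers window size in a handful of RTTs rather than the hundreds NewReno would need.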

For a full description of goals and characteristics of TCP Hybla refer to the publications section.

Performance.
TCP Hybla offers an impressive efficiency improvement on long-RTT satellite links with respect to TCP NewReno. It may be adopted either as an end-to-end protocol, or as the satellite-segment transport protocol in PEP (Performance Enhancing Proxy) designs based on the TCP splitting principle. It can also be used as the transport protocol in DTN architectures. See the performance section for further information.

Linux implementation.
Starting from kernel 2.6.13, Hybla has been included in the official Linux kernel. This implementation, based on Linux’s module mechanism, does not include the last two Hybla components: Hoe’s channel bandwidth estimate and packet spacing. These components are required to benefit fully from Hybla’s performance improvement. To this end, it is enough to patch the official kernel with the MultiTCP package, downloadable from the downloads section.

NS-2 implementation.
A TCP Hybla module has been developed for the widely adopted NS-2 simulation platform. This element can be downloaded from the downloads section.  At the time of writing it has yet to be tested extensively, though it should work on all platforms.

TATPA testbed.
TATPA stands for Testbed for Advanced Transport Protocols and Architecture. It is a testbed developed by Hybla’s publishers to carry out comparative efficiency assessments of new TCP variants (including Hybla) and alternative architectures, such as PEPs (Performance Enhancing Proxies) and DTNs (Delay Tolerant Networks). It can be fully managed remotely through a powerful web interface. For further information see the TATPA testbed and publications sections.

Projects.
TCP Hybla development is supported by the European Satellite Network of Excellence (SatNEx) project.

Using Proxy Servers for Privacy and Profit

Everyone online has a digital address.  It’s nothing complicated: it’s usually directly linked to your internet protocol address, or IP address for short.  Although this number varies over time, at the moment you connect to the internet it’s completely unique to you and you alone.  This number can be used to track your online activity to a surprising degree; it is the primary way careless online criminals are tracked down.  There are of course huge privacy implications to having this address recorded, and technology exists to hide your location from the websites you visit and from your ISP.  At the heart of it are tools like VPNs and proxy servers, which we’ll cover briefly in this article.

Most of us have probably used a proxy server in some environment.  If you use the internet at work, college or university, there’s a strong probability you connect through a proxy server.  They are frequently deployed to regulate access in and out of a company network from the wider internet.  The idea is that instead of examining a wide variety of individual connections, the proxy channels web traffic through a single point, which makes it much simpler to monitor and check for things like viruses.

To enforce use of the proxy server, network administrators apply a range of techniques.  On the client computer, use of the proxy can be made mandatory by hard-coding the settings into the browser.  Internet Explorer, for instance, could be deployed with the settings pre-configured using something like the Internet Explorer Administration Kit.  The settings can also be pushed to clients via group policy from Active Directory.

In addition, the system administrator may apply configuration on the perimeter firewall to control access across the network boundary.  This is achieved by permitting the IP address of the proxy and blocking all other addresses from leaving the network.  If there are multiple proxies, or they are set up in an array, then multiple addresses would be configured.  This stops anyone from bypassing the client-side settings or installing another browser and trying to reach the internet directly: if the address isn’t on the list, access is blocked.

Proxies on the internet are normally used in a slightly different context, although the functionality is much the same.  They are mostly used to provide a level of privacy by hiding your internet address from web servers.  Rather than seeing the IP address of your client, the web server (and your ISP) will only see the IP address of the proxy.  This also allows you to circumvent many of the geo-blocks that exist on the web: if you route your connection through a proxy located in the right country, you can bypass the block.  Countless people use these to watch things like the BBC from Spain or anywhere else outside the UK, though it can be hard to find a UK proxy fast enough to stream video without paying for one.  This has become more complicated over the last few years, as websites have begun to detect the use of proxies and block them automatically.  Nowadays you normally need a VPN to watch video from one of the major media sites, because plain proxies no longer work.

There are other common uses of proxies online, and they usually involve making money.  Countless individuals and companies use proxies to create additional digital identities.  Instead of being restricted to one connection, you can effectively use hundreds at the same time.  This is especially useful for online research, posting adverts, internet marketing, and even using e-commerce sites to buy stock for resale.  A common tactic is to use automated software to buy items like sneakers or tickets to popular concerts: normally you’d only be allowed one attempt, but with proxies you can purchase many.  This is why people combine purchasing software with the best rotating proxies to facilitate these purchases.  Many individuals make thousands from a simple software program, a good rotating proxy network and an ordinary home computer, buying and reselling limited-availability items like these.
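
As a purely illustrative sketch of the rotation idea (the proxy addresses below are placeholders from the TEST-NET range, and no real request is made), cycling a pool of proxies looks something like this:

```python
# Round-robin rotation through a pool of proxies so that successive requests
# appear to come from different addresses. The request itself is left out;
# the point is the rotation, which urllib can consume via ProxyHandler.

from itertools import cycle
from urllib.request import ProxyHandler, build_opener

PROXY_POOL = cycle([
    "http://203.0.113.10:8080",   # placeholder addresses (TEST-NET range)
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
])

def opener_for_next_proxy():
    """Build a urllib opener routed through the next proxy in the pool."""
    proxy = next(PROXY_POOL)
    return proxy, build_opener(ProxyHandler({"http": proxy, "https": proxy}))

for _ in range(4):
    proxy, _opener = opener_for_next_proxy()
    print("next request via", proxy)  # .10, .11, .12, then .10 again
```

Commercial rotating services do essentially this behind a single endpoint, swapping the outgoing address for you on every request.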

Specialized Proxies with Residential IP Addresses

Now to 99% of the population this concept will sound a little bizarre, but it does illustrate the relevance of proxies today. The term “sneaker proxies” doesn’t describe some especially sneaky configuration of a proxy server; it refers to the function they perform. Before we explain what they actually are and what they do, we first need a little background.

This is all about the latest fashion, and more particularly the latest sneakers (known as trainers outside the USA). Now in my day, if you wanted the trendiest trainers you’d wait for their release and pop down to the sports shop to buy them. Life is more complicated nowadays: there is a range of limited-edition sneakers that are hugely in demand but extremely hard to get. What happens is that the manufacturer releases a limited quantity, and does so in a very particular way to maintain demand.

  • The manufacturer releases limited-edition sneakers to sellers.
  • Middlemen typically buy them up.
  • They are sold on to consumers online.

This sounds simple, but unfortunately demand is extremely high worldwide and the makers release only a very small number of the sneakers. It’s a crazy market, and it’s incredibly difficult to obtain even a single pair if you play the game by the book. Even if you wait for an alert and instantly visit one of these sneaker sites, you’d have to be extremely lucky to get a single pair. It’s so exceptionally hard to pick them up that an entire sub-industry, with supporting technology, has developed to acquire them. So here’s what you need, and why sneaker proxies are an important part of this struggle.

Unfortunately, if you just play the game straight, it’s pretty unlikely you’ll get any of these rare sneaker releases. If you’re desperate for the latest fashion, or simply want to make a few bucks selling them on at a profit, there are methods that significantly improve your chances of getting multiple pairs. These releases are generally sold online by specialist sneaker sellers, but simply hoping to click and buy isn’t going to work.

So what do you need? How can you get a couple of pairs, or perhaps a great many, of the latest sneakers? Ideally there are three components you need to all but guarantee at least a few pairs.

A dedicated server: if you’re just after a couple of pairs for yourself, this step is perhaps unnecessary. If you’re in it as a business and want to maximise returns, it’s a sensible investment. Sneaker servers are simply dedicated web servers, ideally located close to the datacentres of companies like Nike, Supreme, Footsites and Shopify who sell these sneakers. You use these to host the next component, the bots and automated software described below.

Sneaker bots: there are a great many of these, and it’s best to research what’s working best at any point in time. Some bots work best with specific sites, but they all operate in a similar way. They are automated programs that keep trying to buy specified sneakers without a human having to sit there for hours pressing the buy button. You can configure the software to imitate human behaviour with infinite patience, chasing these sneakers day and night when they’re released. You can run bots on a PC or laptop with a fast connection, although they’re more effective on dedicated servers.
Sneaker proxies: this is perhaps the most vital, and most frequently forgotten, component if you’re aiming to become a sneaker baron. Automated software is fantastic for patiently trying to fill shopping baskets with the latest sneakers, but if you try it unprotected it gets banned pretty quickly. The retail sites quickly spot these multiple applications because they all originate from the same IP address, either your server’s or your computer’s. As soon as that happens, and it will happen very rapidly, they block the IP address and any request from it will be ignored: game over for that address, I’m afraid.

If you don’t get the proxy stage right, all the rest is wasted expense and effort. So what makes a suitable sneaker proxy? There are certainly plenty of free proxies around on the internet, and free is always tempting, but using them is pointless and indeed very risky.  Free proxies are a mixture of misconfigured servers, accidentally left exposed, which people jump on and use, and hacked or hijacked servers deliberately exposed so identity thieves can use them to steal usernames, accounts and passwords. Given that at some point you will need to pay for these sneakers with a credit or debit card, sending your financial details through a free proxy is utter insanity. Don’t do it.

Even if you do happen to pick a safe proxy which some careless network administrator has left exposed, there’s still little point. They are going to be slow, which means that however fast your computer or sneaker server is, your applications will run at a snail’s pace. You’re unlikely to succeed with a sluggish connection, and you’ll often see the bot timing out. The second problem is that there’s a crucial attribute a proxy needs for you to succeed, and essentially no free proxy has it: a residential IP address.
Many commercial websites are now well aware of people using proxies and VPNs to bypass geo-blocks or run automated software. They find it challenging to detect the programs themselves, but there’s a simple approach that blocks 90% of those who try: banning connections from commercial IP addresses. Residential IP addresses are only allocated to home users by ISPs, so it’s very difficult to obtain large numbers of them – read about them here. Virtually all proxies and VPNs available for hire come with commercial IP addresses, and these are simply not effective as sneaker proxies.

Sneaker proxies are different: they use residential IP addresses which look identical to those of home users and will be allowed access to virtually all websites. You still have to be careful with numerous connections, but the companies supplying these usually offer something called rotating backconnect configurations, which switch both settings and IP addresses automatically. These can replicate rotating proxies while being more affordable than buying dedicated residential proxies, which can get extremely expensive.

Software Testing: Static Analysis

There are several phases to a proper test analysis; the initial stage is normally the static review. This is the process of examining the static code to check for simple errors such as syntax problems or fundamental flaws in design and implementation. It’s not normally a long, exhaustive check, unless obvious or major issues are identified at this stage.

Just like reviews, static analysis looks for problems without executing the code. However, as opposed to reviews, static analysis is undertaken once the code has actually been written. Its objective is to find flaws in software source code and software models. Source code is any series of statements written in some human-readable programming language which can then be translated to equivalent computer-executable code; it is normally produced by the developer. A software model is an image of the final solution developed using techniques such as the Unified Modeling Language (UML); it is commonly produced by a software designer.  Normally these artefacts should be stored securely, with restrictions on who can alter them.

Static analysis can find issues that are hard to find during test execution by analysing the program code, e.g. representing instructions as control flow graphs (how control passes between modules) and data flows (making certain data is defined and used correctly). The value of static analysis is:

Early discovery of defects before test execution. As with reviews, the sooner a defect is found, the cheaper and simpler it is to fix.

Early warning about questionable aspects of the code or design, through the calculation of metrics such as a high-complexity measure. If code is too complex it can be more prone to error, depending on the attention programmers give it. If they know the code has to be complex, they are more likely to check and double-check it is correct; however, if it is unexpectedly complex there is a higher chance it will contain a defect.

Identification of defects not easily found by dynamic testing, such as non-compliance with development standards, and detection of dependencies and inconsistencies in software models, such as links or interfaces that were incorrect or unknown before the analysis was carried out.

Enhanced maintainability of code and design. By performing static analysis, defects are removed that would otherwise have increased the volume of maintenance needed after go-live. It can also identify complex code which, if fixed, will make the code more understandable and consequently easier to maintain.

Prevention of defects. By pinpointing a defect early in the life cycle, it is a great deal easier to identify why it is there in the first place (root cause analysis) than during test execution, providing information on process improvements that could prevent the same defect appearing again.
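
As a toy illustration of the complexity metrics mentioned above, here is a short sketch using Python's standard ast module. Real static analysers are far more thorough; this simply counts decision points per function, which is the kind of early-warning measure the text describes:

```python
# A toy static-analysis pass: compute a rough cyclomatic-complexity figure
# for each function in a piece of source code, without running it.

import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                  ast.ExceptHandler, ast.IfExp)

def complexity(source: str) -> dict:
    """Map each function name to 1 + number of decision points inside it."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            decisions = sum(isinstance(n, DECISION_NODES)
                            for n in ast.walk(node))
            scores[node.name] = 1 + decisions
    return scores

SAMPLE = """
def simple(x):
    return x + 1

def branchy(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x += i
    return x
"""

print(complexity(SAMPLE))  # {'simple': 1, 'branchy': 4}
```

A reviewer seeing an unexpectedly high score for `branchy` would know to give that function extra attention, exactly the early warning described above.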

Source: Finding Residential Proxies, James Williams

Understanding ARP and Lower Protocols

There are many important protocols you need to know about if you’re troubleshooting complicated networks.  First there are TCP, IP and UDP, plus a host of application protocols such as DHCP and DNS.  Any of these could be the issue when a network misbehaves.  However, often the most difficult to troubleshoot, and indeed to understand, are the lower-level protocols such as ARP.  Without some understanding of these, it can be extremely confusing to work out how they interact.

The Address Resolution Protocol usually sits in the background happily resolving addresses, but when it goes wrong it can cause some very difficult problems.  If you’re working on a complicated network, such as a residential proxy set-up or an ISP, there will be all sorts of hardware address resolution requests taking place on both local and remote networks.

Both logical and physical addresses are used for communication on a network.  Logical addresses allow communication between multiple networks and devices that are not directly connected.  Physical addresses facilitate communication on a single network segment between devices directly connected to each other through a switch.  In most cases, these two kinds of addressing must work together for communication to happen.

Consider a scenario where you want to communicate with a device on your network.  This device may be a server of some kind, or simply another workstation you need to share files with.  The application you are using to initiate the communication already knows the IP address of the remote host (by means of DNS, addressed elsewhere), meaning the system has all it needs to build the layer 3 through 7 information of the packet it wishes to transmit.

The only piece of information it needs at this point is the layer 2 data link information: the MAC address of the intended host.  MAC addresses are required because a switch that interconnects devices on a network uses a Content Addressable Memory (CAM) table, which lists the MAC addresses of the devices plugged into each of its ports.  When the switch receives traffic destined for a particular MAC address, it uses this table to know which port to send the traffic through.
If the destination MAC address is unknown, the transmitting device will first check for the address in its cache; if it is not there, it must be resolved through further communication on the network.

The resolution process that TCP/IP networking (with IPv4) uses to resolve an IP address to a MAC address is called the Address Resolution Protocol (ARP), defined in RFC 826.  The ARP resolution process uses only two packets: an ARP request and an ARP response.
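
To see what those two packets carry, here's a sketch that builds a raw ARP request with Python's standard struct module. The addresses are made up, and actually transmitting it would require a raw socket and root privileges; here we just construct and decode the 28-byte payload defined by RFC 826:

```python
# Building a raw ARP request to show what the two-packet exchange carries:
# hardware/protocol types, address lengths, an operation code, and the
# sender/target hardware and protocol addresses.

import struct

# htype, ptype, hlen, plen, oper, sha, spa, tha, tpa (network byte order)
ARP_FORMAT = "!HHBBH6s4s6s4s"

def arp_request(sender_mac: bytes, sender_ip: str, target_ip: str) -> bytes:
    return struct.pack(
        ARP_FORMAT,
        1,            # hardware type: Ethernet
        0x0800,       # protocol type: IPv4
        6, 4,         # MAC / IPv4 address lengths
        1,            # operation: 1 = request, 2 = reply
        sender_mac,
        bytes(int(o) for o in sender_ip.split(".")),
        b"\x00" * 6,  # target MAC unknown -- that's what we're asking for
        bytes(int(o) for o in target_ip.split(".")),
    )

pkt = arp_request(b"\xaa\xbb\xcc\xdd\xee\xff", "192.168.1.10", "192.168.1.1")
print(len(pkt))  # 28 bytes
oper = struct.unpack(ARP_FORMAT, pkt)[4]
print("request" if oper == 1 else "reply")
```

The reply that comes back has the same layout with the operation field set to 2 and the previously unknown MAC filled in, which the requester then stores in its ARP cache.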

Source: http://bbciplayerabroad.co.uk/free-trial-of-bbc-iplayer-in-australia/

Proxy Selection Using Hash Functions

One of the difficulties in running a large-scale proxy infrastructure is choosing which proxy to use. This is not as straightforward as it sounds, and there are various methods commonly used to select the best proxy.

In hash-function-based proxy selection, a hash value is calculated from some information in the URL, and the resulting value is used to pick the proxy. One approach would be to use the entire URL as input to the hash function. However, as we’ve seen before, it is harmful to make proxy selection effectively random: some applications expect a given client to contact a given origin server through the same proxy chain.

For this reason, it makes more sense to use the DNS host or domain name in the URL as the input to the hash function. That way, every URL from a given origin server host, or domain, will always go through the same proxy server (chain). In practice it is safer still to use the domain name instead of the full hostname (that is, drop the first part of the hostname); this avoids cookie problems where a cookie is shared across several servers in the same domain.
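
A minimal sketch of this scheme might look like the following. The proxy names are placeholders, the domain extraction is deliberately naive, and hashlib is used rather than Python's built-in hash() so the choice is stable across runs:

```python
# Hash-based proxy selection keyed on the domain name, so every request
# to the same domain goes through the same proxy.

import hashlib
from urllib.parse import urlsplit

PROXIES = ["proxy0.example:8080", "proxy1.example:8080", "proxy2.example:8080"]

def domain_of(url: str) -> str:
    """Naively drop the first hostname label: www.bbc.co.uk -> bbc.co.uk."""
    host = urlsplit(url).hostname
    parts = host.split(".")
    return ".".join(parts[1:]) if len(parts) > 2 else host

def pick_proxy(url: str) -> str:
    digest = hashlib.sha256(domain_of(url).encode()).digest()
    return PROXIES[int.from_bytes(digest[:8], "big") % len(PROXIES)]

# All hosts in the same domain map to the same proxy:
print(pick_proxy("http://www.example.co.uk/a"))
print(pick_proxy("http://images.example.co.uk/b"))  # same proxy as above
```

Because both hostnames reduce to the same domain, they hash to the same proxy, so cookies shared across the domain always travel the same path.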

It’s also useful when large amounts of data are involved, and can indeed be used to switch proxies even during the same connection.  For example, if someone is using a proxy to stream video, the connection will be live for a considerable time and carry a significant amount of data.  In these situations there is also limited need for any caching facilities, particularly with live video streams.

This approach may be subject to “hot spots”, that is, sites that are very well known and receive a tremendous number of requests. However, while the load may indeed be tremendous at those sites’ servers, the hot spots are considerably diluted at each proxy server: from the proxy’s point of view there are several smaller hot spots, and they start to balance each other out. Hash-function-based load balancing in the client can be accomplished using the client proxy auto-configuration feature (page 322). In proxy servers, it is done through the proxy server’s configuration file or its API.

The Cache Array Routing Protocol (CARP) is a more advanced hash-based proxy selection mechanism. It allows proxies to be added to and removed from the proxy array without relocating more than a single proxy’s share of documents. Simpler schemes use the modulo of the URL hash to determine which proxy a URL belongs to; if a proxy is added or removed, most of the documents get relocated, that is, the storage place assigned to them by the hash function changes.

Consider the allocations for three and four proxies: with simplistic modulo-based allocation, most of the documents in the three-proxy scenario end up on a different numbered proxy in the four-proxy scenario, so adding a fourth proxy server changes many of the assignments. Note that the proxies are numbered starting from zero so the hash modulo can be used directly.
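
The difference is easy to demonstrate. The following sketch (with a made-up URL set) compares plain modulo selection against a CARP-style highest-score hash, also known as rendezvous hashing, when a fourth proxy is added:

```python
# Modulo hashing relocates most documents when a proxy is added, while a
# CARP-style "highest score wins" hash moves only the share claimed by the
# new proxy. hashlib keeps the demo deterministic across runs.

import hashlib

def h(*parts: str) -> int:
    data = "|".join(parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def by_modulo(url: str, n_proxies: int) -> int:
    return h(url) % n_proxies

def by_rendezvous(url: str, n_proxies: int) -> int:
    # each proxy scores the URL; the highest score wins (CARP's core idea)
    return max(range(n_proxies), key=lambda p: h(url, str(p)))

urls = [f"http://site{i}.example/page" for i in range(1000)]

for scheme in (by_modulo, by_rendezvous):
    moved = sum(scheme(u, 3) != scheme(u, 4) for u in urls)
    print(f"{scheme.__name__}: {moved} of {len(urls)} URLs relocated")
```

With modulo selection roughly three-quarters of the URLs move; with the rendezvous scheme only about a quarter move, and every relocated URL lands on the newly added proxy.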


Tips on Debugging with telnet

It’s rather old school and can seem time-consuming in a world of automated, visual debugging tools, but sometimes the older tools are extremely effective.  It’s been a long time since telnet was used as a proper terminal emulator, simply because it is so insecure, yet it remains extremely useful as a troubleshooting tool because it operates at such a simple level.  It should be noted that it can be used more safely over a VPN connection, which will at least encrypt the link.


One of the biggest benefits of HTTP being an ASCII protocol is that it can be debugged using the telnet program. A binary protocol would be much harder to debug, as the binary data would have to be translated into a human-readable format. Debugging with telnet is done by establishing a telnet connection to the port that the proxy server is running on.

On UNIX, the port number can be specified as a second parameter to the telnet program:

telnet hostname port

For example, let’s say the proxy server’s hostname is step, and it is listening on port 8080. To establish a telnet session, type this at the UNIX shell prompt:

telnet step 8080

The telnet program will attempt to connect to the proxy server; you will see the line

Trying ...

If the server is up and running without problems, you will immediately get the connection, and telnet will display

Connected to servername.com
Escape character is '^]'.

After that, any characters you type will be forwarded to the server, and the server’s response will be displayed on your terminal. You will need to type in a legitimate HTTP request.

In short, the request consists of the request line containing the method, URL and protocol version; the header section; and a single empty line terminating the header section.
With POST and PUT requests, the empty line is followed by the request body. This section contains the HTML form field values, the file that is being uploaded, or other data being posted to the server.

The simplest HTTP request is one that has just the request line and no header section. Remember the empty line at the end! That is, press RETURN twice after typing in the request line.
GET http://www.google.com/index.html HTTP/1.1

(remember to hit RETURN twice)

The response will come back, such as:

HTTP/1.1 200 OK
Server: Google-Enterprise/3.0
Date: Mon, 30 Jun 1997 22:37:25 GMT
Content-type: text/html
Connection: close

This can then be used for further troubleshooting: simply type individual commands into the terminal and you can see the direct response. You should of course have permission to perform these actions on the server you are using. Typically these will be troubleshooting connections, but the same technique can be used in remote attacks; many attacks of this kind use something like a proxy or online IP changer to hide their true origin.
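
The same exchange can be scripted rather than typed. This Python sketch spins up a throwaway local HTTP server in a background thread so it is self-contained; against a real proxy you would simply connect to its host and port instead:

```python
# What telnet does, scripted with a raw socket: connect, send the ASCII
# request, read the ASCII response.

import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(("127.0.0.1", server.server_port)) as s:
    s.sendall(b"GET / HTTP/1.1\r\n"
              b"Host: 127.0.0.1\r\n"
              b"Connection: close\r\n\r\n")  # blank line ends the headers
    response = b""
    while chunk := s.recv(4096):
        response += chunk

server.shutdown()
print(response.decode().splitlines()[0])  # status line, e.g. "... 200 OK"
```

Everything on the wire is plain readable text, which is exactly why telnet works so well as an HTTP debugging tool.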

Components of a Web Proxy Cache

There are several important components to the standard cache architecture of your typical web proxy server. In order to implement a fully functional Web proxy cache, a cache architecture requires several components:

  • A storage mechanism for storing the cache data.
  • A mapping mechanism to establish the relationship between URLs and their respective cached copies.
  • A format for the cached object content and its metadata.

These components may vary from implementation to implementation, and certain architectures can do away with some components.

Storage

The main Web cache storage type is persistent disk storage. However, it is common to have a combination of disk and in-memory caches, so that frequently accessed documents remain in the main memory of the proxy server and don't have to be constantly reread from the disk.

The disk storage may be deployed in different ways:

  • The disk may be used as a raw partition, and the proxy performs all space management, data addressing, and lookup-related tasks.
  • The cache may be in a single file, or a few large files, which contain an internal structure capable of storing any number of cached documents. The proxy deals with the issues of space management and addressing.
  • The filesystem provided by the operating system may be used to create a hierarchical structure (a directory tree); data is then stored in filesystem files and addressed by filesystem paths. The operating system will do the work of locating the file(s).
  • An object database may be used.

Again, the database may internally use the disk as a raw partition and perform all space management tasks, or it may create a single file, or a set of files, and create its own "filesystem" within those files.

Mapping

In order to cache a document, a mapping has to be established such that, given the URL, the cached document can be looked up fast. The mapping may be a straightforward mapping to a filesystem path, although it can also be stored internally as a static route.
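One common way to turn a URL into a filesystem path is to hash it and use the leading hex digits as directory levels, so no single directory grows too large. The two-level fan-out and the root directory below are assumptions for illustration, not details from the text.

```python
import hashlib
import os

def cache_path(url, root="/var/cache/proxy"):
    # Hash the URL to a fixed-length hex string; the first two
    # byte-pairs pick the subdirectories, e.g. root/ab/cd/abcd....
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return os.path.join(root, digest[:2], digest[2:4], digest)
```

Note that a hash like this is one-way: the URL cannot be recovered from the file name alone, so implementations using it typically record the original URL in the cached object's metadata.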

Typically a proxy will store any resource that is accessed frequently. For example, on many UK proxies the BBC website is extremely popular, so it's essential that it is cached. Even for satellite offices, this allows people to reach the BBC through the company's internal network: the page is requested and cached by the proxy based in the UK, so instead of the BBC being blocked outside the UK it is still accessible.

Indeed, many large multinational corporations sometimes inadvertently offer these facilities. Employees who have the technical know-how can connect their remote access clients to specific servers in order to obtain access to normally blocked resources. So they might connect through a British proxy to access the BBC, then switch to a French proxy in order to access a media site like M6 Replay, which only allows French IP addresses.

It is also important to remember that direct mappings are normally reversible; that is, if you have the cache file name, you can use it to reproduce the unique URL for that document. There are many applications which can make use of this property, while others include some sort of mapping function based on hashes instead.
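A reversible mapping of the kind described here can be sketched with percent-encoding: the cache file name encodes the URL directly, so the URL can be recovered from the name alone, with no extra metadata. The function names are illustrative.

```python
from urllib.parse import quote, unquote

def url_to_name(url):
    # safe="" escapes everything unsafe in a file name,
    # including "/" and ":".
    return quote(url, safe="")

def name_to_url(name):
    # The mapping is reversible: unquoting recovers the exact URL.
    return unquote(name)

# Round trip:
#   url_to_name("http://www.bbc.co.uk/news")
#   -> "http%3A%2F%2Fwww.bbc.co.uk%2Fnews"
#   name_to_url("http%3A%2F%2Fwww.bbc.co.uk%2Fnews")
#   -> "http://www.bbc.co.uk/news"
```

Contrast this with a hash-based mapping, which is compact and uniform but one-way.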

Intrusion Detection – Post Attack Phase

If you're protecting a network, then understanding the options available to an attacker and the various phases of an attack can be crucial. When you detect an intrusion, it's important to quickly assess what stage the attack is at and what developments are likely. Whether it's a skilled attacker or some opportunist kid with a little technical skill makes a huge difference to the possible outcomes.

Even regular, normal traffic in suspicious or unusual situations can indicate a possible intrusion. If you suddenly notice TCP three-way handshakes completing on TCP ports 20 and 21 on a home Web server, but you know that you do not run an FTP server at home, it is safe to assume that something suspicious is going on.

Post-Attack Phase

After an attacker has successfully penetrated a host on your network, the further actions he will take for the most part follow no predictable pattern. Obviously the danger is much greater if the attacker is both skilled and plans to further exploit your network, while many will simply deface a few pages or use the host as a VPN to watch US or UK TV channels abroad.

This phase is where the attacker carries out his plan and makes use of any information resources as he sees fit. Some of the different options available to the attacker at this point include the following:

  • Covering tracks
  • Penetrating deeper into network infrastructure
  • Using the host to attack other networks
  • Gathering, manipulating, or destroying data
  • Handing over the host to a friend or hacker group
  • Walking or running away

If the attacker is even somewhat skilled, he is likely to attempt to cover his tracks. There are several methods; most involve the removal of evidence and the replacement of system files with modified versions. The replaced versions of system files are designed to hide the presence of the intruder. On a Linux box, netstat would be modified to hide a Trojan listening on a particular port. Attackers can also cover their tracks by destroying system or security log files that would alert an administrator to their presence. Removing logs can also disable a HIDS that relies on them to detect malicious activity. There are automated scripts available that can perform all these actions with a single command; these scripts are commonly referred to as rootkits.

Externally facing servers in large network topologies usually contain very little in the way of useful data for the attacker. Application logic and data are usually stored in subsequent tiers separated by firewalls. The attacker may use the compromised host to cycle through the first three attack phases again and penetrate deeper into the system infrastructure. Another possibility for the black hat is to make use of the host as an attack or scanning box. When skilled hackers want to penetrate a high-profile network, they often compromise a chain of hosts to hide their tracks. It's not unusual for attackers to relay their connections through multiple servers, bouncing between remote sites such as a Russian, a Czech, and a German proxy, before attacking the target network.

The most obvious possibilities for the attacker are to gather, manipulate, or destroy data. The attacker may steal credit card numbers and then format the server. The cracker could subtract money from a transactional database. The possibilities are endless. Sometimes the attacker's motivation is solely to intrude into vulnerable hosts to see whether he can; skilled hackers take pride in pulling off complicated hacks and may not desire to cause damage. He may turn the compromised system over to a friend to play with, or to a hacker group he belongs to. The cracker may also realize that he has gotten in over his head by attacking a highly visible host, such as a military or major financial institution's server, and want to walk away from it, praying he isn't later discovered.