Using Trial Proxies To Make Your Sneaker Fortune

Proxies no longer just sit in the server room quietly routing traffic. The early years, when a proxy was basically a simple web gateway that nobody really bothered with, are long gone. Proxies are now big business and exist in all sorts of guises, and there's serious money to be made whether you're using them or running them.

The main growth area for proxies has taken them away from the corporate environment and into the mainstream. The actual servers themselves haven't changed that much since the early days of Glype and Microsoft ISA Server. However, the way they operate and the IP address ranges assigned to them have changed a great deal. Instead of protecting the network and dumbly routing traffic, proxies nowadays let people control multiple digital identities from their desktop.

The biggest user of proxies that I know is a 24 year old man who, I believe, lives in New York. I only know him online, and although he's never worked in IT directly, he knows more about proxies than most of the Cisco associates I've worked with over the years. The main reason is that he spends upwards of several thousand dollars on proxies every single month. Yes, every single month! You might well wonder what the hell he does with all these proxies, so I'll tell you.

Every Proxy Connection is Potentially a New Digital Identity

As network professionals we're of course all very familiar with the attributes of an IP address, but have you ever considered that every unique IP address is potentially a new identity for someone? This is exactly why proxies have become so popular: they can assign multiple digital identities through hundreds of IP addresses. Each unique connection allows the user to operate as a separate identity.

You might think, so what? Who needs hundreds or thousands of separate digital identities? Well, my friend is perhaps the easiest example, as he lives, breathes and works with designer sneakers: hugely prized and overpriced footwear that people will literally spend thousands of dollars on. The rarest are released periodically through the web stores of the biggest names in sports footwear. Nike, Adidas and Supreme are just a few of the in-demand suppliers who release these 'limited edition' sneakers.

For you and me, it's a matter of pot luck to log onto the site when a release takes place and hope we hit gold with our single purchase attempt. If we're extremely lucky we'll cop a pair, although this is actually pretty difficult to do. However, people like my friend make it much less likely, as they sit with automated software called sneaker bots running through proxies which switch IP addresses on each new connection. For every single attempt you and I get, this setup can try hundreds of times to buy a different pair. The idea is to buy as many pairs as possible by appearing to come from unique customers (purchases are always limited to one pair per customer).

Obviously these scarce items are extremely valuable and each pair of sneakers can be resold without difficulty for many times the purchase price. The potential for profit is huge and there are many ‘sneaker millionaires’ all over the world who use this or some sort of variation on this tactic.

You can't just use any proxy though, as the e-commerce servers are actively looking for people who operate like this. The classification and location of the IP address are crucial. Anything which looks unusual will lead to a block or redirection; the IP addresses must be clean and must also originate from the right place. For example, if you're hitting the Supreme servers for a limited San Francisco release, then using IP addresses registered to a datacentre in Moscow is a complete waste of time. They will be instantly detected and flagged as suspicious, and any accounts using them will be blacklisted.

The secret is to make every individual connection look like a normal US home user attempting to cop a pair of their favorite sneakers. Firstly, the IP address must be from either a residential or mobile range, and secondly, it should ideally originate from the US (if it's a US release). This is why it's important to use premium proxies with the ability to switch between different IP addresses. The best ones will rotate through each address when requested by the bot and provide a unique connection to the sneaker server. If you want to try this for yourself, you'll first need a bot (try googling AIOBot to start) and a set of decent proxies. There are a few suppliers which offer trial proxies for a few dollars so you can test the water. The technical side is not that challenging, although it is constantly developing; in my opinion, knowing which sneakers to buy is the hardest part!
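The rotation idea above can be sketched in a few lines. This is a minimal illustration, not a real bot: the proxy addresses are placeholders from the TEST-NET documentation range and the checkout URL is invented, so the actual request line is left commented out.

```python
# Hedged sketch: cycling a pool of proxies so each purchase attempt
# appears to come from a distinct IP address. Proxy addresses and the
# target URL are placeholders, not real endpoints.
import itertools
import urllib.request

PROXY_POOL = [
    "http://203.0.113.10:8080",   # example addresses (TEST-NET-3 range)
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
]

def next_proxy(pool_cycle):
    """Return the next proxy in rotation plus an opener configured to use it."""
    proxy = next(pool_cycle)
    handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    return proxy, urllib.request.build_opener(handler)

pool = itertools.cycle(PROXY_POOL)
for attempt in range(5):
    proxy, opener = next_proxy(pool)
    # opener.open("https://sneaker-store.example/checkout")  # one identity per attempt
    print(f"attempt {attempt}: routed via {proxy}")
```

A real setup would also rotate user agents and cookies, since the IP address is only one part of the digital identity.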

TCP/UDP Port Numbers

Both TCP and UDP require port numbers in order to communicate with the upper layers. These port numbers are used to keep track of the many conversations which criss-cross the network simultaneously. Source port numbers are dynamically assigned by the source host, usually from the range at 1024 and above. The numbers below 1024 are reserved for specific services as defined in RFC 1700; they are known as well known port numbers.

Any virtual circuit which is not assigned to a specified service will be given a random port number from the range at 1024 and above. The port numbers identify the source and destination in the TCP segment.

Here are some common port numbers associated with well known services:

  • FTP – 21
  • Telnet – 23
  • DNS – 53
  • TFTP – 69
  • POP3 – 110
  • NNTP (news) – 119

As you can see, all the port numbers assigned to well known services fall below 1024, whereas numbers of 1024 and above are assigned by the upper layers to set up connections with other hosts.
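The split between the two ranges can be captured in a tiny helper. This is just an illustration of the rule described above, with a handful of the well known services hard-coded:

```python
# Ports below 1024 belong to the "well known" range reserved for
# specific services; source hosts pick their own ports from 1024 up.
WELL_KNOWN = {21: "FTP", 23: "Telnet", 53: "DNS", 69: "TFTP", 110: "POP3", 119: "NNTP"}

def classify_port(port: int) -> str:
    if port < 1024:
        return WELL_KNOWN.get(port, "reserved/well-known")
    return "dynamically assigned (ephemeral)"

print(classify_port(110))    # the POP3 service port
print(classify_port(49152))  # a typical dynamically assigned source port
```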

The internet layer exists for two main reasons: routing, and providing a single network interface to the upper layers. As regards routing, none of the upper or lower layer protocols have any routing functions; routing is primarily the job of the internet layer. As well as routing, the internet layer has a second function: to provide a single network interface and gateway to the upper layer protocols.
Application programmers use this layer to build network access into their applications. It is important because it ensures there is a standard way to access the network layer, so the same functions apply whether you're on an Ethernet or Token Ring network.

IP provides a single network interface to access all of these upper layer protocols. The following protocols specifically work at the internet layer:

  • Internet Protocol (IP)
  • Internet Control Message Protocol (ICMP)
  • Address Resolution Protocol (ARP)
  • Reverse Address Resolution Protocol (RARP)

The Internet Protocol is essentially the internet layer; all the other protocols merely support its functionality. So if, for instance, you buy UK proxy connections, IP would look at each packet's address. Then, using a routing table, the protocol decides where the packet should be routed next. The network access layer protocols at the bottom of the stack are not able to see the entire network topology, as they only have connections to physical addresses.

In order to decide on a specific route, the IP layer needs to answer two questions. The first is which network the destination host is on, and the second is what the host's ID is on that network. These are determined and allocated as the logical and hardware addresses. The logical address is better known as the IP address and is a unique identifier of the location of a specific host on a network. IP addresses are allocated by location and are used by websites to determine access to resources; for example, to watch BBC iPlayer in Ireland you'd need to route through a UK IP address rather than your assigned Irish address.
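The two questions above, which network and which host on it, fall straight out of an address and its mask. A quick sketch with the standard library's ipaddress module (the address itself is just an illustrative private-range example):

```python
# Split a logical (IP) address into its network and host portions.
import ipaddress

iface = ipaddress.ip_interface("192.168.10.37/24")
print("network the host is on:", iface.network)            # 192.168.10.0/24
print("network portion:", iface.network.network_address)   # 192.168.10.0
print("host address:", iface.ip)                           # 192.168.10.37
```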


Data Encapsulation and the OSI Model

When a client needs to transmit data across the network to another device, an important process happens. This process is called encapsulation and involves adding protocol information from each layer of the OSI model. Every layer in the model only communicates with its peer layer on the receiving device.

In order to communicate and exchange information, each layer uses something called a PDU (Protocol Data Unit). These are extremely important and contain the control information attached to the data at each layer of the model. The control information is normally attached to the header of the data field, although it can also be attached as a trailer at the end of the data.

The encapsulation process is how the PDU is attached to the data at each layer of the OSI model. Every PDU has a specific name depending on the information contained in its header. The PDU is only read by the peer layer on the receiving device, at which point it is stripped off and the data handed to the next layer.

Only upper layer information is passed down to the next level before transmission onto the network. The data is then handed down to the Transport layer, which sets up a virtual circuit to the receiving device by sending a SYN packet. In most cases the data needs to be broken up into smaller segments, with a Transport layer PDU attached to the header of each one.

Network addressing and routing through the internetwork happen at the Network layer for each data segment. Logical addressing, for example IP, is used to transport every data segment to its destination network. When the Network layer protocol adds its control header to the data received from the Transport layer, the result is described as a packet or datagram. This addressing information is essential to ensure the data reaches its destination, and it allows the data to traverse all sorts of networks and devices, with the right delivery information added to subsequent PDUs along the journey.

One aspect that often causes confusion is the layer where packets are taken from the Network layer and placed on the actual delivery medium (cable or wireless, for example). This can be even more confusing when complications such as VPNs are included, which involve routing the data through a specified path. For example, people route through a VPN server in order to access BBC iPlayer abroad, which adds additional PDUs to the data. This stage is covered by the Data Link layer, which encapsulates the data into a frame and adds to the header the hardware addresses of both the source and the destination.

Remember, for this data to be transmitted over a physical network it must be converted into a digital signal. A frame is therefore simply a logical group of binary digits (1s and 0s) which is read only by devices on the local network. Receiving devices synchronize to the digital signal and extract the 1s and 0s. The devices then rebuild the frames and run a CRC (Cyclic Redundancy Check) to ensure the result matches the transmitted frame.
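The layering described above can be shown as a toy function: each layer simply prepends its own header (and the Data Link layer also appends a trailer) to whatever it received from the layer above. The header contents here are invented placeholders, not real protocol fields:

```python
# Toy illustration of encapsulation down the stack. Each layer wraps
# the PDU it receives from the layer above; headers are placeholders.
def encapsulate(payload: bytes) -> bytes:
    segment = b"[TCP hdr]" + payload             # Transport layer: segment
    packet = b"[IP hdr]" + segment               # Network layer: packet/datagram
    frame = b"[ETH hdr]" + packet + b"[FCS]"     # Data Link layer: frame + CRC trailer
    return frame

frame = encapsulate(b"hello")
print(frame)
```

The receiving host runs the same process in reverse: each peer layer strips its own header and hands the remainder up the stack.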


Network Topology: Ethernet at Physical Layer

Ethernet is commonly implemented in a shared hub environment where, if one station broadcasts a frame, all devices must synchronize to the digital signal and extract the data from the physical wire. All the devices sharing the physical medium need to listen to each frame, as they are considered to be on the same collision domain. The downside is that only one device can transmit at a time, while every device still has to synchronize and extract all the data.

If two devices try to transmit at the same time (and this is very possible) then a collision will occur. Many years ago, in 1984 to be precise, the IEEE Ethernet committee released a method of dealing with this situation: a protocol called Carrier Sense Multiple Access with Collision Detect, or CSMA/CD for short. The protocol tells all stations to listen for devices trying to transmit, and to stop and wait if they detect any activity. The length of the wait varies randomly, the idea being that when a collision is detected it won't simply be repeated.
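The random wait CSMA/CD uses is the truncated binary exponential backoff: after each successive collision the station draws a random number of slot times from a range that doubles, capped at 2^10. A small sketch of that rule (the slot time shown is the classic 51.2 microsecond value for 10 Mbps Ethernet):

```python
# Truncated binary exponential backoff as used by CSMA/CD after a
# collision: wait a random number of slot times in [0, 2^k - 1],
# where k is the collision count capped at 10.
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds

def backoff_delay(collision_count: int) -> float:
    k = min(collision_count, 10)
    slots = random.randint(0, 2 ** k - 1)
    return slots * SLOT_TIME_US

random.seed(1)  # deterministic for the demonstration
for n in (1, 2, 3):
    print(f"after collision {n}: wait {backoff_delay(n):.1f} us")
```

Because the range doubles each time, two colliding stations quickly pick different delays and the collision is unlikely to recur.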

It's important to remember that Ethernet uses a bus topology. This means that whenever a device transmits, the signal must run from one end of the segment to the other. The standard also defines Ethernet as a baseband technology, which means that when a station does transmit it is allowed to use all the potential bandwidth on the wire. There is no allowance for other devices to use the remaining bandwidth at the same time.

Over the years the original IEEE 802.3 standards have been updated but here are the initial settings:

  • 10Base2: 10 Mbps baseband technology with up to 185 meters of cable length. Also known as thinnet, capable of supporting up to 30 workstations on one segment. Not often seen now.
  • 10Base5: 10 Mbps baseband technology allowing up to 500 meters of cable length. Known as thicknet.
  • 10BaseT: 10 Mbps using category 3 twisted pair cabling. Here every device must connect directly to a network hub or switch, which also means there can only be one device per network segment.

Both the speeds and topologies have changed greatly over the years, and of course 10 Mbps is no longer adequate for most applications. In fact most networks now run on gigabit switches to meet the increasing demands of network enabled applications. Remember that allowing access to the internet means bandwidth requirements will rocket, even allowing for places like the BBC blocking VPN access (article here).

Each of the 802.3 standards defines an Attachment Unit Interface (AUI) that allows one-bit-at-a-time transfer from the data link media access method to the Physical layer. This means the Physical layer becomes adaptable and can support emerging or newer technologies which operate in a different way. There is one notable exception though: the AUI interface cannot support 100 Mbps Ethernet for one specific reason, namely that it cannot cope with the high frequencies involved. This is obviously also the case for even faster technologies like Gigabit Ethernet.

John Smith

Author and Network VPN Blogger.


Using SSL for Email and Internet Protocols

If you want to increase the security of your email messaging then there are several routes you can take. First of all, you should look at digitally signing and encrypting all your email messages. There are several applications that can do this, or you could switch your email to the cloud and look at a server based email system. Most of the major suppliers of web based secure mail are extremely secure with regard to interception and endpoint security; however, you obviously have to trust your email to a third party.

Many companies won't be happy outsourcing their messaging like this, as it's often the most crucial part of a company's digital communications. So what are the options if you want to operate a secure and digitally advanced email messaging service within your corporation? The first place to investigate is increasing the security of authentication and data transmission. There are plenty of RFCs (Requests for Comments) on these subjects, particularly related to email and its associated protocols.

Here are a few of the RFC based protocols related to email:

  • Post Office Protocol 3 (POP3) – the simple but effective protocol used to retrieve email messages from an inbox on a dedicated email server.
  • Internet Message Access Protocol 4 (IMAP4) – this is usually used to retrieve any messages stored on an email server. It includes those stored in inboxes, and other types of message boxes such as drafts, sent items and public folders.
  • Simple Mail Transfer Protocol (SMTP) – very popular and ubiquitous email protocol, generally just used to send email messages to recipients.
  • Network News Transfer Protocol (NNTP) – Not specifically an email protocol, however can be used as such if required! It’s normally used to post and download newsgroup messages from news servers.  Perhaps slightly outdated now, but a seriously efficient protocol that can be used for distributing emails.

The big security issue with all these protocols is that, by default, the majority send their messages in plain text. You can counteract this by encrypting at the client level; the easiest method is simply to use a VPN. Many people already use a VPN to access things like various media channels (read this post about BBC iPlayer VPN, which is not exclusively about security but more about bypassing region blocks).

However, remember that when an email message is transmitted in clear text it can be intercepted at various levels. Anyone with a decent network sniffer and access to the data could read the message content. The solution is in some ways obvious and implied in the title of this post: implement SSL. Using this extra security layer you can protect all the simple RFC based email protocols, and better still, they slot in simply to interact with standard email systems like Exchange.

It works and is easy to implement. When SSL is enabled, the server accepts connections on the SSL port rather than the standard port the email protocol normally uses. If you have only one or two users who need a high level of email security then using a virtual private network might be sufficient. There are many sophisticated services that come with support; for instance, this BBC Live VPN is based in Prague and has some high level security experts working in support.
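The port switch just described is easy to see in code. The mapping below lists the conventional plain-text ports alongside their SSL counterparts; the commented-out connection lines show how Python's standard library classes would use them (the hostname is a placeholder, so no connection is actually made here):

```python
# Plain-text vs SSL ports for the email protocols discussed above.
import ssl

PLAIN_PORTS = {"SMTP": 25, "POP3": 110, "IMAP4": 143, "NNTP": 119}
SSL_PORTS = {"SMTP": 465, "POP3": 995, "IMAP4": 993, "NNTP": 563}

context = ssl.create_default_context()  # verifies the server certificate
# import poplib;  poplib.POP3_SSL("mail.example.test", SSL_PORTS["POP3"], context=context)
# import smtplib; smtplib.SMTP_SSL("mail.example.test", SSL_PORTS["SMTP"], context=context)

for proto in PLAIN_PORTS:
    print(f"{proto}: port {PLAIN_PORTS[proto]} in clear, {SSL_PORTS[proto]} over SSL")
```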


Authentication of Anonymous Sessions

Any automated identity system needs one thing: the ability to create and distribute the authentication of users' credentials and the rights they assert. Many people look initially to the world leader, Kerberos, but there are other systems which are just as capable. In recent years, SAML (Security Assertion Markup Language) has become increasingly popular and is becoming something of an industry standard. There are good, practical reasons for this, including its ability to use XML to represent various security credentials. It defines a protocol to request and receive the credential data which flows from a SAML authority service.

In reality, although SAML can look quite complicated at first glance, it is relatively straightforward to use. It's ideally positioned to deal with security and authentication issues online, including for the many users who protect their privacy and indulge in anonymous surfing. Remember, the security assertions will normally only be valid for a particular domain, which means the user's identity can be protected to some extent.

A SAML authority can be described as a service, usually online, which responds to specific SAML requests. The authority replies with assertions, which come in three distinct types:

Authentication: a SAML authority receives a request about a specific user's credentials. The reply stipulates that authentication was completed, and at what time.

Attribute: when an authentication assertion has been returned, a SAML attribute authority can be asked for the attributes associated with the subject.  These are returned and are known as attribute assertions.

Authorization: a SAML authorization assertion is returned in response to a request about permissions on specified resources. This is referenced against an access control list with the relevant permissions, which could even be dynamically referenced and updated. The response is typically quite simple, i.e. that subject A has been granted permission to access resource Z.

Although these assertions are quite distinct, it is very likely they will all be handled by a single authority. However, in highly secure or distributed systems they may be spread across distinct servers in a domain.
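To make the assertion idea concrete, here is a hand-rolled sketch of roughly what an authentication assertion looks like on the wire. The element names follow the SAML 2.0 assertion vocabulary, but the issuer, subject and timestamp values are invented for illustration; a real authority would also sign the assertion:

```python
# Minimal sketch of a SAML 2.0 authentication assertion built with the
# standard library. Values are placeholders; no signature is included.
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:SAML:2.0:assertion"
assertion = ET.Element(f"{{{NS}}}Assertion", {"ID": "_abc123", "Version": "2.0"})
ET.SubElement(assertion, f"{{{NS}}}Issuer").text = "https://idp.example.test"
subject = ET.SubElement(assertion, f"{{{NS}}}Subject")
ET.SubElement(subject, f"{{{NS}}}NameID").text = "subjectA"
ET.SubElement(assertion, f"{{{NS}}}AuthnStatement",
              {"AuthnInstant": "2024-01-01T00:00:00Z"})

xml_text = ET.tostring(assertion, encoding="unicode")
print(xml_text)
```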

SAML has become more popular because it is ideal for use in web based and distributed systems, as opposed to Kerberos, which is not as flexible. For example, it could be used to allocate permissions for users to download videos like this based on permissions assigned to a subscriber. This means the permissions can be integrated with all sorts of web services and functions, including integration with SOAP, a protocol often used for exchanging structured information across computer networks.


Digital Certificate Authorities

A digital certificate essentially associates specific identity information with a public key, linking the two together in a trusted package. It is important to realise that the certificate is always signed by the certificate issuer, so we can easily verify that the information has not been changed or modified in any way. However, it is more difficult to determine whether the identity and the public key were associated together correctly in the first place.

Remember, there are no real restrictions on who can issue certificates; indeed, using OpenSSL virtually anyone with some limited technical experience can. There are a large number of certificate programming APIs and they get easier to use every day. These should be distinguished, however, from trusted certificate issuers, known as certificate authorities or CAs. The role of the certificate authority is to accept and process requests for certificates which come from organisations and individual entities. Larger organisations who require high levels of security, like the BBC for their VPNs, would use only tier one certificate authorities who provide a high level of assurance. The CA must authenticate the information received from these entities, issue the certificates, and maintain a repository of information about both the certificates and the subjects.

Here’s a brief summary of the roles and responsibilities of a Certificate Authority.

  • Certificate Enrollment Process – simply the process which details how an entity must apply for a digital certificate.
  • Authentication of Subject – the Certificate Authority must ensure that the applicant is indeed who they claim to be. There are different levels to this, directly linked to the level of assurance the CA gives the certificate.
  • Certificate Generation – once the identity has been assured, the certificate must be generated and released. Generating the certificate is relatively simple, but the CA must ensure that the process and delivery mechanism are completely secure. Any issues at this stage can compromise the security and validity of the certificate.
  • Certificate Distribution – as mentioned above, the certificates and associated private keys must be distributed to the applicant.
  • Revocation of Certificate – when there is a question over the integrity of a released certificate, there must be a defined procedure to revoke it. This should be done securely, and the revoked certificate should be added to a list of invalid certificates.

The Certificate Authority will usually publish the standards and processes that underpin the above activities in something called a CPS (Certification Practice Statement). In secure applications these would be included in the security benchmarks, for example for authentication of something like an IP cloaker or VPN system. These are not meant to be long, legalese-filled documents but practical, readable guides which detail the exact processes and the underpinning activities. Although usually designed to be straightforward, they are often fairly lengthy documents, many pages long.
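The point above, that virtually anyone with OpenSSL can issue a certificate, is worth seeing in practice; it is exactly why trusted CAs and their CPS documents matter. A sketch of generating a self-signed certificate (subject fields and filenames are placeholders):

```shell
# Generate a private key and a self-signed certificate in one step.
# -nodes leaves the key unencrypted; fine for a demo, not for production.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout key.pem -out cert.pem -days 365 \
    -subj "/CN=example.test/O=Example Org"

# Inspect the result: for a self-signed cert, subject and issuer match.
openssl x509 -in cert.pem -noout -subject -issuer
```

Nothing here proves the identity in the subject field is genuine, which is precisely the gap a certificate authority's authentication process is meant to close.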

X Windows System

The X Window System, commonly abbreviated to just X, is a client/server application which allows multiple clients to use the same display managed by a server. The server in this instance manages the display, mouse and keyboard. The client is any remote application which runs on a different host (or on the same one). In most configurations the standard protocol used is TCP, because it's the most commonly understood by both client and host. Twenty years ago, though, many other protocols were used by X Windows; DECnet was a typical choice in large Unix and Ultrix environments.

Sometimes the X Windows system could be a dedicated piece of hardware, although this is becoming less common. Most of the time the client and server run on the same host, allowing inbound connections from remote clients when required. In some specialised support environments you'll even find the processes running on a workstation to support X Windows access. In a sense, where the application is installed is irrelevant; what matters is that a reliable bi-directional protocol is available for communication. To increase security, particularly in sensitive environments, access may be restricted and controlled via an online IP changer.

X Windows running over something like UDP is never going to work very well; the ideal, as mentioned above, is something like TCP. The main communication stream relies on 8 bit bytes transferred across the connection between the client and server. On a Unix system, when the client and server are installed on the same host, the system will default to Unix domain protocols instead, because these are more efficient on the same host and minimize the IP processing involved in the communication stream.

Communication gets more complex when multiple connections are in use. This is not unusual, as X Windows is often used to allow multiple connections to an application running on a Unix system. Sometimes these applications have specific requirements for full functionality, for example special graphics commands which affect the screen. It is important to remember, though, that all X Windows does is give these clients access to the keyboard, display and mouse. Although it might seem similar, it is not the same as a remote access protocol like Telnet, which allows logging in to a remote host but no direct control of the hardware.

The X Windows system is normally there to allow access to important applications, so it will usually be bootstrapped at start up. The server creates a TCP end point and does a passive open on a port (by default 6000 + n). Sometimes configuration files will be needed to support particular applications, especially those with graphical requirements like the BBC iPlayer; these must be downloaded before the session is established. Here n is the number of the display, so it is incremented to allow multiple concurrent connections. On a Unix server this will usually be a domain socket, likewise incremented by the display number.
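The "6000 + n" convention can be captured in a few lines: parse a DISPLAY value such as ":0" or "remotehost:2.0" and compute the TCP port the X server listens on for that display number. This is a simplified sketch; it ignores Unix domain socket displays and other DISPLAY variants:

```python
# Map an X11 DISPLAY string of the form [host]:display[.screen]
# to the TCP port the server listens on (6000 + display number).
X_BASE_PORT = 6000

def display_to_port(display: str) -> int:
    after_colon = display.split(":", 1)[1]
    display_number = int(after_colon.split(".", 1)[0])
    return X_BASE_PORT + display_number

print(display_to_port(":0"))           # first display
print(display_to_port("remote:2.0"))   # third display on a remote host
```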

Introduction to IP Routing

Conceptually, IP routing is pretty straightforward, especially when you look at it from the host's point of view. If the destination is directly connected, such as over a direct link or on the same Ethernet network, the IP datagram is simply forwarded to its destination. If it's not directly connected, the host sends the datagram to its default router and lets that handle the next stage of delivery. This simple example covers most scenarios, for example an IP packet being routed through a proxy to allow access to the BBC iPlayer.

The basis of IP routing is that it is done hop by hop. The Internet Protocol does not know the complete route to any destination except those directly connected to it. IP routing relies on sending the datagram to the next-hop router, assuming that host is closer to the destination, until the datagram reaches a router which is directly connected to the destination.

IP routing performs the following steps:

  • Search the routing table for an entry matching both the network and host ID. If there is one, the packet can be sent directly to that destination.
  • Search the routing table for an entry that matches the network ID. Only one entry is needed for an entire network, and the packet is then sent to the indicated next hop.
  • If both searches fail, look for the entry marked 'default'. The packet is then sent to the next-hop router associated with this entry.

If all these searches fail then the datagram is marked undeliverable. Even if it has a custom address, perhaps an IP address for Netflix routing, that won't matter. In reality most lookups fail the first two searches and are forwarded to the default gateway, which could be a router or even a proxy site which forwards traffic to the internet.

If the packet cannot be delivered (usually down to some fault or configuration error) then an error message is generated and sent back to the originating host. The two key points to remember are that a default route can be specified for all packets even when the destination network ID is not known, and that the ability to specify routes to networks without having to specify every host is what makes the whole system work: routing tables contain a few thousand destinations instead of several million!
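The three-step lookup above can be sketched with the standard library's ipaddress module. The table entries and next hops below are illustrative: a /32 host route, a /24 network route, and the 0.0.0.0/0 default entry; the most specific (longest prefix) match wins:

```python
# Toy routing table lookup: host route, then network route, then default.
import ipaddress

ROUTING_TABLE = [
    ("10.0.0.5/32", "direct"),        # host route (network + host ID match)
    ("10.0.0.0/24", "10.0.0.1"),      # network route (network ID match)
    ("0.0.0.0/0", "192.168.1.254"),   # default route
]

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, hop in ROUTING_TABLE:
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)  # keep the most specific match
    if best is None:
        raise ValueError("undeliverable: no matching route")
    return best[1]

print(next_hop("10.0.0.5"))    # host route wins
print(next_hop("10.0.0.99"))   # network route
print(next_hop("8.8.8.8"))     # falls through to the default
```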

Some Notes on Firewall Implementations

For anyone considering implementing a new firewall on a network, here are a few notes to help you through the process. Before you get started there's a very important first step you should always take when implementing on medium to large networks: establish a firewall change control board, consisting of users, system administrators and technical managers from across your organisation. Failing to establish proper change control and implementation processes can be very dangerous on a firewall. A badly thought out rule could create huge security or operational problems; that 'deny all' rule might look safe, but if it ends up blocking mission critical applications you won't be popular.

Hardware firewalls are generally very secure and not too expensive. The earliest type of network firewall is what's referred to as a packet filter. Establishing a firewall for your infrastructure is an excellent way to provide some basic security for your services.
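A packet filter of the kind just mentioned can be shown in miniature: rules are evaluated top to bottom, the first match decides, and an implicit deny-all sits at the end. The rules below are illustrative, not a recommended policy:

```python
# First-match packet filter sketch with an implicit deny-all.
RULES = [
    {"action": "allow", "proto": "tcp", "port": 443},  # HTTPS in
    {"action": "allow", "proto": "tcp", "port": 22},   # SSH for admins
    {"action": "deny",  "proto": "tcp", "port": 23},   # no Telnet
]

def filter_packet(proto: str, port: int) -> str:
    for rule in RULES:
        if rule["proto"] == proto and rule["port"] == port:
            return rule["action"]
    return "deny"  # nothing matched: implicit deny-all

print(filter_packet("tcp", 443))  # allow
print(filter_packet("tcp", 23))   # deny
print(filter_packet("udp", 53))   # deny (no rule matches)
```

Rule ordering is exactly why a change control board matters: inserting a broad rule above a specific one silently changes which rule wins.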


Firewalls frequently include functionality to hide the real address of a computer connected to the network. You can install most firewall products on a network and have protection almost immediately. A host-based firewall might be a daemon or service that is part of the operating system, or an agent application such as endpoint protection software; these often arrive in conjunction with antivirus programs. Alternatively, a software firewall can be set up on a home computer with an internet connection, or you may add an extra software component to your existing firewall. If you are primarily responsible for your company's firewall, it's best to have some secure remote access in case of emergencies. Be careful with the rules which allow your own access, though; you don't want to let users stream UK TV through a VPN service.

If a network connection is controlled by NetworkManager, you can also use nm-connection-editor to modify its firewall zone. Once a secure tunnel is established, you can launch tools such as vncviewer through it rather than exposing them directly. Be especially careful if you allow SSH connections from anywhere online on the standard port (22).
Once you have a server to test from and targets to evaluate, you can continue hardening the setup. You may want to locate a package repository closer to your server, and by using a DNS forwarder you can override the DNS servers supplied by your ISP and use fast, higher performance servers instead, repeating this for each domain you would like the server to handle. Note too that many mail servers block dynamic DNS hosts, so you may find your own server gets rejected.

Legitimate application behaviour shouldn't be confused with malware behaviour. Some antivirus applications may ask you to switch off the firewall and disable the antivirus in order to install. Before you install any software, the first important step is to check the configuration of your computer against the system prerequisites of the program, then update the local package index and install the software if it's not already present.

The configuration of your computer must match the demands of the software to be installed. If you are happy with your present configuration and have tested that it's still functional once you restart the service, you can safely enable the service permanently. The only configuration change that actually impacts the functionality of the service will probably be the port definition, where you determine the port number and protocol you wish to open. If all your interfaces can best be managed by a single zone, it's probably simpler to just pick the best default zone and use that for your configuration. You can then modify your network interfaces to automatically choose the right zones. Whenever you transition an interface to a different zone, be conscious that you are most likely changing which services are operational. Opening up an entire interface to incoming packets may not be restrictive enough, and you may want more control over what to allow and what to reject.

James Hellings

Author of IP Cloaker