Specialized Proxies with Residential IP Addresses

To 99% of the population this idea will sound a little bizarre, but it does illustrate the relevance of proxies today. The term "sneaker proxies" does not describe some incredibly sneaky configuration of a proxy server; it refers to the function they carry out. Before we explain what they actually are and what they do, we first need a little background.

This subject is all about the latest fashion, and more particularly the latest sneakers and shoes (known as trainers outside the USA). In my day, if you wanted the trendiest trainers you'd wait for their release and pop down to the sports shop to buy them. Life is a lot more complicated nowadays, and there is a selection of limited edition sneakers that are hugely in demand but extremely difficult to get. The manufacturer releases a limited quantity of these, and does so in a very particular way to maintain demand.

  • The manufacturer releases limited edition sneakers to sellers
  • Middlemen typically buy them up
  • These are sold online to consumers

This sounds simple, but unfortunately worldwide demand is extremely high and the makers release only a very small number of the sneakers. It's a crazy market, and it's incredibly difficult to obtain even a single pair if you play the game by the book. Even if you wait for an alert and instantly go to one of these sneaker sites, you'd need to be extremely lucky to get a single pair. They are so hard to pick up that an entire sub-industry, with supporting technology, has been built around acquiring them. So here's what you need, and why sneaker proxies are an essential component of this struggle.

If you just play the game straight, it's pretty unlikely you're going to get any of these rare sneaker releases. So if you're desperate for the latest fashion, or perhaps just want to make a few bucks selling them on at a profit, there are methods to significantly improve your chances of getting multiple pairs. These releases are generally sold online by various specialist sneaker sellers, but simply hoping to click and buy isn't going to work.

So what exactly do you need? How can you get a couple of pairs, or perhaps a great many, of the latest sneakers? Ideally there are three components, which together more or less guarantee at least a few pairs.

A dedicated server: if you're just after a couple of pairs for yourself, this step is perhaps not necessary, but if you're in it as a business and want to maximise returns it's a sensible investment. Sneaker servers are simply dedicated web servers, ideally located close to the datacentres of companies like Nike, Supreme, Footsite and Shopify who sell these sneakers. You use these to host the next component, the bots and automated software described below.

Sneaker bots: there are a great many of these, and it's best to research what's working best at any point in time. Some bots work best with specific sites, but they all operate in a similar way. They are automated programs which keep trying to buy specified sneakers without a human having to sit there for hours pressing the buy button. You can configure the software to imitate human behaviour with infinite patience, chasing these sneakers day and night when they're released. You can run these bots on a PC or laptop with a fast connection, although they're more effective on dedicated servers.
Sneaker Proxies
Now this is perhaps the most vital, and most frequently forgotten, step if you're planning to become a sneaker baron. Automated software is fantastic for patiently trying to fill shopping baskets with the latest sneakers, but try it unprotected and it gets banned pretty quickly. The retail sites easily spot these multiple applications because they all originate from the same IP address, that of your server or your computer. As soon as that happens, and it will happen very rapidly, they block the IP address and any request from it is ignored; then it's game over for that address, I'm afraid.

If you don't get the proxy stage right, all the rest will be pointless expense and effort. So what makes a suitable sneaker proxy? There are certainly tons of free proxies around on the internet, and free sounds great, but using them is pointless and indeed very risky. Free proxies are a combination of misconfigured servers, accidentally left exposed, which people find and use, and hacked or taken-over servers deliberately left exposed so identity thieves can use them to steal usernames, accounts and passwords. Given that at some point you will need to pay for these sneakers with some sort of credit or debit card, using free proxies to send your financial information is utter insanity. Don't do it.

Even if you do happen to pick a safe proxy which some dozy network administrator has accidentally exposed, there's still little point. It is going to be slow, which means that however fast your computer or sneaker server is, your applications will run at a snail's pace. You're unlikely to succeed with a sluggish connection, and you'll often see the bot timing out. The second problem is that there is a crucial attribute the proxy needs for you to succeed, and essentially no free proxies have it: a residential IP address.
Many commercial websites are now well aware of people using proxies and VPNs to bypass geoblocks or run automated software. They find it difficult to detect the programs themselves, but there's a simple approach which blocks 90% of the people who try: they ban connections from commercial IP addresses. Residential IP addresses are only allocated to home users by ISPs, so it's very difficult to obtain large numbers of them. Virtually all the proxies and VPNs available for hire are assigned commercial IP addresses, and these are not effective as sneaker proxies at all.

Sneaker proxies are different: they use residential IP addresses, which look identical to home users and are allowed access to virtually all websites. You still have to be careful with multiple connections, but the companies who supply these usually offer something called rotating backconnect configurations, which switch both settings and IP addresses automatically. These replicate rotating proxies while being more affordable than buying dedicated residential proxies, which can get exceedingly expensive.
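To make the backconnect idea concrete, here is a minimal sketch of how such a gateway is typically used from code. The gateway hostname, credentials and test URL are placeholders assumed for illustration; a real provider supplies its own single endpoint and rotates the outgoing residential IP behind it.

    import requests

    # Hypothetical rotating backconnect gateway from a proxy provider:
    # one fixed endpoint, with the outgoing residential IP rotated by
    # the provider rather than by the client.
    PROXY = "http://user:pass@gateway.example-provider.com:8000"
    proxies = {"http": PROXY, "https": PROXY}

    for attempt in range(3):
        # Each request should leave through a different residential IP.
        response = requests.get("https://httpbin.org/ip",
                                proxies=proxies, timeout=10)
        print(response.json())

The point of the design is that the client configuration never changes; all the rotation complexity stays on the provider's side.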

Software Testing: Static Analysis

There are several phases to a proper test analysis; the initial stage is normally the static review. This is the process of examining the code, without executing it, to check for simple errors such as syntax problems, or fundamental flaws in design and implementation. It is not normally a long, exhaustive check, unless of course some obvious or major issues are identified at this stage.

Like reviews, static analysis looks for problems without executing the code. However, as opposed to reviews, static analysis is undertaken once the code has actually been written. Its objective is to find flaws in software source code and software models. Source code is any series of statements written in some human-readable computer programming language which can then be translated to equivalent computer-executable code; it is normally produced by the developer. A software model is an image of the final solution developed using techniques such as the Unified Modeling Language (UML); it is commonly produced by a software designer. Both should be stored securely, with restrictions on who can alter them; if accessed remotely, access should ideally be over a dedicated line, or at least through some sort of secure VPN.

Static analysis can find issues that are hard to find during test execution by analysing the program code, e.g. the instructions to the computer can be examined in the form of control flow graphs (how control passes between modules) and data flows (making certain data is defined and accurately used). The value of static analysis is:

Early discovery of defects before test execution. As with reviews, the sooner a defect is located, the cheaper and simpler it is to fix.

Early warning about questionable aspects of the code or design, through the calculation of metrics such as a high-complexity measure. If code is too complex it can be more prone to error, partly because complexity affects the amount of attention programmers give to the code. If they recognise that the code has to be complex, they are more likely to check and double-check that it is correct; however, if it is unexpectedly complex, there is a higher chance that it will contain a defect.

Identification of defects not easily found by dynamic testing, such as non-compliance with development standards, as well as detection of dependencies and inconsistencies in software models, such as links or interfaces that were either incorrect or unknown before static analysis was carried out.

Enhanced maintainability of code and design. By performing static analysis, defects can be removed that would otherwise have increased the volume of maintenance needed after go-live. It can also identify complex code which, if fixed, will make the code more understandable and consequently easier to maintain.

Prevention of defects. By pinpointing a defect early in the life cycle, it is a great deal easier to identify why it is there in the first place (root cause analysis) than during test execution, providing information on possible process improvements that could prevent the same defect appearing again.
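To make the high-complexity measure mentioned above concrete, here is a minimal sketch of the kind of metric such a tool might calculate, written for Python source using only the standard library's ast module. The counting rule (one plus the number of branch points) is a simplification of what production static analysis tools compute.

    import ast
    import textwrap

    def cyclomatic_complexity(source):
        # Rough estimate: 1 + the number of branching constructs found.
        branch_nodes = (ast.If, ast.For, ast.While, ast.Try,
                        ast.ExceptHandler, ast.BoolOp)
        tree = ast.parse(textwrap.dedent(source))
        return 1 + sum(isinstance(node, branch_nodes)
                       for node in ast.walk(tree))

    sample = """
    def classify(x):
        if x < 0:
            return "negative"
        for i in range(x):
            if i % 2:
                print(i)
        return "done"
    """
    print(cyclomatic_complexity(sample))  # 1 + three branch points = 4

A reviewer who sees an unexpectedly high score for a routine that should be simple has exactly the early warning described above.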

Source: Finding Residential Proxies, James Williams

Don’t Expect Internet Privacy by Default

When the internet was first conceived back in the 1980s (the date varies depending on your definition; I prefer 1983, when TCP/IP was adopted by ARPANET), there was little thought about security. It was a form of communication allowing disparate devices and people to talk to each other, and no one expected it to end up where it is. Unfortunately, to allow cross-compatibility, compromises had to be made, and the security of your data is one of them.

However, there are ways to add some security. Websites try with SSL, but the end user can help too. Most users who have security concerns, or who have experienced cyber crime, will have come across VPN software. A VPN is a virtual private network, which encrypts your data as it travels across the internet. These come in all shapes and sizes, from basic personal security products to advanced residential IP rotating proxies.

For many people there is a pervasive picture of a VPN user: something like a young person in a hoodie, hunched over a laptop in a coffee shop, possibly trying to hack into government computers while on the run from the authorities. Because a VPN conceals your geographic location and your web traffic, there's a common idea that the user is up to no good and certainly has something to hide.

The reality is a very long way from this stereotype; although many hackers do use VPNs, so do an awful lot of ordinary people. Most large corporations have been using VPNs for decades to support inbound connections from remote users. If a salesman needs access to the product database on the company's network, it's much simpler to let him or her connect through the internet and view the latest version. This is far more secure than travelling around with DVDs, and it obviously ensures that he or she has the most recent version.

If you make an ordinary connection over the internet, all your web traffic is pretty much viewable; anyone with a mind to can intercept and read it. If you're logging in and connecting to a secured share, that traffic would include usernames and passwords. So to protect these connections, a company will commonly install a VPN client on the laptop and make certain it is used to encrypt the connection back to the company network. This is completely legitimate and, indeed, intelligent business practice.

Ordinary home users make use of VPNs for very similar reasons. The internet is fundamentally insecure, with minimal security built in by default. Sure, you can access secure sites over SSL when you have to enter a credit card or payment information, but this is the exception, not the rule; most websites are not secured, and the vast majority of information flies across the wires in clear text.

In addition to the general insecurity of the web, there's the further issue of privacy. Your browsing data is easily available from a variety of sources. For a start, your ISP holds a complete record of everything you do online, and depending on where you live this can be routinely and easily accessed. Using a VPN stops this, transforming your internet activity into an encrypted stream which is unreadable without your permission. Are VPNs used by cyber criminals and terrorists? Sure, but also by millions of people who think that what they do online shouldn't be part of the public record.

VPN systems are becoming more and more sophisticated, driven by demand and the risk of detection. There are all sorts of variations, including different configurations and ports to dodge detection. You can also have them use home-based IP addresses through specific residential IP providers.

In a large number of countries VPNs are not illegal at all, simply a standard business and personal security tool. In some countries, however, this is not the case, and you can get into trouble if caught using one. Countries that ban the use of VPNs include China, Iraq, Belarus and Turkey. Various other countries allow only authorised services, which usually means those which can be compromised if required. People still use VPNs in most of these nations; indeed, in Turkey almost all expats use one to watch things like British and American TV online. It's actually quite difficult to detect a VPN in use, but that doesn't stop it technically being illegal in those locations.

Source: http://www.onlineanonymity.org/proxies/residential-vpn-ip-address/

Understanding ARP and Lower Protocols

There are many important protocols that you need to understand if you're troubleshooting complicated networks. First of all there are TCP, IP and UDP, plus a host of application protocols such as DHCP and DNS. Any of these could be the issue when you're having problems with a network. Often, though, the most difficult to troubleshoot, and indeed to understand, are the lower-level protocols such as ARP. Without some understanding of these, it can be extremely confusing to work out how they interact.

The Address Resolution Protocol usually sits in the background happily resolving addresses, but when it goes wrong it can cause some very difficult problems. If you're working on a complicated network, such as a residential proxy setup or an ISP, there will be all sorts of hardware resolution requests taking place on both local and remote networks.

Both logical and physical addresses are used for communication on a network. Logical addresses allow communication among multiple networks and between devices that are not directly connected. Physical addresses facilitate communication on a single network segment between devices directly connected to each other through a switch. In most cases, these two kinds of addressing must work together for communication to happen.

Consider a scenario where you want to communicate with a device on your network. This device may be a server of some kind, or simply another workstation you have to share files with. The application you are using to initiate the communication already knows the IP address of the remote host (by means of DNS, addressed elsewhere), meaning the system has all it needs to build the layer 3 through 7 information of the packet it wishes to transmit.

The only piece of information it needs at this point is the layer 2 data link information, namely the MAC address of the intended host. MAC addresses are required because a switch interconnecting devices on a network uses a Content Addressable Memory (CAM) table, which lists the MAC addresses of the devices plugged into each of its ports. When the switch receives traffic destined for a particular MAC address, it uses this table to know through which port to send the traffic.
If the destination MAC address is not known, the transmitting device first checks for the address in its cache; if it is not there, the address must be resolved through further communication on the network.

The resolution process that TCP/IP networking (with IPv4) uses to resolve an IP address to a MAC address is the Address Resolution Protocol (ARP), defined in RFC 826. The ARP resolution process uses only two packets: an ARP request and an ARP response.
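Those two packets are easy to observe in practice. The sketch below uses the third-party scapy packet library (it must be installed separately and normally needs root privileges to send raw frames); the target IP address is a placeholder for a host on your own subnet.

    from scapy.all import ARP, Ether, srp

    def resolve_mac(ip):
        # Broadcast an ARP request ("who has <ip>?") and return the MAC
        # address carried in the first ARP reply, or None on timeout.
        request = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip)
        answered, _ = srp(request, timeout=2, verbose=False)
        for _, reply in answered:
            return reply[ARP].hwsrc
        return None

    print(resolve_mac("192.168.1.1"))  # placeholder local-subnet address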

Source: http://bbciplayerabroad.co.uk/free-trial-of-bbc-iplayer-in-australia/

Network Attacks: Denial of Service

A few years ago, being a network administrator was a much easier job. Sure, you probably had fewer resources, and the technology choices for running applications were limited, but there was one crucial difference: the internet. As soon as even one computer on your network is connected to the internet, the game changes completely; you have internet access from the network, but it works the other way around too. Any server or PC in your network is potentially accessible from the internet.

A Denial of Service (DoS) attack is any attack that interferes with the function of a computer so that genuine users can no longer access it. DoS attacks are possible against most network equipment, including switches, servers, firewalls, remote access machines, and just about every other network resource. A DoS attack may be specific to a service, as in an FTP attack, or may target an entire machine. The kinds of DoS attack are diverse and wide-ranging, but they can be split into two distinct categories that relate to intrusion detection: resource depletion and malicious packet attacks.

Malicious packet DoS attacks work by transmitting abnormal traffic to a host in order to cause the service, or the host itself, to crash. Crafted packet DoS attacks occur when software is not properly coded to handle abnormal or unusual traffic; commonly, out-of-spec traffic can cause software to react unexpectedly and crash. Attackers can use crafted packet DoS attacks to bring down IDSs, even Snort. A specially crafted tiny ICMP packet with a size of 1 was discovered to cause Snort v1.8.3 to core dump. This version of Snort did not correctly define the minimum ICMP header size, which made the DoS possible.

One of the reasons denial of service attacks are so common is that the attacker is extremely difficult to trace. The most obvious factor is that most of these attacks don't require valid responses to complete, so it's very hard to identify the source. On top of that, there is a huge number of anonymising resources available online, including VPNs, anonymous proxies and providers of residential IP address networks.

Along with out-of-spec traffic, malicious packets can carry payloads which cause a system to crash. A packet's payload is taken as input to a service; if the input is not properly checked, the program can be DoSed. The Microsoft FTP DoS attack demonstrates the broad assortment of DoS attacks available to black hats in the wild. The first step in the attack is to start a legitimate FTP connection. The attacker then issues a command with a wildcard pattern (such as * or ?). Within the FTP server, the function that processes wildcard sequences in FTP commands does not allocate adequate memory when performing pattern matching, so the attacker's command containing a wildcard pattern can cause the FTP service to crash. This DoS, along with the Snort ICMP DoS, are two instances of the many thousands of potential DoS attacks out there.

The other method of denying service is resource depletion. A resource depletion DoS attack works by flooding a service with so much normal traffic that legitimate users cannot access it. An attacker overrunning a service with typical traffic can exhaust finite resources such as bandwidth, memory, and processor cycles.

A classic memory resource exhaustion DoS is the SYN flood, which abuses the TCP three-way handshake. The handshake starts with the client sending a TCP SYN packet; the host then sends a SYN ACK in response, and the handshake is concluded when the client responds with an ACK. If the host does not receive the final ACK, it sits idle and waits with the session open, and every open session consumes a certain amount of memory. If enough three-way handshakes are initiated, the host consumes all available memory waiting for ACKs. The traffic generated by a SYN flood is normal in appearance, and most servers nowadays are configured to leave only a certain number of TCP connections open. Another classic resource depletion attack is the Smurf attack.

A Smurf attack works by capitalising on open network broadcast addresses. A broadcast address forwards all packets to every host on the destination subnet, and every host on that subnet replies to the source address listed in the traffic sent to the broadcast address. The attacker sends a stream of ICMP echo requests (pings) to a broadcast address, which has the effect of magnifying a single ICMP echo request up to 250 times.

Furthermore, the attacker spoofs the source address so that the target receives all the ICMP echo reply traffic. An attacker with a 128 Kb/s DSL connection can generate a 32 Mb/s Smurf flood. DoS attacks commonly use spoofed IP addresses because the attack succeeds even if the response is misdirected; the attacker needs no response, and in cases like the Smurf attack wants at all costs to avoid one. This makes DoS attacks difficult to defend against, and even harder to trace.
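The amplification arithmetic behind that figure is worth spelling out; the numbers below simply restate the example above, with 250 standing in for the number of hosts answering each broadcast echo request.

    # Smurf amplification, restating the example above.
    uplink_kbps = 128        # attacker's DSL upstream bandwidth
    responders = 250         # hosts replying to each broadcast ping
    flood_kbps = uplink_kbps * responders
    print(flood_kbps / 1000, "Mb/s")   # 32.0 Mb/s aimed at the victim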

Further Reading: http://www.changeipaddress.net/us-ip-address-for-netflix/

Proxy Selection Using a Hash-Based Function

One of the difficulties in running a large-scale proxy infrastructure is choosing which proxy to use. This is not as straightforward as it sounds, and various methods are commonly used for selecting the best proxy.

In hash-function-based proxy selection, a hash value is calculated from some information in the URL, and the resulting hash value is used to pick the proxy. One approach would be to use the entire URL as input to the hash function. However, as we've seen before, it is harmful to make the proxy selection completely random: some applications expect a given client to contact a given origin server using the same proxy chain.

For this reason, it makes more sense to use the DNS host or domain name in the URL as the basis for the hash function. This way, every URL from a certain origin server host, or domain, will always go through the same proxy server (chain). In practice, it is even safer to use the domain name instead of the full host name (that is, drop the first part of the hostname); this avoids cookie problems where a cookie is shared across several servers in the same domain.
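A minimal sketch of this selection scheme follows; the proxy hostnames are placeholders, and the domain extraction (dropping the first hostname label) is the simplified rule described above rather than a full public-suffix lookup.

    import hashlib
    from urllib.parse import urlsplit

    # Hypothetical proxy array.
    PROXIES = ["proxy0.example.net:8080",
               "proxy1.example.net:8080",
               "proxy2.example.net:8080"]

    def pick_proxy(url):
        host = urlsplit(url).hostname or ""
        # Drop the first label so www.example.com and shop.example.com
        # hash alike and share one proxy (and its cookies).
        domain = host.split(".", 1)[1] if host.count(".") >= 2 else host
        digest = hashlib.sha1(domain.encode()).digest()
        return PROXIES[int.from_bytes(digest[:4], "big") % len(PROXIES)]

    print(pick_proxy("http://www.example.com/index.html"))
    print(pick_proxy("http://shop.example.com/cart"))  # same proxy as above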

This approach is also useful when large amounts of data are involved, and it can even be used to switch proxies during the same connection. For example, if someone is using a proxy to stream video, such as BBC iPlayer from abroad, the connection will be live for a considerable time and carry a significant amount of data. In these situations there is also limited need for caching, particularly with live video streams.

This approach may be subject to "hot spots", that is, sites that are very well known and receive a tremendous number of requests. However, while the load may indeed be tremendous at those sites' servers, the hot spots are considerably scaled down at each proxy server. There are several smaller hot spots from the proxy's point of view, and they start to balance each other out. Hash-function-based load balancing in the client can be accomplished by using the client proxy auto-configuration feature (page 322). In proxy servers, this is done through the proxy server's configuration file, or its API.

The Cache Array Routing Protocol (CARP) is a more advanced hash-function-based proxy selection mechanism. It allows proxies to be added to and removed from the proxy array without relocating more than a single proxy's share of documents. More simplistic hash functions use the modulo of the URL hash to determine which proxy the URL belongs to; if a proxy gets added or deleted, most of the documents get relocated, that is, the storage place assigned to them by the hash function changes.

[Figure: simplistic hash-function-based proxy allocation, using the modulo of the URL hash to determine which proxy to use, with allocations shown for three and four proxies. When a fourth proxy server is added, most documents move to a differently numbered proxy; the changed locations are marked with a diamond. The proxies are numbered from zero so that the hash modulo can be used directly.]
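The sketch below shows the idea behind CARP using rendezvous ("highest random weight") hashing, the technique CARP is built on; it is a simplification for illustration, not the exact hash function from the CARP specification. Each URL goes to the proxy scoring highest for it, so removing one proxy relocates only that proxy's own share of documents.

    import hashlib

    def carp_pick(url, proxies):
        # Score every (proxy, URL) pair and take the highest scorer;
        # documents move only when their winning proxy leaves the array.
        def score(proxy):
            data = (proxy + url).encode()
            return int.from_bytes(hashlib.md5(data).digest()[:8], "big")
        return max(proxies, key=score)

    array = ["proxy0.example.net", "proxy1.example.net", "proxy2.example.net"]
    print(carp_pick("http://www.example.com/index.html", array))
    # Adding a fourth proxy leaves most existing assignments untouched.
    print(carp_pick("http://www.example.com/index.html",
                    array + ["proxy3.example.net"]))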

Source: John Ferris

Tips on Debugging with telnet

It's rather old school, and can seem very time-consuming in a world of automated and visual debugging tools, but sometimes the older tools can be extremely effective. It's been a long time since telnet was used as a proper terminal emulator, simply because it is so insecure, yet it remains extremely useful as a troubleshooting tool because it engages at a very simple level. It should be noted that it can be used more securely over a VPN connection, which will at least encrypt the traffic.


One of the biggest benefits of HTTP being an ASCII protocol is that it is possible to debug it using the telnet program. A binary protocol would be much harder to debug, as the binary data would have to be translated into a human-readable format. Debugging with telnet is done by establishing a telnet connection to the port that the proxy server is running on.

On UNIX, the port number can be specified as a second parameter to the telnet program:

telnet hostname port

For example, let's say the proxy server's hostname is step, and it is listening on port 8080. To establish a telnet session, type this at the UNIX shell prompt:

telnet step 8080

The telnet program will attempt to connect to the proxy server; you will see the line

Trying

If the server is up and running without problems, you will immediately get the connection, and telnet will display
Connected to servername.com
Escape character is '^]'.

After that, any characters you type will be forwarded to the server, and the server's response will be displayed on your terminal. You will need to type in a legitimate HTTP request.

In short, the request consists of the request line, containing the method, URL, and protocol version; the header section; and a single empty line terminating the header section.
With POST and PUT requests, the empty line is followed by the request body: the HTML form field values, the file being uploaded, or other data being posted to the server.

The simplest HTTP request is one that has just the request line and no header section. Remember the empty line at the end! That is, press RETURN twice after typing in the request line.
GET http://www.google.com/index.html HTTP/1.1

(remember to hit RETURN twice)

The response will come back, such as,
HTTP/1.1 200 OK
Server: Google-Enterprise/3.0
Date: Mon, 30 Jun 1997 22:37:25 GMT
Content-type: text/html
Connection: close

This can then be used for further troubleshooting: type individual commands into the terminal and you can see the direct response. You should, of course, have permission to perform these functions on the server you are using. Typically this will be legitimate connection troubleshooting, but the same technique can be used in remote attacks, which are often routed through a proxy or IP changer to hide the attacker's true location.
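When telnet isn't available, the same hand-debugging can be scripted with a raw socket. The sketch below replays the session above; the hostname step and port 8080 are this section's example values, and a Host header is added because HTTP/1.1 expects one.

    import socket

    # Connect to the example proxy and send the request line, a Host
    # header, and the blank line that terminates the header section.
    with socket.create_connection(("step", 8080), timeout=5) as conn:
        conn.sendall(b"GET http://www.google.com/index.html HTTP/1.1\r\n"
                     b"Host: www.google.com\r\n"
                     b"\r\n")
        print(conn.recv(4096).decode(errors="replace"))  # status + headers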

Components of a Web Proxy Cache

There are several important components in the standard cache architecture of a typical web proxy server. In order to implement a fully functional web proxy cache, the architecture requires:

  • A storage mechanism for storing the cached data.
  • A mapping mechanism establishing the relationship between URLs and their respective cached copies.
  • A format for the cached object content and its metadata.

These components may vary from implementation to implementation, and certain architectures can do away with some of them.

Storage: The main web cache storage type is persistent disk storage. However, it is common to have a combination of disk and in-memory caches, so that frequently accessed documents remain in the main memory of the proxy server and don't have to be constantly reread from disk.

The disk storage may be deployed in different ways:

  • The disk may be used as a raw partition, with the proxy performing all space management, data addressing, and lookup-related tasks.
  • The cache may be in a single file, or a few large files, which contain an internal structure capable of storing any number of cached documents; again, the proxy deals with the issues of space management and addressing.
  • The filesystem provided by the operating system may be used to create a hierarchical structure (a directory tree); data is then stored in filesystem files and addressed by filesystem paths, and the operating system does the work of locating the file(s).
  • An object database may be used. The database may internally use the disk as a raw partition and perform all space management tasks, or it may create a single file, or a set of files, and build its own "filesystem" within those files.

Mapping: In order to cache a document, a mapping has to be established such that, given the URL, the cached document can be looked up fast. The mapping may be a straightforward mapping to a filesystem path, although it can also be stored internally as a static route.
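One common way to realise this mapping is to hash the URL into a filesystem path. Here is a minimal sketch; the cache root is a hypothetical location, and the two-level directory fan-out is a common convention for keeping individual directories small rather than the layout of any particular proxy.

    import hashlib
    from pathlib import Path

    CACHE_ROOT = Path("/var/cache/proxy")   # hypothetical cache root

    def cache_path(url):
        # Hash the URL, then fan out into a two-level directory tree so
        # no single directory accumulates millions of entries.
        h = hashlib.sha1(url.encode()).hexdigest()
        return CACHE_ROOT / h[:2] / h[2:4] / h

    print(cache_path("http://www.bbc.co.uk/news"))
    # -> /var/cache/proxy/<2 chars>/<2 chars>/<40-character digest>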

Typically a proxy stores any resource that is accessed frequently. For example, on many UK proxies the BBC website is extremely popular, so it's essential that it is cached. Even for satellite offices, this lets people reach the BBC through the company's internal network: the page is requested and cached by the UK-based proxy, so instead of being blocked outside the UK the BBC remains accessible.

Indeed, many large multinational corporations sometimes inadvertently offer these facilities. Employees with the technical know-how can point their remote access clients at specific servers to reach normally blocked resources. They might connect through the British proxy to access the BBC, then switch to a French proxy to reach a media site like M6 Replay, which only allows French IP addresses.

It is also important to remember that direct mappings are normally reversible: if you have the correct cache file name, you can use it to reproduce the unique URL for that document. Many applications make use of this property and include some sort of mapping function based on hashes.
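A hash-based name like the one sketched earlier is not reversible, so a direct mapping usually encodes the URL instead. This sketch percent-encodes the URL into a safe filename and recovers it again; it illustrates the reversibility property rather than any specific proxy's scheme.

    from urllib.parse import quote, unquote

    def url_to_name(url):
        # Encode every unsafe character so the result is a single safe
        # filename component, and the mapping stays reversible.
        return quote(url, safe="")

    def name_to_url(name):
        return unquote(name)

    name = url_to_name("http://www.bbc.co.uk/news?page=1")
    print(name)               # http%3A%2F%2Fwww.bbc.co.uk%2Fnews%3Fpage%3D1
    print(name_to_url(name))  # round-trips back to the original URL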

Programming Terms: Garbage Collection

There are lots of IT terms thrown about which can be confusing even for the experienced IT worker. Particularly in the world of network programming and proxies, similar words can have completely different meanings depending on where you are in the world.

Let's step back for a minute and look at what garbage collection means in the programming language world. Though not strictly relevant to the subject of this blog, it is a good way to illustrate the benefits and drawbacks of garbage-collection-style memory management, whether on disk or in memory. Compiled programming languages, such as C or Pascal, typically do not have run-time garbage collection for memory management.

Instead, those languages require the program author to manage dynamic memory explicitly: memory is allocated by a call to malloc(), and the allocated memory must be freed by a call to free() once it is no longer needed; otherwise, the memory space gets cluttered and may run out. Other programming languages, such as Lisp, use an easier memory management style: dynamically allocated memory does not have to be explicitly freed. Instead, the run-time system periodically inspects its dynamic memory pool, determines which chunks of memory are still in use, and marks the rest as free.

Usually, programming languages that are interpreted or object-oriented (Lisp, Java, Smalltalk) use garbage collection techniques for their dynamic memory management. The determination of what is still used comes down to whether the memory area is still referenced somewhere, that is, whether a pointer still points to it. If all references are lost, for example because the program has discarded them, the memory can no longer be accessed and can therefore be freed.

There are several approaches to this reference detection. One is to make each memory block contain an explicit reference counter, which is incremented when a new reference is created and decremented when a reference is deleted or changed to point somewhere else. This requires extra work from the run-time system whenever memory references are manipulated. Another approach is periodic brute force: traverse the entire memory arena of the program looking for memory references, and determine which chunks are still referenced.

The brute-force approach makes everyday memory operations easier and faster, since reference counters don't have to be updated constantly. At the same time, it introduces the rather heavyweight operation of traversing the entire memory arena scanning for references.
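The reference-counting approach is easy to observe in CPython, which happens to use it for its own memory management; this is a quick illustration of the counter moving, not a claim about garbage-collected languages in general.

    import sys

    data = [1, 2, 3]
    alias = data                  # a second reference to the same list
    # getrefcount reports one extra reference: its own argument.
    print(sys.getrefcount(data))  # 3 (data, alias, and the argument)
    del alias                     # deleting a reference decrements the count
    print(sys.getrefcount(data))  # 2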

Source: John Williams

Subroutine – Passing Parameters

Passing parameters into subroutines; the following examples are from Perl scripts.

Parameters are passed into subroutines in a list with a special name: it's called @_, and it doesn't conform to the usual rules of variable naming. The name isn't descriptive, so it's usual to copy the incoming values into other variables within the subroutine.

Here's what we did at the start of the getplayer subroutine:

    $angle = $_[0];

If multiple parameters are going to be passed, you'll write something like:

    ($angle, $units) = @_;

Or, if a list is passed to a subroutine:

    @pqr = @_;

In each of these examples, you've taken a copy of each of the incoming parameters; this means that if you alter the value held in the variable, that will not alter the value of any variable in the calling code.

This copying is a wise thing to do; later on, when other people use your subroutines, they may get a little annoyed if you change the value of an incoming variable!

Returning values. Our first example concludes the subroutine with a return statement:

    return ($response);

which very clearly states that the value of $response is to be returned as the result of running the subroutine. Note that if you execute a return statement earlier in your subroutine, the rest of the code in the subroutine will be skipped over.

For example:

    sub flines {
        $fnrd = $_[0];
        open (FH, $fnrd) or return (-1);
        @tda = <FH>;
        close FH;
        return (scalar (@tda));
    }

will return a -1 value if the file requested couldn't be opened.

Writing subroutines in a separate file
Subroutines are often reused between programs. You really won't want to rewrite the same code many times, and you'll certainly not want to have to maintain the same thing many times over. Here's a simple technique and checklist that you can use in your own programs. It comes from a Perl coding lesson, but it can be used in any high-level programming language which supports subroutines.

Plan of action:
a) Place the subroutines in a separate file, using the file extension .pm
b) Add a use statement at the top of your main program, calling in that file of subroutines
c) Add a 1; at the end of the file of subroutines. This is necessary since use executes any code that's not included in subroutine blocks as the file is loaded, and that code must return a true value; it's a safety feature to prevent people using files that weren't designed to be loaded this way.

No Comments News, Protocols, VPN