Software Testing: Static Analysis

There are several phases to a proper test analysis; the initial stage is normally the static review. This is the process of examining the static code to check for simple errors such as syntax problems or fundamental flaws in the design or its application. It is not normally a long, exhaustive check, unless of course some obvious or major issues are identified at this stage.

Just like reviews, static analysis looks for problems without executing the code. However, as opposed to reviews, static analysis is undertaken once the code has actually been written. Its objective is to find flaws in software source code and software models. Source code is any series of statements written in some human-readable computer programming language which can then be translated to equivalent computer-executable code; it is normally produced by the developer. A software model is a representation of the final solution developed using techniques such as the Unified Modeling Language (UML); it is commonly produced by a software designer. Both should be stored securely, with restrictions on who can alter them; if accessed remotely, this should be over a dedicated line if possible or at least through some sort of secure VPN.

Static analysis can find issues that are hard to find during test execution by analyzing the program code itself, for example as control flow graphs (how control passes between modules) and data flows (making certain data is defined before it is used, and used correctly); a short code illustration appears after the list below. The value of static analysis is:

Early discovery of defects before test execution. Just as with reviews, the earlier the issue is located, the cheaper and simpler it is to fix.

Early warning about questionable aspects of the code or design, through the calculation of metrics such as a high-complexity measure. If code is too complex it can be more prone to error, and the risk depends on how much attention the programmers give it. If they recognise that the code has to be complex then they are more likely to check and double-check that it is correct; however, if it is unexpectedly complex there is a higher chance that it will contain a defect.

Identification of defects not easily found by dynamic testing, such as non-compliance with development standards, as well as dependencies and inconsistencies in software models, such as links or interfaces that were either incorrect or unknown before static analysis was carried out.

Improved maintainability of code and design. By performing static analysis, defects are removed that would otherwise have increased the volume of maintenance needed after ‘go live’. It can also identify complex code which, if fixed, will make the code more understandable and consequently easier to maintain.

Prevention of defects. By pinpointing a defect early in the life cycle it is a great deal easier to identify why it was there in the first place (root cause analysis) than during test execution, providing information on possible process improvements that could prevent the same defect appearing again.
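As a small illustration (a sketch only; the file and subroutine names are invented for this example), the fragment below contains a data-flow defect that a static check finds without ever executing the code. Running perl -c sketch.pl under 'use strict' rejects the undeclared variable at compile time, long before any test execution; tools such as Perl::Critic can go further and report complexity metrics of the kind mentioned above.

    # sketch.pl -- deliberately flawed, to show what a static check catches
    use strict;
    use warnings;

    sub mean {
        my @values = @_;
        my $total  = 0;
        $total += $_ for @values;

        # Typo: $totl was never declared, so "perl -c sketch.pl" fails with
        # a 'Global symbol "$totl" requires explicit package name' error --
        # the defect is found statically, before the subroutine is ever run.
        return $totl / scalar @values;
    }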

Source: Finding Residential Proxies, James Williams

Don’t Expect Internet Privacy by Default

When the internet was first conceived back in the 1980s there was little thought about security. The date of course is disputed and varies depending on your definition, but I prefer 1983, when TCP/IP was adopted by ARPANET; the lack of security, however, is a matter of fact. It was a form of communication allowing disparate devices and people to talk to each other, and no-one expected it to end up where it is. Unfortunately, to allow cross-compatibility, compromises need to be made, and the security of your data is one of them.

However, there are methods to add some security. Web sites try with SSL implementations, but the end user can assist too. Most users who have security concerns or have experienced cyber crime will have come across VPN software. A virtual private network can be created to encrypt your data as it travels across the internet. These come in all shapes and sizes, from basic personal security tools to advanced residential IP rotating proxies.

For lots of people there is a pervasive picture of a VPN user: something similar to a young person in a hoodie, hunched over a laptop in a coffee shop. They are possibly attempting to hack into some government computers and are on the run from the authorities. Because a VPN conceals your geographic location and your web traffic, there is a common idea that the individual is up to no good and certainly has something to hide.

The reality is a very long way from this viewpoint, and even though many hackers do indeed use VPNs, so do an awful lot of ordinary individuals. Most large corporations have been using VPNs for decades to support inbound connections from remote users. If a salesman needs access to the product database on the company’s network, it is much simpler to allow them to connect through the internet and view the latest version. This is much more secure than travelling around with DVDs and obviously ensures that he or she has the most recent version.

If you make any type of normal connection over the internet, all your web traffic is pretty much viewable; anyone so minded can intercept and read it. If you are logging in and connecting to a secured share, that would include usernames and passwords. So, in order to protect these connections, you would commonly install a VPN client on the laptop and make certain it is used to encrypt the connection back to the company network. It is completely legitimate and indeed intelligent business practice.

Regular home users make use of VPNs for very similar reasons. Essentially the internet is insecure and there is minimal security built in by default. Sure, you can access secure sites through SSL when you have to enter a credit card or payment information, but this is the exception rather than the rule: most websites are not secure and the vast majority of information flies across the wires in clear text.

In addition to the general insecurity of the web, there is the additional issue of privacy. Your surfing data is easily available from a variety of sources. For a start, your ISP holds a complete record of everything you do online, and depending on where you live this can be routinely and easily accessed. Using a VPN stops this, transforming your internet activity into an encrypted stream which is unreadable without your permission. Are VPNs used by cyber criminals and terrorists? Sure, but also by millions of people who think that what they do online shouldn’t be part of the public record.

VPN systems are becoming more and more sophisticated, driven by demand and the risk of detection. There are all sorts of variations, including different setups and ports to dodge detection, and some services can even route traffic through home-based IP addresses supplied by residential IP providers.

In a large number of countries VPNs are not illegal but simply a standard business and personal security tool. However, in some countries this is not the case and you can get into trouble if caught using them. Countries that ban the use of VPNs include places like China, Iraq, Belarus and Turkey. Various other countries only allow authorised services, which usually means those that can be compromised if required. Individuals still use VPNs in the majority of these nations; indeed in Turkey almost all expats use one to watch things like British and American TV online. It is actually quite difficult to detect a VPN in use, but that doesn’t stop it technically being illegal in those locations.

Source: http://www.onlineanonymity.org/proxies/residential-vpn-ip-address/

Subroutine – Passing Parameters

Passing parameters into subroutines: the following examples are from Perl scripts.

Parameters are passed into subroutines in a list with a special name — it’s called @_ and it doesn’t conform to the usual rules of variable naming. This name isn’t descriptive, so it’s usual to copy the incoming variables into other variables within the subroutine.

Here’s what we did at the start of the getplayer subroutine: $angle = $_[0]; If multiple parameters are going to be passed, you’ll write something like: ($angle,$units) = @_; Or if a list is passed to a subroutine: @pqr = @_; In each of these examples, you’ve taken a copy of each of the incoming parameters; this means that if you alter the value held in the variable, that will not alter the value of any variable in the calling code.

This copying is a wise thing to do; later on, when other people use your subroutines, they may get a little annoyed if you change the value of an incoming variable!
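As a minimal sketch of that copying convention (the subroutine name and values here are invented, not taken from the getplayer example above):

    use strict;
    use warnings;

    sub normalise_angle {
        # Copy the incoming parameters; the caller's variables stay untouched.
        my ($angle, $units) = @_;
        $angle += 360 if $angle < 0;       # we only change our local copy
        return "$angle $units";
    }

    my $reading = -90;
    print normalise_angle($reading, 'degrees'), "\n";   # prints "270 degrees"
    print "$reading\n";                                  # still -90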

Returning values
Our first example concludes the subroutine with a return statement: return ($response); which very clearly states that the value of $response is to be returned as the result of running the subroutine. Note that if you execute a return statement earlier in your subroutine, the rest of the code in the subroutine will be skipped over.

For example:

    sub flines {
        $fnrd = $_[0];
        open (FH, $fnrd) or return (-1);
        @tda = <FH>;
        close FH;
        return (scalar (@tda));
    }

will return a -1 value if the file requested couldn’t be opened.
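A possible way to call it (the file name is made up for the illustration):

    my $count = flines('players.txt');
    if ($count == -1) {
        print "Could not open players.txt\n";
    } else {
        print "players.txt contains $count lines\n";
    }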

Writing subroutines in a separate file
Subroutines are often reused between programs. You really won’t want to rewrite the same code many times, and you’ll certainly not want to have to maintain the same thing many times over. Here’s a simple technique and checklist that you can use in your own programs. It comes from a Perl coding lesson, but can be used in any high-level programming language which supports subroutines.

Plan of action:
a) Place the subroutines in a separate file, using the file extension .pm
b) Add a use statement at the top of your main program, calling in that file of subroutines
c) Add a 1; at the end of the file of subroutines. This is necessary since use executes any code that’s not included in subroutine blocks as the file is loaded, and that code must return a true value — a safety feature to prevent people using files that weren’t designed to be used.
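Here is a minimal sketch of that checklist; the file name TownUtils.pm and the commify subroutine are invented for the example. Note that on recent Perls the current directory is no longer searched automatically, so a use lib line may be needed.

    # ---- file: TownUtils.pm ----
    # Shared subroutines live in their own file.
    sub commify {
        my ($number) = @_;
        1 while $number =~ s/^(\d+)(\d{3})/$1,$2/;
        return $number;
    }

    1;   # the file must end by returning a true value

    # ---- file: main.pl ----
    use lib '.';                    # let use() find TownUtils.pm here
    use TownUtils;                  # loads the file; its subs land in package main
    print commify(1234567), "\n";   # prints 1,234,567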


Network Programming: What are Subroutines?

What are subroutines and why would you use them? Consider the limitations of “single block code”. You won’t be the first person in the world to want to:

  • be able to read options from the command line
  • interpret form input in a CGI script
  • pluralize words in English

But it doesn’t stop there; let’s choose a few other seemingly simple but useful tasks that your code may need to accomplish. You won’t be the first person in your organisation to want to:

  • output your organisation’s copyright statement
  • validate an employee code
  • automatically contact a resource on your web site

These are the sorts of tasks that may need to happen again and again, both in the same piece of code and across different programs. You may need to handle the same data in several programs, or to handle in your programs the same data that your colleagues handle in theirs. And you may want to perform the same series of instructions at several places within the same program. Almost all programming languages, at least the high-level ones such as Perl, can handle these operations. Even so, beginners usually start off with all their code in a single file, “flowing” from top to bottom.

You can use subroutines to perform tasks that need to be repeated over and over again. In the context of network programming you could, for example, use a specific subroutine to assign a British IP address to a client or hardware device. With everything written as a single block of code, though:

  • You have not been able to call the same code in two different places.
  • You have not been able to share code between programs — copying is not normally an option as it creates maintenance problems.
  • You have not used your colleague’s code, nor code that’s available to everyone on CPAN, nor additional code that’s so often needed that it’s shipped with the Perl distribution.

First use of subroutines
The first computer programs were written rather like the ones that we’ve written so far.

Each one for its own specific task. In time, programmers (said to be naturally lazy people) noticed that they could save effort by placing commonly used sections of code into separate blocks which could be called whenever and wherever they were needed. Such separate blocks were variously known as functions, procedures or subroutines.

We’ll use the word “subroutine” because Perl does!

Structured programming
The subroutine approach was then taken to the extreme, so that all the code was put into separate blocks, each of which could be described as performing a single task. For example, the program I run might be described as performing the task of “reporting on all towns with names matching a pattern”. You could then split that task into multiple tasks, for example creating multiple network connections to different servers. On a multimedia server you could call the relevant subroutine depending on which channel was to be displayed, e.g. one for an English channel, one for a commercial ITV channel abroad and another for a French variant. All of these could be separate subroutines called from within the main code when the user presses a button.
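A sketch of that idea in Perl, with the channel names and messages invented; each task lives in its own subroutine and a small dispatch table replaces one long block of code:

    use strict;
    use warnings;

    sub play_news   { print "Tuning the news stream...\n";       }
    sub play_itv    { print "Tuning the commercial stream...\n"; }
    sub play_french { print "Tuning the French stream...\n";     }

    my %channel = (
        news   => \&play_news,
        itv    => \&play_itv,
        french => \&play_french,
    );

    # Pick the subroutine for the channel the user asked for.
    my $choice = shift(@ARGV) // 'news';
    ($channel{$choice} || sub { print "Unknown channel: $choice\n" })->();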


Cryptographic Methods and Authentication

It used to be the domain of mathematicians and spies, but now cryptography plays an important part in all our lives. It is essential if we want to continue to use the internet for commerce and any sort of financial transaction. Our basic web traffic exists in the clear and is transported via a myriad of shared network equipment, which means basically anything can be intercepted and read unless we protect it in some way – the most accessible option is to use encryption.

Cryptographic methods are used by software to keep computing and data resources safe, effectively shielding them with a secret code, or ‘key’. It’s not always necessary, of course; the requirements depend heavily on what the connection is being used for. For example, there is little point encrypting compressed streams like audio and video in normal circumstances – no-one is at risk when you stream UK TV abroad from your computer. The key holder is the only individual who has access to the secured information. That individual might share the key with others, permitting them to also get at the information. In a digital world, and especially in the envisaged world of electronic commerce, the demand for security backed by cryptographic systems is paramount. In the future, a person’s initial approach to most electronic devices, and especially to networked electronic devices, will demand cryptography working in the background. Whenever security is necessary, the first point of the human-to-machine interface is that of authentication.

The electronic system should know with whom it’s dealing. But just how is this done?  Strong authentication is based on three characteristics which a user needs to have:

  • What the user knows.
  • What the user has.
  • Who the user is.

Today, a typical authentication routine is to present what you have, a token like an identification card, then to reveal what you know, a PIN or password. In a very short time, the ‘who you are’ kind of identification will be common, first on computers, and after that on an entire range of products, progressively phasing out the need for us to memorise numbers and passwords. Indeed, many entertainment websites are looking at developments in this field with a view to incorporating identity checks in a seamless way – for example to allow access to UK TV licence fee payers who want to watch the BBC from Ireland.

But where does the cryptography come into the equation? At the simplest level, you might offer a system, like a PC terminal, a password. The system checks your password and you are logged on. In this example of quite weak authentication, cryptographic methods are used to encrypt the password stored inside the system. If your password were held in clear text rather than cipher text, then a person with an aptitude for programming could soon find the password inside the system and start to usurp your identity, obtaining access to all of the information and system resources you’re permitted to use.
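A rough sketch of the cipher-text idea using Perl’s core Digest::SHA module (the user name and record format are invented, and a real system would use a dedicated, deliberately slow password-hashing scheme rather than a single salted SHA-256 digest):

    use strict;
    use warnings;
    use Digest::SHA qw(sha256_hex);

    # Store a salted digest of the password, never the clear text.
    sub store_password {
        my ($user, $password) = @_;
        my $salt   = sprintf '%08x', int rand 0xffffffff;
        my $digest = sha256_hex($salt . $password);
        return "$user:$salt:$digest";     # the record kept on the system
    }

    # Re-compute the digest from the attempt and compare.
    sub check_password {
        my ($record, $attempt) = @_;
        my ($user, $salt, $digest) = split /:/, $record;
        return sha256_hex($salt . $attempt) eq $digest;
    }

    my $record = store_password('alice', 's3cret');
    print check_password($record, 's3cret') ? "accepted\n" : "rejected\n";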

Cryptography does its best to defend the secret, which is your password. Now consider a system that requires stronger authentication. The automated teller machine is a good example. To perform transactions at an ATM terminal, you need an ATM card and a PIN. Inside the terminal, information is encrypted, and the information the terminal transmits to the bank is also encrypted. Security is better, but not perfect, since the system will authenticate any individual who presents the card and PIN, whether or not they are the owner. The person might be a relative using your card with permission, or a thief who has just relieved you of your wallet and is about to relieve you of your life savings. Time, you could think, for stronger authentication. Systems currently in field tests require an additional attribute, based on who you are, to strengthen the authentication procedure.

TCP/UDP Port Numbers

Both TCP and UDP use port numbers in order to communicate with the upper layers. These port numbers are used to keep track of the many conversations which criss-cross the network simultaneously. The source port numbers are dynamically assigned by the source host and will usually be some number above 1024. All the numbers below 1024 are reserved for specific services as defined in RFC 1700 – they are known as well-known port numbers.

Any virtual circuit which is not assigned to a specified service will be given a random port number from the range above 1024. The port numbers identify the source and destination in the TCP segment. Here are some common port numbers associated with well-known services:

  • FTP – 21
  • Telnet – 23
  • DNS – 53
  • TFTP – 69
  • POP3 – 110
  • News – 144

As you can see, all the port numbers assigned to these services are below 1024, whereas numbers 1024 and above are used by the upper layers to set up connections with other hosts.
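A short sketch of the idea using Perl’s IO::Socket::INET (the mail server name is invented, and this assumes outbound access to port 110): the destination is a well-known port, while the source port is chosen dynamically by our host from the range above 1024.

    use strict;
    use warnings;
    use IO::Socket::INET;

    # Connect to a well-known service port (POP3 = 110).
    my $sock = IO::Socket::INET->new(
        PeerAddr => 'mail.example.com',
        PeerPort => 110,
        Proto    => 'tcp',
        Timeout  => 5,
    ) or die "connect failed: $!\n";

    # The destination port is fixed by the service; the source port was
    # assigned dynamically by this host, normally somewhere above 1024.
    printf "destination port: %d  source port: %d\n",
        $sock->peerport, $sock->sockport;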

The internet layer exists for two main reasons: routing, and providing a single network interface to the upper layers. None of the upper or lower layer protocols have any routing functions; all the routing functionality is the job of the internet layer. As well as routing, the internet layer has a second function – to provide a single network interface and gateway to the upper layer protocols.
Application programmers use this layer to build network access into their applications. It is important because it ensures there is a standardised way to access the network layer, so the same functions apply whether you’re on an Ethernet or Token Ring network.

IP provides a single network interface to access all of these upper layer protocols. The following protocols specifically work at the internet layer:

  • Internet Protocol (IP)
  • Internet Control Message Protocol (ICMP)
  • Address Resolution Protocol (ARP)
  • Reverse Address Resolution Protocol (RARP)

The Internet Protocol is essentially the internet layer; the other protocols merely support its functionality. So if, for instance, you buy UK proxy connections, IP would look at each packet’s address and then, using a routing table, decide where the packet should be routed next. The network access layer protocols at the bottom of the stack are not able to see the entire network topology, as they only deal with physical addresses.

In order to decide on a route, the IP layer needs to answer two specific questions. The first is which network the destination host is on, and the second is its ID on that network. These are determined by the logical and hardware addresses. The logical address is better known as the IP address and is a unique identifier of the location of a specific host on any network. IP addresses are allocated by location and are used by websites to determine where requests come from, so for example to watch BBC iPlayer in Ireland you’d need to route through a UK IP address and not your assigned Irish address.
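A small worked example of those two questions, using an invented address and a /24 mask: a bitwise AND of the address with the mask gives the network portion, and the remainder is the host ID.

    use strict;
    use warnings;
    use Socket qw(inet_aton inet_ntoa);

    my $ip   = unpack 'N', inet_aton('192.168.10.77');
    my $mask = unpack 'N', inet_aton('255.255.255.0');

    my $network = $ip & $mask;     # which network the host is on
    my $host_id = $ip & ~$mask;    # its ID on that network

    printf "network: %s  host id: %d\n",
        inet_ntoa(pack 'N', $network), $host_id;
    # prints: network: 192.168.10.0  host id: 77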

 

Using SSL for Email and Internet Protocols

If you want to increase the security of your email messaging there are several routes you can take. First of all, you should look at digitally signing and encrypting all your email messages. There are several applications that can do this, or you could switch your email to the cloud and look at a server-based email system. Most of the major suppliers of web-based secure mail are extremely secure with regard to interception and end-point security; however, you obviously have to trust your email to a third party.

Many companies won’t be happy outsourcing their messaging like this, as it is often the most crucial part of a company’s digital communications. So what are the options if you want to operate a secure and digitally advanced email messaging service within your corporation? The first place to investigate is increasing the security of authentication and data transmission. There are plenty of RFCs (Requests for Comments) on these subjects, particularly relating to email and its related protocols.

Here are a few of the RFC-based protocols related to email:

  • Post Office Protocol 3 (POP3) – the simple but effective protocol used to retrieve email messages from an inbox on a dedicated email server.
  • Internet Message Access Protocol 4 (IMAP4) – this is usually used to retrieve any messages stored on an email server. It includes those stored in inboxes, and other types of message boxes such as drafts, sent items and public folders.
  • Simple Mail Transfer Protocol (SMTP) – very popular and ubiquitous email protocol, generally just used to send email messages to recipients.
  • Network News Transfer Protocol (NNTP) – Not specifically an email protocol, however can be used as such if required! It’s normally used to post and download newsgroup messages from news servers.  Perhaps slightly outdated now, but a seriously efficient protocol that can be used for distributing emails.

The big security issue with all these protocols, however, is that by default the majority send their messages in plain text. You can counteract this by encrypting at the client level; the easiest method is simply to use a VPN. Many people already use a VPN to access things like various media channels (for instance a BBC iPlayer VPN), which is not exclusively about security but more about bypassing region blocks.

Remember, though, that when an email message is transmitted in clear text it can be intercepted at various levels. Anyone with a decent network sniffer and access to the data could read the message content. The solution is in some ways obvious and implied in the title of this post – implement SSL. Using this extra security layer you can protect all the simple RFC-based email protocols, and better still, it slots in simply alongside standard email systems like Exchange.

It works and is easy to implement; when SSL is enabled the server will accept connections on the SSL port rather than the standard port that the email protocol normally uses. If you have only one or two users who need a high level of email security then using a virtual private network might be sufficient. There are many sophisticated services that come with support, some of them staffed by high-level security experts.
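As a rough sketch (the server name is invented, and the IO::Socket::SSL module comes from CPAN rather than the Perl core), connecting to POP3 over SSL simply means talking to the SSL port, 995, instead of the clear-text port 110; the same pattern applies to IMAPS on 993 and SMTPS on 465.

    use strict;
    use warnings;
    use IO::Socket::SSL;    # also exports $SSL_ERROR

    # POP3 over SSL listens on 995 instead of the clear-text port 110.
    my $ssl = IO::Socket::SSL->new(
        PeerHost => 'mail.example.com',
        PeerPort => 995,
    ) or die "SSL connect failed: $!, $SSL_ERROR\n";

    my $banner = <$ssl>;            # e.g. "+OK POP3 server ready"
    print "server said: $banner";
    print $ssl "QUIT\r\n";
    close $ssl;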


Cisco Pushes Firewall into Next Generation

In October 2013, Cisco closed its roughly $2.7 billion purchase of Sourcefire. Ever since, Cisco has been integrating Sourcefire’s technology. Now Cisco fully embraces the Sourcefire technology in the firm’s brand new Cisco Firepower NGFW, quite literally the next generation of Cisco’s mid-range network defence technology.

Scott Harrell, Vice President of Product Management, Security Business Group at Cisco, explained that the Cisco Firepower NGFW is a completely integrated platform which includes firewall, IPS and URL filtering capabilities, in addition to integration outwards to secure endpoints. Furthermore, Cisco’s threat telemetry data is incorporated into the Firepower NGFW, and the management of threat information and the security workflow is also enhanced.

“When we purchased Sourcefire back in 2013, we knew it would be a journey to get to this point,” Harrell told Enterprise Networking Planet. “Many industry analysts were doubtful of Cisco’s ability to bring Sourcefire’s technology together with technologies such as our classic ASA firewall, and with this launch we’re saying we got it.”

Over the intervening years, Cisco has been incorporating Firepower features into the ASA product line. In September 2014, Cisco added Firepower services from Sourcefire to Cisco ASA firewalls. At the time, Harrell explained that the Sourcefire Firepower services could be used to replace an existing Cisco IPS service running on the ASA.

With the new Firepower NGFW, Harrell explained that an existing ASA 5500 can be upgraded via software to the new image. A number of the older Firepower appliances can also be upgraded to the new image. Historically, ASA was largely just a firewall and Firepower was largely just an IPS, but with the Firepower NGFW the two worlds are coming together. There are now many implementations working in organisations across the world handling complex traffic, such as streaming UK TV into Spain, for example.

At the heart of the Firepower NGFW is a brand new Linux operating system distribution. Harrell explained that Cisco is calling its new Linux-powered operating system FXOS (Firepower eXtensible Operating System). FXOS introduces service-chaining capabilities which can help enable a security review and remediation workflow.

Chaining and understanding context is further improved through the integration of the Cisco Identity Services Engine (ISE). Harrell explained that Firepower is now able to consume ISE information about users and policy. The integration of ISE and Firepower also allows rapid threat containment, in which an alert from Firepower can be extended through ISE to keep a threat or malicious endpoint off the network.

“So you’re not only blocking threats at the firewall, you can actually force the infected endpoint into a quarantine zone or some sort of proxy until the threat is remediated,” Harrell said.

While firewall and IPS devices were once thought of as two distinct technologies, with the Firepower NGFW that is no longer true.

ActiveX – (Previously COM)

Microsoft’s component technology started off being known as COM, the Component Object Model. To build software components that can communicate with each other, both locally and across networks, you need a standard framework. ActiveX provides that standard, together with an associated technology called DCOM (the Distributed Component Object Model) which allows the components to communicate across the internet and other networks.
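As a brief sketch of how a COM component is driven from script (Windows only, using the Win32::OLE module; Scripting.FileSystemObject is a standard component shipped with Windows, and the folder path is just an example):

    use strict;
    use warnings;
    use Win32::OLE;

    # Create a COM object by its registered ProgID.
    my $fso = Win32::OLE->new('Scripting.FileSystemObject')
        or die "could not create COM object: ", Win32::OLE->LastError, "\n";

    # Call its methods and read its properties like a normal Perl object.
    my $folder = $fso->GetFolder('C:\\Windows');
    print "Folder: ", $folder->{Path}, "\n";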

ActiveX has been with us for many years and has been updated consistently. It is promoted as a tool for building both dynamic web pages and sophisticated distributed object applications. Every time a client visits a web site which runs ActiveX components, a version check is performed and the latest controls are downloaded to the browser. These are not deleted when the browser navigates away but are kept and updated, which keeps the browser’s controls as current as possible. Obviously, configuration or security options enabled in particular browsers can sometimes prevent this.

You may have seen ActiveX controls run in all sorts of situations, perhaps running graphical banners or multimedia applications on a web page. ActiveX controls can also run complicated real-time information systems on pages – temperature measurements, financial tickers or simply news feeds that update themselves. ActiveX controls have the facility to access data servers directly using protocols far more sophisticated than anything standard HTTP can handle. It is an important concept to understand in the development of distributed object computing.

ActiveX looks at a computer browser in a different way than you might imagine; it simply considers it a container which has the ability to hold and display ActiveX controls. Many of the internet’s most impressive interactive objects are in fact ActiveX controls, and they represent a way for developers to push beyond the static, simple pages supported by the Hypertext Transfer Protocol. One of the downsides is obviously cross-compatibility, which relies heavily on the ability of the client browser to download the specific components required locally and keep them up to date. When a user’s browser visits an ActiveX site for the first time there can be a significant delay while core components are downloaded; however, updates and additional installations are usually performed very quickly in the background.

The controls have the additional advantage of combining well with the user interface of most common platforms. The simplest case is traditional Windows systems, as ActiveX is based on COM technology which is already incorporated within MS Windows. Microsoft has been pro-active in providing cross-platform support, though, and the Active Platform technology has also been extended to work with other operating systems such as Macintosh, Unix and Linux. There is also a scripting technology called Active Scripting which can be used on all these platforms to control and integrate ActiveX objects from the server or the client.

Microsoft has attempted to prevent technological conflicts by also allowing ActiveX components to interact and work alongside the main competitor, Java, to some extent. Remember, though, that all Java applets run for security reasons within their own virtual machines on the user’s computer. ActiveX requires greater access to the operating system and cannot operate within this virtual sandbox, so although calls can be made across components, their interaction is limited to some extent.

Additional Reading

Networks, Proxies and VPNs in Distributed Computing – http://www.proxyusa.com/