Designing Multimedia Networks

Moving voice and video over a data network can be a challenge; if you've ever sat through a stuttering video conference you'll appreciate that it has to be done well. Fortunately it is becoming more practical nowadays thanks to efficient compression techniques, high-bandwidth networks and, of course, QoS. Compression is probably the most important factor, as it radically reduces the volume of traffic that needs to be transmitted over network links.
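As a minimal illustration of why compression matters, the snippet below shrinks a second of simulated voice data. Real voice and video use dedicated lossy codecs (G.729, MPEG and so on); here Python's generic lossless zlib stands in just to show the principle, and the byte pattern is an invented stand-in for redundant speech data.

```python
import zlib

# A second of uncompressed 8 kHz, 8-bit mono voice is 8000 bytes.
# Real speech contains a lot of redundancy (silence, repeated
# waveforms), crudely simulated here with a repetitive pattern.
raw_audio = bytes([0, 0, 0, 5, 9, 5, 0, 0] * 1000)  # 8000 bytes

compressed = zlib.compress(raw_audio, level=9)

print(len(raw_audio), len(compressed))
ratio = len(raw_audio) / len(compressed)
print(f"roughly {ratio:.0f}x less traffic on the wire")
```

Every byte removed by the codec is a byte the network never has to carry, which is why compression is usually the first lever pulled.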

Genuine multimedia networks are rarer than you might think, and some of the best are built on ATM (Asynchronous Transfer Mode), which can be extremely fast. Apart from the increased speed ATM can bring to both WAN and LAN networks, one of its most important features is its support for QoS. This guarantees a certain level of bandwidth and performance for multimedia connections, such as live streaming news from the BBC; remember, though, that the bandwidth has to be reserved to be effective. Administrators can not only reserve capacity for their multimedia requirements but also set up virtual circuits to separate their video conference, multimedia or voice calls. Note that this requires either ATM-compatible applications, ATM adapters fitted to the workstations, or software that emulates ATM on standard network interface cards.

Whatever technology is used, the main issue with adding multimedia applications to a network is simply the traffic load. It is pointless giving users access to real-time multimedia applications without a very fast data network and some sort of QoS guarantee. The network also needs the capacity to provide these guarantees without affecting the rest of the normal data traffic. Capacity planning is crucial; until it is carried out you will have little idea how even a modest set of multimedia applications will affect your network speeds.
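The sort of back-of-the-envelope sum involved in capacity planning can be sketched in a few lines. All the figures below are illustrative assumptions, not measurements from any real network.

```python
# Rough capacity planning for video streams on one network segment.
# Figures are assumptions chosen for illustration only.
link_capacity_mbps = 100   # a switched Fast Ethernet segment
stream_rate_mbps = 1.5     # one MPEG-1-quality video stream
reserved_fraction = 0.6    # cap multimedia at 60% of the link so
                           # normal data traffic is not squeezed out

multimedia_budget = link_capacity_mbps * reserved_fraction
max_streams = int(multimedia_budget // stream_rate_mbps)
print(max_streams)  # 40 concurrent streams fit within the budget
```

Even this crude arithmetic makes the point: a handful of video users can consume a large slice of a link that comfortably carries everyone's ordinary data traffic.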

For any long-term use there are a variety of techniques which can radically boost network performance for multimedia. A core switched network connecting to existing departmental hubs is a start, and this can be upgraded to provide switched services to different departments as required. Any videoconferencing equipment should be connected directly to high-performance switches; on no account should its traffic be allowed to broadcast throughout the network through a simple hub or repeater. Most high-performance networks now try to standardise on Gigabit Ethernet, although this is often slowed by legacy network hardware. Iso-Ethernet is an emerging technology which can carry voice and standard 10 Mbit/s Ethernet on the same cable.

There are a variety of methods and technologies which will provide quality of service over existing networks if you don't have access to ATM. In fact it is often easier to use one of these alternatives, as ATM requires modification and support in all applications, transports and software. One such technology, RSVP (Resource Reservation Protocol), has been developed by the IETF (Internet Engineering Task Force); it allows any IP host to request a specified amount of bandwidth directly from the network.
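The core idea behind RSVP, admitting a flow only if the requested bandwidth is actually available, can be modelled in a few lines. This is a conceptual sketch only: the class and method names are invented, and real RSVP (RFC 2205) signals PATH and RESV messages hop by hop along the route rather than managing a single link.

```python
# Toy model of RSVP-style admission control on one link.
# Names are invented for illustration; real RSVP works hop by hop.

class Link:
    def __init__(self, capacity_kbps: int):
        self.capacity_kbps = capacity_kbps
        self.reservations = {}  # flow id -> reserved kbit/s

    def available(self) -> int:
        return self.capacity_kbps - sum(self.reservations.values())

    def reserve(self, flow_id: str, kbps: int) -> bool:
        """Admit the flow only if enough bandwidth remains."""
        if kbps <= self.available():
            self.reservations[flow_id] = kbps
            return True
        return False

link = Link(capacity_kbps=10_000)             # a 10 Mbit/s link
print(link.reserve("video-conf-1", 1_500))    # True: admitted
print(link.reserve("voice-42", 64))           # True: admitted
print(link.reserve("bulk-video", 9_000))      # False: would exceed capacity
print(link.available())                       # 8436 kbit/s left for best effort
```

The refusal is the important part: without admission control, a guarantee given to one flow is a guarantee that can be broken by the next.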


ActiveX (previously COM)

Microsoft’s component technology started off as COM, the Component Object Model. To build software components that can communicate with each other, both locally and across networks, you need a standard framework. ActiveX provides that standard, together with an associated technology called DCOM (Distributed Component Object Model) which allows components to communicate across the internet and other networks.

ActiveX has been with us for many years and has been updated consistently. It is promoted as a tool for building both dynamic web pages and sophisticated distributed object applications. Every time a client visits a web site which runs ActiveX components, a version check is performed and the latest controls are downloaded to the browser. These are not deleted when the browser navigates away but are kept and updated, so that the browser's controls stay as current as possible. Sometimes, of course, configuration or security options enabled in particular browsers prevent this.

You may have seen ActiveX controls run in all sorts of situations, perhaps displaying graphical banners or multimedia applications on a web page. ActiveX controls can also run complicated real-time information systems on pages: temperature measurements, financial tickers, or simply news feeds actively updating themselves. ActiveX controls can access data servers directly using protocols far more sophisticated than anything standard HTTP can handle. It's an important concept to understand in the development of distributed object computing.

ActiveX looks at a browser in a different way than you might imagine: it simply considers it a container with the ability to hold and display ActiveX controls. Many of the internet's most impressive interactive objects are in fact ActiveX controls, and they represent a way for developers to push beyond the static, simple pages supported by the Hypertext Transfer Protocol. One of the downsides is cross-compatibility, which relies heavily on the client browser's ability to download the required components locally and keep them up to date. When a browser visits an ActiveX site for the first time there can be a significant delay while core components are downloaded; updates and additional installations, however, are usually performed quickly in the background.

The controls have the additional advantage of combining well with the user interface of most common platforms. The simplest case is traditional Windows systems, as ActiveX is based on COM technology, which is already incorporated within MS Windows. Microsoft has been very proactive in cross-platform support, though, and the Active Platform technology has also been extended to work with other operating systems such as Macintosh, Unix and Linux. There is also a scripting technology called Active Scripting which can be used on all these platforms to control and integrate ActiveX objects from the server or the client.

Microsoft has attempted to prevent technological conflicts by allowing ActiveX components to interact and work alongside their main competitor, Java, to some extent. Remember, though, that for security reasons Java applets run within their own virtual machine on the user's computer. ActiveX requires greater access to the operating system and so cannot operate within this virtual sandbox; although calls can be made between components, their interaction is therefore somewhat limited.

Additional Reading

Networks, Proxies and VPNs in Distributed Computing – http://www.proxyusa.com/

HTTP (Hypertext Transfer Protocol)

For many of us a network is either our little home setup, perhaps a modem, a wireless access point and a few connected devices, or that huge global network: the internet. Whatever the size, all networks need to allow communication between the devices connected to them. Just as human beings need languages to communicate, so do networks, only in this context we call them 'protocols'.

The internet is built primarily on the TCP/IP protocols, which transport information between 'web clients' and 'web servers'. Transport alone is not enough to deliver media-rich content to our web browsers, though, and a host of secondary protocols sit above the main transport protocol; the most important one, which enables the world wide web, is called HTTP.

This provides a method for web browsers to access content stored on web servers, which is created using HTML (Hypertext Markup Language). HTML documents contain text, graphics and video, but also hyperlinks to other locations on the world wide web. HTTP is responsible for processing these links and enabling the client/server communication which results.

Without HTTP the world wide web simply wouldn't exist, and if you want to see its origins search for RFC 1945, where you'll find HTTP defined as an application-level protocol with the lightness and speed necessary for distributed, collaborative, hypermedia information systems. It is a stateless, generic, object-oriented protocol which can be used for a huge variety of tasks. Crucially it works across platforms, which means it doesn't matter which operating system your computer runs (Linux, Windows or Mac, for instance): you can still access web content via HTTP.

So what happens? When someone types a web address into the address field of their browser, the browser attempts to locate that address on the network it is connected to. This can be a local address, but more commonly the browser looks out onto the internet for the designated web server. HTTP is the command-and-control protocol which enables communication between the client and the web server, allowing commands to be passed between the two. HTML is the formatting language of the web pages which are transferred when you access a site.
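It is worth seeing that this exchange is nothing more than plain text. The sketch below builds an HTTP/1.0 GET request by hand and parses a sample response; no network is used, the host name is a placeholder, and the response is an invented example of what a server might return.

```python
# An HTTP/1.0 exchange is just lines of text separated by CRLF.
# Hand-built request (www.example.com is a placeholder host):
request = (
    "GET /index.html HTTP/1.0\r\n"
    "Host: www.example.com\r\n"
    "\r\n"                          # blank line ends the headers
)

# A minimal response a server might send back:
response = (
    "HTTP/1.0 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

# Headers and body are split by the first blank line.
header_block, _, body = response.partition("\r\n\r\n")
status_line = header_block.split("\r\n")[0]
version, status_code, reason = status_line.split(" ", 2)
print(status_code, reason)  # 200 OK
print(body)                 # the HTML the browser renders
```

The browser's job is then simply to render that HTML body, following any hyperlinks it contains with further GET requests of exactly the same shape.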

The HTTP connection between the client and server can be secured in two specific ways: using Secure HTTP (S-HTTP) or the Secure Sockets Layer (SSL), both of which allow the information transmitted to be encrypted and thus protected. It should be noted, though, that the vast majority of communication is standard HTTP, transmitted insecurely in clear text, which is why so many people use proxies and VPNs to protect their connections.
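As a rough sketch of the SSL side, Python's standard ssl module shows what a client sets up before wrapping its HTTP traffic in an encrypted channel. No connection is actually made here; the example.com host in the comment is a placeholder.

```python
import ssl

# A default client context enforces certificate verification and
# host-name checking, which is what protects the otherwise
# clear-text HTTP exchange from eavesdropping and impersonation.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# A real connection would then look something like:
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
```

Everything inside the wrapped socket is the same plain-text HTTP as before; only the channel carrying it is encrypted.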