Quality of Service, IPv6 and middle boxes expected to make the Internet less foggy

Amsterdam, 05 March 2001 - At the first edition of the Global Grid Forum Conference, held 4-7 March in Amsterdam, Brian Carpenter, IBM Programme Director of Internet Standards & Technology, delivered a keynote on future networking and peer-to-peer computing. Chairman of the Internet Society and equally involved in the Internet Engineering Task Force (IETF) Working Group for Differentiated Services, the speaker painted a realistic picture of the hard work still needed to solve today's problems: congestion, the chronic lack of address space, and packet blockages or even hijackings caused by firewall protection and interception via gateways and proxies. Mr. Carpenter offered a three-fold solution: Quality of Service control, fast deployment of IPv6, and the introduction of middle boxes for Grid and P2P computing.

In its current state, the Internet offers little or no transparency for smooth, professional networking. Which solutions will actually be deployed in the near future is difficult and dangerous to predict, according to Mr. Carpenter, who has learnt from previous experience: back in 1984 he was already arguing for open systems standards and network management tools to tackle the problem of protocol diversity. Unlike in the past, when IP addresses were plentiful and peer-to-peer use was widespread, we now face a rampant growth of ambiguous and temporary addresses, making it hard to find out with whom one is communicating and where that person is located. So far, the dream of Open Systems Interconnection (OSI) has not come true.

Mr. Carpenter stressed that deployment of IPv6, the next-generation Internet Protocol, is essential to expand the address space. Next to this, a decent Quality of Service technology will be crucial to manage network congestion. Each Internet user sets off a chain reaction in which everything connects to almost everything else, as the speaker explained: performance is influenced by routing, which is affected by addressing, which depends on the choice of ISP; ISP performance in turn is directly linked to user behaviour, which affects the load, which causes congestion, and so on. These non-linear interactions in networking therefore call for advanced modular technology and an evolutionary approach rather than an integrated solution in a monolithic architecture.
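The non-linear character of this chain can be illustrated with a toy model (not from the talk; the capacity, retry factor and goodput curve are invented for the sketch): below capacity the loop is stable, but above it lost packets trigger retries that add to the very load that caused the loss.

```python
# Toy model of the feedback loop: more offered load -> more congestion
# -> lower goodput -> retransmissions -> even more load.
def goodput(offered_load: float, capacity: float = 1.0) -> float:
    """Delivered traffic: linear up to capacity, collapsing beyond it."""
    if offered_load <= capacity:
        return offered_load
    # Past saturation, queues overflow and useful throughput decays.
    return capacity * (capacity / offered_load)

def simulate(base_load: float, steps: int = 10) -> float:
    """Iterate the feedback loop: half of all lost packets are retried."""
    load = base_load
    for _ in range(steps):
        lost = load - goodput(load)
        load = base_load + 0.5 * lost
    return goodput(load)

print(simulate(0.8))  # below capacity: the loop is stable
print(simulate(1.2))  # above capacity: retries amplify the congestion
```

The point of the sketch is the asymmetry: a 20 percent overload does not cost 20 percent of throughput, it spirals into a much larger loss, which is why congestion control is treated as a systems problem rather than a local tuning knob.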

Mr. Carpenter proposed two workable approaches in which the first router encountered has to classify the network traffic; to this purpose, a control manager in a policy system is needed. Classification can be done with Integrated Services, using RSVP signalling to handle millions of micro-flows in millions of service classes, or by means of Differentiated Services, where a host of micro-flows share one single service class. In the IETF working group, Mr. Carpenter is focusing on this second approach, for which the standards have just been defined. The policy system has to include a user interface, a repository database and a Quality of Service policy manager.
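The Differentiated Services idea can be sketched in a few lines: the first-hop router collapses many micro-flows into a handful of shared classes, marked with a code point the rest of the network honours. The DSCP values below are the standard ones; the classification rules themselves are invented for the illustration.

```python
# Sketch of DiffServ-style classification at the first-hop router.
# DSCP code points are real (EF = 46, AF11 = 10, Best Effort = 0);
# the matching rules are hypothetical examples.
EF, AF11, BE = 46, 10, 0

def classify(flow: dict) -> int:
    """Map one micro-flow to a shared service class."""
    if flow.get("realtime") or flow.get("dst_port") == 5060:  # e.g. voice
        return EF
    if flow.get("dst_port") in (80, 443):                     # web traffic
        return AF11
    return BE                                                 # everything else

flows = [{"dst_port": 5060}, {"dst_port": 443}, {"dst_port": 25}]
print([classify(f) for f in flows])  # -> [46, 10, 0]
```

The contrast with Integrated Services is that no per-flow state survives this step: downstream routers see only the class mark, not the millions of individual flows.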

Up to this point, a few links are still missing, such as an Application Programme Interface for both Differentiated and Integrated Services, inter-domain QOS signalling, traffic engineering, congestion control for real-time flows, a tool to track end-to-end QOS and receiver capability, measurement techniques, and field experience, as Mr. Carpenter pointed out. IPv6, in turn, will enable simplified auto-configuration of addresses, along with improved support for site renumbering and easier design of a mobile Internet Protocol. The initial development of IPv6 started in 1995, and widespread use in wireless applications is anticipated by 2008.
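The auto-configuration Mr. Carpenter mentioned is worth making concrete: under stateless address autoconfiguration, a host can derive its own IPv6 address from a router-advertised prefix and its interface's MAC address (the EUI-64 method of RFC 4291), with no DHCP server involved. A minimal sketch, using an example prefix and MAC:

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Form an IPv6 address from a /64 prefix and a MAC (EUI-64, RFC 4291)."""
    octets = bytearray(int(x, 16) for x in mac.split(":"))
    octets[0] ^= 0x02                                    # flip universal/local bit
    eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])  # insert ff:fe
    net = ipaddress.IPv6Network(prefix)
    return net[int.from_bytes(eui64, "big")]

addr = slaac_address("2001:db8::/64", "00:11:22:33:44:55")
print(addr)  # -> 2001:db8::211:22ff:fe33:4455
```

This is one reason IPv6 suits wireless devices: a handset that roams onto a new network can number itself immediately from the local prefix.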

In the meantime, we will face a difficult transitional period in which old IPv4 systems have to communicate with new IPv6 systems through middleware that encapsulates IPv6 in IPv4. For handsets, however, translation is possible by modifying content before sending it to a wireless device, as the speaker stated. As long as Quality of Service is not fully deployed, Mr. Carpenter suggested moving data nearer to the user by replicating it at the edge through dedicated Edge Servers. To solve the lack of transparency, we could apply the "walled garden" model, using the Wireless Application Protocol (WAP) and iMode, or use content adaptation boxes.
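The encapsulation step of that transition is mechanically simple: the whole IPv6 packet becomes the payload of an IPv4 packet whose protocol field is 41 (the "6in4" scheme later codified in RFC 4213). A sketch of the framing, with the checksum left at zero and the addresses chosen arbitrarily:

```python
import struct

def encapsulate_6in4(ipv6_packet: bytes, src: bytes, dst: bytes) -> bytes:
    """Wrap an IPv6 packet in a minimal IPv4 header with protocol 41."""
    total_len = 20 + len(ipv6_packet)
    header = struct.pack("!BBHHHBBH4s4s",
                         0x45,          # version 4, header length 5 words
                         0, total_len,  # TOS, total length
                         0, 0,          # identification, flags/fragment offset
                         64,            # TTL
                         41,            # protocol 41 = IPv6 encapsulated in IPv4
                         0,             # header checksum (omitted in this sketch)
                         src, dst)      # IPv4 tunnel endpoints
    return header + ipv6_packet

# A 40-byte dummy IPv6 header tunnelled between two example IPv4 endpoints.
pkt = encapsulate_6in4(b"\x60" + b"\x00" * 39,
                       b"\xc0\x00\x02\x01", b"\xc0\x00\x02\x02")
print(len(pkt), pkt[9])  # 60-byte result; byte 9 is the protocol field, 41
```

The cost is visible in the arithmetic: every tunnelled packet carries 20 extra bytes of IPv4 header, one reason this middleware is a bridge rather than a destination.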

In addition to IPv6, Internet Service Providers can fall back on alternative "under the cover" protocols, like the Border Gateway Protocol (BGP4), an exterior routing protocol for exchanging information between Autonomous Systems. A number of carriers and ISPs use Multiprotocol Label Switching (MPLS) for backbone traffic engineering, to replicate and expand upon the traffic engineering capabilities of ATM or Asynchronous Transfer Mode networks, although there are some complications. Packet over Optical simply sends photons, while Gigabit Ethernet over the Wide Area Network (WAN) sends Ethernet frames over those photons.
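MPLS earns its place in that list by being lightweight: each label stack entry is a single 32-bit word, 20 bits of label plus a few control bits, which routers can switch on without parsing the IP header at all. A sketch of the standard layout (RFC 3032), with example field values:

```python
def mpls_entry(label: int, tc: int = 0, bottom: bool = True, ttl: int = 64) -> bytes:
    """Pack one 32-bit MPLS label stack entry:
    20-bit label | 3-bit traffic class | 1-bit bottom-of-stack | 8-bit TTL."""
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return word.to_bytes(4, "big")

entry = mpls_entry(label=16, tc=0, bottom=True, ttl=64)
print(entry.hex())  # label 16, bottom-of-stack bit set, TTL 64
```

Because labels can be stacked, a carrier can nest one engineered path inside another, which is how MPLS reproduces the virtual-circuit behaviour of ATM on a packet backbone.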

In general, Mr. Carpenter predicted growing use of middle boxes by the industry. A taxonomy is already being developed for Web intermediaries; to date, at least twenty types of middle boxes have been identified. An enormous task lies ahead for the IETF working groups: to find out how applications should communicate with middle boxes and to design pluggable extension services inside them. In any case, the Grid and P2P computing models need to be middle-box ready. For basic QOS control, the tools are already defined but many integration components are still missing. Despite the imminent danger of wrong prophecy, Mr. Carpenter concluded with the estimate that wireless IPv6 is still about two years away.
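What a "pluggable extension service" might look like was not specified in the talk; one plausible reading is a middle box as a chain of small adapters that each transform a message in transit, as in this hypothetical sketch (the adapter names and message fields are invented):

```python
# Hypothetical middle box: a pipeline of pluggable adapter functions,
# each taking and returning a message (here, a plain dict).
from typing import Callable, List

Adapter = Callable[[dict], dict]

def make_middlebox(adapters: List[Adapter]) -> Adapter:
    """Compose adapters into one handler applied to each message in order."""
    def handle(message: dict) -> dict:
        for adapt in adapters:
            message = adapt(message)
        return message
    return handle

def strip_images(msg: dict) -> dict:
    """Content adaptation for a low-bandwidth handset: drop image tags."""
    msg["body"] = msg["body"].replace("<img>", "")
    return msg

def mark_wireless(msg: dict) -> dict:
    """Annotate the message so downstream servers know the device class."""
    msg["headers"] = {**msg.get("headers", {}), "X-Device": "wireless"}
    return msg

box = make_middlebox([strip_images, mark_wireless])
out = box({"body": "hello<img>world", "headers": {}})
print(out["body"], out["headers"])
```

The "middle-box ready" requirement for Grid and P2P applications then amounts to tolerating such transformations: an application that assumes its bytes arrive untouched end-to-end will break the moment an adapter in the path rewrites them.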


Leslie Versweyveld
