Colloquium #2: May 29, 2001

Future Prospects for the Internet


Introduction to Ikeda's "Beyond the Internet" essay

Shumpei KUMON (Executive Director, GLOCOM)

The second stage of the IT revolution has begun, along with the bursting of the "Internet bubble." We are entering a new era in which attempts are being made to construct optical and wireless network systems best suited not to telephony and broadcasting, but to data communications.

These systems will most likely take the form of all-IP networks, in which all kinds of information, including text, voice, and moving pictures, are transmitted via the Internet Protocol (IP). The implementation of all-optical circuits has already started in the core of the network, and is expected to progress to the edge and then to the last mile.

The second stage of the Internet revolution will not end there. What awaits us beyond it? This is exactly the kind of question that GLOCOM colleagues should be interested in. The following essay by Nobuo Ikeda may be read as a summary of our prospects in this respect. We welcome your comments.


Beyond the Internet

Nobuo IKEDA (Research Institute of Economy, Trade and Industry and GLOCOM)

All Roads Lead to IP

The Internet continues to expand, swallowing up all media along the way. Data communication is already based on TCP/IP (Transmission Control Protocol/Internet Protocol). In five years voice will be transmitted over IP, and within ten years video too will be routed by IP, beating out "digital broadcasting," which has been bucking the Internet wave. But this does not mean that IP is optimal for every medium. On the contrary, IP is poorly suited to any use other than data communication.

The Internet works very simply: it encapsulates the original data into IP packets of appropriate length, adds a header that assigns an IP address to each packet, sends the packets independently by tossing them onto the network, and decapsulates them at the receiving end. This mechanism makes efficient use of limited bandwidth, simplifies network control, and makes communication much cheaper than telephone networks.
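
As a rough illustration of this mechanism, the following Python fragment encapsulates data into addressed packets and reassembles them regardless of arrival order. It is a minimal sketch, not real IP: the fixed payload size and per-packet sequence numbers are simplifying assumptions.

```python
MTU = 1500  # hypothetical payload size in bytes

def encapsulate(data: bytes, dst_addr: str) -> list[dict]:
    """Split `data` into packets, each carrying its own header."""
    return [
        {"dst": dst_addr, "seq": i, "payload": data[i:i + MTU]}
        for i in range(0, len(data), MTU)
    ]

def decapsulate(packets: list[dict]) -> bytes:
    """Reassemble the original data; packets may arrive out of order."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = encapsulate(b"x" * 4000, "192.0.2.1")
assert decapsulate(list(reversed(packets))) == b"x" * 4000  # order is irrelevant
```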

But IP is not suitable for transmitting continuous signals such as voice and video. The telephone, for example, is a medium that connects lines to transmit voice in real time; IP is essentially connectionless and does not guarantee simultaneity. IP falls behind conventional television in image transmission, because continuous video signals are broken into packets and compressed to unreasonable levels. And IP's "best effort" architecture cannot guarantee quality of service (QoS)*1.

However, this situation will change fundamentally within ten years because of the explosive expansion of bandwidth. In the days when mainframe computers dominated, data were, in principle, processed locally, because data transmission was frustratingly slow. The arrival of personal computers with modems operating at tens of kilobits per second paved the way for dial-up services. Today the LAN transmission speed available to average users (100Mbps) is comparable to that of a PC's internal data bus, and data are shared on servers and processed over LANs.

Computational capacity doubles every 18 months, as "Moore's Law" has it; we now also have a proposed "Gilder's Law," named after the technologist George Gilder, which holds that the bandwidth of optical fibers doubles at least every six months. Unlike TDM (Time Division Multiplexing), where data transmission hits a ceiling around 80Gbps due to the properties of semiconductor lasers, WDM (Wavelength Division Multiplexing) technology, in practical use since the latter half of the 1990s, offers a theoretically unlimited amount of bandwidth by increasing the number of wavelengths (lambdas) carried over a single optical fiber.

This technology has achieved 28 petabits/sec (10Gbps x 2.8 million channels) in the laboratory, while the maximum bandwidth of optical fiber in actual use is 32 terabits/sec. This means that the world's total traffic of every kind could fit into less than a thousandth of a single fiber's potential capacity. The cost per bit is approaching zero. Supposing that Moore's and Gilder's Laws both hold in the coming years, communication speeds will improve 10,000 times faster than computation speeds: LAN speeds will reach 100Tbps (a million times higher) by 2010, while the PC's data bus will reach only 100Gbps (a hundred times larger) over the same period (see Fig.1). Computers have already undergone this kind of drastic innovation: notebook personal computers now have processing capacity equivalent to the mainframes of a decade ago. There is no reason to doubt that the same thing will happen in the world of communication.
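
The arithmetic behind these projections can be checked directly. The sketch below assumes the essay's own figures: a 100Mbps LAN today, a PC data bus starting around 1Gbps (inferred from the "hundred times larger" endpoint of 100Gbps), and the stated doubling periods.

```python
# "Gilder's Law" (bandwidth doubles every 6 months) versus "Moore's Law"
# (computing capacity doubles every 18 months), over the decade to 2010.

def growth_factor(months: float, doubling_period: float) -> float:
    """How many times a quantity multiplies over `months`."""
    return 2 ** (months / doubling_period)

lan_factor = growth_factor(120, 6)    # ~1,048,576: "a million times higher"
bus_factor = growth_factor(120, 18)   # ~102:       "a hundred times larger"

print(f"LAN by 2010: {100e6 * lan_factor:.0e} bps")       # ~1e14 = 100 Tbps
print(f"Bus by 2010: {1e9 * bus_factor:.0e} bps")         # ~1e11 = 100 Gbps
print(f"Gap widens by: {lan_factor / bus_factor:,.0f}x")  # ~10,000x
```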

Fig.1: Bandwidth of LANs and PCs

From Packet to Lambda

No expert questions that the time of "everything over IP" will eventually arrive. The Japanese government has announced its target of spreading "ultra-high-speed Internet to ten million households within five years." For the government, "ultra-high-speed" means 100Mbps-grade access, mainly over optical fiber. If that ever happens, voice and video will be transmitted freely without difficulty: voice needs approximately 30kbps, and even TV-grade video occupies only about 4Mbps. Provided transmission speeds sufficiently exceed these figures, the loss from encapsulation is negligible. When all information is transmitted through optical fibers, it is far more efficient to process all data over IP than to process it separately over subdivided bandwidth for telephone, data communication, and television.
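
A rough bandwidth budget makes the point concrete. In the Python sketch below, the per-stream figures come from the text; the 5% header overhead is an illustrative assumption standing in for the "negligible" encapsulation loss.

```python
ACCESS_BPS = 100e6       # 100 Mbps "ultra-high-speed" optical access
VOICE_BPS = 30e3         # per voice call, per the essay
VIDEO_BPS = 4e6          # per TV-grade video stream, per the essay
HEADER_OVERHEAD = 0.05   # assumed encapsulation loss (illustrative)

usable = ACCESS_BPS * (1 - HEADER_OVERHEAD)
print(f"Concurrent voice calls: {int(usable // VOICE_BPS):,}")  # 3,166
print(f"Concurrent TV streams:  {int(usable // VIDEO_BPS)}")    # 23
```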

The question starts there. If almost all processing is carried out over the network in the future, the Internet will evolve into a kind of large parallel-processing computer. IP, however, was not designed for this situation: it is a mechanism that encapsulates data to share the network efficiently, on the assumption that the processing capacity of computers is large while communication bandwidth is limited. That is why the capacity of routers and switches is becoming a bottleneck as network capacity increases phenomenally. At bandwidths over a terabit, IP routing, which converts optical signals into electric signals and inspects IP headers one by one, cannot possibly keep up with the load.

For that reason, the latest networks are inclined to adopt optical switches*2, which use mirrors to switch light without converting optical signals into electric signals. The unit of communication is the lambda (tens of Gbps), not the packet. In Fig.2, the optical switch in the lambda router branches the lambda into regional metro networks, and an edge router converts it into electric signals and carries out IP routing. In the core network, the data format does not matter, because data is physically transmitted as a bit stream. The edge routers at both ends play roles similar to the caller and receiver of the current telephone system: it is "circuit switching" between routers.

Fig.2: All-Optical Network
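
The division of labor in Fig.2 can be caricatured as follows. This is a toy model with made-up names, not a real API: the core forwards whole lambdas through a fixed cross-connect table without touching packets, and only the edge router inspects the contents.

```python
CROSS_CONNECT = {"lambda-1": "metro-east", "lambda-2": "metro-west"}

def lambda_switch(wavelength: str, bit_stream: bytes) -> tuple[str, bytes]:
    """Core: pure circuit switching; the payload is an opaque bit stream."""
    return CROSS_CONNECT[wavelength], bit_stream  # no per-packet work at all

def edge_router(bit_stream: bytes) -> list[bytes]:
    """Edge: convert to electric signals and do conventional IP routing."""
    return bit_stream.split(b"|")  # stand-in for delineating and reading packets

metro, stream = lambda_switch("lambda-1", b"pkt1|pkt2|pkt3")
print(metro, edge_router(stream))  # metro-east [b'pkt1', b'pkt2', b'pkt3']
```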

Rapid advances in optical communications technology are about to transform the structure of the telecommunications industry. The immense networks held by AT&T and WorldCom no longer provide competitive advantage; they are fast becoming a burden that erodes those companies' edge in pricing, and both have spun off their long-distance call divisions. AT&T plans to replace all of its core networks with a distributed optical mesh connected by optical switches. The distinction between core and edge is not sharp: when the price of optical devices falls, metro networks will adopt optical switches, IP routers will then be eliminated, and "switchless communication," in which all users are connected directly by optical fiber, might eventually emerge.

Telecommunications and broadcasting will likely be fully integrated over IP by 2010 or so. After that, technologies beyond IP will be deployed in core networks, though they too will take time to come into wide use. The technology is already available today; the cost of "last mile" access, however, might be an obstacle. On the other hand, given that Usen Broad Networks, a Japanese startup, commenced 100Mbps-grade service in Tokyo this year, the spread of gigabit optical access within metropolitan areas is not so far off for Japan.

The key to the all-optical network is a method of identifying terminals without IP addressing. The situation resembles everyone in the world being connected to one Ethernet, so communication by the MAC address (data link layer) attached to each device may be appropriate. Interoperability of data may be unified by XML (presentation layer) or Java (application layer). With these standard processing schemes, the underlying transmission protocol can be anything: IP, telephone circuits, or optical switching. Ideally, by abstracting the underlying layers, users should not need to know about anything other than what is visible to them. The network should be as transparent as air.
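
This layering idea can be sketched as follows. All names here are hypothetical; the point is only that the application addresses a device by a flat, MAC-like identifier and exchanges self-describing XML, while the underlying transport is swapped in transparently.

```python
from typing import Callable

Transport = Callable[[str, bytes], None]

def via_ip(dst: str, data: bytes) -> None:
    print(f"[IP]      routed packets to {dst}: {data!r}")

def via_optical(dst: str, data: bytes) -> None:
    print(f"[optical] circuit-switched lambda to {dst}: {data!r}")

def send(dst_mac: str, xml_payload: str, transport: Transport) -> None:
    """The application sees only the device ID and the XML payload;
    the lower layers are abstracted away, 'as transparent as air'."""
    transport(dst_mac, xml_payload.encode())

send("00:1a:2b:3c:4d:5e", "<order item='book'/>", via_ip)
send("00:1a:2b:3c:4d:5e", "<order item='book'/>", via_optical)
```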

As the cost of the network nears zero, each individual will have greater bandwidth than telephone companies and broadcasting stations command today. The broadband network currently used for business will spread to entertainment, such as video software and games, as prices fall. Terminals will become single-function "information appliances," and P2P (peer-to-peer) architecture -- technology that directly connects clients all over the world -- will dominate over the centrally controlled services that distribute content as broadcasters do today. When authoring a documentary from visual archives, for example, one will be able to edit video materials virtually from all over the world, by pointing at their URLs and describing how to present them in XML*3. In a sense, the network itself becomes one big visual archive. And, as Intel proposes, the network can be used as a massive parallel computer through a P2P system.
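
The "virtual editing" described here might look roughly like the sketch below, which assembles a SMIL-like playlist (see footnote *3) out of clip references. The URLs are hypothetical and the output is for illustration, not a validated SMIL 2.0 document; no video is ever copied, since the edit is just references.

```python
clips = [
    ("http://archive.example.org/tokyo-1964.mpg", "0:00", "0:45"),
    ("http://archive.example.net/interview.mpg", "2:10", "3:00"),
]

def to_smil(clips: list[tuple[str, str, str]]) -> str:
    """Render clip references as a SMIL-like XML presentation."""
    body = "\n".join(
        f'      <video src="{src}" clipBegin="{b}" clipEnd="{e}"/>'
        for src, b, e in clips
    )
    return f"<smil>\n  <body>\n    <seq>\n{body}\n    </seq>\n  </body>\n</smil>"

print(to_smil(clips))
```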

Beyond the Internet or a New Internet?

However, this vision of the future network can look like a retreat to the old telephone system. Optical switches and broadband "optimizing" mechanisms such as MPLS, RSVP, and CDNs*4 may create "islands" of proprietary networks within the Internet. As some argue, this may endanger the end-to-end (e2e) architecture of the Internet that freed users from the control of big telecom operators*5. If e2e is dead, IP version 6 -- to which the Japanese government has committed itself strongly -- will be useless, because the next-generation network will not be the Internet as we know it.

Even today it is dubious whether IPv6 alone can solve the "crisis" of the shortage of IP addresses. Most hosts do not need global addresses, thanks to technologies such as NAT (Network Address Translation), which translates local addresses into global ones. Moreover, IPv6 will not be very useful while less than one percent of the world's routers implement it. Even if v6 is adopted as quickly as v4 was, it will take 20 years to prevail as the international standard. If our calculations are correct, everybody will have terabits of bandwidth by 2020. Will they still need packet switching?
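
For readers unfamiliar with NAT, here is a minimal sketch of the mechanism: many local hosts share one global address, and the translator records a mapping so that replies find their way back. The addresses and the port-based bookkeeping are illustrative simplifications of what real translators do.

```python
GLOBAL_ADDR = "203.0.113.1"      # the one shared global address
nat_table: dict[int, str] = {}   # external port -> internal host
next_port = 10000

def outbound(local_addr: str, payload: str) -> tuple[str, int, str]:
    """Rewrite the source to the shared global address; record the mapping."""
    global next_port
    nat_table[next_port] = local_addr
    port, next_port = next_port, next_port + 1
    return GLOBAL_ADDR, port, payload

def inbound(port: int, payload: str) -> tuple[str, str]:
    """Translate a reply back to the internal host behind the NAT."""
    return nat_table[port], payload

src, port, _ = outbound("192.168.0.7", "GET /")
print(inbound(port, "200 OK"))  # ('192.168.0.7', '200 OK')
```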

Of course, IP will not be wiped out completely anytime soon. In wireless communication, at least, IP may survive, because radio spectrum can never be truly abundant. As wireless LANs such as IEEE 802.11 become the universal wireless access technology, mobile phones will be replaced by wireless Internet terminals connected to wireless LANs. IP might then become the crucial technology connecting wired and wireless communications at the edge.

All-optical communication and computation will not be possible in the foreseeable future. Optical systems have great speed but lack flexibility: they are poorly suited to store-and-forward communication such as packet switching, because memories that can store optical signals are difficult to build. Moreover, optical computers that carry out internal processing optically have yet to come into practical use, so optical signals must still be converted into electric signals at the edge.

If bandwidth continues to expand exponentially, everybody will have so much of it that no optimization will be needed and the Internet might survive; v6 might then be the best solution when every device in the world communicates with every other device globally, as "IP fundamentalists" argue. Even so, such nirvana will not arrive for ten years, so v6 cannot solve the difficult problems of the transition to broadband Internet. And once ISPs deviate from e2e, it will become increasingly difficult to return to the "pure" Internet. Another possible solution is payment systems; although the topic has been much discussed on the Internet, nothing has yet been done.

These problems have profound implications for communications policy. If optimization through vertical integration deviates from the Internet and stifles competition, there will be some need for regulation forcing telecom operators to "unbundle" their facilities to competitors. But if there is scope for a new network beyond the Internet, the government had better "unregulate"*6 operators to encourage investment. Although no superior alternative to IP exists at the moment, it might be better to search for a completely new open architecture for the all-optical network than to keep improving IP.

A basic principle of economics is to save scarce resources and waste abundant ones. In the days when computational resources were scarce, mainframe computers were enshrined at the centers of buildings and operated 24 hours a day, and people waited their turn for batch processing. Today each person has a personal computer more powerful than the early mainframes and leaves most of its abundant computational capacity untapped. Similarly, the days of scarce bandwidth saw innovations to use bandwidth efficiently, by switching calls through telephone exchanges or packing data into IP packets. With abundant bandwidth, such technologies will no longer be necessary. Since the greatest scarcities will be users' time and creators' creativity, bandwidth will be wasted like air.


*1 To be precise, the problem of QoS is due not to IP but to TCP. There are many attempts to improve QoS, such as MPLS (Multi-Protocol Label Switching) and RSVP (Resource Reservation Protocol).
*2 Strictly speaking, these devices do not "switch" lambdas but "cross-connect" them along fixed routes; technologies for switching lambdas by changing wavelengths are under development. Cf. G. Gilder, Telecosm, Free Press, 2000.
*3 SMIL 2.0, a new markup language written in XML, enables users to produce and edit TV-quality video.
*4 Content Delivery Network: a system that delivers content faster by storing it in local cache servers.
*5 M. Lemley and L. Lessig, "The End of End-to-End: Preserving the Architecture of the Internet in the Broadband Era", http://lawschool.stanford.edu/e2e/papers/Lemley_Lessig_e2epaper.pdf.
*6 A term coined by the FCC (Federal Communications Commission), meaning forbearance from regulation.
