The feature story that follows was originally published in 1997. When I wrote this article as the electronics editor of Popular Mechanics, the Web was just six years old. Dial-up connections ruled. There was no Google, no Facebook, no Wikipedia, no YouTube, no Twitter. There was no Hulu, no Spotify, no Instagram. There was no cloud computing. There were no smartphones. Netflix was founded that year, but as a rent-by-mail DVD company, not the internet behemoth it is now, slinging trillions of bits over the Net every day—hundreds of thousands of times the total internet traffic of 1997.
Back then, the internet was used mainly for email, newsgroups, and to access the relative handful of websites that were around, like bookseller Amazon, eBay, and the Yahoo, Lycos, and Excite portals. Was the internet dying? No. That was hyperbole. But it was at a crossroads. Something had to be done to accommodate the incredible increase in data traffic we knew was coming and the looming shortage of address space. Soon after that article ran, the internet backbone was expanded and strengthened. Tens of thousands of miles of undersea fiber-optic cables were installed, truly making the Web worldwide.
Since then, data-intensive providers like Netflix, Google, and Facebook have changed the way data is routed over the internet by building private content-delivery networks (CDNs) that run in parallel to the internet backbone. These CDNs eliminate choke points and deliver data quickly, providing uninterrupted video to your smart TV or to smartphones in far-flung corners of the globe.
And the IPv6 addressing scheme, introduced in 1998, is being implemented with enough capacity for billions of unique addresses for every living person. Overkill? Perhaps. But consider how popular such devices as smart speakers and connected thermostats and lights are today. Then remember that the Internet of Things is still in its infancy, much like the internet itself was in 1997. Tomorrow’s applications haven’t even been dreamed up yet. Whatever they are, you can be sure that the internet will be ready to take on the challenge. It won’t die. We can’t let it.—Brian C. Fenton
The internet is at a crossroads. Rather than surfing the Net, many users find themselves wading through the mud, frustrated by delays. Is this a permanent issue? Or are remedies being found in “internet time,” where changes occur overnight?
Considering the demands that have been placed on the internet—the number of users is growing at an annual rate of about 200 percent—it’s remarkable that it has managed to keep up at all. In addition, the range of services that the internet is providing was never envisioned by the people who developed what has grown into today’s Net. The internet began in the late 1960s as a project by the Advanced Research Projects Agency, or ARPA. The main goal of the ARPAnet was to experiment with ways to link university research centers and high-tech defense contractors together. The original ARPAnet linked four computers—at the University of California at Santa Barbara, UCLA, the University of Utah, and the Stanford Research Institute. From there, the internet grew slowly but steadily through the 1970s and 1980s.
One reason ARPAnet was able to grow into today’s internet was its ability to interconnect networks even if they use different local networking protocols, such as Ethernet, Netware, or AppleTalk. The common language that allowed the networks to interconnect is TCP/IP, which stands for Transmission Control Protocol/Internet Protocol.
TCP/IP owes its structure to the internet’s heritage as a Defense Department project. The protocol was devised to ensure that messages of any length could be sent from one computer to another even if parts of the network were inoperative—if the country were under nuclear attack, for example.
Although much of the internet consists of dedicated phone lines owned by traditional telecommunications companies, the technology that allows data to be sent from one computer to another on the internet is far different from a standard phone call.
The telephone network is a connection-oriented, circuit-switched network, while the internet is a connectionless, packet-switched network. When you make a telephone call, the switches at the telephone company’s central office set up what becomes a dedicated line between you and the person you call, for the duration of the call. While you’re using the line, no one else can, and if there’s a problem on the network, you lose your connection.
TCP is a packet-switched networking protocol. It breaks each message into variable-length packets and inserts a header to indicate which message each packet is part of, where the message came from, and where it is going. IP is the addressing part of the protocol suite. It routes the packets from the sender to the recipient, making an effort to find the shortest route available. At the receiving side, TCP software collects the packets, extracts the data, and puts it back in the proper order. If some packets are missing, the sender is asked to retransmit them. This turns out to be a very efficient way to move files and messages, but it’s not the best way to send such data as real-time audio and video—you can never be sure that the packets will arrive at their destination in the right order because they might travel via different paths. Along the way, special computers called routers examine the packets and pass them from one node to another until they arrive at their destination. Separately, domain-name servers translate human-readable addresses (such as popularmechanics.com) into the 32-bit numeric IP addresses (such as 126.96.36.199) that routers actually use.
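The split, scramble, and reassemble behavior described above can be sketched in a few lines of Python. This is a toy illustration of the idea, not real TCP: it numbers each packet, shuffles their order to mimic packets traveling different paths, and restores the original message by sorting on the sequence numbers.

```python
import random

def packetize(message: bytes, size: int) -> list[tuple[int, bytes]]:
    """Split a message into (sequence_number, payload) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Put packets back in order by sequence number and join the payloads."""
    return b"".join(payload for _, payload in sorted(packets))

msg = b"Hello from 1997 -- the internet is at a crossroads."
packets = packetize(msg, 8)
random.shuffle(packets)            # packets may arrive out of order...
assert reassemble(msg_packets := packets) == msg  # ...but the message survives
```

The sequence number in each packet header is what lets the receiver rebuild the message no matter what order the packets arrive in.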
The 32-bit addressing scheme is one of the most tangible examples of how the internet is bursting at the seams. At the current rate of growth, the internet will run out of addresses in a little more than ten years.
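The arithmetic behind that worry is straightforward: a 32-bit address can distinguish only 2^32 values, about 4.3 billion, before the space is exhausted. A short sketch comparing it with the 128-bit IPv6 space discussed later in this article:

```python
# A 32-bit address space holds 2**32 distinct values.
total_ipv4 = 2 ** 32
print(f"IPv4 addresses: {total_ipv4:,}")    # 4,294,967,296

# IPv6 uses 128-bit addresses -- the jump is astronomical.
total_ipv6 = 2 ** 128
print(f"IPv6 addresses: {total_ipv6:.3e}")  # about 3.4e+38
```

Four billion sounds like plenty until every person, and eventually every device, wants an address of its own.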
To be fair, no one could have envisioned the number of people who would be using the internet today, or the varied uses that it would be put to. Remote access, file transfer, and email were the reasons the internet was created. Email is still the No. 1 reason that people access the internet. But the World Wide Web—which didn’t even exist until early this decade—is catching up fast because it gives point-and-click access to virtually all internet resources.
When the Web was developed, it was seen as a tool for serious research and educational exchanges. It is still used for that, but who could have predicted how commercial it would become? In fact, the internet, as originally conceived, forbade advertising. But now that the government is officially out of the internet business—the backbone is run entirely by commercial interests—almost anything goes.
Along with its ease of use, another attribute that has contributed to the Web’s popularity is its cross-platform compatibility. A correctly designed webpage can be accessed on a Unix workstation, a Macintosh, or an IBM-compatible PC with equal results.
With the Web so large, how do you get your page noticed? Some developers feel that one good way is to make it as splashy as possible. Bandwidth conservationists would disagree—as would many users, after waiting several minutes to download a large graphic file when all they’re looking for is some information. Even users with 33.6 kbps modems can get frustrated in a hurry.
To make matters worse, a graphics-intensive page doesn’t slow down information access just for the person downloading it. Remember that each internet data transfer is split into TCP packets and sent down the pipe along with everyone else’s. When that pipe gets full, everyone has to wait.
Consider what happened to Microsoft’s FTP servers after Internet Explorer 3.0 was released: Many of those who were lucky enough just to connect to one of the servers had such long waits that they gave up.
Also, it’s not just the site that you’re accessing that will slow down. Any bits that are being routed to their eventual destination through the site will slow down, too, as they wait to get routed to the next station on the Net.
Slow access isn’t always the fault of external sites. Your internet service provider (ISP) can have a dramatic effect on the speed of your access. Your ISP buys a dedicated line to a larger ISP, which might have yet another dedicated provider, until the line eventually gets to one of the major internet backbones operated by a company like MCI or Sprint. If any provider in the chain hasn’t upgraded sufficiently, you’re going to run into delays somewhere along the line—at least at peak periods when everyone else is trying to send or receive bits, too.
Just as the amount of commercial traffic has taxed the internet, so have some of the new bandwidth-intensive technologies. Want to make a phone call? Do it on the internet. Want to conduct a videoconference? The internet again. Want to control an avatar and roam about virtual worlds interacting with others? Where else but on the internet?
What’s happening on the internet is exemplified by what has happened on some corporate networks where users have installed the PointCast Channel Viewer to gain access to the PointCast Network, a news service that delivers customizable reports over the internet, with regular updates 24 hours a day. Although the PointCast Viewer isn’t particularly bandwidth-intensive itself, it is able to bring a network to a crawl if too many people install the software.
Demand for such applications as PointCast is being fueled by inexpensive rates for unlimited Net access. Users have no incentive to reduce their bandwidth consumption.
Consumer demand will only increase as cheaper access terminals hit the market. With a WebTV box, you don’t need a computer—just a television. Even video-game players can get in on the action with devices that convert their video-game consoles into internet-access devices.
With so many potential bottlenecks, you’ll want to do everything you can to get the best performance. If you’re using a 14.4 kbps modem, upgrading to a 28.8 or 33.6 kbps modem will make download times seem significantly shorter—as long as your ISP supports the faster access. A new modem technology, x2, developed by U.S. Robotics, increases the top download speed of a standard modem to 56 kbps. It takes advantage of service providers whose data servers are connected to the digital telephone network. Leading ISPs, including America Online, Prodigy, Compuserve, and Netcom, are supporting the technology, which is due to hit the market in early 1997.
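A quick back-of-the-envelope calculation shows what those speed steps mean in practice. The 1-megabyte file size is an illustrative assumption, and the figures ignore line noise and protocol overhead:

```python
# Time to download a 1-megabyte file at various modem speeds.
FILE_BITS = 1_000_000 * 8   # 1 MB expressed in bits

for kbps in (14.4, 28.8, 33.6, 56.0):
    seconds = FILE_BITS / (kbps * 1000)
    print(f"{kbps:5.1f} kbps -> {seconds / 60:4.1f} minutes")
```

At 14.4 kbps the file takes over nine minutes; at 56 kbps, under two and a half.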
An ISDN phone line—if you can get one—can speed up your access to 64 or 128 kbps. That’s up to four times the speed of a 28.8 modem—but it often seems faster because the all-digital nature of ISDN ensures that you’ll always connect at the rated speed. Your modem won’t fall back to a slower speed because of noise on the analog phone lines.
The disadvantage of ISDN is that it’s still not available everywhere, and in many places it’s outrageously overpriced—sometimes so high that even businesses can’t justify the expense. Many ISPs don’t support ISDN connections, and it’s still possible that ISDN just won’t ever catch on in a big way thanks to even faster technologies on the horizon.
Cable modems are vying to become the de facto high-speed access device for homeowners and businesses alike. With potential speeds as fast as 40 Mbps—more than 1,000 times that of today’s fastest analog modems—it’s easy to see why cable companies see internet access as a potentially huge moneymaker for them. Unfortunately, cable modems still have a number of obstacles to overcome before you’ll be able to call your local cable company and order up an ultrafast connection. First, cable-modem manufacturers have yet to agree on any standards for the devices. That keeps prices high both for consumers and for the cable companies that are building their infrastructure.
Another reason that cable modems offer only a partial solution is that cable plants were never built for two-way communication—they are built to deliver programming from the head end to subscribers, not to accept incoming data from individuals. Without a two-way cable network, a standard analog modem must be used as a back channel. Although your cable-modem connection might be capable of blazingly fast speeds, you might have to share that bandwidth with up to 2,000 other users on your cable-television feeder line. If everybody else is trying to download large files or conduct videoconferences, you will have little bandwidth left over for yourself to use.
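The sharing problem is easy to quantify using the figures above (a 40 Mbps cable plant and 2,000 users on one feeder line); the even, worst-case split below is an assumption for illustration:

```python
raw_mbps = 40.0      # top speed of the cable plant
subscribers = 2000   # households sharing one feeder line

# Worst case: everyone transfers at once and bandwidth splits evenly.
per_user_kbps = raw_mbps * 1000 / subscribers
print(f"{per_user_kbps:.0f} kbps per user")  # 20 kbps
```

In that worst case, each user gets 20 kbps, slower than a 28.8 modem, which is why the raw speed of the cable plant tells only part of the story.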
The phone companies aren’t placing all their bets on ISDN—especially with the growing threat of competition from cable companies. That is where Asymmetric Digital Subscriber Line (ADSL) technology comes in.
ADSL can conceivably pass data at a rate of up to 9 Mbps, and can do it over normal telephone lines depending on a large variety of factors, including the length of your local telephone loop.
Unfortunately, when these high-speed technologies become widely available at the consumer level, the bandwidth crunch will just get worse unless the internet’s underlying infrastructure is improved. Any improvements, however, won’t come for free—service providers who are operating on razor-thin margins have little incentive to upgrade, especially because the performance they can offer is still limited by the other hosts that they’re connected to. One technology that could potentially have a dramatic impact, however, is known as Asynchronous Transfer Mode (ATM) switching.
ATM addresses the cause of the biggest backups on the internet: the routers that direct email messages, webpages, and files on their way from source to destination, one hop and one packet at a time. ATM, by contrast, takes a connection-oriented approach. A message can speed through an ATM switch faster because, in effect, the entire transmission has been preaddressed with its own route.
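The contrast can be sketched as a toy in Python. This is an illustrative analogy, not real router or ATM software: the routing table must be consulted for every packet, while the virtual circuit is set up once and every subsequent cell carries only a short circuit ID.

```python
# Hop-by-hop routing: every packet triggers a table lookup at every router.
routing_table = {"popularmechanics.com": "next-hop-A"}

def route_packet(destination: str) -> str:
    """Look up the next hop for each packet individually."""
    return routing_table[destination]

# ATM-style virtual circuit: the route is negotiated once at setup time;
# afterward, each cell carries only a small circuit ID for the switch.
circuits: dict[int, str] = {}

def setup_circuit(circuit_id: int, out_port: str) -> None:
    """Preaddress the whole transmission with its own route."""
    circuits[circuit_id] = out_port

def switch_cell(circuit_id: int) -> str:
    """Forward a cell along the preestablished circuit."""
    return circuits[circuit_id]

setup_circuit(7, "port-3")
print(route_packet("popularmechanics.com"))  # next-hop-A
print(switch_cell(7))                        # port-3
```

The per-cell work in the circuit case is a single trivial lookup, which is the essence of why an ATM switch can be faster than a router making a fresh forwarding decision for every packet.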
While ATM is optimized for carrying such multimedia traffic as real-time audio and video, traditional routers are faster and more efficient at getting email and files through the Net. Presumably, finding the right mix of routers and ATM switches could ease delays substantially.
One conundrum is that adding bandwidth to the network is only a temporary solution. Just as new lanes on highways attract more cars, faster internet connections draw more users—and things slow down even more than before.
One proposed remedy for the internet is to create a sort of toll road on the Information Superhighway—to have users who want to make sure their important messages get through pay an extra charge. Right now, the internet is democratic to a fault. The junk email message from a spammer gets the same treatment as an email message from the president. However, with RSVP, the Resource Reservation Protocol, and RTP, the Real-time Transport Protocol, some messages can get priority service.
Currently under development is IPv6, or IP version 6. The new version changes how packets are identified and includes bits to indicate priority.
Just as many municipalities have added carpool lanes to boost transportation capacity, internet developers are looking for ways to boost the efficiency and, thus, the capacity of internet links.
IP multicasting is one technique that promises to conserve bandwidth by sending a single data stream to multiple users, rather than establishing a separate point-to-point data stream for each user.
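The saving is simple arithmetic. The stream rate and audience size below are illustrative assumptions, not figures from the article:

```python
stream_kbps = 128   # one audio/video stream (illustrative)
viewers = 500       # audience size (illustrative)

unicast_total = stream_kbps * viewers  # one copy of the stream per viewer
multicast_total = stream_kbps          # one shared copy on each link

print(f"Unicast:   {unicast_total:,} kbps")   # 64,000 kbps
print(f"Multicast: {multicast_total:,} kbps") # 128 kbps
```

With multicasting, the sender transmits the stream once and routers duplicate it only where paths to viewers diverge, instead of the source pushing 500 identical copies onto the Net.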
Although the internet’s growing pains are obvious, and the solutions aren’t all easy to implement, the internet is on its way to becoming as important as the telephone network—too important to be allowed to become a victim of its own success.
A Few Words From Vint Cerf, the Father of the Internet
The internet’s demise has been repeatedly forecast by any number of storm crows over the years. There is the 1997 prediction based on an increasingly loaded network spurred by the rapid growth of users, servers, websites, and content. There were earlier predictions of a “Gigalapse” by Ethernet inventor Robert Metcalfe. Others thought ATM or Frame Relay would replace TCP/IP. Then came Multiprotocol Label Switching.
Of course, there were many years during which it was expected by many, including governments, that the Open Systems Interconnection (OSI) standards would prevail over the internet’s TCP/IP. Interestingly, as the 2000s arrived, high-speed broadband cable and fiber modems, along with digital subscriber loops, promised higher capacity in the core and access components of the internet. Other challenges have appeared. In 2011, the primary source of IP address space, the Internet Assigned Numbers Authority (IANA), ran out of freely available 32-bit IPv4 addresses. By that time, however, the 128-bit IPv6 address and packet format had long been standardized but not very widely implemented. That is still a challenge, as only about 30 percent of the internet is estimated to be configured to support IPv6. Most edge devices and routers have the necessary software, but many ISPs have not turned it on.
There are additional challenges surfacing but these are less about basic technology than they are about abusive behaviors found especially in online social media but more generally in all layers of the internet and the World Wide Web. The software- and network-driven Internet of Things offers a huge attack surface that is already being exploited by bad actors. Disinformation, misinformation, fraud, propaganda, and other content ills are infecting popular information channels, challenging users to think far more critically about the quality of information they receive or discover online.
This 1997 feature was reprinted in the March 2019 issue.