As the internet becomes ever more central to society, threats to its health seem to spring out of the woodwork in increasing numbers. Earthquakes at choke points like the one off Taiwan 18 months ago, uncontrolled explosions in video traffic, malicious botnets run by international criminals or even countries, and now… complexity.
A group of network engineers is looking at The Coming Crisis in Routing and Networking. Here are their bullet points:
- Exhaustion of IPv4 address space and its impact on the size of the forwarding table.
- Growth of the default-free FIB has moved beyond the capacity of many popular routers.
- “Churn” resulting from the acceleration of the growth in prefixes advertised in BGP is reaching the point where processors in popular routers can no longer converge forwarding tables between updates.
- The deployment of global network resources (storage and computing) has been forced into NAT and application gateways, even in North America.
- IPv6-enabled networks don’t help until users can run IPv6-only stacks.
- Those deploying IPv6 for wide-area services have encountered problems involving both loss of ‘reachability’ in some cases, and even faster growth of the hardware resources needed.
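The churn point is easy to see with a back-of-the-envelope model: if updates arrive faster than a router can recompute its forwarding table, it never converges. Here is a toy sketch of that arithmetic — all of the numbers and the `converges` helper are illustrative assumptions, not measurements of any real router.

```python
# Toy model of BGP "churn": a router converges only if it can finish
# recomputing its FIB before the next update arrives (on average).
# Numbers are purely illustrative assumptions.

def converges(prefixes, updates_per_sec, entries_processed_per_sec):
    """True if a full FIB recompute fits in the gap between updates."""
    convergence_time = prefixes / entries_processed_per_sec  # seconds per full pass
    inter_update_gap = 1.0 / updates_per_sec                 # seconds between updates
    return convergence_time < inter_update_gap

# Hypothetical route processor handling 1M FIB entries per second:
print(converges(250_000, 2, 1_000_000))   # modest table, low churn  -> True
print(converges(800_000, 5, 1_000_000))   # bigger table, more churn -> False
```

The point of the sketch is that the problem compounds: prefix growth lengthens the recompute pass at the same time that churn shrinks the gap between updates.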
Basically, this boils down to complexity. The growth in internet addresses and the routes between them is outpacing the processing speed of the routers that carry the traffic, and the topology of the internet is becoming too complex for the tools we currently use. IPv6 can help somewhat, but the pathway to IPv6 remains largely untrodden, undebugged, and untrusted.
But I can’t really take this ‘threat’ too seriously. Why? Because it is really just a matter of attention: the solutions are either known or are incremental improvements to existing processes and technology. Increasing processing power at the routing level is straightforward, and IPv6 will eventually reach critical mass one way or another when the need becomes unignorable. Unlike the problem of scaling transport to 100G, there don’t seem to be any hard-to-predict conceptual or technological breakthroughs required. The threat just needs to get close enough to attract attention, and people will put resources into the solution.
Hi Rob.
You listed an interesting and in most cases all-too-familiar mosaic of concerns. Even before the Metcalfe prediction of the Internet’s collapse, which proved to be false, other gloom-and-doom views were starting to be aired from within a variety of sectors — either because of ignorance and uncertainty, or because of fundamental philosophical bents.
Some views attempted to reshape thinking around the deterministic fundamentals that stemmed from PSTN and SNA legacy architectures. We continue to see this type of coercion today — for example, through misguided implementations of faux QoS and IMS, which, for the most part, serve to de-prioritize over-the-top services as much as they serve to achieve anything else.
See this article from the November 1995 issue of Business Communications Review (RIP?), for example:
“TCP/IP congestion control: can you win the battle?”
http://www.highbeam.com/doc/1G1-17878860.html
… which questioned the sustainability of an Internet in the face of widespread adoption of T1 rates.
Yes, this sort of grumbling never goes away. Those doing the work have a vested interest in portraying it as vital to the survival of the internet. But to an extent, they are right – the work that gets done on these subjects eventually becomes part of the internet, the problems get solved, and we move on to a new set of problems. Gloom and doom is perhaps a natural part of the process of handling the evolution of the internet.