Peering relationships have kept the internet working smoothly for decades, but given the pace of change in today’s cloud-powered digital economy it should not be surprising that peering is evolving too. How should it evolve to best keep the yottabytes flowing? With us today to talk about where peering has been and where it should be going is Peter Cohen, Vice President of Peering, Interconnectivity and Ecosystem Development at QTS. QTS has been actively advocating for more network access points and internet diversity both nationally and globally.
TR: How did you get involved with peering and interconnection?
PC: I started out in the software business coming out of college before moving to the internet business for the last 20 years. During that time I’ve worked at carriers, data centers, and over-the-top content providers – two stints at each. My responsibilities have spanned sales, marketing, and engineering – a marriage between the three that has given me insight into how the internet is connected, what problems exist, and how to solve them company by company.
TR: How has that now led you to your new role at QTS?
PC: There are several things that brought me here. First, I like the data center industry. I think it’s an interesting part of the business that is involved in essentially all connectivity between networks. Second, I have a history with many of the people at QTS, which was a real selling point for joining the company and continuing to work with them – people you may have worked with at another company, whether as a vendor, a customer, or a co-worker. And third, we’re doing some really interesting things. As a publicly traded company with a long-term vision, QTS is positioned for long-term success and growth, and it has an attractive product for all different types of companies across many different industries and locations.
TR: How has peering evolved over the years from your viewpoint?
PC: Back 20+ years ago, what you had was sort of a loose infrastructure of IXPs running circuits between each other, essentially routing the internet within a few historical telco buildings. These legacy interconnections took a long time to get going, and were not growing at the speed that the internet needed. So a couple of networks put out an RFP, and that basically initiated the development of the modern, purpose-built data center. With that advent, networks were able to connect in a much faster fashion: things that had taken weeks and months now took days. That was a good thing for the internet because, otherwise, the scaling wouldn’t have existed, and the internet’s growth would have been delayed. Fast-forward a few years and we had 10 buildings in the United States that handled the bulk of peering connectivity. Some edge data centers have been constructed to solve regional needs, and more places have tried to pop up, but the bulk of internet companies still connect to each other within a very few data centers. This created a reliance on infrastructure in 10 or so cities nationally, and these companies are beholden to those data centers – subject to their cost, power, and space limitations and to their policies – even though some of those facilities were never designed with this in mind. This really is putting all your eggs in one basket, repeatedly. Having everyone’s peering fail over to another city hundreds of miles away is not a great plan for the long-term growth, longevity, and stability of the internet, but that’s kind of where we’re at now.
TR: We have seen big changes in public internet exchanges, and many now span multiple facilities in a metro area. Has that not also filtered down into private peering?
PC: The internet exchange business is a different infrastructure question. There’s huge value in public exchanges for large numbers of internet companies, but many of the larger folks that are exchanging traffic do it privately. Most won’t be doing that over public infrastructure, and that has been going on for nearly 20 years. For them, unless they are on an internet exchange in multiple locations, exchanging traffic with another company that is also in multiple locations within the same metro, they wouldn’t actually achieve any redundancy. It’s more a transport mechanism for exchanging traffic than an infrastructure play for stability and resiliency.
TR: How have new technologies affected the world of peering?
PC: First of all, the lower cost and greater availability of optics have made for larger connections and decreased the cost of those router-to-router connections. Second, transport and remote-end peering, which can be a scaling issue for some companies, has also developed, enabling a company to connect hardware to an exchange point or a data center while the router actually resides elsewhere. That doesn’t necessarily bode well for reliability, but it does provide a steppingstone toward adding additional sites at a lower initial cost than deploying routers to new locations. As a rule of thumb, router gear is more expensive than transport gear.
TR: How has the digitalization of today’s world changed how connectivity is acquired and managed?
PC: The connectivity needs of the website have changed dramatically over the years. In the beginning, companies bought internet access and essentially resolved all of their network traffic through it – web properties and other pre-cloud servers and so on. With the digitalization of the modern economy, that single website with which you offered your goods, took orders, and processed them is essentially gone. Now you’re looking at a number of different networks you need to touch in order to operate that website. Some might come from a one-stop supplier of some sort, but even that supplier has to be connected to a huge number of other networks in order to control and deliver its traffic most efficiently. Whereas I could previously just buy internet access, I now need to think about how to get to a banking website, how to store the actual backend database that houses my inventory, how to get to a CDN, how to collect analytics data, and how to reach each individual residential ISP serving the consumers who come to me to buy whatever I am selling.
TR: How do you see peering and interconnection evolving in response to these changes?
PC: With regards to infrastructure, it’s the idea of metro redundancy. As an operator I would want to be connected to all those destinations in a secondary location as well, and have that traffic resolved within my market, rather than leaving the market and hairpinning back in order to deliver traffic. That will manifest itself differently for different types of networks, whether it’s a web property or a CDN or an ISP. That makes second sites in major markets to handle peering traffic between today’s large networks a start in the right direction. As an example of something missing now: the infrastructure for many of those relationships exists in a single site in Chicago or Dallas or Northern Virginia or New Jersey, wherever it may be, but not in a simultaneous secondary site. That will come in the form of additional data centers, but it will also depend on each particular company’s needs. It’s not a one-solution-fits-all situation, but rather an overarching premise that the existing infrastructure is not really built for failover and that something needs to be done about it.
TR: Who needs to be on board to make such changes happen?
PC: We need buy-in from each end in order to have the infrastructure work for all involved: cloud-based companies, residential eyeball companies, and CDNs. The internet doesn’t exist and shouldn’t exist in a single data center in any market.
TR: How do you hope to get them to buy in?
PC: I think there’s an education factor. Networks, CDNs, SDNs, CSPs, and gaming companies need to come to the realization that if they are supplying a service to customers, they need to think about what makes it more reliable. What can they do to avoid truck rolls, and to reduce outages, downtime, and unnecessary costs? The last thing anyone wants is a customer calling up because they can’t reach a website or can’t purchase something or can’t process something. When you think about it on an SLA basis, downtime can cost hundreds of thousands of dollars per minute, depending on the business. It’s much better to engineer things to avoid failures and outages than to patch them along as you go and hope for the best.
TR: Is anyone standing in the way or is it just inertia?
PC: No one’s really to blame for the existing situation. It’s a marriage between all the parties involved, so let’s look at the existing situation and see where we can move it forward. None of them work on their own. But where there used to be few parties involved, there are now many, and they need to work together better than they do now. We know what the internet interconnection infrastructure looks like now; what should it look like going forward?
TR: It’s not as if we haven’t been thinking about redundancy and reliability before now, though. What is changing?
PC: I think there’s an unpredictability to the upcoming complexity, with lots of moving parts. But the one thing we know to be constant is the need for quick, reliable, redundant, and cheap interconnection between two companies that want to talk to each other. It cannot be overstated how important it is to get all the partners you have connected where they need to be, reliably and redundantly. We need fast installation between partners without hassles, without building risers, without entrance fees, and without delays, and that has not changed in all my 20-plus years of working on network connectivity.
TR: Thank you for talking with Telecom Ramblings!