The evolution of cloud infrastructure

From the network to the far edge

Ted Pollak

I had a conversation with Jacob Smith, the CMO of Packet. Packet is a company that manages raw infrastructure in data centers and offers various products that automate data center tasks. They are being acquired by Equinix, of dot-com infamy, which has quietly returned to a very large presence in US data infrastructure.

The fun part was that he didn't talk much about his own company or the deal. Instead, it was a conversation about the evolution of data centers and cloud computing, and about how today's low-latency demands are changing the formula for success. As a gaming tech analyst, this is a very important area for me to understand.

Cloud 1.0: The network edge

CDNs (content delivery networks) were the first evolution of cloud infrastructure. Data centers were placed at hubs in major cities like Los Angeles, Seattle, and Amsterdam. Netflix, Xfinity On Demand, and other "on demand" video services used these locations. These data centers only needed to store data, which was simply streamed to the customer's client device when called upon.

Cloud 2.0: The regional edge

The regional edge was needed once people started using the cloud in a dynamic and interactive way. Instead of just storage, you needed actual compute in the data centers. And you needed the data centers to move into smaller regions. This is where we are now.

And there is a turf war going on behind the scenes. Essentially, there are private networks running over the internet, enabled by efficiency-focused software layers. What this means is that the internet is not neutral: proprietary software directs traffic along the fastest paths for paying customers, and companies have to pay for that efficiency.

Some companies involved in cloud gaming beyond the obvious (Stadia, Nvidia, Microsoft) include Hatch Mobile, a subsidiary of Rovio, which moves mobile games into data centers for cloud streaming; Network Next, which works with game publishers to route their games to customers optimally; and Haste, a subscription service for gamers that duplicates gaming packets, sends the copies along different routes, and uses whichever arrives first to reduce latency.
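The packet-duplication idea is easy to see in a toy simulation. This is a minimal sketch, not Haste's actual implementation: the route latencies, jitter values, and function names below are all hypothetical, chosen only to show why taking the earliest arrival across redundant paths cuts average latency.

```python
import random

def one_way_latency_ms(base_ms: float, jitter_ms: float) -> float:
    """Simulated one-way latency for a single route: fixed base delay plus random jitter."""
    return base_ms + random.uniform(0, jitter_ms)

def duplicated_send(routes: list[tuple[float, float]]) -> float:
    """Send a copy of the packet down every route; the earliest arrival wins."""
    return min(one_way_latency_ms(base, jitter) for base, jitter in routes)

# Three hypothetical routes as (base latency ms, jitter ms).
routes = [(40.0, 30.0), (55.0, 5.0), (60.0, 2.0)]

random.seed(1)
trials = 10_000
single = sum(one_way_latency_ms(*routes[0]) for _ in range(trials)) / trials
dup = sum(duplicated_send(routes) for _ in range(trials)) / trials
print(f"single route avg: {single:.1f} ms, duplicated avg: {dup:.1f} ms")
```

The duplicated average comes out lower than any single route's average, because the minimum of several noisy delays beats each one individually. The trade-off, of course, is the extra bandwidth spent on the redundant copies.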

Cloud 3.0: The far edge

The far edge is the next evolution in cloud computing. This is essentially bringing the data center to the neighborhood, and it will indeed revolutionize computing. As it stands now, cloud gaming actually works well on the regional edge for millions of customers in the United States, perhaps tens of millions depending on the latency sensitivity of the content. But when far-edge data centers start popping up, we will see the pool of potential customers swell manyfold.

What do we think?

The takeaway from all of this is quite simple: for cloud computing to be flawless, you have to move the server closer to the users, because you cannot make the speed of light go faster. So if we want to reduce latency in cloud computing, we must bring the light source closer.
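The speed-of-light floor can be put in numbers. A back-of-the-envelope sketch, assuming light in fiber travels at roughly two-thirds of c (about 200 km per millisecond) and ignoring routing hops, queuing, and processing, which in practice add considerably more:

```python
# Light in fiber covers roughly 200 km per millisecond (about 2/3 of c).
FIBER_KM_PER_MS = 200.0

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in ms for a server at the given distance,
    counting only propagation delay in fiber."""
    return 2 * distance_km / FIBER_KM_PER_MS

for label, km in [("far edge, ~10 km", 10),
                  ("regional edge, ~500 km", 500),
                  ("distant hub, ~4000 km", 4000)]:
    print(f"{label}: {min_rtt_ms(km):.1f} ms minimum RTT")
```

Even in this idealized model, a distant hub costs tens of milliseconds of round trip that no software optimization can claw back, which is exactly why the server has to move closer.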

Comment from Nvidia

The large IaaS guys selected data center locations based on two factors: cheap power and low taxes. That's why they're all parked in Oregon, Virginia, Ireland, and Amsterdam.

Most of their applications are data-based, and it also worked well to have a few large data centers with large storage farms.

Now along comes 5G edge applications like cloud gaming and you need the opposite.

Your locations need to be based on latency to large cities. You need to be in all of them.

Therefore, GFN spreads out with partners who put smaller data centers in more cities, and we [Nvidia] partner with international telcos to get onto the edge of their networks with peered connections.