When everything is connected, and all of the data those thingies collect is somehow stored, all the fabs in the world wouldn’t be able to make enough memory to hold all of the bits the sensors sense. However, there are a bunch of really smart people who can see this inevitability and are worried about it. Better than that, they’ve got ideas about how to manage the hydra that is growing in the cloud, computers, devices, and thingies. That’s both the good news and the bad. It’s good because there are some really exciting and clever solutions being suggested, studied, and prototyped. It’s bad because it’s chaotic with no clear answers, and as often is the case, the best technology may not win.
We should think of the coming explosion of data and the benefits and dangers it brings as the virtualization of things. Almost anything that can compute and be programmed will be time sliced and shared, using its wasted idle cycles for the common good—a white hat Trojan—in a perfect world. The opportunity for abuse is staggering, the potential for good even greater, if ... if we can harness the power that total connectivity, communications, unlimited compute, and storage will give us.
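The "time sliced and shared" idea above can be sketched in a few lines. This is purely illustrative, not any real framework's API: the names `Device`, `run_when_idle`, and the work-unit format are hypothetical, and the assumption is simply that a device donates cycles only while its owner isn't using it.

```python
# Illustrative sketch of "white hat" cycle sharing: devices run donated
# work units from a shared pool only while they are otherwise idle.
# All names here (Device, run_when_idle) are hypothetical, not a real API.

import queue

class Device:
    def __init__(self, name):
        self.name = name
        self.busy = False          # True while doing its owner's work
        self.completed = []        # (work-unit id, result) pairs

    def run_when_idle(self, shared_queue):
        # Donate one time slice: take a work unit only if we're idle.
        if self.busy or shared_queue.empty():
            return None
        unit = shared_queue.get()
        result = unit["task"](unit["data"])   # crunch the numbers
        self.completed.append((unit["id"], result))
        return result

# A shared pool of work units, e.g., pieces of a large computation.
pool = queue.Queue()
for i in range(3):
    pool.put({"id": i, "task": lambda x: sum(x), "data": list(range(i + 1))})

phone, camera = Device("phone"), Device("camera")
camera.busy = True                 # the camera is recording, so it sits out
while not pool.empty():
    phone.run_when_idle(pool)
    camera.run_when_idle(pool)

print(len(phone.completed), len(camera.completed))  # the idle phone did it all
```

The point of the sketch is the scheduling rule, not the work itself: the busy device contributes nothing, and the idle one quietly drains the pool.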
Think about it for a moment. Every camera, every one of them, becomes a terminal, every wearable becomes a snitch, everything that moves reports on where it is, how it got there, at what speed, and who and what it met along the way.
In the meantime, machines all over the world contribute to crunching the numbers on the world’s grand problems, and the answers become tenable, obtainable on predictable time scales.
One of the big contributors to solving such grand problems will be parallel processors found in GPUs and Intel’s Phi. Sharing resources through virtualization will squeeze new levels of efficiency and utilization out of processors that might have spent a good portion of their lives idle, consuming power with nothing to show for it.
This week the U.S. National Archives declared it would upload all its holdings to Wikimedia Commons, and in Europe IBM’s partner Softlayer announced a 15,000-server farm in London to match its center in Amsterdam. IBM added that it will be a Linux platform, and it won’t necessarily be a “mainframe.” While there might be some nostalgia value in having “mainframe” technology offered via the cloud, there is the question of how much value that offers modern firms that haven’t grown up with the big iron. We’ve all heard about edge servers and ARM-based servers, so all sizes, shapes, and colors of processors will populate the servers that make up the ubiquitous cloud.
But the storage can’t scale, or so HP believes, and it proposes to keep the data stored locally and use the cloud as a giant index that tells us where the data is—a modern-day version of the Alexandria library’s Dewey Decimal system, a virtual card catalog.
That’s one of the proposed schemes for the IoT, and it finds resonance with lots of people who, for legal, privacy, sovereignty, or competitive reasons, don’t want, or don’t trust having, their data in a public, or even a private, cloud.
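The card-catalog scheme reduces to a simple split: the cloud holds only metadata pointers, while the bytes stay on the owner's premises. Here is a minimal sketch of that idea; the names (`CloudIndex`, `register`, `locate`) and the example URI are invented for illustration and are not drawn from any actual HP design or product.

```python
# A minimal sketch of "cloud as index": the cloud stores only metadata
# pointing at locally held data, never the data itself. All names here
# (CloudIndex, register, locate) are illustrative, not a real API.

from dataclasses import dataclass

@dataclass
class Pointer:
    owner: str      # who holds the data
    location: str   # where to fetch it, e.g., a URI on the owner's edge server
    checksum: str   # integrity check, so the index can vouch for the content

class CloudIndex:
    """Global catalog: maps a data ID to a pointer, not to the bytes."""
    def __init__(self):
        self._catalog = {}

    def register(self, data_id, owner, location, checksum):
        # Only metadata goes up; the data never leaves the owner's premises.
        self._catalog[data_id] = Pointer(owner, location, checksum)

    def locate(self, data_id):
        # A query returns *where* the data lives, like a card catalog entry,
        # or None if the index has never heard of it.
        return self._catalog.get(data_id)

index = CloudIndex()
index.register("sensor-42/2014-07", "AcmeCorp",
               "https://edge.acme.example/sensor-42/2014-07", "sha256:abc123")
ptr = index.locate("sensor-42/2014-07")
print(ptr.owner, ptr.location)
```

The design choice to note is what the index does not contain: no sensor readings ever transit the cloud, which is exactly what makes the scheme palatable to the privacy- and sovereignty-minded crowd described above.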
We already carry a load of data with us: up to 64 GB in our smartphones, another 32 GB in our tablets, at least 16 GB in our notebooks, plus 500-GB SSDs (I have two in my M3800). And we are already using online (i.e., cloud-based) storage and indexing systems like Egnyte, Salesforce, and Dropbox. So the conventional wisdom is that these elaborate, interconnected, globally available systems aren’t novel, and they aren’t very expensive. Therefore, there’s really not going to be a revolution; there almost never is. Yes, virtualization and IoT will be big and important in our lives, and we’ll look back at the turn of the century and wonder how we ever managed with such primitive, limited systems. But wearables, IoT, and virtualization aren’t going to put us in a warp-drive leap into some fantastic science-fiction future. Not that such an event would upset me, mind you.