The next three eras of computing

I've been thinking a lot lately about where computing goes from here, and what it might do for us in the next couple of decades. And then it struck me: the future of computing may be less about ever-smaller hardware and more like taking a step through the looking glass. It changed the way I think about what's next. I call it "Bell's Event Horizon." Let me explain.

The main reason for this post is that I'd like to invite your feedback on the idea. And the idea goes like this...

Transistors, the chips they make possible, and the devices those chips then power, have been shrinking since the 1950s. We now carry supercomputers in our pockets and purses, each with more capability than computers that used to fill entire rooms. And transistors still have some shrinking left to do. This we know. Moore's Law 101.

Bell's Law has held true for the last half century of computing. It observes that roughly every decade, computers get smaller and cheaper until an entirely new class of computing emerges. Mainframes filled an entire room, minicomputers were the size of a couch, and the first PCs were the size of an ottoman.

[Image: Slide1.jpg]

PCs shrank down to laptops. Next came smartphones. Now, wearables and IoT have emerged as the next class of computing (wearables are really just IoT applied to humans). The question I've been thinking about is this: What comes after IoT? Does computing just shrink even further and get even cheaper? Is the next computing class that of nanobots and computer "dust"? Perhaps. Or could something else happen? Is there another way to think about all this?

From "ever smaller" to "better virtual"

Perhaps we should stop thinking about ever smaller computers that asymptotically approach zero size, and instead think about what happens once computers have "negative" size and push into another dimension altogether. In the same way that complex numbers have a real and an imaginary component, could we think about computers as having a physical and a virtual component? The virtual component would describe how large and how realistic a simulation of the real world a computer is able to generate. So, rather than modeling future computers as shrinking indefinitely, should we instead think about them crossing through some sort of "event horizon" or "looking glass" and transcending physical form?
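To make that analogy a little more concrete, here is one way to write it down in symbols. This is only my own back-of-the-envelope sketch; neither quantity has a rigorous definition, and none of this notation comes from Bell's Law itself.

```latex
% Toy formalization of the "complex" view of a computing class (my own sketch).
% s(t): characteristic physical size of the dominant computing class at generation t
% v(t): scale and fidelity of the simulation that class can generate
\[
  C(t) = s(t) + i\,v(t)
\]
% Before the event horizon: v(t) is roughly zero, and progress means s(t) shrinking
% (room-sized, couch-sized, desk-sized, pocket-sized, wearable).
% After the event horizon: s(t) bottoms out near zero, and progress shows up
% instead as growth along the imaginary axis, i.e. in v(t).
```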

Once we cross "Bell's Event Horizon", we could begin to plot out the imaginary/virtual dimension of Bell's Law. In this world, as we progress through time, rather than computing getting physically smaller, we instead see the size and complexity of the VR/AR/MR simulations we are able to create with our computers get larger and more detailed with each successive generation. For example, our first generation beyond the event horizon could be the ability to simulate individual objects with a high degree of realism. Those objects would be realistic enough in a mixed reality scene that to an observer they would be indistinguishable from real objects. The next generation beyond that would be the ability to render entire virtual scenes, totally realistically. This could either be totally digital (VR) or a mixed reality scene where the entire scene is modified in a 100% realistic way. The view from your living room window might be changed to look convincingly like your house is located on the top of a mountain. The next obvious generation beyond total scene rendering would be full world rendering, an entire world simulation in full, realistic detail. Again, this could be either 100% digital (VR), or mixed reality (MR). In a mixed reality world, you could "edit" the entire world to your liking. Perhaps you would choose to remove all graffiti, or you might enjoy seeing dragons flying above you in the street, or if you like, you could see the entire world as if were made out of cheese. Your choice. At each new step in this virtual domain of Bell's Law, the size of the simulation made possible by the computing class grows to a new level.

Feedback please

Here's the point in the post where, having hopefully explained enough of my thinking that you get the concept, I ask for your feedback. What do you think? Stupid waste of time, or am I on to something? Could this be a helpful construct for thinking about what the next classes of computing described by Bell's Law might be? Is it a useful lens for thinking about what's next?

I'm not saying this is the only way to think about the next few classes of computing. It's clearly not. For example, other new dimensions could be considered here, such as artificial intelligence. How could AI be incorporated into a model like this? Are there levels of AI we might consider as the underpinnings of the next classes of computing? ANI, AGI, and ASI (narrow, general, and super intelligence) are obvious ones, but there may be more subtle levels we could consider. Or is that just muddying the waters and overcomplicating things?

This idea of Bell's Event Horizon is offered as just one lens through which it may be helpful to think about what's next: about what we might choose to build in the future, and what we might do with the next several major computing classes.

I invite your comments. Please help me either kill this idea as a bad one, or make this seed of an idea stronger and worthy of sharing more broadly. Thanks! I can't wait to hear your thoughts.