Follow-up: Video of 100-year old Lyle Becker trying VR for the first time

I recently had the distinct pleasure of being in the room when my friend, Lyle Becker, first got to try out VR. I wrote about the experience in a recent post. Just think about the incredible technological change that Lyle has seen in his 100 years on the planet.

Video can often tell a story even better than words and photographs can. A couple of my old buddies on Intel's internal communications team were there with Lyle to capture the moment. They put together a really nice video (running time 1:48) that shows Lyle's adventure into virtual reality. You can spot me lurking in the background in a few scenes, wearing a blue, short-sleeved shirt.

If you want to feel good about something today, this video is worth a couple of minutes of your time. It was also picked up by VentureBeat, where Dean Takahashi wrote a short piece and shared the video.

Enjoy!

Kudos: The video was made by a couple of my awesome former coworkers: producer Rob Kelton and videographer/editor Tim Herman. They did a great job capturing the moment for posterity! Well done, gents!

The day 100-year-old Lyle Becker first tried VR

When I first met Lyle Becker he was just 80 years old, and virtual reality as we understand it today was largely still the stuff of science fiction. Two decades later, my friend Lyle is a feisty centenarian. And today he gets to try VR for the very first time.

Surrounded by friends, family, and several Intel employees, Lyle gets ready for his first adventure into virtual reality, inside a secret VR lab at Intel's campus in Oregon.

Lyle is the liveliest 100-year-old person that I know. He still tries to walk about a mile each and every day, and he's a delight to be around. He's curious about everything, especially when it comes to new technology.

Just think for a moment about how much technological change Lyle has seen in his lifetime. When Lyle was a kid he traveled to a one-room school by horse. He lived in a home on the prairie with one light bulb and no telephone, and his dad didn't rig up that light bulb until Lyle was 12 years old. Since then he's seen the rise of the motor car, the coming of commercial radio and then TV, the democratization of air travel, the nuclear age, jets, rockets, antibiotics, satellite communications, 8-track, digital watches, CDs, cloning, cell phones, the Internet, PCs, social media, streaming video, and so much more. And Lyle has loved it all. Well, perhaps not 8-track.

When asked what he considered to be the most impressive technology breakthrough that he's seen, without hesitation Lyle replied, “GPS”. It's a technology that most of us now take for granted, but if you stop and think about how GPS actually works, it's still amazing stuff.

These days, Lyle’s hearing needs a little help from modern technology, and he wears thick glasses to help him see, but his mind and his wit are still as sharp as ever. I hope that I'm as healthy and strong as Lyle is when I'm a hundred years old. Heck, we all do.

Lyle is a regular reader of this blog. He comments on my posts, both here and on Facebook, more than any other reader. He loves technology and is fascinated by what’s coming in the future, whether it be self-driving cars, hearables, or mixed reality.

Lyle’s daughter, Patty, is a close friend of mine. This afternoon, on a warm, blue-sky day, Patty drove Lyle out to Intel’s Jones Farm campus in Hillsboro so that he could visit Intel’s expansive VR lab and experience virtual reality for the very first time. And wow, did he LOVE it.

Lyle wasn’t really quite sure what to expect. I asked him what he anticipated virtual reality might be like and he said, “I’ve seen 3D movies. I’m guessing it’ll be a bit like that”. I told him to expect that it would feel like he had zoomed through the screen, and was actually now inside the movie itself.

As the Intel engineer slipped the HTC Vive headset over Lyle’s head, a small crowd of friends, family, and Intel employees had gathered to share the moment.

100-year-old Lyle Becker tries virtual reality for the first time.

The first stop on Lyle’s tour of VR was a trip to the bottom of the ocean, thanks to an app called theBlu: Encounter. Lyle sat comfortably in a chair as he was virtually immersed in water and surrounded by jellyfish, turtles, and a beautifully rendered coral reef. Lyle doesn't like to make a fuss about most things. He calmly takes everything in his stride. Perhaps that's why he's lived so long. So when Lyle said "Yeah, this is good. Very good", and later "That's amazing", those who know Lyle well knew that he was mightily impressed by what he was seeing.

Using the hand controls didn’t come naturally at first, and it took Lyle a little while to figure out how to move around and interact with the virtual seascape. But within a few minutes he was prodding at jellyfish and having a whale of a time (pun intended).

When I asked him afterwards which experience was the most impressive, he picked this first one. Often it’s your very first VR experience that stays with you the most. For me, it was a Hubble Space Telescope demo that I’ll always remember, since it was my very first taste of high-quality VR.

Lyle’s next experience was to hover thousands of miles above the surface of the planet and fly over the streets of Florence and around Devils Tower, thanks to Google Earth VR. The experience wasn’t intuitive, and it took Lyle quite some time to get the hang of zooming in and out and moving around the planet. Most VR experiences still need some work before they are as natural and intuitive as simply using your hands. Today, users need to be trained on a complicated combination of triggers, buttons, and joypads. Brain-machine interfaces, natural language voice interfaces, and high-fidelity hand sensors like the ones from Leap Motion can’t come soon enough.

My-Hanh Eastep helps Lyle get ready for his virtual flight.

In his younger days, Lyle was a pilot. He served in World War II, flying over the Himalayas between India and China. He was also a commercial pilot for over a quarter of a century, and later became a flight instructor and an air traffic controller. Needless to say, Lyle knows his way around a plane and loves to fly. The Intel engineers had saved the best experience for last.

As a special treat, Lyle’s final VR experience of the day was to sit in the cockpit of a single-engine plane, courtesy of the very latest in VR flight simulation technology, AeroFly FS2. Using two joysticks, one controlling flight and the other controlling throttle, Lyle took to the air. It was a delight to watch him flying again. Lyle was back in his element, flying around the skies with ease.

“Is there a place that I can land this thing?”, he asked. Since it was early-access software, Lyle was told that no, there was nowhere for him to land yet, but he could crash the plane into the ground if he wanted to. “No, I don’t want to do that. That’s not right. I think I’ll just keep on flying”.

Lyle showing us how it's done as he flies around inside the virtual world created by AeroFly FS2.

Virtual reality is an amazing technology that is able to transport people in time and space. VR has an incredible set of potential applications for training, simulation, communication, entertainment and more. But today it was used to reunite an old pilot with the freedom of the skies. And it was a joy to watch.

When asked what he would like to see VR be able to do in the future, Lyle said he would love to experience Mardi Gras on Bourbon Street and listen to the Dixieland bands play. One day he would love to get a headset of his own so that he can use it to watch YouTube videos from around the world, and experience them in the comfort of his own room.

Wonderful to see Lyle again. He's happy to see me, I promise. He just doesn't smile that much. :)

When told that the next step in VR headsets was wireless technology, Lyle quickly responded, “I don’t have that much time left, so you better get busy with that.”

My thanks to My-Hanh Eastep, Aisha Bowen, Bryan Pawlowski, and the other folks at Intel who made this all possible today.

Most of all, my thanks go to Lyle for being such a great inspiration. For always being so curious. And for being one of my most avid followers. I hope Lyle had as much fun today experiencing VR as we all had watching him try it.

Lyle can't wait to see what VR will make possible next. And neither can I.

Interview: PDX Executive Podcast

We're in the middle of the vacation season, I've been traveling quite a bit for work, and I also spent a couple of weeks in Europe to celebrate a milestone birthday (21 again). As a result, the Bald Blog has suffered with no new content for about six weeks. Shame on me.

By way of a makeup, here's a link to a podcast interview that I did earlier this week for the PDX Executive Podcast with Dan Bruton. In it we talk about what a futurist does, then go on to cover a range of topics including the future of media, blockchain, how industry sectors are now colliding, and why I choose to live in Portland, Oregon.

Enjoy.

(Interview also available on Soundcloud here)

Addendum: Next wave of automation

Shortly after I hit the "publish" button to share my most recent post ("The next wave of automation"), I stumbled across this new video. It's nicely made and speaks directly to the very same topic, though it focuses much more on the human and societal impacts of broad automation. I thought it was worth posting here by way of an addendum to my previous post. It's worth taking a few minutes to watch and absorb.

Talking about the next huge wave of automation, how it might reshape our society, and what we might wish to do as a result, is probably the prime conversation we should all be engaged in right now. And yet, as a society, we really aren't.

The next wave of automation

As a futurist, I spend a lot of time thinking about how businesses will be transformed with new technology. No industry sector is immune from the next big wave of automation. And that wave is hitting right now.

A rich alphabet soup of technologies will change the way that all businesses operate: AI, IoT, AR, and 5G will combine with advanced sensing, robotics, and analytics to automate or semi-automate most business processes. Leaders who embrace this next wave of automation will position their companies for long-term success. Those who avoid it risk decline and irrelevance.

Automation is coming in many forms. In this post I’ll walk you through four different ways that automation will show up in business within the next five years. Often I'll use retail examples since retail is a sector that many of us are familiar with. We nearly all shop. And many of us either work in retail, or worked there earlier in our lives.

Here are four areas that every leader should be watching closely as they think about the future of their business.

1. Artificial intelligence replaces the gut

Many jobs today require a human being with relevant experience and skills to make a set of decisions: picking stocks, selecting next year’s fashions, or laying out an advertisement to attract maximum attention. People in these roles use their training, their experience, and their judgment to make decisions that they hope will optimize business results. Often, they use their gut, calling on wisdom that they can’t quite explain. They might call it intuition. And often they are wrong. We have cognitive biases that can betray us and lead us to the wrong conclusions when presented with data. Often that data is simply too complex for a human to process. Sophisticated analytics, leveraging machine learning, can spot patterns in a sea of data that a human being would never see on their own.

Business decision-making is going to change dramatically in the next decade. Artificial intelligence algorithms are improving at a rapid rate. Tasks that formerly relied on the guts of humans (who were judged successful if they were right perhaps 50%-60% of the time) are now potential candidates for artificial intelligence (AI) to take over. 
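
To make that concrete, here's a toy sketch in Python. Everything in it is synthetic and invented for illustration (the data, the hidden rule, the numbers), but it shows the basic idea: a model trained on past decisions and their outcomes can find a pattern that no amount of gut feel would reliably spot.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic history: 5,000 past decisions, each described by 8 signals
# (imagine price point, season, social buzz, and so on).
X = rng.normal(size=(5000, 8))
# A hidden, nonlinear rule links the signals to good/bad outcomes.
y = ((X[:, 0] * X[:, 3] + np.sin(X[:, 5])) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on history, then score on decisions the model has never seen.
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"Model hit rate on unseen decisions: {model.score(X_test, y_test):.0%}")
print("Gut-feel hit rate quoted above: 50-60%")
```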

2. Business process augmentation

The Internet of Things (IoT) is perhaps an over-used and over-hyped term, but it will have profound implications for every business in the near to medium term. Smart companies will embrace IoT as a way to semi-automate every business process. A good approach is to look at each business process and break it down into the tasks best done by a human, the tasks best done by an algorithm, and the tasks best done by a robot. Think of it as building teams of humans, algorithms, and robots working side by side. This allows companies to augment each and every major business process.

As an example, consider the scenario outlined in the diagram below. A smart fitting room in an apparel store can sense which garments a customer has decided to try on using RFID, then look up information on the customer by identifying their phone via a Wi-Fi hotspot. Analytics can use a “goes with” database to recommend accessories and other garments that might look good with the items the customer is trying. After checking that the items are in stock, the algorithm pushes a pick list to the store associate, mapping the items on a store map using the planogram. The store associate quickly picks the items and brings them to the fitting room.
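
For the curious, here's a rough sketch in Python of the algorithm's share of that flow. All the names and data in it (GOES_WITH, INVENTORY, the SKUs) are hypothetical, invented just to illustrate the recommend-check-route loop described above.

```python
from dataclasses import dataclass

@dataclass
class Garment:
    sku: str
    name: str
    aisle: str        # location, taken from the store planogram
    in_stock: bool

# Toy "goes with" database: SKU -> recommended companion SKUs.
GOES_WITH = {"JKT-001": ["SCARF-010", "BELT-007"]}

INVENTORY = {
    "SCARF-010": Garment("SCARF-010", "Wool scarf", "A4", True),
    "BELT-007": Garment("BELT-007", "Leather belt", "B2", False),
}

def build_pick_list(rfid_skus: list[str]) -> list[Garment]:
    """The algorithm's share: recommend, check stock, map to aisles.
    The human associate does what robots still can't -- the picking."""
    picks = []
    for sku in rfid_skus:                    # garments sensed via RFID
        for rec in GOES_WITH.get(sku, []):
            item = INVENTORY.get(rec)
            if item and item.in_stock:       # only suggest what's on hand
                picks.append(item)
    return sorted(picks, key=lambda g: g.aisle)  # an ordered walking route

for g in build_pick_list(["JKT-001"]):
    print(f"Pick {g.name} ({g.sku}) from aisle {g.aisle}")
```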

This approach maintains the humanity in the brand, but augments the store associate’s fashion sense and speeds their ability to pick product. Also note that the dexterity of a human far exceeds anything that a robot can do today, making people the best choice for picking clothing and accessories from the store floor. Embracing IoT means a totally new labor strategy. It’s time for HR and IT departments to become best friends.

3. Autonomous machines

Bossa Nova Robotics is making robots that roam the aisles of stores and take inventory using a range of 2D and 3D cameras.

The robots are coming! For years, robots have built our cars and, more recently, vacuumed our floors. Beyond that, robots have largely been confined to science fiction. Advances in sensing, AI, batteries, and mechanics are finally making robots broadly viable. Now that robots have the technology to "see" their environment, they can safely be placed in spaces with human beings. You can already find robots in many settings. Amazon purchased Kiva Systems to automate its warehouse picking processes. Bossa Nova Robotics has been building and testing exciting new robots able to do stock-taking. Lowe’s Orchard Supply Hardware has experimented with robot greeters, able to recognize what customers are looking for and escort them to the correct aisle of the store.

This is Harvey, a robot that shuttles around plants at nurseries, saving humans from having to do highly repetitive, strenuous work that leads to back injuries.

SAM-100 is a robot that lays bricks. SAM-100's manufacturer, Construction Robotics, claims that the robot can lay 3,000 bricks in a single day, compared to about 500 for a skilled human bricklayer. Harvest Automation has built robots to help with back-breaking tasks in nursery farming. Their HV-100 product can be used to thin out shrubs as they grow, normally a miserable, tough task for migrant human labor. The HV-100, also known by the nickname Harvey, can complete these tasks entirely autonomously, and can even partner with other robots to work as a team.

Starship Technologies delivery robot. It can hold up to two bags of shopping, has a range of three miles, and autonomously pilots its way along the sidewalk to deliver to homes.

One of the most transformational robotic innovations comes from Starship Technologies. They have built fleets of delivery robots able to autonomously navigate sidewalks and deliver items direct to a customer’s front door. The six-wheeled robots live inside a specially converted van, which acts like a mother ship. Algorithms guide the (human) driver of the van to the optimal central, safe place to deploy the robots. The robots then scatter to make their deliveries and return to the van, ready to go to the next drop-off point. Starship claims their technology will ultimately enable deliveries for a dollar. Any business that provides goods or services to customers needs to be asking what its one-dollar-one-hour delivery strategy is. How many people will still come to your store or place of business when your competitors are using robots to deliver products in an hour, for a dollar?

4. Augmented workers

Daqri builds an augmented reality helmet designed for the construction and maintenance sectors. Here a worker is shown instructions on how to operate a valve using augmented reality.

There are still many tasks that robots just aren’t suited for. And that will be true for the foreseeable future. Robots have strength, reliability, and endurance on their side. They’re great for repetitive tasks and dangerous environments. But they just don’t have the dexterity, creativity, or empathy that most human beings are born with. Mixed and augmented reality technology holds great promise. A headset with augmented or mixed reality can be used to guide workers on how to perform a task, essentially offering real-time, on-the-job training. This enables complex tasks to be performed by pairing the insights of algorithms and AI with the physical dexterity of a person.

What you have created in this scenario is an augmented worker. Trials of augmented reality task management are already underway in the construction, maintenance, and logistics sectors. Take a look at what Daqri is doing as a prime example. Applications in many other sectors are bound to follow, including distribution centers, customer service, surgery, education, manufacturing, and beyond. Microsoft is also chasing this space with their HoloLens and Windows Holographic platforms.

Construction workers wearing Daqri headsets so that they can "see" where ducting, electrical, plumbing, and other systems are to be located based on architectural plans.

No company is immune from the next big wave of automation. Both blue- and white-collar jobs will be transformed, and some jobs will be entirely automated. The smartest leaders will find ways to maximize the effectiveness of their employees rather than seeking to replace them with automation technology. The best teams will be made up of people, algorithms, and robots working closely together. The digital intelligence of algorithms and robots will support the emotional intelligence of human employees so that they can optimize business operations and deliver the best overall experience for customers.

At the end of the day, brands are about trust. And trust comes from the humanity in your brand. Smart leaders will direct their operations teams to use automation as a way to amplify the humanity of their brands, and not to replace it in the name of progress.

Buying a car in 2027

By 2027, you'll care far more about the software in your car than any of the hardware.

Mercedes F 015 autonomous concept vehicle

Ten years ago you probably weren't even aware that there was any software running in your car. Yet there were millions of lines of software code running in most cars, even back then. This software was in the engine control system, the anti-lock braking system, and other subsystems of the car. Today, any car with navigation, Bluetooth connectivity, or a fancy digital dashboard obviously has even more software running in it.

Back in 2007, you primarily thought of your car in hardware terms. And you chose it based on how much hardware you could get for a given price. You might have considered things like how many cylinders the engine had, how many speakers the banging stereo had, or how many cup holders could be found scattered around the vehicle.

Here in 2017, we tend to have a more nuanced view of cars. The petrol heads will always obsess over the engine's capabilities, and of course most of us care about how our car looks. But to a large extent cars have become "good enough" and so you don't really worry too much about the engine, the transmission, or most of the other hardware in the car. Whichever car you choose, you know it is likely to be fairly reliable and have enough oomph to get you around.

We are in a transition period for cars that is similar to the transition that phones went through in the last decade. People care more and more about the connectivity of their cars, and they are starting to care more about the software features of the vehicle: braking assist, steering assist, parking assist, and so on.

By 2027, you'll choose your car based on the software it's running and how many cool apps it has. You might even think about what operating system your car runs on, and how well that operating system interfaces with the other smart objects and systems in your life--the operating system that runs your phone, your home, or your life. 

Cars will be spaces that we spend time in. Spaces where we do things other than driving. And so cars will evolve to provide a range of in-journey services. That means lots of new software. We will expect apps for in-journey entertainment, apps to keep the kids occupied, apps so that we can do office work on the go, apps that turn the car into a meeting room, apps for sleeping, and so on.

The killer app will, of course, be autonomous driving. I agree with Elon Musk's recent statement that a decade from now every car will have autonomous capability as standard. The cost of adding autonomous driving features will continue to fall as sensor and computing prices tumble. Once the cost adder for autonomy heads below $1000, and then below $500, why would anyone bother buying a car that doesn't have it? The benefit of having it is just so compelling. It was the same way with ABS. It's pretty hard to buy a car today that doesn't have anti-lock brakes as standard.

By definition, an autonomous vehicle doesn't need a driver. And with no driver there is less reason to have a single owner. That means that self-driving cars are innately more shareable. And when autonomous cars are shared they become even more valuable.

A fleet of on-demand autonomous vehicles in our major cities will improve safety, aid traffic flow, and reduce environmental impact. They will also allow us to repurpose land in our cities, widening our sidewalks and freeing up space that was given over to parking lots and street-corner gas stations. Perhaps most importantly, on-demand autonomous vehicles will improve access to mobility services for people who previously were unable to drive: blind people, people with disabilities, people who are too young or too old to drive, or people who previously could not afford to travel by car. By 2027, an autonomous car will be a lot cheaper to ride in than a taxi (or even an Uber or Lyft) is today, because you no longer have to pay for a human driver's time.

Volkswagen Sedric fully autonomous concept

People will still own cars. But they may own fewer of them, especially in cities. A family that previously owned three or four cars might go down to two, or even one vehicle, if they also have access to a high-quality, responsive on-demand transportation service.

As software becomes the most important part of our cars we will think about them in a totally new way. They will become a space that we inhabit. They will become a service we can enjoy on demand. And they will become far safer. Many thousands of people will no longer die in road accidents each year. We are talking about a total transformation of human mobility the likes of which we have not seen since the arrival of the steam engine.

It's an exciting road ahead.

Goodbye GUI, hello VUI

More and more, we find ourselves talking to technology. And when technology understands what we are asking it to do, and then does it, it feels magical. 

As a futurist, I make an effort to road test all the latest services and gadgets, particularly when it comes to my home. I love being able to walk into my condo with an armful of shopping and shout, "Alexa, turn on welcome home" and see the lighting all come on to greet me. I have other voice settings to turn on music, turn the TV to my favorite show, or dim the lights to "mood" mode and turn on the fireplace. It might sound like overkill, and perhaps I've gone a step or two too far with the home automation, but it's the sort of thing you never want to be without once you've had it. A bit like keyless entry on cars: it felt like a silly luxury, until you had it and realized you never wanted to part with it.

Over the coming years voice will be an ever bigger part of the way we interact with technology. Gartner recently predicted that by 2019, a full 20% of all smartphone interactions will be ones you have with a virtual personal assistant. And by 2020, they predict that the majority of devices will be designed to function with minimal or zero touch. For this to happen, voice recognition and natural language processing will have to get a lot better, but they are both on a decent trajectory. It's looking good.

All this implies that for a good many of our daily tasks we will be interacting with computers less through touch and graphical user interfaces (GUIs), and more by using our voices. GUIs run on graphics processors in the client (your PC, Mac, tablet, phone, or other device with a display). Voice interfaces, let's call them Vocal User Interfaces, or VUIs, typically do not run locally on your device; instead they rely on computing capability in the cloud to make sense of what you've said and then respond to you.

We are moving from GUIs on the client, to VUIs in the cloud.
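
To make the split concrete, here's a minimal sketch of what a VUI client might look like, assuming a hypothetical cloud endpoint (the URL and the response shape are invented for illustration). Note how little the device itself has to do:

```python
import io
import wave

import requests  # third-party: pip install requests

CLOUD_VUI_ENDPOINT = "https://api.example.com/v1/interpret"  # placeholder URL

def capture_utterance(seconds: int = 3) -> bytes:
    """Stand-in for a microphone capture; returns a silent mono WAV clip."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)          # mono
        w.setsampwidth(2)          # 16-bit samples
        w.setframerate(16000)      # 16 kHz, typical for speech
        w.writeframes(b"\x00\x00" * 16000 * seconds)
    return buf.getvalue()

def handle_voice_command() -> None:
    audio = capture_utterance()
    try:
        # The thin client ships raw audio up; all the understanding
        # (speech recognition, intent parsing) happens in the cloud.
        resp = requests.post(CLOUD_VUI_ENDPOINT, data=audio,
                             headers={"Content-Type": "audio/wav"})
        print("Cloud interpreted:", resp.json())  # e.g. {"intent": "lights_on"}
    except (requests.RequestException, ValueError):
        print("No real VUI service behind the placeholder URL, of course.")

handle_voice_command()
```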

As the price of computing continues to drop, we will see more and more smart, connected devices. If the marginal cost of making an object smart and connected is negligible, and there is utility in making that object smart and connected, then designers will make that object smart and connected. It's really that simple. So watch out for smart shoes, smart toasters, smart tennis racquets, in fact smart everything.

These smart devices aren't all going to have a display on them. In fact, most of them won't have a display, for both cost and aesthetic reasons. To use many of these devices, you're not going to want to whip out your smartphone and open a companion app just to interact with them. It's much more natural to just talk to them. This is why the VUI is looking so attractive.

Talking to the virtual coach embedded inside your smart tennis racquet is much more natural and intuitive on the tennis court than trying to use a graphical interface.

"How's my serve, Bjorn?", you'll ask. (Assuming your tennis coach is perhaps the digital embodiment of tennis legend, Bjorn Borg)

"Try not to twist your arm to the left quite so much as you bring your arm up", your virtual coach, Bjorn, might reply.

The GUI isn't going away. It will still have a place in our lives for many of the things that we do. But for many other activities, the GUI will be joined by its new cousin, the VUI.

Top seven trends to watch in 2017

Advances in technology continue to shape our world, and 2017 will be no exception. Here is a list of some of the things I’m watching and expect to see move forward next year. Not all of these trends will yield products in the next 12 months, but they are still trends we should all be watching: investments made in 2017 will bear fruit in 2018, 2019, 2020, and beyond. It's not an exhaustive list, but it’s my current hit list of some of the most interesting things going on in tech at the moment.

Merry Christmas, Happy Hanukkah, and I hope you enjoy my list of seven things to watch in 2017.

Hearables

Next year we will see the release of a range of in-ear technology designed to translate languages in real time, give you super hearing, and generally give you more control over what you hear and how you hear it. Check out my recent post on the topic for more information.

Bottom line: Smart headphones and a range of other in-ear “hearables” could be in the top ten Christmas gifts for 2017.

Digital personal assistants and voice

Get used to speaking to things rather than touching them. 2007 was the year touch went mainstream, with the launch of the first iPhone. 2017 may be the year of voice. Millions of people are already talking to their Amazon Echo, Google Home, and other voice-based devices. As these interfaces get better (think of them as crawling babies, getting ready to walk and then sprint), more and more people will use them to get more and more things done.

The future of voice is digital personal assistants (DPAs). These are in their extreme infancy today, despite what Mark Zuckerberg would have you believe. The usefulness of a voice assistant really comes down to two factors: its ability to understand what you want, and its ability to deliver it. The accuracy of voice assistants will continue to improve in 2017 as learning algorithms are applied to all the conversations we had with them in 2016. Recognition accuracy will steadily creep up over time, rising from 90-96% today towards 99% and beyond. In 2017 we may also see some DPAs become more conversational, as Viv attempts to be. And all those engineers at Apple must be working on something, so let’s hope Siri gets a significant boost in capability next year and can understand more of what we ask her to do. Also watch for new partnerships, new deals, and new skills to be added to DPA platforms so that they can do more for us when we ask. Alexa can already order you an Uber. Wouldn’t it be cool if she could also understand and deliver on a request like “Alexa, order me a chicken tikka masala with jasmine rice, a garlic naan bread, four poppadums, and a couple of onion bhajis, to arrive at 7pm tonight.”

Bottom line: Expect voice to become a more important component of your digital life in 2017.

Virtual, Augmented and Mixed Reality

This year was a decent year for virtual reality, but it still hasn’t captured the public’s imagination. Sony released their PlayStation VR product, which is actually pretty decent for the price, but didn’t promote it for Christmas, and so it’s being seen as a disappointment. Next year, expect prices for VR headsets to drop precipitously, fuelling increased demand. A $299 headset is much more appealing than a $599 or $799 one. Watch for companies like Lenovo, Dell, Intel, and others to make strides in this entry-level space. Also expect VR to go cordless, giving true six degrees of freedom for the first time in major VR platforms.

Expect some initial disappointment with the industry’s first efforts around mixed reality (MR), but understand that this is a necessary phase before things get steadily better and MR becomes a broadly compelling platform. Magic Leap are scheduled to show their first commercial product at CES in January but the word on the street is that it won’t deliver on the full promise of their technology in this first rev. Microsoft will likely improve their HoloLens product and they continue to push the Windows Holographic platform as a way to try and unify VR/AR/MR software development efforts. They have first-mover advantage in this space and seem to be getting good traction.

Bottom line: VR prices will drop significantly in 2017. Watch the AR/MR space closely. Once the experience improves and prices drop, this will eventually become the main interface for your digital world.

Self-driving vehicles

None of the major car companies is expected to release a fully autonomous vehicle next year, but trials will expand, and petabytes of data will be gathered by Tesla, Google, and others as their cars drive millions of miles on real roads. This data will be used to improve algorithms and boost the reliability of autonomous driving systems. In 2017 we may also see some major municipalities announce plans to embrace fleets of autonomous vehicles within certain zones of their cities. Also expect to see further experimentation with autonomous pods designed to carry 4-8 people at once, like the one shown in the photo above.

The most interesting services that pop up in this area will be journey management systems that span multiple transport modalities. Imagine being able to buy one ticket that dispatches a self-driving car to your door to shuttle you to a public transport hub, includes a ride on a train or tram, and then has another self-driving car waiting at the station to whisk you the last mile or two to your destination. Software infrastructure to make this possible will be developed in 2017. Also look for experimentation with new business models that blur ownership and rental. For example, you might get a good deal on a new car if you agree that, for a percentage of the time when you’re not using it, it can be used by an Uber driver to generate revenue that is split between you, the driver, and the car company. These kinds of models get even more interesting once your car becomes fully autonomous and you no longer need the Uber driver. Autonomous vehicles will reshape human mobility in profound ways. What happens in 2017 will pave the way for big launches in 2020, 2021, and beyond.

Bottom line: Watch the big corporate deals being done around self-driving cars in 2017 and get ready for first real products in around 2020.

Smart objects

The Cognitoys Dinosaur

Smart objects are portals to digital value. They can create entirely new business models and essentially turn products into services, experiences, or transformations, all of which are more valuable than just a simple, dumb product. In 2017, look for a broad range of companies to experiment with smart, connected objects as a way to reimagine product design and generate new revenue streams. Designers will need to experiment here and assess how much value they can create digitally versus physically.

Bottom line: Smart objects are coming. Amazon Echo, Hiku, and the Cognitoys dinosaur are just the beginning.

Analytics

More businesses will embrace analytics, not just to understand what’s happening in their operations, or to predict what their customers might do next, but to make core business decisions. Next year may be the year of prescriptive analytics, as powerful machines leverage massive amounts of data to remove the “gut” from gut decisions and shift businesses towards data-driven decision-making. Expect data-driven decision-making to be used more extensively in areas like product design, planning, operations, and how best to deploy labor.

Bottom line: No matter what industry you are in, if your company doesn’t already have a comprehensive strategy around predictive and prescriptive analytics, now is the time to get on it. No ifs, ands or buts.

Robots

Look for the release of more robots in 2017, each designed to take on specific tasks. We already have Harvey working in plant nurseries, SAM-100 laying bricks, and Baxter working on assembly lines. Perhaps in 2017 we will see the release of a first version of Moley, a robotic kitchen designed to cook us meals in our homes by motion-capturing master chefs. Cool stuff, if they can do it.

Bottom line: The robots are coming. In 2017 we will see a few more of the many robots we expect to see flooding our world in the next decade.

Underpinning many of these trends is one final "megatrend", that of artificial intelligence. Learning algorithms will continue to make giant leaps forward in 2017. Machine learning, deep learning, one-shot learning, sparse learning and other AI fundamentals will all see improvements next year. They will lead to voice assistants that understand us better, cars that drive more safely, better fraud detection, robots that do their robot things more intelligently, drones that fly more safely, analytics that spot more important hidden insights, smart objects that are smarter, wearables that adapt to our individual needs, and so on. 

One thing is for sure--2017 is set to be yet another exciting year for technology! Let me know what you think. What are the big trends that I didn't include here, that perhaps should have been?

The next three eras of computing

I've been thinking a lot lately about where computing goes from here, and what it might do for us in the next couple of decades. And then it struck me. The future of computing may be more like taking a step through the looking glass. It changed the way I think about what's next. I call it "Bell's Event Horizon." Let me explain.

The main reason for this post is that I'd like to invite your feedback on the idea. And the idea goes like this...

Transistors, the chips they make possible, and the devices those chips then power have been shrinking since the 1950s. We now carry supercomputers in our pockets and purses, each with more capability than machines that used to fill entire rooms. And transistors still have some shrinking left to do. This we know. Moore's Law 101.

Bell's Law has held true for the last half century of computing. It observes that roughly every decade, computing shrinks in both price and physical size until it eventually creates an entirely new class of computer. Mainframes filled an entire room, mini-computers were the size of a couch, and the first PCs were the size of an ottoman.

PCs shrank down to laptops. Next came smartphones. Now, wearables and IoT have emerged as the next class of computing (wearables are really just IoT applied to humans). The question I've been thinking about is this: What comes after IoT? Does computing just shrink even further and get even cheaper? Is the next computing class that of nanobots and computer "dust"? Perhaps. Or could something else happen? Is there another way to think about all this?

From "ever smaller" to "better virtual"

Perhaps we should stop thinking about ever smaller computers that asymptotically approach zero size, and instead think about what happens once computers have "negative" size and push into another dimension altogether. In the same way that complex numbers have a real and imaginary component, could we think about computers having a physical and virtual component? The virtual component of a computer could describe how big and how realistic a simulation of the real world a computer is able to generate. So, rather than building future models of computers shrinking indefinitely, should we instead think about them crossing through some sort of "event horizon" or "looking glass" and transcending physical form?

Once we cross "Bell's Event Horizon", we could begin to plot out the imaginary, or virtual, dimension of Bell's Law. In this world, as we progress through time, rather than computing getting physically smaller, we instead see the size and complexity of the VR/AR/MR simulations we are able to create with our computers grow larger and more detailed with each successive generation.

For example, our first generation beyond the event horizon could be the ability to simulate individual objects with a high degree of realism. Those objects would be realistic enough in a mixed reality scene that, to an observer, they would be indistinguishable from real objects. The next generation beyond that would be the ability to render entire virtual scenes, totally realistically. This could either be totally digital (VR) or a mixed reality scene where the entire scene is modified in a 100% realistic way. The view from your living room window might be changed to look convincingly like your house is located on the top of a mountain. The obvious generation beyond total scene rendering would be full world rendering: an entire world simulation in full, realistic detail. Again, this could be either 100% digital (VR) or mixed reality (MR). In a mixed reality world, you could "edit" the entire world to your liking. Perhaps you would choose to remove all graffiti, or you might enjoy seeing dragons flying above you in the street, or, if you like, you could see the entire world as if it were made out of cheese. Your choice. At each new step in this virtual domain of Bell's Law, the size of the simulation made possible by the computing class grows to a new level.
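
If it helps, here's a playful little Python sketch of that complex-number analogy. The numbers are entirely invented; the point is just the shape of the model, with progress running down the real (physical) axis and then turning up the imaginary (virtual) axis after the event horizon:

```python
# Each computing class modeled as a complex number: the real part is a
# rough physical size in metres, the imaginary part a made-up "simulation
# reach" score. All values are illustrative, not measurements.
BELL_CLASSES = [
    ("mainframe",        10.0 + 0j),
    ("minicomputer",      2.0 + 0j),
    ("PC",                0.5 + 0j),
    ("smartphone",        0.15 + 0j),
    ("wearable / IoT",    0.01 + 0j),
    # --- Bell's Event Horizon: progress turns imaginary/virtual ---
    ("object rendering",  0.0 + 1j),   # single photoreal objects
    ("scene rendering",   0.0 + 2j),   # whole scenes, edited at will
    ("world rendering",   0.0 + 3j),   # a full, editable world model
]

for name, z in BELL_CLASSES:
    axis = "physical" if z.imag == 0 else "virtual"
    print(f"{name:>18}: {z}  ({axis} dimension)")
```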

Feedback please

Here's the point in this post where I have hopefully explained enough of my thinking that you get the concept, and I ask for your feedback on the idea. What do you think? Stupid waste of time, or am I on to something? Could this be a helpful construct to think about what the next classes of computing described by Bell's Law might be? Is that useful to help us think about what's next? 

I'm not saying this is the only way to think about the next few classes of computing. It's clearly not. For example, other new dimensions could be considered here, such as artificial intelligence. How could AI be incorporated into a model like this? Are there levels of AI we might consider as the underpinnings of the next classes of computing? ANI, AGI, and ASI are obvious ones, but there may be more subtle levels we could consider. Or is that just muddying the waters, and over-complicating things?

This idea of Bell's Event Horizon is offered as just one lens through which it may be helpful to think about what's next. About what we might choose to build in the future, and what we might do with the next several major computing classes.

I invite your comments. Please help me either kill this idea as a bad one, or help me make this seed of an idea stronger and worthy of sharing more broadly. Thanks! Can't wait to hear all your thoughts.

Hearables: The next big wearable platform

Hearables, which are computers that fit inside your ear, are going to be a big deal. Their capabilities will go way beyond what traditional wireless headphones or hearing aids can accomplish. In this post, I'll explain why we will all be augmenting our hearing within just a few years.

Let's first define the term. A hearable is a computational device that fits in your ear and is wirelessly connected to other devices, and thus ultimately to the Internet. Hearables may perform a wide range of tasks, from enhancing your hearing ability, to measuring your biometrics (such as your heart rate), to providing you with information and services. Most will be 2-way communicators, including both a speaker and microphone.

Your phone will still be with you most of the time, but it will be spending a lot more of its time tucked away in your pocket or purse.

Why focus on the ear?

Wearables are widely hailed as the next big wave of technology. There are many places on your body that it might make sense to sport wearable technology. These include your wrist (e.g. Apple Watch), your feet (e.g. Nike+), your eyes (e.g. Microsoft HoloLens), beneath your skin, and of course, in your ears.

For a few years now I've been thinking that in-ear technology built around an audio/voice interface will have a valuable position in our constellation of wearables. Voice interfaces haven't really taken off yet for two main reasons: 1) the accuracy just wasn't good enough (now mostly fixed), and 2) nobody wants to be in an office where everyone is talking to their computer at the same time, and where everyone can hear everyone else's device (and perhaps sensitive or private information) talking out loud.

There are two key advantages of an ear-based interface that make it appealing. Firstly, it's personal: nobody else can hear what you're hearing. Secondly, in-ear technology is ready for prime time right now, whereas augmented and mixed reality technology, which focuses on a visual interface, is still several years away. Even when AR/MR hit the mainstream, audio interfaces will still be less distracting than visual ones for a lot of use cases. And you'll need audio to go along with your AR/MR experiences anyway.

Audio interfaces are discreet and natural. Implemented well, they can be just like having a personal assistant who whispers just the right information in your ear at just the right time.

Why now?

The major tech companies have realized that to make their platforms even more indispensable to modern life, they need to get even more personal. They must transfer their value from the phone and bring it even closer. Services like Google Assistant, Cortana, Alexa, Siri, and others are most valuable when they are right there and available totally hands-free. Apple's delayed AirPods are likely the beginning of a whole new platform upon which they will create new value and through which they will deliver new services. And as I outline below, an avalanche of interesting startups is rushing into this space too.

The technological underpinning of wireless audio devices, Bluetooth, is also getting a major revamp this year. Bluetooth 5.0 will quadruple range, double bandwidth, and dramatically reduce the power consumption of stereo audio connections, making it possible to build devices that can operate for days on a single charge.

The hearables are coming.

What new things will we be able to do with hearables?

OK, so why would you want to stick a small computer in your ear? Well, there are lots of good reasons why. Hearables won't just improve your hearing and make it even more convenient to listen to music. They will enable you to communicate in new ways, and maybe even translate language for you in real time.

Let's review the major applications of hearables. I've broken them down into major categories (that get more and more interesting as you go down the list):

  1. Traditional sound-related applications
  2. Augmented hearing
  3. Biometric capabilities
  4. Information/communications services

Let's review these one by one. And yes, I'm saving the best for last. If you can't wait, feel free to scroll down until you see the Babel Fish picture :)

Traditional sound-related applications

This group of applications is essentially what you're able to do today with existing products spanning the headphone, hearing aid, and ear plug markets, only better.

Just like headphones, hearables will enable you to listen to music and make phone calls via a Bluetooth connection to your phone. They will also help mitigate hearing loss, boosting sound levels or intelligently boosting the signal-to-noise ratio for sounds you're interested in hearing, like voices. They will likely not be marketed as hearing aids though, partly to avoid stigma, but mostly because you need FDA approval to do that. Hearables can also capture sound from one side of the head and redirect it to the other ear for people like my pal Mark, who was born deaf in one ear. Some hearables will also offer hearing protection, digitally compressing sound that enters the ear canal to limit its volume and minimize damage in noisy environments like concerts, industrial sites, or construction sites. This is the focus of the new ProSounds H2P product.

Augmented hearing (better than normal hearing)

Hearables will go way beyond what traditional headphones and hearing aids can do today. New devices like the Here One will allow you to decide how the world sounds to you, applying your own personal EQ and noise reduction. Want to hear a little more or less bass while at a concert? No problem. Where hearables start to get really interesting is when you can apply increasing levels of intelligence to how you process, and thus perceive, sound.

Smart noise cancellation allows you to remove the sounds you don't want to hear (babies crying at the mall, the rumble of an air conditioner at your office, plane noise, the screech of a subway train) but keep the sounds you don't want to miss. For safety, cyclists need to be able to hear traffic noise while listening to music. And parents will still want to hear the sound of a child crying upstairs while watching TV.

Enhanced or "bionic" hearing is one of the promises of some hearables. These devices will intelligently boost quieter sounds and enable you to have better than perfect normal human hearing. This could be useful for first responders listening for survivors, hunters tracking animals, and law enforcement and military chasing bad guys. Enhanced hearing will also come in handy for people at the back of a room listening to under-amplified public speakers. While we're at it, why not build hearables with the ability to replay the last 10 seconds of what you just heard in case you missed something?

Real-time language translation

OK, now things start to get much sexier. How about a hearable that translates other languages in real time? Waverly Labs is currently building a product called the Pilot, which will attempt to translate languages in real time, much like the Babel Fish in Douglas Adams' great novel, The Hitchhiker's Guide to the Galaxy. Waverly imagines a world without language barriers, and they are targeting the Pilot at a $299 price point; $199 if you sign up to their Indiegogo campaign. Let's see if they deliver something decent in May 2017, their projected launch timeframe.
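
Under the hood, a translating hearable implies a three-stage pipeline: speech recognition, then machine translation, then speech synthesis. Here's a skeleton of that pipeline in Python, with each stage stubbed out, since in a real product the heavy lifting would happen in cloud services:

```python
# Stubbed three-stage pipeline: ASR -> MT -> TTS. The hardcoded return
# values are placeholders standing in for real cloud service calls.
def recognize(audio: bytes, lang: str) -> str:
    """Speech recognition stub: audio in, text out."""
    return "ou est la gare"

def translate(text: str, src: str, dst: str) -> str:
    """Machine translation stub: text in one language, text in another."""
    return "where is the train station"

def synthesize(text: str, lang: str) -> bytes:
    """Speech synthesis stub: text in, audio out."""
    return b"<pcm audio>"

def babel_fish(audio_in: bytes, src: str = "fr", dst: str = "en") -> bytes:
    heard = recognize(audio_in, src)          # stage 1: what was said
    translated = translate(heard, src, dst)   # stage 2: what it means
    return synthesize(translated, dst)        # stage 3: say it aloud

# Each stage adds latency, which is why "real time" is the hard part.
print(babel_fish(b"<mic capture>"))
```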

Changing voices

Future hearables may also enable us to augment and modify the sounds that we hear, changing them to our liking. Applications may emerge that let you alter voices, perhaps letting you soften a strong regional accent that you find hard to understand, or hear everyone as if they had just sucked the helium out of a balloon. You might download voice packs from your favorite movies and assign new voices to your friends and family. Make your boss sound like Darth Vader and your partner sound like C-3PO or Princess Leia. Start thinking now about who in your life you'd give Yoda's voice to. And Jabba's.

Biometric measurement

Some hearables will come with a range of biometric capabilities, using sensors in your ear to measure your heart rate, blood pressure, temperature, ECG, and even your blood oxygen level. Accelerometers will also enable these devices to measure your movement and activity. So hearables will be able to do activity monitoring, as well as health and wellness monitoring. You will be able to build a medical record of your vital signs, stored on your phone, that you can share with clinicians should you choose. Biometric hearables could also automatically sound the alarm and summon help if they detect a heart attack or some other major medical issue.

Biometrics can also be very valuable in authentication and security. When combined, a wide range of biometrics (heart rhythm, ear shape, gait) can form a signature that's unique to you. A hearable could authenticate you to other devices the same way an Apple Watch can sign you into your MacBook now.

NEC claims it can measure the reflection of sound waves in the ear canal to recognize the unique shape of an individual ear with greater than 99% accuracy. Combined with other biometric signatures to further boost accuracy, this could provide a new root of trust for security applications that require continuous authentication. Goodbye and good riddance, passwords.
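
As a back-of-the-envelope illustration of that kind of fusion, here's a toy Python sketch. The factors, weights, and threshold are all made up; a real system would tune them carefully:

```python
# Toy multi-factor biometric fusion: each factor yields a match
# confidence in [0, 1], and a weighted combination must clear a
# threshold before the wearer stays authenticated.
WEIGHTS = {"ear_shape": 0.5, "heart_rhythm": 0.3, "gait": 0.2}
THRESHOLD = 0.90

def fused_score(scores: dict[str, float]) -> float:
    """Weighted average of per-factor match confidences."""
    return sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)

# Example reading: strong ear-canal match, decent heart rhythm, weak gait.
reading = {"ear_shape": 0.99, "heart_rhythm": 0.93, "gait": 0.80}
score = fused_score(reading)
print(f"Fused confidence: {score:.3f} ->",
      "authenticated" if score >= THRESHOLD else "challenge user")
```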

Information and communication services

Apple's delayed AirPod wireless headphone product

Perhaps the most interesting corner of the hearables market is the ability to use in-ear computing as a new platform for delivering information and communications services. This is the area of most interest to the major compute platform vendors like Google, Facebook, Amazon, Microsoft and Apple.

At the most basic level, information hearables can be used to do much of what your phone does today, just delivered in audio form. They will be able to give you notifications and reminders. Future versions will deliver context-sensitive, subtle prompts and reminders such as this: "This is Jeff McHugh; you met him and his wife Shirley at a party last May".

Nobody wants intrusive notifications blurted out at inopportune moments when they are trying to concentrate, or when they are in the middle of a conversation. Hearable platforms will need to understand a user's context and use intelligence to choose the right time and place to deliver different categories of information. All information is not equal. Successful hearable platforms will need to assess the relative value of information and decide what to present to the user, and when.
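
Here's one way to picture that triage logic, as a hedged little Python sketch. The contexts, weights, and scores are invented for illustration; a real platform would learn them from your behavior:

```python
from dataclasses import dataclass

@dataclass
class Context:
    in_conversation: bool
    focused_work: bool

@dataclass
class Notification:
    text: str
    importance: float   # 0 (trivia) .. 1 (urgent)

def should_deliver_now(n: Notification, ctx: Context) -> bool:
    """Deliver only when importance outweighs the cost of interrupting."""
    interruption_cost = 0.2            # baseline annoyance of any whisper
    if ctx.in_conversation:
        interruption_cost += 0.5       # don't talk over real people
    if ctx.focused_work:
        interruption_cost += 0.3       # protect concentration
    return n.importance > interruption_cost

ctx = Context(in_conversation=True, focused_work=False)
for n in [Notification("Meeting moved to 3pm", 0.9),
          Notification("Someone liked your photo", 0.1)]:
    action = "whisper now" if should_deliver_now(n, ctx) else "hold for later"
    print(f"{n.text!r}: {action}")
```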

As I've discussed in a previous post, digital personal assistants are going to become a primary interface for many of us in the coming years. They will allow us to control our environments ("turn on the lights"), order services ("I need an Uber"), get information ("What's the score in the Cubs game?"), get directions, and much more. This is where platforms like Microsoft Cortana, Google Assistant, Amazon Alexa, Samsung's Viv, Facebook Messenger, Apple's Siri, and others want to go. And some are doing a better job than others. The natural place for these services to reside is in your ear, not on your phone.

Personal audio networks

Ever found yourself shouting into the ear of the person next to you at a loud concert? Or lost someone at the supermarket and then had to wander around looking for them? Personal audio networks are a great future application of hearables. You link your hearables to those of others on a temporary basis to form a private audio network that only those on the network can access. With a personal audio network you just speak, and the other people on that network hear you. When combined with smart noise cancellation and digital compression, you'll be able to talk to your buddies at your next death metal concert without even raising your voice, and they will hear you perfectly, even if you're under assault from 150 decibels or standing 150 feet away at the bar getting drinks. And teamwork at the supermarket will get much easier: "Hey honey, while you're off grabbing that yoghurt, please grab a 2% milk too. See you in the cereal aisle".

Theatro is already making a hearable product aimed at the retail market. It replaces those "walkies" you sometimes see store associates wearing, which are essentially just radios that broadcast to every other employee in the store. The Theatro hearable intelligently routes communications to the right person or people, and has all kinds of cool features, including the ability to find subject-matter experts, connect new employees directly to a buddy, and let managers share messages with all staff. If you're in the retail world, it's worth a look.

New services will sit on top of hearables

BitBite listens to your munching sounds to figure out what you eat

Look for new, perhaps unexpected services to be layered over the top of hearable platforms once they become widespread. For example, BitBite uses hearables to listen to the sounds you make while eating and analyze your eating and nutrition habits. The software then provides you audio feedback that is designed to help you improve your diet and alter your eating patterns accordingly.

Expect to see the emergence of digital coaches and other services that are based on artificial intelligence. For example, you might load a "Digital conscience" app that whispers guidance in your ear as if Jiminy Cricket were perched on your shoulder.

The companies to watch in the hearables sector

Many companies are racing toward the hearables sector from different starting points and are now on a collision course at the center of the market. Major players eyeing this space include:

  • Established hearing aid companies are busy adding features (Bluetooth connectivity and microphones for music, phone calls). Key companies: Beltone, Phonak, Oticon, Resound, Sivantos, Starkey, and Widex.
  • Established headphone companies are adding connectivity and hearing augmentation features, and some are adding biometric capabilities. Key companies: Bose, Jabra, Koss, LG, Motorola, Samsung, Skullcandy, SMS, and Sony.
  • Established computer companies are either building new hearable devices of their own (e.g. Apple), or are looking to bring their services to the ear via partnership with other hardware vendors. Key companies: Amazon, Apple, Facebook, Google, Microsoft, and Samsung.
  • Hearable startup companies - A number of interesting new startups are beavering away on new hearable products that do everything from smart noise cancellation to real-time language translation. And once they have the hardware deployed, I'd expect them all to bring a steady stream of new innovation through software apps. The winners will build app stores for their devices. Key companies (and their first products) to watch: Doppler Labs' Here One product, Nuheara IQ buds, Alpha Audiotronics Skybuds, Nura, Bragi's Dash, ProSounds H2P, and Human Inc's Sound headphones, which look like designer Spock ears.
Human Inc's secretive new Sound product

The winners in the hearables market will ultimately be the tech giants (Google et al.) who will capture the lion's share of the value created here. The hardware manufacturers (Jabra, Skullcandy, Resound, etc.) are still in with a good chance of success, though. Good-quality hardware will command a premium and enable healthy margins. But branding and marketing will play a very large part, as for any fashion item. Just look at what Beats managed to achieve with decidedly average quality headphones and a big marketing budget. Hearables are fashion items, and always will be. Expect hearables to be very popular with the MTV generation. We listened to music too loud when we were younger and some of us now need help with our hearing but aren't ready to stop being cool. Hearing aids, NO! Hearables, GREAT! So long as the features, fashion, services and value are there, these things will sell like hot cakes.

Closing thoughts

New technology always affords us amazing new possibilities, but also comes with risks. We will always need to understand and minimize the risks so that we can enjoy all the benefits.

Hearables may make life easier for us, and make it easier to access services simply using our voices. They may help us to improve our health, hear better, listen better, understand people that speak other languages, be on time more often, remember people's names, and generally do a better job of looking after our hearing.

But we will need to think carefully about how we implement these devices and services. Privacy must be a primary concern. Hearables will need to be designed with privacy at their core. After all, we are talking about an always-on microphone here. What gets recorded? Where does information get stored? Who owns it? Who controls access to it?

And how will it feel if we essentially start introducing new voices into our heads? Will we start to feel schizophrenic when we have multiple software agents offering us advice throughout our days? After prolonged use, would we feel odd when we take out our hearables and the voices stop? Much ethnographic work needs to be done here to understand how personal assistant voice agents whispering in our ears can best fit the way people live their lives. There are important limits to be explored here.

What do you think?

What's your own interest level in some kind of hearable technology? What features would you want to see from the ones I've described here? And what features would you want that I didn't cover?

 

Staying employable in the future

Career advice from a futurist

The robots are coming! The robots are coming! As I discussed in a previous post, technology is going to gobble up many jobs in the next decade. Autonomous machines including self-driving cars, robots, and drones are being developed at a furious pace and will displace many blue collar workers. Artificial intelligence and analytics will replace many well-paying, white collar jobs currently done by humans. Are we facing a disaster in the employment market? What can you do to ensure you remain employable in the long term?

When my audiences ask me what the future holds for jobs, I give them a stock answer trotted out by most futurists, and then I add several important caveats that help them understand what's at stake in the coming decade. Let's start with that stock answer and then we will get into the meat of it.

From milking cows to making lattes

The stock answer: Technology has always destroyed jobs as it marched relentlessly forward. But technology also creates new jobs in that renewal process. Two hundred years ago the vast majority of people worked the land. It was back-breaking, miserable work, and the life expectancy of farmers was less than 40 years. Today less than 1% of America's population works on the land. People who would have been dragging a plough behind a sweaty horse two centuries ago are now car mechanics, hair stylists, or crafting lattes at Starbucks. And that is a good thing. Their quality of life is better, the quality of the food they eat is better, and they will live over twice as long as their farming ancestors of the 1800s.

Destruction and creation go hand-in-hand

Historically, technology creates more jobs than it destroys. And the jobs that it replaces are usually the more unpleasant, lower-wage jobs. The new jobs created by technology and infrastructure advancements typically require a higher level of skill than the old jobs they replace. Education is therefore vital to this renewal process. Without accessible, affordable, quality education people get left behind. Unemployment balloons, and those who are working have to support those who aren't.

THE LAST REFUGES: CREATIVES AND PEOPLE PEOPLE

So what types of jobs are safe from this technological invasion? Machines won't match humans for a very long time when it comes to tasks that require true creative thinking and problem-solving. These are tasks filled with challenges for which a machine has no prior context to draw upon. Examples of roles that fit into this category include designers, engineers, artists, researchers, architects, scientists, film-makers, creative writers (though not all writers), urban planners, marketing managers, hair stylists, creative directors and yes, futurists. Many people in these roles will see technology assistants that significantly aid them in their jobs, but their jobs will not be totally replaced by technology.

The other category of jobs that will not be replaced by technology are those that mostly involve human-to-human interaction. These are jobs that require human skills such as empathy, compassion, nurturing, negotiation, caring, persuasion, motivation, and social perception. Here are a few examples of jobs that fall into this particular category: nurses, physician assistants, teachers, first-responders, law enforcement, salespeople, dental hygienists, caregivers, people managers, child minders, and everyone else in high-touch customer service.

WHITE COLLAR WORKERS ARE IN THE CROSSHAIRS TOO

Make no mistake, white collar workers are NOT immune from this round of automation.

Blue collar workers got hit hard over the last four decades. Millions of jobs were lost to the double whammy of outsourcing and the continued march of automation. (Side note: Automation might actually help bring some manufacturing jobs back to high-cost labor markets as the partnership of humans and machines becomes the most cost-competitive way to build goods, and transport costs become a significant part of the cost of goods). And higher levels of automation, including autonomous transportation, will continue to replace blue collar jobs in the next decade.

But the next BIG wave of job displacements will be amongst white collar workers. Entire categories of work, previously thought to be immune to replacement by technology, will tumble to a new wave of tech within the next decade. Some of these jobs will be gone within just five years.

The capabilities of artificial intelligence and machine learning are leaping forward at an astonishing pace. Jobs that involve repetitive, if highly-skilled, tasks are most at risk. The types of roles at risk include:

  • Radiologists
  • Financial advisors
  • Auditors
  • Accountants
  • Paralegals
  • Personal assistants
  • Bookkeepers
  • Travel agents
  • Legal aides
  • Administrative support
The Freightliner Inspiration autonomous truck, in trials now

Many readers may find it easier to dismiss the contents of this post as the ramblings of a futurist. It's perhaps more comfortable that way. But please remember that even many computer scientists thought that computers would never be powerful enough to take on tasks like driving a car; just a decade ago, some well-informed scientists would have told you it was impossible. Autonomous vehicles are now a reality and will be available for sale within a few years. The tens of millions of people that rely on driving as a way to earn a living are all at risk. Truck drivers, delivery van drivers, taxi drivers, and all the people driving Uber and Lyft cars today...they are all at risk. Continued progress in artificial intelligence WILL destroy entire job sectors. Just ask all the people that used to operate switchboards, work in typing pools, dig ditches, or work a loom. And consider all the bank tellers currently looking for work as consumers shun branches for online banking.

So what should you do to ensure you remain employable for the foreseeable future? Here are a few career tips from a futurist:

1) Focus on jobs that aren't easily replaced by robots and algorithms. These are tasks that aren't repeated over and over, where creativity, adaptability or dexterity is key. Jobs that involve a lot of human interaction are generally safer than those that don't. It will surprise some that being a nurse is very likely a safer job long term than being a doctor. The diagnostic skills of doctors are easier to replace with artificial intelligence than are the caring skills of a nurse. In the shorter term (the next decade), manual jobs that require high levels of dexterity will remain immune to technological disruption. So cooks, cleaners, gardeners, dentists, surgeons, carpenters, and repair people are very safe for now. Robots are just too clumsy and slow when it comes to the manipulation of tools and objects in non-repetitive ways. For now.

2) Don't wait. Switch before you need to. Don't wait to make a career change until you absolutely have to. Try to get ahead of the curve. If the writing is on the wall for your job role, it's better to be out looking for a new line of work NOW rather than when everyone else in your profession is forced to make the leap. If you can, start getting the new training you need now, in parallel with working in your current role.

3) Keep learning. All the time. Keep the learning muscles strong so that if you need to switch careers and learn something totally new, you are ready for it. We will all need to maintain continuous learning and stay on top of the changes hitting every industrial sector in the coming decade. Be sure to make use of the many online training resources that are out there. Many top universities let you audit their classes for free.

4) Be agile. Don't cling to a particular career path for too long. Be ready to switch and start something new.

5) Figure out who you are. This is a tougher one. Most people have no idea what they would want to do for work other than the thing they are currently doing. Start asking yourself that question now. Again, don't wait until the moment comes when you have to reinvent yourself. Think about a scenario where all jobs pay equally...what would you do then? In an era where most workers will have to retrain at least once and have two or more careers in their working lives, many of us will get a chance at Life 2.0, and will be able to look at our first careers as a practice run. Think now about what you would love to do should your entire profession disappear tomorrow. And start planning for that today.

Finally, whatever line of work you're in, watch your ass. Technology is getting better each and every day :)

Diversity 2025: Working with non-humans

The nature of work is changing. The most important skill workers need is the ability to work well with others. Soon that will include the ability to work alongside autonomous machines and algorithms. Welcome to the new diversity. 

Diversity and inclusion remain key to business success, and to social justice. Savvy companies are working hard to make their workplaces more inclusive. They understand the benefits of maintaining their market relevance and boosting their speed of innovation, benefits that go way beyond any government requirement to meet a statistic. Despite these efforts, many industries still suffer from a marked lack of workplace diversity, whether you consider the dimensions of gender, race, age, or sexual orientation. And we should remain focused on all these dimensions of inclusion. But we must now add another dimension when we think about workplace diversity in the future: non-humans. The most effective teams of the future will not just include more women, more people of color, more LGBTQ people, and people of all ages; they will also include robots and algorithms.

Your next co-worker may not be human

You are your network

Prior to the industrial revolution your value as a worker was pretty much related to your physical ability: your strength, stamina and dexterity. With the coming of the Information Age, people were not only valued for what their bodies could do; their value was now measured by what they knew and how well they could create and process information. 

Welcome to the Network Age

We now live in the Network Age. Knowledge is being commoditized and your value is less related to what you know, and more related to who you know, the strength of your relationships, and how well you are able to leverage your personal network to get things done. And that network is now extending to include non-humans.

A considerable amount of human knowledge is now just a click away on any computer. And experience on the job is losing relevance too. Rapid change is making some experience obsolete. And in many areas human judgement made possible by decades of experience is being replaced by learning algorithms that make better quality decisions than humans ever did.

Consider the "merchant prince" in apparel companies. This person uses their experienced 'gut' to decide on the new clothing line that best anticipates next year's fashion trends and the best way to display and market merchandise. These people are already being made obsolete by predictive analytics. The decisions these people make affect the actions of many, many people in a clothing company. Now all those people are essentially being guided by algorithms.

As a result of this commoditization of knowledge and experience, your ability to collaborate on a team becomes more important to an organization than the knowledge you have rattling around inside your head. It's not to say that knowledge no longer matters. Physical attributes like stamina still remain relevant too. But your ability to network, and to put that network to work for you, will be your most vital skill.

The Network Age is upon us. Knowing how to collaborate to find the information you need to get something done is what matters. In many cases, algorithms will provide much of that information to us.

The new diverse work team: humans and digital intelligence

Working with Bob, Becky, and the HAL 9000.

Getting along with other humans has always been important. To be successful in the workplace of the future, we will all need to be comfortable working alongside digital intelligence too. This includes both autonomous machines and algorithms, respectively the physical and non-physical instantiations of digital intelligence.

Smart managers will resist the temptation to simply find ways to replace humans with robots and algorithms as technology advances. I know this temptation is strong in some sectors of U.S. industry right now as bean counters brace for the arrival of the $15 minimum wage. Companies risk stripping out the humanity from their operations, and thus their brand, if they blindly take this approach. Every brand has a human element at its foundation.

Leaders should step back and consider ways to optimize their labor force by forging partnerships between humans and machines (or humans and algorithms). Humans and machines each have different strengths and weaknesses.

Robots vs humans vs algorithms

Machines have many advantages over us humans. Robots are much stronger than us. They also have endurance and speed on their side. Algorithms, analytics and A.I. can spot complex patterns in vast seas of data that humans just cannot see, and they operate at incredible speed.

Don't panic. All is not lost for humanity. We humans still excel in many areas where machines will remain weak for the foreseeable future. Our skills related to creativity, dexterity, and adaptability are fine examples. Most of us also have strong empathy for other people, a vital skill for all aspects of customer service. We are of high value just by virtue of our humanity. After all, nobody wants to be told they have stage three liver cancer by a machine.

Radiologists beware: algorithms are coming for your job

Technology will do best at repetitive tasks (even very complex ones) that can be learned by analyzing huge data sets. These are tasks that are repeated over and over again and that have a measurable outcome. Radiologists are highly-trained and highly-skilled. But if you show a deep learning algorithm enough CT scans and X-rays of potential tumors and then tell it which ones are positive and which are negative results, you can teach it to be a very effective radiologist. The diagnostic components of other jobs, such as those of doctors and mechanics, will go the same way. So too will the jobs of CPAs, insurance underwriters, and auditors. Expect algorithms first to show up as assistants, working alongside the human experts, advising and offering an expert "opinion". Once the machine's accuracy outstrips that of the human, the human will be freed to take on more of the tasks that machines can't do, like spending time face-to-face with patients. The machine becomes a partner to the human, enabling them to achieve far more in a single day and to focus more of their time on what they do best: interacting with other humans.
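
To make that "show it enough scans" idea concrete, here is a toy sketch of supervised learning in PyTorch. It is purely illustrative, not a medical device: the network shape, image size, and the random tensors standing in for labeled scans are all assumptions.

```python
import torch
import torch.nn as nn

# Toy binary classifier: does a grayscale 128x128 scan look positive or negative?
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 2),  # two outputs: negative, positive
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised step: show the model scans, tell it which were positive."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors standing in for a labeled batch of eight scans.
images = torch.randn(8, 1, 128, 128)
labels = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])
print(train_step(images, labels))
```

Repeat that step over millions of labeled images and the model gradually becomes the automated "second opinion" described above.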

Augmenting human capabilities with digital intelligence

Managers will need to learn how to think through business processes and design their teams so that tasks are intelligently split between human and non-human labor. (Non-human labor is really capital if you ask economists, but let's set that aside for now). Managers will need to think through which tasks are best handled by humans, which by robots, and what role algorithms, analytics and A.I. can have to help the humans do a better job.

As an example, consider the service a sales associate gives in a high-end clothing store. When you want to try on a few items, he or she takes the garments from you, finds you an open fitting room, and then carefully lays out all the clothing in the room ready for you to try. What you might not be aware of is that the associate is also eye-balling all your choices to understand your size, color preferences, and general style. Once you are safely installed in the fitting room they run off back into the store to gather matching items you might also want to try. As well as providing a valuable styling service, this is also a way for the store to sell-up and increase revenue. To do this well, the sales associate must a) have a good sense of style, b) accurately remember your size and all the garments you picked, c) know the inventory of the store and what is in stock.

This business process can be parsed into two pieces: a human piece, and an algorithmic piece. RFID sensors in a smart fitting room can read RFID tags on items to figure out what garments the customer took in with them, including their exact size and color. A wifi sniffer can recognize the MAC address of a customer's phone and look up previous purchases if they have previously downloaded and used the store's app. All this information can be used as input to an analytics engine. Using a "goes with" database, created by a designer, the algorithm looks up each item the customer has in the changing room to find other items that it can suggest will make nice outfits: shoes, accessories, and so on. The algorithm checks which items are in stock and where they are located in the store. It plots the optimal path to pick all the items from the store floor and sends this information to the store associate.
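
Strip away the sensors and the core of that algorithmic piece is a couple of table lookups. Here's a minimal sketch; the item names, the "goes with" table, and the stock index are all invented for illustration.

```python
# Hypothetical data: RFID reads from the fitting room, a designer-curated
# "goes with" table, and a live stock index mapping item -> location.
goes_with = {
    "slim jeans (32)": ["leather belt (brown)", "canvas sneakers (10)"],
    "grey sweater (M)": ["white tee (M)", "silver pendant"],
}
in_stock = {
    "leather belt (brown)": "aisle 4",
    "white tee (M)": "aisle 2",
    "silver pendant": "counter display",
}

def suggestions_for(fitting_room_items):
    """Turn the customer's fitting-room picks into a pick list for the associate."""
    pick_list = []
    for item in fitting_room_items:
        for match in goes_with.get(item, []):
            if match in in_stock:
                pick_list.append((match, in_stock[match]))
    # A real system would also order this list into an optimal walking path.
    return pick_list

print(suggestions_for(["slim jeans (32)", "grey sweater (M)"]))
```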

The store associate, guided by the algorithm, then whizzes around the store and picks out the clothing items and accessories and returns them to the fitting room to give to the customer. Their dexterity and visual abilities mean this job is best done by a human, not a robot.

This partnership of human and algorithm gives the highest-quality result for the customer, maximizes the chance of selling up (and thus perhaps the chances of boosting the sales associate's commission), and saves the associate time, enabling them to serve more customers. The styling component of the service becomes automated which means even associates with poor fashion sense can deliver terrific service. It is the associate's ability to interact with the customer and provide friendly, speedy service that matters most. Something a machine just can't do.

In summary

The robots are coming. Analytics and AI will transform every sector of industry. Every business will need to find the optimal pairing of human and machine. And we will all need to learn to work for, and alongside, machines.

Leaders will need to learn how to examine each business process and understand the best way to split tasks intelligently between humans and algorithms. They will need to build high-functioning teams that include humans, robots, and algorithms. Smart leaders will resist the temptation to blindly try to replace labor with digital intelligence and will instead find ways to build strong partnerships between the humans and non-humans in their organizations.

If you need help to build a strategic plan for the future of human and machine partnerships in your organization please contact me at www.baldfuturist.com to talk about doing a futurecasting workshop.

If you found this article helpful, please share it. And as always, I welcome your comments!

Your new manager will be an algorithm

It sounds like a line from a science fiction novel, but many of us are already managed by algorithms, at least for part of our days. In the future, most of us will be managed by algorithms and the vast majority of us will collaborate daily with intelligent technologies including robots, autonomous machines and algorithms.

Algorithms for task management

Many workers at UPS are already managed by algorithms. It is an algorithm that tells the humans the optimal way to pack the back of the delivery truck with packages. The algorithm essentially plays a game of "temporal Tetris" with the parcels and packs them to optimize for space and for the planned delivery route--packages that are delivered first are towards the front, packages for the end of the route are placed at the back.
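
Here is the route-aware kernel of that idea in a few lines of Python. Real load planning is a hard 3D bin-packing problem; this sketch, with an invented manifest, captures only the "last stop loads first" ordering.

```python
# Invented manifest: stop number on the delivery route -> package IDs.
route_manifest = {1: ["pkg-101", "pkg-102"], 2: ["pkg-201"], 3: ["pkg-301", "pkg-302"]}

def loading_order(manifest):
    """Load end-of-route packages first so they end up at the back of the
    truck, leaving the first deliveries up front where the driver can reach them."""
    order = []
    for stop in sorted(manifest, reverse=True):
        order.extend(manifest[stop])
    return order

print(loading_order(route_manifest))
# ['pkg-301', 'pkg-302', 'pkg-201', 'pkg-101', 'pkg-102']
```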


Load optimization software has been used for many years to ensure optimal space usage and balance in shipping containers, trucks, pallets, and air freight. Logistics companies use software like Logen Solutions' CubeMaster and Cube-IQ from MagicLogic. Similar algorithms will help us all in the future.

Algorithms also tell UPS and Fedex drivers the best route to take to minimize the length of the delivery circuit and reduce fuel consumption. They go way beyond typical route-planning and optimize the route to reduce the total number of left-hand turns needed (in countries that drive on the right-hand side of the road) to make life easier for the drivers and to save time.
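
One simple way to think about that optimization: treat each left turn as extra cost when comparing candidate routes. The routes and numbers below are invented; real routing engines bake the penalty into the graph search itself.

```python
# Invented candidate routes: total distance (miles) and left turns required.
candidates = [
    {"name": "shortest", "miles": 41.2, "left_turns": 14},
    {"name": "loop",     "miles": 43.0, "left_turns": 3},
]

LEFT_TURN_PENALTY = 0.35  # assumed cost per left turn: idle time + fuel + risk

def route_cost(route):
    return route["miles"] + LEFT_TURN_PENALTY * route["left_turns"]

best = min(candidates, key=route_cost)
print(best["name"])  # "loop": slightly longer, but far fewer left turns
```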

Algorithm managing humans in retail

Algorithms also help to manage people in retail stores. One of the key things a store manager does is ensure that the store is "in compliance". What that means is that all the things are on the shelves or racks where they are supposed to be. Shoppers are notorious for picking up things (a fluffy grey sweater, a nice piece of beef, or a carton of orange juice) and then changing their minds a few minutes later. Rather than take the item back to where they got it, they just dump it off wherever they are. So meat ends up in the cheese section, and fluffy grey sweaters end up with the rose-print swimsuits. People working in stores are forever roaming the floor looking for items that aren't where they are supposed to be and returning them to their proper location, because customers buy more when the shop looks nice and tidy.

Stores that have invested in RFID (radio frequency identification) technology and electronically tagged all their inventory can now know where every item in their store is located. RFID sensors in the ceiling can "see" the location of every item. Algorithms quickly learn where items are supposed to be and spot when items are left where they shouldn't be. This information is then passed to task management software platforms that mobilize a human and give them a list of items they need to relocate. That list might be communicated via an app on a phone, on a tablet, or on a wearable device, either on the wrist or in the ear.
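
At its heart this is a diff between two maps: where items should be, and where the sensors say they are. A minimal sketch, with invented SKUs and section names:

```python
# Planogram: where each RFID-tagged item is supposed to live.
planogram = {"sku-sweater-grey": "womens-knitwear", "sku-oj-1l": "chilled-juice"}

# Latest reads from the ceiling sensors: where each item actually is.
observed = {"sku-sweater-grey": "swimwear", "sku-oj-1l": "chilled-juice"}

def compliance_tasks(planogram, observed):
    """Compare expected vs observed locations and emit relocation tasks."""
    return [
        {"item": sku, "found_in": where, "return_to": planogram[sku]}
        for sku, where in observed.items()
        if planogram.get(sku) and planogram[sku] != where
    ]

for task in compliance_tasks(planogram, observed):
    print(f"Move {task['item']} from {task['found_in']} to {task['return_to']}")
```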

Detecting and acting upon business events

It's all about detecting business events and then taking appropriate action. In the retail case, the business event was non-compliance of a piece of inventory. The action was telling a human to move the item back to where it should be. Simple. But there may be much more complex business events that you might want to look out for. And the action you might take may be something done by a human being, by a machine (e.g. a robot), or by another algorithm.

The more sophisticated your ability to sense what is happening in the real world, the more information algorithms have to enable them to understand what is happening in a business. If algorithms can understand the world better they can be used to spot complex business events that need to be acted upon, and then trigger an appropriate business process to respond to that event. Customers can be better served. Issues can be speedily resolved. And profits can be boosted.

Cameras can be excellent sensors, especially once machine vision technology is added into the mix. Cameras built into shelves in a grocery store can look across the aisle and watch for compliance issues, understand how long shoppers dwell as they review products and advertising, and a lot more. Cameras at train stations can spot suspicious, unattended packages left on crowded concourses.

Cameras that can also sense depth make even better sensors. If you could mount depth-sensing cameras above tables in a restaurant you could look for important business events like, "drink levels on table two are getting low", or "customer raised hand for over two seconds on table seven", likely indicating they are interested in having a server swing by. In both cases, a message could be passed to the appropriate member of the wait staff letting them know they should probably stop by those two tables soon if they want to optimize their tips.
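
The "hand raised for over two seconds" rule reduces to a small piece of state tracking over the camera's observations. A sketch, with an invented per-second frame format:

```python
# Hypothetical per-second observations from a depth camera over table seven.
frames = [
    {"t": 0, "hand_raised": False},
    {"t": 1, "hand_raised": True},
    {"t": 2, "hand_raised": True},
    {"t": 3, "hand_raised": True},
]

def detect_service_request(frames, hold_seconds=2):
    """Fire an event when a hand stays raised for at least hold_seconds."""
    raised_since = None
    for frame in frames:
        if frame["hand_raised"]:
            if raised_since is None:
                raised_since = frame["t"]
            elif frame["t"] - raised_since >= hold_seconds:
                return {"event": "service_requested", "table": 7, "at": frame["t"]}
        else:
            raised_since = None
    return None

print(detect_service_request(frames))  # fires at t=3
```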

Wifi hotspots in stores can be used to sniff the MAC address of every wifi-enabled smartphone that walks into the store. If the wifi system spots the MAC address for the phone of a known big spender, store associates can be alerted to look out for the customer, be prompted with their name, and then give them the appropriate level of sucking up.
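
The recognition step is little more than a dictionary lookup against a watchlist. A sketch (the MAC address and profile fields are, of course, made up):

```python
# Hypothetical watchlist built from the loyalty database: MAC -> customer profile.
big_spenders = {"a4:5e:60:xx:xx:01": {"name": "Ms. Alvarez", "last_purchase": "cashmere coat"}}

def on_probe_request(mac: str):
    """Called whenever the store wifi sees a phone probe for networks."""
    profile = big_spenders.get(mac)
    if profile:
        # Push an alert to the nearest associate's earpiece or handheld.
        print(f"VIP in store: {profile['name']} (last bought: {profile['last_purchase']})")

on_probe_request("a4:5e:60:xx:xx:01")
```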

As the cost of sensing plummets, expect more and more businesses to embrace the Internet of Things as a way to sense important business events and then act upon them using the humans in the system. Ultimately businesses will use a combination of ways to act upon business events, using a team of humans, autonomous machines (robots), and algorithms working side-by-side to respond to sensed business events.

We will all be managed and coached by algorithms

None of us will escape this shift towards being managed by algorithms. Sophisticated analytics software will not only help us predict what might happen next, but also give guidance on what we should do. Oncologists are already getting an automated "second opinion" from expert systems built on IBM's Watson technology. Lawyers will work alongside algorithmic paralegals. Marketers already use predictive and prescriptive analytics to tell them who to target with their marketing campaigns and how to optimize response rates.

For most of us, the way algorithms manage us will be through our personal assistants. Siri, Google Now, and Cortana are toys today compared to where personal assistant technology will be in the next decade. As each of us gains a digital personal assistant (DPA) able to manage our calendars, advise us on our personal finances, book our travel, remind us to order flowers for Mother's Day, and much more, we will find ourselves under the spell of algorithms. As these personal assistants become proactive rather than reactive, we will find that our lives slowly become managed by algorithms, helping us to optimize the path of our lives in whatever ways we choose. They will help us be more productive, prompt us to do things that aid personal growth (based on our expressed ambitions) and even start to supplement our memories.

Our DPAs will feel both like helpers and also like coaches. "Steve, don't forget to pick up the dry cleaning". "Steve, you need to respond to your new client's email by 1pm today". "Steve, would you like me to order some flowers for your parents' anniversary today? You know they like peonies."

DPAs will also help to supplement our memories, particularly when we run into people we haven't seen in a while: "Steve, this is Tony, you met him and his wife Ann at a party last summer. Be nice, he has a 74% chance of being a helpful business connection". All whispered privately into the ear at the appropriate moment.

Our DPAs will also talk to our health-sensing network and our wearables to understand our mood and offer us emotional support: "Steve, I know you're stressed about this meeting, but just focus on the goal and you'll do great." Or "Steve, you seem tired, the nearest Starbucks is on the right two blocks up. Let's pull over. I've pre-ordered you a cup of Earl Grey tea with 2%."

Merging with technology

We have been on a path towards merging with our technology for some time. Before mobile phones, most people knew the phone numbers of all their closest friends. Now we have out-sourced that job to our phones. We also outsource our sense of direction to Google Maps and have embraced the communications capabilities of phones as a way to augment our natural communication capabilities. I don't think it's an accident that the leading mobile OS is called "Android". When we need to communicate with somebody that's nearby we talk and move our hands. When we need to talk to somebody that's remote, we use our phones. It's almost like they are a new "organ" for remote communication.

As algorithms and robots become more and more useful to us, and we deploy them widely throughout both our work and personal lives, we will outsource more and more to technology. We will partner with technology to get things done, to communicate with others, to monitor our wellness, to learn and grow. And we will become more and more reliant upon them.

Imagine two people talking to each other twenty years from now. Each person is getting advice whispered into their ears from their DPAs, advising them what to say next to help them overcome any social anxiety they might have and to get whatever result they are seeking. It could be one person trying to get that big order from the other that will make them a hero back at the office. It could be a person trying to secure a first date. Or perhaps the two people are just trying to make good conversation, but getting help to go beyond small talk. In this scenario we might examine the old philosophical question, "Who am I?", in a totally new way. Who is really calling the shots? How much of who I am is now "me", and how much is the augmentation I have chosen to build around me?

Next time we will dive into what I call "The new diversity". As algorithms and robots become a vital part of our work environment, inclusion no longer is just about building high-functioning teams by embracing diversity amongst humans. The most effective work teams of the future will combine humans and machines (both robots and algorithms) to deliver the best business results. But that's a subject for next week.

For now, what do you think about the idea of being managed by an algorithm? Are you already managed, in part, by an algorithm? If you examine your current job how could you see it made easier with a software assistant of some kind? And how does that make you feel?

A sporting future

Many people love sport. While not everyone is a rabid sports fan most people enjoy at least one sporting event. For me, my weaknesses are the World Cup and Wimbledon. 

Why are we talking about sports in a blog about the future? It turns out sport is a really fun frame for us to think about how technology will change our lives. So buckle up, and let's take a look at the future of sport. 

We will explore how we will experience and enjoy sport in the future, how technology will continue to augment sport, and how technology is creating entirely new sports. Finally we will review the prospect of sport played between machines. Before long we may see machines playing sports against each other using humans as playing pieces on a field. To understand how, read on.

Experiencing/watching sport

With the mobile revolution now long upon us we are able to watch sport anytime, anywhere. High-speed networks and high-resolution screens put on-demand TV right in our pockets. We can stream sports events from all over the planet on a whim. Social media lets people talk about their favorite sports, their favorite teams, and to relive moments with others. All cool. So where does viewing sport go next? Let's start with the obvious stuff.

Virtual Reality

We are still watching sports on flat, two-dimensional screens, whether those be giant TVs or the tiny screens on our watches. For this summer's Olympic Games, NBC experimented by making over 100 hours of content available to view in Virtual Reality. Check out their site for details. Watching sports in VR gives you the feeling of sitting exactly where the 360-degree camera is located at the event.

Augmented Reality

Microsoft wants to take your sports viewing experience even further. They envision highly social new ways to experience sport in the comfort of your living room. If you haven't seen it, be sure to check out the vision video they created in partnership with the NFL:

Microsoft shows us what the future of watching the NFL might be like if we all had a HoloLens in our home.

New volumetric scanning technology could take virtual and augmented reality watching of sports to a totally new level. Replay Technologies, acquired by Intel earlier this year, has already changed the way people experience that old TV sporting staple, the action replay.

Replay combines the feeds of thirty or more high-definition 4K cameras (placed strategically all around a stadium) inside a powerful computer to capture live action in full 3D, essentially turning real life into a video game representation of such high quality that it looks totally lifelike. Once you have captured the world in this way you can fly a virtual camera through a scene and place it pretty much anywhere you want to.

If you haven't seen this technology yet, you can view the video here to see what I'm talking about. It's been around for a while. NBC sports used this technology for the 2012 Olympics to compare the acrobatic performances of gymnasts in sporting events. Check out that video here.

Now imagine where this goes next. You can now capture reality in three dimensions, and have complete freedom of movement to place a camera anywhere in the action and point it in any direction. Combine that capability with virtual reality technology and you have something very exciting. You could now place yourself anywhere in the action. Want to stand on the goal line during a penalty shoot-out? No problem. Want to see through the eyes of your favorite player? Can do. Want to be the ball? Yep. Want to stand on the track as the racing cars thunder around and through you? That too.

As volumetric capture technology improves and VR becomes broadly deployed we will all get to choose exactly how we each want to experience our favorite events.

Quantified sports with IOT

Intel, who are heavily focused on the future of sport, did some interesting work with ESPN at the Winter X Games. Extreme sports are cool, but unless you do them yourself it's not always easy to tell just how amazing an athlete's performance truly is. By putting sensors on everything from BMX bikes to snowboards, the X Games was able to quantify exactly how high, how far, or how gnarly every jump or stunt really was. Accurately measuring athletic achievement really enhances the viewing experience for extreme sports. Check out the video below to really get a feel for what I mean here:

Augmenting sport

Sensors are a great way for viewers to gain a better understanding of what is actually going on when a sport is being played. But they can also help players, coaches and referees gain better insight too. Wearable sensors enable coaches to understand how their players are performing under the pressure of competition. Players could get feedback on their performance and automated coaching on how to improve it. Referees could get the help they need to make better decisions.

Sensors have the ability to remove most, if not all, of the subjectivity from sport. In the process, they could reduce the blood pressure of avid fans that sometimes feel cheated by a ref that makes a bad call.

It started with chalk dust. Back in 1974 we got the very first electronic line judge. It used conductive tape to sense the landing of tennis balls on indoor tennis courts, assisting the human line judge in determining whether a ball was in or out. ("You cannot be serious" - John McEnroe). The same system was also used to call "foot faults", by linking a sensor on the baseline with a microphone that listened for the sound of the serve to determine whether the player's foot crossed the line before they struck the ball. Electronic officiating didn't become widely deployed until the "Hawk-Eye" system came in, using video cameras instead of wires to sense violations. In 2012, the International Football Association Board approved the use of goal-line technology to determine whether a ball rattling around inside a goal ever crossed the line or not.

As sensors and analytics get ever-more sophisticated, we could soon find ways to objectively measure if a player has committed a foul in soccer. Algorithms could analyze and determine if the player went for the ball, and if they were successful in doing so. The referee, even with the help of their assistants, still can't always see what's happening on a soccer pitch. Sensors and analytics could even be trained to look for diving and faking, removing that scourge from the beautiful game once and for all.

As well as changing the way we experience, enjoy and measure sports, technology is also creating entirely new sports, and new sporting mashups that span the physical and virtual worlds.

NEW TECH-ENABLED SPORTS

Real-time virtual racing

As well as being good for accurately measuring the action in sports, sensors can be used to enable viewers to participate virtually in an event. Live, in real time.

Formula E, the all-electric category of motor racing, fully instruments all of its cars. Every car is bristling with sensors. Formula E knows exactly where every car is on the track during every race, in real time. Alejandro Agag, CEO of Formula E, was seated next to me while we were speaking on a panel at CES a couple of years ago. He told me he has plans to ultimately enable video game players to join live races, and go head-to-head with real drivers, from the comfort of their living rooms. If and when Formula E (and other sporting events) make this real-time data stream available through an API to game developers, it will enable the creation of a range of games where players can compete virtually in live events.

Formula E has already started to move in this direction by embracing the gaming community. They partnered with the creators of the Forza Motorsport series of games to host video game competitions that pit some of the world's best gamers against real Formula E drivers. The real drivers are still playing video games alongside the gamers. The next step is real racers racing real cars virtually against gamers racing on consoles.

Beyond that, there may be another blend of physical and virtual to look forward to: remotely driven or robotically driven cars. With motor racing deaths a sad constant in this dangerous, high-speed sport, might we see drivers racing remotely? And how long will it be before we see human drivers racing against cars driven by artificial intelligence?

Drone racing

Technology is giving us new types of sport as people find new things to race, and new ways to compete with each other. Drone racing has emerged as a popular new activity. Cameras connected to goggles worn by the racer give them a first person view of what the drone is 'seeing'. The video below captures a little of what goes on.

Robot battles

For a number of years, the BBC has been producing a show called Robot Wars. With over 160 episodes made since 1998, Robot Wars pits robot designers against each other as their creations battle it out in the ring with hammers, chainsaws, buzz saws and spikes. The winner is the robot that crushes, bashes, slashes or burns its opponent to destruction. The robots also need to avoid hazards on the battlefield in the form of pits, fire, spikes and catapults. It's like a gladiatorial battle, with robots. 

As artificial intelligence improves, expect these robots to no longer need remote control by their designers to triumph on the battlefield. Hugh Jackman's movie, Real Steel, is probably not that far away after all.

Machine vs machine sport

In the examples above, sport is often being played between humans using machines as a proxy: remotely controlled drones, robots, and perhaps even race cars. But let's flip that around: what about when machines compete against each other, using humans as playing pieces?

For years, analytics has been used to help teams scout for new players. Software reviews the sporting records and capabilities of thousands of up-and-coming players and figures out the optimal mix of talent for a team. Hollywood made a movie, Moneyball, back in 2011 that was all about how analytics were used to improve the fortunes of the Oakland A's baseball team.

The real future of analytics in sports is in real-time predictive and prescriptive analytics: software that analyzes a game as it is being played, correlates it against data it has on thousands of previous games, and makes both predictions and suggestions on what the coach of a team should do next. Consider (American) football. By spotting weaknesses in the opposing team's defensive strategy, the weakness of a particular player, or choosing the statistically most likely response to a play based on previous history, coaches might get a serious edge over their competition. 

Will future coaches be coached by computers whispering plays into their ears?

As wearables, AI, analytics, and voice capabilities get better and better over the next few years, we are looking at a time when coaches will themselves have a coach. An artificial intelligence coach that constantly runs possible scenarios, weighs the odds, and guides the coach on plays. This AI coach will be able to converse with the human coach and they will be able to collaborate on decisions.
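
In skeleton form, that in-ear coach is a lookup over historical outcomes. The situations, play names, and yardage numbers below are all invented to show the shape of the idea:

```python
# Hypothetical history: (down, yards_to_go, defensive_look) -> average gain per play call.
history = {
    (3, 4, "blitz"): {"screen pass": 6.1, "draw": 3.2, "deep post": 2.8},
    (3, 4, "cover-2"): {"screen pass": 3.0, "draw": 4.4, "deep post": 7.5},
}

def suggest_play(down, to_go, defensive_look):
    """Recommend the play with the best average gain in similar past situations."""
    outcomes = history.get((down, to_go, defensive_look))
    if not outcomes:
        return None
    play, expected_yards = max(outcomes.items(), key=lambda kv: kv[1])
    return f"Call the {play} (historical average: {expected_yards} yards)"

print(suggest_play(3, 4, "blitz"))  # suggests the screen pass
```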

If this provides competitive advantage (and it will) then before we know it every coach of every major team will have the same technology whispering advice into their ears throughout each game. As the accuracy of predictions improves and coaches learn to trust their "in-ear coach", an interesting question emerges. Who is really playing the game now...the humans, or the machine? 

Think about that for a moment. What you now have is two powerful computers playing a game with each other using human playing pieces on their game board (which in this case is a football field). In Robot Wars and drone racing, humans safely compete with dispensable robots. In this scenario we are at the opposite end of the scale, with computers (safely) competing using humans that get hurt as they play.

Before too long it won't just be football players that are being controlled by computers. Soon, almost every one of us will be managed by algorithms. But we will save that particular topic until my next post.

What do you think about this post? What are you excited about when you think about where sports might go in the future? I'd love to hear from you.

The next battleground: The 4th Era of Personal Computing

The smartphone wars are over. Apple and Google won. Intel, Microsoft, Blackberry and others lost. But this is old, old news. The smart money has already moved on. The next platform battles are already well underway.

Understanding where the new battlegrounds have moved to is vital for anybody in the tech sphere. Even if you're not in tech, knowing where the tech world is going next is key to future success. It should inform your business strategy, your partnership strategy, and probably your personal financial investment strategy. 

The fourth era of personal computing

I believe we are moving into the fourth era of personal computing. The first era was characterized by the emergence of the PC. The second by the web and the browser, and the third by mobile and apps.

The fourth era of computing is a potent combination of technologies and an alphabet soup of computing buzzwords:

  • IOT (Internet of Things, including wearables)
  • AR (augmented reality)
  • Natural interfaces (voice, gesture, and expressions)
  • 5G networking
  • PAs (personal assistants)
  • AI (artificial intelligence)
  • CaaS (conversation as a service)
  • Social networks (or the "life platforms" that they are evolving into)

The fourth personal computing platform will be a combination of IOT, wearable and AR-based clients using speech and gesture, connected over 4G/5G networks to PA, CaaS and social networking platforms that draw upon a new class of cloud-based AI to deliver highly personalized access to information and services.

To understand what you can expect from this fourth era, keep reading. First, a quick explainer on what it takes for the fourth era to happen, which helps us figure out the timing we all need to plan for.

MOORE'S LAW + BELL'S LAW + THE DATA SPIRAL

This fourth era of computing comes about as a natural consequence of Moore's Law playing out, and also of Bell's Law (a new class of computers emerges about every decade) taking its course. Computing devices at the edge are getting smaller and cheaper, making IOT and wearable applications viable. In a seeming paradox, computers are also getting exponentially bigger and more capable in the cloud, powering breakthroughs in pattern recognition, cognitive computing, deep learning and all manner of artificial intelligence.

In my previous post, I described the importance of the data spiral: the idea that each new data set is collected using value created from a previously-collected, lesser data set. A service is built that operates on an existing data set, and that service is designed to collect a new data set, which can then be used to deliver new value and gather yet another new data set. A spiral of value creation. Think of it as the Moore's Law of data, if you like.

The data spiral is essential to evolving the next generations of artificial intelligence. Deep learning algorithms need data to munch on to help them learn about the world.

If you combine the power of the data spiral with the power of Moore's Law and Bell's Law, it leads to a new era of computing, what I am choosing to call the fourth era of personal computing. It's starting in limited ways now, really takes off around 2020, and the full platform will be in place by 2025. Think about it: you got your first glimpse of an iPhone less than 10 years ago, and look how the mobile revolution has taken over the world.

Life in the fourth era of personal computing

So what does the fourth era of personal computing look like? It's a world of smart objects, smart spaces, voice control, augmented reality, and artificial intelligence. Screens largely disappear, possibly including your smartphone, which today has become the indispensable remote control for our lives. You might still use a smartphone to perform some tasks in much the same way we still use PCs to achieve some tasks. For example, I'm writing this post on a MacBook, not my iPhone. But by 2025, for most daily tasks, I think we will be ditching the smartphone and will access most information and services using our voices, gestures, and with the help of personal assistant services, chatbots, messaging services, and other AI-based interfaces. Social networks will blossom into "life platforms" able to connect you to people, brands, and services. Sophisticated chatbots will front many of these interactions and help you do everything from buying stamps to organizing a birthday party.

Want to book a weekend away? You will tell your personal assistant what you want in plain language: "Find me two flights to Austin the weekend of my birthday that get us home by 5 o'clock on Sunday and recommend a hotel near 6th Street".

Your PA will understand all the complexity embedded in your request and then find you the best deals from whatever flight and hotel aggregation services you trust. It will do all this using an understanding of your preferences and needs that it has built over time through working with you, much the same way a good human PA might do.
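
Under the hood, the assistant's first job is to turn that sentence into a structured request it can shop around. A sketch of what that might look like; every field name and the stub services are invented:

```python
# What a PA's language-understanding layer might extract from the spoken
# request above. All field names are illustrative, not a real API.
parsed_request = {
    "intent": "book_trip",
    "destination": "Austin",
    "depart": "2017-03-10",          # resolved from "the weekend of my birthday"
    "return_by": "2017-03-12T17:00",
    "hotel_near": "6th Street",
    "travelers": 2,
}

class StubAggregator:
    """Stand-in for a flight/hotel service the user has chosen to trust."""
    def __init__(self, name):
        self.name = name
    def search(self, request):
        return f"{self.name}: best match for {request['destination']}"

trusted = [StubAggregator("FlightsRUs"), StubAggregator("HotelFinder")]
print([svc.search(parsed_request) for svc in trusted])
```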

You might talk to your PA using a device in your home (think Amazon Echo today), your smartphone, an in-ear wearable, your connected car, or whatever other Internet-connected microphone you happen to have near you at the time.

Your PA will also help you through your day, offering you reminders, coaching, and support. "Steve, now would be a good time to collect the dry cleaning. Don't forget to pick up milk while you're out." Or for the memory-challenged amongst us (like I am), a personal assistant could whisper into our ears via our smart earbuds: "Steve, this is Gary, you met him last year. He's friends with Buzz and works at Adidas."

Need to do a load of laundry? Tell your washing machine what you want: "These whites are extra dirty so give them a good wash, and finish when I get home from work". The washer figures out to wash the clothes hot, add steam, and do an extra wash cycle for heavy soil. It starts the load an hour and twenty minutes before your expected arrival time. It finishes up right around the time you walk through your front door. How does it know your schedule? Simple, it's friends with your personal assistant and gets an easy consult.
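
The scheduling trick is just subtraction once the washer knows your ETA. A sketch, with an assumed arrival time and cycle length:

```python
from datetime import datetime, timedelta

def plan_start(expected_home: datetime, cycle: timedelta) -> datetime:
    """Start the load so it finishes right as you walk through the door."""
    return expected_home - cycle

# Your PA says you'll be home at 6:00pm; the heavy-soil cycle runs 1h20m.
home = datetime(2016, 11, 4, 18, 0)
print(plan_start(home, timedelta(hours=1, minutes=20)).strftime("%H:%M"))  # 16:40
```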

Microsoft HoloLens demo of a weather object. You might place such an object on your coffee table, on the wall in your hall, on the dash of your car, or anywhere in your environment you'd like.

The fourth era of computing is one filled with virtual objects rather than app interfaces. Why hunt, peck and scroll through a few screens on your phone to find out the weather forecast when you can either just ask your PA, or glance at the virtual weather object you placed on your coffee table?

 

MIT's "Enchanted umbrella" shows how information can be delivered in simple, new and interesting ways - no more weather app needed.

MIT's "Enchanted umbrella" shows how information can be delivered in simple, new and interesting ways - no more weather app needed.

Or better still, just glance at your umbrella by the front door, and if the handle is glowing softly that means it's going to rain and you should take it along with you. Yep, that's a thing. My friend David Rose over at MIT created an "Enchanted Umbrella" that does just that. His book, Enchanted Objects, is well worth the read.
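
The logic inside such an enchanted object is delightfully small. Here's a sketch; the forecast source, the response format, and set_handle_led() are placeholders, not the actual product's design:

```python
def fetch_forecast():
    """Placeholder for a call to a real weather API."""
    return {"rain_chance": 0.7}

def set_handle_led(on: bool):
    """Placeholder for the umbrella's radio link to its handle light."""
    print("handle glowing" if on else "handle dark")

def update_umbrella():
    # Glow softly whenever rain is more likely than not.
    set_handle_led(fetch_forecast()["rain_chance"] > 0.5)

update_umbrella()  # handle glowing
```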

Expect a world filled with smart objects, and to be spending your time in smart spaces that are able to understand and respond to the needs of the people that inhabit them. We will enjoy myriad new ways to interact with information and services. And we will start to form relationships with digital personal assistants that help us through our day.

The fourth era is one of simpler, more natural interfaces, personalized assistance services, virtual objects, chatbots, life platforms, and more. I'll be exploring these ideas in future posts, including a discussion on how we might prefer these technologies to evolve. There are many open questions to be answered. For example, who would a personal assistant ultimately answer to? You, or its provider? 

What do you think about the fourth era? Make sense? How do you see these technologies combining in the future to create an entirely new platform? I'd love to hear your feedback in comments.

The data spiral

The data spiral - a service gathers a data set essential to delivering a newer, higher order service that in turn gathers an even richer data set that continues the upward spiral

Google Earth was great for seeing your house from space. We all had a lot of fun with that. It also turns out to be one of the most important things Google ever did and set them on a path to greatness.

Why mention Google Earth, a 15-year-old technology, in a blog about the future? Because it's a terrific starting point to illustrate the concept of the data spiral. The data spiral is the modern equivalent of Moore's Law, and everyone in business needs to understand it. Because whoever has the data, wins.

Google originally purchased the technology that became Google Earth from a small company called Keyhole Inc. Similarly, Google Maps came from the purchase of a company called Where 2. Boy oh boy, did Google get a bargain. Let me explain.

THE NEW MOORE'S LAW IS THE DATA SPIRAL

The data spiral is a lot like the Moore's Law of old. Engineers have kept Moore's Law going for over half a century by using the chips of the day to power computers that helped them design the chips of tomorrow. These new, faster, cheaper chips were then used to create the next generation after that.

This self-sustaining loop of increasing computing performance benefited us all. New fast chips let us all run ever more complex and demanding software. And that new software then created a vibrant market for faster and faster hardware. And so the world turned.

A new equivalent to Moore's Law has recently emerged. This time it's all about data.

Here's how it works: A software product is created as a way to provide some kind of service, but also to collect and store a new data set. This data set is then used to deliver a new product or service, one that may not have been possible before. This new service in turn gathers an entirely new data set that wasn't possible to gather before. And so on and so on.
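To make the mechanism concrete, here's a toy sketch of the spiral in Python. The service names and the growth multiplier are invented for illustration; the point is the compounding structure, where each generation's service is only possible because of the data the previous one gathered.

```python
# A toy model of the data spiral. Each service is built on the data set
# the previous service collected, and in turn collects a richer one.
# Service names and the 10x multiplier are invented illustrations.
generations = ["maps", "navigation", "live traffic", "self-driving"]

data_set_size = 1.0  # arbitrary starting units of data
for service in generations:
    # The current data set makes this generation of service possible...
    print(f"{service}: built on {data_set_size:,.0f} units of data")
    # ...and operating the service gathers a much richer data set.
    data_set_size *= 10
```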

GOOGLE AND THE DATA SPIRAL

Google's Waze service aggregates the location and speed data of drivers using the service to build an overall picture of current traffic flow

Google does this all the time. Services like Google Maps and Google Earth rely on a detailed data set of global geography and feature maps. Google uses these data sets to deliver services that let them gather data about you, and it then uses that data to make money and to develop even better new services.

For example, Google logs all the searches you do on Google Maps, tracks your location, and knows your speed of movement. By studying where you are most evenings, Google figures out where you live. Similarly, it has a pretty good idea where you work. It even knows where you buy your groceries and where your kids go to school. It builds a detailed picture of you as a consumer (which is another juicy data set) that it can then sell to advertisers in the form of advertising services. Google also aggregates all that location and speed data to build a traffic data set, which is how navigation services like Google's Waze work.

Google wouldn't be able to do any of this without the underlying data set of maps. Google knows that if they invest in creating the right data sets (for example, consider all the effort they put into capturing imagery for Google Street View) they will be able to use them to gather even more valuable data sets in the future.

UPS and FedEx optimize their delivery routes (saving time and fuel) using data from navigation services. UPS estimates that its new ORION routing system will save 10 million gallons of fuel and reduce the distance its drivers travel by 100 million miles annually by the end of the year.

Uber and Lyft rely on navigation and Google traffic services to create their value (and to gather passenger data). Google's self-driving test cars use existing navigation data to drive around and gather even more detailed data about streets and environments. Tesla's cars, bristling with modern sensors, are building highly detailed maps of the road network as their drivers zoom around the streets in them. All of that data is fed back to improve Tesla's autonomous driving systems.

DATA MAKES MORE DATA

Google's Google Now personal assistant service builds on Google's understanding of your habits to anticipate what you will do next. That allows them to make smart recommendations and target you with offers in the moment. And as your comfort with Google Now grows and you see how valuable it can be, you become increasingly likely to offer up even more personal information and give Google Now access to your travel plans, your email inbox, and more. The data spiral accelerates, and even more data is gathered, processed, and stored.

YouTube is another huge source of valuable data for Google. Teaching a computer to "see" is vital to the future of robotics and autonomous machines. By creating a service that enables hundreds of millions of people to share billions of hours of video, Google has built up a gigantic video data set. This treasure trove of videos enables them to build visual recognition algorithms that do an excellent job of understanding scenes, objects, and context. The video data is used as a training set for a new class of deep neural networks able not just to understand what's in a scene, but to assess it on aesthetic grounds. Google Research has described how they use deep learning techniques to find optimal thumbnails in YouTube videos.
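As a rough illustration of how frame scoring for thumbnails might work (this is my sketch, not Google's code; the scoring head below is an untrained stand-in for a network that would really be trained on human-rated thumbnail examples):

```python
# A sketch of score-and-pick thumbnail selection. Assumes torchvision is
# installed; the quality-score head is an untrained placeholder.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone whose single-output head we read as a quality score.
scorer = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
scorer.fc = torch.nn.Linear(scorer.fc.in_features, 1)
scorer.eval()

def frame_score(frame: Image.Image) -> float:
    """Higher scores would mean better-looking frames, once trained."""
    with torch.no_grad():
        return scorer(preprocess(frame).unsqueeze(0)).item()

def pick_thumbnail(sampled_frames: list[Image.Image]) -> Image.Image:
    # Score every sampled frame from the video and keep the best one.
    return max(sampled_frames, key=frame_score)
```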

Understanding visual, audio, and other sensory input is a key capability for the future of computers. Expect that even more impressive services (that will of course gather even more data) will be built on top of this ever-improving recognition capability. 

THE DATA SPIRAL FUELS ARTIFICIAL INTELLIGENCE BREAKTHROUGHS

Visual recognition is just one component of the research going on in the field of artificial intelligence. The data spiral is vital to the development of these new artificial intelligence platforms. More data sets lead to more insights, and more data.

The new Moore's Law is the data spiral. The companies that embrace the data spiral in the coming decades will do just as well as those that rode Moore's Law through the 80s, 90s and 2000s. Invest and plan accordingly.

Whoever has the data, wins.

The world talks back

Today, I spoke to a green, plastic dinosaur. And he talked back.

In this post I'll discuss what this tells us about the future, and what the most important question will be for designers in the coming decade.

More and more of the objects in our lives are going to gain that ability to converse with us over the next decade. For most people, this phenomenon started with a bit of a whimper as we were exposed to smartphone services like Siri, Google Now, and Cortana. Over time these mostly disappointing voice-based personal assistants have got a bit better, but they still aren't good enough to make us all abandon apps and the keyboard, and their integration with other apps and services is only just beginning. Their abilities don't in any way rival those of a real human assistant. But that will change, and fairly fast. Personal assistants will be one of the next big platform battles.

A future of voice and personal assistants

Voice-based AI is coming to a device near you. Soon. But it won't just be with devices; expect voice to be everywhere. Google's personal assistant, Amazon Alexa, Microsoft Cortana, and newcomer Viv are about to battle it out for our attention. You will be able to access these voice services through a wide range of new devices. Amazon has taken an early lead with the Amazon Echo family of devices and its Amazon Fire TV. But expect a massive wave of new products in the Christmas 2016 timeframe.

Over the last few years I've been watching the following trends play out and head on a collision course towards something pretty cool:

  1. Smart, connected objects
  2. "Conversation as a service" and chatbots
  3. AI
  4. Personal assistants

Cheap computing and connectivity, combined with exponential improvements in AI, chatbots, and personal assistant (PA) technologies, are going to give many objects in our world the ability to hold meaningful conversations with us. When this capability gets good enough (and we are getting pretty close) we can look forward to much simpler interfaces on everything from our washing machines to our cars. We will also each have a digital personal assistant that we can reach almost anywhere, and through any device, to help us through our days. Computing is going to get even more personal. Especially when those voice services move into our ears with a new breed of in-ear wearables. But that's not what this post is all about. This is a story about the future of smart objects and AI.

You can glimpse the future in a piece of green plastic

Nowhere is this technology future perhaps more starkly on display than in the package that arrived in my mailbox recently. It was my birthday, so when I opened the box that the postman had delivered I first wondered if it was to be some kind of birthday present. Turns out it was a birthday present from myself. One that I sent from the past to my future present self. About four months ago I went online and ordered myself a Cognitoys Dino as part of a Kickstarter campaign. If you haven't seen one of these yet, let me enlighten you. And then we'll get into why this little piece of green plastic is so important.

Meet Jimmy, my dino. I say "my dino" because I named him. It was as simple as pressing the button on his tummy and saying, "Your name is Jimmy".

Setup was a breeze. Once I'd popped in the batteries, the dinosaur told me in his muppet-like voice that I should download the Cognitoys app to my phone. The app let me enter my name, connect to the wifi hotspot in the dinosaur, and then connect him to my own home wifi network. I was up and running in less than two minutes.

From what I could tell in my very preliminary testing, Cognitoys have done a very nice job of building what could be a wonderful toy for a child. I found myself wishing that I could wipe away four decades or so and enjoy him as a 6-year-old kid. My new friend Jimmy seems happy to tell me jokes, answer my questions, engage me in conversation, and teach me things along the way. The experience has been cleverly designed to teach kids about the world by engaging them in stories that trigger their imagination. They probably don't even realize that they're learning. To them, they're just having a conversation with their friend and playing a game.

So, why is this little piece of plastic such a big deal?

Let's be honest, the physical piece of what Cognitoys has created is nice, but not really that impressive. When you boil it down, what you are buying is a piece of moulded plastic, a button, a light, a microphone, and a speaker, all connected to a cheap little computer that can connect to wifi. But that's not what you're really buying here. That's only the physical piece of the value you are purchasing. The true value is in the simple device's connection to an IBM Watson supercomputer in the cloud. That's the bit that enables a child to hold a conversation with it, to explore the world through story and imagination, and to start to build bonds with a new talking friend.
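In software terms, the dinosaur is a thin client. Here's a minimal sketch of that "dumb device, smart cloud" pattern; the endpoint URL and payload format are hypothetical stand-ins, since Cognitoys hasn't published its actual API:

```python
# A minimal sketch of the thin-client pattern: everything below the HTTP
# call is cheap enough to run on toy-grade hardware. The endpoint and
# payload format are hypothetical, not Cognitoys' real API.
import requests

CLOUD_ENDPOINT = "https://example.com/v1/converse"  # hypothetical service

def handle_button_press(recorded_audio_wav: bytes) -> bytes:
    """Ship the child's recorded question to the cloud AI and return
    synthesized speech to play back through the toy's speaker."""
    response = requests.post(
        CLOUD_ENDPOINT,
        data=recorded_audio_wav,
        headers={"Content-Type": "audio/wav"},
        timeout=10,
    )
    response.raise_for_status()
    return response.content
```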

A small, green dinosaur signals much about our future

All product designers should take note of this simple little piece of green plastic. It demonstrates a new choice that every designer will need to make in the near future for all new projects: What portion of the value and experience that I want to create exists in the physical domain, and what portion in the digital domain?

My contention is that almost every product will become a "smart" product. The plummeting cost of computing and connectivity, coupled with the high value of making an object smart and connected, means that most "things" in our world will cease to be dumb. Physical products will become portals through which digital value can be delivered. In this case, that value is a conversation with an AI that has been tuned to teach kids about the world through the use of story, humour, games, and interaction. But the value could come in many different forms, as I'll discuss shortly.

The electronics element of the Cognitoys Dinosaur is pretty simple and inexpensive. Building a conversational service on top of IBM Watson that's tuned for kids took a lot to develop, I'm sure. But this piece of the product is incredibly scalable. So if Cognitoys sell a lot of these toys, and I hope they do, they will make bank. Because the margin on those plastic toys must be pretty healthy, and the AI that brings the dinosaur to life scales at very low marginal cost. 

The Proverbial Wallet, created at the MIT Media Lab by David Rose and his team. The wallet has a variable strength hinge, connected wirelessly to your bank account. As the funds in your account dwindle, your wallet literally becomes harder to open. A clever solution to help people better understand their spending in a very visceral way.

The connected service element of a product need not be as complex as the conversational service deployed in this dinosaur, however. I've spoken before about the experimental wallet that was produced at MIT. Most people have little or no understanding of their current bank balance. MIT's smart, connected wallet becomes harder to open as the balance in your bank account ebbs, making it harder for someone to unknowingly overspend. 
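The logic involved can be almost trivially simple, which is rather the point. Here's my guess at the wallet's core mapping (an illustrative sketch, not MIT's actual firmware):

```python
# An illustrative sketch of the Proverbial Wallet's core idea: map the
# remaining bank balance to hinge resistance. My guess, not MIT's code.
def balance_to_stiffness(balance: float, monthly_budget: float = 2000.0) -> float:
    """Return hinge resistance in [0, 1]: 0 opens freely, 1 fights back."""
    fraction_left = max(0.0, min(balance / monthly_budget, 1.0))
    return 1.0 - fraction_left  # less money -> stiffer hinge

# e.g. balance_to_stiffness(500.0) -> 0.75: the wallet puts up a real fight
```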

So what does all this mean?

We are starting to see glimpses of the next era of personal computing, one characterized by voice, IoT, personal assistants, chatbots, AI, and ultimately augmented reality. The combination of these elements will ultimately replace today's mobile/app compute model to become the predominant way people interact with information and services. The race is on to develop the new platforms and technology behind this new infrastructure. More on that in my next post.

Within a decade, smartphones, those little slabs of computing goodness that we hold so dearly, will follow the PC and the mainframe into a world of ever-diminishing relevance to daily life. Sure, they will still be around the way the PC is still around, but the new way of getting things done in daily life will involve smartphones and apps a lot less than it does today.

Prepare for a world where wearables, smart spaces and smart objects are able to have complex conversations with you and help you through your day. Just look at what Amazon is already doing with their family of Echo products, and the Alexa voice platform.

Jimmy the Dino is the beginning of something marvelous.

The future of social media

Social media platforms as we know them today are at the starting point of an exciting evolution that will lead them to become ever more valuable to us in our day-to-day lives. They will develop to be much richer experiences than we see today, and they will evolve to become life platforms, reaching into many new areas of our lives, well beyond the social.

Social media platforms have already become the address book for the planet, making it easy to find billions of people you might want to connect with. These platforms are already evolving to enable you to connect not just to other people, but also to brands, services and other institutions. So rather than call an airline, or go to their website to make a booking, you will communicate with them through online platforms. And most likely you won’t be communicating with a human representative of the company, but an algorithm. Whether you call this technology A.I., chatbots, or conversations as a service, it’s pretty much all the same thing—a set of complex, learning algorithms that mimic a human interface and can help you achieve a set of tasks.
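The production systems behind these interfaces use learning models, but you can see the basic shape of the interaction in even a toy rule-based matcher like this sketch (intents and keywords invented for illustration):

```python
# A toy sketch of "conversation as a service": keyword intent matching
# standing in for the learning algorithms real platforms use.
INTENTS = {
    "book_flight": ["book", "reserve"],
    "flight_status": ["status", "delayed", "on time"],
    "cancel_booking": ["cancel", "refund"],
}

def classify(message: str) -> str:
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "handoff_to_human"  # punt to a person when the rules are stumped

# classify("Is my flight to Portland on time?") -> "flight_status"
```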

The text and image-based world of today's social media will eventually become a relic, giving way to rich new types of interactions using VR and AR

Our interactions through social media, whether with another human being or a chat algorithm, will also become much less about text and more about richer types of media. We have already seen platforms like Facebook, QQ, Twitter, WeChat, and Snapchat embrace video. Some platforms already feature 360 video that enables you to pan around a video image as it plays by moving your phone. This is a baby step towards full 360-degree video immersion as we enter the era of virtual and augmented reality (VR and AR).

Our desire for intimacy with those we care about will be met by unifying us over time and space within VR and AR worlds. These new platforms will combine with social media to enable us to feel like we are sharing the same physical space with others, whether in real time, or with pre-recorded virtual video messages. I live 5,000 miles away from my parents and other relatives, who all live back in the UK. The thought of these enhanced interactions is intriguing and exciting: being able to feel present at my nephew's birthday parties, to share a joke and a beer with my dad, or to watch a video message from my mum just to say she was thinking of me.

Social VR and AR will never be a replacement for real physical presence and contact. But in situations where that is impractical, we can simulate these feelings of intimacy by essentially tricking the visual and auditory systems of our brains into feeling virtual presence.

Social VR selfie shown at Facebook's recent F8 developer event

The open question is whether we will want to use these more intimate methods of interaction with strangers and the avatars that will embody chatbots. Will we want to summon video representatives of mortgage brokers into our living rooms when we are looking to refinance our homes and want to “chat” about rates and loan options? Will we want to have a peppy virtual representative of United Airlines appear on our couch next to us so we can chew him out for the awful flight we endured earlier in the week? Will we want to invite Amazon into our living rooms to put on a fashion show showing a specially curated set of clothing, designed just for us? Maybe, maybe not.

One thing is for sure. Social media is going to evolve rapidly, and the stream of text-based updates about friends both near and far will seem quaint when compared to what these platforms become. The same way we might look back at the early days of home computing, with its character-based interfaces and rudimentary windowed graphics, we will look at the social media platforms of today and marvel at how clunky, limited, and impersonal they were. Social media platforms will evolve to become full communication platforms, able to connect us virtually with any person, brand, or other entity in the world. Or at least with their virtual representations. Which leaves me with the final question over the future of social media: will we eventually seek digital representation for ourselves? Will we train chatbots to handle simpler interactions on our behalf, and to represent or even pretend to be us? How will we know when we are really talking to somebody else rather than being fobbed off by a facsimile sent to intercept us and politely deal with us?

Expect social media to evolve into a full-on, soup-to-nuts commercial platform too. Today eBay takes a small slice of every transaction made on its site, and social media sites have long wanted a piece of that action. Social media platforms will charge brands for the privilege of accessing their users and conducting business with them. They will also continue to make good money helping brands laser-target all of those users with tempting offers. The difference is that social media will no longer just be about advertising and demand generation, but will extend its offering to encompass the entire sales process, including product discovery, purchase, and after-sales support. Once we link our bank accounts to these platforms, and they connect us directly to brands and services, we will see a big chunk of shopping move into these virtual spaces and marketplaces. This is a world in which titans like Facebook and Amazon will finally come face to face, and clash in a mighty battle for mindshare.

The future of social media is both exciting and daunting. Expect these platforms to expand well beyond the social, to become more commercial, and to embrace VR and AR with fervor. These new platforms, now almost unrecognizable versus their humble beginnings, will simultaneously make our lives easier and more fulfilling while at the same time continuing to turn us into commercial targets ripe for harvest. The old adage in social media will still stand: if the product is free, then YOU are the product.

Developing needs to get exponentially easier - Thoughts from the Global Forum

A couple of weeks ago I got to attend the Global Forum in Toronto. The event was awash with political celebs, ambassadors, captains of industry, and even a smattering of media types. And the conversations were both thoughtful and thought-provoking. I thoroughly enjoyed it.

The most pleasant surprise to me was the amount of focus on income inequality, sustainability, long term growth (versus short term shareholder appeasement), and a desire to address big problems.

Hannah Kuchler, Gary Shapiro, me, and Paul Warrenfelt on stage at the Toronto Global Forum

I sat on a panel alongside the charismatic and high-energy @GaryShapiro, CEO and President of the Consumer Electronics Association, and also with the affable Paul Warrenfelt, Managing Director of T-Systems (a division of Deutsche Telecom). We were interviewed by Hannah Kuchler of the Financial Times.

The main topic of conversation was how innovation will be shaped in the future, and much discussion of the power of Moore's Law ensued. But one of the central questions that came up was this: how should companies think about staffing for all this future innovation? The implication was that to embrace technology and fully leverage it, all companies will essentially need to become tech companies, hiring small armies of electronics engineers and C++ programmers so they can add intelligence to the products they make, whether those be smart running shoes or a smart umbrella. My response was to think beyond that. If we are right, and as Moore's Law continues to drop the price of computing and connectivity we move into an era of smart objects, smart spaces, and smart infrastructure, then we will HAVE to find a new way to build 'smart' into things. There will not be enough software programmers in the world if every company needs to add 'smart' to their products to compete. Designing smart products HAS to get exponentially easier.

Consider that in the early days of the web you needed skilled programmers to build even what would be a simple web page by today's standards. Now there are tools that make it easy even for web idiots like me to build a whole website, complete with video, animation, and RSS capabilities, with pages that resize automatically for whatever device they are viewed on. For example, I built this site with Squarespace, and I didn't need to learn a single line of HTML to do it. (Phew.)

Smart objects of the future will be built around standard hardware building blocks (look at what Intel is doing with products like Curie), and software toolchains will become much easier, so that anyone will be able to be a developer. Anyone will be able to 'code', the same way anyone can now be a web designer (up to a point). GUI-based development tools with drag-and-drop capabilities will hide the complexity of development. In the same way we took huge leaps from machine code to assembly language, and then to high-level languages like C++, we now need yet another level of abstraction and complexity reduction. One that makes it possible not just for first-time makers to create smart, connected objects that link to the cloud for some or all of their functionality, but that allows creative people and designers to use 'smart' as an ingredient, just as they would think of using leather, plastic, glass or metal.
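To make that concrete, here's a sketch of what declaring 'smart' as an ingredient might feel like. Nothing in this API exists today; the 'smart_ingredient' toolkit and every call on it are hypothetical illustrations of the abstraction level I'm arguing for:

```python
# Entirely hypothetical: this 'smart_ingredient' toolkit does not exist.
# It sketches the abstraction level a designer, not a programmer, needs.
from smart_ingredient import SmartObject, weather  # hypothetical library

umbrella = SmartObject("hall-umbrella")

@umbrella.on(weather.forecast("home", hours=12))  # hypothetical trigger API
def update_handle(forecast):
    # Glow softly when rain is likely, like Rose's Enchanted Umbrella.
    if forecast.rain_probability > 0.5:
        umbrella.led.glow("soft")
    else:
        umbrella.led.off()
```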

The companies that develop these types of capabilities will be the winners in the new era of smart objects. The point of competition will move from just cost to ease and speed of implementation, and indeed towards "time to scale". Look at what MIT has been doing with their App Inventor for Android, or with Scratch, a programming language for kids. Or what Lego is doing with programming for their Mindstorms products. There's a big opportunity here for someone.

What do you think? Am I crazy, or will improved computing capability in the development environment help us overcome and mask the true complexity of development that we experience today?

Moore's Law vs Metcalfe's Law

Future value creation will come from the combination of Moore's Law and Metcalfe's Law. The two are multiplicative, and they feed off each other too. Cheaper devices are made possible by Moore's Law. This increases the number of nodes, boosting a network's Metcalfe value (which grows roughly as the square of the number of nodes). And an increase in the value of the network attracts more nodes, bringing volume economics to bear on Moore's Law. And the cycle continues.
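Here's a toy simulation of that feedback loop. The cost-decline rate and adoption response are invented for illustration, not empirical estimates:

```python
# A toy model of the Moore-Metcalfe loop: falling device cost (Moore)
# pulls new nodes onto the network, and network value (Metcalfe) grows
# roughly as the square of the node count. Parameters are invented.
def simulate(years: int, cost: float = 10.0, nodes: float = 1e6) -> None:
    for year in range(1, years + 1):
        cost *= 0.7                # Moore: cost per device falls ~30%/year
        nodes *= 1 + 0.5 / cost    # cheaper devices -> faster node growth
        value = nodes ** 2         # Metcalfe: value ~ n^2
        print(f"year {year}: cost ${cost:.2f}, "
              f"nodes {nodes:.2e}, value {value:.2e}")

simulate(5)
```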

Many semiconductor companies remain focused on capturing value by marching forward with Moore's Law, shrinking transistors and creating new capabilities in silicon. That's all goodness, but as Moore's Law becomes harder to pursue, they will need to find other ways to deliver value to their customers than just keeping Gordon's promise.

I believe that the challenge for the semiconductor industry will be to gracefully shift from delivering value almost exclusively through Moore’s Law, to delivering value through a combination of Moore’s and Metcalfe’s Law, and ultimately perhaps Metcalfe alone.

One thing semiconductor companies are really good at is doing things at SCALE. They make hundreds of millions, or even billions, of chips every year, in factories the size of several football pitches. They make the most complex devices ever made by human beings. And they all work. Kind of amazing, really. For them, it's go big or go home. They don't do anything in small measures. They work at scale as part of their business model. And scale is what Metcalfe is all about.

Companies like Facebook, Google, Twitter, and Netflix have created value out of Metcalfe's Law and embraced scale by building massive user networks. 

Semiconductor companies that are still trapped exclusively in the Moore's Law paradigm may face commoditization and collapse if they can't see beyond the next few steps in the game. Smart ones will already be planning for the era beyond Moore's Law and looking for ways to create value from the scale of networks, rather than the volume scale Moore's Law has delivered for the last fifty years. Finding new ways to more easily connect IoT hardware and software platforms through APIs, and to offer flexible computing platforms (cloud, SDI, etc.), has to be where the most value now lies for those companies.