Let's talk about AI


Artificial Intelligence is the second coming of computing. Its impact is already profound, and it will soon change all of our lives.

One hundred years ago, the world changed with the coming of electrification. Roughly fifty years ago, the digital computer revolution began. Now, the AI revolution is upon us, and the impact on human life will be as profound as both electrification and the digital computer have been.

Over the last 50 years, we have seen the world change because of the impact of what I now call "traditional digital computing". This is the type of computing that brought you spreadsheets, word processing, databases, PowerPoint, the Internet, and video games. To use these computers you run programs, and the results you get are deterministic, which is to say that if you run a spreadsheet with a set of data one day, and run that same spreadsheet with the same set of data a decade later, you'll get the exact same results.
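That determinism is easy to see in code. This tiny spreadsheet-style calculation (the function and numbers are invented purely for illustration) gives the same answer on every run, on any machine:

```python
# Traditional digital computing is deterministic: the same program run on
# the same data always yields the same answer, today or a decade from now.
def quarterly_total(sales):
    return sum(sales)

data = [120.5, 98.0, 143.25]
print(quarterly_total(data))   # 361.75, on every run, on any machine
```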

Artificial Intelligence (AI) is fundamentally a different type of computing, able to solve a set of problems that were either impossible, or very difficult and expensive, for traditional digital computers to solve. AIs are trained rather than programmed. You train them with vast amounts of data. While AI currently runs on top of traditional digital computers, it's essentially simulating an analog computer, modeled after the way our brains work. I'll save the explanation of how this type of computer works for another post. Today I want to focus on the main uses of AI.

Why now?

Geoff Hinton, the father of modern AI


Before we get to what you can use AI for, let's briefly review why it's suddenly all over the news, and how we got here. The term "artificial intelligence" was first coined in the 1950s, and the core algorithms behind today's AI were developed in the mid-1980s. Yet the big, recent advances in AI didn't start until around 2011. We had the core algorithm thanks to AI luminary Geoff Hinton, who helped develop the "backpropagation" approach back in the 80s. It took until the 2010s before we had enough computing performance available to train neural networks using those algorithms, and enough data to train them with. The computing performance came in the form of graphics chips from companies like Nvidia, which turn out also to be useful for training AIs. The avalanche of data came because of a dramatic fall in the cost of storage, and because of billions of people sharing photos, videos, and other data with tech companies. Just think about YouTube. Users upload 300 hours of video to YouTube every minute. Google has amassed an estimated 7 billion videos as a result. All this data can be used to train AIs.
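To give a flavor of what backpropagation actually does, here is a deliberately tiny sketch: a one-weight "network" learns the rule y = 2x by measuring its prediction error and nudging its weight in the direction that shrinks that error. Real networks repeat exactly this step for millions of weights across many layers. (The data and learning rate here are invented purely for illustration.)

```python
import numpy as np

# One neuron, one weight, learning y = 2x by gradient descent.
rng = np.random.default_rng(0)
w = rng.normal()                            # start from a random weight
xs = np.array([1.0, 2.0, 3.0, 4.0])         # training inputs
ys = 2.0 * xs                               # training targets

for _ in range(200):
    pred = w * xs                           # forward pass: make predictions
    grad = np.mean(2 * (pred - ys) * xs)    # backward pass: d(mean error^2)/dw
    w -= 0.05 * grad                        # step downhill on the error

print(round(w, 4))  # converges to ~2.0
```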

So we now have the algorithms, the computing capability, the data storage, and the oceans of data needed to make AI go. What can it do?

Four main uses of AI

Artificial intelligence can be used for lots of things, and you're likely already using it every day. If you use email, you're using AI. Spam filters use AI to help identify spam email. AI is used to identify viruses in your virus checker, and to create your credit score. It's at work when Facebook suggests a friend to tag in a photo, when Google makes suggestions as you enter words into the search bar, and whenever you talk to a voice agent like Alexa, Cortana, or Siri.

I've found it helpful to think about the uses of AI in four main categories:

  1. Seeing, hearing, and understanding the world
  2. Finding important patterns in oceans of data
  3. Learning from experience
  4. Imagination and content creation

More breakthroughs and usage categories may come, and there are definite overlaps between these categories, but this seems to be a good working set of AI usage categories for now. Let's review each one briefly. Think about how each capability might have an impact on your business, or your life, as you read through these next sections.

1. Seeing, hearing, and understanding the world


For the first time, machines are starting to open their eyes and ears. Using AI, computers can now see, hear, and understand something about the world they inhabit. It's not a true understanding (yet). When a computer recognizes an image of an apple and correctly identifies it as such, it doesn't really understand what an apple is, where it comes from, what it tastes like or anything like that. But it does know that a particular image is categorized with the five letters a-p-p-l-e.
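A toy sketch makes the point concrete. Each "image" below is reduced to numbers (here a made-up two-number summary: redness and roundness), and the classifier simply attaches the label of the closest known category. Real systems learn millions of features rather than two hand-picked ones, but the principle is the same: "apple" is just a label attached to numbers.

```python
import numpy as np

# Hypothetical feature vectors standing in for processed images.
known = {
    "apple":  np.array([0.9, 0.8]),   # very red, quite round
    "banana": np.array([0.1, 0.2]),   # not red, not round
}

def classify(features):
    # Pick whichever known category is closest to the input's features.
    return min(known, key=lambda label: np.linalg.norm(features - known[label]))

print(classify(np.array([0.85, 0.75])))  # "apple": five letters attached to numbers
```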

It is this new-found ability to see, hear, and thus gain a rudimentary understanding of the world, that has enabled cars to drive themselves, and robots to safely inhabit the same spaces as humans. For decades, industrial robots have built our cars and other goods inside our factories. If you were to wander through those factories you'd see that the human workers and the robots are all separated by a tall steel fence. This is to protect the workers from harm. Why? Simple: because those robots can't see the workers and thus can't avoid them if they get in the way. Squished workers would quickly ensue.


A new class of AI-powered robots can sense their surroundings and either stop or take evasive action to avoid humans and other obstacles. This is leading to a new generation of robots that is flooding into every industrial sector, and ultimately into our lives. These smart robots can co-exist safely in the same spaces as humans, working alongside them. This has given rise to the term "cobots", short for collaborative robots.

Tech companies have also made giant strides in a computer's ability to hear and understand human speech. While voice agents like Siri, Cortana, Alexa, Bixby, and Google Assistant still have a long way to go, they are getting significantly more capable every year. Actually, every week.

AI is being applied to all the main tasks involved in delivering "conversational computing", where you can talk naturally to a computer and it understands and can act upon your request. AI will help voice agents like Siri and Alexa to hear your speech even when there's background noise. It will help to improve a computer's ability to figure out what words you said, what those words mean, and what it is that you're asking it to do. Perhaps most importantly, tech companies are making advances in AI's ability to hold a conversation and understand context.

Most voice agents only enable pretty simple interactions. For example, you might ask, "What's the weather today?" and you'll get a response about today's weather forecast for your location. If you ask a follow-up question such as, "How about in London?", you'll usually confuse the AI. It's as if it totally forgot that you just had a conversation about the weather, and therefore it can't make the leap to figuring out that you are asking about the weather in London. Tech companies are racing to bring conversational computing into our daily lives, with Microsoft and Google currently out in front. Google recently demonstrated its voice agent, Google Assistant, making phone calls to a hair salon and a restaurant to book appointments. Videos of these two demos are included below.
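The missing ingredient in that "How about in London?" exchange is a memory of the conversation so far. The toy sketch below shows the idea: remembering the topic of the last request lets the agent resolve the follow-up. Everything here (the context dict, the get_weather stub, the hard-coded home city) is a hypothetical stand-in for what a real voice agent would do.

```python
context = {}

def get_weather(city):
    return f"Forecast for {city}: sunny"          # stand-in for a weather service

def handle(utterance):
    if utterance.startswith("How about in"):
        city = utterance.removeprefix("How about in").strip(" ?")
        if context.get("intent") == "weather":    # context makes the leap possible
            return get_weather(city)
        return "Sorry, how about what?"           # no context: the agent is confused
    if "weather" in utterance:
        context["intent"] = "weather"             # remember what we talked about
        return get_weather("Portland")            # assume the user's home city
    return "Sorry, I didn't catch that."

print(handle("What's the weather today?"))  # Forecast for Portland: sunny
print(handle("How about in London?"))       # Forecast for London: sunny
```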

Expect to see continued advances in AI's ability to see, hear, and understand the world around it. This will lead to more natural ways for humans to interact with computers, and also to all manner of autonomous machines (robots, drones, autonomous cars and trucks, passenger drones, autonomous ships, mobile stores and more) that will safely navigate their way through our world.

2. Finding important patterns in oceans of data


Artificial intelligence is great at spotting patterns. Fundamentally that's what's happening when AI is being used to understand what it's "seeing" in photos and videos. But AI's pattern-finding abilities can be used in lots of other interesting ways too.

AI is being used to look for patterns in medical images such as CT scans, fMRIs, and X-rays. A pattern might indicate a broken bone, a tumor, or some other ailment. An AI can now look at a medical image and give a first-pass diagnosis, based on its experience of having seen millions of other images. These AIs can offer a great second opinion to experienced human radiologists. Eventually, they may get so good at this task that radiologists can focus on other, higher-value, and more patient-centered tasks.

AI is also being used to search for new drugs and new materials. The AIs are trained with data describing existing drugs or materials and their properties. They look for patterns in this information and can then extrapolate, suggesting new candidate drugs or materials that, from the AI's perspective, seem likely to have the desired set of properties.

A similar approach is being used to optimize factory operations, predict electricity grid demand, and assess insurance risk.
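A toy version of that learn-then-extrapolate idea: fit a model on known materials (composition features mapped to a measured property), then predict the property of an untested candidate. All the numbers here are invented purely for illustration; real systems use far richer descriptions and far more data.

```python
import numpy as np

X = np.array([[0.2, 0.8],        # composition features of known materials
              [0.5, 0.5],
              [0.9, 0.1]])
y = np.array([10.0, 16.0, 24.0]) # measured property, e.g. tensile strength

X1 = np.hstack([X, np.ones((3, 1))])            # add an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)   # least-squares fit

candidate = np.array([0.7, 0.3, 1.0])           # an untested composition
predicted = float(candidate @ coef)
print(round(predicted, 2))                       # the model's best guess: 20.0
```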

3. Learning from experience

You don't program an AI; you train it, which means that it learns from the things you show it. This same property can be used by AIs to essentially train themselves: by trying out different strategies, AIs can learn over time. The AlphaGo machine that beat the world's best human Go player in 2017 learned to play like a champion by playing millions of games of Go inside its own head, and learning from the experience. Bipedal robots can learn to walk in the same way. The video above shows a robot learning to walk. The first segment shows the robot walking the way it was programmed to walk by a human. It walks tentatively, a bit like it's perhaps pooped its pants. Using reinforcement learning, the robot learned to walk 1.8 times as fast, as shown in the second segment. Pretty impressive.
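Here is a deliberately tiny illustration of learning from experience, using Q-learning (one simple form of reinforcement learning): an agent on a five-cell track starts with no idea what to do and learns, purely by trial and error, that stepping right toward the goal cell is what earns the reward. The track, reward, and learning settings are all invented for illustration.

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4
q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}  # value of each move

for _ in range(1000):                               # 1000 practice episodes
    s = 0
    while s != GOAL:
        if random.random() < 0.2:                   # explore occasionally...
            a = random.choice((-1, +1))
        else:                                       # ...otherwise act greedily
            a = max((-1, +1), key=lambda m: q[(s, m)])
        s2 = min(max(s + a, 0), N_STATES - 1)       # take the step (track is bounded)
        reward = 1.0 if s2 == GOAL else 0.0
        best_next = max(q[(s2, -1)], q[(s2, +1)])
        q[(s, a)] += 0.5 * (reward + 0.9 * best_next - q[(s, a)])  # learn from outcome
        s = s2

policy = [max((-1, +1), key=lambda m: q[(s, m)]) for s in range(GOAL)]
print(policy)  # the learned policy: always step right
```

No one ever tells the agent the rule "go right"; it emerges from the rewards it experiences, which is the same principle behind AlphaGo's self-play and the robot's learned gait.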

4. Imagination and content creation

This is the piece that usually freaks people out the most when I talk about it in my presentations. Computers recently gained an imagination and the ability to exercise limited creativity. Using a technique known as Generative Adversarial Networks (GANs), researchers have been able to create AIs with impressive new capabilities. A GAN is essentially two AIs pitted against each other. One AI is trained to create content. The other is trained to spot fake content. They compete, much like a forger pitted against an art detective, and both get better at what they do. Eventually, the forger AI can create incredibly convincing content, whether text, images, or even video.
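The forger-versus-detective dynamic can be sketched in one dimension. In this toy example (invented for illustration, and nothing like a full image model), the "forger" generates numbers fake = a*z + b and tries to make them look like samples from the real data, drawn from a normal distribution centered at 4; the "detective" D(x) = sigmoid(w*x + c) tries to tell real numbers from fake ones. Each side nudges its parameters against the other, and the forger's output drifts toward the real data.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0            # forger (generator) parameters
w, c = 0.1, 0.0            # detective (discriminator) parameters
lr = 0.01

for _ in range(3000):
    real = rng.normal(4.0, 0.5, 32)        # a batch of real samples
    z = rng.normal(size=32)                # random noise in...
    fake = a * z + b                       # ...fake samples out

    # Detective step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Forger step: shift (a, b) so the detective rates fakes as more real.
    d_fake = sigmoid(w * fake + c)
    g = (1 - d_fake) * w                   # gradient of log D(fake) w.r.t. each fake
    a += lr * np.mean(g * z)
    b += lr * np.mean(g)

print(round(b, 2))  # the forger's mean has drifted toward the real data's mean of 4
```

The same tug-of-war, scaled up to deep networks and millions of images, is what produces the photorealistic faces described next.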

In late 2017, Nvidia showed some work it had been doing with GANs. It trained a GAN using thousands of photographs of celebrities harvested from the web, then used it to generate images of what it considered to be a celebrity photograph. The results are eerily impressive. Click the video below to see what I mean:

This type of technology isn't just useful for making creepy photos of pretty people who don't exist. GANs have many incredible potential uses that will change the way we all work and live. For example, it won't be long before GANs are used to improve our photography. A GAN can take a low-resolution image and output a high-resolution version of that same image. GANs can also take an image shot in very low light and output a much brighter image, without the noise artifacts that you normally get by boosting the ISO on the image sensor. They can also be used to remove unwanted pieces of an image. For example, Google has demonstrated how AI can remove a chain-link fence in the foreground of an image and "imagine" what was behind the areas occluded by the fence.

Creative AI will have many business uses and will enable computers to work in close partnership with people. One example is engineering design. There are lots of different ways to design an industrial valve. Future CAD tools will take the original design created by a human engineer and "riff" on it, creating perhaps a couple of hundred alternative options that achieve the same functionality but are designed in subtly or radically different ways. The CAD tool will then run simulations on all the design options, testing them for joint stress, reliability, and manufacturability, and estimating the bill of materials. The result: an AI-enhanced CAD tool will be able to take an engineer's original design and offer smart suggestions on how to reduce cost, improve quality, or make the product easier to manufacture. These capabilities are already being tested and will soon be deployed. Very cool stuff.
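The generate-simulate-rank loop behind that kind of tool can be sketched in a few lines. Every name and number below is a hypothetical stand-in: the "simulation" is a trivial cost formula, and the variants are random riffs on a base design, but the workflow (riff, check constraints, score, surface the best) is the one described above.

```python
import random

random.seed(3)
base = {"wall_mm": 3.0, "ports": 2}     # the engineer's original valve design

def cost(design):                       # stand-in for real stress/cost simulation
    return design["wall_mm"] * 4.0 + design["ports"] * 1.5

variants = [
    {"wall_mm": round(base["wall_mm"] * random.uniform(0.7, 1.3), 2),
     "ports": random.choice([1, 2, 3])}
    for _ in range(200)                 # riff: a couple hundred alternatives
]
feasible = [d for d in variants if d["wall_mm"] >= 2.5]   # reliability constraint
best = min(feasible, key=cost)          # surface the cheapest design that passes
print(best, round(cost(best), 2))
```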

Another example is in dentistry. Researchers at the University of California, Berkeley, working in partnership with Glidewell Dental Lab, are using GANs to help automatically design dental crowns. By looking at scans of both sides of a person's jaw, the AI is able to design a crown that will look aesthetically pleasing, properly fill the gap in the patient's tooth line, and be optimized for bite contact. The researchers claim that the AI-generated crowns already outperform those designed by humans.

Don't panic

Artificial intelligence is gaining a lot of new capabilities, and is learning new skills all the time, but it's still a long way from being able to do all the things that humans can do. Some jobs will be replaced by the capabilities of AI. But many jobs will be enhanced with new tools that are turbo-charged by artificial intelligence. These tools will remove some of the drudge work from our lives, and also help us to get more done and to be more creative. Doctors will get to spend more time with patients. Photographers will capture even more breathtaking images (check out the new Arsenal AI assistant). Engineers will get AI-assistants to help them design better products, to code better software, and to deliver more efficient and elegant solutions.

Andrew Ng, one of the leading thinkers in AI today, once said, "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years." Andrew Ng led the Google Brain project, is the former chief scientist at Baidu, and is now the founder of Landing AI, a company focused on using machine learning to solve problems in the manufacturing sector.

Artificial intelligence is already transforming business and our lives. We are at the beginning of a huge transformation of every sector of industry. It's an exciting time, and much as our great-great-grandparents had their lives transformed by the coming of electricity, we are about to experience dramatic changes to our lives with the widespread availability of artificial intelligence.

This is a once-in-50-years event. Electricity around the turn of the 20th century, computing mid-century, and now AI towards the beginning of the 21st century. We should all think of it in those terms. AI is a big, big, BIG deal.

I can't wait to see what problems people will solve with AI in the coming decade, what new experiences we will all enjoy as a result, and where AI will take humanity going forward.

Like any technology, we will need to proceed carefully, and with eyes wide open. But AI will improve our lives in myriad ways. I'm excited to see what major human problems AI will be able to help us solve during my lifetime.

If you are interested in learning more about artificial intelligence, and how it might reshape your business, then please check out some of the latest classes that I offer, including "Artificial Intelligence 101" and "AI for Business". You can also learn more by downloading my latest catalog, or by visiting my website, baldfuturist.com.

Get ready for an exciting few years of radical transformation!

Steve Brown, Futurist