The future of photography
I've loved taking photographs since I was a kid. Back then I took pretty terrible photographs with my Kodak Disc Camera. These days I still enjoy snapping images, and I occasionally get out with my digital SLR to try to get a bit arty or capture a cool fleeting moment in time. Like most people, though, the vast majority of the pictures I capture are taken with my phone. And they're not bad, especially with the latest phones. The great news is that photography is about to get a LOT better.
A few years from now, it'll be harder and harder to take a bad photo because our cameras will be getting help from artificial intelligence (AI). AI will help our cameras to take better images in low light, to fix focusing issues after the fact, to make any video slow motion, and even to improve the resolution of photos. Artificial intelligence will make it much easier to take a high-quality photograph. You'll still need to point your camera at something interesting, and think about how your image is composed, but AI will help you to take care of the rest.
AI will also turbo-charge image editing software and make it easier to clean up images, fix problems, or change out backgrounds. Things that once took digital artists an hour to do with Photoshop will be accomplished in seconds.
Let's take a look at how AI will change photography forever. First of all, let's examine how AI is being used to fix images: brightening dark photos, fixing noise, or removing unwanted objects in the foreground or background. A lot of this work is still at the research stage, but Nvidia and Google have already shown some quite impressive demos.
Noise and light
Nvidia researchers trained an AI with pairs of images, one normal image and one with severe noise introduced. The resulting AI is now able to take in a noisy image that it's never seen before and create a version with the noise removed. This approach will also improve photos that are taken in low-light conditions. When our cameras encounter low light they automatically boost the gain on the image sensor to try to capture more light in the scene. That introduces noise into the image and makes it grainy. If you can fix that, then you can take much better images in low light. This de-noising isn't just for improving your holiday snaps. It can also be used to improve the quality of medical imaging like MRIs and CT scans, as you'll see in the de-noising video demo below.
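To make the "train on pairs" idea concrete, here's a deliberately tiny sketch of the same recipe. This is an assumption-level toy, not Nvidia's actual network: instead of a deep model, it learns a simple linear filter from (noisy, clean) pairs of 1-D "image rows" using least squares, then applies that filter to an unseen noisy signal.

```python
# Toy illustration of pair-based denoising: learn a linear filter from
# (noisy, clean) training pairs, then apply it to an unseen noisy signal.
# This is a hypothetical minimal stand-in for the deep networks in the post.
import numpy as np

rng = np.random.default_rng(0)

def make_pair(size=64, noise=0.5):
    """Create one (noisy, clean) training pair from a smooth random signal."""
    clean = np.cumsum(rng.standard_normal(size))      # smooth 1-D "image row"
    clean = (clean - clean.mean()) / (clean.std() + 1e-9)
    noisy = clean + noise * rng.standard_normal(size)
    return noisy, clean

# Build a training set of sliding windows: predict the clean centre sample
# from a 9-sample noisy neighbourhood.
K = 4                                   # half-window size
X, y = [], []
for _ in range(200):
    noisy, clean = make_pair()
    for i in range(K, len(clean) - K):
        X.append(noisy[i - K:i + K + 1])
        y.append(clean[i])
X, y = np.array(X), np.array(y)

# "Training" here is a single least-squares solve for the filter weights.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate on an unseen pair: the learned filter should beat the raw input.
noisy, clean = make_pair()
denoised = np.convolve(noisy, w[::-1], mode="same")   # correlate with filter
err_noisy = np.mean((noisy[K:-K] - clean[K:-K]) ** 2)
err_denoised = np.mean((denoised[K:-K] - clean[K:-K]) ** 2)
print(f"noisy MSE {err_noisy:.3f} -> denoised MSE {err_denoised:.3f}")
```

The real systems replace the least-squares filter with a deep network and millions of image pairs, but the supervision signal is the same: show the model what clean looks like for each noisy input.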
This particular branch of AI, called Generative Adversarial Networks (GANs), was first created in 2014 and has proven incredibly useful when it comes to improving the quality of images and videos. I explain GANs in my last post here. And you can see more examples of de-noising of images in this short video from Nvidia Research:
A similar approach can be used to improve the brightness of an image that was shot in very low light conditions. By showing an AI hundreds of thousands or even millions of images, it can be trained to understand image composition, and which areas of an image are important, such as people's faces. An AI can then intelligently fix the lighting on an image, ensuring that the subject of an image is properly lit without you having to spend time dodging and burning the image in an editing package. Check out the video below from Relonch. They trained an AI with just 100,000 images. The AI can now intelligently fix the lighting on images, highlighting the subject of each shot automatically.
Some AIs can go even further, taking a digital photo that was taken in extremely low-light conditions and pulling a decent image from it. Here's a short video from Two Minute Papers that shows the amazing results that are possible:
Another company, Arsenal, is taking a different approach. It uses AI to control your DSLR, automatically choosing exposure settings such as aperture and shutter speed based on the scene and subject you have pointed your camera at. The AI was trained on thousands of high-quality images, so it knows from experience the most appropriate settings for your camera for the particular shot you have composed. It can even control photo stacking to help get you amazing composite images.
OK, so our photos are now properly lit and de-noised. What else might we want to fix? Google demonstrated how a GAN AI might be used to remove unwanted foreground objects, like the chain link fence shown in the image below. To do this, the AI has to use its experience to "imagine" what the pixels behind the chainlink fence would have looked like. Pretty cool stuff.
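A GAN "imagines" missing pixels far more convincingly than any simple rule, but the core task, filling in masked pixels from their surroundings, can be sketched with a classical technique: diffusion inpainting, where missing pixels are repeatedly replaced by the average of their neighbours. This toy (entirely my own illustration, not Google's method) removes a fence-like pattern from a smooth test image:

```python
# Classical diffusion inpainting: fill masked pixels by repeatedly
# averaging their four neighbours. A non-learned baseline for the
# "imagine the pixels behind the fence" task described above.
import numpy as np

def inpaint(image, mask, iters=500):
    """Fill pixels where mask is True using iterative neighbour averaging."""
    out = image.astype(float).copy()
    out[mask] = out[~mask].mean()          # crude initial guess
    for _ in range(iters):
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]              # update only the missing pixels
    return out

# Demo: a smooth gradient image with a "fence" of missing columns.
h, w = 32, 32
yy, xx = np.mgrid[0:h, 0:w]
truth = (xx + yy) / (h + w - 2)            # smooth ramp image in [0, 1]
mask = np.zeros((h, w), dtype=bool)
mask[:, ::6] = True                        # every 6th column is "fence"
damaged = truth.copy()
damaged[mask] = 0.0

restored = inpaint(damaged, mask)
err_before = np.abs(damaged[mask] - truth[mask]).mean()
err_after = np.abs(restored[mask] - truth[mask]).mean()
print(f"mean error on missing pixels: {err_before:.3f} -> {err_after:.3f}")
```

Diffusion works well on smooth regions like this ramp, but it can only blur inward from the edges. What makes the GAN results striking is that they can hallucinate plausible texture and structure, not just smooth gradients.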
Nvidia has also demonstrated image editing software that's been turbo-charged with generative AI. This stuff isn't ready for primetime yet, but imagine where this is going. In the video below, again from Nvidia Research, we see how the AI paints in areas of the photo that are removed by the editor. Using its experience of other photographs, it fills in the pixels to complete the image.
This is early-stage research, but it's already quite impressive. Imagine being able to apply this kind of technology to old, scratched up or damaged photographs of your great grandparents, for example.
Nvidia researchers have taken this approach to the next level, using semantic editing. By labeling areas of an image, they can use an AI to quickly turn that area of the image from "trees" to "buildings" as you'll see in the video below:
Adobe has demonstrated a cool AI-powered tool coming to future editing software, codenamed Project Cloak. Videographers can use it to remove objects from a video scene. Check out this short video demo:
Google and Magic Pony (now owned by Twitter) have done work where they apply GANs to improve the resolution of images, delivering what is referred to as "super resolution". The AI is trained in a similar way to the low-light or de-noising GANs. It is shown many, many pairs of images, one high-resolution image, and one low-resolution version. The AI learns the relationship between the two and is then able to generate a high-resolution version of any low-resolution image it is given. The results aren't perfect, but they are pretty good. The approach works just as well on video. This will likely lead to new ways of doing image compression, making it possible to send high-quality video over low bandwidth network connections.
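The training setup for super resolution mirrors the de-noising one, and the pair-generation trick is worth seeing: you make the low-resolution inputs yourself by downsampling real images. The sketch below (my own toy, not Google's or Magic Pony's model) builds (low-res, high-res) pairs of 1-D signals, learns a 2x upsampler by least squares, and checks that it beats naive nearest-neighbour upscaling:

```python
# Toy super-resolution: build (low-res, high-res) pairs by downsampling,
# then learn a linear 2x upsampler. A hypothetical stand-in for the GANs
# described above, to show how the training pairs are constructed.
import numpy as np

rng = np.random.default_rng(1)

def make_pair(n=64):
    """High-res smooth signal plus its 2x average-pooled low-res version."""
    hi = np.cumsum(rng.standard_normal(n))
    hi = (hi - hi.mean()) / (hi.std() + 1e-9)
    lo = hi.reshape(-1, 2).mean(axis=1)     # 2x downsample by averaging
    return lo, hi

# Learn to predict each pair of high-res samples from a 4-sample
# low-res neighbourhood, via least squares.
K = 2
X, Y = [], []
for _ in range(300):
    lo, hi = make_pair()
    for i in range(K, len(lo) - K):
        X.append(lo[i - K:i + K])           # 4 low-res samples
        Y.append(hi[2 * i:2 * i + 2])       # the 2 high-res samples they cover
X, Y = np.array(X), np.array(Y)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # weights, shape (4, 2)

# Compare against naive nearest-neighbour upsampling on a fresh pair.
lo, hi = make_pair()
nn = np.repeat(lo, 2)                       # nearest-neighbour 2x upscale
learned = nn.copy()
for i in range(K, len(lo) - K):
    learned[2 * i:2 * i + 2] = lo[i - K:i + K] @ W
err_nn = np.mean((nn - hi) ** 2)
err_learned = np.mean((learned - hi) ** 2)
print(f"nearest-neighbour MSE {err_nn:.4f} -> learned MSE {err_learned:.4f}")
```

The deep-learning versions do the same thing at a vastly larger scale, and the GAN's adversarial training pushes the output toward looking like a real photo rather than just minimising pixel error.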
Incredible slow-motion video
Nvidia uses an approach similar to super resolution to make slow-motion videos. Instead of using an AI to imagine the additional pixels needed to create super-resolution photographs, they use AI to imagine the interim frames needed to make smooth slow-motion videos. Video on your phone is usually shot at 30 frames per second (fps). To create slow-motion video without making it judder, you need to capture more frames per second and then play them back at a typical 30 frames per second. So you might shoot video at 240fps and play it back at 30fps, so the action moves at 1/8th normal speed. But what if you shot video at a normal 30fps and it turned out to be a terrific video clip that you later wanted to watch in slo-mo? AI to the rescue. This next video from Nvidia Research shows how successful they have been using AI to automatically create slo-mo, including taking video from The Slow Mo Guys and slowing it down even further. Mesmerizing to watch:
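The frame arithmetic above is easy to check for yourself. This toy sketch (a naive linear cross-fade of my own, not Nvidia's motion-aware interpolation) works out how many in-between frames 30fps footage needs to play back as 240fps slow motion, and generates them by blending:

```python
# Naive frame interpolation: 30fps footage played back at 240fps needs
# 240 / 30 - 1 = 7 new frames between every pair of originals. Real systems
# estimate motion; this toy just cross-fades, to show the frame arithmetic.
import numpy as np

def interpolate_frames(a, b, n_between):
    """Return n_between linearly blended frames between frames a and b."""
    ts = np.arange(1, n_between + 1) / (n_between + 1)
    return [a + (b - a) * t for t in ts]

src_fps, target_fps = 30, 240
n_between = target_fps // src_fps - 1      # 7 frames to invent per gap

frame_a = np.zeros((4, 4))                 # stand-in frames: dark and bright
frame_b = np.ones((4, 4))
mids = interpolate_frames(frame_a, frame_b, n_between)

print(len(mids))                           # 7 in-between frames
print(mids[3][0, 0])                       # the middle frame is a 50% blend
```

A plain cross-fade like this produces ghosting whenever anything moves, which is exactly why Nvidia's approach uses AI to estimate where each pixel is travelling and place it correctly in the imagined frames.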
AI is going to change the future of photography. It will give us smarter cameras that can take better photographs, and it'll allow us to improve our existing photos and videos and apply a new level of creativity to images. So long as you can point your camera at an interesting subject, and compose a decent frame around that subject, your AI-powered camera should help you to take amazing photographs. We are moving to an era where AI collaborates with us on projects, helping us to realize our vision and get great results without having to do all the fiddly work. This holds true in many other areas of creative endeavor, from design (generative design), to scientific discovery.
AI is an incredible new tool. Taking terrific photographs and video is just the beginning.