AI to Revolutionize Real-Time Holography
Fans of science fiction are familiar with the interactive, real-time holograms of the Star Trek universe, where characters can visit the ship’s holodeck to immerse themselves in other realities with the help of three-dimensional holographic figures capable of moving and speaking in realistic ways. In today’s world, real-time holographic displays that are virtually indistinguishable from “real life” have remained the stuff of science fiction. Until now.
Recent research from Stanford University promises to make real-time holography a part of daily life, using a combination of advanced machine learning and “camera in the loop” real-time feedback to produce highly accurate 3D renderings, with the potential to revolutionize fields as diverse as gaming, medicine and education.
What is Holography?
The concept of holography is a relatively old one, with conceptual roots in the “Pepper’s ghost” stage illusions of the 1860s. Modern holography arose from the work of Hungarian-born scientist Dennis Gabor, who won the Nobel Prize in Physics in 1971 for his research, and until the advent of today’s artificial intelligence and imaging technology, holographic technology remained similar to his early work.
A hologram is a recording of light-wave interference patterns, using the diffraction or scattering of light to give the appearance of three dimensions. Unlike a two-dimensional photograph, a holographic image has depth, parallax and other features that create the illusion of a “real” object that can be viewed from different angles. In that way, holography is similar to sound recording, where intersecting patterns of sound waves are captured and preserved.
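The wave-recording idea above can be sketched numerically. The following toy example (illustrative only, with made-up wave parameters, not a model of any real holographic setup) records the interference between a reference wave and an object wave on a one-dimensional strip of “film”:

```python
import numpy as np

# Toy illustration of the holographic principle: the film records the
# interference between a known reference wave and light from an object,
# preserving phase (and hence depth) information that an ordinary
# photograph discards.

x = np.linspace(0.0, 1.0, 512)  # 1-D strip of "film", arbitrary units

# Reference wave: a plane wave striking the film at a slight angle.
reference = np.exp(1j * 2 * np.pi * 50 * x)

# Object wave: light from a point source, whose phase varies
# quadratically across the film.
object_wave = 0.5 * np.exp(1j * 2 * np.pi * 200 * (x - 0.5) ** 2)

# The film can only record intensity, but the cross terms of |R + O|^2
# encode the object's phase relative to the reference as fringes.
hologram = np.abs(reference + object_wave) ** 2

# Re-illuminating the recorded pattern with the reference wave
# reproduces a copy of the object wave among the diffracted terms.
reconstruction = hologram * reference
```

The key point the sketch shows is that the recorded `hologram` is a purely real intensity pattern, yet because of the interference cross terms it still carries the object wave’s phase, which is what gives a hologram its depth and parallax.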
But conventional holography has limitations. Standard holographic technologies are unable to capture the complexity of “real-world” scenes, including movement, sound and depth. Holographic images are often filled with artifacts and pixelation from scattered light, which affects their accuracy and prevents a truly realistic appearance.
Creating a truly three-dimensional effect that recreates all the elements of a real-world scene only becomes possible with advanced machine learning. Deep neural networks, combining complex algorithms with the rapid processing of massive amounts of data, can create three-dimensional scenes in which people and objects move and interact in ways similar to “real life.”
By incorporating real-time camera feedback into the training of deep neural networks, researchers at Stanford University’s Institute for Human-Centered Artificial Intelligence have solved that problem, opening doors for a new era of “mixed reality” with far-reaching effects for all aspects of daily life.
AI and Neural Networking Transform Holography
Artificial intelligence, or AI, is a broad term that refers to the design and development of “smart” machines capable of learning and deciding independent of human input. Built on neural networks that aim to replicate the workings of the human brain, these machines “learn” through continuous training to recognize patterns in data and extrapolate from them to make decisions about new information.
Now, researchers at Stanford are combining advances in artificial intelligence with the concepts of optics to create algorithms capable of instantly recreating real-world scenes that remain true to their constantly changing complexity. That becomes possible with the addition of what the Stanford team calls the “camera in the loop”: incorporating a real camera into the training protocol of a neural network they named HoloNet.
The HoloNet network learns to reproduce accurate 3D images by first creating an image and then projecting it onto a display. Then a digital camera captures the image and sends it back through the system, so that the network can compare it to the original for accuracy. In that way, the system improves its ability to accurately recreate a given image. Eventually, it learns to recreate novel images never encountered in its training.
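The feedback cycle described above can be sketched in miniature. The code below is a hypothetical illustration, not the actual HoloNet implementation: the “network” is reduced to two scalar parameters, the camera and display are simulated by a simple gain-and-offset distortion with made-up constants, and the gradients flow through that known model rather than real optics. The structure, however (propose a pattern, capture what is actually displayed, compare with the target, update), mirrors the camera-in-the-loop idea:

```python
import numpy as np

# Hypothetical minimal sketch of camera-in-the-loop optimization.
# A real system uses a deep network and a physical camera; here the
# "camera" is a simulated distortion the optimizer must learn to undo.

rng = np.random.default_rng(0)
target = rng.uniform(0.2, 0.8, size=256)  # toy 1-D "image" to reproduce

def camera(pattern):
    # Stand-in for the optical path: display plus camera introduce a
    # gain (0.8) and offset (0.1) the algorithm doesn't know in advance.
    return 0.8 * pattern + 0.1

# The "network" is just a gain w and bias c mapping target -> pattern.
w, c = 1.0, 0.0
lr = 0.5

for _ in range(500):
    pattern = w * target + c       # propose a display pattern
    captured = camera(pattern)     # camera captures the displayed image
    error = captured - target      # compare capture with the target
    # Gradient descent on mean squared error through the camera model.
    w -= lr * np.mean(2 * error * 0.8 * target)
    c -= lr * np.mean(2 * error * 0.8)

# After training, the captured image closely matches the target:
# w converges toward 1/0.8 = 1.25 and c toward -0.1/0.8 = -0.125.
final_error = np.mean((camera(w * target + c) - target) ** 2)
```

The design choice the sketch highlights is that the loss is computed on the *captured* image, not the proposed pattern, so the optimization automatically compensates for whatever distortions the display-and-camera path introduces.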
Because advanced neural networks can constantly mine massive data sets for new patterns, these networks can develop new algorithms not only capable of reproducing an image but also of incorporating data from a variety of other sources to create a multidimensional experience that incorporates movement, sound, and other sensory details. These elements combine to produce a holographic construct that can stand alone or be overlaid on existing reality to create a new augmented, or mixed reality.
AI-Powered Holography Opens Doors to a New “Reality”
The emergence of “intelligent” holography has enormous implications for enhancing human experiences in just about every sphere of life. AI and digital tech experts point to a near future in which smart holograms can create an interactive, three-dimensional reality that changes the way humans connect, learn, and experience the world. The possibilities include:
Medicine and Healthcare
Using data from scans such as ultrasound, CT, and MRI procedures, intelligent holograms could recreate a patient’s body in 3D space, so that doctors can examine specific tissues and organs from multiple perspectives. Likewise, holographic recreations of operating rooms can help surgeons plan and prepare for procedures well in advance. This kind of holographic technology could also play a major role in training healthcare professionals, allowing them to view the human body in a variety of ways and to practice procedures in real time.
Entertainment and Events
Holograms are already being used to create immersive experiences for entertainment and other kinds of events. For example, in 2014 India’s Prime Minister Narendra Modi used holographic imaging to “appear” at several rallies simultaneously, and an entirely holographic singer became a sensation in Japan. Smart holograms can create a virtual concert, fashion show, or other event that provides an interactive, immersive experience closely resembling physical reality.
Public Safety and Health
New holographic technologies have the potential to improve public safety and emergency responses, too. Digital technology experts point out that constructing accurate 3D maps could help responders such as firefighters and search and rescue teams find their way in unfamiliar territory. Likewise, accurate holographic renderings of buildings, roadways and other structures could help to pinpoint defects and vulnerabilities to earthquakes and other natural disasters. Smart holograms could also help train emergency responders and public health officials.
Education and Collaboration
Real-time holography can revolutionize learning and collaboration, making it possible to conduct interactive meetings with people half a world away, and to collaborate remotely on projects when all parties have access to the same 3D prototype. Students could learn from instructors anywhere in the world, who could interact with them for feedback and practical exercises.
The creators of Stanford’s HoloNet point out that the hardware needed to support these advances in intelligent holography is still in its infancy. Possible avenues include eyeglasses capable of projecting holograms directly into a user’s eyes, or 3D cameras and scanners built into existing devices such as smartphones and tablets. Added to the personal devices people use every day, intelligent holography could create a new kind of enhanced reality that anyone can access at any time.
The immersive holodecks of Star Trek may be far in the future, or they may not materialize at all. Today’s emerging technologies for creating real-time holographic experiences rely on tiny holographic displays that put virtual reality at anyone’s fingertips. With a combination of advanced machine learning and classic holographic technologies, smart holography creates visual experiences that are practically indistinguishable from real life.
#virtualreality #intelligentholograms #machinelearning #neuralnetworks #holography