
Stanford’s VR breakthrough could mark the end of bulky headsets – thanks to AI



Research team at Stanford from left to right: Brian Chao, Manu Gopakumar, Gun-Yeal Lee, Gordon Wetzstein, Suyeon Choi (Photo by Andrew Brodhead).

Image: Stanford Engineering

One of the biggest criticisms of AR and VR, and specifically of Apple’s vision of what it calls “spatial computing,” is the bulkiness of the eyewear. There’s no doubt we’ve reached the point where some XR devices and experiences are amazing, but there’s a pretty annoying wall to get over before you can use them.

The equipment is heavy, ugly, and uncomfortable, and while the four-year-old Quest 2 is available for $200, prices only go up from there, with the $3,500 Apple Vision Pro threatening to explode your wallet.

Also: Why Meta’s Ray-Ban Smart Glasses are my favorite tech product this year

While we have long seen the promise of VR, and we all expected the technology to get better, we mostly had to trust the historical rate of technological advancement to assure us of a more realistic future. But now we’re starting to see real science that shows how all of this could happen.

A team of researchers at Stanford University, led by associate professor of engineering Gordon Wetzstein, has built a prototype of lightweight glasses that can display digital images in front of your eyes, blending them seamlessly with the real world. His team specializes in computational imaging and display technologies and is working to integrate digital information into our visual perception of the real world.

“Our headset looks to the outside world like a pair of everyday glasses, but what the wearer sees through the lenses is a rich world overlaid with vivid, full-color 3D computed imagery,” said Wetzstein. “Holographic displays have long been considered the ultimate 3D technique, but they have never achieved a major commercial breakthrough…Maybe now they have the killer application they’ve been waiting for all these years.”

Also: Best VR Headsets of 2024: Tested and Reviewed by Experts

So what is Wetzstein’s team doing differently from the work being done at Apple and Meta?


Prototype holographic glasses

Image: Stanford Engineering

The Stanford team is focusing on the fundamental technologies and scientific advances behind three-dimensional augmented reality and computational imaging. They are working on new ways to deliver more natural and immersive visual experiences, using sophisticated techniques such as metasurface waveguides and AI-driven holography.

Metasurface waveguide?

Let’s decode both words. A metasurface is an engineered material consisting of tiny structures precisely arranged on a surface. These structures are smaller than the wavelength of the light they interact with; for visible light, that means features measured in tens to hundreds of nanometers.

The idea is that these tiny nanostructures manipulate light in strategic ways, changing its phase, amplitude, and polarization as it passes through the material, while the waveguide channels that light across the lens and into the eye. This allows engineers to control light in very fine detail.
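To make that concrete, here is a minimal NumPy sketch (not the Stanford team’s code, and with purely illustrative numbers) of the underlying idea: imposing a position-dependent phase delay across a surface of sub-wavelength elements steers an incoming beam, much like a tiny prism.

```python
import numpy as np

# Illustrative only: a plane wave passing through a 1D phase profile that
# stands in for a row of sub-wavelength metasurface elements.
wavelength = 532e-9            # green light, in meters
k = 2 * np.pi / wavelength     # wavenumber

# 1,000 elements spaced 300 nm apart (a sub-wavelength pitch).
x = np.arange(1000) * 300e-9

# A linear phase ramp deflects the beam by ~5 degrees, like a blazed grating.
deflection_angle = np.deg2rad(5)
phase_profile = k * np.sin(deflection_angle) * x

incoming_field = np.ones_like(x, dtype=complex)          # uniform plane wave
outgoing_field = incoming_field * np.exp(1j * phase_profile)

# The far-field intensity (approximated with an FFT) shows the beam
# shifted off-axis instead of passing straight through.
far_field_intensity = np.abs(np.fft.fftshift(np.fft.fft(outgoing_field))) ** 2
print("Brightest far-field bin:", int(np.argmax(far_field_intensity)))
```

A real metasurface encodes far richer profiles than a simple ramp, but the principle, phase control at sub-wavelength scale, is the same.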

What we saw in both the Quest 3 and the Vision Pro was a traditional computer screen, just scaled down to sit right in front of our eyes. The display technology is impressive, but it is still an evolution of conventional screen output.


Image: Stanford Engineering

Stanford’s approach does away with that: rather than driving a display directly, the computer controls the path of light through the waveguide. Oversimplifying, it relies on the following three approaches (a rough code sketch follows the list):

Spatial light modulation: The computer’s CPU or GPU drives a spatial light modulator (SLM) to modulate the light entering the waveguide. SLMs are small devices that control the intensity, phase, or direction of light on a pixel-by-pixel basis. By manipulating those properties, the system shapes the light with extreme precision.

Complex light patterns: The device calculates and generates complex light patterns, dictating the specific ways in which light interacts with the metasurface. This, in turn, shapes the final image the user sees.

Real-time adjustment: The computer then makes real-time adjustments to those light patterns based on user interaction and environmental changes, with the goal of keeping the displayed content stable and accurate across varying lighting conditions and activities.
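As a rough illustration of the first two steps, here is a hedged NumPy sketch that uses the classic Gerchberg-Saxton iteration, a textbook stand-in rather than Stanford’s actual pipeline, to compute a phase-only SLM pattern whose far-field diffraction forms a simple target image.

```python
import numpy as np

def gerchberg_saxton(target_intensity, iterations=50):
    """Compute a phase-only SLM pattern whose far field matches a target image.

    Classic iterative stand-in for the 'complex light patterns' step; the real
    system uses calibrated, learned models rather than an idealized FFT.
    """
    target_amplitude = np.sqrt(target_intensity)
    # Start from a random phase guess at the SLM plane.
    field = np.exp(1j * 2 * np.pi * np.random.rand(*target_intensity.shape))

    for _ in range(iterations):
        # Propagate SLM plane -> image plane (modeled here as a Fourier transform).
        image_field = np.fft.fft2(field)
        # Keep the propagated phase, impose the target amplitude.
        image_field = target_amplitude * np.exp(1j * np.angle(image_field))
        # Propagate back and enforce the phase-only SLM constraint.
        field = np.exp(1j * np.angle(np.fft.ifft2(image_field)))

    return np.angle(field)  # the phase pattern to send to the SLM

# Toy target: a bright square on a dark background.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
slm_phase = gerchberg_saxton(target)
print("Phase pattern shape:", slm_phase.shape)
```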

You can see why AI is important in this application.

Pulling off all this optical magic isn’t easy, and AI has to do a lot of the heavy lifting. Here are some of the things AI must do to make this happen (a rough sketch of the underlying computation follows this list):

Improved image formation: AI algorithms use a combination of physically accurate modeling and learned component properties to predict and adjust how light propagates through holographic media.

Optimized wavefront manipulation: The AI must adjust the phase and amplitude of light at different stages to produce the desired visual result. It does this by precisely manipulating wavefronts within the XR environment.

Handling complex calculations: Of course, all of this requires an enormous amount of computation. The system has to model the behavior of light in the metasurface waveguide, accounting for diffraction, interference, and dispersion.
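To give a flavor of that computation, here is a hedged PyTorch sketch (again, not the team’s code) that models free-space diffraction with the angular spectrum method and uses gradient descent to optimize a phase pattern against a target image. An AI-driven pipeline of the kind described above would replace parts of this hand-written physics with learned, camera-calibrated components inside the same differentiable loop.

```python
import math
import torch

# Illustrative values only.
N, pixel_pitch, wavelength, distance = 256, 8e-6, 532e-9, 0.1

# Angular spectrum transfer function for propagating `distance` meters.
fx = torch.fft.fftfreq(N, d=pixel_pitch)
FX, FY = torch.meshgrid(fx, fx, indexing="ij")
arg = torch.clamp(1.0 / wavelength**2 - FX**2 - FY**2, min=0.0)
H = torch.exp(2j * math.pi * distance * torch.sqrt(arg))

def propagate(phase):
    field = torch.exp(1j * phase)                       # phase-only modulator
    return torch.fft.ifft2(torch.fft.fft2(field) * H)   # angular spectrum method

# Toy target: a bright square.
target = torch.zeros(N, N)
target[96:160, 96:160] = 1.0

phase = torch.zeros(N, N, requires_grad=True)
optimizer = torch.optim.Adam([phase], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    intensity = propagate(phase).abs() ** 2
    loss = torch.nn.functional.mse_loss(intensity / intensity.max(), target)
    loss.backward()
    optimizer.step()

print("Final reconstruction loss:", loss.item())
```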

While some of these challenges could be handled with traditional procedural computing, most of the processing requires capabilities beyond those of conventional approaches. AI helps in the following ways (a feedback-loop sketch follows the list):

Complex pattern recognition and adaptation: A defining characteristic of AI, especially machine learning, is the ability to recognize complex patterns and adapt to new data without explicit reprogramming. With AR holograms, this capability lets AI process thousands of variables related to light transmission (phase shifts, interference patterns, diffraction effects, and so on), then adjust dynamically as conditions change.

Real-time processing and optimization: That dynamic adjustment needs to happen in real time, and when we’re talking about light entering the eye, feedback has to be effectively instantaneous. Even a slight delay can cause problems for the wearer, ranging from mild discomfort to severe nausea. But with AI’s ability to process massive amounts of streaming data and make near-instantaneous adjustments, human-compatible light processing for AR vision becomes possible.

Machine learning from feedback: Machine learning allows the XR system to improve dynamically over time, processing camera feedback and continuously refining the projected hologram, reducing errors and enhancing image quality.

Handling non-linear and multi-dimensional data: The mathematics of how light interacts with complex surfaces, especially the metasurfaces used in holography, is highly non-linear and involves enormous numbers of data points. AI is built to manage this, leveraging machine learning’s ability to digest complex data sets and process them in real time.

Integrating diverse data types: The data needed to build holographic AR imagery isn’t limited to huge sets of X/Y coordinates. AI can combine optical data, spatial data, and environmental information, and use all of it to synthesize the final image.
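As one concrete illustration of the feedback idea, here is a hypothetical camera-in-the-loop refinement sketch in NumPy. Every function name is made up, and the “camera” is simulated with an idealized Fourier-optics model plus noise, but it shows the shape of a loop that measures what is actually displayed and nudges the hologram to reduce the error; in practice that feedback trains learned models rather than a hand-written update rule.

```python
import numpy as np

def capture_displayed_image(slm_phase):
    """Hypothetical stand-in for the physical display-plus-camera path.

    Simulated here with an ideal Fourier-optics model plus sensor noise;
    in a real system this would be a camera observing the waveguide output.
    """
    field = np.fft.fft2(np.exp(1j * slm_phase))
    image = np.abs(field) ** 2
    image /= image.max()
    return image + 0.01 * np.random.randn(*image.shape)

def refine_once(slm_phase, target, step_size=0.5):
    """One feedback iteration: compare the measured image with the target,
    blend the amplitude constraint toward the correction, and back-propagate
    to an updated phase-only hologram."""
    observed = capture_displayed_image(slm_phase)
    corrected = np.clip(observed + step_size * (target - observed), 0.0, None)
    image_field = np.sqrt(corrected) * np.exp(
        1j * np.angle(np.fft.fft2(np.exp(1j * slm_phase)))
    )
    return np.angle(np.fft.ifft2(image_field))

# Toy target and an initial random hologram.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
phase = 2 * np.pi * np.random.rand(256, 256)

for _ in range(20):
    phase = refine_once(phase, target)
print("Refined hologram shape:", phase.shape)
```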

What does it all mean?

Without a doubt, the biggest factor holding back the popularity of XR and spatial computing devices is the bulk of the headsets. If the functionality of a Quest 3 or a Vision Pro were available in a traditional pair of glasses, the potential would be enormous.

Also: Meta Quest 2 vs Quest 3: Which VR headset should you buy?

There is a limit to how small glasses can be if they have to embed a traditional display. But by changing the optical properties of the lenses themselves, scientists could turn the most widely adopted vision device in history, our eyeglasses, into an augmented reality display.

Unfortunately, what the Stanford team has right now is just a prototype. The technology still has to make its way from basic science research to the engineering lab and then into manufacturing. While the Stanford team won’t predict how long that will take, it’s probably fair to assume the technology is at least five to 10 years away.

But don’t let that discourage you. It’s been about 17 years since the first iPhone launched, and even in the device’s first three or four years we saw huge improvements. I expect we’ll see similar improvements over the next few years in the current crop of XR and spatial computing devices.

Of course, the future is out there. What will this look like in 17 years? Perhaps the Stanford team has given us our first glimpse.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
