27 September 2023

In the bag: Why the future of virtual reality lies in a chip packet


Courtney Linder* says researchers have found a way to recover images of the world reflected in a potato chip bag.

Photo: jakkapan21

Mirrors aren’t the only shiny objects that reflect our surroundings.

It turns out a humble bag of potato chips can pull off the same trick, as scientists from the University of Washington, Seattle, have made it possible to recreate detailed images of the world from reflections in the snack’s glossy wrapping.

The scientists took their work a step further by predicting how a room’s likeness might appear from different angles, essentially “exploring” the room’s reflection in a bag of chips as if they were actually present.

This is analogous to a classical problem in computer vision and graphics: view synthesis, or the ability to create a new, synthetic view of a specific subject based on other images, taken at various angles.
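For readers curious what view synthesis looks like in practice, here is a deliberately minimal sketch (not the researchers' method): it back-projects each pixel of a source image into 3D using a depth map, moves the points into a hypothetical new camera pose, and re-projects them. The function and parameter names are illustrative assumptions.

```python
import numpy as np

def synthesize_view(rgb, depth, K, pose):
    """Toy view synthesis: reproject a source RGB image, with per-pixel
    depth, into a new camera pose. K is a 3x3 intrinsics matrix; pose is
    a 4x4 rigid transform from the source camera to the new camera."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).astype(float)
    # Back-project: 3D point = depth * K^-1 * homogeneous pixel.
    pts = np.linalg.inv(K) @ pix.T * depth.reshape(1, -1)
    # Move the points into the new camera's coordinate frame.
    pts_new = (pose @ np.vstack([pts, np.ones((1, pts.shape[1]))]))[:3]
    # Project back to pixel coordinates and splat the source colours.
    proj = K @ pts_new
    uv = np.round(proj[:2] / np.clip(proj[2], 1e-6, None)).astype(int).T
    out = np.zeros_like(rgb)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & \
         (uv[:, 1] >= 0) & (uv[:, 1] < h) & (pts_new[2] > 0)
    out[uv[ok, 1], uv[ok, 0]] = rgb.reshape(-1, rgb.shape[-1])[ok]
    return out
```

Real systems must also fill the holes this naive "splatting" leaves behind, which is part of what makes view synthesis a hard research problem.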

There is surprisingly detailed information hidden in the glint of light reflected off a lustrous object.

Scientists can deduce the object’s shape, composition, and condition — like whether it’s wet or dry, round or flat, rough or polished — all from the patterns of that light.

Usually, these reflected images are so distorted that the human eye may not even notice they're there.

“Remarkably, images of the shiny bag of chips contain sufficient clues to be able to reconstruct a detailed image of the room, including the layout of lights, windows, and even objects outside that are visible through windows,” the researchers noted in a paper published to the preprint server arXiv earlier this year.

These scientists were inspired by Massachusetts Institute of Technology researchers, who proved back in 2014 that it’s possible to turn everyday objects, like a bag of potato chips, into “visual microphones” by using high-speed video to study minute vibrations in the object.

Those vibrations are then extracted to partially recover the sound that produced them.

To create the environmental reconstructions, which the researchers call “specular reflectance maps”, or SRMs, they used handheld RGB-D sensors to take 360-degree video of certain glassy, reflective objects.

The sensors fuse the depth information recorded at each pixel with regular RGB imaging, which uses red, green, and blue light to create an array of colours.
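As a rough illustration of what that fusion produces, an RGB-D frame can be thought of as a four-channel image: three colour values plus one depth reading per pixel. The sizes and names below are assumptions for illustration only.

```python
import numpy as np

# Illustrative shapes only: one 640x480 frame from a hypothetical RGB-D sensor.
h, w = 480, 640
rgb = np.zeros((h, w, 3), dtype=np.uint8)    # red, green, blue per pixel
depth = np.zeros((h, w), dtype=np.float32)   # distance from the sensor, in metres

# "Fusing" the two streams yields one four-channel (R, G, B, D) image.
rgbd = np.dstack([rgb.astype(np.float32), depth])
print(rgbd.shape)   # (480, 640, 4)
```

That extra depth channel is exactly what lets an algorithm recover the shape of the chip bag, and hence work out where each glint of reflected light came from.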

One example of a consumer RGB-D sensor is Microsoft’s Kinect, which the company introduced a decade ago for its Xbox 360 gaming console.

The Kinect, which powers movement-based games like Just Dance, allows players to “grab” items, hit dance moves, or fight opponents through depth perception.

In one instance, the team recorded footage of a bag of Corn Cho, a type of puffy corn chips covered in chocolate.

Using what they call an SRM estimation algorithm, the scientists turned the morphed reflections into approximations of what the real environment might look like.
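The paper's actual algorithm is far more sophisticated, but the core idea can be sketched in a few lines: treat each shiny pixel as a tiny mirror, use the surface normal (recovered from depth) and the viewing direction to compute where the reflected light came from, and accumulate the observed colours into an environment map indexed by direction. Everything below is an illustrative assumption, not the authors' code.

```python
import numpy as np

def estimate_srm(normals, view_dirs, colours, res=64):
    """Toy specular-reflectance-map sketch: for each observed pixel,
    compute the mirror-reflection direction r = 2(n.v)n - v, then
    average the observed colour into an environment map binned by
    that direction's spherical coordinates."""
    dots = np.sum(normals * view_dirs, axis=-1, keepdims=True)
    refl = 2 * dots * normals - view_dirs
    # Convert each reflection direction to (polar, azimuth) map indices.
    theta = np.arccos(np.clip(refl[..., 2], -1, 1))        # polar angle
    phi = np.arctan2(refl[..., 1], refl[..., 0]) + np.pi   # azimuth
    u = np.minimum((phi / (2 * np.pi) * res).astype(int), res - 1)
    v = np.minimum((theta / np.pi * res).astype(int), res - 1)
    srm = np.zeros((res, res, 3))
    count = np.zeros((res, res, 1))
    np.add.at(srm, (v, u), colours)   # unbuffered accumulation per bin
    np.add.at(count, (v, u), 1)
    return srm / np.maximum(count, 1)  # average colour per direction
```

A curved bag is useful precisely because its many surface orientations sample many reflection directions at once, filling in more of the map than a flat mirror fragment would.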

While the simulated images are still muddy and distorted, the researchers were able to recover great detail, like the image of a man reflected in a window.

The scientists’ algorithm works on virtually any shiny object.

In another scenario, they used a porcelain cat statue to recover the alignment of fluorescent lights on the ceiling.

It took the algorithm an average of two hours per object to turn the reflections into representations of the environment.

This work could prove problematic, though.

Child predators or stalkers could download an image from a social media site, like Instagram, without the creator’s consent.

Then, they could deploy the new algorithm to find out private information about where the image creator lives.

Luckily, Instagram images don’t contain depth information (yet), so this dystopian use of the algorithm isn’t likely at the moment.

In video game design — particularly in augmented reality and virtual reality applications — shiny objects look a bit off because it’s challenging to reproduce the reflections from every angle from which you can view that object.

The researchers found that by first deconstructing the reflections, it’s easier to create realistic renderings of the reflective object in simulated settings.

So, the hope is to see better gaming graphics in the future.

In the meantime, to test this theory out, pay attention to mirrors, windows, and other shiny objects in the games you play.

There’s a pretty good chance their reflections are nowhere near as advanced as those seen in a simple bag of potato chips.

* Courtney Linder is Senior News Editor at Popular Mechanics. She tweets at @linderrama.

This article first appeared at www.popularmechanics.com.
