26.04.2024

CornerCameras system made at MIT lets you see through walls

Seeing through walls and spying around corners may sound like a superpower, but advances in technology are now making this a reality.

A system, dubbed CornerCameras, developed at MIT, uses smartphone cameras to peer round corners and check what’s on the other side.

The ability to see around obstructions could help firefighters find people in burning buildings or enable self-driving cars to detect pedestrians in their blind spots.


HOW IT WORKS

The project was developed at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

CSAIL’s imaging system uses reflected light to detect objects or people in a hidden scene and measure their speed and direction of travel, all in real time.

To explain how it works, imagine that you’re walking down an L-shaped hallway and have a wall between you and some objects around the corner.

Those objects reflect a small amount of light on the ground in your line of sight, creating a fuzzy shadow that is referred to as the ‘penumbra’.

Using video of the penumbra, the system can stitch together a series of one-dimensional images that reveal information about the hidden objects.
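The geometry described above can be sketched numerically. A corner edge acts as a natural occluder: ground points at wider angles around the corner see more of the hidden scene, so the penumbra brightness is roughly a running integral of the hidden scene, and differentiating it recovers a one-dimensional image. Below is a minimal synthetic sketch of that idea in NumPy, not the CSAIL code; the scene, angles, and noise levels are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D hidden scene: brightness versus angle around the corner,
# with one bright object spanning angle bins 40-44 (arbitrary units).
scene = np.zeros(90)
scene[40:45] = 1.0

# A ground point at angle t receives light from hidden angles 0..t only,
# so the ideal penumbra profile is the cumulative sum of the scene.
penumbra = np.cumsum(scene)

# Simulate 60 noisy video frames of that penumbra.
frames = penumbra + rng.normal(0.0, 0.5, size=(60, penumbra.size))

# Average the frames to suppress noise, then differentiate along the
# angle axis to undo the cumulative integral and recover the 1D scene.
recovered = np.diff(frames.mean(axis=0))

print(recovered.argmax())  # angle bin of the brightest recovered object
```

Averaging before differentiating matters here: differentiation amplifies per-frame noise, which is one reason the real system stitches together dozens of frames rather than relying on a single image.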

Although the system can’t be run directly from a smartphone at the moment, as it currently requires a laptop and specialist software, this may not be far off.

Katherine Bouman, who was lead author on a new paper about the system, said: ‘Even though those objects aren’t actually visible to the camera, we can look at how their movements affect the penumbra to determine where they are and where they’re going.

‘In this way, we show that walls and other obstructions with edges can be exploited as naturally-occurring “cameras” that reveal the hidden scenes beyond them.

‘If a little kid darts into the street, a driver might not be able to react in time.

‘While we’re not there yet, a technology like this could one day be used to give drivers a few seconds of warning time and help in a lot of life-or-death situations.’

Most approaches for seeing around obstacles involve special lasers.

This image shows the camera pointing at the penumbra, the fuzzy shadow created at a corner wall by reflected light. Unseen objects (B) cast a small shadow (in grey) by reflecting light that is joined by shadows created by seen objects (A) which the system observes

Researchers shine lasers on specific points that are visible to both the observable and hidden scene, and then measure how long it takes for the light to return.

However, these so-called ‘time-of-flight cameras’ are expensive and can easily get thrown off by ambient light, especially outdoors.

CSAIL’s technique doesn’t require actively projecting light into the space, using natural light sources instead.

It also works in a wider range of indoor and outdoor environments and with off-the-shelf consumer cameras.

From viewing video of the penumbra, CornerCameras generates one-dimensional images of the hidden scene.

A single image isn’t particularly useful, since it contains a fair amount of noisy data.

From viewing video (pictured – original frame) of the penumbra, CornerCameras generates one-dimensional images of the hidden scene. The system can then stitch together a series of images to reveal information about the objects around the corner

By observing the scene over several seconds and stitching together dozens of distinct images, the system can distinguish objects in motion.

The team was surprised to find that CornerCameras worked in a range of challenging situations, including in the rain.

‘Given that the rain was literally changing the colour of the ground, I figured that there was no way we’d be able to see subtle differences in light on the order of a tenth of a per cent,’ added Dr Bouman.

‘But because the system integrates so much information across dozens of images, the effect of the raindrops averages out, and so you can see the movement of the objects even in the middle of all that activity.’

The system does still have some limitations, however.

CSAIL’s technique doesn’t require actively projecting light into the space, using natural light sources instead. By stitching together dozens of distinct images and looking for changes in colour, the system can distinguish objects in motion in the hidden scene

It doesn’t work if there’s no light in the scene, and can have issues if there’s low light in the hidden scene itself.

It can also get tripped up if light conditions change, for example if the scene is outdoors and clouds are constantly moving across the sun.

With smartphone-quality cameras, the signal also gets weaker as you get farther away from the corner.

The researchers plan to address some of these challenges in future papers, and will also try to get it to work while in motion.

The team will soon be testing it on a wheelchair, with the goal of eventually adapting it for cars and other vehicles.
