
Scientists reconstruct visual stimuli by reading brain activity

By Ben Coxworth

September 23, 2011


Scientists have created a system that is able to visually reconstruct images that people have seen by reading their brain activity

In the 1983 film Brainstorm, Christopher Walken played a scientist who was able to record movies of people's mental experiences, then play them back into the minds of other people. Pretty far-fetched, right? Well, maybe not. Using functional Magnetic Resonance Imaging (fMRI) and computer models, researchers at the University of California, Berkeley, have been able to visually reconstruct what human subjects saw as they watched movie trailers, based on their brain activity - in other words, they could see what the people's brains were seeing.

The study involved placing three subjects in an MRI scanner and having them watch two sets of Hollywood movie trailers. The fMRI was used to measure blood flow through the visual cortex of their brains as they watched the trailers. A computer used this data to virtually divide each brain into small three-dimensional cubes called voxels. A computer model of each voxel was then created, incorporating information about how that real-life section of the brain responded to different types of visual stimuli. In this way, the computer was able to match specific voxel activity to specific visual patterns from the trailers - it acted as a Rosetta Stone, of sorts.
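For the programmatically inclined, the gist of that voxel-by-voxel modeling can be illustrated with a toy Python sketch. To be clear, this is not the researchers' code - the feature values and brain responses below are random stand-ins, and the actual study used far more sophisticated models of the visual system:

    import numpy as np

    rng = np.random.default_rng(0)
    n_timepoints, n_features, n_voxels = 1000, 50, 200

    # Stand-ins for visual features extracted from each movie frame, and
    # for the fMRI responses measured in each voxel at the same moments.
    X = rng.normal(size=(n_timepoints, n_features))
    Y = rng.normal(size=(n_timepoints, n_voxels))

    # Fit one regularized linear model per voxel: how does this little
    # chunk of visual cortex respond to each type of visual feature?
    alpha = 1.0
    W = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

    def predict_voxels(features):
        """Predict the voxel activity a given stimulus should evoke."""
        return features @ W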

The resulting movie reconstruction algorithm was then fed 18 million seconds of random YouTube videos, for which it predicted the corresponding voxel activity. For each image in the trailers, it then chose the 100 YouTube images whose predicted voxel activity most closely resembled the activity actually measured for that trailer image. These 100 images were combined into one blurry composite image that resembled the image from the trailer. When strung together, those composite images presented a somewhat trippy yet recognizable facsimile of the complete trailer.
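Again purely as a hypothetical illustration - not the study's actual method or data - the matching-and-averaging step might look something like this in Python:

    import numpy as np

    rng = np.random.default_rng(1)
    n_library, n_voxels = 5000, 200

    # Stand-ins for the YouTube library: one frame per clip, plus the
    # voxel activity the fitted models predict each clip should evoke.
    library_frames = rng.uniform(size=(n_library, 32, 32))
    predicted_activity = rng.normal(size=(n_library, n_voxels))

    def reconstruct(observed, k=100):
        """Average the k library frames whose predicted voxel activity
        best correlates with the activity measured in the scanner."""
        p = predicted_activity - predicted_activity.mean(axis=1, keepdims=True)
        o = observed - observed.mean()
        scores = (p @ o) / (np.linalg.norm(p, axis=1) * np.linalg.norm(o))
        top = np.argsort(scores)[-k:]
        return library_frames[top].mean(axis=0)  # one blurry composite frame

    observed = rng.normal(size=n_voxels)  # stand-in for a measured response
    composite = reconstruct(observed)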

So far, the system can only reconstruct movie trailers that subjects have already viewed. As the UC Berkeley technology is developed, however, it is hoped that it could be used to visualize what is happening in the minds of stroke victims, coma patients, and other people unable to adequately communicate. It could also be used to improve human-computer interfaces, such as those that allow handicapped individuals to control devices using their thoughts.

The video below shows parts of the original trailers, with the reconstructions playing alongside. Below it is a video that displays images from the trailers, with some of the YouTube images that were used to create their composite equivalents.

The research was published yesterday in the journal Current Biology.

About the Author
Ben Coxworth
An experienced freelance writer, videographer and television producer, Ben is particularly fanatical about innovation in human-powered transportation, film-making gear, environmentally friendly technologies and anything that's designed to go underwater. He lives in Edmonton, Alberta, where he spends a lot of time going over the handlebars of his mountain bike, hanging out in off-leash parks, and wishing the Pacific Ocean wasn't so far away.
8 Comments

Many years ago, I knew this would happen. One day in the future, analysis of "memory" chemicals in the brain of a murder victim will show who the murderer is.

Salim Khalaf
24th September, 2011 @ 05:30 am PDT

Fascinating and scary at the same time.

Joel Detrow
24th September, 2011 @ 06:25 pm PDT

This has already been demonstrated using cats as specimens in a film on YouTube - they wired electrodes into a cat's brain and showed on a monitor what the cat was thinking/seeing. So, not exactly a new discovery, more likely the first time they've gone public with it. When I've found the link I'll post it.

Jamie_S
25th September, 2011 @ 02:44 pm PDT

Further to my last comment, here you go...



Jamie_S
25th September, 2011 @ 02:45 pm PDT

"The resulting movie reconstruction algorithm was then fed 18 million seconds of random YouTube videos"

Okay. That's not very practical for broader use.

"These 100 images were combined into one blurry composite image"

Aaaaaand that's never going to yield decent results.

So, this is cool and interesting, but it's sort of designed to be terrible. When I first saw the result, I said, "That doesn't look like Steve Martin so much as it looks like a random guy in a black T-shirt making a YouTube video." Since it's clear from the sample set that that's EXACTLY what it is, it suggests this has great potential if the output can be decoupled from YouTube.

Tysto
26th September, 2011 @ 07:36 am PDT

@Jamie_S:

Great post. Not sure how real it is, but I think the big differentiator with the results of this article is the ability to retrieve the image from memory rather than view it in real time.

Knowledge Thirsty
26th September, 2011 @ 10:49 am PDT

Pirates!

Nitrozzy7
28th September, 2011 @ 10:56 pm PDT

Reminds me of the visual cortex recording technology depicted in Wim Wenders' (then) futuristic epic, "Until the End of the World" - Max von Sydow playing the brilliant and obsessed scientist whose masterpiece gizmo allows him to transmit imagery to his blind wife (and is later modded to record one's dreams).

kenstru
8th November, 2011 @ 10:42 am PST