Penn Researchers Provide New Insights Into How People Navigate Through the World

The ability to assess surroundings and move through the world is a skill shared by many animals, including humans, yet the brain mechanisms that make it possible are poorly understood.

Now, in a paper published in the Proceedings of the National Academy of Sciences, researchers at the University of Pennsylvania have offered new insights into how people understand visual scenes and how they figure out which paths to take to navigate through them.

The research was conducted by Russell Epstein, a psychology professor in Penn’s School of Arts & Sciences, and postdoctoral researcher Michael Bonner.

“I was interested in this question,” Bonner said, “because I was thinking about the basic, most fundamental things that humans and other animals use vision for, and it occurred to me that navigation is one of them.”

To investigate this, the researchers conducted two separate experiments. In the first experiment, they created a set of artificially rendered rooms with different arrangements of doorways through which people could exit. In the second experiment, the researchers used images of real-world scenes. In both experiments, the participants were asked to look at the scenes while performing simple visual tasks that had nothing to do with navigation.

The researchers used functional magnetic resonance imaging, or fMRI, which let them measure neural activity indirectly. fMRI tracks blood oxygenation levels across the brain: when a region becomes active, it draws in more oxygenated blood, and that change can be measured.
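To illustrate why the measurement is indirect, the oxygenation signal fMRI picks up is commonly modeled as brief neural events smeared through a slow hemodynamic response. The Python sketch below uses a conventional double-gamma response function purely for illustration; it is not the study’s analysis pipeline.

    import numpy as np
    from scipy.stats import gamma

    # Canonical double-gamma hemodynamic response function (HRF), sampled
    # at 1 s. The shape parameters (6 and 12) are conventional approximations.
    t = np.arange(0, 30)
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
    hrf /= hrf.sum()

    # Brief neural events (e.g., scene onsets at 10 s, 40 s, 70 s) produce
    # the delayed, blurred blood-oxygenation response the scanner measures.
    events = np.zeros(120)
    events[[10, 40, 70]] = 1.0
    bold = np.convolve(events, hrf)[:120]
    print(bold.argmax())  # the peak arrives seconds after the first event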

Instead of just looking at the overall level of activity in each part of the brain, the researchers examined the pattern of activity. This revealed which kinds of scenes a given area distinguishes between, and therefore what kind of information that brain region represents.
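As a rough illustration of this pattern-based approach, often called multivoxel pattern analysis, the sketch below trains a classifier to tell scene conditions apart from activity patterns. The data are random stand-ins and the variable names are illustrative; this is not the study’s code.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    # Stand-in data: one row of voxel responses per trial for one brain
    # region, labeled by which doorway layout was shown on that trial.
    rng = np.random.default_rng(0)
    n_trials, n_voxels = 80, 200
    voxel_patterns = rng.normal(size=(n_trials, n_voxels))
    layout_labels = rng.integers(0, 4, size=n_trials)  # four possible layouts

    # If a classifier can recover the layout from the activity pattern
    # (not from its overall level), the region carries layout information.
    accuracy = cross_val_score(LinearSVC(), voxel_patterns,
                               layout_labels, cv=5).mean()
    print(f"cross-validated decoding accuracy: {accuracy:.2f}")  # chance = 0.25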

In the first experiment, which was tightly controlled because the researchers built and manipulated the rooms themselves, they found strong evidence that a particular region called the occipital place area, part of a network known as the scene-selective visual cortices, encodes the spatial layout of the open pathways in a scene, what the researchers call “navigational affordances.”

This was surprising, as previous theories had suggested that other brain regions were more likely to perform this kind of encoding. To follow up, the researchers ran the second experiment with more naturalistic stimuli, to see whether the same results would hold when people looked at real-world scenes containing many other complex factors.

In addition to confirming that this brain region extracts information about navigational affordances, they found that it does so automatically, even when participants were not told to look for pathways.
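One simple way to picture what a “navigational affordance” code might look like, offered purely as an illustration rather than as the paper’s actual model, is a vector marking which egocentric directions of a scene are open for movement:

    import numpy as np

    # Illustration only: summarize a scene's navigational affordances as
    # a coarse vector over egocentric directions, 1.0 where an open
    # pathway exists and 0.0 where movement is blocked.
    DIRECTIONS = ["left", "center", "right"]

    def affordance_vector(open_doorways):
        """open_doorways: set of direction names with an unobstructed exit."""
        return np.array([1.0 if d in open_doorways else 0.0
                         for d in DIRECTIONS])

    room_a = affordance_vector({"left"})           # doorway on the left only
    room_b = affordance_vector({"left", "right"})  # doorways on both sides

    # Scenes with similar affordance vectors should evoke similar activity
    # patterns in a region that encodes this kind of information.
    cos = np.dot(room_a, room_b) / (np.linalg.norm(room_a) * np.linalg.norm(room_b))
    print(f"affordance similarity: {cos:.2f}")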

“When you look at a room, or some other kind of scene, you're not just interested in what it looks like or what shape it is,” Epstein said. “You’re also interested in what you can do in it. This means figuring out where you can go, where you can walk to and where your path is blocked. Our results suggest that the visual system is pretty attuned to this kind of information, to the extent that it extracts it automatically, even when it’s not required.”    

The main goal of this research, said Bonner, is to understand how visual computations are carried out in the brain, which may lead to advances in medicine.

“Navigational difficulties are one of the primary impairments in Alzheimer’s disease and stroke patients,” he said. “The more that we know about how the navigational system works in the brain and how these functions are broken up into different cortical regions, the better we might be able to deal with these kinds of neurological impairments and to design rehabilitative treatments.”

It may also contribute to the development of navigational aids for blind people.

According to Bonner, another possible benefit of understanding how biological systems carry out these functions lies in computer science: researchers might gain insight into general-purpose algorithms for accomplishing the same feats.

“There's currently nothing that exists in computer science that comes even close to being able to navigate as efficiently as biological organisms,” said Bonner. “If we know how brains do this, maybe we could design autonomous vehicles using the same principles.”

The researchers are starting to get a much deeper understanding of the visual system and biological computation in general.

“The thing that I find exciting,” Bonner said, “is the idea of revealing the fundamental biological processes that allow us to carry out everyday behaviors like looking out at the world and figuring out where we can go. We can identify these basic behaviors of humans and other animals, try to map out their neurological basis and now, with advances in computer vision and biologically inspired artificial intelligence, we can even start to get at the specific computations they're performing.”

This research was funded by the National Institutes of Health’s National Eye Institute grants 2R01 EY022350 and R21 EY027047.
