In our daily lives, we move through the environment, grasp objects, and perform countless actions that are fundamental to our survival. These actions seem natural and effortless even though the brain must carry out an extremely complex analysis of the patterns of light and sound reaching our eyes and ears in order to determine the structure and shape of surrounding objects and events. The faculty's common interest is to understand the perceptual mechanisms that allow humans to accomplish these tasks.
Fulvio Domini studies how humans derive the 3D structure of the world through vision. This is a very difficult problem for both artificial and biological systems, since objects are three-dimensional but our eyes register only their two-dimensional projection (the retinal image), much like the film in a camera. Moreover, as an observer moves through a 3D environment, continuous geometrical distortions produce deformations of the retinal images. Nevertheless, the brain makes sense of these unstable images, giving rise to the conscious experience of a stable world. This puzzling phenomenon is the main focus of Fulvio Domini's research; the problem in which he is most interested is how the visual system interprets dynamic images.
Laurie Heller studies the human ability to understand what events are happening in the environment through sound. While the sound waveform
contains a great deal of potential information about its source's
properties, no single acoustic feature specifies a particular object or
action. Information about sound sources is complex and time-varying, and it
is not known to what degree or in what form it is exploited by human
listeners. Perceptual experiments are conducted to address how high-level
auditory processes extract information about sound sources from the waveform.
Mike Tarr uses multiple methods (psychophysics, fMRI, ERP, computational modeling) to explore how biological vision systems recognize objects across variations in task, viewing conditions, and experience. Specific projects include: understanding the cognitive and neural underpinnings of expertise acquisition, including face recognition; how observers compensate for the effects of lighting, that is, shading and shadows, in a scene; and the roles of surface properties such as color and texture in visual object representation and recognition.
William Warren studies the visual control
of human action -- specifically, how visual information, the dynamics
of the motor system, and the physics of the world collude to yield
adaptive behavior. One project studies the visual guidance of locomotion
in complex dynamic environments from information such as optic flow,
using an immersive virtual reality lab. Dynamical models of human
behavior are simulated, with applications to mobile robotics, computer
animation, and vehicular control. Related work investigates the
strategies and spatial knowledge people use to navigate larger-scale
environments, also using the VR lab. Other projects that investigate
how perception and action complement the physics of the world include
how infants learn to bounce in a baby-bouncer, how adults bounce
a ball on a racquet, and how outfielders catch fly balls.