Studies in the Locomotion project include the development of a dynamic model of online steering; interactions between pedestrians while walking; multisensory control strategies for steering and obstacle avoidance; the roles of fixation, attention, and walking speed during moving obstacle avoidance; and the effect of peripheral visual field loss on mobility.
Dynamical Model of Online Steering
We have developed an empirical model of human locomotor path formation based on experiments conducted in the VENLab. We have identified a set of elementary steering behaviors and collected data on how participants: (a) steer to a stationary goal, (b) intercept a moving target, (c) avoid a stationary obstacle, (d) avoid a moving obstacle. Each component in the steering dynamics model describes how an agent steers (the time-course of heading) during one of these four behaviors. The model simulates the observed human paths, and in the case of obstacle avoidance, the routes selected around the obstacle. We plan to extend this model to a variety of novel and more complex situations (see below).
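To give the flavor of this class of model, the sketch below implements heading dynamics in which the goal acts as an attractor of the agent's heading and each obstacle acts as a repeller. The functional forms and constants are illustrative assumptions for this sketch, not our fitted parameters.

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + math.pi) % (2 * math.pi) - math.pi

# Illustrative constants (assumptions, not fitted parameter values):
B = 3.25                          # damping on turning rate
K_G, C1, C2 = 7.5, 0.40, 0.40     # goal attraction and its distance decay
K_O, C3, C4 = 198.0, 6.5, 0.80    # obstacle repulsion, angular/distance decay

def step(x, y, phi, dphi, goal, obstacles, v=1.0, dt=0.01):
    """One Euler step of heading dynamics: the goal attracts heading,
    obstacles repel it; position advances at a fixed walking speed v."""
    psi_g = math.atan2(goal[1] - y, goal[0] - x)    # direction of the goal
    d_g = math.hypot(goal[0] - x, goal[1] - y)      # distance to the goal
    # Goal attraction grows with goal angle and is modulated by distance.
    ddphi = -B * dphi - K_G * wrap(phi - psi_g) * (math.exp(-C1 * d_g) + C2)
    for ox, oy in obstacles:
        a = wrap(phi - math.atan2(oy - y, ox - x))  # obstacle angle
        d_o = math.hypot(ox - x, oy - y)
        # Repulsion decays with both obstacle angle and obstacle distance.
        ddphi += K_O * a * math.exp(-C3 * abs(a)) * math.exp(-C4 * d_o)
    dphi += ddphi * dt
    phi = wrap(phi + dphi * dt)
    return x + v * math.cos(phi) * dt, y + v * math.sin(phi) * dt, phi, dphi
```

With no obstacles, an agent started at the origin turns onto a roughly straight path to the goal; adding obstacle terms bends the simulated path around them, so the model also produces route selection.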
Interactions Between Pedestrians
Our steering dynamics model can accurately simulate an individual agent's locomotor behavior with respect to objects in the environment. The aim of this study is to extend our investigation to interactions between agents, and to test whether the model generalizes to pedestrian dynamics or requires new components (e.g., for pursuit, following, or escape behavior). We will begin by studying scenarios with two participants, then use the results to program a dynamic virtual agent, and conduct studies of similar and novel scenarios with a human participant and a virtual agent. Finally, we aim to simulate pedestrian traffic flow and compare the results with observations of pedestrian traffic.
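As a hypothetical example of the kind of new component following behavior might require, the one-dimensional sketch below has a follower accelerate to cancel its speed difference with a leader while maintaining a preferred distance. The control law, gains, and preferred distance are our own assumptions for illustration, not a model we have tested.

```python
def follow_step(x_f, v_f, x_l, v_l, dt=0.01,
                c_speed=2.0, c_dist=1.0, d_pref=1.0):
    """One Euler step of a hypothetical 1-D following law: accelerate to
    null the speed difference with the leader (x_l, v_l) and any deviation
    from an assumed preferred following distance d_pref."""
    accel = c_speed * (v_l - v_f) + c_dist * ((x_l - x_f) - d_pref)
    v_f += accel * dt
    return x_f + v_f * dt, v_f
```

Under this law the follower's speed converges to the leader's and the gap settles at the preferred distance, which is the qualitative signature one would look for in two-participant following data.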
Multisensory control strategies
The aim of this study is to determine the contributions of visual, podokinetic, proprioceptive, and vestibular information to steering control. This will allow us to test three strategies: (a) the generalized heading strategy, where the heading error between an agent's current heading direction and the goal direction is specified by all of these variables, (b) the optic flow strategy, in which the goal direction is specified by optic flow and the visual direction of the target, (c) the egocentric direction strategy, where the heading error is specified by idiothetic information and the egocentric direction of the goal.
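These strategies make different predictions when heading and the visual scene are dissociated. The toy simulation below (our own illustration, assuming a 10-degree prism-like rotation of the visual scene) contrasts strategies (b) and (c): under the optic flow strategy the rotation displaces the flow-specified heading and the visual goal direction together, so the perceived heading error is unchanged and the walker goes straight; walking along the displaced egocentric goal direction instead produces a curved, longer path.

```python
import math

DELTA = math.radians(10)  # assumed prism-like rotation of the visual field

def walk(goal, prism, use_flow, v=1.0, dt=0.05, max_steps=1000):
    """Walk from the origin toward goal; return path length accumulated
    until the walker is within 0.1 m of the goal."""
    x = y = length = 0.0
    for _ in range(max_steps):
        if math.hypot(goal[0] - x, goal[1] - y) < 0.1:
            break
        true_dir = math.atan2(goal[1] - y, goal[0] - x)
        if use_flow:
            # (b) Optic flow strategy: flow-specified heading and visual
            # goal direction are rotated together, so their difference is
            # unaffected by the prism -> walk straight at the goal.
            step_dir = true_dir
        else:
            # (c) Egocentric direction strategy: walk along the perceived
            # egocentric direction of the goal, which the prism displaces.
            step_dir = true_dir + prism
        x += v * dt * math.cos(step_dir)
        y += v * dt * math.sin(step_dir)
        length += v * dt
    return length

flow_len = walk((10.0, 0.0), DELTA, use_flow=True)
ego_len = walk((10.0, 0.0), DELTA, use_flow=False)
```

In this toy setup the flow strategy's path length is close to the straight-line goal distance, while the egocentric strategy yields a measurably longer, curved path; the corresponding empirical signatures are what the study is designed to detect.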
Complex behavior: Avoiding multiple moving obstacles
Here we seek to extend the steering dynamics model to more complex scenarios involving multiple moving obstacles. We will investigate the roles of sequential fixation, attention, and walking speed in the routes participants select around two or more moving obstacles. By manipulating these variables, we hope to identify the strategies people use when walking through cluttered spaces and to incorporate those strategies into the steering dynamics model.
Modeling mobility with peripheral field loss
This study seeks to test and simulate the effects of peripheral visual field loss (PFL) on mobility. We will focus on cases of severe PFL ("tunnel vision") resulting from retinitis pigmentosa. We propose to investigate failures of stationary and moving obstacle avoidance in PFL patients and matched controls in the VENLab. We hypothesize that the higher incidence of collisions in PFL patients stems from a failure to detect obstacles that fall outside the residual field of view; we plan to modify the steering dynamics model to try to simulate the observed locomotor paths.
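One simple way this hypothesis could be expressed in a steering model (a sketch under our own assumptions about the residual field, not the planned modification itself) is to gate each obstacle's contribution by whether it falls inside the field of view, so that undetected obstacles exert no repulsion:

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def detected(x, y, heading, obstacle, half_fov):
    """True if the obstacle lies within +/- half_fov of the walker's
    heading. In a tunnel-vision simulation, only detected obstacles
    would contribute a repulsion term; the field size is an assumption."""
    bearing = math.atan2(obstacle[1] - y, obstacle[0] - x)
    return abs(wrap(bearing - heading)) <= half_fov
```

With a narrow residual field (e.g., 20 degrees total), an obstacle well off the line of travel is never detected and the simulated walker steers as if it were absent, producing exactly the kind of collision the hypothesis predicts.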
Studies in the Navigation project include the integration of long-range navigation with on-line visual control strategies; the geometric structure of spatial knowledge used in navigation; its dependence on learning through interactions with the task environment; the extraction and recognition of landmarks; and the incorporation of new knowledge into an evolving spatial representation. This work also applies research on how humans learn, represent, and recognize individual objects to the problems of landmark detection and recognition.
Integrating Learned Routes
The layout of an environment must be learned from experience with particular routes. One possibility is that path integration serves to link environmental locations together into a metric "cognitive map," which enables new shortcuts. However, Dyer (1991) found this was not the case in honeybees, which depend on salient visual landmarks to find a shortcut to a known location. We are investigating whether humans can integrate learned routes into metric spatial knowledge and, if so, what factors influence this ability.
Path Integration
Path integration is a navigation strategy that uses information from self-motion to estimate distances traveled and changes in orientation while exploring an environment. This self-motion information can be acquired from visual information (optic flow, landmarks, etc.), from body senses (e.g., proprioceptive, vestibular, and efferent information), or from both. In the natural world these two sources are redundant, making it difficult to assess their separate roles in path integration. Using a return-to-home task in virtual reality, we can manipulate the availability of both visual information and body senses to identify the relative role of each in path integration.
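The computation at the heart of path integration can be sketched as accumulating each leg of travel (a turn followed by a distance) into a running position estimate, from which the homeward direction and distance follow. The interface below is our own illustration, not the experimental task itself.

```python
import math

def home_vector(segments):
    """Integrate a sequence of (turn, distance) legs, with turn in radians
    and distance in meters, starting at the origin facing along +x.
    Returns (direction, distance) of the vector pointing back home."""
    x = y = heading = 0.0
    for turn, dist in segments:
        heading += turn                  # update orientation from the turn
        x += dist * math.cos(heading)    # advance along the new heading
        y += dist * math.sin(heading)
    return math.atan2(-y, -x), math.hypot(x, y)
```

For example, in a triangle-completion trial of a 3 m leg, a 90-degree left turn, and a 4 m leg, the integrator returns a 5 m home vector; errors in the return-to-home response can then be compared against this ideal under different visual and body-sense conditions.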
The Geometric Structure of Spatial Knowledge
Do people rely on metric structure (distances and angles) or ordinal structure (relationships between junctions and paths) when walking from one place to another in a learned environment? This research addresses what knowledge about the properties of the environment people acquire and use in navigation. We are exploring whether landmarks contribute to navigational strategies, as well as the effect of initial learning on how an environment is remembered and subsequently navigated. Using a virtual garden "maze," we can monitor the paths people choose between various object locations to determine the underlying structure of their spatial knowledge.