Perception and Action
The perception and action group investigates how we extract information from the environment, how we use that information to guide our actions, and how such interactions result in learning and memory. Studies trace the flow of information from perception (for example, object recognition) to action: how attention and eye movements guide the selection of action, how responses can be switched between different stimulus properties, how actions are directed through 3D space, and how memory systems interact. The group uses a variety of behavioural measures, such as recording hand and eye movements, and neuroimaging techniques such as EEG and fMRI, as well as investigating patients with brain lesions and manipulating neural responses with transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS).
This research has been funded by the BBSRC, EPSRC, ERC, ESRC, the Wellcome Trust, the Leverhulme Trust and Unilever.
The following example publications give a sense of the research. Further details can be seen in the People and Discoveries tabs.
Boehm, S. G. & Sommer, W. (2012). Independence of data-driven and conceptually driven priming: the case of person recognition. Psychological Science, 23(9), 961–966, doi: 10.1177/0956797612440098
White, O., Dowling, N., Bracewell, M. & Diedrichsen, J. (2008). Hand interactions in rapid grip force adjustments are independent of object dynamics. Journal of Neurophysiology, 100, 2738–2745.
Buckingham, G., Main, J.C. & Carey, D.P. (2011). Asymmetries in motor attention during a cued bimanual reaching task: Left and right handers compared. Cortex, 47, 432–440.
Sestieri, C., Sylvester, C.M., Jack, A.I., d’Avossa, G., Shulman, G.L. & Corbetta, M. (2008). Independence of anticipatory signals for spatial attention from number of nontarget stimuli in the visual field. Journal of Neurophysiology, 100, 829–838.
Houghton, G., Pritchard, R. & Grange, J.A. (2009). The role of cue-target translation in backward inhibition of attentional set. Journal of Experimental Psychology: Learning, Memory & Cognition, 35, 466–476.
Saville, C.W.N., Dean, R.O., Daley, D., Intriligator, J., Boehm, S., Feige, B. & Klein, C. (2011). Electrocortical correlates of intra-subject variability in reaction times: Average and single-trial analyses. Biological Psychology, 87, 74–83.
Thiemann, U., Bluschke, A., Resch, F., Teufert, B., Klein, C., Weisbrod, M. & Bender, S. (2012). Cortical post-movement and sensory processing disentangled by temporary deafferentation. NeuroImage, 59, 1582–1593.
Leek, E.C., Cristino, F., Conlan, L.I., Patterson, C., Rodriguez, E. & Johnston, S.J. (2012). Eye movement patterns during the recognition of three-dimensional objects: Preferential fixation of concave surface curvature minima. Journal of Vision, 12, 1–15.
Cooper, S. & Mari-Beffa, P. (2008). The role of response repetition in task switching. Journal of Experimental Psychology: Human Perception & Performance, 34, 1198–1211.
Song, J-H., Rafal, R.D. & McPeek, R.M. (2011). Deficits in reach target selection during inactivation of the midbrain superior colliculus. Proceedings of the National Academy of Sciences, 108, E1433–E1440.
Van Koningsbruggen, M.G., Gabay, S., Sapir, A., Henik, A. & Rafal, R.D. (2009). Hemispheric asymmetry in the remapping and maintenance of visual saliency maps: A TMS study. Journal of Cognitive Neuroscience, 22, 1730–1738.
Takahashi, C., Diedrichsen, J. & Watt, S.J. (2009). Integration of vision and haptics during tool use. Journal of Vision, 9, 1–13.
Bestelmeyer’s research focuses on evaluating models of face and voice perception. She uses cognitive neuroscience methods (fMRI, TMS, EEG), primarily with adaptation techniques, to study the neural basis of the perception of paralinguistic aspects of voice, such as affect and other socially important attributes. Her research examines the extent to which similar brain systems are used to process voices and faces.
Boehm’s research interests lie in human learning and memory, employing behavioural, electrophysiological, functional imaging and neuropsychological methods. He investigates how different forms of learning and memory may be engaged and interact with each other depending on task demands, strategies, intentions and other influences, with a particular focus on understanding the mediating neural processes.
Bracewell studies sensory perception and sensorimotor control in healthy volunteers and neurological patients. He conducts preclinical and clinical research, aiming to translate laboratory findings to the clinical setting. He uses a number of experimental techniques, including EEG and MRI, non-invasive brain stimulation (transcranial magnetic stimulation, TMS, and transcranial direct current stimulation, tDCS), and behavioural measures (e.g., kinetic and kinematic analysis of movement, sensory psychophysics, and lesion–behaviour correlations).
He studies sensorimotor control processes in healthy volunteers and neurological patients. Current projects include recording and quantifying manual asymmetries in left-handers and right-handers, and quantifying attentional biases towards the preferred hand, which may prove a useful marker of cerebral asymmetries for speech and language.
Cross is interested in the cognitive strategies and experiential factors that shape neural processes linking action with perception. She addresses questions of how the brain and behaviour are changed by different types of experience with action execution and observation. To do so, she uses a variety of perceptual, visuomotor, and learning paradigms along with an interdisciplinary approach that involves fMRI, TMS, and psychophysics.
He is a neurologist who currently investigates two main issues concerning human cognition: 1) the representation of spatial information in visual memory, and 2) the nature of spatial expectancies guiding visual attention and orienting. He studies healthy volunteers and neurological patients and makes extensive use of fMRI and measures of behaviour.
He uses a range of techniques such as ERP, behavioural measures and computational modelling of neural networks. His recent focus has been to understand how behavioural responses can be rapidly shifted to different properties of a stimulus, and the role of inhibition in these task-switching processes.
She investigates the cognitive control processes that enable attention to be selectively focused on particular perceptual inputs, how such attention can be maintained, and how it can subsequently be switched to new inputs or object properties. She employs measures of behaviour and electrophysiological techniques such as ERP, and studies both healthy participants and clinical populations, such as people with Parkinson’s disease.
Her research has focused on attention and perception, with a particular interest in the interactions between exogenous and endogenous processes, the role of inhibition in selection, and cognitive control and ageing. She employs both behavioural and neuroimaging (fMRI) techniques, and studies healthy participants and people with focal brain lesions.
The purpose of his lab’s research is to better understand how the brain controls meaningful, goal-directed hand actions, with the ultimate goal of harnessing this knowledge to promote the rehabilitation of people with movement problems. His team uses a combination of behavioural and neuroimaging methods, and tends to focus on better understanding the neural control of grasping and tool use.
He investigates depth perception and stereoscopic vision. In particular, he examines how visual information is used to guide actions such as grasping, how vision and touch are combined during tool use, and basic aspects of the eye’s focusing response. He is also developing and testing novel stereoscopic display approaches designed to better suit human vision. He employs a range of techniques, such as recording eye and body movements in virtual reality experimental environments.
Perception and attention
How do we perceive the world, and our position in it, from vision, proprioception, and audition? How do our brains organise and prioritise the enormous amounts of information available, so that we can make sense of the structure and events in the world?
Visually guided movement
The efficiency and fluidity of human movements belies the computational complexity required to control them. How are targets for movement identified and selected? How are properties of these target objects estimated to plan the movement? How is information in visual ‘co-ordinates’ transformed into an appropriate form to control complex effectors like the arm and hand (and even tools that we may be holding)? How are movements monitored and controlled ‘online’ to be maximally efficient?
Eye movements
How do we select targets we want to look at? How are target locations encoded? What role does our eye position sense play in our perception of where things are? Are eye movements affected by disorders such as attention-deficit hyperactivity disorder?
Learning and memory
Humans can store a vast amount of different information in memory. They can remember events from their past and the knowledge about the world they have acquired over their lifetime. In addition to this declarative memory, humans also show a variety of forms of non-declarative memory, which usually manifests as changes in performance and can occur even when the original learning episode cannot be remembered. At any given moment, several forms of memory may be active, working either independently of each other or in interaction. We investigate these dynamics of human memory with behavioural methods as well as with event-related brain potentials. The aim is to explore and describe the flexibility with which different forms of memory are invoked, and how they interact.
As well as studying the above topics in normal, healthy participants, the group carries out a large amount of research on the effects of different kinds of neurological impairment on these processes.
Object representation and recognition
Imagine that you are sitting in a room observing the scene around you. You see a cup and a guitar, and you instantly recognise them. If you want to, you can reach out across space and pick them up. In fact, this is something that most of us can do effortlessly. But how do we do it? How does our visual system work? How can we perceive, recognise and interact with objects so easily? How do we know where particular objects are in space, and how to reach them? And what happens to our visual system when we are no longer able to recognise objects, as is the case for some individuals following an injury to the brain?
Research in our group has provided far-reaching insights into how object representations are organised and structured, and how we are able to perceive and recognise object shapes. Some of our work has investigated how early perceptual processes in visual attention involve independent inhibitory and facilitatory systems that interact with object structure. We have called this phenomenon ‘structure-based modulation of IOR’ (e.g., Leek, Reppa & Tipper, 2003. Inhibition-of-return for objects and locations in static displays. Perception & Psychophysics, 65, 388–395), and it has since been investigated widely by researchers in other labs around the world, including in studies of object-based visual selection in patients with Parkinson’s disease (e.g., Possin et al., 2009. Space-based but not object-based inhibition of return is impaired in Parkinson’s disease. Neuropsychologia, 47, 1694–1700).
In other work we have shown that surface structure and curvature minima play key roles in the shape representations that mediate visual object recognition. We provided the first direct evidence from eye movements about fixation patterns during single-object recognition (e.g., Leek et al., 2012. Eye movement patterns during the recognition of three-dimensional objects: Preferential fixation of concave surface curvature minima. Journal of Vision, 12(1):7, doi: 10.1167/12.1.7), and about how saccade patterns during recognition can be disrupted by brain damage in patients with acquired visual agnosia (e.g., Leek et al., 2012. Eye movement patterns during object recognition in visual agnosia. Neuropsychologia, 50, 2142–2153).
Understanding human tool use: information from vision and touch is integrated
When we use tools, our brains receive information about the sizes, locations and other properties of objects from both vision and touch. An ‘optimal brain’ would combine, or integrate, these signals, because doing so would allow us to estimate properties of the world with the greatest possible precision over a wide range of situations. For ‘multisensory integration’ to be effective, however, the brain must integrate related signals and not unrelated ones (imagine, for example, if the apparent size of the coffee cup in your hand were affected by irrelevant objects that you also happened to look at, such as buildings, people and so on).
The brain could achieve this by considering the statistical similarity of estimates from vision and touch (haptics) in terms of magnitude, spatial location, time of occurrence and so on. This would be effective because, in normal grasping, visual and haptic signals arising at the same time and place are in fact more likely to originate from the same object than are more dissimilar signals. The problem is complicated by tool use, however, because tools systematically perturb the relationship between (visual) object size and location and the positioning and opening of the hand required to feel an object. Put another way, our visual estimates are derived from the object itself, but our hand can only feel the object via the handle of the tool. Our research investigated whether the brain is able to understand the resulting sensory remapping, and appropriately integrate information from vision and haptics, when using tools.
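To make ‘optimal’ concrete: the usual benchmark is minimum-variance (maximum-likelihood) cue combination, in which each cue is weighted by its reliability (inverse variance). The sketch below illustrates that standard model with invented numbers; it is not a description of the specific analysis used in this study.

import numpy as np

# Minimum-variance (maximum-likelihood) cue combination:
# each cue gives a noisy estimate of, say, object size; the optimal
# combined estimate weights each cue by its reliability (1/variance).
# All values below are illustrative only.

def integrate(est_vision, var_vision, est_haptic, var_haptic):
    """Reliability-weighted average of two independent Gaussian cues."""
    w_v = (1 / var_vision) / (1 / var_vision + 1 / var_haptic)
    w_h = 1 - w_v
    combined = w_v * est_vision + w_h * est_haptic
    # The combined variance is never larger than either single cue's,
    # which is why integrating related signals pays off.
    combined_var = 1 / (1 / var_vision + 1 / var_haptic)
    return combined, combined_var

size, var = integrate(est_vision=50.0, var_vision=4.0,   # mm, mm^2
                      est_haptic=54.0, var_haptic=9.0)
print(size, var)  # estimate lies closer to the more reliable (visual) cue

The benefit (lower combined variance) only accrues when the two signals really do come from the same object, which is exactly what tool use complicates.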
We used a stereoscopic display and force-feedback robots to create a virtual environment in which we could independently control the visual and haptic properties of objects, and create ‘virtual tools’. Psychophysical results showed that the brain does integrate information from vision and haptics during tool use, in a near-optimal way, suggesting that when we use tools the haptic signal is treated as coming from the tool-tip, not the hand.
As well as extending our understanding of human tool use, these results have practical implications for the design of visual-haptic interfaces such as surgical robots. This work is published in the Journal of Vision (Takahashi, Diedrichsen & Watt, 2009).
Building 3D stereoscopic displays that better suit the human visual system
Stereoscopic 3D (S3D) displays are enjoying a huge surge in popularity. In addition to entertainment, they have become common in a range of specialist applications, including the operation of remote devices, medical imaging, scientific visualisation, surgery and surgical training, design, and virtual prototyping. There have been significant technological developments (the red-green glasses are long gone), but some fundamental problems remain. Specifically, S3D can induce significant discomfort and fatigue, and give rise to unwanted distortions in depth perception.
As S3D viewing goes from an occasional to a routine activity, and more of us become users of the technology, there is a pressing need to better understand how our visual system interacts with S3D media. An EPSRC-funded project in Simon Watt’s lab has investigated this issue. In the real world, when we move our eyes to look at near and far objects (vergence), muscles in the eye also change the shape of the lens (accommodation) to focus at the same distance. In S3D viewing, however, we have to look at ‘objects’ that can be in front of or behind the screen, while remaining focused at the screen surface. This ‘decoupling’ of vergence and accommodation is a key cause of problems, and the researchers have constructed a novel display (initially developed with researchers in the United States) designed to address it by simulating a continuous range of focal distances, as experienced in the real world, using a small number of discrete focal planes.
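One published way of approximating continuous focal distances with a few planes is ‘depth filtering’, in which a point’s light is split between the two nearest focal planes in proportion to its dioptric distance from each; whether this exact scheme matches the display described above is an assumption. The sketch below, with invented plane positions, shows how such blending weights could be computed.

import numpy as np

# Hypothetical sketch of 'depth filtering' for a multi-focal-plane
# display: each point's intensity is split between the two nearest
# focal planes, weighted linearly in dioptres, so the eye is driven
# to focus as if at an intermediate, continuous distance.
# Plane positions are illustrative, not those of the actual display.

FOCAL_PLANES_D = np.array([0.5, 1.0, 1.5, 2.0])  # dioptres (1/m)

def plane_weights(target_d):
    """Intensity weights simulating focal distance target_d (dioptres)."""
    weights = np.zeros(len(FOCAL_PLANES_D))
    if target_d <= FOCAL_PLANES_D[0]:
        weights[0] = 1.0
    elif target_d >= FOCAL_PLANES_D[-1]:
        weights[-1] = 1.0
    else:
        i = np.searchsorted(FOCAL_PLANES_D, target_d) - 1
        near_d, far_d = FOCAL_PLANES_D[i], FOCAL_PLANES_D[i + 1]
        t = (target_d - near_d) / (far_d - near_d)
        weights[i], weights[i + 1] = 1 - t, t
    return weights

print(plane_weights(1.2))  # 60/40 split between the 1.0 D and 1.5 D planes

Blending in dioptres rather than metres matters because the eye’s focusing response, and depth-of-focus, scale approximately with dioptric distance.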
The researchers have carried out extensive experiments to develop and evaluate this approach, including measuring eye movements and the focusing response of the eye. The results have provided information on how to build improved S3D displays and how to produce effective S3D content. This has resulted in invited talks and workshops not only for specialists in the S3D industry, but also at more general display-industry venues and international standards-setting bodies such as the Society of Motion Picture and Television Engineers (SMPTE).
This research also has a significant basic science component, and the results have provided new insights into the mechanisms underlying how our eyes focus.
Paralinguistic cues and social interaction
Social interactions involve more than "just" speech. Similarly important is a perhaps more primitive, non-linguistic mode of communication. While what we say is usually carefully and consciously controlled, how we say it, i.e. the sound of our voice when we speak, is not. The sound of our voice may therefore be seen as a much more "honest" signal. Bestelmeyer is interested in the effects of these paralinguistic cues on social interaction, and in how they are processed in the brain. One such signal is vocal attractiveness, which is known to influence a speaker's success in job applications, elections and short-term sexual relationships. In a series of experiments, she and her colleagues investigated what makes a voice attractive.
Dr. Patricia Bestelmeyer and colleagues found that voices perceived as more attractive tended to be more similar to an average voice composite in terms of their formant frequencies, and to sound "smoother" (compare the vocal spectrum of an original voice with that of a highly averaged, attractive voice).
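As a purely hypothetical illustration of what ‘similarity to an average voice’ could mean in practice, the sketch below represents each voice by its first four formant frequencies and scores its distance from the sample mean. All numbers are invented, and this is not the measure used in the published study.

import numpy as np

# Hypothetical 'distance to the average voice': represent each voice
# by its first four formant frequencies (Hz) and measure how far it
# lies from the mean composite. Smaller distance = closer to average,
# which the study found tends to be rated as more attractive.

voices = np.array([
    [520, 1480, 2550, 3600],   # speaker A: F1-F4 (invented values)
    [610, 1720, 2690, 3850],   # speaker B
    [555, 1560, 2600, 3700],   # speaker C
])

average_voice = voices.mean(axis=0)
distances = np.linalg.norm(voices - average_voice, axis=1)
print(distances.round(1))  # speaker C is nearest the average composite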
Next, she measured brain activity while participants listened to the same vocal sounds ("ah") and performed an unrelated task. Activity in voice-sensitive auditory cortex and in inferior frontal regions was strongly correlated with implicitly perceived vocal attractiveness. While the involvement of auditory areas reflected the processing of acoustic contributors to vocal attractiveness (e.g., frequency composition and smoothness), activity in inferior prefrontal regions, traditionally involved in speech, reflected the overall perceived attractiveness of the voices despite their lack of linguistic content. Dr. Bestelmeyer's results provide an objective measure of the influence of hidden non-linguistic aspects of communication signals on cerebral activity.