Meet the team

We are a research group at the MRC Cognition and Brain Sciences Unit, School of Clinical Medicine, at the University of Cambridge.

Programme Leader
Alex Woolgar

I am a Programme Leader at the MRC Cognition and Brain Sciences Unit, Professor (Grade 11) of Cognitive Neuroscience at the University of Cambridge, and an honorary Associate Professor at Macquarie University, Sydney. I am fascinated by how the firing of billions of cells in our brains gives rise to our ability to perceive, think, and act. I especially want to understand the brain mechanisms that enable humans to pay attention, which underpin our ability to behave in complex, diverse, and flexible ways. To study this I draw on a range of human brain imaging and stimulation techniques, and develop approaches that push the limits of what we can ask about how the brain works. I am honoured to get to work with the brilliant bunch of bright and enthusiastic scientists below.

Elizabeth Michael
Postdoctoral Research Fellow

Given the variability of our visual world, flexibility in how we process sensory information is a critical feature of efficient perception. My research focuses on how the visual system responds to different types of challenging environment, using concurrent brain stimulation and neuroimaging to identify the neural processes that support this behaviour. In parallel, I am interested in how the efficacy of interventions (e.g. behavioural training, brain stimulation) interacts with individual differences in neural architecture and function.

Jade Jackson
Postdoctoral Research Fellow

I use a combination of neurostimulation (TMS) and neuroimaging (fMRI) techniques to investigate selective attention in the human brain. My previous work has focused on how and where task-relevant information comes to be prioritised in the brain (Jackson et al., Journal of Cognitive Neuroscience, 2017; Jackson et al., Cortex, 2018), and the causal influence of disrupting this prioritisation on information coding across the brain (Jackson et al., bioRxiv, 2020). My current projects involve disentangling the relationship between enhancement and inhibition using fMRI-MVPA, and using concurrent TMS-fMRI to causally link information coding to behaviour.

Hannah Rapaport
Postdoctoral Research Fellow

For many years, there has been a widespread assumption that autistic people who do not speak are also unable to understand language. However, the growing community of non-speaking autistic letterboard users challenges this assumption. As Autistic non-speaking advocate and Spellers documentary star, Jamie, states, “we think, feel and learn just like everyone else”.

My research aims to determine whether electroencephalography (EEG), in combination with machine learning, can be used to detect a brain signature of language comprehension in non- and minimally-speaking autistic people. We are currently developing statistical approaches to differentiate between brain responses to meaningful and nonsensical spoken sentences.

Moataz Assem
Wellcome Trust Early Career Fellow

Moataz investigates working memory brain circuits. His research aims to understand how we “actively” focus on limited information while also maintaining a broader "hidden" cognitive background. To this end, he utilizes innovative combinations of advanced techniques such as transcranial magnetic stimulation (TMS), electrocorticography (ECoG), and precision functional MRI (fMRI), alongside anatomical data from non-human primates. This multimodal approach holds significant implications for circuit-based clinical interventions. Moataz is collaborating with top institutions globally, including Northwestern University, Washington University in St. Louis, the Stem Cell and Brain Research Institute/INSERM in Lyon, and the University of Oxford.

Dorian Minors
PhD Candidate

I'm interested in the kinds of simple neural mechanisms that may underpin intelligent behaviour. My previous work explored how simple neural network properties inspired by the brain might facilitate higher-order aptitudes, and specifically how honey bees might solve an abstract conceptual problem in this way (Cope et al., PLOS Comp. Bio., 2018). My current project explores how a popular model of decision-making may allow us to distinguish analogous computations in the human brain using M/EEG.

Nadene Dermody
PhD Candidate

My research will focus on uncovering the mechanisms through which information is exchanged between the "multiple-demand" (MD) network and more specialised regions, such as visual cortex. While the MD regions have been shown to selectively and flexibly represent task-relevant information moment-to-moment, how these regions interact with domain-specific regions to give rise to goal-directed behaviour is not yet known. My current project aims to contribute to our understanding of this by combining MEG and fMRI data, using multivariate pattern analysis techniques, to derive a spatially and temporally resolved account of how and where information is exchanged throughout the brain.

Runhao Lu
PhD Candidate

A distributed "multiple-demand" (MD) network across frontoparietal brain regions is thought to be crucial for human intelligent behaviour because of its ability to flexibly and adaptively process task-relevant information. Although this network is commonly co-activated during demanding tasks, potential functional differentiation among MD regions has long been discussed without reaching a clear consensus. My research aims to use M/EEG, concurrent TMS-fMRI, and TMS-EEG to causally examine the distinct contributions of individual MD regions. In particular, I ask whether these regions work differently in terms of enhancement versus inhibition during selective processing of visual information.

Yuanjun Kong
PhD Candidate (Visiting)

In daily life, people often select a task-relevant target from surrounding distractors, which requires visual attentional selection. My previous work has focused on the neural mechanisms underlying visual attentional selection and distractor suppression using EEG (Zhao et al., 2023). In addition, the auditory system continually monitors other inputs and detects changes to enable attentional shifts toward unexpected events. I'm also interested in the relationship between visual attentional selection and auditory change detection from a cross-modal perspective in healthy and psychiatric populations.

Yuena Zheng
PhD Candidate

We live in a world flooded with information, yet our cognitive resources are limited, meaning that only a small amount of information is attended to and processed in depth over a period of time. I am interested in the neural mechanisms behind this selective attention, especially for the top-down control dominated by the PFC and driven by the current task goal. My current project aims to use MEG and computational modelling to investigate the dynamic neural representation of the PFC in the presence of visual competition and the information flow between the PFC and the visual cortex during this process. 

Sichao Liu

Roboticists are drawing on insights from neuroscience to build better-performing robots with human-like intelligence and autonomy. As a nascent research domain, the fusion of brain science, artificial intelligence, and robotics represents a neuroengineering approach. My research focuses on using brain-computer interfaces to forge direct, online communication between brain and computer/robot; developing AI-driven decoders to reveal the patterns underlying neural activity (e.g., EEG and MEG); and, ultimately, precisely translating human thoughts into commands that robots can understand, to assist with daily activities and enable advanced applications across multiple fields.

Chentianyi Yang
PhD Candidate

It is estimated that 30% of children with autism are minimally verbal. Traditional clinical tools have struggled to pinpoint the exact stages of linguistic or motor processing at which these children have difficulty expressing their thoughts through speech. In my project, I will tackle this problem by investigating the brain's processing of speech using cortical entrainment. By analysing brain imaging data (e.g. EEG and MEG) alongside the predictions of automatic speech recognition models, I aim to bring new insights into the pathways of language processing in both autistic and neurotypical brains.
