
Research

My goal is to understand the principles governing the brain functions that recognize objects and select actions to achieve behavioral goals (i.e., object recognition and decision making). We train macaque monkeys on various object recognition tasks and measure and manipulate population neural responses across multiple brain areas using state-of-the-art recording techniques. We then apply contemporary analysis methods to reveal neural dynamics and inter-areal communication, and relate these findings to computational models. We also perform human behavioral and neuroimaging experiments.

 

Keywords: object recognition, decision making, macaque monkeys, neural population recordings, human behavioral experiments, computational modeling


Perceptual decision making

Animals have evolved to decide how to act based on external events and objects in order to survive. For example, if we find an old apple in the fridge, we carefully inspect whether it is still edible before deciding to eat it. Such choice behavior is called “perceptual decision making.” We study this process by measuring behavior and recording the neural activity of animals performing tasks such as the categorization of face stimuli. We have found that face categorization behavior can be understood as a process that integrates sensory evidence across multiple facial features and over time (Okazawa et al., 2018, 2021). Neural recordings from the parietal cortex revealed that neurons encode decision formation on a nonlinear manifold in state space (Okazawa et al., 2021). These observations contribute to the development of computational models of decision making. We now seek to understand how decision signals in parietal and frontal areas are formed by inputs from sensory areas that encode object information.
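As a rough illustration of this idea (a minimal sketch, not the published model), the following simulation accumulates momentary evidence, computed as a weighted sum over hypothetical facial features plus noise, until the decision variable reaches a bound. All feature weights and parameters here are assumptions chosen for the example.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) weights: how strongly each facial feature
# contributes to the momentary evidence, on average.
feature_weights = np.array([0.8, 0.5, 0.2])   # e.g., eyes, nose, mouth
noise_sd = 1.0                                 # momentary sensory noise
bound = 10.0                                   # decision threshold
max_frames = 200                               # maximum number of stimulus frames

def simulate_trial(stimulus_strength=0.3):
    """Accumulate noisy evidence from multiple features over time
    until the decision variable hits +bound or -bound."""
    dv = 0.0
    for t in range(max_frames):
        # Momentary evidence: weighted feature signals plus independent noise.
        momentary = np.sum(feature_weights * stimulus_strength
                           + rng.normal(0.0, noise_sd, size=feature_weights.size))
        dv += momentary
        if abs(dv) >= bound:
            return np.sign(dv), t + 1          # choice (+1/-1) and decision time
    return np.sign(dv), max_frames             # forced choice if no bound crossing

choices, times = zip(*(simulate_trial() for _ in range(1000)))
print("proportion of +1 choices:", np.mean(np.array(choices) > 0))
print("mean decision time (frames):", np.mean(times))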


Visual object recognition

For successful behavior, our brain must recognize what objects are present in front of us. When we see an image of an object (e.g., a coffee cup), we immediately recognize its category, yet this is a computationally challenging process, and we still do not fully understand how the brain accomplishes it. The images we see are extremely rich and complex, and the brain must extract and encode the meaningful information they contain. We have previously shown that neurons in mid-level visual areas encode naturalistic textures, which are important components of object images. This texture selectivity can be explained as responses to higher-order statistical parameters of images (Okazawa et al., 2015, 2016). We now seek to further understand how neural populations in visual areas respond dynamically while processing object information.
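To give a sense of what “higher-order statistical parameters” of an image can look like (a simplified stand-in, not the actual texture model used in these studies), the sketch below filters an image with two oriented filters and computes both marginal moments and a cross-filter magnitude correlation. The filter bank and the random test image are assumptions made only for this example.

import numpy as np
from scipy.signal import convolve2d

# Toy image (in practice, a naturalistic texture photograph).
rng = np.random.default_rng(1)
image = rng.standard_normal((64, 64))

# Two simple oriented filters (assumed stand-ins for the multi-scale,
# multi-orientation filter banks used in texture models).
horiz = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)
vert = horiz.T

resp_h = convolve2d(image, horiz, mode="valid")
resp_v = convolve2d(image, vert, mode="valid")

# First-order statistics: marginal moments of each filter's responses.
first_order = {"var_h": resp_h.var(), "var_v": resp_v.var()}

# A higher-order statistic: correlation between the magnitudes of
# responses from different filters. Statistics of this kind help
# distinguish naturalistic textures from spectrally matched noise.
mag_h, mag_v = np.abs(resp_h), np.abs(resp_v)
higher_order = np.corrcoef(mag_h.ravel(), mag_v.ravel())[0, 1]

print(first_order)
print("cross-filter magnitude correlation:", higher_order)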

Outstanding questions

  • How do we successfully choose appropriate actions depending on contexts?
    The same stimulus can lead to different actions in different contexts. For example, if there is food in front of you, at home you may casually pick it up, but at someone else’s home you would not take it without permission. How is sensory-based behavior adjusted in a context-dependent manner?

  • How do we actively explore the environment to seek information for decisions?
    Every time we move our eyes, the visual scene we perceive changes dramatically. When we make decisions about visual images, how do we move our eyes, and how is this behavior related to decision-making processes?

  • How do we recognize object categories by integrating visual images?
    Many objects are composed of multiple parts, so multiple visual features in an image must be combined to recognize an object. How does the visual system encode complex combinations of image features?

  • How do we learn object categories from experience?
    We are not born knowing what objects exist in the world, so the object categories we know must be learned through experience. How does this happen?

  • How do we learn and leverage rules and regularities governing the outside world?
    To interact with the environment, we must also know the optical and physical rules and regularities of the outside world. How do we learn and leverage them?
