Indiana University 

Vision Lab

About

Wholes and parts face stimuli

Visual Pattern Perception


How are we able to detect, discriminate, identify, and classify the incredible number of objects that we continually encounter in our environment? Our research is directed towards understanding some of the basic processes that mediate these remarkable abilities. Our approach is to treat the human visual system as an elaborate signal-processing device. We use a combination of psychophysical techniques, mostly involving externally added noise, to characterize some of the inner workings that mediate visual pattern perception.
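A standard way to quantify performance with externally added noise is to compare a human observer's sensitivity (d') to that of an ideal observer, which for a known signal in white Gaussian pixel noise is the square root of the signal energy divided by the noise standard deviation. The ratio of squared sensitivities gives efficiency. A minimal sketch (the stimulus and values below are hypothetical, for illustration only):

```python
import numpy as np

def ideal_dprime(signal, noise_sd):
    # Ideal observer d' for a known signal in white Gaussian pixel noise:
    # d' = sqrt(signal energy) / noise standard deviation
    energy = np.sum(signal.astype(float) ** 2)
    return np.sqrt(energy) / noise_sd

def efficiency(human_dprime, signal, noise_sd):
    # Efficiency = (d'_human / d'_ideal)^2; 1.0 means ideal use of
    # the available stimulus information, smaller values mean losses.
    return (human_dprime / ideal_dprime(signal, noise_sd)) ** 2

# Toy example: an 8x8 increment embedded in a 16x16 noise field (sd = 2.0)
sig = np.zeros((16, 16))
sig[4:12, 4:12] = 1.0
print(ideal_dprime(sig, 2.0))    # 4.0
print(efficiency(1.0, sig, 2.0)) # 0.0625 for a human achieving d' = 1.0
```

The same efficiency measure applies whether the signal is a grating, a point-light walker, or a face part.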


Some of the current topics that we are interested in include:


Biological motion in noise stimulus

The Perception of Biological Motion: Human observers can recognize the direction of walking, identity, gender, and even the emotional affect of another person from the movement of a small number of points placed on the body. In the past, this ability has been characterized as an example of the highly efficient nature of visual pattern recognition. Our recent work has suggested that, in fact, such point-light displays carry the same amount of information as full-body displays, and that people are actually highly inefficient at using point-light information. Our current research is directed towards understanding why this might be the case.



The Efficiency of Feature Integration: Many classes of stimuli are characterized as ‘holistic’, in that the combination is thought to be processed more efficiently than the individual parts. Faces are an excellent example of this idea. Our recent research has been directed at testing this idea by adapting a summation-at-threshold technique that has been used in the past to measure the summation properties of human spatial frequency channels. So far, our results have shown that feature integration efficiency for human faces is only as good as one would predict from an observer’s performance with the individual parts. Even more surprising, integration efficiency is less than one would predict from the individual parts for objects that are physically fragmented but seen as perceptually complete.
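The logic of summation at threshold can be sketched as follows: given sensitivities measured for the individual parts, a summation rule (here Minkowski summation; the exponent k = 2 corresponds to ideal, quadratic summation of independent part information) predicts the sensitivity for the whole, and the measured whole-stimulus sensitivity is compared against that prediction. This is a simplified illustration, not the lab's exact analysis; the function names and numbers are hypothetical.

```python
import numpy as np

def predicted_whole_sensitivity(part_sensitivities, k=2.0):
    # Minkowski summation of part sensitivities: (sum S_i^k)^(1/k).
    # k = 2 is ideal (quadratic) summation of independent parts.
    s = np.asarray(part_sensitivities, dtype=float)
    return np.sum(s ** k) ** (1.0 / k)

def integration_index(measured_whole, part_sensitivities, k=2.0):
    # > 1: better than predicted from the parts ('holistic' advantage)
    # = 1: exactly as predicted; < 1: worse than predicted
    return measured_whole / predicted_whole_sensitivity(part_sensitivities, k)

# Four equal face parts, each with sensitivity 1.0:
parts = [1.0, 1.0, 1.0, 1.0]
print(predicted_whole_sensitivity(parts))  # 2.0
print(integration_index(2.0, parts))       # 1.0 -> no holistic advantage
```

An integration index of 1.0, as in this toy example, corresponds to the finding described above: whole-face performance is only as good as the parts predict.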



Visual completion stimulus
Facial expression in noise stimulus

Facial Expression Recognition: Under natural conditions, we see facial expressions as dynamic events that unfold over time. But we also often see static snapshots of faces in places like magazines or photographs. Our recent research has examined whether the visual system processes expressions more efficiently when they are expressed dynamically rather than statically. Surprisingly, we have found that the dynamic aspects of facial expressions play little role in people’s ability to extract information from them.





Perceptual Learning: Practice in perceptual tasks typically improves performance. Such changes in performance can be due to a decrease in the amount of variability or ‘noise’ in the system, an increase in the strength of the ‘signal’ that gets through the system, or a combination of the two. We have used a combination of noise-based methods to measure changes in signal and noise as a function of perceptual learning, and have found that internal noise does not seem to be the source of improvements in performance. Instead, it appears that the efficiency of the non-noisy aspects of visual information processing improves as learning takes place.
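One common way to separate these two factors is the equivalent-input-noise method: threshold signal energy is measured at several levels of external noise and fit with the linear-amplifier model, E_thresh = k (N_ext + N_eq), where the slope k reflects calculation efficiency and N_eq is the equivalent internal noise. A minimal sketch with synthetic, hypothetical data (this is an illustration of the general method, not the lab's specific fits):

```python
import numpy as np

def fit_equivalent_noise(ext_noise, threshold_energy):
    # Linear-amplifier model: E_thresh = k * (N_ext + N_eq).
    # A straight-line fit of threshold energy against external noise
    # gives slope k (lower k = higher calculation efficiency) and
    # N_eq = intercept / slope (equivalent internal noise).
    slope, intercept = np.polyfit(ext_noise, threshold_energy, 1)
    return slope, intercept / slope

# Synthetic data: N_eq = 0.5 throughout; learning halves k.
n_ext = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
before = 2.0 * (n_ext + 0.5)
after = 1.0 * (n_ext + 0.5)
print(fit_equivalent_noise(n_ext, before))  # slope 2.0, N_eq 0.5
print(fit_equivalent_noise(n_ext, after))   # slope 1.0, N_eq 0.5
```

In this toy case the fitted slope drops with learning while N_eq stays fixed, the same pattern described above: improvement in the non-noisy (calculation) component rather than a reduction of internal noise.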




Facial feature map
Visual memory decay texture stimulus

Visual Memory Decay: A similar set of questions can be asked with respect to the underlying mechanisms that mediate the decay of short-term visual memories. Contrary to the predictions of most models of visual working memory, our initial results have shown little change in internal noise as a function of the duration that items are required to be stored in short-term memory.






Visual Completion: The images that are cast on our retinae are two-dimensional, and yet the world is 3-D. How, then, do we see the world as 3-D? This is a difficult problem (often called the ‘inverse projection’ problem), and solving it relies on the visual system making some strong inferences about the way objects behave in the world. One such inference is that objects continue behind places where they are partially hidden by other objects, and that they continue in a particular kind of way. We have been interested in characterizing the inferences that the visual system makes when attempting to complete partly occluded objects. Our research has focused on using a noise-based technique called ‘response classification’ to explore the processes involved in visual completion. Using this approach, we have found that the human visual system appears to treat the inferred edges of partly occluded objects much like real edges.
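The core computation behind response classification is simple: present noise fields, sort them by the observer's responses, and subtract the average ‘no’ noise from the average ‘yes’ noise; the resulting classification image reveals which stimulus regions drove the decision. A minimal simulation with a hypothetical linear observer whose internal template is a vertical edge (internal noise is omitted for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear observer with an internal edge 'template':
template = np.zeros((8, 8))
template[:, 3] = 1.0
template[:, 4] = -1.0

# Present many trials of pure Gaussian noise; the observer responds
# 'yes' when the noise field correlates positively with its template.
n_trials = 20000
noises = rng.normal(0.0, 1.0, size=(n_trials, 8, 8))
responses = np.tensordot(noises, template, axes=2) > 0

# Classification image: mean noise on 'yes' trials minus 'no' trials.
ci = noises[responses].mean(axis=0) - noises[~responses].mean(axis=0)
print(np.corrcoef(ci.ravel(), template.ravel())[0, 1])  # close to 1
```

With enough trials the classification image recovers the observer's template; applied to partly occluded objects, the same logic shows which edge regions, real or inferred, the visual system is actually using.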



Classification image for visual completion

To learn more about the above and other projects going on in the lab, please see our publications and presentations.