Prof. Chubb receives grant to catalogue image properties sensed by human vision
- January 11, 2010
- Results will help to create conceptual framework for formulating theories about visual perception
Take a good look at the picture at right. At first glance, it appears to showcase a rocky, shell-covered landscape, maybe a sandy ocean bottom. Upon closer inspection, however, the focal points are two well-concealed cuttlefish camouflaging themselves against their surroundings. Still can’t see them? You’re not alone, says Charlie Chubb, UCI cognitive sciences professor.
“Human visual perception is very crude in terms of what it extracts from a scene,” he says. “Things we see are made up of many different textures and substances, but we summarize and simplify these substances using very few qualities. This makes the world we see dramatically different from the world that exists.” These shortcuts taken during the visual perception process are what make camouflage effective in tricking the eye and concealing what – or who – is really there.
For the past 20 years, Chubb has studied the process of how we see and take in the world around us. One of his recent projects, funded by the Defense Advanced Research Projects Agency (DARPA), has aimed at exploiting what is known about visual perception in order to “break” camouflage effectiveness using algorithmic-based visual aids that highlight, rather than hide, camou-clad targets.
Yet despite what may seem to be a great deal of knowledge about visual perception, says Chubb, the field is very much in its infancy.
“Aside from a few familiar properties such as brightness and contrast, we still don’t know all of the basic image properties the visual system uses to discriminate between things,” he says. “Without this knowledge, trying to investigate visual perception and cognition is like trying to do chemistry without the periodic table of elements: effectively shooting in the dark.”
In September, Chubb and UCI cognitive scientist George Sperling were awarded a $420,000 grant from the National Science Foundation to identify and catalogue the elementary image properties sensed by human vision. The resulting “Table of the Dimensions of Preattentive Visual Sensitivity,” as it is tentatively called, will provide scientists with a conceptual framework for formulating theories about visual perception, something Chubb says has been missing from the field.
To develop the DPVS table, the researchers will perform tests in which human observers will be asked to differentiate between visual textures in graphic displays made up of micropatterns, or various patterns of dots, that differ in color, intensity, shape, contrast and/or orientation. Textures will be created using different proportions of micropatterns, and observers will be asked to detect the location of a target patch of one texture embedded in a background of a different texture.
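The experimental design described above can be sketched in code. The following is a hypothetical illustration, not the researchers' actual stimulus software: it assumes each micropattern is a single dot whose gray level is drawn from a mixture, and it embeds a target patch whose mixture proportions differ from the background's. The grid size, patch size, gray levels and proportions are all invented for the example.

```python
# Illustrative sketch of a texture-discrimination stimulus: a background texture
# made from one mixture of gray levels, with an embedded target patch made from
# a different mixture. All numbers here are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def make_texture(shape, levels, probs, rng):
    """Fill a grid with gray levels drawn independently from the given mixture."""
    return rng.choice(levels, size=shape, p=probs)

levels = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # five gray scales (0 = black)
bg_probs = np.array([0.2, 0.2, 0.2, 0.2, 0.2])   # background: uniform mixture
tg_probs = np.array([0.4, 0.1, 0.1, 0.1, 0.3])   # target: skewed toward the extremes

display = make_texture((64, 64), levels, bg_probs, rng)  # background texture
r, c = 16, 16                                            # target patch location
display[r:r + 16, c:c + 16] = make_texture((16, 16), levels, tg_probs, rng)

# An observer's task would be to report where the 16x16 patch is embedded;
# whether they can depends on which mixture differences vision can sense.
```

If observers reliably locate the patch, the two mixtures differ along some dimension the visual system senses; if they cannot, no preattentive mechanism distinguishes those proportions.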
Chubb explains: “By discovering which of these texture differences people can sense, we can learn how many neural systems in human vision are differentially sensitive to micropatterns in a given class and what those neural systems actually sense. Each neural system that we discover and analyze through these experiments will add another dimension of preattentive visual sensitivity to the table.”
Chubb says the project will refine our understanding of visual sensitivity to familiar properties, such as brightness and contrast, while also revealing new properties that help explain visual sensitivity to different textures. Already, he and his collaborators have discovered a previously unknown neural system they call the “blackshot” system that human vision uses to differentiate between textures. This system picks out only those elements of a scene whose intensities are very close to black, ignoring all others.
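A blackshot-style statistic can be sketched as follows. This is a minimal illustration of the idea described above, assuming the system registers only elements within a narrow intensity band near black; the threshold and sample textures are invented, not measured values from the research.

```python
# Minimal sketch of a "blackshot"-style image statistic: only elements very
# close to black register; everything brighter is ignored. The threshold of
# 0.05 is an illustrative guess, not a measured property of human vision.
import numpy as np

def blackshot_fraction(image, threshold=0.05):
    """Fraction of elements near enough to black to register."""
    return float(np.mean(np.asarray(image) <= threshold))

texture_a = np.array([0.0, 0.02, 0.5, 0.9, 0.04, 1.0])  # contains near-black dots
texture_b = np.array([0.3, 0.5, 0.6, 0.9, 0.7, 1.0])    # contains none

print(blackshot_fraction(texture_a))  # 0.5
print(blackshot_fraction(texture_b))  # 0.0
```

The two sample textures differ in this one statistic, so a blackshot-sensitive mechanism could discriminate them while remaining blind to all the mid-gray variation.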
Findings from this research could have wide practical applications, from enabling more effective camouflage design to fields where visual information is reproduced, enhanced or transmitted, such as television and computer displays, where vast quantities of information need to be manipulated at lightning speed.
“If we can visually match any randomly scrambled mixture of different gray scales, no matter how many different gray scales the mixture contains, with a mixture of just four, that’s a pretty big simplification that could lead to substantial savings in time, space and money if properly applied in image compression algorithms and display technology,” he says.
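The compression payoff can be made concrete with a small sketch. If any scrambled mixture of gray scales can be perceptually matched by a mixture of only four, each element needs just 2 bits instead of 8. The sketch below simply quantizes each pixel to the nearest of four levels; the particular palette is illustrative, since which four levels actually match a given mixture is exactly the empirical question the research addresses.

```python
# Hedged sketch of the compression idea: map an image containing many gray
# scales onto a 4-level palette (2 bits per element instead of 8). The palette
# below is an arbitrary illustration, not a perceptually derived one.
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(32, 32))             # up to 256 gray scales

levels = np.array([0, 85, 170, 255])                    # hypothetical 4-level palette
indices = np.abs(image[..., None] - levels).argmin(-1)  # nearest-level index per pixel
quantized = levels[indices]                             # reconstructed 4-level image

print(len(np.unique(quantized)))  # at most 4 distinct gray scales remain
```

Storing `indices` instead of `image` is a 4x reduction in bits per pixel; the open scientific question is whether the four-level version looks the same to a human observer.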
Funding for this research began in September and will run through August 2012.
- Photo courtesy of Roger Hanlon, research collaborator, Marine Biological Laboratory, Woods Hole, Mass.