Register here for the Zoom link: https://docs.google.com/forms/d/e/1FAIpQLSehaKGazM01zMdHBQz2Ca7H7u91tkzNtQJbequ5965FhbYmIA/viewform  

Human observers can veridically perceive the shapes and positions of objects in a real 3D scene from a 2D retinal image of that scene, and they can recognize the objects and interact with them reliably. However, the visual system does not work for arbitrary 3D scenes or arbitrary 2D images; it is designed to work efficiently with a subset of all possible 3D scenes and a subset of all possible 2D images. The subset of 3D scenes can be characterized by a priori constraints, and the subset of 2D images by invariants that these constraints introduce. Sawada has been studying which a priori constraints affect 3D perception, using applied-mathematical, computational, and behavioral approaches that combine conventional psychophysical methods with XR technologies. In this talk, Sawada will present these studies on 3D perception.