Department of Brain and Cognitive Sciences
Department of Earth and Environmental Sciences: Paleomagnetic Research Group
MVRL: Multidisciplinary Vision Research Laboratory

An Active Vision Approach to Understanding and Improving Visual Training in the Geosciences


The Immersive Experience



Experiments in Eye-Tracking during Immersive Viewing of Geologically Significant Scenes

During the field trip that accompanied the geoscience course, we also took time to visit other nearby locations to capture high-resolution panoramas away from the students and experts. These views showed scenes relevant to the course's geoscience topics and were representative of the types of scenes the students and experts had been observing. As part of our study, we want to see whether there are similarities between the eye-movement patterns of a subject (novice or expert) in the field versus in a classroom-type setting. Because of the large field of view required to capture the significant regions of these scenes, we presented rendered panoramas to the subjects in a semi-immersive environment at the Rochester Institute of Technology's Center for Student Innovation (CSI).

In this semi-immersive environment, four projectors connected to local servers face a curved white wall that serves as the display. The projectors' color parameters can be tuned to suit our scene content, as supervised by an expert geologist. Through a signal-muxing box, a standard desktop computer with two dual-DVI outputs drives all four projectors simultaneously. With this setup we can fit a single panorama across all four projectors as if they were four regular computer monitors.
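As a rough sketch, tiling a single panorama into per-projector outputs might look like the following (a NumPy-based illustration; `split_panorama` is a hypothetical helper, not part of our actual display pipeline, which routes the video signal in hardware):

```python
import numpy as np

def split_panorama(panorama, num_tiles=4):
    """Split a panorama array (H x W x 3) into equal-width tiles,
    one per projector, ordered left to right."""
    _, width, _ = panorama.shape
    tile_width = width // num_tiles
    return [panorama[:, i * tile_width:(i + 1) * tile_width]
            for i in range(num_tiles)]

# Example: a 5120x800 panorama becomes four 1280x800 tiles.
panorama = np.zeros((800, 5120, 3), dtype=np.uint8)
tiles = split_panorama(panorama)
```

Each tile can then be sent to one projector exactly as if it were an ordinary monitor in a four-wide desktop arrangement.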

All images were rendered through a custom structure-preserving projection and blending process and displayed at 5120x800 resolution. Differences between the projectors in color and skew were carefully minimized. The only remaining noticeable variability within the projected panorama was cloud movement between individual image captures. The questions differed slightly for each scene presented, but subjects were typically asked to search for evidence of active geological processes or of the geological history of the area.
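To give a sense of the blending step, a minimal linear cross-fade over an overlap region could be written as follows (an illustrative sketch only; our actual projection and blending pipeline is custom):

```python
import numpy as np

def blend_overlap(left_strip, right_strip):
    """Linearly cross-fade two overlapping image strips (H x W x 3),
    fading the left projector's contribution out and the right one's in."""
    _, w, _ = left_strip.shape
    alpha = np.linspace(1.0, 0.0, w)[None, :, None]  # per-column weights
    blended = left_strip * alpha + right_strip * (1.0 - alpha)
    return blended.astype(left_strip.dtype)

# Example: fade an all-white strip into an all-black strip.
left = np.full((2, 8, 3), 255, dtype=np.uint8)
right = np.zeros((2, 8, 3), dtype=np.uint8)
seam = blend_overlap(left, right)
```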

Part of the color-correction and structure-preserving aspect of the panorama display was the inclusion of dividing lines (only a few pixels wide) between each projector. These had the perceptual effect that color variations became less apparent where projectors overlapped, and slight structural variations in the scene content, due to the projection parameters, were not immediately noticeable (i.e., without pointing them out). More details on this setup can be found in the M.S. thesis work of Brandon May: “Imaging Methods for Understanding and Improving Visual Training in the Geosciences”.
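The dividing-line idea itself is simple to sketch. Assuming equal-width tiles, something like the hypothetical helper below (not our production code) would black out a few columns at each projector boundary:

```python
import numpy as np

def add_divider_lines(panorama, num_tiles=4, line_width=3):
    """Draw thin vertical black lines at the boundaries between
    projector tiles, masking seam artifacts between them."""
    out = panorama.copy()
    _, width, _ = out.shape
    tile_width = width // num_tiles
    for i in range(1, num_tiles):
        x = i * tile_width
        out[:, x - line_width // 2 : x + (line_width + 1) // 2] = 0
    return out

# Example: a white 800x5120 panorama gains three 3-pixel dividers.
white = np.full((800, 5120, 3), 255, dtype=np.uint8)
divided = add_divider_lines(white)
```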

When students entered the projector area, an image of a scene they had visited during the field experience was displayed to acclimate them to the projector experiment scenario with a familiar scene. Participants were run in small batches of 3-4 subjects at a time, going through all scenes. As on the field trip, the professor began by asking them to look at each new scene while thinking about a question he posed; he timed them for 80 seconds, and then they discussed the current scene. Participants were allowed to walk around within a small area marked off by tables; however, we noticed that only experts tended to move about.

Given the low-light conditions, the scene camera of our mobile eye-tracking systems did not produce imagery of our typical capture quality. However, unlike most mobile conditions, the view presented to the observers was completely constructed, so we have the original high-resolution panoramas readily available. Most important is the eye-movement data itself, which is unaffected by low-light conditions because of the infrared LED illumination and infrared-pass filter on the eye camera. This way we have viable eye-tracking data and directly equivalent high-resolution imagery of the scenes displayed to the subjects.
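Because the displayed imagery is fully constructed, a gaze position expressed relative to the display can be mapped straight back onto the source panorama. A minimal sketch, assuming gaze is already normalized to the panorama's extent (the `gaze_to_panorama` helper below is hypothetical; real data would first need a scene-camera-to-display registration step):

```python
def gaze_to_panorama(gx, gy, pano_width=5120, pano_height=800):
    """Map a normalized gaze point (0-1 in each axis of the displayed
    panorama) to pixel coordinates in the original high-res image."""
    gx = min(max(gx, 0.0), 1.0)  # clamp to the display bounds
    gy = min(max(gy, 0.0), 1.0)
    return (int(round(gx * (pano_width - 1))),
            int(round(gy * (pano_height - 1))))
```

With a mapping like this, fixations recorded in the projector room can be analyzed directly against the same high-resolution panoramas captured in the field.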

We are in the process of developing analyses for the data captured during these experiments. Please check back later for example results and links to any publications.
Thank you!