Department of Brain and Cognitive Sciences
Department of Earth and Environmental Sciences: Paleomagnetic Research Group
MVRL: Multidisciplinary Vision Research Laboratory

An Active Vision Approach to Understanding and Improving Visual Training in the Geosciences


Data Processing



For this project we have three distinct stages of data processing, which we call the Processing, Visualization, and Analyses stages, as reflected in the links in the navigation bar above. Data processing deals with the videos and imagery from the mobile eye-tracking systems and the panorama capture systems. Data visualization is where we develop tools, diagrams, and methodologies for working with all of our imagery, with the eventual goal of developing a virtual field-trip experience. Data analyses are where we focus on probability, statistics, data metrics, and machine-learning procedures in the search for patterns and distinctions among, and between, our novice and expert geoscience subjects (observers on the field trip).


Mobile Eye-Tracking Data

To (pre-)process our mobile eye-tracking data, we currently use the Yarbus software developed by Positive Science, LLC. This software creates the geometric mapping between the imagery from the eye(-facing) camera and the scene(-facing) camera of the mobile eye-tracking system. Then, either through an automated method or by direct manual selection by technicians, the pupil centroid and corneal-reflection centroid are determined and mapped from the eye image into the scene image as a representation of the observer's gaze. This gaze position in the scene camera is called the Point-of-Regard (POR), and it indicates where the person was looking in the scene at the time that video frame was captured.
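For readers new to this pipeline, the sketch below illustrates the general idea of such a geometric mapping; it is not Positive Science's actual calibration, which is internal to Yarbus. A common stand-in is a least-squares fit from pupil-minus-corneal-reflection vectors in the eye image to known calibration-target positions in the scene image; all names and values here are hypothetical, and an affine fit is used only to keep the example short (higher-order polynomial fits are common in practice).

import numpy as np

def fit_affine_map(eye_xy, scene_xy):
    # Least-squares affine fit: scene ~= [ex, ey, 1] @ A, from
    # (N, 2) eye-image vectors to (N, 2) scene-image pixel positions.
    eye_h = np.column_stack([eye_xy, np.ones(len(eye_xy))])  # (N, 3)
    A, *_ = np.linalg.lstsq(eye_h, scene_xy, rcond=None)     # (3, 2)
    return A

def map_por(A, eye_point):
    # Map one pupil-minus-corneal-reflection vector into scene pixels (the POR).
    ex, ey = eye_point
    return np.array([ex, ey, 1.0]) @ A

# Hypothetical nine-point calibration grid and a synthetic scene mapping.
eye_cal = np.array([[-10, -8], [0, -8], [10, -8],
                    [-10,  0], [0,  0], [10,  0],
                    [-10,  8], [0,  8], [10,  8]], dtype=float)
scene_cal = eye_cal * np.array([25.0, -30.0]) + np.array([320.0, 240.0])
A = fit_affine_map(eye_cal, scene_cal)
print(map_por(A, (3.0, -2.0)))  # estimated POR in scene-camera pixels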

Eye-movement Event Detection

After (pre-)processing the mobile eye-tracking videos, we are left with a single video (the Yarbus output video) and a corresponding text file, which contains all the pupil centroids and the mapped POR positions, in pixel coordinates. These are just raw, frame-to-frame position values of the eye movements, so they do not indicate any eye-movement events (i.e., saccades or fixations). Our current eye-tracking videos run at 30 frames per second, so we are only able to determine fixation events, leaving inter-fixation events to be considered as saccade events or blinks. To determine the fixation events, we currently use the SemantiCode software developed by Daniel F. Pontillo, Thomas Kinsman, and Dr. Jeff B. Pelz. You can find out more about SemantiCode from the following publication and patent (a generic fixation-detection sketch follows them):
SemantiCode: using content similarity and database-driven matching to code wearable eyetracker gaze data (ETRA 2010)
Methods for assisting with object recognition in image sequences and devices thereof (U.S. Patent Application Publication No. 2012/0328150)
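SemantiCode's internals are described in the publication and patent above; purely for orientation, here is a minimal, generic dispersion-threshold (I-DT style) fixation detector over raw per-frame POR samples. The threshold values are illustrative placeholders, not the settings used in our processing.

def dispersion(window):
    # Spread of a window of (x, y) POR samples: (max x - min x) + (max y - min y).
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(por, max_dispersion=25.0, min_frames=3):
    # por: list of (x, y) POR pixel positions, one per video frame.
    # Returns (start_frame, end_frame) spans flagged as fixations; at
    # 30 frames per second, min_frames=3 is a ~100 ms minimum duration.
    fixations = []
    start = 0
    while start + min_frames <= len(por):
        if dispersion(por[start:start + min_frames]) <= max_dispersion:
            end = start + min_frames
            # Grow the window while the dispersion stays under threshold.
            while end < len(por) and dispersion(por[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end - 1))
            start = end
        else:
            start += 1
    return fixations

Everything outside the returned spans is then treated as inter-fixation material (saccades or blinks), matching the event classes described above.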


We have also begun to research applications of classical Recurrence Analysis, through the work of Ph.D. candidate Tommy P. Keane. This work is still in development, but some free-to-use code and diagrams can be found below. The most recent publication, at ETRA 2014, details the general development of statistical metrics for classical Recurrence Analysis and contrasts it with the recent trend in eye-tracking analyses that we would more precisely refer to as Reoccurrence Analyses. The second work, from the IEEE WNYIPW 2013, is a very brief example of how the innate temporal-pattern-matching characteristics of classical Recurrence Analysis could perhaps be applied to eye-movement event detection. Again, these works are still in development, but these introductory publications and the provided code may still prove of some use.


Eye-movement sequence statistics and hypothesis-testing with classical recurrence analysis


Published in the Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA 2014) by the ACM. DOI: 10.1145/2578153.2578174

Errata: On page 147, left column, second paragraph, the sentence beginning “You will notice further down in Algorithm...” was left in by mistake. It should have been removed between the reviewed and final drafts, since the algorithm it references was removed from the paper during the blind review rounds. Sorry for any confusion.

Provided below is a zip archive containing open-source python code files (free to use, share, and modify, with attribution please, and no affiliation implied), an example image, and example data for computing and visualizing Recurrence Analysis and Reoccurrence Analysis as discussed in the publication. The computations are in their own modules so that they can be incorporated into other software. This is a prototype while we develop optimized Cython code and integrate these methods into our mobile eye-tracking data viewing/analysis framework. Header comments in all files provide email information for sending any questions or comments. Thank you!

Software with Example Data (zip archive)

The GUI is implemented in python (RecurrenceReoccurrenceViewer.py) and requires matplotlib, Qt, and PySide, but the data-processing code itself is provided as two separate modules which can be used directly; a brief sketch of the core computation appears below. The data is assumed to be integers or floats inside python lists rather than depending upon numpy.
This code has not been optimized and is provided as-is. Feel free to contact Tommy P. Keane for questions, comments, or help.
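As one hedged illustration of what the archive's modules compute (the archive itself is the authoritative version; the function name here is ours alone), a thresholded recurrence matrix over a list of scalar samples can be built and displayed as follows, using plain python lists as the archive does.

def recurrence_matrix(series, radius):
    # series: list of ints or floats; R[i][j] = 1 where samples i and j
    # fall within `radius` of each other (a recurrence), else 0.
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= radius else 0
             for j in range(n)]
            for i in range(n)]

if __name__ == "__main__":
    import math
    import matplotlib.pyplot as plt
    signal = [math.sin(0.2 * t) for t in range(200)]  # toy periodic signal
    plt.imshow(recurrence_matrix(signal, radius=0.1),
               cmap="binary", origin="lower")
    plt.xlabel("sample j")
    plt.ylabel("sample i")
    plt.show()

The diagonal line structures visible in such a plot are the kind of temporal patterns that recurrence metrics quantify.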



Image Sequence Event Detection via Recurrence Analysis

To be published in the proceedings of the 2013 IEEE Western New York Image Processing Workshop (WNYIPW), being held on Friday, November 22, 2013 at RIT in Rochester, NY. IEEE WNYIPW 2013 Website (not yet published to IEEE Xplore)

Code, example data, and diagrams are currently provided without license stipulations while we are in the process of forming a licensing plan. Provided that the data is only being used for academic research or personal edification, there should be no issue once our official licensing terms are established. Thank you for your patience, support, and cooperation. Please feel free to contact us with any questions.

Most of the details of this work will be in the published proceedings (a link will be provided upon publication, after Nov. 22, 2013), so here we share just a brief overview and some diagrams, with a link at the bottom of this page to our current code and test data.


Figure 1. Example of frame from eye video (grayscale from IR-pass filter).

Figure 2. Illustrative diagram of 2-dimensional time series data and the embedding parameters.

Figure 3. Visualization of event-detection parameters and time delay embedding parameters with original data (video frames).

Figure 4. Example of recurrence plot output for event detection (temporary parameters implemented).
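To make the embedding parameters in Figures 2 and 3 concrete, here is a minimal sketch of time-delay embedding and of a recurrence matrix computed over the embedded vectors. The parameter names (m, tau) and any values are illustrative, and the published main.py remains the reference implementation.

def delay_embed(series, m, tau):
    # series: list of scalar samples; m: embedding dimension;
    # tau: delay in frames. Returns a list of m-dimensional vectors.
    n = len(series) - (m - 1) * tau
    return [[series[i + k * tau] for k in range(m)] for i in range(n)]

def embedded_recurrence(series, m, tau, radius):
    # Thresholded recurrence matrix over the embedded vectors,
    # using Euclidean distance between vectors.
    vecs = delay_embed(series, m, tau)
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    return [[1 if dist(a, b) <= radius else 0 for b in vecs] for a in vecs]

Events then show up as disruptions of the recurrent structure in the resulting matrix, as Figure 4 illustrates.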

Software Example (main.py)

Example Data (190 MB, .zip)

The example data archive contains a text file and a folder of eye images (190 MB, .zip). These are images taken directly from our mobile eye-tracker; a sketch of one way to read them appears below.
The software example is implemented in python (main.py), though the code should not be considered “polished”, and it depends upon matplotlib for plotting the recurrence matrix.
As above, the data is assumed to be integers or floats inside python lists rather than depending upon numpy.
This code has not been optimized and is provided as-is. Feel free to contact Tommy P. Keane for questions, comments, or help.
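As a hedged sketch of how the example eye images could feed such an analysis (main.py is the authoritative pipeline; load_grayscale_frame and the use of PIL here are stand-ins of ours), each frame can be reduced to a scalar via frame-to-frame mean absolute difference, producing a series in which blinks and rapid eye movements appear as excursions.

from PIL import Image  # an assumed dependency; any grayscale loader would do

def load_grayscale_frame(path):
    # Load one eye image as a flat python list of pixel intensities.
    return list(Image.open(path).convert("L").getdata())

def frame_difference_series(paths):
    # Mean absolute difference between consecutive frames.
    series = []
    prev = None
    for path in paths:
        cur = load_grayscale_frame(path)
        if prev is not None:
            series.append(sum(abs(a - b) for a, b in zip(cur, prev)) / len(cur))
        prev = cur
    return series

The resulting list of floats can be passed directly to the embedding and recurrence functions sketched above.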