25 September 2023

The eyes have it: How AI can now identify people by their eye movements


Kyle Wiggers* says researchers have developed a system that learns to identify individuals based on their eye movement behaviour when reading.



Our eyes wander as we read text, and not just in the figurative sense: between the rapid jumps known as saccades, they pause in fixations that last just 200–300 milliseconds on average.

Those movements are rich with subtext — they’re driven by cognitive processes involving vision, attention, language, and motor control — and according to new research from the University of Potsdam, Weizenbaum Institute for the Networked Society, and Leibniz Institute for Agricultural Engineering and Bioeconomy, they’re enough to identify a person pretty accurately.

A paper published on the preprint server arXiv.org, titled “A Discriminative Model for Identifying Readers and Assessing Text Comprehension from Eye Movements”, describes a system that learns to associate eye movement behaviour (including scanpaths, or gaze patterns) with individuals.

“Identification based on eye movements during reading may offer several advantages in many application areas,” the researchers wrote.

“Users can be identified unobtrusively while having access to a document they would read anyway, which saves time and attention.”

First, the team identified scanpaths that could be observed with an eye-tracking system and correlated them with “lexical features” of the words in the text (such as word frequency, word length in characters and syllables, and part of speech).
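To make that idea concrete, here is a minimal sketch of how such lexical features could be computed. The syllable heuristic, the toy reference corpus and the function names are illustrative assumptions, not the authors’ implementation.

import re
from collections import Counter

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def lexical_features(words, corpus_counts: Counter):
    # One feature set per word: relative corpus frequency, length in
    # characters, and length in syllables. Part of speech would be added
    # with a tagger (e.g. nltk.pos_tag) in a fuller pipeline.
    total = sum(corpus_counts.values())
    return [
        {
            "frequency": corpus_counts[w.lower()] / total,
            "chars": len(w),
            "syllables": count_syllables(w),
        }
        for w in words
    ]

# Example: frequencies estimated from whatever reference corpus is to hand.
corpus = Counter("the cat sat on the mat the end".split())
print(lexical_features(["The", "researchers", "read"], corpus))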

The resulting generative model inferred the likelihood of a given scanpath by taking into account not just the amplitude and duration of each saccade, but also the subtle differences across five saccade types (a code sketch of this labelling follows the list):

  • Refixate the current word at a character position before the current position.
  • Refixate the current word at a position after the current position.
  • Fixate the next word in the text.
  • Move the fixation to a word after the next word.
  • Regress to fixate a word occurring earlier in the text.
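A rough sketch of how a recorded fixation sequence could be labelled with these five types might look like the following; the data structure and names are hypothetical, not the authors’ code.

from typing import NamedTuple, List

class Fixation(NamedTuple):
    word_index: int   # which word in the text is fixated
    char_pos: int     # character position within that word

def saccade_type(prev: Fixation, curr: Fixation) -> str:
    # Classify the saccade that moves the eye from `prev` to `curr`
    # into one of the five types listed above.
    if curr.word_index == prev.word_index:
        return "refixation_backward" if curr.char_pos < prev.char_pos else "refixation_forward"
    if curr.word_index == prev.word_index + 1:
        return "next_word"
    if curr.word_index > prev.word_index + 1:
        return "forward_skip"
    return "regression"

def label_scanpath(fixations: List[Fixation]) -> List[str]:
    return [saccade_type(a, b) for a, b in zip(fixations, fixations[1:])]

# Example scanpath: word 0 -> word 1 -> back within word 1 -> skip to word 3 -> back to word 2
path = [Fixation(0, 2), Fixation(1, 1), Fixation(1, 0), Fixation(3, 2), Fixation(2, 1)]
print(label_scanpath(path))  # ['next_word', 'refixation_backward', 'forward_skip', 'regression']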

The team used the model to derive a Fisher kernel — a function that measures the similarity of two objects — that could compare scanpaths.
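The researchers derive their kernel from their own scanpath model, which isn’t reproduced here. As a generic illustration of the idea, the sketch below scores two sequences by the gradients of a simple Gaussian log-likelihood; the Gaussian stand-in model and the identity approximation of the Fisher information matrix are assumptions made for the example, not details from the paper.

import numpy as np

def fisher_score(x: np.ndarray, mu: float, sigma: float) -> np.ndarray:
    # Gradient of the log-likelihood of a sequence (say, saccade amplitudes)
    # under a Gaussian model with parameters theta = (mu, sigma),
    # summed over the sequence.
    d_mu = np.sum((x - mu) / sigma**2)
    d_sigma = np.sum((x - mu) ** 2 / sigma**3 - 1.0 / sigma)
    return np.array([d_mu, d_sigma])

def fisher_kernel(x: np.ndarray, y: np.ndarray, mu: float, sigma: float) -> float:
    # K(x, y) = g(x)^T F^{-1} g(y); the Fisher information matrix F is
    # approximated by the identity here for simplicity.
    return float(fisher_score(x, mu, sigma) @ fisher_score(y, mu, sigma))

rng = np.random.default_rng(0)
a = rng.normal(2.0, 1.0, size=50)   # "scanpath" summary for reader A
b = rng.normal(2.5, 1.2, size=50)   # "scanpath" summary for reader B
print(fisher_kernel(a, b, mu=2.0, sigma=1.0))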

To test the system’s accuracy, the researchers next recruited volunteers to read 11 texts presented in a randomised order, each fitting onto a single screen.

Their eye movements were recorded with an SR Research EyeLink 1000 eye tracker.

So how’d the AI perform?

In a test set of 62 readers, the Fisher kernel with lexical features (the team tested at least one model without them) achieved identification accuracy of up to 91.53 per cent.

That’s not quite as high as fingerprints’ 99.8 per cent, but the team claims it’s state of the art.

“We conclude that this model significantly outperforms the semiparametric model of [Abdelwahab, Kliegl, and Landwehr] in some cases, which, to the best of our knowledge, is the best published biometric model that is based on eye movements,” the researchers wrote.

It’s not the first time we’ve seen AI use eye movements to derive insights.

In a study conducted by the University of South Australia, University of Stuttgart, Flinders University, and the Max Planck Institute for Informatics in Germany, researchers describe a machine learning model that can predict traits like sociability, curiosity, and conscientiousness from a person’s eye movements alone.

* Kyle Wiggers is a technology journalist and AI correspondent at VentureBeat. He tweets at @Kyle_L_Wiggers and his website is kylewiggers.com.

This article first appeared at venturebeat.com.
