A videotape may show us what we see or point at in the real world. But it does not capture the movement of our eyes or gaze during an action. So how do you know what your gaze looks like while you are observing something?
Well, Paul MacNeilage and his team in the College of Science at the University of Nevada, Reno, have developed a technology to track our head, body and eye movements through space. The team has gathered a video database with more than 240 hours of first-person video. This database will help us understand the organization of the brain and human perceptual tendencies better than ever before.
To gather these videos, the team developed a headset to record them and keep track of every recording. The headset reportedly ‘has two cameras facing forward to see the world and two cameras facing the eyes to track eye movement’. Moreover, four labs are participating in the research, and each of them has five such headsets.
How does the headset work?
As mentioned above, the headset has two cameras facing forward and two cameras facing the eyes. The front cameras record what is happening in the surroundings, while the other two record the eye movements.
The headset also has an Inertial Measurement Unit (IMU), a motion sensor that combines an accelerometer and a gyroscope. The headset is connected to a laptop carried in a backpack.
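The article does not say how the IMU's two sensors are combined, but a complementary filter is one common way to fuse a gyroscope's fast-but-drifting rotation rate with an accelerometer's noisy-but-drift-free gravity reference. A minimal sketch of the idea (the function name and parameters are illustrative, not taken from the project):

```python
import math

def fuse_pitch(prev_pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Estimate head pitch (radians) by fusing gyro and accelerometer data.

    prev_pitch : previous pitch estimate (rad)
    gyro_rate  : angular velocity about the pitch axis (rad/s)
    accel_x/z  : accelerometer readings (m/s^2); gravity supplies an
                 absolute tilt reference when the head is roughly still
    dt         : time step between samples (s)
    alpha      : weight on the gyro path; higher trusts the gyro more
    """
    # Integrate the gyro: responsive, but accumulates drift over time.
    pitch_gyro = prev_pitch + gyro_rate * dt
    # Tilt from gravity: drift-free, but noisy during fast motion.
    pitch_accel = math.atan2(accel_x, accel_z)
    # Blend: gyro dominates short-term motion, accel corrects long-term drift.
    return alpha * pitch_gyro + (1 - alpha) * pitch_accel
```

Run at every sample, the small accelerometer weight slowly pulls any accumulated gyro drift back toward the true tilt.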
Paul MacNeilage, an assistant professor and neuroscientist in the College of Science at the University of Nevada, Reno, says:
We wanted to use mini-computers, but they weren’t robust enough to handle our needs, so we ended up with a laptop in a backpack. It makes the headset a little more user-friendly so our subjects wouldn’t be distracted by the tech. We decided to go with a Pupil Labs product for the base and added devices to it. We didn’t want it to be too distracting for others, either.
The technology used in the headset is quite advanced: it collects GPS data, runs four cameras, accesses software and records three video streams at once. Quite impressive, isn’t it?
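Recording several cameras plus GPS at once means the streams must later be aligned in time. The article does not describe how the team synchronizes them, but a typical approach is to timestamp every sample and match frames across streams by nearest timestamp. A hypothetical sketch (function names are illustrative only):

```python
from bisect import bisect_left

def nearest_index(timestamps, t):
    """Return the index of the sample in `timestamps` (a sorted list of
    times in seconds) that is closest to query time `t`."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbor is closer to t.
    before, after = timestamps[i - 1], timestamps[i]
    return i if after - t < t - before else i - 1

def align_streams(world_ts, eye_ts):
    """For each world-camera frame time, find the index of the
    eye-camera frame recorded closest to it."""
    return [nearest_index(eye_ts, t) for t in world_ts]
```

Because the cameras run at slightly different rates and start times, this nearest-timestamp pairing is what lets a gaze sample be overlaid on the matching world-view frame.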
MacNeilage adds:
The system measures head and body movement through space. This allows us to reconstruct visual input moment to moment and get insights on sensory-motor control. No existing database includes head motion.
To gather data, the team went out onto campus looking for objects, observing how behavior changes as people navigate their environment. Using a simple paradigm, MacNeilage and the team are trying to find out how people sample the visual environment with their eyes.
MacNeilage is hopeful that the Visual Experience Database (VED) can be used to support and shape future research across fields. The new technology might have various uses, including in the analysis and recognition of images. It can also serve neuroscience, vision science, cognitive science, and the digital humanities and arts.
Most importantly, VED technology will have multiple uses in the field of artificial intelligence. Other AI technologies carry biases built in from their training data, but the VED may come in handy as a source of new, more accurate data:
We aim to build our database with biases based on human perception. Our database will have biases consistent with human behavior, which could be an advantage for AI.
Compared with current AI systems, this new technology will be more accurate because it introduces human-centered biases. It may usher in a new era for artificial intelligence. The new VED technology can be used in AI-based applications such as self-driving cars and gadgets.
What’s more interesting is that this can also help us understand the human brain and senses better. Don’t you agree? Please share your views with us.