Sensor-equipped headbands could leverage neurological data to allow users to open and operate apps with their thoughts, no gestures required.
Hands-free, brain-controlled virtual environments are coming, as startups work to “hack” the human mind and use brain data to control prosthetic limbs, keyboards, and other machines.
Now, a newly granted Microsoft patent (issued January 8th) describes a device that would decode EEG (electroencephalography) readings to allow users to launch and operate certain apps using their minds.
Users would “train” the device to recognize how their neural signals respond when they focus on a particular object, and algorithms would learn from their brains’ behavior.
Electrodes placed along the scalp would read neural signals, based on “voltage fluctuations resulting from ionic current within the neurons of the brain.” This would ultimately allow users to draw or move objects in a digital environment using only their minds.
An algorithm training process would educate the system on the user’s unique brain signaling. The system would learn which signaling patterns correlate with intention to perform operative gestures, such as:
- finger pinch movements or swipes
- head or limb movements (tilting or nodding the head, raising or lowering an arm)
- facial movements (smiling, furrowing the brow, intentionally blinking)
- full body movements (squatting, twisting the torso, bending at the waist, jumping)
- hardware interactions (keyboard strokes, screen taps, mouse clicks, button pushes, etc.)
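At its core, the calibration process described above is a supervised classification problem: learn which signal patterns correspond to which intended gesture. Here is a minimal, purely illustrative sketch in Python, assuming hypothetical pre-extracted EEG feature vectors and gesture labels (none of these names or numbers come from the patent; real BCI pipelines use far richer features and models):

```python
import numpy as np

class IntentClassifier:
    """Toy nearest-centroid classifier: maps an EEG feature vector
    to the gesture the user was thinking about. Illustrative only."""

    def fit(self, features, labels):
        # Average the training vectors recorded for each gesture label.
        self.centroids = {
            label: np.mean(features[labels == label], axis=0)
            for label in np.unique(labels)
        }
        return self

    def predict(self, feature_vec):
        # Pick the gesture whose training centroid is closest to this reading.
        return min(self.centroids,
                   key=lambda g: np.linalg.norm(self.centroids[g] - feature_vec))

# Hypothetical calibration session: the user "thinks" each gesture while
# the headband records, yielding labeled feature vectors.
rng = np.random.default_rng(0)
gestures = np.array(["pinch"] * 20 + ["blink"] * 20)
features = np.vstack([rng.normal(0.0, 0.1, (20, 8)),   # pinch-like signals
                      rng.normal(1.0, 0.1, (20, 8))])  # blink-like signals

clf = IntentClassifier().fit(features, gestures)
print(clf.predict(rng.normal(1.0, 0.1, 8)))  # → blink
```

The design choice here mirrors the patent’s framing: the system does not hard-code what a “pinch” looks like neurologically, it learns each user’s unique signaling during a training phase.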
The patent explains,
“Neurological user intention data corresponding to a physical gesture is generated when a user thinks about and/or focuses on a movement, in the same way an amputee might think about moving an amputated limb.”
The neurological data gathered would then be communicated to a computer system used to run an application.
The application in question could be just about anything: the patent mentions apps such as “3D modeling software, a video game, a virtual reality or augmented reality simulator, an audiovisual service, a word processor, a spreadsheet application, a web browser, a database manager.”
Even an app capable of controlling mechanical tools or machinery — such as for “moving and operating robotic arms” — could be operated using the patent’s technology.
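Once an intent has been decoded, routing it to whichever application is in focus is an ordinary dispatch problem. A hedged sketch of that last step, with all application and command names invented for illustration (the patent names application categories, not these APIs):

```python
# Map decoded gesture intents to per-application commands.
# Every name below is hypothetical, chosen only to illustrate the routing.
APP_BINDINGS = {
    "word_processor": {"pinch": "select_word", "swipe": "next_page"},
    "robotic_arm":    {"pinch": "close_gripper", "swipe": "rotate_base"},
}

def dispatch(app, intent):
    """Translate a decoded intent into the active app's command,
    returning None for gestures the app has no binding for."""
    return APP_BINDINGS.get(app, {}).get(intent)

print(dispatch("robotic_arm", "pinch"))   # → close_gripper
print(dispatch("word_processor", "nod"))  # → None
```

This separation also explains how one headband could drive apps as different as a spreadsheet and a robotic arm: the decoding stage is shared, and only the intent-to-command bindings change per application.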
The patent offers word processing (specifically, copying and pasting) as another potential application of this technology. The neurological data gathered by the device could be granular enough to distinguish whether the wearer wanted only to copy and paste, or to copy and paste and match formatting.
This concept of understanding a person’s intentions based on neurological data applies to a number of larger research and development initiatives focused on brain-machine computing.
For example, a number of scientists and researchers are using brain data to understand when the intentions behind our so-called “free will” actions — such as moving a limb or turning our heads — first manifest at a neurological level.
By tracking the interval between deciding on an action (intent) and taking that action, scientists hope to help people predict or modify their decisions.
This means that in the brain-controlled Microsoft Word of the future, you might not even need to think about source formatting: your document might predict and paste your copy the way you like it before your neurons even start firing.