Tech companies and researchers are working to turn Elon Musk's vision of "neural lace" brain computing into reality. Their efforts aim to merge natural brain function with artificial intelligence.

Elon Musk’s vision for a “neural lace” interface has brought the concept of AI-connected brains to the forefront. While the idea might sound far-fetched, brain-machine interfaces are actually closer than most people think. Neuroscientists and technologists have been working on this concept for years.
Already, technology can enable paralyzed people and stroke victims to “type with the mind” and control external machines with their thoughts – no brain surgery required.
Simultaneously, the US Department of Defense is working to create more sophisticated brain-sensor implants, while IBM is feeding human brain data into AI systems – with the ultimate goal of facilitating direct computer-brain communication for healthcare purposes.
In this analysis, we examine current research on brain-machine interfaces, the next hurdles to achieving AI-augmented brains, and existing and potential applications for this mind-altering technology, from gaming and education to behavior modification.
TABLE OF CONTENTS
- Advances toward AI ‘brain training’
- Mastering ‘mind control’ of machines
- Robotic commands from the nervous system
- Moving to ‘precision communication’ with the brain
- Startups seek to help us ‘co-evolve’ with technology
- The commercial applications of brain signals — gaming, education, market research, and beyond
- Connected cortexes of the future
Researchers across many different fields and industries are already enhancing human capabilities by integrating the nervous system with machines.
Research teams and startups in the “brain tech” space tend to have one of two overlapping goals:
- Improving the lives of people with neurological damage (due to afflictions such as stroke or paralysis) by connecting their brains to computers; and
- Eliminating the barriers between human minds and machines to augment or modify natural human brain function with computing and artificial intelligence.
The latter goal is the long-term vision of tech pioneer Elon Musk, who famously believes humans must bring the power of the internet into our minds to remain relevant in an AI-driven society of the future.
As he explained in remarks at a June 2016 conference:
“If you assume any rate of advancement in AI, we will be left behind by a lot. Even the benign situation if you have an ultra-intelligent AI, we would be so far below them in intelligence that we’d be like a pet, basically, like a house cat.” –Elon Musk
Musk’s brain-tech startup Neuralink is using its $27M in disclosed funding to advance this vision for a “direct cortical interface” (or “neural lace” technology) that would allow artificial intelligence systems to be layered into our cognitive function – giving us the enhanced smarts and “functionality” necessary to keep ahead of AI machines.
The benefits that artificial intelligence could provide to our normal brain function are practically endless.
Broadly, AI systems and machine learning models – including the “neural networks” loosely modeled on the brain – are trained, using vast amounts of data, to rapidly process information, recognize patterns, and make predictions.
If such networks were trained on our own neurological data, and then wired back into our normal brain function, AI could be used to “train” and improve our thinking and behavior.
Practical applications for this AI-brain tech are already emerging.
Advances toward AI ‘brain training’

Researchers at IBM are developing a system to analyze brain waves and use AI to predict epileptic seizures.
Today, epileptic seizures can only be detected moments before they happen. With their work, the IBM team hopes to develop a system that could give someone enough advance notice of an epileptic episode that they could get to a safe place, or call a friend for help.
- IBM’s work uses advanced machine learning (or “deep learning”) to analyze “neurophysiological signals” from epilepsy patients. These signals are drawn from electroencephalogram (EEG) and local field potential (LFP) data acquired from sensors outside the body.
- The brain data from those sensors is computed through an experimental neural network running on IBM’s brain-inspired (aka “neuromorphic”) microchip, TrueNorth – which is capable of capturing and processing data simultaneously, in real time.
- Because TrueNorth is capable of real-time computing, IBM believes it could someday work in tandem with a brain-monitoring implant that would assess EEG or LFP data 24/7 to spot the patterns of an oncoming seizure.
- Ideally, the implant would then notify its wearer of an imminent seizure by sending a wireless signal to a smartphone.
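IBM hasn’t published the model’s internals here, so the following is only a minimal sketch of the general pattern – windowed EEG data, crude band-power features, and a hypothetical binary “pre-seizure” label – not a reproduction of the TrueNorth system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def band_power_features(window, fs=256):
    """Crude spectral features for one EEG window: total power in the
    delta/theta/alpha/beta/gamma bands (illustrative, not IBM's method)."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 70)]
    return np.array([power[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

# Hypothetical training set: 1-second EEG windows labeled 1 if a seizure
# followed within some horizon, else 0. Real labels would come from
# clinically annotated patient recordings.
rng = np.random.default_rng(0)
windows = rng.standard_normal((500, 256))   # 500 windows x 256 samples
labels = rng.integers(0, 2, size=500)       # placeholder labels
X = np.array([band_power_features(w) for w in windows])

clf = LogisticRegression(max_iter=1000).fit(X, labels)

def monitor(stream):
    """Score each incoming window and raise an alert on high risk."""
    for window in stream:
        risk = clf.predict_proba(band_power_features(window).reshape(1, -1))[0, 1]
        if risk > 0.9:
            print("warning: elevated seizure risk - notify wearer")
```

In IBM’s vision, the scoring half of this loop would run on the implant’s neuromorphic chip rather than a general-purpose processor, with only the alert leaving the device.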
“Real-time analysis of brain-activity data at the point of sensing will create the next generation of wearables at the intersection of neurobionics and artificial intelligence.” –IBM research study
IBM’s work has exciting potential, but the system is still at an early research stage – so far, the primary focus has been on training the neural network to interpret EEG and LFP data.
And IBM’s challenges with the project speak to the larger barriers facing neuro-tech researchers across many different disciplines.
Today, three separate systems would be required to make IBM’s vision of epileptic-seizure detection possible:
- A sensor system that takes in EEG and LFP data from the brain, either from a brain implant or sensors worn on the skin
- A microchip (or other computing system) capable of rapidly processing that data
- A trained deep learning algorithm capable of using that data to predict seizures with a high level of accuracy
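In software terms, those three systems compose into a simple sense-process-predict pipeline. The sketch below is purely schematic – the interface names are hypothetical, not IBM’s – but it makes the division of labor concrete:

```python
from typing import Iterator, Protocol
import numpy as np

class Sensor(Protocol):
    def windows(self) -> Iterator[np.ndarray]: ...   # streams EEG/LFP windows

class Processor(Protocol):
    def features(self, window: np.ndarray) -> np.ndarray: ...  # on-chip signal processing

class Predictor(Protocol):
    def seizure_probability(self, features: np.ndarray) -> float: ...  # trained model

def run(sensor: Sensor, chip: Processor, model: Predictor, threshold: float = 0.9):
    """Wire the three systems together: sense -> process -> predict -> alert."""
    for window in sensor.windows():
        p = model.seizure_probability(chip.features(window))
        if p > threshold:
            yield p  # e.g., forward a wireless alert to a paired smartphone
```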
A future in which all three of those systems can seamlessly work together inside the brain is likely coming, but will require further advances in the implants and other devices that enable communication between computers and brains.
These devices are known as brain-machine interfaces (BMIs) and brain-computer interfaces (BCIs).
Mastering ‘mind control’ of machines

Facebook made headlines in April 2017 with the announcement that it had 60 engineers working on a non-invasive “direct brain interface” that could make it possible to type with your mind.
That may sound exciting, but brain tech is one area where the social media giant is playing catchup with the broader scientific community.
Research on BCIs and BMIs – much of it funded by government sources – has been going on since the 1990s. (As we’ll discuss in a later section, the work has achieved impressive outcomes on “mind controlling” keyboards and other devices.)
The distinctions between BCIs and BMIs are nuanced but important.
Brain-computer interfaces (BCIs) capture, measure, and translate macroscopic brain information – typically from electroencephalograms (EEGs), which record the brain’s electrical activity.
- EEGs measure information that is known to control behavior, but they acquire it in a diffuse, unspecific way. Because EEG data can be picked up from brain scans or from electrodes (aka sensors) affixed to the scalp or elsewhere on the body, BCIs do not always involve implanting devices in the brain; some use sensor-equipped headbands, for example, to pick up and decode human brain signals.
- BCIs often take in EEG data, process it through artificial intelligence and/or machine learning algorithms, and then use the resulting commands to direct external hardware or software to execute a task – as with IBM’s seizure-prediction work above.
Brain-machine interfaces, on the other hand, probe the brain for more specific information:
- BMIs send sensors into different “levels” of the brain to extract microscopic or “mesoscopic” information – such as the individual spikes of electrical activity inside brain cells, or the broader electrical-current activity surrounding cells (known as the brain’s “local field potentials” or LFP data). By working at more granular levels, BMIs can modify the way that neurons signal each other.
- With some exceptions, extracting that information typically requires implanting devices inside the cortex. The neurochip being developed by brain-tech startup Kernel (discussed below) is one example.
In their current iterations, both BCIs and BMIs rely on many of the same technologies – such as sensor-equipped microchips and minimally invasive brain implants. And today, they both do essentially the same thing: Read and “decode” an individual’s cognitive intentions, based on EEG or LFP data, and turn that data into robotic commands.
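At its simplest, that decode step is a classification problem: score a handful of candidate intents against incoming brain-signal features, and emit the winning command. A toy sketch (the intent classes are arbitrary and the weights are untrained placeholders – a real decoder would be fit to labeled EEG or LFP recordings):

```python
import numpy as np

# Hypothetical mapping from decoded intent classes to device commands.
COMMANDS = {0: "cursor_left", 1: "cursor_right", 2: "click", 3: "rest"}

def decode_intent(features: np.ndarray, weights: np.ndarray) -> str:
    """Linear decoder: score each intent class and pick the most likely.
    Real systems learn `weights` from calibration sessions."""
    scores = weights @ features
    return COMMANDS[int(np.argmax(scores))]

# Example: 4 intent classes x 8 signal features, random (untrained) weights.
rng = np.random.default_rng(1)
weights = rng.standard_normal((4, 8))
print(decode_intent(rng.standard_normal(8), weights))
```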
Robotic commands from the nervous system
Today, BCIs and BMIs can capture enough brain information and decode it rapidly enough to control external hardware and software – enabling the operation of things such as keyboards or artificial limbs, as the examples below show.
As BMIs and BCIs advance to process even higher volumes of data at an even more rapid pace, the devices will become more useful for helping to bring AI into the brain. For now, however, researchers are mainly using or developing BCIs and BMIs to mitigate the effects of neurological disorders.
The BrainGate research collaboration first began work on its “intracortical brain-computer interface” over 15 years ago. The project – which comprises clinicians, scientists, and engineers from Brown University, Massachusetts General Hospital, and several other institutions – aims to help restore communication and mobility to people with paralysis.
In its current iteration, the BrainGate Neural Interface System implants a tiny silicon chip – just one-sixth of an inch square – into the motor cortex of the brain. The 100 tiny sensors (or “micro-electrodes”) that protrude from the chip penetrate the brain to about the thickness of a quarter, where they can tap into the electrical activity of individual nerve cells.
With these micro-electrodes in place, the neural signals associated with a patient’s intent to move a limb can be “decoded” by a computer in real-time and used to control machines.
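BrainGate’s published decoders are more sophisticated than this (its papers describe Kalman-filter-based approaches), but the core idea can be sketched as a linear map from binned spike counts to intended cursor velocity – with synthetic calibration data standing in for a real recording session:

```python
import numpy as np

# Toy stand-in for the decode step: map binned spike counts from ~100
# electrodes to a 2D cursor velocity.
rng = np.random.default_rng(2)
n_electrodes = 100

# Calibration: record spike counts while the participant imagines known
# cursor movements, then fit velocity = spike_counts @ W by least squares.
spike_counts = rng.poisson(5, size=(1000, n_electrodes))   # 1000 time bins
true_velocity = rng.standard_normal((1000, 2))             # placeholder targets
W, *_ = np.linalg.lstsq(spike_counts, true_velocity, rcond=None)

def decode_velocity(counts: np.ndarray) -> np.ndarray:
    """One time bin of spike counts in, (vx, vy) cursor velocity out."""
    return counts @ W

print(decode_velocity(rng.poisson(5, size=n_electrodes)))
```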
Earlier this year, the BrainGate technology enabled three clinical trial participants with paralysis to “type” via direct brain control – at the highest speeds and accuracy levels reported to date – by “pointing and clicking by thinking about the movement of their own hand.”
Watch it happen in the video below.
Other BCI research being done at the University of Adelaide in Australia might soon help stroke patients recover from debilitating damage by measuring brain electrical signals from the surface of the scalp:
Every time a patient imagines performing a specific motor function, such as grasping an object, the BCI microchip reads the electrical signals and transmits them to a computer. An advanced algorithm then interprets the brain signals so that a computer can supply the right “sensory feedback” signal to activate target muscles to move.
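The study’s exact pipeline isn’t detailed here, so the sketch below uses a common recipe for detecting imagined movement in scalp EEG – power in the mu and beta rhythms, which weaken when movement is imagined, fed to a linear classifier – with synthetic data in place of real recordings:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # Hz; assumed EEG sampling rate

def mu_beta_power(trial: np.ndarray) -> np.ndarray:
    """Power in the mu (8-12 Hz) and beta (13-30 Hz) bands per channel --
    the rhythms that desynchronize during imagined movement."""
    freqs = np.fft.rfftfreq(trial.shape[-1], d=1.0 / FS)
    psd = np.abs(np.fft.rfft(trial, axis=-1)) ** 2
    mu = psd[..., (freqs >= 8) & (freqs <= 12)].mean(axis=-1)
    beta = psd[..., (freqs >= 13) & (freqs <= 30)].mean(axis=-1)
    return np.concatenate([mu, beta])

# Hypothetical calibration data: 100 trials x 8 channels x 2 s of EEG,
# labeled 1 for "imagined grasp" and 0 for rest.
rng = np.random.default_rng(3)
trials = rng.standard_normal((100, 8, 2 * FS))
labels = rng.integers(0, 2, size=100)

X = np.array([mu_beta_power(t) for t in trials])
clf = LinearDiscriminantAnalysis().fit(X, labels)

# At rehab time, a detected "grasp" intent would trigger the sensory
# feedback signal that activates the target muscles.
intent = clf.predict(mu_beta_power(trials[0]).reshape(1, -1))[0]
```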
As the process repeats, the stroke victim’s brain and body re-learn how to work in harmony: One patient assessed for a recent study (published in the journal Royal Society Open Science) achieved a 36% improvement in hand motor function in just ten 30-minute training sessions.
Moving to ‘precision communication’ with the brain
As impressive as the aforementioned BCIs are, they are many miles removed from Elon Musk’s vision for a direct, invisible interface between the mind and the digital world.
And even Musk recognizes that reaching a future of “connected cortexes” starts with addressing health problems: He says Neuralink aims “to bring something to market that helps with certain severe brain injuries” by ~2021.
For science to reach a point at which always-on, internet-connected devices and systems could modify the organic function of our brains – rather than simply sending neural signals that engage specific actions – we’ll first need implantable brain sensors capable of reading information from many millions of individual brain cells simultaneously, as well as decoding their signals in real time.

Musk isn’t the only one putting millions behind that idea. DARPA, the research arm of the US Department of Defense, is pursuing that goal with its Neural Engineering System Design (NESD) program.
The NESD’s aim is to develop “an implantable system able to provide precision communication between the brain and the digital world.”
As part of the initiative, DARPA has awarded a $15.8M grant to a Columbia Engineering team working to develop a BCI that would read from more than one million electrodes on a single microchip. (BrainGate’s, for comparison, reads just 100.)
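To get a feel for that jump in scale, here is a back-of-envelope estimate of the raw data rates involved. The sampling rate and bit depth are typical figures for extracellular recording, not the Columbia team’s published specs:

```python
# Rough bandwidth estimate for reading out 1M electrodes vs. 100.
# Assumed: 30 kHz sampling per electrode, 16 bits per sample.
def raw_rate(n_electrodes, fs=30_000, bits=16):
    return n_electrodes * fs * bits / 8  # bytes per second

print(f"BrainGate-scale (100 electrodes): {raw_rate(100) / 1e6:.1f} MB/s")
print(f"Million-electrode chip:           {raw_rate(1_000_000) / 1e9:.1f} GB/s")
```

Under those assumptions the readout grows from roughly 6 MB/s to 60 GB/s – a gap that helps explain why precision communication with the brain is a hardware problem as much as an algorithmic one.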
The microchip development work is part of the Columbia team’s research toward a “bioelectric interface” for the visual cortex.
The project’s lead researcher, Ken Shepard, says the interface would “allow patients who have lost their sight to discriminate complex patterns at unprecedented resolutions.” (Reports on Shepard’s latest work have also speculated that interfaces to the visual cortex could someday enable computers to “see” what we see.)
Startups seek to help us ‘co-evolve’ with technology
Startups are also working to advance BCIs and get us closer to a brain-machine interface future, for medical purposes and beyond.
Paradromics is working to develop “massively parallel neural interfaces” capable of decoding brain information in real time. Paradromics says its “next-generation brain-machine interfaces” will ultimately “increase the data transmission rate between brains and computers 1,000 fold.” They call this Broadband For The Brain.
As part of an $18M DARPA contract awarded in July 2017, Paradromics first aims to develop an implantable device that can help stroke victims relearn to speak.
Kernel, which has raised $100M, hopes to initiate clinical trials for a brain-implantable microchip it calls a “neuroprosthetic.” The device is similar to one that was shown to restore memory and improve information recall in rats (per a 2011 study).
As with Paradromics, Kernel’s work is currently focused on chip development related to neurodegenerative diseases. But Kernel claims its tech will someday “mimic, repair, and improve” human cognition using AI. The team perceives the work as a scientific contribution toward “unlocking the neural code” and “extending the life of our mind.”
Kernel’s founder and CEO Bryan Johnson, who previously led Braintree to an $800M acquisition by eBay in 2013, launched the new company in 2016. Like Musk, he envisions his company as helping humans keep up with technological advances.
Of course, in Johnson and Musk’s vision, helping human intelligence advance is framed as a noble, if grandiose, goal.
But as Johnson and other entrepreneurs recognize, furthering our “co-evolution” with technology will not only help society manage the impacts of AI and automation, but also create plenty of commercial value.
The commercial applications of brain signals — gaming, education, market research, and beyond
Using BCIs and “wearable BMIs,” some startups are already preparing to cultivate brain intelligence from individuals and use it for commercial purposes.
- Neurable, which has $2.3M in funding, is building non-invasive BCIs for “immersive computing” in virtual-reality and mixed-reality gaming: By detecting and analyzing changes in brain activity, Neurable believes it can use thoughts to guide the trajectory of video games and other media content viewed through AR/VR headsets.
- BrainCo, with $5.5M raised, is applying a similar concept to the classroom: By using brain-signal detection to monitor and analyze the attention levels of students wearing its Focus 1 BMI headset, BrainCo says it can help schools “optimize student engagement” and craft better teaching strategies. (BrainCo’s founder launched the startup to improve education in China.)
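BrainCo doesn’t disclose how its headset scores attention. One widely used proxy in the EEG literature, however, is the “engagement index” – beta-band power relative to alpha plus theta – which the hypothetical sketch below computes over a stream of one-second windows:

```python
import numpy as np

FS = 256  # Hz; assumed headset sampling rate

def band_power(window, lo, hi):
    """Total spectral power of one EEG window between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    psd = np.abs(np.fft.rfft(window)) ** 2
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def engagement_index(window):
    """Classic attention proxy: beta power over alpha + theta.
    Higher values read as more engaged. Not BrainCo's actual metric."""
    theta = band_power(window, 4, 8)
    alpha = band_power(window, 8, 13)
    beta = band_power(window, 13, 30)
    return beta / (alpha + theta)

# Example: score one minute of synthetic EEG from a single student.
rng = np.random.default_rng(5)
stream = rng.standard_normal((60, FS))  # 60 one-second windows
scores = [engagement_index(w) for w in stream]
print(f"mean engagement over the minute: {np.mean(scores):.2f}")
```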
As the BrainCo video below shows, the idea of optimizing teaching and learning is all about students’ brain data: The information culled from BrainCo devices would be used by the company’s clients for objective analysis of how to improve teacher and student performance.
The classroom is also just the beginning of BrainCo’s big-picture vision for creating a massive “brainwave database” – which it intends to utilize in the clinical field, as well.
BrainCo’s brainwave-database vision may be grand, but the concept of using brain data for purposes related to attention-monitoring and engagement has already infiltrated the worlds of marketing science and entertainment.
For example, Moran Cerf – a renowned neuroscientist and marketing professor at the Kellogg School of Management – has performed studies for brands and companies in the film industry to show them how they can use brain intelligence in their filmmaking.
By understanding how attention and engagement levels rise or drop throughout the course of a movie, for example, entertainment studios can modify trailers (or even elements of plot and character) to better appeal to audiences.
In many ways this is an extension of the way companies already use data on human behavior to guide strategy – just as Netflix uses the data it collects on users’ viewing activity to guide the content it produces.
In the BMI/BCI version of this, the data on individuals is more granular and instantaneous – tracking not only what content users consume, but how their brains behave as they consume it.
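Aggregated across a viewing panel, that moment-by-moment signal becomes a fairly ordinary analytics problem. A sketch with entirely synthetic scores (the panel size, runtime, and threshold are arbitrary):

```python
import numpy as np

# Hypothetical panel data: per-second engagement scores for 30 viewers
# across a 120-minute cut (7200 seconds), e.g., from EEG headsets.
rng = np.random.default_rng(4)
scores = rng.random((30, 7200))

mean_engagement = scores.mean(axis=0)   # panel average per second
baseline = mean_engagement.mean()

# Flag stretches where the panel's attention sags well below baseline --
# candidates for re-cutting a trailer or tightening a scene.
lulls = np.flatnonzero(mean_engagement < 0.8 * baseline)
print(f"{lulls.size} seconds fall below 80% of baseline engagement")
```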
BCIs and BMIs could ultimately help the entire marketing discipline bypass the flaws of focus groups, which rely on consumers being reasonably honest about their feelings and biases. EEG data can already provide broad information on brain responses to stimuli, but internet-connected brain devices could take the analysis a step further – giving brands real-time insight into how their target audiences’ brains perceive and respond to content or products.
Connected cortexes of the future
Cerf’s work also goes much further than the movies – and much deeper inside the human head.
As he explains in the interview below, he’s one of a small subset of scientists conducting research studies inside live human brains – typically those of wide-awake patients waiting for brain surgery.
The knowledge culled from that neurological research is being used for many multi-disciplinary purposes. One of those is understanding what Cerf calls “free will” – essentially the voluntary actions we undertake with our own human agency (such as moving a limb, or selecting something from a choice of options).
By unlocking greater knowledge about the electrical signals that underpin our voluntary actions, scientists may be able to plug the related brain-signal data into AI systems to predict – or perhaps even modify – our decisions.
“We can trace the moments in your brain from the action of, say, raising your right hand, back to how far before you did it could we have ‘known’ that you would do that… The question is can we know about it before you know it yourself.”
The applications for this “free will” prediction also link back to the BCI research being done with stroke victims.
When you consider that neuroscientists like Cerf may be able to use AI to “predict” that a person with neurological damage is about to think about moving a limb, you can see the opportunities for technologists to use that information for human benefit. (Perhaps by designing a robotic system that pre-emptively “knows” to execute walking movements or other natural gestures for paralyzed people.)
The opportunities for using AI and neurodata to predict actions and decisions also extend to behavior modification and health improvement.
An AI system like IBM’s, for example, can today be trained to decode brain data to predict seizures. With further “predictive knowledge” about how our brains behave as they make decisions, the same tools could “know” that you’re about to make a semi-habitual, unhealthy choice – such as reaching for a salty snack.
With internet connectivity interacting with your brain signals, it’s feasible to think your mind could be auto-tuned to compel or incentivize you toward a better decision (like eating the almonds instead of the chips).
Without going into specifics, Cerf says this kind of behavior modification work is underway.
And that’s just one example: With machines inside our minds, we could learn information and execute tasks – such as performing mathematical calculations – without relying solely on our inherent cognitive function, instead integrating the machines directly into our thinking and automating those calculations.
So how close are the “connected cortexes” of the future?
As with so many other innovations, it is not simply a matter of developing the tech.
Connected cortexes will rely on the support of regulators: Researchers will likely encounter plenty of restrictions as they pursue clinical trials for new BMI and BCI innovations, and neurotechnologists will most likely have only limited access to human volunteers outside the outpatient-surgery setting.
The social and economic impacts of incorporating AI into the brain will undoubtedly play into regulatory support or resistance as well. Just as one can imagine plenty of positive applications – such as AI-powered brains helping to modify one’s own bad behavior – it’s easy to imagine a long list of negative ones.
Elon Musk, for one, is undaunted: He has argued that brain-machine interface devices could be in use by people without disabilities within the next eight to ten years. (Notably, BrainCo’s wearable BMI prototypes were already demoed successfully at CES 2017.)
As our “co-evolution” alongside the machines moves forward, we may see many more startups and mature technology companies seeking out ways to access and utilize brain information for healthcare and business benefits.