Evan Chen

Brain-Computer Interfaces

October 18, 2024

~Written July 2024

On the journey toward a fully realized future in which brain-computer interfaces (BCIs) integrate deeply into our lives, there are three loosely defined stages of progression: diagnostics, collaboration, and the information highway.

Diagnostics

Most established scientific research and understanding of the brain falls within this phase. Given the human brain's complexity, various diagnostic devices and procedures are used to evaluate its different regions and conditions.

Common devices include electroencephalograms (EEGs), which track neuroelectrical activity; magnetic resonance imaging (MRI) machines, which produce detailed structural images; transcranial ultrasound probes, which measure blood flow in vessels; and near-infrared spectroscopy (NIRS) devices, which detect changes in blood oxygen levels. Note that each device evaluates the brain through a different physical property: electricity, magnetism, acoustics, and electromagnetism (i.e., light), respectively.

BCI development has thus far relied on these medical devices, with researchers aiming to associate observed neural activity with specific intents that translate into actions. Without getting too deep into the specifics of healthcare procedures or comparisons between investigative methods and devices, these practices form the core hardware infrastructure and scientific research atop which current BCI startups are building.

Collaboration

Leveraging existing medical devices and sensors alongside generative AI, most BCI startups aim to repeatedly prompt and capture neural activity to train AI models that associate particular brain signals with user intents and actions. This collaboration spans a range of approaches, each at a different level of research and development.

Brute Force, Narrow Control

The most direct path to coordinating user intent with digital actions is by leveraging existing medical devices that capture neural activity, repeatedly prompting users to isolate brain patterns, and then directly associating each pattern with a specific action.

Gaming

In late January 2023, Twitch streamer Perrikaryal took social media by storm when she played Elden Ring with just her thoughts via a commercial EEG device. The headset continuously monitored her neural activity while she played, then executed attack commands when certain patterns were picked up. She trained this capability by repeatedly visualizing a small set of actions (e.g., picking up a box) until the device could consistently recognize the resulting neural patterns, then binding those patterns to button inputs.

Not only was achieving this control challenging and time-consuming, but functionality was also extremely limited. Reaching acceptable performance on just one neural pattern, which only trended toward 60-70% accuracy, took hours of manual, repetitive visualization training. And recognition triggers were binary: either the signal was recognized and commands got executed, or... nothing. Furthermore, reliably distinguishing between more than a few neural patterns, a foundational capability, requires large amounts of generalized, population-level training data. To say nothing of the stratification and depth of data, across multiple physical modalities, sensor types, demographics, and use cases, that will likely be necessary for substantive training.
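
As a rough illustration, that train-then-bind loop and its binary trigger might look like the sketch below: fit a simple classifier on band-power features from visualization trials, then fire the bound input only when confidence clears a threshold. The sampling rate, frequency band, and `press_key` hook are all invented for illustration, not details of the actual setup.

```python
# Minimal sketch of the "visualize, train, bind" loop (all details assumed).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256  # assumed EEG sampling rate in Hz

def band_power(windows):
    """Crude feature: log power per channel in the 8-30 Hz motor-imagery band."""
    freqs = np.fft.rfftfreq(windows.shape[-1], d=1.0 / FS)
    psd = np.abs(np.fft.rfft(windows, axis=-1)) ** 2
    mask = (freqs >= 8) & (freqs <= 30)
    return np.log(psd[..., mask].mean(axis=-1) + 1e-12)

def press_key(key):
    print(f"pressed {key}")  # stand-in for a real game-input hook

# X: (n_trials, n_channels, n_samples) windows recorded while the user
# repeatedly visualized the action (y=1) or rested (y=0); synthetic here.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(120, 8, 2 * FS)), rng.integers(0, 2, size=120)
clf = LinearDiscriminantAnalysis().fit(band_power(X), y)

def on_new_window(window, threshold=0.8):
    """Binary trigger: either the pattern is recognized and fires, or nothing."""
    prob = clf.predict_proba(band_power(window[None]))[0, 1]
    if prob >= threshold:
        press_key("attack")
```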

Neuroprosthetics

For individuals with motor impairments considering prosthetics, especially devices that can be precisely controlled, there are two primary signal pathways from which user intent can be captured: myoelectric (muscles) or neural (brain). Myoelectric prosthetics are controlled by the electrical signals that muscles and surrounding nerve endings generate as the user moves the rest of their body. BCI devices involve direct communication between the brain and the prosthetic, associating specific neural activity in the motor cortex with the prosthetic's range of movement.
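
For the neural pathway, a common decoding scheme in the literature maps binned motor-cortex firing rates to end-effector velocity. The sketch below is a hedged stand-in using ridge regression on synthetic firing rates; real systems more often use Kalman-filter-style decoders, and every number here is invented.

```python
# Hedged sketch: linear decoding of 2D prosthetic velocity from firing rates.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
rates = rng.poisson(5.0, size=(1000, 64)).astype(float)  # 64 units, binned spike counts
true_map = rng.normal(size=(64, 2))                      # invented ground truth
velocity = rates @ true_map + rng.normal(scale=0.5, size=(1000, 2))

decoder = Ridge(alpha=1.0).fit(rates, velocity)
vx, vy = decoder.predict(rates[:1])[0]  # velocity command for one time bin
print(vx, vy)
```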

However, in the realm of BCI, there is a significant gap between motor control and human understanding. Neural signals for muscle movement are spatially specific and temporally precise, allowing structured interpretation and control, but strictly within the bounds of physical movement. Human understanding likely sits at the other end of the spectrum: neural activity for thoughts and words is distributed across the brain, and the boundaries between where one thought ends and another begins are undefined and completely fluid.

Looking forward

Though there have been early successes across many use cases, improving signal processing and optimizing decoding will be integral to continued progress.

Signal processing faces two primary challenges: signal clarity and feature extraction.

  • Signal clarity can suffer from noise, artifacts, and impediments as sensors capture neural signals. Various methods can filter and reduce interference, but layering multiple techniques adds complexity and risks degrading important signal components. It would be interesting to see novel noise-reduction methods for using multiple sensor types simultaneously, or sufficiently generalizable signal-clarity frameworks that apply across swaths of device types (i.e., open-source interference mitigation).
  • Feature extraction involves isolating key characteristics or patterns in the cleaned signals of neural activity. These selected features are used to decode intent, so granularity and sensitivity are key to maximizing brain-computer interactions. However, deciding which sensor types to use and which features to extract is complex. Sensors vary in cost, commercial readiness, intrusiveness, resolution (i.e., temporal, spatial), and physical domain (i.e., electricity, magnetism, electromagnetism, acoustics, thermal). This matters because sensor composition determines the range of features that can be extracted, and the extracted features determine how captured signals map to user intents. Put differently, given a set of devices and features, going too broad risks significant complexity and reduced speed, while going too narrow risks inflexibility and obsolescence. A minimal sketch of both steps follows this list.
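
As an illustration, consider raw multichannel EEG sampled at 256 Hz with 60 Hz mains interference. The sketch below runs a band-pass plus notch filter for signal clarity, then computes per-channel band power as features; the sampling rate, filter choices, and bands are illustrative assumptions, not recommendations.

```python
# Hedged sketch of the signal clarity -> feature extraction pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, welch

FS = 256  # assumed sampling rate in Hz

def clean(raw):
    """Signal clarity: 1-40 Hz band-pass plus a 60 Hz notch for mains noise."""
    b, a = butter(4, [1, 40], btype="bandpass", fs=FS)
    x = filtfilt(b, a, raw, axis=-1)
    b, a = iirnotch(60, Q=30, fs=FS)
    return filtfilt(b, a, x, axis=-1)

def extract_features(x):
    """Feature extraction: mean power per channel in canonical EEG bands."""
    freqs, psd = welch(x, fs=FS, nperseg=FS, axis=-1)
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    return np.stack([psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
                     for lo, hi in bands.values()], axis=-1)

# raw: (n_channels, n_samples) -> features: (n_channels, 3 bands)
features = extract_features(clean(np.random.default_rng(0).normal(size=(8, 10 * FS))))
```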

Decoding is the connection between the features extracted from neural recordings and the instructions for the computer to execute. Intent decoding is still nascent, as scientific research continues to reveal more about the brain. Development thus far relies heavily on precise sensor placement, requires repetitive user-specific training data, and offers only a limited set of available mappings.
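
In its simplest form, that connection can be a learned map from feature vectors to a small, fixed set of intents, then a dispatch from intent to command. The sketch below is a generic illustration with synthetic features and an invented action set, not any particular product's decoder.

```python
# Hedged sketch: decoding as classification over a fixed action set.
import numpy as np
from sklearn.linear_model import LogisticRegression

ACTIONS = {0: "rest", 1: "cursor_left", 2: "cursor_right", 3: "select"}  # invented

# Synthetic stand-ins for per-trial feature vectors from the pipeline above.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(200, 24))
train_labels = rng.integers(0, len(ACTIONS), size=200)

decoder = LogisticRegression(max_iter=1000).fit(train_features, train_labels)

def decode(feature_vector):
    """Map one feature vector to the instruction the computer should execute."""
    intent = int(decoder.predict(feature_vector.reshape(1, -1))[0])
    return ACTIONS[intent]

print(decode(train_features[0]))
```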

Cortical Coupling and Emulation

This is where communication between humans and computers begins to take shape as BCI progresses.

AI Research

Cortically coupled computer vision (CCCV) for training image models was first introduced in 2006. Researchers wanted to combine the robust object-recognition capabilities of the human visual system with existing BCI hardware and neuroscience knowledge to train an image classification model. They recorded EEG data from participants while a sequence of images was rapidly flashed in front of them, designed as a visual “oddball” paradigm, to capture the brain’s sensitive response to deviant stimuli within a series of repetitive stimuli. Images that correlated with the neural signal expected when the brain recognizes deviation (the P300 response) were then extracted and labeled as training data for the classification model.
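
Under simplifying assumptions, the labeling step might be sketched as follows: epoch the EEG around each image onset, approximate the P300 by the mean amplitude in a 250-500 ms post-stimulus window at a centro-parietal channel, and keep images whose epochs clear a threshold. The channel index, window, and threshold are illustrative assumptions, not the 2006 pipeline itself.

```python
# Illustrative sketch of CCCV-style image labeling from EEG epochs.
import numpy as np

FS = 256  # assumed EEG sampling rate in Hz
PZ = 12   # assumed index of a centro-parietal electrode (e.g., Pz)

def p300_score(epochs):
    """epochs: (n_images, n_channels, n_samples), with t=0 at image onset."""
    lo, hi = int(0.25 * FS), int(0.50 * FS)  # 250-500 ms post-stimulus
    return epochs[:, PZ, lo:hi].mean(axis=-1)

def label_images(epochs, image_ids, threshold=2.0):
    """Keep images whose epochs show a strong deviance (P300-like) response."""
    scores = p300_score(epochs)
    return [img for img, s in zip(image_ids, scores) if s > threshold]

# Synthetic demo: 50 flashed images, 32 channels, 1-second epochs.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(50, 32, FS))
positives = label_images(epochs, list(range(50)))
```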

This opened the door to how humans could effectively augment and work alongside computers. Since then, advances in BCI signal acquisition and processing, refined knowledge of how neural activity localizes to cognitive functions, and the leaps in AI model intelligence over the past few years have set the stage for continued progress, not only on humans augmenting AI but also on AI augmenting humans.

Neurobiological Systems

As some AI researchers double down on artificial neural networks, others are working on imitating biological neural networks or even using animal neurons themselves: neuromorphic computing and biocomputing, respectively.

Neuromorphic computing involves reconfiguring existing hardware to replicate the structures and processes of biological neurons and synapses to enhance speed and efficiency. For example, University of Manchester researchers from the EU’s Human Brain Project (HBP) wanted to mimic the sparse neural activity observed in biological systems, in which specific regions of neurons only fire when necessary rather than all at once. They developed a large-scale spiking neural network (SNN) architecture, called SpiNNaker, to replicate this behavior of sparsely activating neurons and achieve more efficient model computation.
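
The sparse-firing idea can be illustrated with a toy leaky integrate-and-fire layer. This is a NumPy cartoon of the behavior SpiNNaker is built to exploit (most neurons silent at any timestep), not the architecture itself.

```python
# Toy leaky integrate-and-fire (LIF) layer: sparse, event-driven activity.
import numpy as np

def lif_step(v, spikes_in, w, leak=0.9, v_thresh=1.0):
    """One timestep: leak, integrate weighted input spikes, fire, reset."""
    v = leak * v + w @ spikes_in   # only spiking inputs contribute
    fired = v >= v_thresh          # most neurons stay silent each step
    v[fired] = 0.0                 # reset membrane potential after firing
    return v, fired

rng = np.random.default_rng(0)
w = rng.normal(scale=0.3, size=(100, 100))   # random synaptic weights
v = np.zeros(100)
spikes = rng.random(100) < 0.05              # ~5% of inputs active: sparse
for _ in range(50):
    v, spikes = lif_step(v, spikes.astype(float), w)
```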

Biocomputing involves using biologically derived molecules, such as DNA, RNA, proteins, and cells, to perform computational tasks efficiently. This can mean using biological molecules to store and process information, building biological logic gates whose chemical inputs produce chemical outputs, or combining conventional computers with “mini-brains” (3D cultures of brain tissue and neurons that mimic brain structure and function) to improve energy efficiency.
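
As a cartoon of the logic-gate idea, gene-circuit gates are often modeled with Hill functions, where output expression stays low unless the chemical inputs are sufficiently concentrated. The model below is generic and textbook-style, not any specific engineered system.

```python
# Toy model of a biological AND gate built from two chemical inducers.
def hill(x, k=0.5, n=2):
    """Hill activation: fraction of maximal promoter activity at concentration x."""
    return x**n / (k**n + x**n)

def bio_and_gate(conc_a, conc_b):
    """Approximate AND logic: strong output only when both inputs are high."""
    return hill(conc_a) * hill(conc_b)

print(bio_and_gate(1.0, 1.0))  # ~0.64 -> effectively "on"
print(bio_and_gate(1.0, 0.1))  # ~0.03 -> effectively "off"
```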

Though these neurobiological systems have primarily focused on gaining efficiencies for existing AI frameworks rather than for BCI, it is also within this discipline that a better understanding of biological nervous systems can lead to scientific breakthroughs in human consciousness and facilitate better human-computer interaction.

Looking forward

The future points toward a seamless collaboration between humans and AI, enhancing our cognitive and physical capabilities. This progress underscores the need for multidisciplinary collaboration across technical fields of neuroscience, computer science, biology, and engineering as well as civil fields of ethics, legal policy, and societal economics.

Direct System Control

Commercial applications have not yet emerged in this area, but government research has been ongoing, particularly for aerospace and defense. The primary focus has been to allow humans to mentally control an individual device or a group of devices.

Defense

The Department of Defense’s DARPA has been investing heavily in BCI research for hands-free drone control. As early as 2015, DARPA enabled a paralyzed woman to control a virtual F-35 Joint Strike Fighter with just her brain via an implanted microchip; three years later, the agency announced its research had progressed to allow a user not only to steer multiple jets at the same time but also to receive signals back from the aircraft about environmental conditions.

Aerospace

NASA has conducted BCI research to help pilots control airplanes. Rather than aiming for complete control, this research sought to automate portions of flight control to improve multitasking and to track pilots’ cognitive and physiological condition during flight. Just as generative AI is meant to augment human work in the digital realm, this NASA research augments humans in the physical realm through human-machine symbiosis.

Looking forward

Military and aerospace research will likely continue to lead for years before commercial and consumer applications become broadly viable. On this path toward wide adoption, numerous technological and ethical concerns remain. The applications above were mostly conducted on small sample sizes, and BCI interactions were tailored to particular individuals. Sufficient generalization and faster device onboarding and training remain large barriers as neural research continues. Furthermore, meaningful controls must be in place to prevent abuse of brain data, and safeguards must be configured to keep malicious actors from exploiting this developing technology.

Information Highway

This area of BCI is much more forward-looking, focused on where medium- to long-term developments may arise and what potential use cases may look like.

Communication → Telepathy

Progress continues to be made in translating human thought from neural signals into words and sentences. Coherent unidirectional (one-to-many) communication will likely become viable in the medium term: deeply understanding a select few individuals would have immediate military and defense applications, such as unilaterally directing troop movements. Long-term, bidirectional (many-to-many) communication will come from generalizing that understanding across larger swaths of people and developing frameworks for people to receive words and sentences, enabling brain-to-brain communication. There are also likely extensions to sophisticated interrogation tactics, advanced neurobiological weapons, and large-scale robotic systems control.

Neuromodulation → Instant Learning / Downloadable Skills

This perspective harkens back to the sci-fi film series The Matrix, where Neo learned kung fu via direct brain download. Though seemingly far-fetched, there is already precedent for neuromodulation, in which brain and nerve activity can be altered through targeted delivery of stimuli to specific neurological sites in the body. Medical applications already exist for modulating pain signals as they travel to the brain via electrical spinal cord stimulation (SCS), and startups like Prophetic aim to induce and stabilize brain states via electrical and ultrasound stimulation. The sci-fi future might be closer than we think!
