Cracking the neural code, with a light touch

Do we have to crack open skulls to get into each other's heads?


Behold Silicon Valley’s latest obsession: the brain. This key organ has eluded physicists, biologists, psychologists, philosophers, and religious scholars alike. A massive body of knowledge has been built around the mind, yet we know remarkably little about how the brain actually works. Our limited knowledge of the brain has nonetheless inspired computer scientists to build “deep nets”: tools that train computers to learn by identifying patterns in large amounts of data. This “artificial intelligence” has advanced to the point where many scientists believe AI will surpass human intelligence within a couple of decades. To keep humans from becoming subject to robotic overlords, technology entrepreneurs Bryan Johnson and Elon Musk have set out to augment human intelligence with artificial intelligence through Kernel and Neuralink, respectively, by understanding how the brain interprets and generates the signals representing memories, senses, emotions, motor functions, and our sense of self.

Bryan and Elon are advised by today’s top scientists, such as Ted Berger and Philip Sabes, who have gone about cracking the neural code by surgically implanting electrodes into living brains to listen for individual neurons firing. Miguel Nicolelis and John Donoghue were among the first to control machines directly with thoughts. Starting decades ago, they implanted arrays of roughly 100 electrodes into living monkeys, streaming neuronal activity to a computer that mapped firing patterns to the movements of a nearby robotic arm. The monkeys learned to control their thoughts to achieve the desired arm movement, namely, to grab goodies. These arrays were later implanted in quadriplegics, enabling them to control on-screen cursors and robotic arms.

Researchers successfully trained computers to capture and interpret brain signals over 15 years ago. That work, performed by Miguel Nicolelis and his colleagues at Duke, was published in Nature in late 2000. In their setup, 96 fine electrodes implanted in a monkey’s brain let them accurately predict the monkey’s intended arm movement and compare it against the actual motion. Over the past seventeen years, the technology has become more robust and reliable, and it has been tested in humans. However, it still requires a direct physical connection to the brain, which keeps it beyond the reach of healthy individuals. With access to so few humans, how quickly can this technology realistically evolve to the point where it is useful to everyone?
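
To make the decoding step concrete, here is a minimal sketch in the spirit of those early linear decoders: binned spike counts from 96 channels are mapped to 2-D arm position with ordinary least squares. The synthetic data, shapes, and parameters below are illustrative assumptions, not the Duke group’s actual pipeline.

```python
import numpy as np

# Minimal sketch of a linear brain-machine decoder: binned spike counts from
# 96 recording channels are mapped to 2-D arm position with least squares.
# The synthetic data stands in for real recordings; all parameters are made up.
rng = np.random.default_rng(0)
n_bins, n_units = 5000, 96                 # time bins x recording channels

true_w = rng.normal(size=(n_units, 2))     # hidden mapping to (x, y) position
rates = rng.poisson(lam=5.0, size=(n_bins, n_units)).astype(float)
arm_xy = rates @ true_w + rng.normal(scale=2.0, size=(n_bins, 2))

# Fit the decoder on the first 80% of bins, then test on held-out data.
split = int(0.8 * n_bins)
W, *_ = np.linalg.lstsq(rates[:split], arm_xy[:split], rcond=None)

pred = rates[split:] @ W
ss_res = ((arm_xy[split:] - pred) ** 2).sum()
ss_tot = ((arm_xy[split:] - arm_xy[split:].mean(axis=0)) ** 2).sum()
print(f"held-out R^2: {1 - ss_res / ss_tot:.3f}")
```

A real decoder would add an intercept and a short temporal history of each unit’s firing rate, but even this bare-bones linear map conveys how intent can be read out of population activity.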

This work gained international attention when a paraplegic was scheduled to kick off the 2014 World Cup using technology developed by Nicolelis’ group, a significant advance over the earlier monkey work. However, the general approach suffers from three fundamental shortcomings: 1) it requires a dangerous, potentially deadly surgical procedure; 2) scar tissue eventually builds up near the electrodes, degrading signal fidelity over time and requiring repeated surgeries; and 3) the models must be trained for every individual, so there is no one-size-fits-all solution. Although consumers might be willing to train an interface if the value proposition were strong enough, very few would be interested in having their heads cut open.

The invasive nature of this technology has limited its development in humans to volunteer quadriplegics, paraplegics, and severe epileptics. Scientists developing it need massive funding, skilled surgeons, and access to the scant group of patients who qualify as test subjects. The progress the scientific community has made under such constraints is awe-inspiring! Imagine the progress computer science would have made had the developer community been limited to a few dozen computers! This scarcity raises the question: how can deep nets be trained to insert thoughts and take commands directly from our brains if the datasets are going to be so small? Can we crack the neural code without having to crack open the skull?

Ed Boyden and his colleagues compared several invasive and non-invasive approaches for detecting neural signals. Electrical recording is probably the most popular approach, since it can achieve single-cell resolution, but it requires needles to be surgically implanted into the brain and is fundamentally limited in how much of the brain it can cover. Optical recording can provide more coverage, though it also requires a hole to be bored into the skull to image the fluorescence of firing neurons. Magnetic resonance is non-invasive, but captures only the aggregate activity of populations of neurons. Molecular recording promises a report of the firing activity of single neurons in the form of DNA, though that DNA “report” must be harvested and sequenced, making it better suited to offline research than to real-time systems [arXiv:1306.5709v7].

How about a different approach, one that leverages the existing high-fidelity connections to the brain? Our senses of smell, taste, touch, sight, and hearing are as hard-wired into the brain as USB ports are to CPUs. Unfortunately, these interfaces are low-bandwidth. It can take years of seeing, hearing, and practicing to communicate in a new language. Our emotions in response to seeing a loved one or hearing that special song result from many hours, if not months, of experience across all of our senses. How can we “install” that knowledge and experience, or “export” that emotion and expression, in an instant, as easily as downloading an app to our phones or uploading attachments to email?

By analogy with neural nets, where we don’t seek to understand the underlying values in the various layers of neurons but care only about inputs and outputs, can we train nets to stimulate one of our senses, or a combination of them, to imprint an experience, capability, intuition, or even an alternate sense of self onto our minds? Or, vice versa, can neural nets be trained to sense minute movements and twitches in our eyes or limbs, changes in blood oxygenation, or other biological signals, and translate them into an experience or expression? Just as new autonomous cars rolling off the assembly line benefit from the countless miles driven by existing vehicles, can humans benefit from the accumulated experience of other humans as well?

For example, newborn babies, saddled with programming that’s millions of years old, begin life as cavemen and rely on parenting to be upgraded to modern society in the years before adulthood. Parental guidance comes through spoken words and written text, but most importantly through observing parents’ behavior with each other and with other people. All of it arrives at babies’ brains through their senses. Though most children have 18 years to become adults, many need well into their 20s to mature (at least most men do), given the time required to develop the capacity to simulate the consequences of actions, empathize with others, and learn the basics needed to function in modern society.

Behold the future: babies benefit from the corpus of all human and artificial intelligence upon birth, and throughout their lifetimes. All humans are equally “smart,” and it’s those who creatively put this “pre-installed” wisdom and intuition to good use who are the most handsomely rewarded. This stands in stark contrast to today, when we have terabytes-per-second Internet backhaul and gigabits per second arriving at our phones, yet we still have to read, watch, and listen, channels whose bandwidth is on the order of bytes per second. Let’s discover a way to combine and manipulate our natural senses to build a high-bandwidth link to our brains. We could use this link to train deep nets that insert desired memories and capture desired expressions through our existing senses, interfacing them to our smartphones and uploading to the web. Imagine posting to Twitter or Instagram Stories without ever touching or talking to your phone.

The concept of building a high-bandwidth link over a medium (i.e., our nerves) that was universally accepted as “slow” isn’t new. We adopted modulation schemes, progressing from time-division multiple access (taking turns), to frequency-division multiple access (splitting the spectrum into channels), to code-division multiple access (mixing the signal with special codes so the intended listener can pick it out while other listeners see it as white noise). DSL, or the digital subscriber line, is a simple example of frequency-division multiple access: it transfers data at megabits per second over the same copper wire that Alexander Graham Bell put in buildings for voice. Many of you will remember the dial-up modem days, when computers shared the same narrow frequency channel used for voice. Dial-up was not only slow: it also made it impossible to place a call while connected to the internet. In contrast, the multiple-access approach lets phone conversations continue while data travels over the same wires. The same basic principle of multiple access allows our microwaves, cordless phones, and laptops to share the same wireless spectrum. (Credit to Vivek Patel for the comparison to modulation schemes.)
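
To see concretely what frequency-division multiplexing buys, here is a toy simulation (all parameters invented for illustration): two messages share one wire by riding on different carrier frequencies, and each is recovered by mixing with its own carrier and low-pass filtering, just as DSL data coexists with voice on the same copper.

```python
import numpy as np

# Toy frequency-division multiplexing: two messages share a single wire on
# different carriers; each is recovered by coherent demodulation. All
# frequencies and durations here are arbitrary, chosen only for illustration.
fs = 100_000                                   # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)

msg_a = np.sin(2 * np.pi * 300 * t)            # "voice" channel
msg_b = np.sign(np.sin(2 * np.pi * 150 * t))   # "data" channel

f_a, f_b = 10_000, 25_000                      # carrier frequencies, Hz
wire = (msg_a * np.cos(2 * np.pi * f_a * t)
        + msg_b * np.cos(2 * np.pi * f_b * t))

def demodulate(signal, f_carrier, cutoff=1_000):
    """Mix down with the carrier, then apply a crude moving-average low-pass."""
    mixed = signal * 2 * np.cos(2 * np.pi * f_carrier * t)
    n = int(fs / cutoff)
    return np.convolve(mixed, np.ones(n) / n, mode="same")

rec_a = demodulate(wire, f_a)
rec_b = demodulate(wire, f_b)

# Both channels come back cleanly despite sharing one medium.
print(f"channel A correlation: {np.corrcoef(msg_a, rec_a)[0, 1]:.3f}")
print(f"channel B correlation: {np.corrcoef(msg_b, rec_b)[0, 1]:.3f}")
```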

We’ve succeeded in squeezing more bandwidth than ever out of the air and our wires. We have achieved levels of data compression and channel density that are approaching theoretical limits. Meanwhile, we type on keyboards, listen to speeches, and read the written word as quickly as we have for decades, if not centuries. I believe there is a way to build a bi-directional superhighway between our brains and our world through our senses and biological signals, by cracking the neural code, without cracking open our skulls.
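
To put “theoretical limits” in perspective: the ceiling for any such channel is the Shannon-Hartley capacity, C = B * log2(1 + S/N). A back-of-the-envelope calculation with representative (assumed) bandwidth and signal-to-noise figures shows why dial-up stalled in the tens of kilobits per second while DSL reaches megabits over the same copper:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), with SNR given in dB."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

# Dial-up voiceband: ~3 kHz of bandwidth at ~35 dB SNR (assumed figures).
print(f"voiceband: {shannon_capacity(3_000, 35) / 1e3:.0f} kbit/s")

# DSL on the same copper: ~1 MHz of usable bandwidth at a lower assumed SNR.
print(f"DSL band:  {shannon_capacity(1_000_000, 20) / 1e6:.1f} Mbit/s")
```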

Can we widen the bandwidth of our senses as we have for our airwaves and twisted-pair copper? Can we look at an image and have an underlying message communicated to, and imprinted on, our minds, voluntarily, without disturbing the image? We already look at images and activate memories derived from all of our senses as we remember an experience. For example, we look at a picture of a loved one and hear their voice, and feel the butterflies in our stomachs from seeing them smile. We relive the good and bad moments spent together, and what we learned from those moments, all from an image. Can we use the same process, perhaps by combining several senses, to implicitly and quickly impart these experiences? Conversely, artists paint images, write stories, choreograph dances, and compose music to express themselves by working their bodies against canvases, keyboards, and instruments. Could cracking our neural code offer a more efficient means of expression?

I believe that if enough incredibly talented people set their minds to widening the bandwidth of our senses, we will revolutionize the way we take in information and express ourselves. This tool will be as revolutionary as cave paintings, the written word, and the telephone. Founders wanted!

Shahin Farshchi is a partner at Lux Capital, to which he interfaces his brain and where he funds hardware, robotics, AI, and space startups. Follow him on Twitter: @farshchi
