Mind and machine

Cutting-edge efforts to map the human mind are opening up extraordinary possibilities, including novel ways of tackling disease, interacting with machines and even enhancing our intelligence. How did we get to this stage? Where might we go from here? And should we be excited or worried — or both?

Simon Dewar, Investment Director, Rathbones

Great thinkers have wrestled with the complexities of the human mind for thousands of years. From Socrates to Descartes, from Darwin to Crick, philosophers and scientists alike have tried to unravel its workings and fathom its relationship with the body and beyond. What has changed over time is that the focus has steadily shifted away from its evolution and towards questions around how it actually functions and is structured.

The answers lie in neurons. These are the basic units of our nervous systems and the fundamental building blocks of intelligence. An adult brain contains more than 85 billion of them, each with around 10,000 connections to other such cells.

Sensory neurons react to stimuli such as sound, light and touch, sending signals to the brain or the spinal cord. Motor neurons receive these signals, controlling our every movement — from muscle contractions to glandular output. Trillions of minute junctions, known as synapses, allow the signals to pass from one neuron to another in a process that is partly chemical and partly electrical.
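A back-of-the-envelope calculation, taking the figures quoted above at face value, shows why the synapse count runs into the hundreds of trillions:

```python
# Rough arithmetic using the figures quoted above (illustrative, not a precise census).
neurons = 85e9              # "more than 85 billion" neurons in an adult brain
connections_each = 10_000   # "around 10,000 connections" per neuron

# Each synapse joins two neurons, so halve the product to avoid double-counting.
synapses = neurons * connections_each / 2
print(f"roughly {synapses:.0e} synapses")   # about 4 x 10^14, i.e. hundreds of trillions
```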

The latter attribute has attracted scientific attention ever since Luigi Galvani, an 18th-century physicist and biologist, found that the legs of dead frogs twitched when struck by a spark. Galvani posited that this was due to an electrical fluid carried to the muscles by the nerves. His discovery gave us ‘galvanism’, which in turn gave us ‘galvanise’ — meaning to shock or excite something into action.

The second half of the 20th century saw attempts to understand neurons become ever more precise, diverse and molecular. Today scientists are getting closer not just to decoding the electrochemical signals in the brain but to composing and delivering them. This opens doors to some incredible treatments and advances in human capabilities. Some would say it also opens a Pandora’s Box. 

Innovations and interfaces

The quest to map the mind has always drawn on achievements in other fields, among them anatomy, physiology, mathematical modelling and, more recently, optogenetics, cognitive psychology and computing. Many of these arenas have witnessed substantial advances, not least since the turn of the millennium, propelling neuroscience into an age in which what once seemed inconceivable might soon be within our grasp.

Crucially, as our comprehension of the nervous system flourishes, cutting-edge thinking is encompassing not just how the mind operates but why it sometimes fails — and, by extension, how it might be repaired or even enhanced. As a result, the treatment of numerous medical conditions increasingly looks set to involve interaction between the human brain and machines.

This has actually been happening for longer than most of us might guess, as evidenced by cochlear implants, which for decades have helped tackle hearing problems by converting sounds into electrical signals that stimulate the auditory nerve, which carries them to the brain. Although there is no direct interaction with the brain itself, such apparatus might be regarded as a primitive example of what has come to be termed a brain-computer interface (BCI).
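As a very loose illustration of that conversion, the sketch below splits a snippet of audio into frequency bands and turns each band’s energy into a stimulation level for a notional electrode, which is broadly what an implant’s speech processor does. The band count and frequency range are assumptions for the example, not any real device’s specification.

```python
import numpy as np

def band_energies(frame, sample_rate, n_electrodes=8, f_lo=200.0, f_hi=7000.0):
    """Split one audio frame into log-spaced bands, one per notional electrode,
    and return a normalised energy (stimulation level) for each band."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    edges = np.geomspace(f_lo, f_hi, n_electrodes + 1)          # band boundaries in Hz
    energies = np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return energies / (energies.max() + 1e-12)

# Example: a 20 ms frame containing a 1 kHz tone lights up the matching band.
sr = 16_000
t = np.arange(int(0.02 * sr)) / sr
print(np.round(band_energies(np.sin(2 * np.pi * 1000 * t), sr), 3))
```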

Similarly, one of the most common forms of surgery for Parkinson’s disease, deep-brain stimulation (DBS), was first approved in 1997. Extremely fine wires tipped with electrodes are implanted in the brain and connected, via extensions tunnelled under the skin behind the ear, to a pulse generator that delivers high-frequency stimulation, altering some of the signals that cause the condition’s movement-related symptoms. Although not a cure, this approach is more effective than medication in many cases.
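To give a feel for what ‘high-frequency stimulation’ means in practice, the short sketch below builds a charge-balanced pulse train using settings in the range commonly reported for DBS (around 130 Hz, with pulses lasting tens of microseconds); the exact values here are illustrative assumptions rather than a clinical prescription.

```python
import numpy as np

def dbs_pulse_train(duration_s=0.05, rate_hz=130, pulse_us=60, amp_ma=2.0, fs=100_000):
    """Build a biphasic pulse train at a DBS-like rate: each pulse is a brief
    negative (cathodic) phase followed by an equal positive (anodic) phase."""
    wave = np.zeros(int(duration_s * fs))
    phase = max(1, int(pulse_us * 1e-6 * fs))       # samples per phase
    period = int(fs / rate_hz)                      # samples between pulse onsets
    for start in range(0, len(wave) - 2 * phase, period):
        wave[start:start + phase] = -amp_ma                  # cathodic phase
        wave[start + phase:start + 2 * phase] = amp_ma       # anodic phase
    return wave

train = dbs_pulse_train()
print(f"electrode actively pulsing {(np.abs(train) > 0).mean():.2%} of the time")
```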

At Brown University in Rhode Island, researchers developed BrainGate, a BCI that uses a small array of electrodes implanted in the brain’s motor cortex. These pick up the activity of neurons that signal planned motion in the hands or arms: the signals travel through wires emerging from the skull to a computer, which decodes them and translates them into movement commands.
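The decoding step can be pictured with a toy example. The sketch below uses entirely synthetic data and a simple linear map from binned firing rates to a two-dimensional velocity command, the kind of signal used to drive a cursor or robotic arm; BrainGate’s real decoders are considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: 96 channels of firing rates, each weakly tuned
# to the intended two-dimensional hand velocity (vx, vy).
n_bins, n_channels = 2000, 96
velocity = rng.standard_normal((n_bins, 2))
tuning = rng.standard_normal((2, n_channels))          # each channel's preferred direction
rates = velocity @ tuning + 0.5 * rng.standard_normal((n_bins, n_channels))

# Fit a linear decoder W so that rates @ W approximates the intended velocity.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Online use: decode a fresh bin of neural activity into a movement command.
intended = np.array([1.0, -0.5])
new_rates = intended @ tuning + 0.5 * rng.standard_normal(n_channels)
print("decoded:", (new_rates @ W).round(2), "intended:", intended)
```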

Since 2004 BrainGate has assisted more than a dozen people with paralysis. Thanks to her newfound ability to control a robotic arm, one woman took her first sip of coffee without help from a caregiver since the stroke that had paralysed her 15 years earlier.

Perhaps most famously, Matthew Nagle, the first person to receive a BrainGate implant, was in effect able to play the bat-and-ball computer game Pong with his mind after mastering the required moves in just four days. “If your brain can do it,” Brown professor of neuroscience John Donoghue said in 2006, “we can tap into it.”

Beyond BCIs

The phrase ‘brain-computer interface’ originally surfaced in the academic literature in the 1970s, when the University of California, Los Angeles, carried out a study partially funded by the US government’s Defense Advanced Research Projects Agency (DARPA). Today the notion of a BCI is becoming an ever more sophisticated reality, with household-name tech giants responsible for some of the most significant breakthroughs.

Microsoft is among those at the forefront. In 2018 it launched its AI for Accessibility initiative, a five-year programme intended to accelerate the creation of artificial intelligence solutions that could benefit more than a billion people with disabilities. Around $25 million in funding is currently being made available to universities, non-governmental organisations and inventors, with larger investments promised to scale up potentially game-changing innovations.

And then there is Elon Musk, of Tesla fame, whose Neuralink Corporation is pioneering a new kind of BCI that aims to embed flexible “threads” in the brain and use them to transmit information to a wireless receiver worn as an earpiece. The threads would be thinner than a human hair; they would also be implanted by a robot. One goal, as with BrainGate, is to enable people with paralysis to communicate with electronic devices at a higher level.

During a presentation in July 2019, teasing the supposedly top-secret project’s results to date, Musk reportedly surprised even his own colleagues when he announced: “A monkey has been able to control a computer with its brain.” Despite insisting that his speech was not a vehicle for hype, he elicited further and more widespread astonishment when he declared: “We hope to have this in a human patient by the end of next year.”

Musk himself subsequently stressed that Neuralink would not work towards “taking over people’s brains”. Rather, he said, the principal objective would be to “achieve a symbiosis with artificial intelligence”.

Yet this is where the line between ‘progress’ and ‘dystopia’ tends to become blurred. Perhaps few people would object to BCIs being used to ameliorate medical conditions or cure diseases; but if this should lead to the ever-greater fusion of human and machine, as critics fear and some experts fully expect, then what might the future hold?

Things to come?

Two years ago, appearing before the World Government Summit in Dubai, Musk warned that humans could be rendered useless in an era of ubiquitous AI. Machines would be making perfect sense of data at a rate of more than a trillion bits per second, he said, while the flesh-and-blood stragglers of Homo sapiens would still be laboriously tapping messages into their smartphones. The best course of action, he asserted, would be to merge the two.

“We’re already cyborgs,” Musk said. “Your phone and your computer are extensions of you. But the interface is through finger movements or speech, which are very slow.” He ventured that a “high-bandwidth interface to the brain” might “solve the control problem and the usefulness problem”. If we do not accept as much, he claimed, the proliferation of an AI “smarter than the smartest human on Earth” could end life as we know it.
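The arithmetic behind that bandwidth contrast is easy to sketch; the typing figures below are assumptions chosen only to show the scale of the gap, not measurements.

```python
# Back-of-the-envelope comparison (typing figures are assumptions).
machine_bits_per_s = 1e12            # "more than a trillion bits per second"
words_per_min = 40                   # a typical smartphone typing speed (assumed)
bits_per_word = 5 * 8                # ~5 characters per word at 8 raw bits each

human_bits_per_s = words_per_min * bits_per_word / 60
print(f"human output: ~{human_bits_per_s:.0f} bits/s")
print(f"machine-to-human ratio: ~{machine_bits_per_s / human_bits_per_s:.0e}")
```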

This manner of vision is by no means novel. Irving John Good, a contemporary of fellow codebreaker and computer scientist Alan Turing at Bletchley Park during the Second World War, wrote in 1965: “The first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Futurist Ray Kurzweil helped popularise the term ‘the singularity’ to describe the moment when machine intelligence overtakes that of humans and technological change accelerates beyond our ability to predict or control it. The World Economic Forum has highlighted the singularity as one of the most pressing issues surrounding AI.

Kurzweil has predicted that we will inevitably merge with computers and that our thoughts, like so much data today, will be stored in the cloud. This raises a host of questions and concerns. Will our perceptions, emotions, decisions and memories remain our own in those circumstances? Might they be ‘hackable’ or serve as minuscule components of one monolithic, shared system?

Information can go both ways. We already access the sum total of humanity’s knowledge via our phones, tablets and laptops, so is the next logical — or even inevitable — step really to be able to download it all into our brains? Is this our sole hope of keeping pace with machines? Would we not then become machines ourselves?

“It’s going to be all mixed up,” says Kurzweil. “There’s not going to be a clear distinction between a human and a machine.”

There is also a military dimension to this. DARPA, the agency that helped finance the 1970s research on brain-computer interfaces, remains at the forefront of neuroscience. It was created in response to the Soviets’ launch of Sputnik 1 in 1957, with the aim of ensuring the US is never again caught out by a technological surprise, and of springing a few surprises of its own.

Former DARPA director Arati Prabhakar has always been enthusiastic about the potential of this branch of science, yet she readily and repeatedly addressed its ethical challenges throughout her time in post.

“In a possible future,” she says, “neural technology will enable a soldier to focus under fire by turning his heart rate down, or to sense an odourless biological threat, or to directly and intuitively direct a whole bevy of military systems that could keep an adversary at bay. In that future will the military ban neural enhancement, the way we ban performance-enhancing steroids today? Or, conversely, will neural enhancement become a condition of military service?

“Neural technologies could enable people across society to overcome depression, to boost our physical health, to learn complex tasks in a flash… In that future will society think about neurotechnology the way we think about braces or even laser eye surgery? Or is there a time when we can begin to imagine a disturbing gap between the neural enhancement haves and have-nots?” 

We have not yet completely cracked the neural code, which may well prove to be science’s final frontier. As we get closer, though, Prabhakar’s words will only gain in significance. She says: “With these big possibilities come some big choices. In the choices we make we will reveal who we are and who we will become as human beings.”

Are we already cyborgs?

Biohacking is a technology that advances the mind-machine relationship while stopping short of a brain-computer interface. In Sweden, where it has been available since 2015, around 3,000 people have undergone the necessary procedure, usually a simple injection of a microchip into the hand.

Supporters enjoy the convenience that biohacking can bring. For instance, they can use the data contained on a chip to open doors, register train tickets or make payments.
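In practice the chip itself usually holds little more than an identifier; the intelligence sits in the systems that read it. A hypothetical door controller, with invented tag IDs, might amount to no more than this:

```python
# Hypothetical access check: the implanted chip presents an ID, a backend decides.
AUTHORISED_TAGS = {
    "04:A2:2F:1B:63:80": {"office_door", "gym"},      # made-up tag identifiers
    "04:77:0C:9D:12:45": {"office_door"},
}

def may_open(tag_uid: str, door: str) -> bool:
    """Return True if the scanned chip ID is authorised for this door."""
    return door in AUTHORISED_TAGS.get(tag_uid, set())

print(may_open("04:A2:2F:1B:63:80", "gym"))            # True
print(may_open("04:DE:AD:BE:EF:00", "office_door"))    # False
```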

Yet problems around security persist. As well as concerns over who should be allowed to share personal information stored in this way, there is the grisly prospect of hands being sliced open — or off — to obtain a potentially valuable source of data. There are also fears that such implants could lead to infections or to reactions in an individual’s immune system.
