Tuesday, July 5, 2022

Mind Control Shooters... The Return of The Manchurian Candidate ...

 

Can Machines Control Our Brains?


The raging bull locked its legs mid-charge. Digging its hooves into the ground, the beast came to a halt just before it would have gored the man. Not a matador, the man in the bullring standing eye-to-eye with the panting toro was the Spanish neuroscientist José Manuel Rodriguez Delgado, in a death-defying public demonstration in 1963 of how violent behavior could be squelched by a radio-controlled brain implant. Delgado had pressed a switch on a hand-held radio transmitter to energize electrodes implanted in the bull's brain.

Remote-controlled brain implants, Delgado argued, could suppress deviant behavior to achieve a "psychocivilized society." Unsurprisingly, the prospect of manipulating the human mind with brain implants and radio beams ignited public fears that curtailed this line of research for decades. But now there is a resurgence using even more advanced technology. Laser beams, ultrasound, electromagnetic pulses, mild alternating and direct current stimulation and other methods now allow access to, and manipulation of, electrical activity in the brain with far more sophistication than the needlelike electrodes Delgado stabbed into brains.

Billionaires Elon Musk of Tesla and Mark Zuckerberg of Facebook are leading the charge, pouring millions of dollars into developing brain-computer interface (BCI) technology. Musk says he wants to provide a "superintelligence layer" in the human brain to help protect us from artificial intelligence, and Zuckerberg reportedly wants users to upload their thoughts and emotions over the internet without the bother of typing. But fact and fiction are easily blurred in these deliberations. How does this technology actually work, and what is it capable of?

Already in 1964, Delgado's technology could exert a surprising amount of control over human brains. Simply by energizing implanted electrodes, he could quell a raging brain storm mid-seizure or suppress mental illness in an instant — but he could also command a person's limbs to move, overwhelm a person with sexual ecstasy or plunge them into deep, suicidal despair. No wonder people got nervous about this technology.

Even recently, widely respected neuroscientists have sounded the alarm. A cautionary editorial, published in 2017 in Nature, opens with a scene that could have come from an episode of Black Mirror, a show whose plots often center on mind-control technology. The neuroscientists describe a scenario in which a brain implant that enables a paralyzed man to control a prosthetic arm suddenly goes haywire because the man feels frustrated, and the arm attacks an assistant with its steely claws.

I find this Frankenstein scenario ridiculous. Electrodes placed in the motor cortex to activate prosthetic limb movement do not access emotion. Moreover, no matter what you may read in sensational articles, neuroscientists do not yet understand how thoughts, emotions and intentions are coded in the patterns of neural impulses zipping through neural circuits: the biological obstacles to mind hacking are far greater than the technological challenges.

Today's BCI devices work by analyzing data, in much the same way that Amazon tries to predict what book you might want next. Computers monitoring streams of electrical activity, picked up by a brain implant or a removable electrode cap, learn to recognize how the traffic pattern changes when a person makes an intended limb movement.
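To make that concrete, here is a toy sketch in Python of the kind of pattern matching involved. Everything in it is invented for illustration: the electrode count, the per-channel "signal power" feature and the simulated recordings. It is nothing like a real BCI pipeline; the point is only that the computer fits a statistical boundary between labeled examples of "rest" and "intended movement" rather than reading anyone's mind.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_channels = 8    # hypothetical electrode count
n_windows = 200   # one-second windows recorded in each condition

# Simulated per-channel "signal power" while the person rests...
rest = rng.normal(loc=10.0, scale=1.0, size=(n_windows, n_channels))
# ...and while the person tries to move, with activity over two
# made-up "motor" channels slightly suppressed.
move = rng.normal(loc=10.0, scale=1.0, size=(n_windows, n_channels))
move[:, [2, 3]] -= 2.0

X = np.vstack([rest, move])
y = np.array([0] * n_windows + [1] * n_windows)   # 0 = rest, 1 = intended movement

# The computer never "understands" the signal; it fits a boundary that
# separates the two traffic patterns it was shown.
clf = LogisticRegression().fit(X, y)

# A new window of activity arrives: decide whether to drive the prosthetic.
new_window = rng.normal(loc=10.0, scale=1.0, size=n_channels)
new_window[[2, 3]] -= 2.0   # a pattern resembling an intended movement
if clf.predict(new_window.reshape(1, -1))[0] == 1:
    print("pattern matches 'intended movement' -> actuate prosthetic arm")
```

Real signals are far messier, but the logic is the same: label, correlate, predict.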
For example, the ongoing oscillations in electrical activity surging through the cerebral cortex, known as brain waves, are suddenly suppressed when a person moves a limb — or even thinks about moving it. This phenomenon reflects an abrupt change in communication among thousands of neurons, like the sudden hush in a restaurant after a server drops a glass: you cannot understand the conversations between individual diners, but the collective hush is a clear signal. Scientists can use this interruption in electrical traffic in the cerebral cortex to trigger a computer to activate a motor in a prosthetic arm, or to click a virtual mouse on a computer screen.

But even when it is possible to tap into an individual neuron with microelectrodes, neuroscientists can't decode its firing as if it were so much computer code; they have to use machine learning to recognize patterns in the neuron's electrical activity that correlate with a behavioral response. Such BCIs operate by correlation, much the way we know when to depress the clutch in a car by listening to the sound of the engine. And just as race car drivers shift gears with precision, this correlational approach to interfacing human and machine can be very effective. Prosthetic devices that match the brain's electrical activity to sensorimotor function can prove life-changing, restoring some lost function and independence to people who are paralyzed or who suffer other neurological losses.

But there's more than fancy technology at work in BCI devices — the brain itself plays a huge role. Through a prolonged trial-and-error process, the brain is somehow rewarded by seeing the intended response occur, and over time it learns to generate the electrical signal it knows the computer will recognize. All of this takes place beneath the level of consciousness, and neuroscientists don't really know how the brain accomplishes it. It's a pretty far cry from the sensational fears and promises that accompany the specter of mind control.

For the sake of argument, however, let's imagine that we do learn how information is encoded in neuronal firing patterns. Then, in true Black Mirror fashion, let's say we want to insert a foreign thought via brain implant. We would still have to overcome many obstacles, according to the neuroscientist Timothy Buschman, who is actively pursuing research using brain recording and stimulation. "I will know which brain region to target, but there is no way I will know which neuron," he told me in his lab at Princeton University. "Even if I could target the same neuron in every individual, what that neuron does will be different in different individuals' brains."

No matter how much industrial power someone like Musk brings to the problem, Buschman explained, the mathematics shows that biology, not technology, is the real bottleneck. Even if we oversimplify neural coding by assigning each neuron to be either "on" or "off," a network of only 300 neurons still has 2³⁰⁰ possible states — roughly a 2 followed by 90 zeros, more than the number of atoms in the known universe. "It is an impossible number of states," Buschman said. Now ponder for a minute that the human brain has about 85 billion neurons.

But what about Zuckerberg's plan for users to upload their thoughts and emotions? Reading information out of the brain is, after all, more feasible than downloading information into it. Indeed, Marcel Just and his colleagues at Carnegie Mellon University are now using fMRI to reveal a person's private thoughts, in an effort to understand how the brain processes, stores and recalls information.
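For readers who want to see the shape of that kind of decoding, here is an equally stripped-down sketch. The voxel patterns are simulated, the concepts are arbitrary labels, and the "decoder" is nothing more than a correlation against averaged templates; it is not Just's actual method, only an illustration of why the approach depends on a cooperative participant repeating the same thought.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500                        # hypothetical number of voxels per scan
concepts = ["apple", "spaghetti", "hammer"]

# Assume each concept evokes some characteristic (but noisy) activity pattern.
true_patterns = {c: rng.normal(size=n_voxels) for c in concepts}

def simulate_scan(concept, noise=0.8):
    """One simulated scan of a cooperative participant holding the thought."""
    return true_patterns[concept] + rng.normal(scale=noise, size=n_voxels)

# "Training": the participant thinks about each concept six times, and the
# decoder simply averages the patterns it recorded for each label.
templates = {
    c: np.mean([simulate_scan(c) for _ in range(6)], axis=0) for c in concepts
}

# "Decoding" a new scan: pick the template it correlates with best.
new_scan = simulate_scan("apple")
scores = {c: np.corrcoef(new_scan, t)[0, 1] for c, t in templates.items()}
print(max(scores, key=scores.get))    # expected output: apple
```

With real brains the patterns are richer and the models far more sophisticated, but the principle is the same: new activity is matched against activity that was previously recorded and labeled, which is why an uncooperative participant can so easily scramble the result.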
Just and his colleagues can tell what number a person is thinking of, what emotion they may be feeling or whether they are having thoughts of suicide. This brain-machine mentalism works by asking people to have a specific thought or cognitive experience over and over while inside an fMRI machine. Since cognition and emotion activate specific sets of networks throughout the brain, machine learning can eventually identify which constellations of brain activity patterns correlate with specific thoughts or emotions. Remarkably, the brainwide activity patterns revealing private thoughts are universal, regardless of a person's native language.

A surprising finding from this research is that the brain does not store information the way we might think — as discrete items categorized logically in a database. Instead, information is encoded as integrated concepts that encapsulate all the sensations, emotions, relevant experiences and significance associated with an item. The words "spaghetti" and "apple" are logically similar in being food items, but each one has a different feel that activates a unique constellation of brain regions. This explains how Just can use the very slow method of fMRI, which takes many minutes to acquire brain images, to determine what sentence a person is reading. The brain does not decode and store written information word by word, the way Google Translate does: it encodes the meaning of the sentence in its entirety.

This technological mind reading might seem scary. "Nothing is more private than a thought," Just said. But such fears are simply not grounded in fact. Like the BCI used to operate a prosthetic device, this mind reading requires intense cooperation and effort from the participant. People can easily defeat it, Just's colleague Vladimir Cherkassky explained. "We need the person to think about an apple six times. So all they have to do is think about a red apple the first time, a green apple the next time, maybe a Macintosh computer, and we are done."

Critics often cite ethical concerns with BCI: loss of privacy, identity, agency and consent. They worry about its abuse to enhance performance or to destroy free will, and they raise concerns over disparities within society that could limit access to the technology. And, yes, as with any technology, it is possible that bad actors could use it to cause deliberate harm. These are all good points, worth considering as the technology improves. But it is also worth remembering that we already face and accept such concerns with other biomedical advances, such as DNA sequencing, anesthesia and neurosurgery. It is natural to fear what we do not understand.

For most of us, fear of mind control is an abstraction, but one research volunteer, a man named Copeland, faced the reality of letting scientists open his skull and implant electrodes in his brain. When I met him in 2018, Copeland's brain implants had been removed, because the electrodes have a limited lifetime. "Looking back at it," he said, "I would do it as many times as they would let me."