Facebook is Building an All-Star Team to Help You Type With Your Brain

But things could get dystopian, fast.


People like to use voice recognition to send texts, Facebook neuroscientist Mark Chevillet said at a tech conference in Boston this week, but they don’t like dictating when other people are around, think of any number of embarrassing Tinder messages. How could Facebook solve this all-too-common customer pain point? Naturally, with a “silent speech interface.” "Type directly from your brain," he said.

Chevillet works in Facebook’s secretive Building 8, where the company is developing moonshot projects that would make Larry Page blush, reportedly ranging from augmented reality to the discomfiting brain scans Chevillet was presenting on. Mark Zuckerberg is assembling an all-star team to do it: The head of the group is Regina Dugan, who used to lead DARPA, the Department of Defense’s research wing; she was poached from Google last year. Chevillet was last at Johns Hopkins, where he helped develop a brain-controlled prosthetic arm.

Chevillet says that over the next two years, his team will build a noninvasive system for reading the speech signals in the brain, one that will translate that neural activity into speech (or in this case, texts) at a rate of 100 words a minute. By his estimation, the goal is simple enough: “We just want to be able to get those signals right before you actually produce the sound so you don’t have to say it out loud anymore,” he told the MIT Tech Review. Early evidence for the effectiveness of this type of tech includes a recent study in which people with paralysis were able to use their brains to select letters with an onscreen cursor, topping out at eight typed words a minute.

But, as Megan Thielking notes at STAT, thought-texting is a bit of a tall order: you’d need noninvasive sensors for reading brain signals, algorithms to suss out the words you’re trying to say, and sophisticated AI to deduce which word among a given set is the right one. Right now Facebook is leaning toward “diffuse optical tomography” to get this done, which she describes as a process of “shining near-infrared light on living tissue and pinpointing the tissue's properties based on how that light scatters — in this case, looking at the neuronal states that correlate with the thought of a particular word.” According to neuroscientists not involved in the project, that kind of technology is still ten or twenty years away.

When you start thinking about the motivations and assumptions that factor into projects like this, things get even murkier. Cade Metz, formerly of Wired, has argued that it’s really about Silicon Valley status signaling: brain-computer interfaces are the hot new thing—Elon Musk is getting into it, after all, so it must be in!—but the technology is so far out that it may be more pipe dream than reality. Indeed, just because a hot new piece of tech comes out doesn’t mean consumers even want it—recall the stratospheric hype around Google Glass, and its ignoble crash back down to Earth.

Critics also contend that underlying the pursuit of the brain-computer interface is the assumption that the brain itself operates like a computer—with some sort of “code”—which is a deeply contentious perspective in the cognitive sciences. (Spoiler alert: the brain is not hardware; it’s an organ.)

The cyborg elephant in the room here, of course, is whether we really want our devices to directly read our brain activity, and, if technologists are actually able to make this happen, what it would mean for society. The iPhone X scanning your face is invasive enough; University of North Carolina sociologist and tech critic Zeynep Tufekci warned that face-reading technology would allow Apple or another organization “to identify protesters, to figure out if you're depressed or manic—and how to monetize that.” The “same family of technologies will be used to classify you—right or wrong—as a criminal or a terrorist—or what your sexual orientation is,” she added.

If that’s the case with your face, imagine the kind of surveillance that Facebook or another company could do with the very activity inside your skull. Academic researchers are already starting to use brain scans to tell whether people knowingly committed a crime. What could Facebook, pulling in $8 billion in revenue a quarter, find out about you by having loads of frame-by-frame data on your neural firings? The word “thoughtcrime” comes to mind.

In a speech earlier this year, Dugan, the leader of Building 8, said that she and her colleagues show up to work every day “terrified” by their effort to build something that’s never been built before. “Why do we sign up to be terrified each day?” she asked herself. “That is the price we pay for the privilege of making something great.” Hearing about what’s slated to come out of her lab, we’re terrified too—but for altogether darker reasons. 
