Facebook’s experiment with a brain-reading interface was inevitable. As the company continues to develop its AR glasses, the input method is one of the most important design decisions: it must be as accessible and straightforward as possible. No one wants to fiddle with buttons or external controllers; the most frictionless way to swipe through an AR lens is with the mind itself, using no fingers at all.

However, that would mean Facebook reading the mind itself, which is dangerous without proper regulation. Cambridge Analytica showed that Facebook treats data as an asset the company can exploit for more value. Facebook already holds enough information to understand every person on its network, perhaps more deeply than they understand themselves. If Facebook also had access to the brain, data of that complexity and richness could be too dangerous for one company to handle.

This is critical to explore now. Regulators are notoriously slow to react to new developments, so they should act before Facebook makes significant progress with the technology. AR glasses are also set to become the next global hardware trend; with billions of people affected by poor eyesight, Facebook (and Apple) are primed to dominate the market with their products. The time to act is now.

New rules must be implemented to protect the rights of users who own their data.

Background on Facebook’s experiment

On July 30, 2019, Facebook published a blog post with an update on its brain-reading computer interface. The post detailed the results of an experiment, run by University of California researchers and backed by Facebook Reality Labs, in decoding words via implanted electrodes. Subjects listened to multiple-choice questions and answered them aloud while electrodes recorded the corresponding activity in the brain. The team then looked for patterns that matched the spoken words, linking the two together; through this procedure, the system learns to associate brain patterns with words.
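To make that core step concrete, here is a minimal sketch of the kind of decoder this implies: a classifier that learns to map recorded activity patterns to a small vocabulary of words. This is not Facebook’s actual pipeline; the “electrode recordings” below are simulated feature vectors, and every name in the code is illustrative.

```python
# Toy word decoder: learn to associate (simulated) neural activity
# patterns with a small set of answer words.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
VOCAB = ["select", "delete", "yes", "no"]  # tiny answer vocabulary

# Simulate training data: each word evokes a characteristic pattern
# across 16 electrode channels, plus recording noise.
prototypes = rng.normal(size=(len(VOCAB), 16))
X = np.vstack([p + 0.3 * rng.normal(size=(50, 16)) for p in prototypes])
y = np.repeat(np.arange(len(VOCAB)), 50)

decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Decoding a new recording maps its activity pattern back to a word.
new_recording = prototypes[1] + 0.3 * rng.normal(size=16)
print(VOCAB[decoder.predict(new_recording.reshape(1, -1))[0]])  # likely "delete"
```

The real system faces a vastly harder problem (continuous speech, noisy signals, differences between individual brains), but the principle of pairing recorded activity with known words is the same.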

It was not an extensive experiment. It was tightly controlled, with only nine questions and 24 possible answers. The implants were highly invasive, sitting directly on the brain. Moreover, the patients were reciting the answers out loud rather than just thinking them. If the researchers were developing a car, they have so far built only the metal frame, devoid of wheels or components. In its current form, it is a far cry from the science-fiction future of controlling applications with thoughts alone.

Yet even in its primitive state, it gives a glimpse into the future. As Facebook says in its post, “being able to decode even just a handful of imagined words — like ‘select’ or ‘delete’ — would provide entirely new ways of interacting with today’s VR systems and tomorrow’s AR glasses.” Beyond this, a user could compose a text message in their mind and send it to friends without using their hands.

Facebook analysing data from the brain

Reading the brain is one red flag. The interface would allow users to ‘type’ sentences relatively quickly, interpreting the brain’s signals into readable content. Once sent, that data would sit on Facebook’s servers: encrypted, but still stored.

The other red flag – no, the ten red flags billowing on top of a warning sign emblazoned with a red flag – is how Facebook analyses the data. The company can extrapolate a person’s intentions, personality, and interests from what is said, profiling the user. A conversation is worth more than its words; it can be broken down and expanded into hundreds of different conclusions about a person.

Now imagine the kind of sensitive data Facebook could pull directly from the mind. When we type a text, we present a curated version of ourselves, which we then send to friends or family: we articulate the words, then send them over the internet. But if Facebook could read the mind without that articulation stage, it would see raw, unfiltered thoughts that were never translated to a screen. Currently, Facebook reads everything we post; the next step is understanding how posts are read and interpreted by the people who see them.

“The future is private.”

Facebook’s likely counter-argument is that users’ data will be private. Since F8 earlier this year, the company has made clear that it is designing privacy into the core of every product: all of Facebook’s messaging apps will have end-to-end encryption, and an independent board will oversee Facebook’s actions.

Problem solved, right? Facebook is pivoting the definition of privacy to rectify its public image, to become a company its users can trust. Except that after a $5bn fine from the Federal Trade Commission, the largest in the agency’s history, Facebook’s stock price rose. The fine came with little control over how the company runs itself; it was a feeble attempt to rein in the platform, a slap on the wrist rather than a removal of the limbs.

Facebook retains some flexibility: it can continue its own activities while merely making it harder for outside parties to access the data. As Matt Levine points out, “Facebook did some things that [many] people are upset about, some of which (certain sorts of data sharing) probably violated the laws or its earlier consent decrees, and others of which (certain sorts of data collection) didn’t.” So the data is still being collected by the company – and the company can still exploit it.


Regulating Facebook’s access to brain activity

Brain activity goes beyond digital ethics towards a new strand of thought – neuroethics, the study of moral conduct relating to the mind. “To me, the brain is the one safe place for freedom of thought, of fantasies, and for dissent,” Nita Farahany, a professor in neuroethics, told MIT Technology Review. “We’re getting close to crossing the final frontier of privacy in the absence of any protections whatsoever.”

With this new level of intimacy, a new layer of rules should be introduced to protect users and their rights. Here are some suggested principles to follow:

  • Limited access. Regulators should control which organisations can access and use the data; political campaigns, for example, would have restricted access.
  • Transparent design. The neuroethical design of Facebook’s brain interface must be open and understandable to regulators and agencies, so they fully understand what kind of data is collected.
  • Understandable algorithms. Similarly, the way the data is used must be open and understandable to regulators and agencies, so they can see how the data is interpreted and what conclusions are drawn from it.
  • Ownership. The user owns the data, not Facebook. If the user wishes the data to be deleted, all of it must be removed.
  • Open for users. Any user can access the data Facebook has collected about them, and the conclusions the company has drawn from that data.
  • Active opt-in. For every new update, Facebook outlines the details clearly and concisely, and users opt into the ways their data will be used. Facebook must not roll out new capabilities that users passively accept; users must actively understand each update and accept its impact on the data they own (see the sketch after this list).
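As a thought experiment, here is a minimal sketch of what the ownership and active opt-in principles could look like as a data model: consent is recorded explicitly per update, silence never counts as a yes, and deletion removes everything tied to the user. All names are hypothetical; this illustrates the principles rather than describing any real Facebook system.

```python
# Hypothetical data model for user-owned brain data with per-update consent.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    update_id: str    # the specific product update being consented to
    description: str  # the plain-language summary shown to the user
    accepted: bool

@dataclass
class BrainDataAccount:
    user_id: str
    recordings: list = field(default_factory=list)
    consents: dict = field(default_factory=dict)  # update_id -> ConsentRecord

    def opt_in(self, update_id: str, description: str, accepted: bool) -> None:
        # Consent is explicit and per update; it is never assumed.
        self.consents[update_id] = ConsentRecord(update_id, description, accepted)

    def may_use(self, update_id: str) -> bool:
        # Absence of a record means no: users passively accept nothing.
        consent = self.consents.get(update_id)
        return consent is not None and consent.accepted

    def delete_all(self) -> None:
        # Ownership: on request, everything tied to the user is removed.
        self.recordings.clear()
        self.consents.clear()

account = BrainDataAccount(user_id="example-user")
account.opt_in("update-2019-07", "Use brain data to improve word decoding", True)
assert account.may_use("update-2019-07")
account.delete_all()
assert not account.may_use("update-2019-07")
```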

Reasons behind the principles

The first three principles revolve around the design of the product. Facebook does not readily grant access to its algorithms or explain how they work. That is understandable, as they are among the most valuable formulas in the world. However, they should be open for regulators to see, as they have a direct impact on how the company uses the harvested data. With that visibility, the data’s use can be controlled to ensure it leads to no harm or exploitation.

The last three revolve around the user. Data privacy is a matter of ownership and use, and users themselves should own their brain data. Facebook can use the information, but the user has far more control over how. This ensures that users fully consent to the use of their data and understand how the company uses it.

Entering a new world of data privacy

These principles are broad, encompassing the spirit of protecting users. Regulators should protect people from having their data used against them for nefarious purposes, and current legislation is inadequate for brain interfaces.

Regulatory bodies are notoriously slow to respond to technology. Uber dominated the taxi industry before new city rules held it back. Airbnb’s dominance closed hotels and harmed local communities before governments stepped in. And Facebook’s mishandling of people’s data let campaign groups take dark approaches to winning political seats. Steps should be taken now to protect users in the future.

The clock is ticking.


