Brain-Computer Interfaces (Terra Futura)

A brain-computer interface was a piece of technology that allowed humans to connect their brains directly to computers. It was so revolutionary that it caused profound changes to society, and to humanity itself.

Background
The history of brain-computer interfaces went back to 1924, when the German scientist Hans Berger discovered that the brain had electrical activity. This led to the invention of electroencephalography, or EEG. EEG devices worked by detecting the electrical signals of the thinking brain through the scalp. The drawback was that the readings were noisy and easily confused. Other technologies, such as magnetoencephalography (MEG) and magnetic resonance imaging (MRI), were invented during the mid-20th century. These could measure activity deep within the brain, but they were bulky because of the large magnets they required. This would change when atomic magnetometers allowed for their miniaturization. Meanwhile, nanotechnology would go even further, attaching to different systems of the body, including the nervous system.

Description
Brain-Computer Interface technology came to humanity in four stages.

Stage One
Tech Level: 10

Around 2010, the first brain-computer interfaces were put on the market. These were crude and could only perform simple operations. One example was the necomimi, named after the anime trope of characters with cat ears. The necomimi was a headband with a brain-wave sensor and motorized cat-shaped ears. When the wearer concentrated, the ears moved up. When the wearer relaxed, the ears moved down. When the wearer was excited, the ears wiggled. The Japanese company that invented it, neurowear, also invented a tail called Shippo that worked the same way. Both the necomimi and Shippo did one thing only: they moved up or down, and sometimes wiggled, based on the wearer's mood. That was simple, but as technology advanced, brain-computer interfaces would be able to handle more complex tasks.
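The necomimi's behavior described above amounts to a simple mapping from a brain-state reading to one of three ear movements. Here is a minimal sketch of that mapping in Python, assuming a consumer-headset-style "attention" score from 0 to 100 plus a separate "excited" flag; the real product's algorithm and thresholds were never published, so every number here is hypothetical:

```python
# Toy simulation of necomimi-style ear control (illustrative only;
# the actual device's logic is proprietary and not public).

def ear_action(attention: int, excited: bool) -> str:
    """Map a brain-state reading to an ear movement."""
    if excited:
        return "wiggle"        # strong emotional response
    if attention >= 60:        # hypothetical concentration threshold
        return "ears up"       # wearer is concentrating
    return "ears down"         # wearer is relaxed

print(ear_action(80, False))   # concentrating
print(ear_action(20, False))   # relaxed
print(ear_action(50, True))    # excited
```

The point of the sketch is how little the first-generation devices actually did: one noisy scalar in, one of three motor states out.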

Stage Two
Tech Level: 10

In 2020, the 5G standard for mobile phones was released. Some of the new phones coming to the market were wearable, like a wristwatch, while others placed their displays onto lenses in a visor, glasses, or even contact lenses. These kinds of smartphones were equipped with brain-computer interfaces that allowed people to control them mentally, and neurowear got into this business itself. They were still comparatively rare, however, because the technology was sluggish and unreliable: a great deal of concentration was required to use them properly. It would take advances in neuroscience to bring the technology to the next level.
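One plausible reason these early mind-controlled wearables demanded so much concentration can be illustrated with a toy dwell-time filter: to avoid false triggers from noisy brain-wave readings, a command fires only after the user's attention stays above a threshold for a sustained stretch of samples. This is a generic sketch, not any vendor's actual design, and all names and numbers are hypothetical:

```python
# Toy dwell-time trigger: a "select" command fires only after the
# attention score stays above a threshold for dwell_samples in a row.
# This is why such interfaces felt sluggish: any lapse resets the timer.

def select_item(attention_stream, threshold=70, dwell_samples=10):
    """Return the sample index at which 'select' fires, or None."""
    run = 0
    for i, level in enumerate(attention_stream):
        run = run + 1 if level >= threshold else 0
        if run >= dwell_samples:
            return i        # command triggers after sustained focus
    return None             # concentration lapsed; nothing selected

# Ten consecutive high samples trigger a selection at index 9;
# a single dip in the middle resets the run and nothing fires.
print(select_item([80] * 10))
print(select_item([80] * 5 + [30] + [80] * 5))
```

Trading responsiveness for reliability this way is a common pattern in noisy-input interfaces generally, which fits the article's description of the technology as "sluggish and unreliable."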

Stage Three
Tech Level: 11

Reverse engineering the brain was a two-step process. (Note: The details will be discussed in their own articles.) The first step, of course, was to create a full map of the brain. Thanks to Moore's Law and the Human Brain Project, a complete map of the human brain was created by 2030. This had unexpected consequences for the brain-computer interface industry. neurowear, which had just been acquired by the Open Handset Alliance, Inc., along with other companies, created brain-computer interfaces that were far better than the previous generation. Using non-invasive methods, people could now send detailed messages in real time, and the operating systems were greatly improved. Miniaturization made the technology much more comfortable to use than before. Not only that, but nanotechnology was making brain-computer interfaces implantable inside the brain.

Stage Four
Tech Level: 11

As the year 2040 approached, brain-computer interface technology was perfected. It became so cheap that even developing countries wanted in, and privacy and security issues were resolved by personalized firewalls. By this time, computing, nanotechnology, medicine, and neuroscience were progressing so rapidly that it was becoming almost impossible to keep up. There was now so much information that it was hard for the unaided human brain to interpret. The solution was to merge human intelligence with artificial intelligence. The most advanced method involved directly implanting nanobots to link neural activity and electronic circuitry together, combining the best aspects of both human and artificial intelligence.

There was no longer a need for a monitor or a projector; the nanobots could produce a virtual screen in the user's field of vision. The operating system was controlled by the thoughts of both the user and the AI, which allowed many individual actions to be performed at once. Virtual reality was revolutionized, allowing for full immersion and augmented reality. Nanotechnology was revolutionizing other fields, too. Brain-computer interfaces allowed people to increase their intelligence and control all of the nanobots in their bodies. Nanobots were eliminating disease, regulating blood pressure, letting people change and customize their appearance, accelerating healing, repairing some age-related damage, and helping people control the appliances and lighting in their houses. Society was greatly changed by this. The line between man and machine was beginning to blur, and by the end of the 21st century, no clear distinction would exist.