Brains meet machines

Technology use changes our brains in real time
Perspective // 12 Apr 2016
What we do each day changes us. How we invest our time, the thoughts we think and what we’re exposed to change us both mentally and physically. The way we use our shared technologies is already significantly altering the physical structure of our brains, in ways we are only beginning to understand. Here are two examples:

Long-Term Memory vs. the Internet

We rely on the Internet to store knowledge for us. According to a joint Chinese-American study, the use of this external memory is changing the way our brains retain data, and can actually shrink the affected brain regions by up to 20%. The more we Google, in other words, the less likely we are to retain what we see. Instead of remembering the data itself, we remember where to look for it: a much smaller piece of information.

Here’s why this matters: both critical thinking and pattern recognition require access to the actual data formerly held in long-term memory. The “pointers” to this data are not enough. Will this diminish our capacity for critical thinking?

Geospatial: Use it or Lose it

We rely on GPS and Google Maps in lieu of navigating the world with physical maps. In response, the hippocampus shrinks. (Conversely, in long-serving London cab drivers, it grows!) Active navigation strengthens all of our spatial abilities, not just the ones that get us around town.

Here’s why this matters: atrophy in the hippocampus has been implicated in memory-related illnesses. Perhaps more importantly, when we rely on a screen, we don’t look at the world around us; we begin to think of places in the abstract. (Researchers: Bohbot, Leshed, Newcombe.)

We are conducting one big uncontrolled experiment on ourselves. It’s important to pay attention to the results as we move through the process.

We are embodied beings

There are many promising technologies for brain-to-machine communication, but what of the body? After all, we are integrated systems, not brains in a jar. What happens when we isolate the brain from the body? It turns out that decoupling sensory input from the body’s natural responses has an unexpected impact.

Gamers’ Syndrome?

One proxy for an experimental study group may be long-term gamers. Anecdotal reports suggest that prolonged play results in irritability and depression. Why does this happen? Researchers discovered that, without the engagement of muscles and full-body oxygenation, the regulation of neural chemicals is significantly changed.

In a real-world “fight or flight” situation, adrenaline is released to provide the energy needed to respond to an emergency, and that release would normally be followed by actually running or fighting. Running and fighting, in turn, produce chemicals that regulate the impact of adrenaline on secondary systems in the body. When playing in a virtual environment, there is no mediating response from the muscles, leading to a build-up of adrenaline and other neuromodulators. The result: anxiety, tension and irritability mount. Over the long term, the player’s adrenal glands, liver and kidneys are negatively affected.

Augmented Reality, Virtual Reality and the Body

Of course, with the evolution of physical interfaces and the rise of immersive environments, games and even sports will eventually be played in ways that integrate (and react to) a full-body response. Our brains may one day be unable to distinguish between a computer-generated environment and a naturally occurring one.

The man-to-machine continuum

Throughout human history, Homo sapiens has employed tools and technologies that increase, extend, mimic and replace our inborn capabilities. Our internalization of these technologies, from the artificial heart to the positronic brain, calls for a new lexicon: one that recognizes and validates the ongoing continuum from unadulterated humans to our digitally uploaded descendants. We propose a framework that merges the human and the digital across multiple dimensions, including physical capacity, computing power, the ability to learn and reason, creativity and feeling. Each typology is infinitely expandable into vast new niches and classifications. Each raises questions about future ethics and laws. And all carry the potential to transform existing industries, create new ones, and condemn others to the fate of the slide rule and the ear trumpet.

oHumans

Organic Humans: naturally born Homo sapiens with no modifications. There are roughly 360,000 births per day worldwide. In January of 2016, genetic editing of embryos was approved in the UK. Query: is a Genetically Modified Human (GMH) still certifiably organic?

Cyborgs

The term “cyborg” describes a symbiosis of human and machine, however subtle. Machine parts can be embedded in humans (such as RFID chips in the wrist) or attached as capability-enhancing, integrated prosthetics.

Box AI

Box AIs are intelligences that are not attached, or have not yet been attached, to any physical form or function. Examples include IBM’s Watson and Google’s DeepMind. Their thinking power may eventually be connected to any form of bot or droid.

Bots

Bots, or robots, are machines that perform certain actions. These can be simple programmed actions (like vacuuming a floor) or more robust, dynamically intelligent interactions (a programmed strike drone). There are robotic arms, robotic pets, human-like robots, insect- and hummingbird-sized robots, and flying bots (drones).

Android

The evolving combination of AI, sensors and robotics, along with flexible, skin-like materials, has allowed the creation of the first androids: AI embodied in a recognizably human form, with logic and task direction (and sometimes speech).

dHuman

It won’t be long before we have true human-seeming machines, or digital humans. Construction from bio- and programmable materials, the ability to model emotions (through analysis and facial expression), and an interface subtle enough to understand humor and make jokes: all of these will make it increasingly difficult to distinguish between man and machine.
