AI is a technological buzzword that generates a lot of excitement but also an element of dread. Will we see hyper-intelligent AI begin to replace human workers, or, in the most extreme scenarios, human civilization? Hardly. In fact, the differences between artificial and human intelligence mean that collaboration is key.
Artificial 'intelligence' can be something of a misnomer. When we say that a person is intelligent, clever, or smart, we usually assume a kind of general all-purpose aptitude. But intelligence can manifest in many different ways, and can, depending on context, refer in varying degrees to things like book-learning, memory retention, mathematical ability, problem-solving, social skills, artistic ability or technical mastery.
Artificial intelligence, or AI, refers to machines manifesting behavior that reminds us of human intelligence, but just as there are distinctions between the way intelligence manifests in humans, so we should also take care when referring to machines. AI can be fantastically effective and surpass humans at certain skills, but falls far, far behind in others.
Your brain is not a computer, and humans and AI don't think alike
So just because the word intelligence is used, we should not fall into the trap of assuming that AI 'thinks' in the same way that we do. For years it was popular to compare computers to the human brain and vice versa. But the human brain, although a powerful learning machine from birth, doesn't store information and move it between short-term and long-term memory the way a computer does.
In an essay titled The empty brain, Robert Epstein, a senior research psychologist at the American Institute for Behavioral Research and Technology in California, stresses the difference between human brains and computers. Rather than being information processors, human brains make connections between information received through the senses, connections that are strengthened by relevance to our subjective experience of being in physical bodies. The context of information is important to us - does it help us meet our survival or social needs? Even newborn babies are able to, for example, pick out human voices from ambient noise before they learn any words of their own.
Teaching computers to mimic this understanding of context is at the heart of machine learning. We train AIs in pattern recognition to sort information into groups that make sense to us humans: recognizing images as "dog", "sky" or "road"; matching the tone and cadence of a voice to different situations or types of people; or assembling text into forms that resemble a horror story, a recipe, an advertisement or a news broadcast.
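To make the idea concrete, here is a minimal sketch of that kind of supervised pattern recognition: humans supply labelled examples, and the machine learns to sort new inputs into those human-defined groups. The three-number "feature vectors" and labels below are invented for illustration; real image classifiers work on raw pixels with vastly larger models.

```python
from math import dist

# Human-labelled training data: (feature vector, label).
# The vectors and labels are made up for this sketch.
training_data = [
    ((0.9, 0.1, 0.2), "dog"),
    ((0.8, 0.2, 0.1), "dog"),
    ((0.1, 0.9, 0.8), "sky"),
    ((0.2, 0.8, 0.9), "sky"),
    ((0.5, 0.5, 0.1), "road"),
    ((0.4, 0.6, 0.2), "road"),
]

def train(data):
    """Average each label's examples into a single 'centroid'."""
    sums, counts = {}, {}
    for vec, label in data:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def classify(centroids, vec):
    """Pick the human-defined label whose centroid is closest."""
    return min(centroids, key=lambda label: dist(centroids[label], vec))

centroids = train(training_data)
print(classify(centroids, (0.85, 0.15, 0.15)))  # lands near the "dog" examples
```

The machine never "understands" what a dog is; it only measures distances between numbers. The meaning of the groups comes entirely from the humans who labelled the data.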
The thing is, of course, that AIs are usually terrible at this, and even after many rounds of intensive training, the products of neural networks are often little more than amusing word salads or bizarre images. Earlier, we reported on a neural network's attempts to come up with cookie recipes, not because it was amazingly intelligent, but because it was dumb. Just close enough to the real thing, but dumb enough to be hilarious:
trained the neural net for a little longer. i'm not sure if this is better. maybe this is better?

so far it has asked for:

1 cup greased bananas; granulated

1/2 cup milk or very crumbs

1/4 cup white wheat oat liquids pic.twitter.com/5FlouWWWJE

— Janelle Shane (@JanelleCShane), December 7, 2018
Without a capacity for abstract thinking, tasks that are relatively simple for humans, who understand the context of what they see, can be a real struggle for AI.
Twitter user Keaton Patti "forced a bot to watch over 1,000 hours of Olive Garden commercials", and Extra Crispy filmed the result... tempted?
AIs may not have gotten the hang of our relationship with food just yet, though in all fairness, they might make better commercials with more funding and less dialogue. But AI does have some advantages over us. First of all, AIs are fast: most humans can't memorize a song, picture or book in a second, to say nothing of classifying or reproducing it perfectly; that usually takes study, concentration and multiple attempts. An algorithm can not only store perfect copies, but also classify millions of images in a second and reproduce them ad infinitum, so long as it has the storage capacity.
The goal of neural networks is to eventually 'teach themselves', but they are first trained by highly educated, specialized human engineers who set up reward systems and guide the algorithms to produce results that we understand. The training data itself is gathered by human observation.
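A toy sketch can show what a human-designed reward system looks like in practice: the designer decides which outcome earns a reward, and the algorithm gradually prefers actions that pay off. This is a simple multi-armed-bandit loop with invented payoff numbers, not how any particular lab's training pipeline works.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

actions = ["A", "B", "C"]
# Hidden payoff probabilities the agent must discover (invented for this sketch).
true_payoff = {"A": 0.2, "B": 0.8, "C": 0.4}

values = {a: 0.0 for a in actions}  # the agent's running reward estimates
counts = {a: 0 for a in actions}

for step in range(2000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=values.get)
    # The human-designed reward signal: 1 if the action "worked", else 0.
    reward = 1.0 if random.random() < true_payoff[action] else 0.0
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(max(actions, key=values.get))  # the agent settles on the highest-payoff action
```

Notice that every piece of "intelligence" here was supplied by a person: the list of actions, the reward rule, even the balance between exploring and exploiting. The algorithm merely runs the numbers.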
AIs that can not only classify and retrieve data, but also navigate the real world in a similar way to us, may be the eventual end goal, but as breakthroughs are celebrated, it's important to remember just how slow the progress leading to these milestones really is. Video games, for example, make excellent training scenarios for AIs, and while the victories of, say, OpenAI or DeepMind over human experts are impressive, it takes a lot of training to get there.
A human who picks up a video game quickly learns how it works, how to dodge enemies, which items to collect, how to jump, run or whatever, and understands the goals of the game. Moreover, the human retains and applies those skills to the next level, or a totally different game, or a real-life scenario that reminds them of the game. For an AI, every new level starts from scratch, as much as we do our best to make them smarter.
The DeepMind and OpenAI agents that triumphed over humans had the equivalent of hundreds of years of practice, and if you dropped them into a game with even slightly different rules, they would fail where a human would quickly adapt.
As another example, OpenAI's Dactyl learned to manipulate and rotate simple objects with a robot hand:
Impressive, yes, and novel. But it took the equivalent of a hundred years of virtual training time to learn movements that it still fails at over 20% of the time. Once again, the human infant has the advantage.
Augmented intelligence is the future
This isn't to look down on the impressive progress in AI that we see today, only to say that the possibility of human intelligence being surpassed or replaced by AI has been greatly exaggerated. The key strengths of AI (data processing, perfect memory, speed and tirelessness) can all be a great boon to us, but ultimately human intelligence will always be needed to give the work of AI meaning.
Tasks that the human brain struggles with, such as storing and categorizing data on a huge scale, should be offloaded onto AI: delivering hyper-personalized content for products and advertising, filing millions of images, checking millions of parts for defects, screening millions of transactions for signs of fraud, or scanning thousands of faces per second for telltale symptoms of disease. Effectively, AI fulfills a human desire faster and more efficiently, in the same way a vehicle gets you where you want to go faster, or a photograph or document helps you remember something in detail. In today's data-driven economy, working with huge amounts of information generated by billions of people, AI makes perfect sense.
Our brains are very efficient in a lazy kind of way. If something is easily accessible through tools, computerized or otherwise, we're less likely to use our precious grey matter to store it inside our own heads. This doesn't make us stupid, but rather frees up mental resources for other, potentially more advanced tasks. It can take some getting used to. It may be that oral storytellers of ancient societies strongly objected to the use of writing to record history and store information in libraries. Perhaps it seemed unnatural and lacking in 'soul' (can mere paper with characters written on it deliver an epic poem with the passion and deftness of a bard?), but the end result was a mass enrichment of human culture, technical expertise, and productivity.
Rather than AI as artificial intelligence, perhaps we should think of it more as 'augmented intelligence', where the intelligence being augmented is our own. Repetitive data processing work being offloaded onto machines guided by humans should empower us to further develop our own intelligence in new ways. We aren't building a robot future, but rather a cyborg one, with machines as extensions of ourselves.
How do you see the future of human and AI collaboration? Do you believe that machines alone could ever be as intelligent as humans?