AI was once the stuff of science fiction and theoretical research before quietly working behind the scenes of online services. Now, we're starting to see the early stages of AI become widespread across the consumer market. As we hand over more and more management of our daily lives to algorithms, will our old ideas of personal responsibility continue to make sense?
Technology's effect on culture and society can be subtle. When presented with a new toy or service that makes our lives easier, we're quick to embrace and normalize it without thinking through potential consequences that only emerge years down the line.
Privacy was the first casualty
Take personal privacy, for example. When Facebook was rocked by the Cambridge Analytica scandal, it may have dominated the headlines and set the chattering classes ablaze, but amid all the outrage, the most common reaction I saw outside of media pundits and tech enthusiasts was indifference.
But why did people shrug and say "so what?" to such a massive leak of our personal information? To its use in sleazy advertising and to manipulate the results of important elections?
Perhaps because the technical processes behind it all are too complex for most individuals to have a clear idea of exactly how it happened. The user license agreements for all the different services a person can use are themselves dense and opaque, and we don't have the time to read, let alone understand, all of them. In fact, a study has shown that in order to read all of the privacy policies you encounter, you'd need to take a month off from work each year.
Yet many of us agreed to this Faustian bargain anyway, and gave up our privacy because say, Facebook or Google's services (among others), were too good not to use. Plus, all our friends (or our competitors, in a business context) were using it, and who wants to fall behind?
- The Cambridge Analytica scandal: how Facebook crossed the line
- Facebook sets itself against China, Russia, all human life
The question of how we got here is still being explored, but the fact remains: personal privacy in 2018 isn't what it used to be. Expectations are different, with many of us perfectly happy to give up information to corporations at a level of intimacy that would have shocked previous generations. It's the price we pay for entry into the world of technology, and for the most part, we're happy to do it.
You can urge people to use VPNs and chat on Signal all you want, but for the most part, the cultural shift has already happened: protecting privacy isn't a concern for most people. Certainly not enough for them to take any active steps, however much one might complain.
Personal responsibility will be next, thanks to AI
AI horror stories usually invoke fears of it becoming conscious and somehow turning against humanity. But the more realistic anxiety is that machine 'intelligence' has no real regard for us at all. Like any tool, it serves to make a task easier, faster, more efficient. But the further that tool is from a guiding human hand, the fuzzier the issue of personal responsibility becomes.
Privacy is one thing, but responsibility becomes serious, and can be quite literally a matter of life and death. When something AI-powered goes wrong and causes harm, who bears responsibility? The software engineers, even if the machine 'learned' its methods independently of them? The person who pushed the 'on' button? The user who signed a now-ubiquitous stream of dense legalese without reading it to get quick access to a service?
Self-driving cars are at the forefront of this ethical dilemma. For example, an autonomous vehicle developed by Nvidia is taught how to drive via a deep learning system using training data collected by a human driver. And to its credit, the technology is amazing. It can keep in its lane, make turns, recognize signs and so on.
- Autonomous cars are ready but face many challenges
- The Mate 10 Pro's AI drives a car...with us inside!
All good, so long as it's doing what it's supposed to. But what if an autonomous car decides to suddenly turn into a wall or drive into a lake? What if it swerves to avoid crashing into a pedestrian, but ends up killing its passenger in the process? Will the car have its day in court?
As things stand now, it could be impossible to find out why or how accidents happen, since the AI can't explain its choices to us, and even the engineers that set it up won't be able to follow the process behind every specific decision. Yet accountability will be demanded at some point. It could be that this issue will keep autonomous vehicles off the market until it's fully resolved. Or it could be that the technology becomes so exciting, so convenient and so profitable that we release it first and ask the difficult questions later.
Imagining AI involved in a car accident is a dramatic example, but there are going to be more and more areas of our lives in which we will be tempted to hand over responsibility to the machine. AI will diagnose our diseases, 'decide' who lives or dies, make multi-million dollar trading calls, and make tactical choices in war zones. We've already had problems with this, such as people with asthma being wrongly graded as low risk by an AI designed to predict pneumonia.
- How Google's DeepMind is grappling with the ethical issues of AI
- AI in healthcare: could it do more harm than good?
As AI becomes more advanced, it'll probably make the best decisions...99.9% of the time. The other 0.1% of the time, perhaps we'll just shrug like we did with the Facebook privacy scandal.
Smart assistants and apps will take on more responsibility
Let's zoom in a little closer, onto the individual. At Google I/O, the Mountain View colossus showcased a couple of ways for AI to make our lives a little easier. Virtual assistants have entered the mainstream in the last year or so, becoming a key part of many Americans' homes. Google's Duplex demo showed how you can delegate booking appointments to Assistant, having the robot make a phone call for you and book a haircut or a restaurant reservation. Google also wants to use Duplex for automated call centers, conjuring an amusing scenario of two robots having a conversation in human language.
Sounds cool, right? Except, well, there's a certain level of trust you give your virtual assistant when you let it act as your proxy like this. Communication over these tasks may sound simple, but it's actually fraught with potential problems.
For example, when we speak to each other, we pick up on subtle cues in our voices and attitudes to get an impression, human to human, of who we're talking to, and we act appropriately. Even with that, you know how easy it is to mortally offend someone by accident and cause an argument or spark outrage.
Where does the responsibility lie, however, when a virtual assistant says something perceived as offensive or embarrassing? If virtual assistants are somehow prevented from saying potentially offensive things, even ironically or as a joke or criticism, is that 'your' voice being censored? It's going to take a lot more than 'ums' and 'ahs' for AI to really be able to talk for us.
Watch Google Duplex in action at the Google I/O 2018 demo:
Another big theme, both at Google I/O and Apple's WWDC this year, was software that manages its own use, in the name of 'digital well-being'. The rather patronizing idea is that we won't leave our devices to go out and smell the roses unless our device reminds us to.
Users can set preferences for this kind of thing, of course, and yet I feel that having our wellness and time management handled by AI isn't far off, with a smart assistant managing our routine of health, work and entertainment according to what it's learned from our habits, fitness, environment and so on. And it could be very positive for many, though I would personally find that level of micromanagement a nightmare.
Of course, humans will resist handing over responsibility to AI unless there's a real advantage to be gained. And there will be advantages...in convenience, productivity, entertainment and so on. The advantages will be too good to resist, and I'm not one to advocate banning technology. We'll embrace AI technology for its benefits, and then adjust our social expectations around its reality.
Like it or not, our society will adapt to find a place for AI
The classic AI horror story usually details a super-intelligent machine that becomes self-aware and turns on its creators. While evil AI is about as realistic as vampires, werewolves or other horror fodder, our struggle with AI will be real but more mundane: the trade-off between convenience and accountability in myriad aspects of our daily lives.
But there is no consciousness behind artificial intelligence as we know it nowadays, no self or mind. We aren't building AI gods, but rather phenomena that are more akin to the complex but unthinking systems in nature. Things we depend upon and harness, but don't control.
In the worst case scenario, complaining about or questioning the methods of algorithms may be as absurd as questioning the tides or the wind. In the best case scenario, responsible development and consumption will keep ultimate responsibility in the hands of human beings who can trust each other, instead of pointing blame at the black box.
What do you think of the role that AI will play in our daily lives? Do we already trust algorithms with too much responsibility?