Man vs AI: Ethics and the Future of Machines

Recent strides in artificial intelligence from big-name players such as Google, Facebook, and Baidu, as well as increasingly successful heterogeneous systems like IBM’s Watson, have provoked fear and excitement among the intelligentsia in equal measure. Public figures such as Stephen Hawking are concerned, and not surprisingly the popular press is excited to cover it. Recently, Elon Musk has become worried that AI might eventually spell doom for the human race. He donated $10 million to the Future of Life Institute, whose stated goal is to ensure AI remains beneficial and does not threaten our wellbeing. An open letter by this organization, titled “Research Priorities for Robust and Beneficial Artificial Intelligence,” was signed by hundreds of research leaders. The influential futurist Ray Kurzweil has popularized the idea of the technological singularity, where intelligent systems surpass human capabilities and leave us marginalized at best.

Others have written that these fears are overplayed because well-known figures like Musk and Hawking, who are not themselves AI researchers, do not realize how limited current AI systems actually are. Fear-mongering may incite premature regulation, like the historical banning of Segways before anyone owned them, or the draconian FAA restrictions on commercial drone use.

While I personally fully support unhampered AI development, in this article I want to walk through some risk scenarios and interesting futures for mankind living alongside powerful thinking machines.

I will start by making the point that we already live with machines that are technically superior to us in many ways: able to lift heavier loads, work faster and more accurately than us in our factories, and sort and process complex data at unimaginable speeds and with ever greater automation. Modern algorithms can design our chips or circuit boards, optimize the shape of boat hulls, compute protein interactions, or discover new structural forms in materials science, and then manufacture or 3D print these things at the touch of a button. We think of our machines as tools, but there is a sense in which, as Marshall McLuhan is rumored to have said, “we are the genitals of our technology. We exist only to improve next year’s model.”

The turning point is one of closing the loop, where humans are no longer part of the feedback path; where, in the name of automatically making money while we sleep, we design ourselves out of the product-cycle equation so that the system runs itself. This is smooth sailing while we think we hold the reins, that is, until we encounter instability and non-linear emergent interactions that we could not predict.

It used to be that people would write books proving that strong AI was impossible, but those voices have grown ever quieter.

The public consciousness is full of robot doomsday scenarios as expressed in films and series like Blade Runner, Terminator, The Matrix, I, Robot, and Battlestar Galactica. It is a mechanistic rebranding of innate zombie fears. But the real concern, if there is one, lies with things that are much more subtle than a robot warrior towering above us.

Increasingly, machine intelligence brings up complex and practical ethical issues.

For example, we are at the cusp of commercializing driverless cars, but these vehicles have to navigate a complex safety space. When we give them complete control over transporting people, we delegate decisions about interactions with the environment that can escalate to the level of moral conundrums. In an accident scenario, how does the car weigh the lives of its occupants against the lives of pedestrians or other vehicles’ occupants, and how does it weigh either against potential property damage? Does the car realize the implications of colliding with a gas station in order to avoid a dog? Who exactly is to blame when things turn out badly? I see plenty of business for future lawyers in this area, but for now, laws relating to AI are largely absent.
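To make the weighing problem concrete, here is a deliberately toy sketch, in Python, of the kind of expected-harm calculation such a vehicle might perform. Every maneuver, probability, and weight below is a hypothetical illustration, not any manufacturer’s actual policy:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_occupant_injury: float    # estimated probability of injuring occupants
    p_pedestrian_injury: float  # estimated probability of injuring pedestrians
    property_damage: float      # expected property damage in dollars

def expected_cost(m: Maneuver,
                  w_occupant: float = 1.0,    # hypothetical ethical weights:
                  w_pedestrian: float = 1.0,  # is everyone valued equally?
                  w_property: float = 1e-6) -> float:
    return (w_occupant * m.p_occupant_injury
            + w_pedestrian * m.p_pedestrian_injury
            + w_property * m.property_damage)

options = [
    Maneuver("brake hard", 0.05, 0.20, 2_000),
    Maneuver("swerve into the gas station", 0.15, 0.01, 250_000),
]
print("chosen maneuver:", min(options, key=expected_cost).name)
```

The ethical controversy lives entirely in the weights: deciding how much property damage equals one likely injury, or whether occupants count for more than pedestrians, is exactly the kind of judgment we are quietly delegating.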

In a different area, current research in machine learning, face recognition, and human motion recognition aims to improve surveillance by predicting or flagging criminal activity, either by monitoring camera feeds or by tapping into the flow of internet communications and correlating user behavior across multiple sites. This is the developing RoboCop, Minority Report, or Big Brother scenario, and it raises important issues of privacy.

Increasing use of recognition systems based on machine learning leads to ethical questions. I believe we will find that RoboCop is just as prone to stereotyping, racism, and profiling as any human, because he is a product of his experience: if he statistically encounters more criminal activity from certain groups, he will learn to classify them as a greater risk.

The decisions of intelligent systems are only as good as their training data. If one were to naively train a classifier on a collection of photographs of people with and without criminal records and ask it to predict the chance of offending, it would seize on simplistic cues such as skin color or clothing choices if these were at all correlated with the task. Since US prisons are overflowing with minorities, it would be very easy to gather training data that contains a racial bias. The same is true of any system that tries to pick out criminal activity from surveillance footage. Even small biases in the machine’s experience that let race-related appearance factors creep into the equation could produce discriminatory features within a neural network that would not meet with our ethical approval. And the nature of neural nets is that they cannot easily be “patched” later to remove such effects.
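To see how little it takes, here is a minimal synthetic sketch. The true offense rate below is identical in both groups, but offenses in one group are recorded more often, and a naive classifier duly learns that group membership predicts risk. All numbers are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # a race-correlated appearance feature
risk = rng.normal(0, 1, n)           # true risk: independent of group
offended = (risk > 1.0).astype(int)  # same ~16% offense rate in both groups

# Biased recording: group-1 offenses always enter the dataset, group-0
# offenses only 30% of the time, so the label the model sees is skewed.
recorded = offended & ((group == 1) | (rng.random(n) < 0.3))

model = LogisticRegression().fit(group.reshape(-1, 1), recorded)
print("weight on group feature:", model.coef_[0, 0])  # strongly positive
```

The classifier is not malicious; it has faithfully learned the bias in its training data, which is precisely the problem.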

Just like the old-time LA cops, we have to decide whether we want the machine to get the job done, no questions asked, or whether we want to introduce ethical constraints.

Watching for mental illness is another area where intelligent monitoring is applicable. For example, there has been interest in detecting the onset of suicidal behavior at subway stations, because people tend to exhibit characteristic pacing patterns on the platform before jumping in front of a train. Elderly people could be monitored for accidents or symptoms of confusion. These are good applications for the technology, but they could be developed into Big Brother-style public behavior monitoring by more paranoid administrations.
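As a hedged sketch of how simply such a detector might begin, the code below flags a tracked person whose position along the platform reverses direction unusually often within a time window. The track, threshold, and function names are all invented; a deployed system would use far richer behavioral models:

```python
import numpy as np

def reversal_count(xs: np.ndarray) -> int:
    """Count direction reversals in a 1-D position track along a platform."""
    steps = np.sign(np.diff(xs))
    steps = steps[steps != 0]  # ignore stationary samples
    return int(np.sum(steps[1:] != steps[:-1]))

def looks_like_pacing(xs: np.ndarray, min_reversals: int = 6) -> bool:
    # Hypothetical threshold: several full back-and-forth sweeps per window.
    return reversal_count(xs) >= min_reversals

# A synthetic track: eight ten-meter sweeps within one observation window.
track = np.concatenate([np.linspace(0, 10, 30), np.linspace(10, 0, 30)] * 4)
print(looks_like_pacing(track))  # True
```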

Particularly concerning are cases where machines monitor human online or physical behavior and then make automated assessments that limit people’s access to resources. Start-ups now mine information about customers to determine whether they should receive a loan. Algorithms seize on all kinds of features, such as good English usage, to evaluate prospects. Lending rules state that age, sex, race, and religion cannot be part of the equation, but it is easy for a computer-optimized black box to find data patterns that directly correlate with those very things.
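One common audit for this, sketched below on invented data, is the “four-fifths rule”: compare the model’s approval rates across groups using a protected attribute that the model itself never saw. A ratio well below 0.8 flags the black box for review even when every input looks innocent:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
protected = rng.integers(0, 2, n)  # attribute withheld from the model
# A proxy the model does see (say, a writing-style score) that happens
# to correlate with the protected attribute:
proxy_score = rng.normal(loc=0.8 * protected, scale=1.0)
approved = proxy_score + rng.normal(0, 0.5, n) > 0.6  # the model's decisions

def disparate_impact_ratio(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of approval rates between groups (the 'four-fifths rule')."""
    rates = [approved[group == g].mean() for g in (0, 1)]
    return min(rates) / max(rates)

print(f"disparate impact ratio: {disparate_impact_ratio(approved, protected):.2f}")
```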

These trends show that AIs have an increasing potential to become gatekeepers in our society. How they make their decisions will continue to raise important ethical questions.

Internet bots are now commonplace, and text-based chatbots have been written that are sometimes able to convincingly pass the Turing test, Cleverbot and Mitsuku being examples. Attaching voice recognition and speech production is now possible. Convincingly lifelike interactive animated beings are being built, such as those by the New Zealand-based company Limbic. Interactive sales assistants hold ever more intelligent conversations.

Consider the promotional copy for Nuance’s Nina assistant: “Nina is the intelligent multichannel virtual assistant. A digital persona who delivers personalized, effortless customer service via a human-like conversational interface. Nina can be the ambassador for your brand, the all-knowing guide to your content, or the reassuring voice of your customer service organization.”

Social networks such as Twitter host many accounts that are probably bots, able to post messages and follow or retweet content to convince you they are human. These are set up to gain marketing or self-promotion advantage. Bots also routinely surf the web as part of ongoing click-fraud operations, soaking up advertising dollars.

As bots become more intelligent, it will be very hard to tell whether the agent you are talking to online or on the phone is human. Recently, a telemarketer appears to have used interactive voice recognition and speech production to make cold sales calls about health insurance. I see many reasons why this trend should gain momentum: it allows computers to call people and pre-screen them for interest in a product, with no labor costs, for a job people generally dislike doing.

Should people have the right to know that they are not talking to an actual human?

It would be easy to pad the user base of a dating site with bots that offer convincing online interactions, leading people to waste time building relationships with AIs rather than humans. Bizarrely, social bots themselves might not realize that they are chattering with other bots.

It seems predictable that AI bots will be set up for all kinds of nefarious purposes. Hackers, or people with marketing goals or political or religious ideologies, could use them to infiltrate social groups, carry out social engineering, befriend targets, and then insert promotional messages, evangelizing a particular product or perspective while pretending to be a buddy. Humans already do this, and bots are sure to follow. I believe this will become an active medium for propaganda.

Should we allow the creation of bots designed to influence people’s belief systems?

Companies now routinely use data-mining tools to inform decision making. It was recently reported that Deep Knowledge Ventures, a Hong Kong venture capital firm, appointed an AI to its board of directors.

We could soon find that it is not completely clear who is in charge, as computers analyze trends and marketing data, make decisions about rolling out strategies, and craft messages scientifically optimized for addictive product behavior in the manner described in Nir Eyal’s book, Hooked. The whole product cycle becomes a self-optimizing loop, and more power is handed over to machines while humans sit around like dazzled children with too many playthings.

The military makes growing use of unmanned weapons and surveillance systems, and trends toward further automation will result in a battlefield full of smart technology that largely runs itself. Ever greater moral ambiguity arises as decisions about how to handle local people in a conflict zone are increasingly delegated to machines, and the few humans in charge are able to blame those machines for negative outcomes.

In the Cold War movie Colossus: The Forbin Project, the military builds a physically inaccessible supercomputer to run the nation’s defenses; it gains superhuman intelligence and takes over with the help of a similar system in Russia. If we do build such a thing, we should be careful not to endow it with a cynical view of mankind, because if there is ever a poor AI scenario for humans, it will be the one where we raise our own evil machine children by transferring to them the animalistic and warlike urges of our tribe. And we will certainly not be able to destroy them simply by quoting verbal paradoxes à la Captain Kirk.

I have covered some areas where the introduction of AI technology is already beginning to raise ethical concerns. I now want to give five possible scenarios for mankind’s distant-future relationship with its artificial progeny:

  1. Humans fail to develop strong AI. In other words, we are limited by our finite ingenuity, and building these systems is beyond our capability. I think this is unlikely, because we have already moved rapidly from hand-crafting recognition algorithms to training networks that function in ways somewhat beyond our comprehension. There is probably a particular architecture or set of general principles that will eventually give us intelligent computers.
  2. Humans will build AIs, but events will show that this was a bad move, and we will react successfully to stop further development before there is a real handover of power. I think this is also unlikely, because once a technology is out of the bag, it is not easily suppressed. When we invented the atom bomb we had multiple crises during the Cold War, and while all-out atomic war is less of a worry today, there is still a constant threat of deployment by rogue states, and AI is far more accessible.
  3. We will become the Borg and merge with our machines. This is the scenario where brain and body prosthetics become an ever larger part of normal human life, where we keep augmenting our own makeup. The futurist Ramez Naam has written extensively on this subject in his Nexus trilogy. We might end up integrating ourselves with the very AIs that we currently fear. I think this is a possibility, but the development of smart machines is on a faster track than broadly available mind and body prosthetics, so we will get independent AIs first.
  4. We will be overtaken by intelligent machines and either be destroyed or marginalized to the level of animals, left to eke out our own naive existence. I suspect they won’t go to war with us; we might simply become irrelevant to them. But we could eventually die out in the way that invasive species out-compete more fragile local organisms.
  5. We will end up together in a symbiotic relationship, each taking on complementary roles in the ecosystem without major disruption. We will somehow retain enough control, and remain sufficiently useful, that our species persists.

I think that we already exist in such a symbiotic relationship. Terence McKenna described how we continue to enmesh ourselves in this state without much overt awareness:

“One of the strange things that is happening is every move that we now make in relationship to the new technology redefines them at the very boundaries where their own developmental impetus would lead them to a kind of independence. In other words we talk about artificial intelligence, we talk about the possibility of an AI coming into existence, but we do not really understand to what degree this is already true of our circumstance: how much of society is already homeostatically regulated by machines that are ultimately under human control but practically speaking are almost never meddled with.”

While it is true that we may end up creating a new species that then competes with us for available resources, I think this vision is a long way off. This new species will be a very fragile baby, without the millions of years of evolution that endowed humans with robustness, ease of reproduction, and the ability to muddle through a crisis.

At their birth, artificial intelligences will be extremely reliant on humans, because they will not possess our capabilities of self-replication, self-repair, or immune-like defense. We would constantly need to supply them with power, maintenance, and upgrades, and protect them from malicious information flows. Only when a completely autonomous and highly versatile army of robot service and supply-chain droids is vertically integrated into every segment of industry will our advantage here be offset, and that scenario is far from today’s reality.

In summary, I have looked at a number of recent developments related to intelligent machines and their use in society, and I have pointed out some of the ethical considerations that will become important. I suspect that the Dr. Frankenstein fear of creating new life is easily inflated, especially by people who do not realize how limited our current technology is. But I am sure that, as we progress rapidly in this direction, the concerns outlined here will prove to be just the tip of the iceberg of an increasingly vigorous debate.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature
returned to our mammal
brothers and sisters
and all watched over by machines of loving grace.

Richard Brautigan, “All Watched Over by Machines of Loving Grace”

Twitter: @robotbugs
