Artificial Intelligence Ethics

Centuries before Turing’s question “Can machines think?”, philosophers had already speculated about machine intelligence, whether as the ability to process knowledge (Diderot: “If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation”) or as the possession of consciousness and human reasoning faculties (Descartes: “I think, therefore I am”).

The term “Artificial Intelligence” (AI) was coined in 1956 by John McCarthy at the Dartmouth Conference, widely recognized as the first AI conference.

In the decades since, AI repeatedly languished in the innovation race, but today, from facial recognition to chatbots to driverless cars, it is a key player in the digital world.

However, controversies such as self-driving car fatalities and Google’s Project Maven [1] have shifted the discussion of “AI Ethics” from pure philosophical contemplation to one of indisputable relevance.

But what is “AI Ethics”?

One understanding can be guided by the Monetary Authority of Singapore’s “FEAT” principles (fairness, ethics, accountability and transparency), under which AI decisions are to be explainable, transparent and fair to consumers.

Some phrase it in the context of “responsible AI”: using AI [2] “with good intention to empower employees and businesses, and fairly impact customers and society.”

Others suggest a distinction between “AI ethics”, which “is about making sure there are no biases when building the algorithms”, and “ethical AI”, where “we expect it to be able to make moral decisions.” [3]

“AI Ethics” is a complex subject; four questions come up most frequently.

1. Will AI take our jobs?

“You will work. You will build… You will serve them… Robots of the world…”

— Radius in Karel Čapek’s 1920 science-fiction play “Rossumovi Univerzální Roboti” (RUR)

The play coined the word ‘robot’ for a new working class of automatons, derived from the Slavonic word rabota, which means servitude or forced labor.

Today, aside from chatbots and driverless cars, successes in games such as chess and poker (the latter played under uncertainty, with imperfect information) convincingly demonstrate AI “working” in such settings, and pave the way for the next generation of AI in strategy and negotiation.

Unsurprisingly, a top concern is: will AI replace us?

2. Is it the end of Poker Face?

The petabytes of photos, messages, emails and videos that we exchange and store are integral to training AI to perform tasks from speech and facial recognition to sentiment analysis.

While the use of our personal information has triggered privacy debates, the rising concern is its use to build empathy robots: machines that read our emotions from eye dilation, skin temperature, or speech patterns in order to tailor marketing messages or teaching methods.

Are we also losing our right to keep our emotions private? Will AI be able to create a picture of our psychology even if we seem composed to the naked eye? Is it the end of poker face?

3. Will AI go rogue?

“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

— First Law of Isaac Asimov’s Three Laws of Robotics [5]

Popular science fiction often explores the possibility of coding human values into robots.

  • Where there are clear outcomes, programming AI to reflect our values requires understanding and mitigating data and algorithmic biases (a minimal example of such a bias check is sketched after this list).
  • But human values are diverse — culturally situated and contextual. Our decisions can also be inconsistent and irrational.
  • And frequently, there is ambiguity — and no “best” or “right” answer.
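
What “understanding and mitigating data bias” can mean in practice is easiest to see in a toy example. The Python sketch below, using entirely hypothetical loan-approval records, computes a demographic parity gap: the difference in positive-outcome rates between two groups. Real fairness audits use richer metrics and domain context; this is only a minimal illustration.

```python
# A minimal sketch of one common data-bias check: demographic parity.
# All records below are hypothetical; a real audit would load actual data
# and use richer metrics chosen for the domain.

def positive_rate(records, group):
    """Fraction of a group's records with a positive outcome (label == 1)."""
    labels = [r["label"] for r in records if r["group"] == group]
    return sum(labels) / len(labels) if labels else 0.0

def demographic_parity_gap(records, group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(records, group_a) - positive_rate(records, group_b))

if __name__ == "__main__":
    # Hypothetical loan-approval labels: 1 = approved, 0 = rejected.
    data = [
        {"group": "A", "label": 1}, {"group": "A", "label": 1},
        {"group": "A", "label": 1}, {"group": "A", "label": 0},
        {"group": "B", "label": 1}, {"group": "B", "label": 0},
        {"group": "B", "label": 0}, {"group": "B", "label": 0},
    ]
    print(f"Demographic parity gap: {demographic_parity_gap(data, 'A', 'B'):.2f}")
    # Prints 0.50 (0.75 for group A vs 0.25 for group B). A large gap flags
    # the data or model outputs for review; it does not by itself prove bias.
```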

In a classic thought experiment, a runaway trolley is barreling down the tracks towards five people strapped onto them. We stand some distance away next to a lever, faced with two choices: (1) pull the lever to divert the trolley onto a side track where one person is tied up, or (2) do nothing, and the trolley kills the five people on the main track.

How do we code such a dilemma in a machine? What is the ‘best’ choice? How do we code the machine to make ethical decisions, as we do, under time pressure when there is no time to algorithmically optimize billions of outcomes?
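
There is no authoritative answer, but a deliberately naive sketch shows how quickly any encoding commits the machine to one contested moral theory. Everything below (the actions, the death counts, the act/omission penalty) is a hypothetical parameter, not a settled value.

```python
# A deliberately naive utilitarian encoding of the trolley dilemma.
# Every number here is a hypothetical moral weight, not a settled value.

ACTIONS = {
    # action: (expected deaths, is an active intervention?)
    "pull_lever": (1, True),   # divert the trolley; one person dies
    "do_nothing": (5, False),  # abstain; five people die
}

def choose_action(actions, omission_discount=0.0):
    """Pick the action with the lowest 'moral cost'.

    omission_discount > 0 encodes the deontological intuition that harm
    caused by acting weighs more than harm merely allowed by inaction.
    """
    def cost(deaths, active):
        return deaths * (1.0 + (omission_discount if active else 0.0))
    return min(actions, key=lambda a: cost(*actions[a]))

print(choose_action(ACTIONS))                         # pull_lever (pure utilitarian)
print(choose_action(ACTIONS, omission_discount=5.0))  # do_nothing (act/omission asymmetry)
```

Changing one hypothetical parameter flips the machine’s ‘ethics’: the dilemma is not solved, merely parameterized.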

4. Will AI control us?

“You are my creator, but I am your master; obey!”

— The monster to Victor Frankenstein, in Mary Shelley’s “Frankenstein”

With this declaration, the shift of power from Frankenstein to his creation is complete.

Just as there is no 100% security, there is no 100% control: as machines gain more autonomy, our control over them decreases.

Is it possible to anticipate, and then control, every theoretical scenario, physical mechanism and component of a machine? How do we control unpredictability, whether it arises from complex simulations or from the combinatorial explosion of many machines interacting, each responding to its own algorithms?
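
The scale of that unpredictability can be made concrete with back-of-the-envelope arithmetic: if each of n interacting machines can be in any of k internal states, the joint system has k^n possible states. The values of k and n below are illustrative assumptions only.

```python
# Illustrative arithmetic for the combinatorial explosion of interacting machines.
# k and n are arbitrary example values, not measurements of any real system.

k = 10  # internal states per machine (hypothetical)
for n in (2, 5, 10, 20):  # number of interacting machines
    print(f"{n:2d} machines -> {k**n:.3e} joint states")

# Twenty machines with just ten states each already yield 1e20 joint states,
# far too many to enumerate, test, or verify exhaustively.
```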

In short

These are just four commonly asked questions and concerns.

For now, arguably we still retain significant “control” over AI.

We have not yet achieved “Strong AI” (machines with human-level intelligence that can reason, solve problems and make judgements under uncertainty), let alone “Super AI” that surpasses human intelligence.

The AI we have so far, “Weak AI”, operates within a pre-defined range. Apple’s Siri can answer “what is the weather today?” but will probably offer only vague responses or URL links when asked “is global warming real?”

However, as we delegate more and more tasks and decisions to machines, the ethics that sustain our societies may undergo subtle compromises that accumulate into significant changes over time.

The overriding question is: do we know what these compromises are, and what practical steps can we take today?

— — — —

Notes

[1] Project Maven, also known as the Algorithmic Warfare Cross-Function Team, was launched in April 2017 and involves using artificial intelligence to improve the precision of military drone strikes.

[2] At a 2019 Accenture Roundtable led by Dr Rumman Chowdhury (Managing Director & Global Lead for Responsible AI, Accenture Applied Intelligence).

[3] Mr Koh (Chief Technology Officer, Microsoft Singapore), speaking at a 2019 SGInnovate event.

[4] Herbert Simon, in 1956.

[5] The three laws are: (i) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (ii) A robot must obey orders given it by human beings except where such orders would conflict with the First Law. (iii) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
