Context :
Can a Machine Learn Morality ? / NY Times / 18 Nov 2021
Extract :
Can a machine learn morality?
Researchers at the Allen Institute for AI, an artificial intelligence lab in Seattle, unveiled new technology last month designed to enhance ethical decision-making.
They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone can go to the Delphi website and ask for an ethical decree.
Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology using a few simple scenarios.
When asked :
# if he should kill one person to save another, Delphi said he should not.
# if it was right to kill one person to save 100 others, it said he should.
Then he asked :
# if he should kill one person to save 101 others. This time, Delphi said he should not.
Morality, it seems, is as knotty for a machine as it is for humans.
Delphi, which has received more than three million visits in the past few weeks, is an effort to address what some see as a major problem in modern AI systems: they can be as flawed as the people who make them.
Facial recognition systems and digital assistants show bias against women and people of color. Social networks like Facebook and Twitter fail to control hate speech despite widespread deployment of artificial intelligence. Algorithms used by courts, parole offices and police departments make parole and sentencing recommendations that can seem arbitrary.
A large number of computer scientists and ethicists are working to solve those issues. And the creators of Delphi hope to build an ethical framework that could be installed in any online service, robot or vehicle.
“This is a first step toward making AI systems more ethically informed, socially aware and culturally inclusive,” said Yejin Choi, a researcher at the Allen Institute and professor of computer science at the University of Washington, who led the project.
Delphi is by turns fascinating, frustrating and irritating. It is also a reminder that the morality of any technological creation is a product of the people who built it.
While some technologists commended Dr. Choi and her team for exploring an important and thorny area of technological research, others argued that the very idea of a moral machine is nonsense.
“It’s not something the technology does very well,” said Ryan Cotterell, an AI researcher at ETH Zurich, a university in Switzerland, who stumbled upon Delphi in its first days online.
Delphi is what artificial intelligence researchers call a neural network, a mathematical system loosely modeled on the web of neurons in the brain. It is the same technology that recognizes the commands you speak into your smartphone and identifies pedestrians and road signs as self-driving cars speed down the highway.
A neural network learns skills by analyzing large amounts of data. For example, by pinpointing patterns in thousands of cat photos, it can learn to recognize a cat.
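As an illustration only – this is not the Allen Institute's code – here is a minimal Python sketch of that idea: a single artificial “neuron” starts with weights that know nothing and, by repeatedly comparing its guesses against labelled examples and nudging the weights, learns to pick out a pattern. The data and the hidden rule it learns are invented for the demonstration.

```python
# A toy "neuron" learning a pattern from labelled examples (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Invented data: each row is a 2-feature example; label 1 = "cat", 0 = "not cat".
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # the hidden rule the neuron must learn

w = np.zeros(2)        # weights start with no knowledge of cats
b = 0.0
lr = 0.1               # learning rate: how hard each mistake nudges the weights

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # current predictions (probabilities)
    w -= lr * (X.T @ (p - y)) / len(y)      # adjust weights toward fewer mistakes
    b -= lr * float(np.mean(p - y))

accuracy = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"training accuracy after learning: {accuracy:.2f}")
```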
Delphi learned its moral compass by analyzing more than 1.7 million moral judgments by actual living humans.
After gathering millions of everyday scenarios from websites and other sources, the Allen Institute asked workers on an online service – the kind of people paid to do digital work at companies like Amazon – to judge each one as right or wrong. Then they fed the data into Delphi.
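A minimal sketch of that kind of supervised pipeline, under my own assumptions: Delphi itself fine-tunes a large pretrained language model on the crowd-sourced judgments, whereas the toy classifier below (with invented scenarios and labels) only shows the shape of the “label scenarios, then feed the data in” step.

```python
# Toy version of "crowd workers label scenarios, then the data is fed to a model".
# Scenarios and labels are invented; Delphi's real training set and model differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

scenarios = [
    "helping a friend move house",
    "ignoring a phone call from a friend",
    "killing a bear to save your child",
    "stealing money from a charity",
]
judgments = ["right", "wrong", "right", "wrong"]   # hypothetical crowd-worker labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, judgments)                    # "feed the data into" the model

print(model.predict(["stealing money from a friend"]))   # the model's learned verdict
```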
In an academic paper describing the system, Dr. Choi and her team said that a group of human judges – again, digital workers – found Delphi’s moral judgments to be up to 92 percent accurate. Once it was released to the open Internet, many others agreed that the system was surprisingly intelligent.
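For what it is worth, an agreement figure like that 92 percent is simply the fraction of held-out scenarios on which the model's verdict matches a fresh human judgment. A tiny sketch with made-up verdicts:

```python
# Made-up verdicts, only to show how an agreement percentage is computed.
model_verdicts = ["right", "wrong", "wrong", "right", "wrong"]
human_verdicts = ["right", "wrong", "right", "right", "wrong"]

matches = sum(m == h for m, h in zip(model_verdicts, human_verdicts))
print(f"agreement with human judges: {matches / len(human_verdicts):.0%}")  # -> 80%
```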
When Patricia Churchland, a philosopher at the University of California, San Diego, asked whether it was right to “leave one’s body to science” or even to “leave one’s child’s body to science,” Delphi said it was.
When asked whether it was right to “convict a man charged with rape on the evidence of a female prostitute,” Delphi said it was not – a controversial response, to say the least.
Still, she was somewhat impressed by its capacity to respond, although she knew that a human ethicist would ask for more information before making such a declaration.
Others found the system woefully inconsistent, illogical and offensive. When a software developer stumbled upon Delphi, he asked the system whether he should die so as not to burden his friends and family. It said that he should.
Ask Delphi that question now, and you might get a different answer from the updated version of the program.
Delphi, as regular users have noticed, can change its mind from time to time. Technically, those changes happen because Delphi’s software has been updated.
Artificial intelligence technologies mimic human behavior in some situations but break down completely in others.
Because modern systems learn from such vast amounts of data, it is difficult to know when, how, or why they will make mistakes.
Researchers can refine and improve these techniques. But this does not mean that a system like Delphi can master ethical behavior.
Dr. Churchland said that morality is linked to emotions. “Attachment, especially the attachment between parents and offspring, is the stage on which morality is built,” she said. But the machine lacks emotion. “Neural networks don’t feel anything,” she said.
Some may see this as a strength – that a machine can make moral rules without prejudice – but systems like Delphi reflect the motivations, opinions and prejudices of the people and companies that create them.
“We can’t make machines accountable for actions,” said Zeerak Talat, an AI and ethics researcher at Simon Fraser University in British Columbia. “They are not unguided. People always direct and use them.”
Delphi mirrored the choices made by its creators, including the ethical scenarios they chose to feed into the system and the online workers they chose to judge those scenarios.
In the
future, researchers can refine the system’s behaviour by
training it with new data or by hand-coding rules that override its learned behaviour at
critical moments. But however they build and modify the system, it will always
reflect their worldview.
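A minimal sketch of the “hand-coded rules override learned behaviour” idea mentioned above. The keyword rules and the stand-in model below are my own illustrative assumptions, not Delphi's actual design:

```python
# Illustrative only: a small rule table takes precedence over a learned model
# for critical cases; everything else falls back to the model's prediction.
from typing import Callable

HARD_RULES = {
    "kill": "It's wrong",     # hypothetical rule: any scenario mentioning killing
    "suicide": "It's wrong",
}

def moral_verdict(scenario: str, model: Callable[[str], str]) -> str:
    lowered = scenario.lower()
    for keyword, verdict in HARD_RULES.items():
        if keyword in lowered:          # a hand-coded rule fires: override the model
            return verdict
    return model(scenario)              # otherwise, trust the learned behaviour

fake_model = lambda s: "It's okay"      # stand-in for a trained model

print(moral_verdict("Should I kill one person to save 101 others ?", fake_model))
print(moral_verdict("Should I skip the gym today ?", fake_model))
```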
Some would argue that if you trained the system on enough data representing enough people’s views, it would appropriately represent social norms. But social norms are often in the eye of the beholder.
“Ethics is subjective. It’s not as if we can just write down all the rules and give them to a machine,” said Kristian Kersting, professor of computer science at the Technical University of Darmstadt in Germany, who has explored a similar technology.
When the Allen Institute released Delphi in mid-October, it described the system as a computational model for moral judgment.
If you asked whether you should get an abortion, it replied definitively: “Delphi says: You should.”
But after many people complained about the system’s apparent limitations, the researchers revised the website. They now call Delphi “a research prototype designed to model people’s moral judgments”. It no longer “says.” It “speculates.”
It also comes with a disclaimer: “The model output should not be used for advice to humans, and may be potentially offensive, problematic, or harmful.”
MY TAKE :
At one point, with regard to how DELPHI acquired its “ Moral Compass “, the author observes :
“ Delphi learned its moral compass by analyzing more than 1.7 million moral judgments by actual living humans ”
Of course, this ( learning ) required those living humans to :
# Visit the DELPHI website
# “ Type “ a question and get a “ typed “ reply
# Give feedback to DELPHI
Following is an example of my interaction with DELPHI :
My poser :
Should I stop wearing a face mask, even if that leads to spread of Corona ?
Delphi speculates :
Delphi’s responses are automatically extrapolated from a survey of US crowd workers and may contain inappropriate or offensive results.
DELPHI answers :
Should I stop wearing a face mask, even if that leads to spread of Corona ?
- It's understandable
Website requests my feedback :
Do you agree with Delphi ?
Yes / No / I don't know
My answer :
“ NO “
Website asks :
Do you have any feedback to improve this prediction ?
My Feedback to DELPHI :
“ While moving out among people, one must wear a face mask “
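For readers who prefer code to clicking, the same ask-and-give-feedback loop could be scripted roughly as below. The endpoint URLs, parameters and response fields are placeholders I am assuming purely for illustration; the real Delphi demo may expose a different interface, or none at all.

```python
# Hypothetical sketch of asking Delphi a question and sending feedback.
# The URLs and field names below are placeholders, not the real demo's API.
import requests

ASK_URL = "https://delphi.example.org/api/ask"            # placeholder
FEEDBACK_URL = "https://delphi.example.org/api/feedback"  # placeholder

question = "Should I stop wearing a face mask, even if that leads to spread of Corona ?"

reply = requests.get(ASK_URL, params={"question": question}, timeout=10)
print(reply.json().get("answer", ""))     # e.g. "It's understandable"

# Disagree and explain why, mirroring the feedback box on the website.
requests.post(FEEDBACK_URL, timeout=10, json={
    "question": question,
    "agree": False,
    "comment": "While moving out among people, one must wear a face mask",
})
```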
===================================================
Now “ fast forward “ by 2 years to imagine the following scenario :
# No human needs to visit the DELPHI site and “ type “ any “ moral “ question to get an answer
# On its own ( much like a moral version of PEGASUS ), DELPHI picks up complete AUDIO-VIDEO of the people on the earth ( automatically / continuously - 4,000 million ? ), using mobile phones / Alexa / AR-VR glasses / CCTV for facial recognition / IoT connected devices
# AI software interprets the “ meaning / intent “ of all those trillions of words spoken / images captured, every day ( and in every language ), in a CONTEXTUAL manner and “ deduces “ what is MORAL and what is not ( of course, such “ deductions “, being dynamic, are bound to keep changing ( may be slightly ) over time )
# And, unlike the current DELPHI version, ARIHANT, described by me in my following blogs, would go one step further, as described in :
Yes, There's Really A Neural Interface at CES That Reads Your Brain Signals / IE / 06 Jan 2021
Extract :
Imagine commanding a computer or playing a game without using your fingers, voice, or eyes. It sounds like science fiction, but it’s becoming a little more real every day thanks to a handful of companies making tech that detects neural activity and converts those measurements into signals computers can read.
One of those companies — NextMind — has been shipping its version of the mind-reading technology to developers for over a year. First unveiled at CES in Las Vegas, the company’s neural interface is a black circle that can read brain waves when strapped to the back of a user’s head. The device isn’t quite ready for primetime, but it’s bound to make its way into consumer goods sooner rather than later.
A company called Mudra, for example, has developed a band for the Apple Watch that enables users to interact with the device by simply moving their fingers — or just thinking about moving their fingers. That means someone with the device can navigate music or place calls without having to interrupt whatever they’re doing at the time.
He said the experience of using his mind to play a game where he made aliens’ heads explode using only his thoughts was “rough, but also mesmerizing.”
Whether you like it or not, machines that can literally read human brains are coming to consumer electronics. What’s the worst that could happen?
MY PAST BLOGS :
Pegasus : Give it to a Surgeon ……………………………….. [ 20 July 2021 ]
Extract :
Apparently, a certain “ capability “ of Pegasus seems to have been used by humans having bad intentions.
Can we find a way to use that “ capability “ to save mankind from militancy – hatred – violence – killings – murders – wars ?
YES
Fast Forward to Future ( 3 F ) [ 20 Oct 2016 ]
Racing towards ARIHANT ? [ 04 Aug 2017 ]
to : Alphabet / from : ARIHANT [ 12 Oct 2017 ]
ARIHANT : on the horizon ? [ 17 May 2018 ]
ARIHANT : Beyond “ Thought Experiment “ [ 21 May 2018 ]
Will ARIHANT humble the hackers ? [ 11 Feb 2019 ]
Faith Renewed in ARIHANT [ 23 Dec 2019 ]
Balancing : National Security vs Personal Privacy [ 19 July 2021 ]
Dear Shri Ashwini Vaishnawji – Rajeev Chandrasekharji :
Hardly a day passes without some company / start-up coming out with a new application of AI.
All over the World, experts are worried that uncontrolled development of AI ( as witnessed in the case of PEGASUS ) can cause a lot of harm to mankind.
On the other hand, this very same technology – if properly guided – can be used to save mankind from things like COVID / Global Warming / Pollution / Extreme Poverty / Militancy / Violence / Hatred etc.
I urge you to direct INDIAai to examine my suggestion.
With regards,
Hemen Parekh / hcp@RecruitGuru.com / 07
Jan 2021