Context:
A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn (NY Times / 30 May 2023)

Extract:
A group of industry leaders warned on Tuesday that the artificial intelligence technology they were building might one day pose an existential threat to humanity and should be considered a societal risk on a par with pandemics and nuclear wars.
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization. The open letter was signed by more than 350 executives, researchers and engineers working in A.I.
The signatories included top executives from three of the leading A.I. companies:
> Sam Altman, chief executive of OpenAI;
> Demis Hassabis, chief executive of Google DeepMind; and
> Dario Amodei, chief executive of Anthropic.
Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s A.I. research efforts, had not signed as of Tuesday.)
The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models — the type of A.I. system used by ChatGPT and other chatbots — have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.
A New Generation of Chatbots
Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).
Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.
Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.
Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.
Eventually, some believe, A.I. could become powerful enough to create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.
These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.
This month, Mr. Altman, Mr. Hassabis and Mr. Amodei met with President Biden and Vice President Kamala Harris to talk about A.I. regulation. In Senate testimony after the meeting, Mr. Altman warned that the risks of advanced A.I. systems were serious enough to warrant government intervention and called for regulation of A.I. for its potential harms.
Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter represented a “coming-out” for some industry leaders who had expressed concerns — but only in private — about the risks of the technology they were developing.
MY TAKE:
A Runaway Nuclear Chain Reaction?

Hundreds of companies, big and tiny, are releasing dozens of AI tools every single day: APIs, plugins, apps, etc. In turn, these AI tools are churning out their own “clones” at a frightening pace. It may soon look like a runaway chain reaction in a nuclear reactor, which no one will be able to stop or even slow down.
Global Warming took:
> decades to raise the average temperature on earth by about 1 °C;
> roughly six decades to raise atmospheric CO2 from about 0.03% to 0.04%.
An AI MELTDOWN will take only a few months.
I urge ALL concerned stakeholders to take a look at my following suggestion to avoid this catastrophe. The following COMPARATIVE TABULATION might help kick off a debate to draw up a:
“MAGNA CARTA to SAVE HUMANS”
Law | Scope | Enforcement
EU AI Act | Artificial intelligence systems | European Commission
  | Chatbots | International Authority for Chatbots Approval (IACA)
  | All artificial intelligence systems | United Nations
With regards,
Hemen Parekh
www.hemenparekh.ai / 02 June 2023
Related Readings:
Rising concern: 74% of Indian workers worried AI will replace their jobs (Microsoft Report)
EU, US ready common code of conduct on artificial intelligence