As Elon Musk Calls For A ‘Pause’ On AI Development,
Will The AI Wars Reach A Temporary Truce?
[ Forbes / 30 March 2023 ]
Extract :
Key takeaways
· The Future of Life Institute, backed by Elon Musk, Apple co-founder Steve Wozniak and DeepMind engineers, has called for an immediate pause on developing advanced AIs
· OpenAI founder Sam Altman and Microsoft co-founder Bill Gates have both warned about AI advancements, but companies have been rewarded with stock price bumps
· No legal or regulatory framework for AI yet exists
Could the AI wars come to a crashing halt? Yes, if Elon Musk gets his way. The billionaire, along with other tech leaders and AI engineers, has published an open letter calling for an immediate six-month pause on advancing AI technology so that regulation can catch up.
OpenAI and Bill Gates have warned about the risks of AI, so everyone’s singing from the same hymn sheet, but the Future of Life Institute has taken things a step further. The question is: will Big Tech pay attention? Let’s look at what the letter said, whether AI risks going off the rails and what it means for investors.
What does the open letter say?
Elon Musk is not pulling punches when it comes to AI development. He and almost 1,300 other AI experts and researchers have published an open letter calling for all AI labs to immediately pause “the training of AI systems more powerful than GPT-4”, OpenAI’s latest model, until regulatory frameworks are established.
According to the letter, the pause should last at least six months, include ‘all key actors’ publicly announcing their intentions, and involve government intervention should the pause not happen quickly.
The letter was published by the Future of Life Institute, which lists Musk as an external advisor on its website. The letter has been signed by top leaders in the field alongside Musk, including Apple co-founder Steve Wozniak, Stability AI founder Emad Mostaque and hundreds of other engineers from DeepMind, Amazon, Meta and Microsoft.
What have other leaders said about AI?
While commentaries from the likes of Bill Gates and Sam Altman have lacked this kind of call to action, all have warned that AI development needs to proceed with an abundance of caution.
The open letter cites Sam Altman’s blog from
February this year, where the OpenAI founder discusses the need
for government intervention in AI training.
He said “it may be important to get independent review before starting to
train future systems, and for the most advanced efforts to agree to limit the
rate of growth of compute used for creating new models”.
Bill Gates, the former CEO of Microsoft, had a more upbeat tone
in his letter on
AI. He singled out education, climate change and healthcare as some of the
industries AI had the potential to transform.
But he, too, warned about the need for caution in building AI,
saying a lack of safeguards around AGI could have devastating consequences for
humankind.
“The world needs to establish the rules of the road
so that any downsides of artificial intelligence are far outweighed by its
benefits,” he wrote.
Computing giant Microsoft has a $10 billion partnership with
OpenAI and has become a surprise leader in the space. Its stock price has
climbed 17% since the start of the year to hit highs of $280.
What recent developments have there been in AI?
What’s prompted the open letter is tech
companies’ race to outpace one another as this exciting new
technology develops.
Google’s Bard AI
has now launched, with UK and US users able to sign up to try the ChatGPT competitor.
Google has also announced it will be integrating Bard into its Google apps like
Sheets, Docs and Meet, but there’s no timescale on that release.
The search engine titan has been chasing Microsoft since its
OpenAI announcement.
Microsoft has been announcing AI-infused upgrades to its suite
of products at a dizzying pace thanks to its OpenAI investment.
Its newest launch is Microsoft Security Copilot, an AI-powered cybersecurity tool for businesses. Microsoft stock was up 1.5% after the announcement.
Chinese tech company Baidu “pulled a Google” with a rocky launch for its Ernie AI chatbot, with shares dropping
10% on the same day. The share price quickly recovered after it was announced
30,000 companies had signed up to the service, with the stock rallying 14%.
Adobe and Nvidia announced their continued partnership to release Adobe’s generative AI platform, Firefly, which is set to augment its Photoshop and Illustrator tools, among others. Adobe stock rose 3.1% after the announcement and Nvidia also saw a 1.2% bump.
It’s been a rocky ride for tech companies developing a ‘future
tech’ during an economic downturn. One wrong move and the share prices have
come tumbling down, as we saw with Google’s Bard launch. But on the flip side,
those who have debuted strong AI products and partnerships have been rewarded
with share price bumps.
Should AI development slow down?
The open letter isn’t calling for a complete ban, stating, “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
There’s no denying there’s a distinct lack of an AI regulatory framework. The UK rejected plans for an AI-specific regulator this week, instead opting for its existing regulators to adopt AI principles guidance. The EU and US are said to be consulting on the matter, but as it stands no AI-specific legislation exists.
At this stage, the public has experienced a taste of how AI can
speed up workflows and help with productivity thanks to ChatGPT, Microsoft Bing
and Google Bard. Wall Street also expects to see further development from
bigger companies as the AI start-ups have already produced the goods.
Pressing the pause button gives companies and governments time to address concerns around privacy, algorithmic bias and regulation, and gives the public a chance to get used to AI.
From an investment perspective, it’s likely companies that take
a long-term view on AI regulation will fare better when it inevitably comes in.
Wall Street will be keeping an eye on those who prioritize AI safety sooner
rather than later.
The bottom line
While the tone of the letter is decidedly doom and gloom, it’s meant to draw attention to the potential risks of developing AI. It sounds like science fiction right now, but the top minds in the field see it as an inevitability, and so want the brakes on now, before the train comes off the tracks.
Governments tend to be slow in dealing with emerging tech, but
if they can be convinced of the threat then we could well see a pause take
place.
MY TAKE :
The USA exploded the first atom bomb in 1945.
Soon, the USA was joined by Russia, the UK and other countries.
Over the next 30 years, these “developed countries” stockpiled thousands of Atom / Hydrogen bombs.
Then they realized the danger of a “runaway race” (to produce atom bombs) among dozens of countries.
So they negotiated to :
· Gradually destroy the stockpile of bombs (some bombs got destroyed but a huge stock survived)
· Halt further production of these bombs by ANY country, and coerce many countries to sign up to the “Nuclear Non-Proliferation Treaty (NPT)”.
· India refused to sign the NPT, insisting that, as a first step, all existing bombs stockpiled by the pioneers MUST be destroyed. It was totally unfair of the USA, Russia etc. to continue to “hold on” to their own stockpiles while asking other countries NOT to develop their own bombs.
· At the same time, of its own volition, India declared that :
# All of its Nuclear Programs will be directed towards “Peaceful Uses” of nuclear energy
# India will never use a Nuclear Weapon first, but will use one only if attacked with one
This UNILATERAL STAND won over the hearts of citizens around the World.
Now comes the following report :
Extract :
IT Minister Ashwini Vaishnaw on Monday hinted at a "big announcement" in a few weeks in the context of conversational AI tools. To a specific question on whether India can build something equivalent to the conversational AI tool ChatGPT, the minister said "wait for a few weeks, there will be a big announcement".
When asked about what the big announcement might be, the minister said, "Parliament is in session, so I cannot say anything..." He declined to give further details. Vaishnaw was speaking at the India Global Forum event.
Dear Shri Vaishnawji,
To be fair, the petition does not ask only the newcomers to stop experimenting with Conversational AI (aka Chatbots); it asks everybody. But then OpenAI, Google etc. are enjoying a “first-mover advantage”.
Therefore, India must work twice as hard and fast, not only to catch up with but to overtake the BIG TECH.
No one has any moral right to tell us to give up / slow down on our AI program.
But,
As the leader of the G20 this year, we have a unique / golden opportunity to demonstrate to the World our “moral leadership” of “Vasudhaiva Kutumbakam” (the Whole World is one Family), by UNILATERALLY ADOPTING “Parekh’s Law of Chatbots”, and then asking those petitioners to demonstrate their SINCERITY OF INTENT by convening a world conference to debate and adopt my proposal (no doubt with modifications deemed necessary).
Dear Shri Vaishnawji,
It is time to “Propose” (a concrete measure) instead of just “Oppose”.
It is time to “Act” rather than wait for the Government of some other country to act first and then “React”.
It is time to “Write History” rather than letting our future generations “Read History”.
And that time is NOW.
With regards,
Hemen Parekh
www.hemenparekh.ai / 01 April 2023
“Parekh’s Law of Chatbots” is vindicated – by ChatGPT :
My poser to ChatGPT :
In the following blog, I have tried to compare the actual past development of the Atom / Hydrogen Bomb with a probable future development of Conversational AI (Chatbots).
Then I examined the petition and found that it contains a vaguely similar mechanism for the future regulation of AI, as proposed by me in “Parekh’s Law of Chatbots”.
Please tabulate these parallel developments (WMD and AI) and comment on my proposal’s feasibility.
ChatGPT replied :
It is interesting to draw a parallel
between the development of WMDs (Weapons of Mass Destruction) and the
development of AI, specifically chatbots.
In the past, the development of WMDs
such as the Atom/Hydrogen bomb was not regulated, leading to significant risks
for humanity. Similarly, the development of AI, especially chatbots, is rapidly
advancing without a regulatory framework, and some experts warn about its
potential risks.
The Future of Life Institute, backed by Elon Musk, Steve Wozniak, and
DeepMind engineers, has recently called for a pause in developing advanced AIs
until a regulatory framework is established. OpenAI founder Sam Altman and
Microsoft founder Bill Gates have also expressed concerns about AI's risks.
The call for a pause in AI development is similar to Parekh's Law of Chatbots, which proposes that chatbots should be regulated to prevent harm to
humans.
The law states that chatbots should be designed with human safety in
mind, and their development and deployment should be regulated by an
international body.
The comparison between the
development of WMDs and AI is intriguing, as both pose significant risks to
humanity if not regulated appropriately.
It is crucial to have a regulatory
framework to ensure that AI is developed and used safely.
Parekh's
Law of Chatbots is a step in the right direction, and it is feasible to implement such
regulation as long as all stakeholders,
including AI developers, governments, and the public, work together to achieve
this goal.
===========================================
Related Readings :
https://interestingengineering.com/innovation/gpt-5-chatgpt-indistinguishable-human
https://medium.com/@ConversationUS/ai-isnt-close-to-becoming-sentient-the-real-danger-lies-in-how-easily-we-re-prone-to-b767c96552a2
https://thealgorithmicbridge.substack.com/p/gpt-4-the-bitterer-lesson
https://levelup.gitconnected.com/i-created-a-voice-chatbot-powered-by-chatgpt-api-here-is-how-6302d555b949
https://medium.com/inkwater-atlas/chatgpt-nvidias-founder-lua-huang-predicts-next-generation-ai-model-will-be-1-million-times-more-d5ddb271df4b
https://timesofindia.indiatimes.com/gadgets-news/stop-openai-from-releasing-more-chatgpt-version-us-group-to-ftc/articleshow/99130052.cms
https://cio.economictimes.indiatimes.com/news/next-gen-technologies/chatgpt-was-always-prone-to-open-source-code-related-vulnerabilities/99132311
https://medium.com/enrique-dans/a-pause-on-the-development-of-ai-its-not-going-to-happen-d4f894816e82
https://medium.com/generative-ai/urgent-call-to-pause-ai-experiments-beyond-gpt-4-what-it-means-and-why-it-matters-6d4ca098da2f
https://gesikowski.medium.com/gpt-4-tried-to-escape-into-the-internet-today-and-it-almost-worked-2689e549afb5
https://betterprogramming.pub/the-dark-side-of-llms-we-need-to-rethink-large-language-models-now-6212aca0581a
-------------------------------------------------------------------------------
Added on 05 April 2023 :