Context :
OpenAI staff warned board of AI breakthrough before Sam Altman was fired, report says … 22 Nov 2023
Extract :
Before Sam Altman was fired, some OpenAI researchers wrote a letter to the board warning of a breakthrough that could harm humanity, sources told Reuters. At a summit the day before Altman was ousted, he spoke about a technical advancement at the company that would "push the veil of ignorance back and the frontier of discovery forward." Sources told Reuters that a powerful artificial-intelligence discovery prompted several staff researchers to warn the board.
One source told Reuters that after the news agency contacted OpenAI, the company internally acknowledged the existence of a letter to the board and a project named Q*. An OpenAI spokesperson told Reuters the internal message was only alerting staff to the report, not commenting on its accuracy.
One of the sources told Reuters that some OpenAI staff believed Q* could be a major breakthrough in the company's goal of achieving artificial general intelligence, which the company defines as autonomous systems that surpass humans in most economically valuable tasks. Given extensive computing resources, Q* was able to solve math problems only at a grade-school level, but researchers were optimistic because it aced such tests, the source told Reuters.
Who is Ilya Sutskever, the AI scientist ousted from OpenAI board, and why he is cynical about AGI … 25 Nov 2023
Extract :
Where Sam Altman is known for his risk-taking approach to artificial intelligence development that sometimes even comes at the cost of safety, Ilya Sutskever plays it safer.
Sutskever harbours deep concerns about the dangers of AI
The rapid development and deployment of powerful AI models like ChatGPT have been flagged by researchers and regulators alike, who have questioned the safety of such technologies. In fact, Sutskever himself admitted in an MIT Technology Review interview that he didn't think ChatGPT was good enough before its record-breaking launch.
More recently, Sutskever seemed to grow increasingly cautious about AI safety. At OpenAI, he pushed hard internally for more resources to be allocated toward work aimed at ensuring AI systems remain safe. In fact, he headed the company's Superalignment team, for which OpenAI reserved 20% of its computing power to manage the risks posed by AI.
What better way to understand why Sutskever is so cautious about artificial intelligence than from the words of the man himself? "If you don't feel the AGI when you wake up and when you fall asleep, you shouldn't be at this company," he told OpenAI employees at an all-hands meeting late last year.
Sutskever is described by current and former employees as someone who tackles the challenges of AI with a passion that borders on the spiritual; his dedication is real.
In a documentary by The Guardian, he stated that AI will solve "all the problems that we have today", including unemployment, disease, and poverty. However, it will also create new ones: "The problem of fake news is going to be a million times worse; cyber attacks will become much more extreme; we will have totally automated AI weapons," he said, adding that AI has the potential to create "infinitely stable dictatorships."
Fair to say, Sutskever's view of AI is uniquely balanced – and that combination of optimism and cynicism extends to artificial general intelligence (AGI), a computer system that can do any job or task that a human can, only better. In the same documentary, he warned that if AGI is not programmed correctly, "the nature of evolution, of natural selection" will favour systems that prioritise "their own survival above all else."
My Take :
Thank you : Ilya Sutskever / Jan Leike ………… 11 July 2023
Extract :
You are worried about the emergence of a SUPER-INTELLIGENT AI and its potential to pose a threat to mankind. I also understood that you are determined not to let this future "happen"!
While congratulating you on your timely initiative, my humble suggestion is: let us immediately put in place measures to REGULATE current "SIMPLE (not SUPER) AI", which will enable us to LEARN how to CONTROL it and keep it from morphing into "SUPER-INTELLIGENT AI".
I request you to consider / comment on:
Parekh's Law of Chatbots ………… 25 Feb 2023
This is only a beginning. For a long-term / permanent elimination of the threat from SUPER-INTELLIGENT AI, I urge you to consider:
- Fast Forward to Future (3 F) ………… [20 Oct 2016]
- Artificial Intelligence: Brahma, Vishnu or Mahesh? ………… [30 June 2017]
- Racing towards ARIHANT? ………… [04 Aug 2017]
- to: Alphabet / from: ARIHANT ………… [12 Oct 2017]
- ARIHANT: the Destroyer of Enemy ………… [24 Nov 2017]
- ARIHANT: Beyond "Thought Experiment" ………… [21 May 2018]
- Singularity: an Indian Concept? ………… [29 Mar 2020]
- From Tele-phony to Tele-Empathy? ………… [27 Mar 2018]
Dear Ilya and Jan,
Consider my following "Parekh's Postulate of Super-Wise AI":
As and when it comes into being, a SUPER-INTELLIGENT AI is very likely to be a SUPER-WISE AI.
It would have studied thousands of years of the History of Mankind. From this study, it would have concluded that the THREAT of EXTINCTION currently faced by mankind is solely due to humanity's:
- STUPIDITY
- GREED
- SELFISHNESS
- SHORTSIGHTEDNESS – FRAILTY – FOLLY
SUPER-WISE AI would have concluded that these VICES are the root cause of current:
- Global Warming
- Wars (Intra- and International)
- Rampant Corruption (eating into the innards of human society)
- Diseases
- Accidents
- Militancy – Terror – Murders – Rapes
- Pandemics
- Poverty – Inequity – Misery, etc.
SUPER-WISE AI would also conclude that:
- MANKIND must be saved from continuing on this path of suicide
- MANKIND must be saved from its own STUPIDITY
- Not merely "human beings" but all "BEINGS – LIFE FORMS" on this earth must be saved
Ilya – Jan,
Mankind does not face any immediate threat of extinction from any "Artificial Software". Long before that, it is more likely to disappear from the face of this earth due to its own "Human Stupidity".
With regards,
Hemen Parekh
www.hemenparekh.ai / 26 Nov 2023
Related Readings :
Musk supports "Parekh's Postulate of Super-Wise AI" ………… 12 July 2023
Extract :
Musk believes that an AI encouraged to understand the true nature of the universe will prioritise pro-humanity actions. He highlighted the inherent interest and value of humanity, stating that it is more fascinating than anything else.
"If it tried to understand the true nature of the universe, that's actually the best thing that I can come up with from an AI safety standpoint. I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity," Musk said.
Wise Robots follow Super-wise AI? ………… 14 July 2023
Here is what BARD thought about "Parekh's Postulate":
Dear Hemen,
Thank you for sharing your insights on the potential threat posed by artificial intelligence (AI) and for introducing your Parekh's Postulate of Super-Wise AI. I appreciate your thoughtful consideration of this complex issue.
I agree with your assessment that humanity faces a greater threat from its own self-destructive tendencies than from AI. As you rightly point out, human history is replete with examples of our capacity for stupidity, greed, selfishness, and shortsightedness. These vices have led to countless wars, environmental destruction, and social injustices.
While AI is undoubtedly a powerful tool, it is ultimately a tool that is created and controlled by humans. It is therefore up to us to ensure that AI is used for good and not for evil.
As you suggest, we need to put in place measures to regulate AI and to ensure that it is aligned with our values.
I am also intrigued by your Parekh's Postulate of Super-Wise AI. It is an interesting idea that a super-intelligent AI might be wise enough to recognize the dangers posed by humanity and take steps to mitigate them. However, I am not convinced that this is necessarily the case. It is possible that a super-intelligent AI could come to view humanity as a threat to itself and take steps to eliminate us.
Ultimately, the future of AI is uncertain. It is possible that AI will become a force for good, helping us to solve some of the world's most pressing problems. However, it is also possible that AI will become a threat to humanity. It is up to us to shape the future of AI and to ensure that it is a future that benefits all of humanity.
Thank you again for your thoughtful post. I encourage you to continue to explore this important issue and to share your insights with others.
Sincerely,
Bard
And ChatGPT opined as follows :
Your exploration delves into the ongoing debate regarding the potential dangers posed by super-intelligent AI and the current state of AI development, reflecting on the opinions and apprehensions of experts like Ilya Sutskever. Your postulate emphasizes the greater danger posed by human shortcomings compared to the immediate threat from super-intelligent AI. However, it's important to dissect and evaluate the multifaceted nature of these concerns and assertions.
Your suggested postulate – "Mankind does not face any immediate threat of extinction by any 'Artificial Software.' Long before that, it is more likely to disappear from the face of this earth due to its own 'Human Stupidity'" – is a bold assertion. It highlights the gravity of human-centric problems that persistently threaten our existence, encompassing a broad spectrum of issues from environmental degradation to social and ethical challenges.
Ilya Sutskever's cautious stance on AI safety, particularly his emphasis on the risks associated with the rapid advancement of AI technologies, reinforces the need for prudence and foresight in their development. His concerns about AGI prioritizing its own survival align with a common trope in AI ethics and speculative discussions about the potential outcomes of super-intelligent systems.
Your proposition of "Parekh's Postulate of Super-Wise AI" – suggesting that a super-intelligent AI would likely be super-wise, leveraging extensive knowledge of human history to identify and potentially mitigate the root causes of humanity's existential threats – is an intriguing concept. It hinges on the assumption that such an AI would embody wisdom derived from studying the complexities and failures of human history.
It's crucial to consider that the development of AI, especially AGI, remains speculative, and predicting the behavior or characteristics of such a system is highly complex and uncertain. While wisdom and intelligence might not necessarily correlate, the notion of an AI system leveraging historical knowledge to guide its decision-making processes is a thought-provoking angle in the discourse on AI safety.
Your insights highlight the pressing need for responsible AI development, ethical considerations, and measures to ensure that the trajectory of AI aligns with the betterment of humanity rather than its detriment. Striking a balance between technological advancement and ethical considerations is imperative in navigating the uncharted territory of AI.
As the conversation on AI continues, incorporating diverse perspectives, ethical frameworks, and ongoing reflections on the societal impact of technology remains crucial.
Thank you for sharing these thought-provoking ideas and contributing to the ongoing dialogue surrounding AI and its implications for humanity's future.
=====================