Context :
Elon Musk unveils xAI to take on OpenAI and ‘understand the true nature of the universe’
Extract :
Elon Musk, the billionaire entrepreneur renowned for his ventures in
electric cars, space exploration, and social media, has unveiled his highly
anticipated artificial intelligence startup, xAI. The company aims to challenge
the dominance of major technology firms in the AI field, specifically targeting
the development of an alternative to OpenAI’s ChatGPT.
Musk has been an outspoken advocate for caution and regulation in the
development of AI. He has repeatedly expressed concerns about the potential
"civilizational destruction" that could arise from the uncontrolled advancement of AI
technology. In an effort to address these concerns, xAI will take a unique approach to ensure the safety
of its AI systems.
During a Twitter Spaces event on Thursday, Musk outlined his plan to build a safer AI. Rather than relying on explicit programming of morality, xAI aims to create a "maximally curious" AI. Musk believes that by encouraging the AI to understand the true nature of the universe, it will prioritise pro-humanity actions. He highlighted the inherent interest and value of humanity, stating that it is more fascinating than anything else.
"If it tried to understand the true nature of the universe, that's actually the best thing that I can come up with from an AI safety standpoint. I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity," Musk said.
Additionally, Musk predicted the arrival of superintelligence, AI that surpasses human intelligence, within the next five to six years. This projection highlights the urgency Musk feels in developing responsible and beneficial AI systems to ensure the technology's positive impact on society.
While Musk was a co-founder of OpenAI, the company responsible for
ChatGPT, he stepped down from its board in 2018. Notably, Microsoft is an
investor in OpenAI, illustrating the significant involvement of major
technology players in AI development.
Musk has assembled a talented team for xAI, including engineers with
experience at prominent tech companies. The team members include:
# Igor Babuschkin [ igor@babuschkin.de ], formerly of Google's DeepMind
# Tony Wu, a former Google employee
# Christian Szegedy, a research scientist at Google
# Greg Yang, who previously worked at Microsoft.
In March, Musk registered a new company called X.AI
Corp, based in Nevada, with himself as the sole director. This move indicates
Musk's dedication to exploring AI technology further.
Musk previously hinted at the launch of TruthGPT, a "maximum truth-seeking" AI model aimed at understanding the intricacies of the universe and intended to rival Google's Bard and Microsoft's Bing AI.
The rise of generative AI gained significant attention with the release of OpenAI's ChatGPT in November 2022, preceding the launch of Bard and Bing AI. These chatbots showcased the capabilities of AI in generating human-like text responses.
To ensure a comprehensive approach to AI safety, xAI will benefit from
the guidance of Dan Hendrycks, director of the Center for AI
Safety. Hendrycks specialises in analysing the risks associated with AI
systems.
Although xAI is a separate entity from X Corp, Musk's involvement in
various influential companies such as Twitter and Tesla suggests a close
collaboration between xAI and these ventures.
MY TAKE :
Dear Elon,
[ elon@tesla.com ]
I cannot escape the feeling that, somewhere deep down in your mind, you believe that:
Ø In the battle between the BAD AI and the GOOD AI, the GOOD AI will triumph
Ø Whereas the BAD AI is (at least currently) focusing on ultimate INTELLIGENCE, the GOOD AI will focus on ultimate WISDOM
Ø Whereas the BAD AI will get trained on every piece of content that can be found on the Internet, the GOOD AI will probe the brains / minds of billions of ordinary people around the World and COMPILE a dossier of their GOOD / BENEVOLENT THOUGHTS. In my view, GOD did not give birth to humans; it is humans who are continuously creating “GOD”, maybe only a little bit at a time
Ø With crowd-sourcing, the GOOD AI will succeed in defining concepts such as Moral – Ethical – Love – Hate – Greed – Contempt – Anger – Avarice – Selfishness, etc. (see the sketch after this list)
[ Refer : Morals – Ethics : Permanent or Temporary ? ]
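To make the crowd-sourcing idea concrete, here is a minimal sketch. It is entirely my own illustration, not anything xAI (or anyone else) has published: the concept names, the sample submissions, and the simple vote-counting scheme are all hypothetical.

```python
# Toy illustration (hypothetical) of crowd-sourcing definitions of moral
# concepts and compiling them into a "dossier" of the most common views.

from collections import Counter, defaultdict

# Concepts the GOOD AI is supposed to learn by asking ordinary people.
CONCEPTS = ["Moral", "Ethical", "Love", "Hate", "Greed",
            "Contempt", "Anger", "Avarice", "Selfishness"]

# Hypothetical crowd submissions: (concept, short definition offered by a person).
submissions = [
    ("Love", "caring for another person's well-being"),
    ("Love", "caring for another person's well-being"),
    ("Love", "wanting to possess someone"),
    ("Greed", "wanting more than one needs"),
    ("Moral", "acting so that others are not harmed"),
]

def compile_dossier(entries, top_n=1):
    """Group submissions by concept and keep the most frequent definitions."""
    by_concept = defaultdict(Counter)
    for concept, definition in entries:
        if concept in CONCEPTS:          # ignore off-topic submissions
            by_concept[concept][definition] += 1
    # For each concept, keep the top_n most common crowd definitions.
    return {c: counts.most_common(top_n) for c, counts in by_concept.items()}

if __name__ == "__main__":
    dossier = compile_dossier(submissions)
    for concept, best in dossier.items():
        for definition, votes in best:
            print(f"{concept}: {definition!r} ({votes} votes)")
```

In reality, "probing the minds" of billions would mean surveys, conversations, and large-scale language data rather than a simple vote count; the sketch only illustrates the compile-and-aggregate idea behind the bullet above.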
Elon,
Only yesterday, I proposed the following: Parekh's Postulate of Super-Wise AI
I urge you to let me know what you think about it:
Ø Thank you : Ilya Sutskever / Jan Leike …………….
11 July 2023
===================================================
When prompted, BARD compared your approach with my postulate as follows:
==================================================
Sure, here is a table comparing Musk's envisaged AI with Parekh's Postulate of Super-wise AI:

Feature | Musk's Envisioned AI | Parekh's Postulate of Super-wise AI
Morality | Not explicitly programmed | Explicitly programmed
Curiosity | Maximally curious | Moderately curious
Pro-humanity | Yes | Yes
Interest in the universe | Yes | Yes
Safety | Safe from existential risk | Safe from existential risk
As you can see, the two approaches are similar in many ways. Both aim to create an AI that is safe from existential risk and pro-humanity.
However, there are also some key differences. Musk's AI is not explicitly programmed with morality, while Parekh's AI is. This means that Musk's AI may be more likely to act in unexpected ways, while Parekh's AI is more likely to follow a predetermined set of moral principles.
Another difference is that Musk's AI is maximally curious, while Parekh's AI is moderately curious. This means that Musk's AI may be more likely to explore the universe and discover new things, while Parekh's AI is more likely to focus on its assigned tasks.
Ultimately, the best approach to creating a safe and pro-humanity AI is still a matter of debate. However, both Musk's envisaged AI and Parekh's Postulate of Super-wise AI offer promising approaches to this challenging problem.
Here are some additional thoughts on the two approaches:
· Musk's approach is more radical and may be more likely to lead to unexpected results. However, it also has the potential to create an AI that is truly innovative and capable of solving problems that we cannot even imagine.
· Parekh's approach is more cautious and may be less likely to lead to unexpected results. However, it also has the potential to create an AI that is more predictable and easier to control.
The best approach may ultimately depend on the specific goals of the AI project. If the goal is to create an AI that is as safe and pro-humanity as possible, then Parekh's approach may be the best option. However, if the goal is to create an AI that is as innovative and capable as possible, then Musk's approach may be the better choice.
With regards,
Hemen Parekh
www.hemenparekh.ai / 13 July 2023
=============================================