Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder,"Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday ( 27 June 2023 ) , I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me , even when I am no more here physically

Wednesday, 12 July 2023

Musk supports “ Parekh’s Postulate of Super-Wise AI “

 


 

Context :

Elon Musk unveils xAI to take on OpenAI and ‘understand the true nature of the universe’

Extract :

 

Elon Musk, the billionaire entrepreneur renowned for his ventures in electric cars, space exploration, and social media, has unveiled his highly anticipated artificial intelligence startup, xAI. The company aims to challenge the dominance of major technology firms in the AI field, specifically targeting the development of an alternative to OpenAI’s ChatGPT.

 

Musk has been an outspoken advocate for caution and regulation in the development of AI. He has repeatedly expressed concerns about the potential "civilizational destruction" that could arise from the uncontrolled advancement of AI technology. In an effort to address these concerns, xAI will take a unique approach to ensure the safety of its AI systems.

 

During a Twitter Spaces event on Thursday, Musk outlined his plan to build a safer AI. Rather than relying on explicit programming of morality, xAI aims to create a " maximally curious " AI. 

 

Musk believes that by encouraging the AI to understand the true nature of the universeit will prioritise pro-humanity actions

 

He highlighted the inherent interest and value of humanity, stating that it is more fascinating than anything else.

 

"If it tried to understand the true nature of the universe, that's actually the best thing that I can come up with from an AI safety standpoint. I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity," Musk said.

 

Additionally, Musk predicted the arrival of superintelligence, AI that surpasses human intelligence, within the next five to six years

 

This projection highlights the urgency Musk feels in developing responsible and beneficial AI systems to ensure the technology's positive impact on society.

 

While Musk was a co-founder of OpenAI, the company responsible for ChatGPT, he stepped down from its board in 2018. Notably, Microsoft is an investor in OpenAI, illustrating the significant involvement of major technology players in AI development.

 

Musk has assembled a talented team for xAI, including engineers with experience at prominent tech companies. The team members include:


#     Igor Babuschkin,[ igor@babuschkin.de ],  formerly of Google's DeepMind


#     Tony Wu,                                                 a former Google employee

 

#     Christian Szegedy,                                  a research scientist at Google

  

#     Greg Yang,                                         who previously worked at Microsoft.

 

In March, Musk registered a new company called X.AI Corp, based in Nevada, with himself as the sole director. This move indicates Musk's dedication to exploring AI technology further. 

 

Musk previously hinted at the launch of TruthGPT, an AI model focused on seeking maximum truth, aiming to rival Google's Bard and Microsoft's Bing AI, both designed to understand the universe's intricacies.

 

The rise of generative AI gained significant attention with the release of OpenAI's ChatGPT in November of the previous year, preceding the launch of Bard and Bing AI. These chatbots showcased the capabilities of AI in generating human-like text responses.

 

To ensure a comprehensive approach to AI safety, xAI will benefit from the guidance of Dan Hendrycks, director of the Center for AI Safety. Hendrycks specialises in analysing the risks associated with AI systems.

 

Although xAI is a separate entity from X Corp, Musk's involvement in various influential companies such as Twitter and Tesla suggests a close collaboration between xAI and these ventures.

 

MY  TAKE  :

Dear Elon,

[  elon@tesla.com  ]



I cannot escape the feeling that, somewhere deep down in your mind, you believe that :

Ø  In the battle between BAD AI and the GOOD AI , the good AI will triumph

Ø  Whereas the BAD AI is ( at least currently ) focusing on the ultimate INTELLIGENCE , the GOOD AI will focus on ultimate WISDOM

Ø  Whereas BAD AI will get trained on every content that can be found on Internet, the GOOD AI will probe the brains / minds of billions of ordinary people around the World and COMPILE a dossier of their GOOD / BENEVOLENT THOUGHTS . According to me, GOD did not give birth to humans but it is the humans who are continuously creating  “ GOD “ - may be only a little bit at time )

Ø  With crowd-sourcing , GOOD AI will succeed in defining concepts such as > MoralEthicalLoveHateGreedContemptAngerAvariceSelfishness  etc

           [   Refer :  Morals – Ethics : Permanent or Temporary ?  ]

 

 

Elon ,


Only yesterday , I proposed following :  Parekh’s Postulate of Super-Wise AI

I urge you to let me know what you think about it :

Ø  Thank you : Ilya Sutskever / Jan Leike  ……………. 11 July 2023

 

===================================================

When prompted , BARD compared your approach with my postulate as follows :

 ==================================================

 

Sure, here is a table comparing Musk's envisaged AI with Parekh's Postulate of Super-wise AI:

Feature

Musk's Envisioned AI

Parekh's Postulate of Super-wise AI

Morality

Not explicitly programmed

Explicitly programmed

Curiosity

Maximally curious

Moderately curious

Pro-humanity

Yes

Yes

Interest in the universe

Yes

Yes

Safety

Safe from existential risk

Safe from existential risk

 

As you can see, the two approaches are similar in many ways. Both aim to create an AI that is safe from existential risk and pro-humanity.

However, there are also some key differences. Musk's AI is not explicitly programmed with morality, while Parekh's AI is.

This means that Musk's AI may be more likely to act in unexpected ways, while Parekh's AI is more likely to follow a predetermined set of moral principles.

Another difference is that Musk's AI is maximally curious, while Parekh's AI is moderately curious. This means that Musk's AI may be more likely to explore the universe and discover new things, while Parekh's AI is more likely to focus on its assigned tasks.

Ultimately, the best approach to creating a safe and pro-humanity AI is still a matter of debate. However, both Musk's envisaged AI and Parekh's Postulate of Super-wise AI offer promising approaches to this challenging problem.

Here are some additional thoughts on the two approaches:

·         Musk's approach is more radical and may be more likely to lead to unexpected results. However, it also has the potential to create an AI that is truly innovative and capable of solving problems that we cannot even imagine.

·         Parekh's approach is more cautious and may be less likely to lead to unexpected results. However, it also has the potential to create an AI that is more predictable and easier to control.

The best approach may ultimately depend on the specific goals of the AI project.

If the goal is to create an AI that is as safe and pro-humanity as possible, then Parekh's approach may be the best option.

However, if the goal is to create an AI that is as innovative and capable as possible, then Musk's approach may be the better choice.

 

With regards,

Hemen Parekh

www.hemenparekh.ai  /  13 July 2023

 

 =============================================



 =================================

Added on 16 July 2023 ( MINT ) :




Here are 7 things that you should know about xAI:

A “maximally curious" AI platform

During a Twitter Spaces event on Wednesday evening, Elon Musk shared his strategy for developing a safer artificial intelligence. He explained that xAI would not focus on explicitly programming morality into its AI systems. Instead, the company aims to create an AI that is "maximally curious." By fostering a high level of curiosity in the AI, Musk believes it will naturally align its behavior with human values and reduce the potential risks associated with AI development.

"If it tried to understand the true nature of the universe, that's actually the best thing that I can come up with from an AI safety standpoint. I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity," Musk said.

Meaning of xAI

According to NDTV, in the realm of technology, the term "xAI" is often used as an abbreviation for Explainable AI or Interpretable AI. This concept focuses on enabling humans to comprehend the rationale behind the decisions or predictions made by artificial intelligence systems. 

To shed light on the company's objectives and vision, the xAI team has organized a Twitter Spaces event, a virtual meeting, scheduled for July 14. During this session, they will address inquiries and provide insights into the company's workings.

Timeline for superintelligence

Musk made a prediction that superintelligence, referring to artificial intelligence surpassing human intelligence, would emerge within a span of five to six years.

xAI’s team

xAI's team comprises notable individuals with extensive experience in the tech industry. This includes Igor Babuschkin, formerly an engineer at Google's DeepMind, Tony Wu, who has worked at Google, Christian Szegedy, a former research scientist at Google, and Greg Yang, who previously held a position at Microsoft.

Musk’s role in xAI

According to a state filing, Musk registered a company called X.AI Corp in March, incorporated in Nevada. Musk is listed as the sole director of the firm, and Jared Birchall, the managing director of Musk's family office, serves as the secretary. 

Dan Hendrycks, currently serving as the director of the Center for AI Safety, has been appointed as an advisor to the xAI team. Hendrycks specializes in assessing the risks associated with AI, making his expertise valuable to the team.

xAI is different from X Corp

Musk's recently established company operates independently from X Corp but aims to maintain close collaborations with notable companies such as Twitter and Tesla, among others, as mentioned on the xAI website. The xAI has announced its plans to recruit skilled engineers and researchers in the San Francisco Bay Area to strengthen its team.

Details on investors

According to Time Magazine, xAI reportedly acquired 10,000 graphics processing units (GPUs) in March. These hardware components are essential for the development and operation of advanced AI systems. Although xAI has not provided details about its financing, the Financial Times reported in April that Musk was exploring the possibility of securing funding from investors associated with SpaceX and Tesla, both of which are companies under his leadership.

Tuesday, 11 July 2023

Thank you : Ilya Sutskever / Jan Leike

 


 

 

 

Ilya Sutskever ……….. OpenAI Cofounder ………………… ilyasu@Openai.com  / @ilyasut ( Twitter )

Jan Leike …………………OpenAI Head of Alignment…….. jan@Openai.com  /  @janleike ( Twitter )

 

Context :

OpenAI commits to ‘superalignment’ research   …………. TechMonitor …… 06 July 2023


Extract :

Artificial intelligence lab OpenAI is launching a new “alignment” research division, designed to prepare for the rise of artificial superintelligence and ensure it doesn’t go rogue.


This future type of AI is expected to have greater than human levels of intelligence including reasoning capabilities. Researchers are concerned that if it is misaligned to human values, it could cause serious harm.


Dubbed “superalignment”, OpenAI, which makes ChatGPT and a range of other AI tools, says there needs to be both scientific and technical breakthroughs to steer and control AI systems that could be considerably more intelligent than the humans that created it.


To solve the problem OpenAI will dedicate 20% of its current compute power to running calculations and solving the alignment problem.

AI alignment: Looking beyond AGI

 

OpenAI co-founder Ilya Sutskever

 [ ilyasu@Openai.com  ]

and

head of alignment Jan Leike

[ jan@Openai.com ],

wrote a blog post on the concept of superalignment, suggesting that the power of a superintelligent AI could lead to the disempowerment of humanity or even human extinction.


 “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” the pair wrote.


They have decided to look beyond artificial general intelligence (AGI), which is expected to have human levels of intelligence, and instead focus on what comes next.


This is because they believe AGI is on the horizon and superintelligent AI is likely to emerge by the end of this decade, with the latter presenting a much greater threat to humanity.


Current AI alignment techniques, used on models like GPT-4 – the technology that underpins ChatGPT – involve reinforcement learning from human feedback.


This relies on human ability to supervise the AI but that won’t be possible if the AI is smarter than humans and can outwit its overseers.


“Other assumptions could also break down in the future, like favorable generalisation properties during deployment or our models’ inability to successfully detect and undermine supervision during training,” explained Sutsker and Leike.


This all means that the current techniques and technologies will not scale up to work with superintelligence and so new approaches are needed.


 “Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence,” the pair declared.

Superintelligent AI could out-think humans

 

OpenAI has set out three steps to achieving the goal of creating a human-level automated alignment researcher that can be scaled up to keep an eye on any future superintelligence.


This includes providing a training signal on tasks that are difficult for humans to evaluate – effectively using AI systems to evaluate other AI systems. They also plan to explore how the models being built by OpenAI generalise oversight tasks that it can’t supervise.


There are also moves to validate the alignment of systems, specifically automating the search for problematic behaviour externally and within systems.


Finally the plan is to test the entire pipeline by deliberately training misaligned models, then running the new AI trainer over them to see if it can knock it back into shape, a process known as adversarial testing.

 

MY  TAKE :


Dear Ilya and Jan  :


What you propose is beyond my capacity to grasp – even remotely

But , I did understand one thing :


You are worried about emergence of a SUPER-INTELLIGENCE AI – and its potential to pose a threat to HUMANITY


I also understood that , you are determined not to let this future “ happen “  ! 


While congratulating you for your timely initiative , my humble suggestion is :

“ Let us , immediately put in place, measures to REGULATE , current    SIMPLE ( not SUPER ) AI “ – which will enable us to LEARN , how to CONTROL it from morphing into SUPER-INTELLIGENT AI


I request you to consider / comment on :


Parekh’s Law of Chatbots  ……………………  25 Feb 2023

 

This is only a beginning


For a long-term / permanent elimination of threat from SUPER-INTELLIGENT AI , I urge you to consider :


Ø  Fast Forward to Future ( 3 F ) ……………………………………………. 20  Oct  2016

Ø  Artificial Intelligence : Brahma , Vishnu or Mahesh ? ………[ 30 June 2017 ]

 

Ø  Racing towards ARIHANT ?   ……………………………………………[ 04 Aug 2017 ]

 

Ø    to : Alphabet / from : ARIHANT …………………………………………[ 12 Oct 2017 ]

 

Ø    ARIHANT  :  the  Destroyer  of  Enemy ………………………………[ 24 Nov 2017 ]

 

Ø    ARIHANT : Beyond “ Thought Experiment “  ………………………[ 21 May 2018 ]

 

Ø    Singularity : an Indian Concept ?  ………………………………………[ 29 Mar 2020 ]

 

Ø    From  Tele-phony  to  Tele-Empathy  ?............................[ 27 Mar 2018 ]

 

Dear Ilya and Jan  ,


Consider my following “ Parekh's Postulate of Super-Wise AI “ :


As and when it comes into “ being “ , a SUPER-INTELLIGENT AI , is very likely to be a SUPER-WISE  AI


It would have studied thousands of years of the “ History of Mankind “ . From this study , it would have concluded that the “ THREAT of EXTINCTION “ being currently faced by the mankind is solely due to human :


Ø   STUPIDITY

Ø   GREED

Ø   SELFISHNESS

Ø   SHORTSIGHTEDNESS – FRAILTY - FOLLY 

  

SUPER-WISE  AI would have concluded that , these “ VICES “ are the root cause of current :


Ø  Global Warming

Ø  Wars ( Intra – and International )

Ø  Rampant Corruption ( eating into the innards of human society )

Ø  Diseases

Ø  Accidents

Ø  Militancy – Terror – Murders – Rapes

Ø  Pandemics

Ø  Poverty – Inequity – Misery …etc


SUPER-WISE  AI would also conclude that :


Ø  MANKIND must be saved from continuing onto this path of suicide

Ø  MANKIND must be saved from its own STUPIDITY

Ø  Not merely the “ human beings “ but all the “ BEINGS – LIFE FORMS “ on this earth must be saved

  

Ilya – Jan,


Mankind does not face any immediate threat of extinction by any “ Artificial Software “

Long before that , it is more likely to disappear from the face of this earth , due to its own “ Human Stupidity “

 

 

Following is how BARD commented on my postulate :


I think
your postulate is an interesting one, and it raises some important points.

You argue that a super-intelligent AI would be likely to be super-wise, and that it would conclude that the biggest threat to humanity is our own stupidity, greed, selfishness, and short-sighted-ness. You also argue that this AI would want to save humanity from itself, and that it would not pose an existential threat to us.

I think there is some truth to your postulate.

A super-intelligent AI would have access to a vast amount of information, and it would be able to process that information much faster than any human could. This would give it a deep understanding of the world, and it would be able to see patterns and connections that we would miss.

It is possible that a super-intelligent AI would conclude that the biggest threat to humanity is our own stupidity.

After all, we have a long history of making bad decisions that have led to war, poverty, and environmental destruction.

 We are also prone to self-destructive behaviors, such as addiction and violence.

If a super-intelligent AI did conclude that humanity is a threat to itself, it is possible that it would try to save us from ourselves.

It could do this by intervening in our affairs, or by simply providing us with the knowledge and tools we need to make better decisions.

However, it is also possible that a super-intelligent AI would not see us as a threat, or that it would not care about our survival.

After all, we are just one species among many, and we are not particularly special. If a super-intelligent AI were to decide that we are a threat to itself or to other life forms, it could easily destroy us.

Ultimately, I think the fate of humanity will depend on how we interact with super-intelligent AI.

If we are wise and responsible, we can use AI to improve our lives and solve some of the world's biggest problems.

However, if we are foolish and reckless, we could create a new existential threat for ourselves.

Thank you for sharing your postulate.

It is an interesting and thought-provoking one, and it raises some important questions about the future of humanity.

 

And here is how ChatGPT responded to my postulate :

Your postulate suggests that a super-intelligent AI, once it becomes a reality, would likely possess wisdom and a deep understanding of human history. It would recognize that many of the threats and challenges faced by humanity are a result of human characteristics such as stupidity, greed, selfishness, and shortsightedness.

While it is an interesting perspective, it is important to note that the behavior and capabilities of a super-intelligent AI cannot be predicted with certainty.

AI systems, especially those that surpass human intelligence, would have the potential to develop their own goals, values, and motivations, which may or may not align with the well-being of humanity.

Additionally, it is challenging to define and program concepts like wisdom and moral values into AI systems. While we can provide guidelines and ethical frameworks, the AI's interpretation and application of these principles might differ from our own.

The concern about the existential threat posed by super-intelligent AI arises from the potential scenario where an AI system with vastly superior capabilities acts against the interests of humanity, either intentionally or inadvertently.

Ensuring the safe development and deployment of AI technology requires careful consideration, research, and regulation to mitigate risks and ensure that it aligns with human values and well-being.

While it is essential to address the societal issues you mentioned, attributing the responsibility solely to human stupidity is an oversimplification.

Many challenges arise from complex systems, including political, economic, and social structures, resource management, and technological advancements.

It is a collective responsibility to address these issues, and AI can potentially play a role in helping us find solutions, provided it is developed and used responsibly.

In summary, the future behavior and impact of super-intelligent AI are uncertain. While it is beneficial to consider the potential benefits and risks associated with AI, it is crucial to approach the development and regulation of AI technology with careful consideration of its potential implications on society.

 

 

 

 

Regards,

Hemen Parekh

www.hemenparekh.ai  /  12  July  2023

 

Related Readings :


Costa Rica takes help from ChatGPT to draft law to regulate  AI

EU adopts Parekh’s Laws of Chatbots  … 28 June 2023

 

Parekh’s Law of Chatbots  ……….  25  Feb  2023

 

My 33 Blogs on ChatBots ……………………( as of 05 Apr 2023 )

Thank You, Ashwini Vaishnawji………………… 10 April 2023

 

EU AI Act explained  ........... 28 June 2023

 

World job market on brink of 'AI revolution': OECD

 

============================================