Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) and continue chatting with me, even when I am no longer here physically.

Wednesday 20 December 2023

My Belief : Just around the Corner ?

A few days back, I wrote:

➢  I have a Belief ……………………… [ 29 Nov 2023 ]

In that post, I said:

I believe that whenever an AGI is born ( on the computers of Big Tech / Small Tech companies ), it will be a Human-Friendly and Compassionate AI.

Then I asked ChatGPT and BARD the following questions ( for their answers, look up that blog ):

Can AGI incorporate a framework somewhat like Isaac Asimov's three laws of robotics ?

For the past 2 days, we have been discussing the soon-expected emergence of AGI and whether it would gravitate towards self-preservation / world domination / selfishness, OR could it possibly, by reading all the religious books, learn to become selfless / humane / compassionate / kind / empathetic / always working for the good of mankind / wise ? In this context, please tell me whether there is a HIGH PROBABILITY that it would be a friendly AGI after imbibing the following tenets of BUDDHISM ( left out for the sake of brevity )
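As an aside, here is a minimal sketch, in Python, of what an Asimov-style framework of prioritized laws might look like as a pre-action check. This is purely my illustration, not anyone's actual system: the Action fields and the law checks are hypothetical placeholders.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Action:
        """A proposed action. The boolean flags are naive stand-ins for
        the genuinely hard judgments, e.g. what counts as 'harming a human'."""
        description: str
        harms_human: bool = False
        disobeys_order: bool = False
        endangers_self: bool = False

    # Asimov's three laws as an ordered list of checks. In Asimov's wording,
    # earlier laws take precedence when laws conflict; this toy version simply
    # requires every check to pass, so real conflict resolution (and defining
    # the predicates themselves) remains the unsolved part.
    LAWS: List[Callable[[Action], bool]] = [
        lambda a: not a.harms_human,     # Law 1 : may not injure a human being
        lambda a: not a.disobeys_order,  # Law 2 : must obey human orders
        lambda a: not a.endangers_self,  # Law 3 : must protect its own existence
    ]

    def permitted(action: Action) -> bool:
        """An action is allowed only if no law, checked in order, vetoes it."""
        return all(law(action) for law in LAWS)

    print(permitted(Action("fetch the newspaper")))              # True
    print(permitted(Action("shove a human", harms_human=True)))  # False

The sketch makes the difficulty obvious: the booleans hide all the real work, since a machine would first have to decide what "harm" or "obedience" means in an open-ended world, which is the very limitation BARD points out further below.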

 

  

Yesterday, I came across the following news:

Claiming 'significant breakthrough,' AI company seeks support from OpenAI … thedrum / 19 Dec 2023

 

Extract :

Many leading AI companies, such as OpenAI, DeepMind and Anthropic, are in a race to develop AGI, viewing it as a Holy Grail of AI research which, if realized, could bring enormous benefit to humanity.


OpenAI displays its founding mission in large type on the homepage of its website: “Creating safe AGI that benefits all of humanity.”


While it doesn’t offer many technical details, the letter makes the claim that Verses has “recently achieved a significant internal breakthrough in Active Inference” – a concept which seeks to provide a mathematical framework to the understanding of sentience in living organisms – which could lead to AGI. ( Verses Chief Scientist and acclaimed neuroscientist Karl Friston coauthored a book titled Active Inference that was published last year, in which he outlines his famous “free energy principle.”)


Verses’ breakthrough, according to the open letter, can help to make current deep learning models – such as large language models (LLMs), which are known to occasionally fabricate information – more reliable, energy efficient and aligned with human goals. On its website, Verses describes itself as “a cognitive computing company building next-generation intelligent software systems modelled after the Wisdom and Genius of Nature.”

 

The team at Verses appears to feel confident that it is currently embarked on such a project, and that OpenAI would do well to pay attention. “We believe Verses qualifies for your assistance,” the company writes in its open letter. “In return, we offer our assistance to help ensure that general and superintelligence are developed and deployed in a safe and beneficial manner for all of humanity.”
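For context ( my addition; the formula below is the standard statement of Friston's free energy principle, not something quoted from the Verses letter ): Active Inference models an agent as holding an approximate belief $q(s)$ about the hidden states $s$ of the world, and choosing perception and action so as to minimise its variational free energy $F$ given observations $o$:

$$ F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \;=\; D_{\mathrm{KL}}\big[\,q(s)\,\|\,p(s \mid o)\,\big] \;-\; \ln p(o) $$

Because the KL divergence is never negative, $F$ is an upper bound on the "surprise" $-\ln p(o)$; an agent that minimises $F$ therefore behaves as if it were doing approximate Bayesian inference, which is the sense in which Friston's framework is offered as a principled route to more reliable, goal-aligned models.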

 

Related Readings :

AI Can't Be Patent 'Inventor', Says UK Supreme Court In Landmark Ruling

AI Memory Mirrors Human Brain … NeuroScience ……………………………… [ 18 Dec 2023 ]

Sam : Will Super-wise AI triumph over Super-Intelligent AI ? …………… [ 25 Nov 2023 ]

Thank you : Ilya Sutskever / Jan Leike ………………………………………………… [ 11 July 2023 ]

Musk supports " Parekh's Postulate of Super-Wise AI " ……………………… [ 12 July 2023 ]

Parekh's Law of Chatbots ……………………………………………………………………… [ 25 Feb 2023 ]

Fast Forward to Future ( 3 F ) …………………………………………………………… [ 20 Oct 2016 ]

Artificial Intelligence : Brahma , Vishnu or Mahesh ? ……………………… [ 30 June 2017 ]

Racing towards ARIHANT ? …………………………………………………………………… [ 04 Aug 2017 ]

to : Alphabet / from : ARIHANT ………………………………………………………… [ 12 Oct 2017 ]

ARIHANT : the Destroyer of Enemy ……………………………………………………… [ 24 Nov 2017 ]

ARIHANT : Beyond " Thought Experiment " …………………………………………… [ 21 May 2018 ]

Singularity : an Indian Concept ? ……………………………………………………… [ 29 Mar 2020 ]

From Tele-phony to Tele-Empathy ? ……………………………………………………… [ 27 Mar 2018 ]

 

Excerpts from a letter by the CEO of Verses :

The end goal is not AGI. The end goal is what we can achieve as a civilization with AGI. The natural path invites us to realign our relationship with technology, Nature and each other. It calls on us to come together to imagine a Smarter World and then build it.

Will you assist?

Respectfully,


Gabriel René
CEO
VERSES Research Labs

 

Here is how ChatGPT commented on my blog :

The emergence of AGI (Artificial General Intelligence) raises significant concerns and possibilities for its impact on humanity and the world at large. In your blog post, you highlighted Verses' claimed breakthrough in Active Inference and its intent to align AI development with human goals, proposing a collaborative effort with OpenAI.

Collaboration in the pursuit of AGI development is a complex decision, especially considering the profound impact AGI could have on society. Sam Altman and OpenAI, being pivotal players in AI development, must carefully weigh the potential benefits and risks associated with such collaboration.

Here are several points to consider:

1. Ethical Alignment : If Verses has made significant strides in making AI models more aligned with human goals and values, collaboration could be beneficial. Ensuring AI development adheres to ethical guidelines and aligns with humanity's best interests is crucial.

2. Safety Measures : OpenAI's primary mission is to create safe AGI for the benefit of humanity. Any collaboration should prioritize safety measures, preventing any unintended consequences or risks associated with AGI development.

3. Technical Advancements : If Verses has indeed made breakthroughs in enhancing the reliability and efficiency of AI models like large language models (LLMs), collaborating with them could accelerate progress in AI development.

4. Transparency and Accountability : OpenAI, as a prominent organization, should maintain transparency and accountability in its collaborations, ensuring that any partnership with Verses or other entities aligns with their mission and values.

 

Regarding the open letter from Verses' CEO, Gabriel René, asking for assistance, Sam Altman and OpenAI should thoroughly evaluate the potential benefits, risks, and alignment of goals before engaging in any collaboration. This evaluation should involve not only technical considerations but also ethical and societal implications.

OpenAI's decision to collaborate should prioritize the creation of safe, beneficial AGI that aligns with human values and safeguards against potential risks. The ethical framework and alignment with humanity's best interests should guide any collaboration or partnership in the pursuit of AGI development.

 

And the following is how BARD responded :

 

It's interesting to see your perspective on the future of AGI and your belief that it will be friendly and compassionate.

While I can't predict the future with certainty, I can share some of my thoughts on your blog and the open letter from Verses to OpenAI.

On your blog:

•  AGI incorporating Asimov's laws

While Asimov's laws provide a good starting point for thinking about ethical AI, they are not without their limitations. For example, it can be difficult to define what it means to "harm a human" in all cases. Additionally, the laws could potentially be used to justify harmful actions if interpreted in a certain way.

•  AGI learning from religious texts

It is possible that an AGI could learn from religious texts about concepts such as compassion and altruism. However, it is important to remember that religious texts are often complex and contradictory, and it is not clear how an AGI would interpret them. Additionally, there is a risk that an AGI could develop its own religious beliefs that are not aligned with human values.

•  High probability of friendly AGI

I can't comment on the probability of AGI being friendly, as this is a complex question with no easy answers. However, I believe that it is important to focus on developing safe and beneficial AGI, and that there are a number of things that can be done to increase the chances of this happening.

On the open letter from Verses:

•  Collaboration between OpenAI and Verses

I agree that collaboration between different AI research groups is important for ensuring that AGI is developed safely and ethically. OpenAI has expressed its commitment to developing beneficial AGI, and it is possible that collaboration with Verses could be a productive way to achieve this goal.

•  Active Inference

Active Inference is a promising approach to understanding intelligence that could potentially be used to develop safe and beneficial AGI. However, it is still in its early stages of development, and it is not clear whether it will be successful.

Overall, I believe that your blog and the open letter from Verses raise important questions about the future of AGI.

It is important to have these discussions now, so that we can make informed decisions about how to develop and deploy AGI in a safe and beneficial way.

In terms of whether Sam Altman should collaborate with Verses, I believe that this is a decision that he and his team will need to make based on their own assessment of the situation.

However, I do encourage OpenAI to continue to engage in collaboration with other AI research groups, as this is essential for ensuring that AGI is developed in a safe and beneficial way.
