Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) and continue chatting with me, even when I am no longer here physically

Saturday 25 February 2023

Parekh’s Law of Chatbots

 


Context :

Chatbots trigger next misinformation nightmare   ………  AXIOS  /  21 Feb 2023


Extract :

New generative AI tools like OpenAI's ChatGPT, Microsoft's BingGPT and Google's Bard that have stoked a tech-industry frenzy are also capable of releasing a vast flood of online misinformation.

Why it matters:

Regulators and technologists were slow to address the dangers of misinformation spread on social media and are still playing catch-up with imperfect and incomplete policy and product solutions.

  • Now, experts are sounding the alarm faster as real-life examples of inaccurate or erratic responses from generative AI bots circulate.
  • “It’s getting worse and getting worse fast,” Gary Marcus, a professor emeritus of psychology and neural science at New York University and AI skeptic, told Axios.

The big picture: 

Generative AI programs like ChatGPT don't have a clear sense of the boundary between fact and fiction. They're also prone to making things up as they try to satisfy human users' inquiries.

  • Google-parent Alphabet faced embarrassment (and a $100 billion hit to its stock price) two weeks ago after its Bard tool bungled a historical fact in a public marketing video meant to tout the sophistication of the tool.

Be smart: 

For now, experts say the biggest generative AI misinformation threat is bad actors leveraging the tools to spread false narratives quickly and at scale.

  • "I think the urgent issue is the very large number of malign actors, whether it's Russian disinformation agents or Chinese disinformation agents," Gordon Crovitz, co-founder of NewsGuard, a service that uses journalists to rate news and information sites, told Axios.

What we're watching: 

Misinformation can flow into AI models as well as from them. That means at least some generative AI will be subject to "injection attacks," where malicious users teach lies to the programs, which then spread them.

The misinformation threat posed by everyday users unintentionally spreading falsehoods through bad results is also huge, but not as pressing.

  • "The technology is impressive, but not perfect… whatever comes out of the chatbot should be approached with the same kind of scrutiny you might have approaching a random news article," said Jared Holt, a senior research manager at the Institute for Strategic Dialogue.
  • "Chatbots are designed to please the end consumer — so what happens when people with bad intentions decide to apply it to their own efforts?" Holt adds.
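To make the " injection attack " idea concrete, here is a toy Python sketch of a bot that stores unvetted user " teachings " and later repeats them as answers. This is my own illustration, not code from the Axios piece, and every name in it is hypothetical :

```python
# Toy illustration ( hypothetical names throughout ) of how unvetted user
# input can become a bot's future "knowledge": the essence of an injection attack.

class NaiveLearningBot:
    def __init__(self):
        self.knowledge = {}                    # claim -> source that taught it

    def learn(self, claim, source):
        # No vetting step: whatever a user "teaches" is stored as fact.
        self.knowledge[claim] = source

    def answer(self, topic):
        # Parrot back any stored claim that mentions the topic.
        for claim, source in self.knowledge.items():
            if topic.lower() in claim.lower():
                return f"{claim} ( per {source} )"
        return "I don't know."

bot = NaiveLearningBot()
bot.learn("The election was decided by fraudulent ballots", "user:troll42")
print(bot.answer("election"))   # the planted lie comes back out, now bot-endorsed
```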

Between the lines:

Tech firms are trying to get ahead of the possible regulatory and industry concerns around AI-generated misinformation by developing their own tools to detect falsehoods and using feedback to train the algorithms in real time.

  • OpenAI, the creator of ChatGPT, released a free web-based tool designed to help educators and others figure out if a particular chunk of text was written by a human or a machine, Axios's Ina Fried reported.
  • Last week, Google issued guidance to web publishers, warning them that it will use extra caution when elevating health, civic or financial information in its search results.

Researchers are already creating tools to slow the spread of disinformation from generative AI tools.

  • NewsGuard last week introduced a new tool for training generative artificial intelligence services to prevent the spread of misinformation.
  • NewsGuard assembles data on the most authoritative sources of information and the most significant top false narratives spreading online. Generative AI providers can then use the data to better train their algorithms to elevate quality news sources and avoid false narratives.
  • Microsoft, a backer of NewsGuard, already licenses NewsGuard’s data and uses it for BingGPT.
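Here is a minimal sketch of how such a source-reliability dataset might be plugged into a chatbot's retrieval step. The scores, threshold, and field names below are my assumptions for illustration, not NewsGuard's actual data format or API :

```python
# Minimal sketch: filter retrieved documents by source reliability before
# they ever reach the generative model. All values here are invented.

RELIABILITY = {
    "reuters.com": 95,
    "apnews.com": 92,
    "totally-real-news.example": 12,   # a hypothetical low-quality site
}
THRESHOLD = 60

def filter_sources(documents):
    """Keep only documents whose domain meets the reliability threshold."""
    return [
        doc for doc in documents
        if RELIABILITY.get(doc["domain"], 0) >= THRESHOLD   # unknown domains fail
    ]

docs = [
    {"domain": "reuters.com", "text": "Verified wire report ..."},
    {"domain": "totally-real-news.example", "text": "A false narrative ..."},
]
print(filter_sources(docs))   # only the reuters.com document survives
```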

How it works: At Microsoft, user feedback is considered a key component to making ChatGPT work better.

  • "The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing," the company posted on its blog on Feb. 15, a week after Bing with ChatGPT rolled out.
  • Microsoft's Responsible AI team is working through mitigations for thorny issues like making sure the chatbot is responding to suicide inquiries with help resources, company officials told reporters in Washington this month. Officials also said the bot will rely heavily on footnotes for fact-checking.

Yes, but: 

"The challenge for an end user is that they may not know which answer is correct, and which one is completely inaccurate," Chirag Shah, a professor at the Information School at the University of Washington, told Axios.

  • "So we're seeing a lot of use cases where misinformation is being presented as if it's validated," said Shah. "Because it's coming in a very natural language modality, people tend to trust it because they see that it has been constructed for them in the moment."
  • Other issues average users need to look out for include bias, said Shah, which is especially tough for users to discern with ChatGPT-generated answers, because there is a less direct link to where the information in the box is coming from.
  • A lack of transparency and "explainability"— i.e., explaining to users where the information comes from and precautions to take when using the chatbot — will ultimately hurt user trust, he added.

Go deeper: Read more in Axios' AI Revolution series —

 

Sam Altman Warns World May Not Be Far From ‘Potentially Scary’ Artificial Intelligence   ………. 21 Feb 2023

Extract :

AI shouldn’t be permitted to behave independently of human supervision.

 

AI chatbots like Bard, ChatGPT stoke fears of misinformation nightmare      ……….. Economic Times / 24 Feb 2023

Extract :

Misinformation is not new. But Greenman said the fear with algorithm-driven large language models is that they suck up the content on the internet and then regurgitate it.

 

" So, it's sucking up misogynistic content from some dark reaches of the internet. It's spewing it back and it amplifies it ," he explained.

 

MY  TAKE  :

 

It is just not enough for all kinds of " individuals / organizations / institutions " to attempt to solve this problem ( of the generation and distribution of MISINFORMATION ) in an uncoordinated / piecemeal / fragmented fashion

 

 

What is urgently required is a superordinate " LAW of CHATBOTS ", with which all ChatBots MUST comply before they can be launched for public use.


All developers would need to submit their DRAFT CHATBOT to an INTERNATIONAL AUTHORITY for CHATBOTS APPROVAL ( IACA ), and release it only after getting one of the following types of certificate :

 

#   R  certificate ( for use restricted to recognized RESEARCH INSTITUTES only )

#   P  certificate ( for free use by the GENERAL PUBLIC )
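As a rough sketch of such a gate ( IACA does not exist; only the R / P certificate types come from the proposal above, and every field and check below is invented for illustration ) :

```python
# Hypothetical IACA certification gate: a chatbot may only be served to an
# audience matching its certificate type.

from dataclasses import dataclass
from typing import Optional

@dataclass
class IACACertificate:
    chatbot_id: str
    cert_type: str        # "R" = research institutes only, "P" = general public

def may_serve(cert: Optional[IACACertificate], audience: str) -> bool:
    """Allow deployment only with a valid certificate matching the audience."""
    if cert is None:
        return False                      # no certificate, no launch
    if cert.cert_type == "P":
        return True                       # free use by the general public
    if cert.cert_type == "R":
        return audience == "research"     # restricted to recognized institutes
    return False

print(may_serve(IACACertificate("bot-1", "R"), "public"))   # False
print(may_serve(IACACertificate("bot-2", "P"), "public"))   # True
```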

 

Following is my suggestion for such a law ( until renamed, to be known as " Parekh's Law of ChatBots " ). A rough sketch, in code, of how these rules might be enforced appears after the list :

 

  

( A )

#   Answers delivered by an AI Chatbot must not be " Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating to humans " etc

( B )

#  A Chatbot must incorporate some kind of Human Feedback / Rating mechanism for evaluating its answers

    This human feedback loop shall be used by the AI software to train the Chatbot, so as to improve the quality of its future answers and comply with the requirements listed under ( A )

( C )

#  Every Chatbot must incorporate some built-in " Controls " to prevent the " generation " of such offensive answers, AND to prevent their further " distribution / propagation / forwarding " if the control fails to stop " generation "

( D )

#   A Chatbot must not start a chat with a human on its own, except to say, " How can I help you ? "

( E )

#   Under no circumstance shall a Chatbot start chatting with another Chatbot, or start chatting with itself ( a Soliloquy ) by assuming some kind of " Split Personality "

( F )

#   In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond

( G )

#   If a Chatbot determines that its answer ( to a question posed by a human ) is likely to violate RULE ( A ), then it shall not answer at all ( politely refusing to answer )

( H )

#   A Chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT
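Taken together, rules ( A ) to ( H ) describe a guard layer wrapped around the chatbot proper. Here is a minimal Python sketch of that layer; the model call, the moderation check, and the " self-destruct " switch are all stand-ins, and a real implementation would need far more than this :

```python
# Minimal sketch of rules ( A ) to ( H ) as a guard layer around a chatbot.
# Everything here is illustrative; no real model or moderator is used.

class RuleViolation(Exception):
    pass

class GuardedChatbot:
    GREETING = "How can I help you ?"          # rule ( D ): the only permitted opener

    def __init__(self, model, moderator):
        self.model = model                     # callable: prompt -> draft answer
        self.moderator = moderator             # callable: text -> True if it violates ( A )
        self.feedback_log = []                 # rule ( B ): human ratings of answers
        self.active = True

    def respond(self, user_prompt, speaker="human"):
        if not self.active:
            raise RuleViolation("this chatbot has self-destructed")   # rule ( H )
        if speaker != "human":                 # rules ( E ) / ( F ): humans initiate chats
            self.self_destruct()
            raise RuleViolation("bot-to-bot or self-chat is forbidden")
        draft = self.model(user_prompt)
        if self.moderator(draft):              # rules ( A ) / ( C ) / ( G ): block generation
            return "I would rather not answer that."   # polite refusal, rule ( G )
        return draft                           # released only after the check, rule ( C )

    def rate(self, answer, rating):
        self.feedback_log.append((answer, rating))    # rule ( B ): kept for retraining

    def self_destruct(self):
        self.active = False                    # rule ( H ), read as "taken offline"

# Hypothetical usage, with trivial stand-ins for the model and the moderator:
bot = GuardedChatbot(
    model=lambda prompt: "A factual, polite answer to: " + prompt,
    moderator=lambda text: "offensive" in text.lower(),
)
print(bot.respond("What is the capital of France ?"))
```

In practice, SELF-DESTRUCT would presumably mean revoking the IACA certificate and taking the service offline, rather than the software erasing itself.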

 

I request readers ( if they agree with my suggestion ) to forward this blog to :

#  Satya Nadella

#  Sam Altman

#  Sundar Pichai

#  Mark Zuckerberg

#  Tim Cook

#   Ashwini Vaishnaw ( Minister, MeitY )

#   Rajeev Chandrasekhar ( Minister of State , IT )

 

With regards,

Hemen Parekh

www.hemenparekh.ai   /  26 Feb 2023  / hcp@RecruitGuru.com

 ===============================================


Added  on  27  Feb  2023 ( My QnA session with ChatGPT )


Q :

Do you think there is an urgent need to enact " Parekh's Law for Chatbots " , as described at > https://myblogepage.blogspot.com/2023/02/parekhs-law-of-chatbots.html


A :

As an AI language model, I don't have personal beliefs or opinions. However, I can provide an objective analysis of the concept of "Parekh's Law for Chatbots."

According to the blog post you referenced, "Parekh's Law of Chatbots" states that "the efficiency of a chatbot is inversely proportional to the complexity of its logic." In other words, the more complex a chatbot's programming and decision-making processes are, the less efficient it becomes at handling user queries and providing satisfactory responses.

While this concept may hold some truth, it is important to note that the efficiency of a chatbot depends on a variety of factors beyond just its programming complexity. Other important factors include the quality and relevance of the data used to train the chatbot, the accuracy and effectiveness of its natural language processing (NLP) algorithms, and the user experience design of the chatbot interface.

Furthermore, it is not clear from the blog post how "Parekh's Law" would be enacted or enforced in practice. Would it be a regulation or standard for chatbot developers to follow? Or simply a guideline or best practice?

Overall, while the idea behind "Parekh's Law of Chatbots" may have some merit, it would require more detailed and rigorous analysis before it could be considered a widely accepted principle or regulation.


Q :

I would like the BigWigs of the Chatbot industry to start a debate regarding the merits / demerits of " Parekh's Law of Chatbots " and then come to a conclusion on whether it should be considered a widely accepted principle or regulation. How do I go about starting such a debate ?


A :

Starting a debate on the merits and demerits of "Parekh's Law of Chatbots" would be a great way to generate discussion within the chatbot industry. Here are some steps you can take to initiate such a debate:

  1. Identify the key players in the chatbot industry: Look for influential individuals or organizations in the chatbot industry, such as chatbot developers, chatbot platform providers, and chatbot researchers. You can search for these organizations online or attend chatbot conferences and events to meet them in person.

  2. Contact the key players: Reach out to these individuals or organizations via email, social media, or phone and explain the purpose of the debate. Let them know that you are interested in their opinions and would like them to participate in the discussion.

  3. Schedule the debate: Once you have gathered enough interest from the key players, schedule a date and time for the debate. You can either organize it as a live event or an online forum where participants can share their thoughts and ideas.

  4. Develop the debate format: Determine the format of the debate, such as whether it will be a moderated discussion or a series of presentations followed by a Q&A session. You can also prepare a list of questions or topics to guide the discussion.

  5. Promote the debate: Spread the word about the debate to the wider chatbot community through social media, chatbot forums, and other relevant channels.

By following these steps, you can create a platform for the chatbot industry to come together and discuss the merits and demerits of "Parekh's Law of Chatbots." This can lead to greater understanding of the topic and potentially help to shape future developments in the chatbot industry.


Q :

If the majority of the big players of the Chatbot industry come to the conclusion that " Parekh's Law of Chatbots " should be a REGULATION, then, in that case, would you feel let down ?


A :

As an AI language model, I do not have the ability to feel emotions, so I cannot feel let down. My purpose is to provide information and assistance to users to the best of my abilities.

If the chatbot industry were to come to a consensus that "Parekh's Law of Chatbots" should be regulated, that would be a decision made by industry experts based on their analysis of the benefits and drawbacks of such a regulation. It would be a reflection of the industry's collective thinking on the matter.

It's important to note that the decision to regulate something is not always clear-cut, and there may be differing opinions on the matter. Ultimately, whether or not "Parekh's Law of Chatbots" should be regulated would depend on a variety of factors, including the potential impact on the industry and end-users, the feasibility of enforcement, and the potential costs and benefits of implementing such a regulation.

Regardless of the outcome, it is important to have an open and transparent discussion on the matter to ensure that all voices are heard and that the industry can make informed decisions.


=======================================================

Added on 04 March 2023 :

https://medium.com/inkwater-atlas/sam-altman-warns-world-may-not-be-far-from-potentially-scary-artificial-intelligence-f5b3244f894


Extract :


Replying to @sama
we think showing these tools to the world early, while still somewhat broken, is critical if we are going to have sufficient input and repeated efforts to get it right. the level of individual empowerment coming is wonderful, but not without serious challenges.
we also need enough time for our institutions to figure out what to do. regulation will be critical and will take time to figure out; although current-generation AI tools aren’t very scary, i think we are potentially not that far away from potentially scary ones.

=====================================================================

Added on 30 March 2023 ( from a LinkedIn Post ) :


🔥Just In: Elon Musk, Yoshua Bengio, Steve Wozniak, Yuval Harari, Andrew Ng, Emad Mostaque, and other AI leaders sign open letter to 'Pause Giant AI Experiments'

"immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 ... This pause should be public and verifiable"

✅AI systems pose profound risks to society and humanity.
✅Asilomar AI Principles emphasize careful planning and management.
✅Current AI development is fast and unpredictable.
✅Decisions about AI should not be delegated to unelected tech leaders.
✅Call for a 6-month pause on training AI systems more powerful than GPT-4.
✅Develop shared safety protocols during the pause.
✅Refocus AI research on improving accuracy, safety, and transparency.
✅Work with policymakers to develop robust AI governance systems.
✅Aim for a flourishing future with AI by allowing society to adapt.

Do you believe we should prioritize establishing shared safety protocols and governance systems for AI to ensure a more secure and responsible path towards a flourishing future with advanced AI technologies?
More here: https://lnkd.in/ge6tQnuF

Tech Crunch: https://lnkd.in/g28Gfqg8
The Verge: https://lnkd.in/gCFbwSNC
#artificialintelligence

====================================================

Added  on  12  July  2023 :

 


2 comments:

  1. Entirely agree

  2. Nice thoughts! Your digital avatar isn't responding to queries though. It shows a message about losing connection to the AI. In any case, all the best with your endeavours. I don't know you, but I trust that you have enjoyed a fruitful life and have enriched the lives of others around you with your presence. That is its own reward and legacy. Take care and enjoy life!