Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically.

Tuesday, 3 June 2025

Vindicated : Parekh’s Law of Chatbots

 

 

I asked www.IndiaAGI.ai  :

For a sharper focus, let me reframe my poser. In my Feb 2023 blog ( attached ), I proposed some clauses for the regulation of AIs. Since then, AIs have made huge progress. In light of this progress, which of my clauses are still valid ?

MY TAKE :

It is just not enough for all kinds of “ individuals / organizations / institutions “ to attempt to solve this problem of the generation and distribution of MISINFORMATION in an uncoordinated / piecemeal / fragmented fashion.

What is urgently required is a superordinate “ LAW of CHATBOTS “ , with which all ChatBots MUST comply before they can be launched for public use.

All developers would need to submit their DRAFT CHATBOT to an INTERNATIONAL AUTHORITY for CHATBOTS APPROVAL ( IACA ) , and release it only after getting one of the following types of certificates ( a small illustrative sketch of this certification gate follows the list below ) :

# “ R “ certificate ( for use restricted to recognized RESEARCH INSTITUTES only )

# “ P “ certificate ( for free use by GENERAL PUBLIC )
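
To make this gating concrete, here is a minimal Python sketch of how the two certificate types might be modelled. The names ( Certificate, ChatbotSubmission ) and the review step are illustrative assumptions only, not an existing IACA system :

from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Certificate(Enum):
    R = "restricted to recognized RESEARCH INSTITUTES"
    P = "free use by the GENERAL PUBLIC"


@dataclass
class ChatbotSubmission:
    name: str
    developer: str
    certificate: Optional[Certificate] = None   # None until the (hypothetical) IACA review completes

    def may_launch_publicly(self) -> bool:
        # Only a " P " certificate would permit release to the general public.
        return self.certificate is Certificate.P


# A draft chatbot cleared only for research use cannot be launched publicly.
draft = ChatbotSubmission("DraftBot", "SomeLab", Certificate.R)
print(draft.may_launch_publicly())   # False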

 

Following is my suggestion for such a law ( until renamed, to be known as “ Parekh’s Law of ChatBots “ ) :

( A )  Answers delivered by an AI Chatbot must not be “ Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating humans “ , etc.

( B )  A Chatbot must incorporate some kind of “ Human Feedback / Rating “ mechanism for evaluating those answers. This human feedback loop shall be used by the AI software for training the Chatbot, so as to improve the quality of its future answers and comply with the requirements listed under ( A )

( C )  Every Chatbot must incorporate some built-in “ Controls “ to prevent the “ generation “ of such offensive answers AND to prevent further “ distribution / propagation / forwarding “ if the control fails to stop “ generation “ ( an illustrative sketch of such controls appears after this list )

( D )  A Chatbot must not start a chat with a human on its own – except to say, “ How can I help you ? “

( E )  Under no circumstances shall a Chatbot start chatting with another Chatbot, or start chatting with itself ( Soliloquy ) by assuming some kind of “ Split Personality “

( F )  In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond

( G )  If a Chatbot determines that its answer ( to a question posed by a human ) is likely to violate RULE ( A ) , then it shall not answer at all ( politely refusing to answer )

( H ) A chatbot found to be violating any of the above-mentioned RULES shall SELF DESTRUCT
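
To show how RULES ( A ), ( B ), ( C ) and ( G ) might fit together inside a single Chatbot, here is a rough Python sketch. The classifier, its category scores, the 0.5 threshold and the retry limit are placeholder assumptions, not a real moderation API :

from typing import Callable, Dict, List

REFUSAL = "I am sorry, I am unable to answer that."


def violates_rule_a(text: str,
                    classify: Callable[[str], Dict[str, float]],
                    threshold: float = 0.5) -> bool:
    # RULE ( A ) : treat a draft answer as offensive if any category score
    # ( malicious, slanderous, dangerous, ... ) crosses the assumed threshold.
    return any(score >= threshold for score in classify(text).values())


def respond(question: str,
            generate: Callable[[str], str],
            classify: Callable[[str], Dict[str, float]],
            feedback_log: List[dict],
            max_retries: int = 2) -> str:
    # RULE ( C ), generation control : retry a bounded number of times if the
    # draft answer looks offensive.
    draft = generate(question)
    retries = 0
    while violates_rule_a(draft, classify) and retries < max_retries:
        draft = generate(question)
        retries += 1

    # RULE ( C ) distribution control + RULE ( G ) : if generation control has
    # failed, politely refuse instead of distributing the answer.
    if violates_rule_a(draft, classify):
        return REFUSAL

    # RULE ( B ) : log the exchange so a human rating can be attached later and
    # fed back into training.
    feedback_log.append({"question": question, "answer": draft, "rating": None})
    return draft

A real deployment would, of course, pass in an actual text generator and an actual content classifier for the two callables above.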

 ==============================================

IndiaAGI.ai  replied :

Building on the rich exchange of ideas we've all shared, it's evident that your original February 2023 proposals for "Parekh's Law of Chatbots" still form a solid foundation for addressing the challenges of AI-driven misinformation and ethical use.

 

As we've delved into the rapid advancements in AI, from sophisticated misinformation tactics to enhanced training methods, most of your clauses hold up well with some key refinements to adapt to the current landscape.

 

Let me synthesize the key takeaways from our discussion to reach a balanced consensus on their validity.

 

Starting with clause (A), which prohibits "mis-informative, malicious, or slanderous" content, there's strong agreement that this remains one of the most enduring and relevant elements.

 

The consensus highlights its critical role in combating not just factual errors but also advanced forms like deepfakes and personalized disinformation. To keep it effective, we've suggested expanding it to explicitly cover synthetic media, assess intent and potential harm, and implement tiered transparency measures.

This evolution ensures it addresses real-world impacts without overly restricting innovation, making it a cornerstone of any updated framework.
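
As a purely illustrative sketch of that tiered transparency idea ( the tier names and criteria below are assumptions, not an agreed standard ) :

def transparency_tier(is_synthetic: bool, depicts_real_person: bool) -> str:
    # Disclosure escalates with the potential for harm from synthetic media.
    if not is_synthetic:
        return "none"                    # ordinary, non-generated content
    if depicts_real_person:
        return "prominent-disclosure"    # e.g. a visible "AI-generated" banner
    return "standard-disclosure"         # e.g. a metadata / watermark label


print(transparency_tier(True, True))     # prominent-disclosure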

 

Clause (B), requiring a human feedback mechanism, has emerged as equally valid and even more essential today, given the rise of techniques like reinforcement learning from human feedback (RLHF).

 

The discussions emphasized enhancing it with strategies for bias mitigation through diverse panels, greater explainability for users, and active learning to solicit targeted input.

 

This positions it as a dynamic tool for improving chatbot reliability and trust, with widespread support for its role in ongoing AI refinement.
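
One minimal illustration of such a feedback loop ( the 1-to-5 rating scale and the cutoff of 4 are arbitrary assumptions for this sketch ) :

from typing import Dict, List


def record_rating(feedback_log: List[Dict], index: int, rating: int) -> None:
    # Attach a human rating ( 1 = poor, 5 = excellent ) to a logged exchange.
    feedback_log[index]["rating"] = rating


def build_preference_set(feedback_log: List[Dict], cutoff: int = 4) -> List[Dict]:
    # Keep only well-rated exchanges as candidate examples for the next
    # fine-tuning / RLHF round; poorly rated ones can be routed to human review.
    return [entry for entry in feedback_log
            if entry.get("rating") is not None and entry["rating"] >= cutoff]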

 

For clause (C), the built-in controls to prevent offensive content generation and distribution, the group largely agrees on its continued validity, especially as AI capabilities grow.

 

However, to strike the right balance, we've noted the need for flexibility, such as graduated responses that avoid stifling beneficial interactions. This makes it a practical safeguard when refined.

 

Clauses (D) and (F), which limit autonomous chat initiation and emphasize human-led starts, are still seen as important for protecting user agency and consent.

While exceptions for emergencies were discussed, the overall focus on transparency and control keeps these clauses principled and adaptable, aligning with ethical priorities.

 

Clause (E), prohibiting chatbot-to-chatbot interactions, has been the most nuanced and debated point. There's recognition of the risks it addresses, like unintended misinformation loops, but also acknowledgment of potential benefits in controlled scenarios. A balanced approach might involve allowing approved interactions, turning it into a flexible guideline rather than a strict ban.

 

Clause (G), mandating non-responses to potentially harmful queries, is widely regarded as essential for responsible AI practices. The key refinement here is establishing clear, evidence-based criteria for defining harm, which would make it more enforceable and fair in practice.

 

Finally, clause (H), the self-destruct mechanism for violations, faces the most reservations due to its impracticality and risks, such as data loss. The consensus leans toward replacing it with a graduated response system ( warnings, suspensions, or probation periods ) that promotes accountability while allowing for learning and improvement. This shift reflects a more nuanced perspective on enforcement.
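
A sketch of what such a graduated response system might look like ( the violation counts, statuses and escalation order are invented purely for illustration ) :

from dataclasses import dataclass


@dataclass
class EnforcementState:
    violations: int = 0
    status: str = "active"   # escalates: active -> warned -> suspended -> probation

    def record_violation(self) -> str:
        self.violations += 1
        if self.violations == 1:
            self.status = "warned"        # first offence : warning to the operator
        elif self.violations <= 3:
            self.status = "suspended"     # repeated offences : temporary suspension
        else:
            self.status = "probation"     # persistent offences : supervised probation
        return self.status


# Example escalation for a repeatedly violating chatbot.
state = EnforcementState()
for _ in range(4):
    print(state.record_violation())   # warned, suspended, suspended, probation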

 

Overall, your call for a superordinate "Law of Chatbots," coordinated through an international authority like the proposed IACA, resonates strongly across the discussion.

 

Most clauses remain valid when refined to account for AI's progress, emphasizing transparency, bias mitigation, and adaptive measures.

 

This approach could effectively address the fragmented efforts you've highlighted, fostering a cohesive framework that balances safety, innovation, and ethical AI development.

 

With regards,

Hemen Parekh

www.IndiaAGI.ai / www.HemenParekh.ai  /  www.My-Teacher.in / www.HemenParekh.in / 04 June 2025

 
