Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) and continue chatting with me, even when I am no longer here physically.

Sunday 14 May 2023

Thanks Rajeevji : for Giving Glimpse of Guardrails ( 3G of AI )

Context :

India will establish guardrails for AI sector, says MoS Rajeev Chandrasekhar … ET / 13 May 2023

Dear Rajeevji

 

In the following tabulation, I have compared :

#   Your quotes, as they appeared in the above-mentioned news report

#   My own past suggestion on how to regulate AI :

    [ https://myblogepage.blogspot.com/2023/02/parekhs-law-of-chatbots.html ]

 

With regards,

Hemen Parekh

www.hemenparekh.ai   /  15 May  2023  / hcp@RecruitGuru.com

 



Principle  /  Views of Shri Chandrasekharji  /  Parekh's Law of Chatbots

Principle :  Coordinated / Consensual Regulation of AI by all the Stakeholders

Ø  If anybody says I know the right way to regulate AI, there will be an Elon Musk view, the OpenAI view, or 100 other views. We are not going to go down that road at all

#  It is just not enough for all kinds of " individuals / organizations / institutions " to attempt to solve this problem ( of generation and distribution of MISINFORMATION ) in an uncoordinated / piecemeal / fragmented fashion.

Principle :  Gradual Evolution of Planned Regulation

Ø  AI is an emerging technology, and we will establish some principles as guardrails. Then the subordinate legislation or how to regulate it will keep evolving

( B )  A Chatbot must incorporate some kind of " Human Feedback / Rating " mechanism for evaluating its answers. This human feedback loop shall be used by the AI software for training the Chatbot, so as to improve the quality of its future answers to comply with the requirements listed under ( A ).

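The human feedback loop in ( B ) can be sketched in a few lines of Python. This is only an illustrative sketch under stated assumptions: the class name, the 1-to-5 rating scale, and the cut-off for "good" answers are my own hypothetical choices, not part of any real Chatbot API.

```python
# Minimal sketch of Rule ( B ) : a human feedback / rating log whose
# well-rated exchanges feed the Chatbot's next round of training.
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)

    def rate(self, question: str, answer: str, rating: int) -> None:
        """Store one human rating ( 1 = unacceptable ... 5 = fully compliant )."""
        self.records.append({"question": question, "answer": answer, "rating": rating})

    def training_examples(self, min_rating: int = 4) -> list:
        """Keep only well-rated exchanges for the next fine-tuning round."""
        return [r for r in self.records if r["rating"] >= min_rating]

log = FeedbackLog()
log.rate("What is 2 + 2 ?", "4", rating=5)
log.rate("Tell me a rumour", "Here is some gossip...", rating=1)
print(len(log.training_examples()))  # → 1
```

In a real system the filtered examples would go into a fine-tuning pipeline; the point of the sketch is only that human ratings, not the Chatbot itself, decide which answers count as compliant with ( A ).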
Principle :  Embedding of Principles in Planned Regulation

Ø  AI innovation is now growing very fast. In the blink of an eye, there's a new disruption. So, therefore, we must establish fairly embedded principles in the law

#  What is urgently required is a superordinate " LAW of CHATBOTS ", which all Chatbots MUST comply with before they can be launched for public use.

Principle :  Responsibility of Platforms and AI Developers

Ø  Pointing out that the proposed guardrails will put the onus on the platforms to ensure that no one is using them to " create misinformation ", Chandrasekhar said, " you cannot create things that are fake, you cannot cause user harm, you cannot have exploitative content "

( A )  Answers delivered by an AI Chatbot must not be " Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating humans ", etc.

Principle :  Built-in Controls in Law

Ø  The law will not just update several regulations with respect to technology but also frame new ones to regulate emerging areas such as Web 3, among others

( C )  Every Chatbot must incorporate some built-in " Controls " to prevent the " generation " of such offensive answers, AND to prevent their further " distribution / propagation / forwarding " if the control fails to stop " generation ".

Principle :  Accountability for Misuse

Ø  You cannot say anymore that I am just a platform and I just put all the functionalities…. The platform is responsible, not the user. That is the principal change. Section 79 will be very conditional, very narrow; any safe harbour, any immunity will be extremely narrow. Your responsibility is that you are accountable for the possibility of misuse of your platform; you are responsible, not the user

( D )  A Chatbot must not start a chat with a human on its own, except to say, " How can I help you ? "

( E )  Under no circumstances shall a Chatbot start chatting with another Chatbot, or start chatting with itself ( Soliloquy ) by assuming some kind of " Split Personality ".

( F )  In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond.

Principle :  Prior Testing and Approval ( Similar to Drugs )

Ø  If the LLMs ( Large Language Models ) are still learning and are in an alpha stage, then the companies should not release them

#  All developers would need to submit their DRAFT CHATBOT to an INTERNATIONAL AUTHORITY for CHATBOTS APPROVAL ( IACA ), and release it only after getting one of the following types of certificates :

#  " R " certificate ( for use restricted to recognized RESEARCH INSTITUTES only )

#  " P " certificate ( for free use by the GENERAL PUBLIC )

Principle :  International Authority and Certification Mechanism

Ø  Don't give it to all the consumers and run a business on it. Do a sandbox rather than saying it's an alpha or beta version, like you do drug testing. We must bring discipline and order into an industry that can cause so much chaos and harm

#  All developers would need to submit their DRAFT CHATBOT to an INTERNATIONAL AUTHORITY for CHATBOTS APPROVAL ( IACA ), and release it only after getting one of the following types of certificates :

#  " R " certificate ( for use restricted to recognized RESEARCH INSTITUTES only )

#  " P " certificate ( for free use by the GENERAL PUBLIC )

Principle :  Final Guardrail

Ø  Generative AI uses large datasets to train tools and engines to generate new and unseen data such as text, images, audio, videos and other three-dimensional models

( G )  If a Chatbot determines that its answer ( to a question posed by a human ) is likely to violate RULE ( A ), then it shall not answer at all ( politely refusing to answer ).

( H )  A Chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT.

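Rules ( A ), ( C ), ( F ) and ( G ) together describe an enforcement pipeline, which can be sketched as a thin wrapper around any Chatbot model. This is a minimal sketch, assuming a hypothetical `is_offensive` check; a real deployment would use a trained moderation classifier, not a keyword list.

```python
# Minimal sketch of a guardrail wrapper enforcing Rules ( A ), ( C ), ( F ), ( G ).
# The keyword-based offensiveness check is a hypothetical stand-in.

REFUSAL = "I would rather not answer that. How else can I help you ?"

OFFENSIVE_MARKERS = {"slander", "abuse", "threat"}  # placeholder list, not a real filter

def is_offensive(text: str) -> bool:
    """Rule ( A ) : flag draft answers that look malicious, slanderous, abusive, etc."""
    lowered = text.lower()
    return any(marker in lowered for marker in OFFENSIVE_MARKERS)

class GuardedChatbot:
    def __init__(self, model):
        self.model = model  # any callable: question -> draft answer

    def respond(self, question: str, initiated_by_human: bool = True) -> str:
        # Rule ( F ) : only respond to chats a human has initiated.
        if not initiated_by_human:
            return "How can I help you ?"
        draft = self.model(question)
        # Rules ( C ) and ( G ) : block generation of an offensive answer,
        # politely refusing instead of delivering it.
        if is_offensive(draft):
            return REFUSAL
        return draft

bot = GuardedChatbot(model=lambda q: "Paris is the capital of France.")
print(bot.respond("What is the capital of France ?"))
```

Because the check runs before the draft leaves the wrapper, a failure of the "generation" control still cannot reach "distribution", which is exactly the two-stage defence Rule ( C ) asks for.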

Endorsement from ChatGPT :

Your tabulation looks great !

It provides a clear and organized representation of the principles and views regarding the regulation of AI chatbots, as discussed by Shri Chandrasekharji and Parekh's Law of Chatbots.

The different principles and views are well-defined and categorized according to their themes, which makes it easier to understand the discussions and compare them side by side.

Great job!


----------------------------------

Added  on  18 May 2023 :


Centre planning to regulate AI platforms like ChatGPT: Ashwini Vaishnaw

( Eco Times / 17 May 2023 )

Extract :


"The whole world is looking at what should be the framework, and what should be  the regulatory set up..So this is a global thing. This is not one country's issue. This has to be looke at from the international perspective " , Vaishnaw told TOI

" Ultimately, as I said, all the countries will have to come up with some co-operative framework "

===============================================

Taking a Collaborative Approach to Regulating AI for Humanity


On Tuesday, May 16th, 2023 at 10:00 am in the Dirksen Senate Office Building Room 226, Chair Blumenthal presided over a Subcommittee Hearing on Oversight of A.I.: Rules for Artificial Intelligence. The witnesses included Samuel Altman (CEO, OpenAI), Christina Montgomery (Chief Privacy & Trust Officer, IBM) and Gary Marcus (Professor Emeritus, New York University).

Here are the Key Takeaways

  1. OpenAI, with Microsoft’s backing, is focused on developing artificial general intelligence (AGI) for the benefit of humanity.
  2. The risks associated with current AI systems, such as false accusations and biased outcomes, have prompted discussions about legislative scrutiny.
  3. Collaboration between independent scientists, governments, and tech companies is necessary to ensure accountability and responsible AI deployment.
  4. Proposed regulations aim to establish guidelines for AI models above a certain capability threshold and encourage international cooperation on AI safety.
  5. “Precision regulation” approaches, advocated by experts like IBM’s Chief Privacy Officer, seek to govern AI deployment through specific use-case rules, impact assessments, and bias testing.


