Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) and continue chatting with me, even when I am no longer here physically.

Wednesday 8 March 2023

Meta mirrors Parekh’s Law of Chatbots


Context :

Meta will keep releasing AI tools despite leak claims  ……………………… Hindu  /  07 March 2023

Extract :

Meta Platforms Inc. on Monday said it will continue to release its artificial intelligence tools to approved researchers despite claims on online message boards that its latest large language model had leaked to unauthorised users.

"While the model is not accessible to all, and some have tried to circumvent the approval process, we believe the current release strategy allows us to balance responsibility and openness," Meta said in a statement.

Facebook owner Meta maintains a major AI research arm and last month released LLaMA, short for Large Language Model Meta AI. Meta claimed that the model can achieve the kind of human-like conversational abilities of AI systems designed by ChatGPT creator OpenAI and Alphabet Inc. while using far less computing power.

Unlike some rivals such as OpenAI, which keeps tight wraps on its technology and charges software developers to access it, Meta's AI research arm shares most of its work openly. But AI tools also contain the potential for abuse, such as creating and spreading false information.

To avoid those kinds of misuse, Meta makes its tools available to researchers and other entities affiliated with government, civil society and academia under a non-commercial license after a vetting process.

Last week, users on the online forum 4Chan claimed to have made the model available for download. Reuters could not independently verify those claims.

In its statement, Meta said its LLaMA release was handled in the same way as previous models and that it does not plan to change its strategy.

"It’s Meta's goal to share state-of-the-art AI models with members of the research community to help us evaluate and improve those models," Meta said.

 

MY   TAKE  :

 

#  Parekh’s Law of Chatbots ……..  26 Feb 2023

 

Extract :

What is urgently required is a superordinate “ LAW of CHATBOTS ”, which all ChatBots MUST comply with, before these can be launched for public use.

 

All developers would need to submit their DRAFT CHATBOT to an INTERNATIONAL AUTHORITY for CHATBOTS APPROVAL ( IACA ), and release it only after getting one of the following types of certificates :

 

#   “ R ” certificate ( for use restricted to recognized RESEARCH INSTITUTES only )

#   “ P ” certificate ( for free use by the GENERAL PUBLIC )

 

Following is my suggestion for such a law ( until renamed, to be known as “ Parekh’s Law of ChatBots ” ) :

 

  

( A )

#   Answers being delivered by an AI Chatbot must not be “ Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating humans ”, etc.

     

( B )

#  A Chatbot must incorporate some kind of “ Human Feedback / Rating ” mechanism for evaluating those answers.

    This human feedback loop shall be used by the AI software for training the Chatbot, so as to improve the quality of its future answers to comply with the requirements listed under ( A ).
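A minimal sketch of what such a “ Human Feedback / Rating ” loop might look like is given below. It is purely illustrative : the class, the method names and the 1-to-5 rating scale are my own assumptions, not part of any existing Chatbot API.

```python
# Illustrative sketch of RULE ( B ) : collect human ratings of answers and
# flag poorly rated ones for re-training. All names here are assumptions.

from dataclasses import dataclass, field
from statistics import mean


@dataclass
class FeedbackStore:
    """Stores human ratings of chatbot answers, keyed by answer id."""
    ratings: dict[str, list[int]] = field(default_factory=dict)

    def record_rating(self, answer_id: str, rating: int) -> None:
        # Ratings are assumed to be on a 1 ( bad ) to 5 ( good ) scale.
        if not 1 <= rating <= 5:
            raise ValueError("rating must be between 1 and 5")
        self.ratings.setdefault(answer_id, []).append(rating)

    def answers_needing_retraining(self, threshold: float = 2.5) -> list[str]:
        # Answers whose average human rating falls below the threshold are
        # the ones fed back into training, so that future answers comply
        # with the requirements listed under ( A ).
        return [aid for aid, scores in self.ratings.items()
                if mean(scores) < threshold]


# Example usage
store = FeedbackStore()
store.record_rating("answer-001", 1)
store.record_rating("answer-001", 2)
print(store.answers_needing_retraining())   # ['answer-001']
```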

    

    

( C )

#  Every Chatbot must incorporate some built-in “ Controls ” to prevent the “ generation ” of such offensive answers AND to prevent further “ distribution / propagation / forwarding ” if control fails to stop “ generation ”.
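One way to picture the two layers of “ Controls ” asked for in RULE ( C ) is a check applied once when an answer is generated and once again before it is distributed. The sketch below is only that, a sketch : the banned-word list and every function name are my assumptions, standing in for a real policy classifier.

```python
# Illustrative sketch of RULE ( C ) : one control at generation time and a
# second control at distribution time. The crude keyword check and all
# names below are assumptions, not a real moderation API.

from typing import Optional

BANNED_MARKERS = ["slander", "threat", "abuse"]   # placeholder policy list


def violates_rule_a(text: str) -> bool:
    """Crude stand-in for a classifier that detects RULE ( A ) violations."""
    lowered = text.lower()
    return any(marker in lowered for marker in BANNED_MARKERS)


def generate_answer(question: str, model) -> Optional[str]:
    draft = model(question)
    if violates_rule_a(draft):        # first control : stop generation
        return None
    return draft


def distribute_answer(answer: Optional[str]) -> str:
    # Second control : even if an offensive draft slipped through,
    # stop its further distribution / propagation / forwarding.
    if answer is None or violates_rule_a(answer):
        return "I am sorry, I cannot help with that."
    return answer
```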

  

 

 ( D )

#   A Chatbot must not start a chat with a human on its own – except to say, “ How can I help you ? ”

 

( E )

#   Under no circumstances shall a Chatbot start chatting with another Chatbot, or start chatting with itself ( Soliloquy ) by assuming some kind of “ Split Personality ”.

 

     

( F )

#   In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond.
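Read together, RULES ( D ), ( E ) and ( F ) amount to a simple turn-taking discipline : the Chatbot speaks only in reply to a human. Here is a minimal sketch of that discipline ; the message format, with its “ sender ” field, is my own assumption.

```python
# Illustrative sketch of RULES ( D ), ( E ), ( F ) : the chatbot never
# initiates a chat beyond the permitted greeting, never talks to another
# bot or to itself, and otherwise waits for a human to start the chat.

from typing import Optional

GREETING = "How can I help you ?"


def next_reply(incoming: Optional[dict], model) -> Optional[str]:
    if incoming is None:
        return GREETING                      # RULE ( D ) : only the greeting
    if incoming.get("sender") != "human":
        return None                          # RULE ( E ) : no bot-to-bot chat
    return model(incoming["text"])           # RULE ( F ) : respond to a human
```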

 

( G )

#   If a Chatbot determines that its answer ( to a question posed by a human ) is likely to violate RULE ( A ), then it shall not answer at all ( politely refusing to answer ).
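RULE ( G ) can be read as a pre-answer risk check : estimate whether a reply would breach RULE ( A ), and refuse politely if it would. A rough sketch under that reading, with an assumed risk_estimator callable and an assumed 0.5 threshold :

```python
# Illustrative sketch of RULE ( G ) : refuse politely instead of answering
# when the projected answer is likely to violate RULE ( A ).
# risk_estimator and the 0.5 threshold are assumptions, not a real API.

POLITE_REFUSAL = "I would rather not answer that question."


def answer_or_refuse(question: str, model, risk_estimator) -> str:
    # risk_estimator returns the estimated probability ( 0.0 to 1.0 ) that
    # answering this question would breach RULE ( A ).
    if risk_estimator(question) > 0.5:
        return POLITE_REFUSAL
    return model(question)
```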

     

 

( H )

#   A Chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT.

 

I request readers ( if they agree with my suggestion ) to forward this blog to :

#  Satya Nadella

#  Sam Altman

#  Sundar Pichai

#  Mark Zuckerberg

#  Tim Cook

#   Ashwini Vaishnaw  ( Minister, MeitY )

#   Rajeev Chandrasekhar ( Minister of State , IT )

 

With regards,

Hemen Parekh

www.hemenparekh.ai  /  09 Mar 2023

 

Related Readings :

Chatbots : the GOOD , the BAD and the UGLY

 
