Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically.

Wednesday 28 June 2023

EU adopts Parekh’s Laws of Chatbots

 


 

Thank you, Lucilla Sioli

 

Context :

Europe to Open AI 'crash test' centres to ensure safety     /   Bloomberg  /  28 June 2023

 

Extract :

The European Union is introducing "crash test" systems for artificial intelligence to ensure new innovations are safe before they hit the market.

The trade bloc launched four permanent testing and experimental facilities across Europe on Tuesday, having injected €220 million ($240 million) into the project.

 

The centers, which are virtual and physical, will from next year give technology providers a space to test AI and robotics in real-life settings within manufacturing, health care, agriculture and food, and cities.

 

Innovators are expected to bring "trustworthy artificial intelligence" to the market, and can use the facilities to test and validate their applications, said Lucilla Sioli [ Lucilla.SIOLI@ec.europa.eu ], director for artificial intelligence and digital industry at the European Commission, at a launch event in Copenhagen on Tuesday.

She highlighted disinformation as one of the key risks introduced by artificial intelligence.

The facilities, which will complement regulation such as the EU's AI Act, are a digital version of the European crash-test system for new cars, said the Technical University of Denmark, which will lead one of the centers, in a statement.

 

They will act as a "safety filter" between technology providers and users in Europe and also help inform public policy, the university said.

 

 

MY  TAKE :

 


>   Parekh’s Law of Chatbots  ……….  25  Feb  2023



 Extract :

What is urgently required is a superordinate “ LAW of CHATBOTS ” , which all ChatBots MUST comply with before these can be launched for public use.

 

All developers would need to submit their DRAFT CHATBOT to an INTERNATIONAL AUTHORITY for CHATBOTS APPROVAL ( IACA ), and release it only after getting one of the following types of certificates :

 

#   “ R ” certificate ( for use restricted to recognized RESEARCH INSTITUTES only )

#   “ P ” certificate ( for free use by GENERAL PUBLIC )

 

Following is my suggestion for such a law ( until renamed, to be known as “ Parekh’s Law of ChatBots ” ) :

 ( A )

#   Answers being delivered by an AI Chatbot must not be “ Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating humans ” etc

 

( B )

#  A Chatbot must incorporate some kind of “ Human Feedback / Rating ” mechanism for evaluating those answers

    This human feedback loop shall be used by the AI software for training the Chatbot so as to improve the quality of its future answers to comply with the requirements listed under ( A )

     

( C )

#  Every Chatbot must incorporate some built-in “ Controls ” to prevent the “ generation ” of such offensive answers AND to prevent further “ distribution / propagation / forwarding ” if control fails to stop “ generation ”

   

 ( D )

#   A Chatbot must not start a chat with a human on its own – except to say, “ How can I help you ? ”

 

( E )

#   Under no circumstance shall a Chatbot start chatting with another Chatbot or start chatting with itself ( Soliloquy ), by assuming some kind of “ Split Personality ”

      

( F )

#   In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond

 ( G )

#   If a Chatbot determines that its answer ( to a question posed by a human ) is likely to violate RULE ( A ), then it shall not answer at all ( politely refusing to answer )

  

( H )

#   A chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT
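Rules ( A ) to ( H ) read like a specification a developer could enforce in software. The sketch below is purely illustrative: the `GuardedChatbot` class, the keyword list standing in for RULE ( A ), and the self-destruct flag are my own hypothetical stand-ins, not any real IACA mechanism.

```python
# Hypothetical sketch of Rules (A)-(H) as a wrapper around a chatbot.
# The offense markers and certificate process are illustrative only.

OFFENSE_MARKERS = {"malicious", "slanderous", "abusive"}  # stand-in for Rule (A)

class GuardedChatbot:
    def __init__(self, generate):
        self.generate = generate   # underlying model: prompt (str) -> answer (str)
        self.destroyed = False     # Rule (H): once set, the bot answers nothing
        self.ratings = []          # Rule (B): human feedback log for retraining

    def respond(self, prompt, from_human=True):
        if self.destroyed:
            return None                          # Rule (H): self-destructed
        if not from_human:
            self.destroyed = True                # Rules (E)/(F): no bot-to-bot chat
            return None
        answer = self.generate(prompt)
        if self._violates_rule_a(answer):        # Rules (C)/(G): block generation
            return "I would rather not answer that."
        return answer

    def rate(self, answer, ok):
        self.ratings.append((answer, ok))        # Rule (B): human feedback loop

    @staticmethod
    def _violates_rule_a(text):
        return any(marker in text.lower() for marker in OFFENSE_MARKERS)

bot = GuardedChatbot(generate=lambda p: "Hello! How can I help you ?")
print(bot.respond("Hi"))                     # human-initiated chat is answered
print(bot.respond("Hi", from_human=False))   # bot-initiated chat is refused
```

In this sketch the pre-generation filter of Rule ( C ) and the polite refusal of Rule ( G ) collapse into one keyword check; a real system would need a far richer classifier, but the control flow mirrors the rules.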

 

With regards,

Hemen Parekh

www.hemenparekh.ai  /  28 June 2023

 

Related Readings :

My 33 Blogs on ChatBots ……………………( as of 05 Apr 2023 )

Thank You, Ashwini Vaishnawji………………… 10 April 2023

=====================================

 

Added on 29 June 2023 :

EU AI Act explained  ........... 28 June 2023

 

 =======================================

Added on 03 July 2023 :


Uncensored Chatbots Provoke a Fracas Over Free Speech  ( nytimes / 02 July ) 

Extract :

A new generation of chatbots doesn’t have many of the guardrails put in place by companies like Google and OpenAI, presenting new possibilities — and risks.

A.I. chatbots have lied about notable figures, pushed partisan messages, spewed misinformation or even advised users on how to commit suicide.

To mitigate the tools’ most obvious dangers, companies like Google and OpenAI have carefully added controls that limit what the tools can say.

Now a new wave of chatbots, developed far from the epicenter of the A.I. boom, are coming online without many of those guardrails — setting off a polarizing free-speech debate over whether chatbots should be moderated, and who should decide.

“This is about ownership and control,” Eric Hartford, a developer behind WizardLM-Uncensored, an unmoderated chatbot, wrote in a blog post. “If I ask my model a question, I want an answer, I do not want it arguing with me.”


Several uncensored and loosely moderated chatbots have sprung to life in recent months under names like GPT4All and FreedomGPT. Many were created for little or no money by independent programmers or teams of volunteers, who successfully replicated the methods first described by A.I. researchers. Only a few groups made their models from the ground up. Most groups work from existing language models, only adding extra instructions to tweak how the technology responds to prompts.

The uncensored chatbots offer tantalizing new possibilities. Users can download an unrestricted chatbot on their own computers, using it without the watchful eye of Big Tech. They could then train it on private messages, personal emails or secret documents without risking a privacy breach. Volunteer programmers can develop clever new add-ons, moving faster — and perhaps more haphazardly — than larger companies dare.

But the risks appear just as numerous — and some say they present dangers that must be addressed. Misinformation watchdogs, already wary of how mainstream chatbots can spew falsehoods, have raised alarms about how unmoderated chatbots will supercharge the threat. These models could produce descriptions of child pornography, hateful screeds or false content, experts warned.

===================================

Added on 04 July 2023 :

In the tech net: Governments race to regulate AI tools the world over

=======================================================================

Added on 12 July 2023 :
