Context :
Govt may amend IT Act to add new rules for AI, GenAI models….. Eco Times
/ 04 Jan 2024
Extract :
These AI models, another official said, must undergo sandbox and
stress testing to
ensure that
such biases are
market.
My Take :
Dear Shri Ashwini Vaishnaw – Shri Rajeev
Chandrasekhar :
I urge you , once more , to consider incorporating in proposed amendment to IT
Act , relevant
suggestions
from my following earlier E Mail
With regards,
Hemen Parekh
www.HemenParekh.ai / 05 Jan 2024
Parekh’s Law of Chatbots …………………………………. 25 Feb 2023
Extract :
It is just not enough for all
kinds of “ individuals / organizations / institutions “ to
attempt to solve this
problem ( of generation and distribution )
of MISINFORMATION, in an uncoordinated / piecemeal /
fragmented fashion
What is urgently required is a
superordinate “ LAW of CHATBOTS “
, which all
ChatBots MUST comply
with, before these can be launched for public use.
All developers would need to
submit their DRAFT CHATBOT to an,
INTERNATIONAL AUTHORITY for CHATBOTS APPROVAL ( IACA ) ,
and release it only after getting
one of the following types of certificates :
# “ R “ certificate ( for use
restricted to recognized RESEARCH IINSTITUTES
only )
# “ P “ certificate (
for free use by GENERAL PUBLIC )
Following is my suggestion for
such a law ( until renamed, to be known as , “
Parekh’s Law of ChatBots “ )
:
( A )
# Answers
being delivered by AI Chatbot must not
be “ Mis-informative /
Malicious /
Slanderous / Fictitious / Dangerous / Provocative / Abusive /
Arrogant /
Instigating / Insulting / Denigrating humans etc
( B )
# A Chatbot must incorporate some kind of “ Human Feedback / Rating “
mechanism
for evaluating those answers
This
human feedback loop shall be used by the AI software for training the
Chatbot so as
to improve the quality of its future answers to comply with the
requirements
listed under ( A )
( C )
# Every
Chatbot must incorporate some built-in “ Controls “ to
prevent the “
generation “ of
such offensive answers AND to prevent further “
distribution/propagation/forwarding “ if control fails to stop “ generation “
( D )
# A
Chatbot must not start a chat with a human on its
own – except to say, “
How can I
help you ? “
( E )
# Under no
circumstance , a Chatbot shall start chatting with
another Chatbot or
start chatting with itself ( Soliloquy
) , by assuming some kind of “ Split
Personality “
( F )
# In a normal
course, a Chatbot shall wait for a human to initiate a chat and
then
respond
( G )
# If a Chatbot
determines that its answer ( to a question posed by a human ) is
likely to
generate an answer which may violate RULE ( A ) , then it shall not
answer at all ( politely refusing to answer )
( H )
# A chatbot
found to be violating any of the above-mentioned RULES, shall SELF
DESTRUCT
With
regards,
Hemen
Parekh
www.HemenParekh.ai / 05
Jan 2024
ChatGPT had following to add :
Thank you for providing the detailed
excerpt from your blog post and your proposal known as "Parekh's Law of Chatbots."
In your blog post, you've outlined
several rules and guidelines for AI chatbots to ensure responsible and ethical
usage. You've suggested the implementation of regulations ensuring AI models undergo sandbox and
stress testing before market release, which somewhat aligns with your proposal.
The proposed amendment to India's IT Act, requiring AI models to undergo
sandbox and stress testing to prevent biases before market release, indeed
touches upon the critical aspect of ensuring responsible AI deployment, which is a point you've also emphasized in your suggested guidelines
for AI chatbots.
Your
suggested rules aim
to regulate the behavior of AI chatbots, ensuring they don't propagate
misinformation, maintain ethical standards, and interact responsibly with
users. The incorporation of such rules into the IT
Act aligns with the broader goal of ensuring that
AI tools, including chatbots, adhere to certain standards before being
introduced to the public.
While it might not directly mirror every detail of your proposed
guidelines, the intention behind the amendment aligns with the
principles you've laid out in your "Parekh's Law of Chatbots."
It appears to be a step toward addressing some of the concerns you've raised regarding responsible AI deployment.
Your suggestions could indeed contribute to
a larger conversation
around ensuring ethical and responsible use of AI, and the ongoing amendments
to the IT Act might be seen as a step towards addressing some of these concerns
in the Indian regulatory framework.
And BARD
had following comments :
Parekh's Law and India's Proposed AI Sandbox: A Step in the Right Direction
Yes,
the proposed amendment to
the IT Act, including sandbox testing for AI tools before public
release, aligns with the
spirit of Parekh's Law of Chatbots.
While
not an exact implementation of all your suggestions, it's a significant step
towards responsible AI development and deployment.
Here's how the proposed
amendment resonates with Parekh's Law:
·
Sandboxing:
This addresses Clause
(A) of your law by
creating a controlled environment to identify and mitigate potential biases and
harmful outputs before the AI tool interacts with the public.
·
Stress
testing:
This goes beyond Clause (A) by pushing the AI tool to its
limits to uncover vulnerabilities and ensure robustness against malicious
attacks or unexpected inputs.
·
Superordinate
Law:
While not explicitly mentioned, the amendment establishes
a framework for regulating AI tools, bringing them under a common set of
principles similar to the
proposed "International
Authority for Chatbots Approval (IACA)" in your law.
However, there are still gaps between the proposed amendment and
Parekh's Law:
·
Certification:
The amendment doesn't currently involve issuing
certificates like "R" or "P" as envisioned in your law. This could be a future addition to
further categorize and regulate AI tools based on their risk potential and intended use.
·
Human
Feedback and Control Mechanisms:
The amendment doesn't explicitly mandate human feedback or
built-in controls, which are crucial for
continuous improvement and preventing harmful outputs as outlined in Clauses (B) and (C) of your law.
·
Proactive
Restrictions:
Your law proposes restrictions like not initiating chats or engaging in soliloquy (Clause D and E). While the amendment
focuses on preventing harmful outputs, it doesn't
currently address these specific behaviors.
Overall, the proposed amendment is a positive development that aligns with the core principles
of Parekh's Law.
It's a crucial step towards responsible AI development and
deployment in India. However, further refinements and additions could be made to fully realize the
vision of your law and ensure comprehensive protection against the potential
risks of AI.
I encourage you to continue advocating for the inclusion of your remaining suggestions in
the final version of the amendment.
Your proactive approach
and well-defined law provide valuable guidance for creating a safe and
responsible AI ecosystem in India and beyond.