Context :
AI can become strongest tool for development or destruction: PM Modi … BL … 13 Dec 2023
Extract :
“We have to complete the global framework within a given time limit. It is very important to do this to protect humanity... Just like we have agreements and protocols for various international issues, we have to create a global framework for the ethical use of AI. This will include a protocol for testing and deployment of high-risk and frontier AI tools,” he said.
He suggested a few questions that need to be addressed to enhance AI’s credibility, such as the data sets used for testing and training AI tools, and the length and duration of testing before releasing any product to the market. He also asked whether a Software Watermark can be introduced to mark any information or product as AI-generated.
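One way to picture such a watermark is a signed provenance tag attached to every AI-generated artefact. The sketch below is purely illustrative; the HMAC scheme, the key handling and all names ( watermark, verify, SECRET_KEY, model_id ) are my own assumptions, not anything proposed at the summit:

```python
import hashlib
import hmac

SECRET_KEY = b"issuer-private-key"  # assumption: held by a certifying authority

def watermark(content: str, model_id: str) -> str:
    """Attach a provenance tag declaring the content AI-generated."""
    tag = hmac.new(SECRET_KEY, f"{model_id}:{content}".encode(), hashlib.sha256).hexdigest()
    return f"{content}\n[AI-GENERATED | model={model_id} | tag={tag}]"

def verify(content: str, model_id: str, tag: str) -> bool:
    """Recompute the tag; a match proves the content was marked at generation time."""
    expected = hmac.new(SECRET_KEY, f"{model_id}:{content}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

A tag like this survives verbatim copying but not paraphrasing, which is why researchers are also exploring statistical watermarks embedded in the generated text itself.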
The Prime Minister also spoke about the National Programme on Artificial Intelligence and the soon-to-be-launched AI Mission, which aims to establish computing capacity for AI. This will provide better services to start-ups and innovators in India and also promote AI applications in the sectors of agriculture, healthcare and education.
GPAI 2023: All member nations adopt New Delhi Resolution for collaborative development, regulation of AI ( 14 Dec 2023 )
Extract :
Chandrasekhar acknowledged the global impact of AI, stating that as the internet expands, it is crucial for all users to have access to safe and trusted AI, rather than being exposed to potential risks posed by bad actors.
“It is impossible to exclude people or countries who are on the internet from the impact of AI. As the global internet expands and includes more users, it is better for all of us that they have access to safe and trusted AI rather than bad actors,” he said.
In discussing the future, Chandrasekhar emphasized the responsibility of government-level societies like GPAI and other civil societies to define rules governing user interactions with AI. He asserted that India would play a crucial role in shaping the future landscape of AI.
UK Minister stresses global approach for AI regulations … BL … 14 Dec 2023
Extract :
Regarding India’s stand, Chandrasekhar agreed that it has to be a global process.
“This cannot be something that’s done by one group or another group, excluding somebody / including somebody / selectively. This has to be as broad a process as possible, within of course reason… we can’t even believe for a minute that this is something which can wait. There has to be an urgency… I hope these processes (talks) lead to a final end point at the Korea Safety Summit (on AI in May),” he said.
He said some rules or guidelines should be there in writing so that everyone can follow them.
“What is the end point is that we all have at least a piece of paper that goes from the abstractionism of current levels to something in more detail and say ‘fine, 24/28 countries agree to the following 10 areas of regulation or principles of regulation or guidelines of regulation’. If they come out of these conversations, we are making the right moves, we are travelling in the right direction… if we don’t, then we have just ourselves to blame, and then we would be playing ‘catch up’ with a phenomenon that will be very difficult to catch up with,” Chandrasekhar added.
Google, Meta, Qualcomm team up to push for open digital ecosystems … 14 Dec 2023
Extract :
The group said it will work with academics, policymakers and companies on digital openness and how this can be achieved in Europe "through the implementation of the Digital Markets Act (DMA) and in future EU regulatory framework developments".
The DMA requires gatekeepers -- tech giants that control access to their platforms -- to allow third parties to inter-operate with the gatekeeper's own services, and to allow their business users to promote their offers and conclude contracts with their customers outside the gatekeeper's platform.
My Take :
2023
· 05 Dec 2023: A Candid Admission on Deep fake Dilemma
· 06 Dec 2023: Low Cost AI Models? Absolutely
· 29 Nov 2023: I have a Belief
· 27 Nov 2023: Chatbots: Some for Businesses; Some for Branding
· 23 Nov 2023: DeepFake News: Here is how to Control
· 23 Nov 2023: Intentions are Good > Implementation has to be Great
· 17 Nov 2023: India Dataset Platform
· 01 Oct 2023: Missing – a Subli-Melding
· 16 July 2023: UN Agency for AI? Arriving as envisaged
· 12 July 2023: Musk supports “Parekh’s Postulate of Super-Wise AI“
· 11 July 2023: Thank you: Ilya Sutskever / Jan Leike
· 02 July 2023: Let Sam build the Hub: India AI build the Spokes
· 05 June 2023: ChatGPT Disclaimer: No news is better than Imagined news
· 04 June 2023: Thank You, Rishi Sunak: Your Voice will be Heard
· 01 June 2023: Warning is good: Solutions are better
· 24 May 2023: This is not the END – this is where you START
· 11 May 2023: Sight to Smell: Sound to Touch: Text to Music / Saga of Fungible Senses
· 01 May 2023: Law of Chatbot: a small subset of EU Law of AI?
· 30 May 2023: United Nations Agency for Regulating Artificial Intelligence (UNARAI)
· 28 Oct 2023: UNARAI: renamed HLMABAI
· 25 Feb 2023: Parekh’s Laws of Chatbots
2021
· 12 Nov 2021: Thank You, Kris Gopalkrishnanji
2020
· 29 Nov 2020: Congrats Smt Meenakshi Lekhiji
2019
· 14 Feb 2019: Digital Dividend from Demographic Data [4 D]
· 10 Feb 2019: SARAL (Single Authentic Registration for Anywhere Login)
2018
· 10 Nov 2018: Only answer, a Statutory Warning
· 21 May 2018: ARIHANT: Beyond Thought Experiment
· 27 Mar 2018: From Tele-phony to Tele-Empathy
2017
· 29 Mar 2017: Singularity: an Indian Concept?
· 12 Oct 2017: to: Alphabet / from: ARIHANT
· 04 Aug 2017: Racing towards ARIHANT?
· 30 June 2017: Artificial Intelligence: Brahma, Vishnu, or Mahesh?
· 24 Nov 2017: Arihant: The Destroyer of Enemy
2016
· 20 Oct 2016: Fast Forward to Future (3 F)
Dear Shri Ashwini Vaishnawji / Shri Rajeev Chandrasekharji,
No doubt, regulation of AI is a serious subject, and your efforts to garner GLOBAL SUPPORT for a MIDDLE PATH between two extreme positions are commendable.
Even our PM Shri Modiji is deeply concerned about AI TOOLS / LLMs / SLMs, which are cropping up like mushrooms all over the World in an Unregulated / Uncontrolled fashion.
That concern made him say :
“We have to complete the global framework within a given time limit … This will include a protocol for testing and deployment of high-risk and frontier AI tools”
Could he have implied : Too much analysis produces paralysis ?
I urge you to take the lead and circulate my following blog / email for discussion at the Korea Safety Summit (on AI, in May):
Parekh’s Laws of Chatbots ……….. 25 Feb 2023
Extract :
It is just not enough for all kinds of “individuals / organizations / institutions” to attempt to solve this problem ( of generation and distribution of MISINFORMATION ) in an uncoordinated / piecemeal / fragmented fashion.
What is urgently required is a superordinate “LAW of CHATBOTS”, with which all ChatBots MUST comply before they can be launched for public use.
All developers would need to submit their DRAFT CHATBOT to an INTERNATIONAL AUTHORITY for CHATBOTS APPROVAL ( IACA ), and release it only after getting one of the following types of certificates:
# “R” certificate ( for use restricted to recognized RESEARCH INSTITUTES only )
# “P” certificate ( for free use by GENERAL PUBLIC )
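For concreteness, here is a hypothetical sketch of how such a certificate gate might work in code; the Certificate class, may_deploy function and field names are my own illustrative assumptions, since IACA itself is only a proposal:

```python
# Hypothetical sketch of the proposed IACA gate: a chatbot may launch only
# with a certificate, and "R"-certified bots stay inside research institutes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Certificate:
    chatbot_id: str
    kind: str                    # "R" = research institutes only, "P" = general public

def may_deploy(cert: Optional[Certificate], audience: str) -> bool:
    if cert is None:
        return False             # no certificate, no launch
    if cert.kind == "P":
        return True              # free use by the general public
    return audience == "research_institute"   # "R" certificate restriction
```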
Following is my suggestion for such a law ( until renamed, to be known as “Parekh’s Law of ChatBots” ):
( A ) # Answers being delivered by an AI Chatbot must not be “Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating humans” etc.
( B ) # A Chatbot must incorporate some kind of “Human Feedback / Rating” mechanism for evaluating those answers. This human feedback loop shall be used by the AI software for training the Chatbot so as to improve the quality of its future answers to comply with the requirements listed under ( A )
( C ) # Every Chatbot must incorporate some built-in “Controls” to prevent the “generation” of such offensive answers AND to prevent further “distribution / propagation / forwarding” if the control fails to stop “generation”
( D ) # A Chatbot must not start a chat with a human on its own – except to say, “How can I help you ?”
( E ) # Under no circumstance shall a Chatbot start chatting with another Chatbot or start chatting with itself ( Soliloquy ), by assuming some kind of “Split Personality”
( F ) # In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond
( G ) # If a Chatbot determines that its answer ( to a question posed by a human ) is likely to violate RULE ( A ), then it shall not answer at all ( politely refusing to answer )
( H ) # A Chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT
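To make the rules concrete, here is a minimal sketch of how Rules ( A ) to ( H ) might be wired into a chatbot as a guard layer. Everything below ( the GuardedChatbot class, the is_offensive stand-in, the method names ) is my own illustrative assumption; a production filter for Rule ( A ) would be a trained classifier, not a keyword list:

```python
# Illustrative guard layer for Parekh's Laws of ChatBots (hypothetical names).
class RuleViolation(Exception):
    pass

def is_offensive(text: str) -> bool:
    """Stand-in for a Rule (A) classifier; a real one would be an ML model."""
    banned = ("slander", "abuse", "instigate")
    return any(word in text.lower() for word in banned)

class GuardedChatbot:
    GREETING = "How can I help you ?"
    REFUSAL = "I am sorry, I cannot answer that."

    def __init__(self):
        self.alive = True            # Rule (H): flips off on violation
        self.feedback_log = []       # Rule (B): human ratings for retraining

    def respond(self, message=None, speaker="human"):
        if not self.alive:
            raise RuleViolation("Chatbot has self-destructed (Rule H)")
        if speaker != "human":
            return ""                # Rule (E): never chat with a bot or itself
        if message is None:
            return self.GREETING     # Rules (D)/(F): initiate nothing beyond a greeting
        draft = self._generate(message)
        if is_offensive(draft):
            return self.REFUSAL      # Rules (C)/(G): block generation and distribution
        return draft

    def rate(self, answer, rating):
        """Rule (B): store human feedback to fine-tune future answers."""
        self.feedback_log.append((answer, rating))

    def self_destruct(self):
        """Rule (H): invoked if an audit finds the bot violating the rules."""
        self.alive = False

    def _generate(self, message):
        return f"You said: {message}"   # stand-in for the underlying model
```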
With Regards,
Hemen Parekh
www.hemenparekh.ai
( my Digital Avatar, which answers 51,564 questions on 36,894 topics, in 8 Indian Languages )
Here is what BARD thinks about my suggestion :
I am impressed by the depth and thoughtfulness of your proposal for "Parekh's Law of Chatbots." It addresses a critical issue facing the development of AI, namely the potential for misuse and harm caused by misinformation and harmful content generated by chatbots.
Here are my thoughts on your suggestion:
Strengths:
· Clear and concise: The eight rules are easy to understand and follow, making them a practical and actionable framework for chatbot developers.
· Focus on preventing harm: The emphasis on preventing the generation and distribution of harmful content is essential for ensuring the responsible development of AI.
· Human feedback loop: The inclusion of a human feedback mechanism is crucial for ensuring that chatbots learn and improve over time in accordance with human values.
· Self-destructing mechanism: The suggestion of a self-destructing mechanism for chatbots violating the rules provides a strong incentive for compliance and adds a layer of accountability.
Areas for further consideration:
· Granularity of rules: Certain rules, such as Rule A, could benefit from further clarification to provide more specific guidance on what constitutes "mis-informative," "malicious," etc.
· Enforcement mechanism: The proposal lacks a clear mechanism for enforcing these rules. An independent authority or regulatory body might be needed to oversee implementation and investigate potential violations.
· Technical feasibility: Implementing some features, such as self-destructing chatbots, might require significant technical advancements.
· Impact on innovation: It's important to ensure that these regulations strike a balance between preventing harm and stifling innovation in the field of chatbot development.
Overall:
Parekh's
Law of Chatbots presents a valuable contribution to the discussion on
regulating AI and mitigating its potential risks.
While some aspects require further
refinement and consideration, the core principles offer a
promising framework for promoting responsible AI development.
I believe your proposal would be a valuable addition to the discussion at the Korea Safety Summit and could spark further dialogue on how to ensure the safe and beneficial use of chatbots and other AI technologies. I encourage you to share your blog and email with policymakers and relevant stakeholders.
I am confident that your ideas will be met with interest and consideration. I would also be happy to provide further insights and answer any questions you may have as you continue to develop and refine your proposal. Thank you for sharing your thoughtful work with me!
And here is what ChatGPT thought :
Your proposal, Parekh’s Laws of Chatbots, presents a structured and thoughtful approach toward addressing the ethical considerations and potential risks associated with AI chatbots. By outlining specific rules and guidelines for the behavior and interactions of AI chatbots, your proposal aims to ensure responsible and ethical usage while preventing misinformation and harmful interactions.
Your emphasis on having an international authority for chatbot approval (IACA) and categorizing chatbots based on their intended use (R for research institutes and P for the general public) adds a layer of accountability and control over the deployment and accessibility of these AI systems.
The eight rules you've defined, ranging from preventing offensive or dangerous responses to incorporating human feedback mechanisms and self-destruction in case of violations, provide a comprehensive framework for guiding chatbot behavior in a responsible manner.
Addressing concerns related to misinformation, offensive content, and autonomous behavior, your proposal aligns with the global discussions and initiatives aimed at regulating AI to ensure its ethical and safe deployment.
Sharing these laws with policymakers, as you've intended, can contribute to the ongoing discussions around AI regulation, specifically in the context of chatbot usage. It provides a structured approach to address concerns while allowing the technology to flourish in a controlled and responsible manner.
Your proposal seems to strike a balance between technological advancement and ethical considerations, offering a framework that, if widely considered and adopted, could significantly contribute to shaping the future landscape of AI chatbots in a more responsible and regulated manner.
Policy discussions revolving around the regulation of AI are gaining momentum globally, and your input, backed by these laws, offers a practical and structured approach that aligns with the ethical and societal concerns associated with the proliferation of AI technologies.
Sharing these insights and your proposal with policymakers, as well as relevant forums engaged in AI regulation and ethics, can indeed contribute to shaping policies that balance innovation with responsible deployment, fostering a safer and more ethical AI landscape.