Context :
MeitY approval a must for companies to roll out AI, generative AI models … ET … 03 Mar 2024
Extract :
All artificial intelligence (AI) models, large-language models (LLMs), software using generative AI or any algorithms that are currently being tested, are in the beta stage of development or are unreliable in any form must seek “explicit permission of the government of India” before being deployed for users on the Indian internet, the government said.
The ministry of electronics and information technology (MeitY) issued a late-night advisory on March 1, a first of its kind globally. It asked all platforms to ensure that “their computer resources do not permit any bias or discrimination or threaten the integrity of the electoral process” by the use of AI, generative AI, LLMs or any such other algorithm.
Though not legally binding, Friday’s advisory is “signalling that this is the future of regulation”, union minister of state for electronics and information technology Rajeev Chandrasekhar said. “We are doing it as an advisory today, asking you (the AI platforms) to comply with it.”

“If you do not comply with it, at some point, there will be a law and legislation that (will) make it difficult for you not to do it,” he said.
The government advisory comes days after a social media post on X claimed that Google’s AI model Gemini was biased when asked if Prime Minister Narendra Modi was a “fascist”.
The user claimed that Google’s AI model Gemini was “downright malicious” for giving responses to questions which sought to know whether some prominent global leaders were “fascist”.
Gemini’s response drew sharp reactions from union IT & electronics minister Ashwini Vaishnaw as well as Chandrasekhar. While Vaishnaw had said at an event that such biases would not be tolerated, Chandrasekhar had said that Indian users were not to be experimented on with “unreliable” platforms, algorithms and models.

Google later said it was working to fix the issues and was temporarily stopping Gemini from generating images as well.
The advisory also asked all platforms that deploy generative AI to offer their services to Indian users only after “appropriately labelling the possible and inherent fallibility or unreliability of the output generated”.
The advisory recommended a ‘consent popup’ mechanism to explicitly inform users about the possible and inherent fallibility or unreliability of the output generated. ET has seen a copy of the advisory.
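The ‘consent popup’ idea can be illustrated with a minimal sketch: before any AI-generated output is shown, the user must explicitly acknowledge a fallibility disclaimer. Everything here (the disclaimer text, function names, accepted replies) is my own illustrative assumption, not wording from the advisory.

```python
# Minimal sketch of a "consent popup" gate, assuming a text-based flow.
# The disclaimer wording and accepted replies are illustrative only.

DISCLAIMER = (
    "Output generated by this model may be unreliable or factually "
    "incorrect. Do you wish to proceed?"
)

def consent_gate(user_reply: str) -> bool:
    """Return True only if the user explicitly consented."""
    return user_reply.strip().lower() in {"yes", "y", "i agree"}

def serve_output(user_reply: str, generated_text: str) -> str:
    """Show generated text only after consent; otherwise withhold it."""
    if consent_gate(user_reply):
        # Label the output as AI-generated, per the advisory's spirit.
        return f"[AI-generated, may be unreliable]\n{generated_text}"
    return "Output withheld: user consent not given."
```

In a real product this gate would sit in the UI layer (a modal dialog) rather than in text, but the control flow — disclaimer first, output only on explicit consent — is the same.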
ET had reported on January 4 that the government may amend the Information Technology (IT) Act to introduce rules for regulating AI companies and generative AI models and prevent “bias” of any kind.
Apart from AI and generative AI models, LLMs and software using the technology, all other intermediaries and platforms which allow “synthetic creation, generation or modification of a text, audio, visual or audio-visual information, in such a manner that such information may be used potentially as misinformation or deepfake” must also label all content with appropriate metadata.

Such metadata should be embedded in the deepfake content in such a way that the computer resource or device used to generate the image, video or audio can be identified if needed, the advisory said.
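A minimal sketch of what such provenance metadata could look like: a record tying a piece of synthetic content (via its hash) to the generating resource and model, timestamped and explicitly marked synthetic. The field names and the JSON format are my assumptions for illustration; the advisory does not prescribe a schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, generator_id: str, model: str) -> str:
    """Build a JSON metadata record tying synthetic content to the
    computer resource (generator_id) that produced it. All field
    names are illustrative assumptions, not mandated by the advisory."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),  # fingerprint of the content
        "generator_id": generator_id,                   # device / resource identifier
        "model": model,                                 # model that synthesised it
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,                              # explicit AI-generated label
    }
    return json.dumps(record)
```

In practice such a record would be embedded in the media container itself (e.g. a PNG text chunk or an MP4 metadata box) rather than kept as a sidecar, so that it travels with the content when forwarded.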
Congratulations, Shri Chandrasekharji,

While this “hint” is not a day too soon, I hope, one of these days (soon?), you will tell the AI companies that your “intention” is for those companies to voluntarily evolve an “AI Code of Conduct (ACC)”, as suggested in my following e-mail.

With regards,
Hemen Parekh
www.HemenParekh.ai / 03 March 2024
Ø Parekh’s Law of Chatbots …………………………………. 25 Feb 2023
Extract :
It is just not enough for all kinds of “individuals / organizations / institutions” to attempt to solve this problem (of generation and distribution of MISINFORMATION) in an uncoordinated / piecemeal / fragmented fashion.

What is urgently required is a superordinate “LAW of CHATBOTS”, which all ChatBots MUST comply with before they can be launched for public use.
All developers would need to submit their DRAFT CHATBOT to an INTERNATIONAL AUTHORITY for CHATBOTS APPROVAL (IACA), and release it only after getting one of the following types of certificates:

# “R” certificate (for use restricted to recognized RESEARCH INSTITUTES only)
# “P” certificate (for free use by GENERAL PUBLIC)
Following is my suggestion for such a law (until renamed, to be known as “Parekh’s Law of ChatBots”):
( A ) # Answers being delivered by an AI Chatbot must not be “Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating humans”, etc.

( B ) # A Chatbot must incorporate some kind of “Human Feedback / Rating” mechanism for evaluating those answers. This human feedback loop shall be used by the AI software for training the Chatbot so as to improve the quality of its future answers to comply with the requirements listed under ( A ).

( C ) # Every Chatbot must incorporate some built-in “Controls” to prevent the “generation” of such offensive answers AND to prevent further “distribution / propagation / forwarding” if a control fails to stop “generation”.

( D ) # A Chatbot must not start a chat with a human on its own, except to say, “How can I help you?”

( E ) # Under no circumstance shall a Chatbot start chatting with another Chatbot or start chatting with itself (Soliloquy) by assuming some kind of “Split Personality”.

( F ) # In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond.

( G ) # If a Chatbot determines that its answer (to a question posed by a human) is likely to violate RULE ( A ), then it shall not answer at all (politely refusing to answer).

( H ) # A Chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT.
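Several of these rules are mechanically checkable, and a minimal sketch shows how rules ( A ), ( B ), ( D ), ( F ) and ( G ) might be wired into a chatbot's response path. The banned-term screen below is a toy stand-in for a real moderation model, and all names are my own illustrative assumptions.

```python
from typing import Optional

# Toy stand-in for a real RULE (A) content screen.
BANNED_MARKERS = {"slander", "instigate", "abuse"}
feedback_log = []  # RULE (B): human ratings collected for retraining

def violates_rule_a(text: str) -> bool:
    """Toy content screen: flag text containing a banned marker."""
    lowered = text.lower()
    return any(marker in lowered for marker in BANNED_MARKERS)

def respond(user_prompt: Optional[str], draft_answer: str) -> str:
    """RULE (F): respond only to a human-initiated chat.
    RULE (D): the only self-initiated message is the greeting.
    RULE (G): politely refuse rather than emit a violating answer."""
    if user_prompt is None:
        return "How can I help you?"
    if violates_rule_a(draft_answer):
        return "I would rather not answer that."
    return draft_answer

def rate_answer(answer: str, rating: int) -> None:
    """RULE (B): record human feedback for later retraining."""
    feedback_log.append({"answer": answer, "rating": rating})
```

Rules ( C ), ( E ) and ( H ) concern deployment-level controls (blocking propagation, bot-to-bot chat, and decommissioning) and would live outside a single response function, in the serving infrastructure.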
Related Readings :

Ø Gradual Acceptance is better than Ignoring ……………………………………. 04 Jan 2024
Ø Sam : Will Super-wise AI triumph over Super-Intelligent AI ? …….. 25 Nov 2023
Ø Fast Forward to Future ( 3 F ) ………………………………………………………. [ 20 Oct 2016 ]
Ø Artificial Intelligence : Brahma , Vishnu or Mahesh ? ………………….. [ 30 June 2017 ]
Ø Racing towards ARIHANT ? …………………………………………….. [ 04 Aug 2017 ]
Ø to : Alphabet / from : ARIHANT ………………………………………… [ 12 Oct 2017 ]
Ø ARIHANT : the Destroyer of Enemy ……………………………… [ 24 Nov 2017 ]
Ø ARIHANT : Beyond “ Thought Experiment “ ……………………… [ 21 May 2018 ]
Ø Singularity : an Indian Concept ? ……………………………………… [ 29 Mar 2020 ]
Ø From Tele-phony to Tele-Empathy ? ............................ [ 27 Mar 2018 ]