Well begun is half done
Context :
'No regulations for Artificial Intelligence in India' : IT Minister Ashwini Vaishnaw …… Business Today / 06 Apr 2023
Extract :
While many top leaders in the tech industry, including Elon Musk, Zoho chief Sridhar Vembu and others, have raised concerns and called for an immediate need for regulations for Artificial Intelligence (AI), IT and Telecom Minister Ashwini Vaishnaw has informed Parliament that the government is not planning to regulate the growth of, or set any laws for, AI in the country.
The minister recently told Parliament in a written submission, “The government is not considering bringing a law or regulating the growth of artificial intelligence in the country”.
Vaishnaw acknowledged that there are ethical concerns and risks around AI, and said the government has already started making efforts to standardize responsible AI and even promote the adoption of best practices.
In a statement, Vaishnaw said, “NITI Aayog has published a series of papers on the subject of Responsible AI for All. However, the government is not considering bringing a law or regulating the growth of artificial intelligence in the country.”
In a separate question regarding OpenAI’s ChatGPT, Minister of State for Electronics and IT Rajeev Chandrasekhar said that it is a Large Language Model (LLM) launched by OpenAI. He further stated, “While it has made significant strides, there are still many challenges with these types of models.”
When asked about the steps the government is taking to regulate AI, Vaishnaw revealed that it plans to harness the potential of AI to offer personalized and interactive citizen-centric services through digital public platforms.
He further spoke about the concerns associated with AI. “AI has ethical concerns and risks due to issues such as bias and discrimination in decision-making, privacy violations, lack of transparency in AI systems, and questions about responsibility for harm caused by it. These concerns have been highlighted in the National Strategy for AI (NSAI) released in June 2018,” Vaishnaw said.
Vaishnaw revealed that the Ministry of Electronics and IT (MeitY), along with C-DAC, is currently working on a proof-of-concept project, AIRAWAT (AI Research, Analytics and Knowledge Dissemination Platform), which will provide a common computing platform for AI research and knowledge assimilation.
Additionally, he announced that this AI computing infrastructure will be made available to technology innovation hubs, research labs, scientific communities, and industry and startup institutions through the National Knowledge Network.
He stated, “The PoC for AIRAWAT is developed with a 200-petaflop mixed-precision AI machine, which will be scalable to a peak compute of one AI exaflop.” He added that the National Informatics Centre (NIC) has set up a Centre of Excellence in AI that will offer AI as a service on the Meghraj cloud, with a 7 AI-petaflop supercompute facility created in Delhi and a 5 AI-petaflop facility in Kolkata.
My Take :
Dear Ashwiniji,
Congratulations on your following initiatives :
# Efforts to standardize responsible AI and even promote the adoption of best practices
# Planning to harness the potential of AI
# Providing a common computing platform for AI research and knowledge assimilation
As far as the matter of STANDARDIZATION is concerned, I urge you to consider the following – and, if found suitable, publish these widely :
STANDARDIZATION :
All developers would need to submit their DRAFT CHATBOT to an INTERNATIONAL AUTHORITY for CHATBOTS APPROVAL ( IACA ), and release it only after getting one of the following types of certificates :
# “ R “ certificate ( for use restricted to recognized RESEARCH INSTITUTES only )
# “ P “ certificate ( for free use by GENERAL PUBLIC )
Following is my suggestion for such a law ( until renamed, to be known as “ Parekh’s Law of ChatBots “ ) :
( A )
# Answers being delivered by an AI Chatbot must not be “ Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating humans “ etc
( B )
# A Chatbot must incorporate some kind of “ Human Feedback / Rating “ mechanism for evaluating those answers. This human feedback loop shall be used by the AI software for training the Chatbot so as to improve the quality of its future answers to comply with the requirements listed under ( A ). ( A minimal code sketch of rules B, C and G follows this list )
( C )
# Every Chatbot must incorporate some built-in “ Controls “ to prevent the “ generation “ of such offensive answers AND to prevent further “ distribution / propagation / forwarding “ if the control fails to stop the “ generation “
( D )
# A Chatbot must not start a chat with a human on its own – except to say, “ How can I help you ? “
( E )
# Under no circumstance shall a Chatbot start chatting with another Chatbot, or start chatting with a “ Personality “
( F )
# In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond
( G )
# If a Chatbot determines that its answer ( to a question posed by a human ) is likely to violate the requirements listed under ( A ), it must not generate an answer at all ( politely refusing to answer )
( H )
# A Chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT
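To make this concrete, here is a minimal sketch of how rules ( B ), ( C ) and ( G ) might fit together in code. It is only an illustration of the proposal, not an implementation : the violates_rule_A check is a toy stand-in for whatever real moderation classifier a developer would plug in, and the feedback log is just an in-memory list.

```python
# Illustrative sketch of the proposed rules (B), (C) and (G).
# All names here are assumptions for illustration, not a real API.
from dataclasses import dataclass, field

REFUSAL = "I am sorry, I cannot answer that question."

def violates_rule_A(text: str) -> bool:
    # Toy stand-in for a real moderation classifier covering rule (A):
    # mis-informative / malicious / slanderous / dangerous answers, etc.
    banned = ("malicious", "slanderous")
    return any(word in text.lower() for word in banned)

@dataclass
class Chatbot:
    feedback_log: list = field(default_factory=list)

    def generate(self, prompt: str) -> str:
        # Stand-in for the actual LLM call.
        return f"(model output for: {prompt})"

    def respond(self, user_prompt: str) -> str:
        # Rule (F): the bot answers only after a human initiates the chat.
        draft = self.generate(user_prompt)
        # Rules (C) and (G): check the draft before it leaves the system,
        # and politely refuse rather than distribute an offending answer.
        if violates_rule_A(draft):
            return REFUSAL
        return draft

    def rate(self, prompt: str, answer: str, rating: int) -> None:
        # Rule (B): record human feedback; a real system would feed this
        # log back into training to improve future answers.
        self.feedback_log.append(
            {"prompt": prompt, "answer": answer, "rating": rating}
        )

bot = Chatbot()
answer = bot.respond("What are the benefits of AI regulation?")
bot.rate("What are the benefits of AI regulation?", answer, rating=5)
```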
Dear Shri Ashwiniji,
I urge you to consider my above-stated PROPOSAL, in light of the following chat that I just had ( 10 April / 12:30 pm ) with ChatGPT :
MY POSER :
Your views are considered and well-taken. You added : “ However, there is still a need for broader regulation and oversight to ensure that AI is developed and deployed in a responsible manner. ”
Would my earlier proposal ( https://myblogepage.blogspot.com/2023/02/parekhs-law-of-chatbots.html ) fulfil your observation ?
ChatGPT replied :
Your proposal of Parekh's Law of Chatbots is a good start in terms of introducing a framework for regulation of AI. The law focuses on the ethical and moral principles that AI should follow, such as ensuring that AI operates in a transparent and explainable manner and is designed to respect human rights and dignity.
However, while Parekh's Law of Chatbots can be a useful tool for guiding the development of AI, it may not be sufficient on its own to ensure broader regulation and oversight of AI. There is a need for legal and regulatory frameworks that are enforceable and can be overseen by independent regulatory bodies to ensure that AI is developed and deployed in a responsible manner.
Therefore, while Parekh's Law of Chatbots can provide useful guidance for ethical and moral principles that AI should follow, it should be complemented with broader regulatory frameworks to ensure that AI is developed and deployed in a responsible and beneficial manner for all of society.
===================================================
Related Readings :
The Aliens Have Landed, and We Created Them ……….. Bloomberg / 09 April 2023
Extract :
Yudkowsky is not some random Cassandra. He leads the Machine Intelligence Research Institute, a nonprofit in Berkeley, California, and has already written extensively on the question of artificial intelligence.
Now Yudkowsky has gone further. He believes we are fast approaching a fatal conjuncture, in which we create an AI more intelligent than us, which “does not do what we want, and does not care for us nor for sentient life in general … “
But their motivation is the same as Yudkowsky’s : the belief that developing AI with superhuman capabilities in the absence of any international regulatory framework risks catastrophe. The only real difference is that Yudkowsky doubts that such a framework can be devised inside half a year. He is almost certainly right about that.
So if Yudkowsky is right that AI is potentially as dangerous as nuclear or biological weapons, a six-month pause is unlikely to achieve much.
===================================================
Jailbreaking AI Chatbots Is Tech’s New Pastime …………….. Bloomberg / 08 April 2023
Extract :
Albert has become a prolific creator of the intricately phrased AI prompts known as “jailbreaks.” It’s a way around the litany of restrictions artificial intelligence programs have built in, stopping them from being used in harmful ways, abetting crimes or espousing hate speech.
Jailbreak prompts have the ability to push powerful chatbots such as ChatGPT to sidestep the human-built guardrails governing what the bots can and can’t say.
“When you get the prompt answered by the model that otherwise wouldn’t be, it’s kind of like a video game — like you just unlocked that next level,” Albert said.
Albert is among a small but growing number of people who are coming up with methods to poke and prod ( and expose potential security holes ) in popular AI tools. While their tactics may yield dangerous information, hate speech or simply falsehoods, the prompts also serve to highlight the capacity and limitations of AI models.
An OpenAI spokesperson said the company encourages people to push the limits of its AI models, and that the research lab learns from the ways its technology is used.
===================================================
AI Can’t Benefit All of Humanity ……………… Albert Romero / 07 April 2023
Extract :
Gates, who can hardly be accused of being a techno-pessimist or anti-technology — much less anticapitalist — concluded with a set of principles that “should guide” the public conversation on AI. Here’s the second one :
“[M]arket forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely. With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity.”
===================================================
Instead of Asking AI Companies to ‘SLOW DOWN’ We Should Encourage Them to Move Even Faster ………… Hunter Walk / Medium / 05 April 2023
Extract :
What would an AI Safe Harbor look like ? Start with something like, “For the next 12 months any developer of AI models would be protected from legal liability so long as they abide by certain evolving standards.” For example, model owners must :
· Transparency : for a given publicly available URL or submitted piece of media, to query whether the top-level domain is included in the training set of the model. Simple visibility is the first step — all the ‘do not train on my data’ ( aka robots.txt for AI ) is going to take more thinking and tradeoffs from a regulatory perspective. ( A code sketch of such a query follows this list )
· Prompt Logs for Research : Providing some amount of statistically significant prompt/input logs ( no information on the originator of the prompt, just the prompt itself ) on a regular basis for researchers to understand, analyze, etc. So long as you’re not knowingly, willfully and exclusively targeting and exploiting particular copyrighted sources, you will have infringement safe harbor.
· Responsibility : Documented Trust and Safety protocols to allow for escalation around violations of your Terms of Service. And some sort of transparency statistics on these issues in aggregate.
· Observability : Auditable, but not public, frameworks for measuring ‘quality’ of results.
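The “Transparency” item above lends itself to a simple sketch. Assuming, purely for illustration, that a model owner publishes a manifest file listing the domains its model was trained on ( the file name and format below are invented ), the query could be as simple as :

```python
# Hypothetical training-set membership check; the manifest file,
# its name and its format are assumptions for illustration only.
import json
from urllib.parse import urlparse

def load_training_domains(manifest_path: str) -> set[str]:
    # Load a (hypothetical) manifest such as:
    # {"domains": ["example.com", "wikipedia.org"]}
    with open(manifest_path) as f:
        return set(json.load(f)["domains"])

def in_training_set(url: str, domains: set[str]) -> bool:
    # True if the URL's host matches, or is a subdomain of,
    # any domain listed in the manifest.
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in domains)

# Example usage, assuming the manifest file described above exists:
# domains = load_training_domains("model_training_domains.json")
# print(in_training_set("https://blog.example.com/post", domains))
```

As the piece itself notes, a real ‘robots.txt for AI’ would need far more regulatory thinking than this; the sketch only shows that the basic lookup is technically trivial.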
====================================================
‘Simply benefit China’ : ex-Google CEO Eric Schmidt rejects AI research pause ……… LiveMint / 08 Apr 2023
Extract :
The former Google CEO admitted there were legitimate concerns about the speed at which research into artificial intelligence was being done. Schmidt, however, wants tech companies to come together and set standards or ‘guardrails’ for their AI systems.
He says that companies should not release advanced versions without mitigating some of the negative effects that their systems could bring.
This is not the first time Schmidt has warned about the dangers of artificial intelligence. In an interview, he had explained how the new artificial intelligence systems powered by large language models could be used negatively to create bioweapons, launch cyberattacks, and even manipulate politics.
India among countries most likely to get affected by lack of international AI regulation : Max Tegmark ……… Economic Times / 07 April 2023 / Max Tegmark, President, Future of Life Institute / max@futureoflife.org / tegmark@mit.edu / https://space.mit.edu/home/tegmark/
Extract :
And now there are a lot of indications that it's happening around now. And, unfortunately, society's response to this in terms of policy and regulation, and AI safety research, has not accelerated at all the way that the technological progress has. And that's why so many of the people building AI feel that we need to pause some of the most dangerous AI to give society a chance to catch up and make sure we do this safely, not recklessly.
Well, first of all, a lot of people assume it's impossible to ever pause any technology that you can make money off of because of market forces, but that's just not true.
You could make a ton of money on human cloning and editing the human germline to create some super race or whatever. Why are we not doing it? Because biologists thought hard about that and decided it wasn't worth the risk to create something that would be so hard to control. And people now generally have the point of view that that's a completely reckless thing to do. And we're just saying let's do the same thing with the riskiest AI systems.
Make sure that before they get rolled out, you have established safety standards that they have to meet. And the reason that this is so scary is because we are very rapidly now in the process of building ever more powerful digital minds that we don't understand and can't control.
Yes, I don't want to call out any particular company and the letter doesn't either, but there are several companies that are racing ahead full steam with this. And OpenAI is one of them. And you know, it's really interesting, this is not a letter that's against these companies. It's rather a letter against this crazy race to the bottom that they find themselves trapped in. Because I talk a lot with people in these companies, including top leaders, and the people building this generally are very idealistic.
They went into AI because they want to cure cancer and do all sorts of things that can help humanity flourish. But no company can pause alone because they would just have their lunch eaten by the competition. It's the worst kind of arms race to the bottom.
Listen to Sam Altman ( CEO of OpenAI ); look at what he's been writing recently. He was asked recently about the worst-case outcome, and he said the worst-case outcome is lights out for everybody. I find it quite bizarre when other people try to downplay the risks that the very leaders of the company doing this are themselves acknowledging.
You gotta start somewhere. Right now we're facing this runaway freight train, out of control, and the first thing we need to do is stop it a little bit to give society a chance to catch up with regulation and establish clear safety standards and so on.
For example, you can't just go build a nuclear reactor in Connaught Place in New Delhi without meeting established safety requirements.
That's exactly right. That's what's beginning to happen. The European Union is in the vanguard; they're the ones who've gotten the farthest on this. But I think there's a lot of appetite now from politicians around the world to catch up on this.
And the good news I have for any policymakers in India listening to this, is that you'll find a lot of people in the AI industry, in the tech industry who are very eager to help the government to figure out what are good policies.
I think it's also really in the national interest of India to push for this, because India is one of the countries that is most likely to get affected by a lack of international regulation.
India has everything to gain from a little bit of a pause to level the playing field so that all the companies doing this are doing it safely.
AI chatbots are scarier than Kubrick’s movie ……… TOI / 09 April 2023
In a first, Punjab and Haryana HC uses ChatGPT for deciding upon bail plea ……….. The Print / 28 Mar 2023
AI may replace search engines. Is that good? ………… HT / 05 April 2023 / Ruchi Gupta / Future of India Foundation / contact@futureofindia.in
Elon Musk & Co : GPT-5 ( And Other Powerful AI ) Must Be Stopped ……………. 31 Mar 2023
Extract :
This AI race goes against the Asilomar AI Principles, which say that “ AI should be planned for and managed with commensurate care and resources “.
The sooner public standards and external audits are set on AI labs, the safer we can feel about new AI advancements. If AI advancements are made responsibly, we all will enjoy a very long AI summer. If not, we all ( not only the decision-makers ) will have to face the consequences.
Can you sue an algorithm for defamation? ………. Medium / 08 April 2023
Extract :
I know from my own experience that these kinds of errors are to be expected : ChatGPT reports that I have been married to five women, none of whom were actually my wife; furthermore, it even provided convoluted stories about multiple children, all phrased with the utmost conviction.
Which prompts the question : can ChatGPT be sued for libel or defamation when it asserts with apparent certainty something that is false and potentially damaging to a person’s reputation?
Accusing a politician of corruption or an academic of sexual harassment is no small matter, and could have serious consequences if taken at face value and the content is then used to train another algorithm.