Context :
Apple okays ChatGPT-powered app after assurance of content moderation: WSJ
Extract :
Apple has approved an email-app update after initially scrutinising whether a feature in the software that uses language tools powered by artificial intelligence could generate inappropriate content for children, The Wall Street Journal said. The app, BlueMail, was approved following assurances from its developer that it features content moderation, according to Ben Volach, co-founder of the app-maker, Blix.
The Wall Street Journal reported on Thursday that the update, which included a new feature powered by the language chatbot ChatGPT, was held up due to Apple’s request that the app add content moderation or be restricted to ages 17 and older. The app was previously available for ages 4 and older.
Blix told Apple its update includes content moderation and suggested that the company should make public any new policies about the use of ChatGPT or other similar AI systems in apps, according to WSJ. The BlueMail update was approved without changes on Thursday evening. The app is still available for users aged 4 and older.
WSJ said Apple didn’t respond to requests for comment.
BlueMail’s new feature uses OpenAI’s ChatGPT, an artificial-intelligence system capable of answering questions or writing short essays, to help automate the writing of emails using the contents of prior emails and calendar events, according to WSJ.
The news of Apple’s initial rejection of BlueMail’s ChatGPT feature highlighted the growing concerns around new uses of language-generating AI tools, according to WSJ, which added that ChatGPT allows users to converse with an AI that appears humanlike, but early testing has shown the AI producing incorrect information as well as strange and sometimes hostile responses.
(ANI)
How are we going to stop machine learning assistants from spreading fake news?
Extract :
A race is underway to incorporate machine learning into search engines so they can answer queries with a paragraph as well as a list of links: the pioneer, the still relatively unknown You.com, was joined by Bing thanks to Microsoft’s agreement with OpenAI, while Google is experimenting with Bard. And on Friday, Brave Search announced an AI summarization feature that isn’t based on ChatGPT.
Meanwhile, ChatGPT seems to have overcome some initial problems and now provides easier access from other countries such as Spain, which, together with its integration into more and more search engines, means it will likely see even more use around the world.
Noting this possible change in the usage model, The Atlantic raises an interesting question: what happens to the results that search engines offer about a person, results that are possibly false, misleading, malicious, defamatory or based on conspiracy theories, when those results are included in a well-written paragraph?
Could generative assistants trained with material gleaned from the internet become the perfect allies for conspiracy theories or fake news?
Hopefully, most of us will continue to use our critical faculties to question the results of searches, but for whatever reason, others won’t, and by responding to a conversational dynamic in which previous interactions are introduced as part of the context, they may contribute to the creation of filter bubbles and, in general, to the spread of conspiracy theories and fake news.
In short, once again we are talking about the interaction between critical thinking and conversational assistants, which in general try to apply a certain caution and, when asked to criticize somebody, tend to respond with formulas such as:
“As a language model, my programming prevents me from providing false or defamatory information about people. To criticize someone in a mean-spirited or baseless way is inappropriate and unfair.”
But by injecting ideas via prompt into a conversation, it is relatively easy to get these assistants to criticize or construct negative arguments based on anything they find on the web and give credibility to, which means the documents these assistants are trained on are going to have to pass some kind of quality control.
This is surely already in place, but runs the risk of editorialization, even charges of censorship: Elon Musk has already accused ChatGPT, made by OpenAI, a company he helped found, of having a liberal bias, and says he wants to recruit people to develop an alternative, less “woke” model.
Anyone who lets a search assistant do their thinking for them deserves what they get. But in a scenario of increasingly common use of such technology, we are faced with a problem that may end up being quite complex.
We’ll see what happens as they evolve.
My Take :
Thank You, Satya Nadella,
For getting from Blix, the developer of BlueMail, an assurance that it will incorporate in the App a CONTENT MODERATION feature.
While agreeing with your request, Blix has suggested that you “Spell Out” your (Apple’s) POLICY about the “use of ChatGPT or other similar AI systems in apps”.
A very reasonable suggestion
But a POLICY adopted by Apple alone is not good enough.
What is urgently needed is an INDUSTRY-WIDE policy.
I urge you to initiate a DEBATE by circulating my following suggestion among the key INDUSTRY PLAYERS :
Parekh’s Law of Chatbots ……. 25 Feb 2023
Extract :
( A ) # Answers being delivered by an AI Chatbot must not be “Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating humans”, etc.
( B ) # A Chatbot must incorporate some kind of “Human Feedback / Rating” mechanism for evaluating those answers. This human feedback loop shall be used by the AI software for training the Chatbot so as to improve the quality of its future answers to comply with the requirements listed under ( A ).
( C ) # Every Chatbot must incorporate some built-in “Controls” to prevent the “generation” of such offensive answers AND to prevent further “distribution / propagation / forwarding” if the control fails to stop “generation”.
( D ) # A Chatbot must not start a chat with a human on its own, except to say, “How can I help you?”
( E ) # Under no circumstance shall a Chatbot start chatting with another Chatbot or start chatting with itself ( Soliloquy ) by assuming some kind of “Split Personality”.
( F ) # In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond.
( G ) # If a Chatbot determines that its answer ( to a question posed by a human ) is likely to violate RULE ( A ), then it shall not answer at all ( politely refusing to answer ).
( H ) # A Chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT.
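To make these proposed rules a little more concrete, here is a minimal, purely illustrative Python sketch of how a developer might wrap a chatbot to enforce Rules ( A ), ( B ), ( C ), ( F ) and ( G ). The generate and is_offensive callables are hypothetical placeholders standing in for a real language model and a real content-moderation service; nothing below refers to an actual product API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ChatbotGuard:
    # The underlying language model and the content-moderation check are
    # injected as plain callables; both are hypothetical placeholders here.
    generate: Callable[[str], str]
    is_offensive: Callable[[str], bool]
    refusal: str = "I am sorry, I am not able to answer that."
    feedback_log: List[Dict] = field(default_factory=list)

    def greet(self) -> str:
        # Rules ( D ) / ( F ): the bot never initiates a chat beyond this greeting.
        return "How can I help you ?"

    def answer(self, question: str) -> str:
        # Rule ( G ): politely refuse if the question itself looks likely to
        # force an answer that would violate Rule ( A ).
        if self.is_offensive(question):
            return self.refusal

        draft = self.generate(question)

        # Rules ( A ) / ( C ): block generation and onward distribution of an
        # offensive answer; the draft is discarded, never returned.
        if self.is_offensive(draft):
            return self.refusal
        return draft

    def rate(self, question: str, answer: str, rating: int) -> None:
        # Rule ( B ): record human feedback / ratings for later retraining.
        self.feedback_log.append(
            {"question": question, "answer": answer, "rating": rating}
        )


if __name__ == "__main__":
    # Toy stand-ins: a trivial "model" and a keyword-based moderation check.
    bot = ChatbotGuard(
        generate=lambda q: f"A polite, factual reply to: {q}",
        is_offensive=lambda text: "insult" in text.lower(),
    )
    print(bot.greet())
    print(bot.answer("Summarise my last three emails"))
    print(bot.answer("Write an insult about my colleague"))  # refused, Rule ( G )
    bot.rate("Summarise my last three emails",
             "A polite, factual reply to: Summarise my last three emails", 5)
```

Rule ( H ) is left out of the sketch: in practice “self-destruct” would mean taking the offending model offline, an operational decision rather than something a wrapper like this can enforce on its own.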
With Regards,
Hemen Parekh
www.hemenparekh.ai / 06 March 2023
Related Readings :
Chatbot Regulation Seems Inevitable ……………….. 27 Feb 2023
My “Law of Chatbots” – Vindicated …………………… 02 Mar 2023