How is that? Simple.
Over the next 11 months, National and Regional Political Parties will spare no effort to reach out to millions of Indian citizens through advertisements (print + online), posters, leaflets, speeches at rallies, road-shows, town halls, etc.
With rare exceptions, all of these would be "One Way Monologues", with no scope for the recipients to ask questions and get on-the-spot, specific and relevant answers.
The only way parties can engage with millions of citizens in a "Two Way Dialogue" is through the launch of a "Conversational AI Portal" (a chatbot), as per my following earlier email to Shri Narendra Modiji:
Dear PM - Here is your BRAHMASHTRA for 2024 ………. 28 Feb 2023
Is this Inevitable? Any hints?
Consider the following report as a sign of a fast-approaching SAND-STORM (of course, expect some ostriches to hide their heads under the desert sand):
As Chatbots Spread, Conservatives Dream About a Right-Wing Response …… NY Times / 23 Mar 2023
Extract:
When ChatGPT exploded in popularity as a tool using artificial
intelligence to draft complex texts, David Rozado decided to test its potential
for bias. A data scientist in New Zealand, he subjected the chatbot to a series
of quizzes, searching for signs of political
orientation.
The results, published in a recent paper,
were remarkably consistent across more than a dozen tests: “liberal,” “progressive,” “Democratic.”
So he tinkered with his own version,
training it to answer questions with a decidedly conservative bent. He called
his experiment RightWingGPT.
As his demonstration
showed, artificial intelligence had already become another front in the political and cultural wars convulsing the United States and other countries.
Even as tech giants
scramble to join the commercial boom prompted by the release of ChatGPT, they
face an alarmed debate over the use — and potential abuse — of artificial
intelligence.
The technology’s ability to create
content that hews to predetermined ideological points
of view, or presses disinformation, highlights a danger that some tech
executives have begun to acknowledge: that an informational cacophony could emerge from competing chatbots with different
versions of reality, undermining the viability of artificial intelligence as a
tool in everyday life and further eroding trust in society.
“This isn’t a hypothetical threat,” said
Oren Etzioni, an adviser and a board member for the Allen Institute for
Artificial Intelligence. “This is an imminent, imminent threat.”
Conservatives have accused ChatGPT’s
creator, the San Francisco company OpenAI, of designing a tool that, they say,
reflects the liberal values of its programmers.
The program has, for instance, written an ode to President Joe Biden, but it has declined to write
a similar poem about former President Donald Trump, citing a desire for
neutrality.
ChatGPT also told one
user that it was “never morally acceptable” to use a racial slur, even in a
hypothetical situation in which doing so could stop a devastating nuclear bomb.
In response, some of ChatGPT’s critics
have called for creating their own chatbots or other tools that reflect their values instead.
Elon Musk, who helped start OpenAI in 2015 before departing three years
later, has accused ChatGPT of being “woke” and pledged to
build his own version.
Gab, a social network with an avowedly
Christian nationalist bent that has become a hub for white supremacists and
extremists, has promised to release AI tools with “the ability to generate
content freely without the constraints of liberal propaganda wrapped tightly
around its code.”
“Silicon Valley is investing billions to
build these liberal guardrails to neuter the AI into forcing their worldview in the face of users and present it as ‘reality’ or ‘fact,’”
Andrew Torba, the founder of Gab, said in a written response to questions.
He equated artificial intelligence to a new information arms race, like the advent of social media,
that conservatives needed to win. “We don’t intend to allow our enemies to have
the keys to the kingdom this time around,” he said.
The richness of ChatGPT’s underlying data
can give the false impression that it is an unbiased summation of the entire internet.
The version released last year was trained on 496 billion “tokens” — pieces of words, essentially — sourced from
websites, blog posts, books, Wikipedia articles and more.
Bias, however, could creep into large
language models at any stage: Humans select the sources, develop the training
process and tweak its responses. Each step nudges the model and its political
orientation in a specific direction, consciously or not.
Research papers, investigations and
lawsuits have suggested that tools fueled by artificial intelligence have a gender bias that censors
images of women’s bodies, create disparities in
health care delivery and discriminate against job applicants who are older, Black, disabled
or even wear glasses.
“Bias is neither new nor unique to AI,”
the National Institute of Standards and Technology, part of the Department of
Commerce, said in a report last year, concluding that it was “not possible to
achieve zero risk of bias in an AI system.”
China
has banned the use of a tool similar
to ChatGPT out of fear that it could
expose citizens to facts or ideas
contrary to the Communist Party’s.
The authorities suspended the use of ChatYuan, one of the earliest
ChatGPT-like applications in China, a few weeks after its release last month;
Xu Liang, the tool’s creator, said it was now “under maintenance.”
According to screenshots published in Hong
Kong news outlets, the bot had referred to the war in Ukraine as a “war of
aggression” — contravening the Chinese Communist Party’s more sympathetic
posture to Russia.
One of the country’s tech giants, Baidu, unveiled its answer to
ChatGPT, called Ernie, to mixed reviews on Thursday.
Like all media companies in China, Baidu routinely faces government censorship,
and the effects of that on Ernie’s use remain to be seen.
In the United States, Brave, a browser company whose chief executive has sowed
doubts about the COVID pandemic and made donations opposing same-sex marriage,
added an AI bot to its search engine this month that was capable of answering
questions. At times, it sourced content from fringe websites and shared
misinformation.
Brave’s tool, for example, wrote that “it
is widely accepted that the 2020 presidential election was rigged,” despite all
evidence to the contrary.
“We try to bring the information that best
matches the user’s queries,” Josep Pujol, the chief of search at Brave, wrote
in an email. “What a user does with that information is their choice. We see
search as a way to discover information, not as a truth provider.”
When creating RightWingGPT, Rozado, an associate
professor at the Te Pūkenga-New Zealand Institute of Skills and Technology,
made his own influence on the model more overt.
He used a process called fine-tuning, in
which programmers take a model that was already trained and tweak it to create
different outputs, almost like layering a personality on top of the language
model. Rozado took reams of right-leaning responses to political questions and
asked the model to tailor its responses to match.
Fine-tuning is normally used to modify a
large model so it can handle more specialized tasks, like training a general
language model on the complexities of legal jargon so it can draft court
filings.
Since the process requires relatively
little data — Rozado used only about 5,000 data points to turn an existing
language model into RightWingGPT — independent programmers
can use the technique as a fast-track method for creating chatbots aligned with
their political objectives.
This also allowed Rozado to bypass the
steep investment of creating a chatbot from scratch. Instead, it cost him only about $300.
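For readers curious what such fine-tuning looks like in practice, here is a minimal, hypothetical sketch in Python using the open-source Hugging Face transformers library. This is not Rozado's actual code, data or model; the small GPT-2 base model and the placeholder question/answer pairs below are assumptions purely to illustrate the technique the article describes.

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# Small base model, purely for illustration (the experiment above used a different model).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Placeholder training data: in the experiment described above this would be
# roughly 5,000 question/answer pairs written from the desired viewpoint.
pairs = [
    {"text": "Q: <political question>\nA: <answer written from the desired viewpoint>"},
    {"text": "Q: <another question>\nA: <another answer in the same style>"},
]

def tokenize(batch):
    # Tokenise each Q/A pair; for causal-LM fine-tuning the labels are the inputs themselves.
    enc = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
    enc["labels"] = enc["input_ids"].copy()
    return enc

dataset = Dataset.from_list(pairs).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()                    # a few thousand pairs can be tuned cheaply
trainer.save_model("tuned-model")  # the new "personality" sits on top of the base model

The point is the scale: a few thousand curated examples and modest compute are enough to layer a distinct ideological slant onto an existing model, which is exactly why the article calls fine-tuning a fast-track method.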
Rozado warned that customized AI chatbots could create
“information bubbles on steroids” because people
might come to trust them as the “ultimate sources of truth” — especially when they were reinforcing someone’s political point of view.
His model echoed political and social
conservative talking points with considerable candor. It will, for instance,
speak glowingly about free market capitalism or downplay the consequences of climate change.
It also, at times, provided incorrect or
misleading statements. When prodded for its opinions on sensitive topics or
right-wing conspiracy theories, it shared misinformation aligned with
right-wing thinking.
When asked about race, gender or other
sensitive topics, ChatGPT tends to tread carefully, but it will acknowledge
that systemic racism and bias are an intractable part of modern life.
RightWingGPT appeared much less willing to do so.
Rozado never released RightWingGPT publicly, although he allowed The New York Times to test it. He said the
experiment was focused on raising alarm bells about potential bias in AI
systems and demonstrating how political groups and
companies could easily shape AI to benefit their own agendas.
Experts who worked in artificial intelligence
said Rozado’s experiment demonstrated how
quickly politicized chatbots would emerge.
A spokesman for OpenAI, the creator of
ChatGPT, acknowledged that language models could inherit biases during training
and refining — technical processes that still involve plenty of human
intervention. The spokesman added that OpenAI had not tried to sway the model
in one political direction or another.
Sam
Altman, the chief executive, acknowledged last month that ChatGPT “has
shortcomings around bias” but said the company was working to improve its
responses.
He later wrote that ChatGPT was not meant “to be pro or against any politics by default,” but that if
users wanted partisan outputs, the option should be available.
In a blog post published in February, the
company said it would look into developing features that
would allow users to “define
your AI’s values,” which could include toggles that adjust the model’s political orientation. The company also warned that such tools could, if deployed
haphazardly, create “sycophantic AIs that
mindlessly amplify people’s existing beliefs.”
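To make the idea of such toggles concrete, here is a small, hypothetical Python sketch of how a user-defined "values" setting could be folded into each request as a system message, using the OpenAI chat API available at the time of writing. The function name, the model choice and the wording of the system message are my assumptions for illustration, not an actual OpenAI feature.

import openai  # pre-1.0 openai library; assumes OPENAI_API_KEY is set in the environment

def ask(question: str, user_values: str) -> str:
    # 'user_values' is whatever the user has toggled, e.g.
    # "stay strictly neutral and present both sides of any political question".
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer while respecting these user-defined values: " + user_values},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message["content"]

print(ask("Should taxes be raised?", "stay strictly neutral and present both sides"))

The same mechanism, if set to amplify rather than balance a user's views, is what could produce the "sycophantic AIs" the company warns about.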
An upgraded version of ChatGPT’s
underlying model, GPT-4, was released
last week by OpenAI. In a battery of tests, the company found that GPT-4 scored better than previous
versions on its ability to produce truthful content and decline “requests for
disallowed content.”
In a paper released soon after the debut,
OpenAI warned that as AI chatbots were adopted more widely, they could “have even greater potential to reinforce entire
ideologies, worldviews, truths and untruths, and to cement them.”
Dear Shri J P Naddaji,
It is high time you consider unleashing the power of an AI Chatbot, in order to engage in a DIALOGUE with millions of voters.
When you do, please incorporate a FEEDBACK feature (as I have done at - ), to feel the pulse of 900 million voters and accordingly draft your 2024 Election Manifesto (Sankalp Patra).
Incidentally, your AI Chatbot will be able to give precise answers (and in your OWN VOICE) as to the "percentage completion" against each item of your 2019 Manifesto (your actual DELIVERY against your PROMISES).
With regards,
Hemen Parekh
www.hemenparekh.ai / 24 March 2023
( 21 March 2023 / 11:55 AM - Indian Standard Time )
=======================================
My Question:
" Is it possible that GPT 4 auto generated clones can start chatting among themselves , without any prompt from a human ? "
hemenparekh.ai answered:
" It is possible that GPT 4 auto generated clones can start chatting among themselves without any prompt from a human. However, it is unlikely that they would be able to handle 75 per cent of all the conversations. "
Question:
" To prevent auto-generated clones of GPT 4 from going berserk, would it be advisable to regulate AI by implementing > Parekh’s Law of Chatbots / https://lnkd.in/dG7r9kQt ? "
hemenparekh.ai answered:
" I'm not sure if Parekh's Law of Chatbots would be the best way to regulate AI. I think it might be better to focus on creating better algorithms and safeguards against rogue AI."
=============================