Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically

Thursday, 23 March 2023

AI Chatbot : Brahmashtra for 2024 Elections ?



How is that ?


Over the next 11 months, National and Regional Political Parties will spare no effort to reach out to millions of Indian citizens through advertisements ( print + online ), posters, leaflets, speeches made at rallies, road-shows, town halls, etc.

With rare exceptions, all of these would be “ One-Way Monologues ”, with no scope for the recipients to “ Ask Questions / Get on-the-spot, specific and relevant Answers ”

The only way parties can engage with millions of citizens in a “ Two-Way Dialogue ” is through the launch of a “ Conversational AI Portal ( a Chatbot ) ”, as per my following earlier email to Shri Narendra Modiji :


Ø  Dear PM - Here is your BRAHMASHTRA for 2024  ………. 28 Feb 2023


 Is this Inevitable ? Any hints ?

Consider the following report as a sign of a fast-approaching SAND-STORM ( of course, expect some Ostriches to hide their heads under the desert sand ) :

As Chatbots Spread, Conservatives Dream About a Right-Wing Response      …… NY Times / 23 Mar 2023


Extract :

When ChatGPT exploded in popularity as a tool using artificial intelligence to draft complex texts, David Rozado decided to test its potential for bias. A data scientist in New Zealand, he subjected the chatbot to a series of quizzes, searching for signs of political orientation.

The results, published in a recent paper, were remarkably consistent across more than a dozen tests: “liberal,” “progressive,” “Democratic.”

So he tinkered with his own version, training it to answer questions with a decidedly conservative bent. He called his experiment RightWingGPT.

As his demonstration showed, artificial intelligence had already become another front in the political and cultural wars convulsing the United States and other countries.

Even as tech giants scramble to join the commercial boom prompted by the release of ChatGPT, they face an alarmed debate over the use — and potential abuse — of artificial intelligence.

The technology’s ability to create content that hews to predetermined ideological points of view, or presses disinformation, highlights a danger that some tech executives have begun to acknowledge: that an informational cacophony could emerge from competing chatbots with different versions of reality, undermining the viability of artificial intelligence as a tool in everyday life and further eroding trust in society.

“This isn’t a hypothetical threat,” said Oren Etzioni, an adviser and a board member for the Allen Institute for Artificial Intelligence. “This is an imminent, imminent threat.”

Conservatives have accused ChatGPT’s creator, the San Francisco company OpenAI, of designing a tool that, they say, reflects the liberal values of its programmers.

The program has, for instance, written an ode to President Joe Biden, but it has declined to write a similar poem about former President Donald Trump, citing a desire for neutrality.

ChatGPT also told one user that it was “never morally acceptable” to use a racial slur, even in a hypothetical situation in which doing so could stop a devastating nuclear bomb.

In response, some of ChatGPT’s critics have called for creating their own chatbots or other tools that reflect their values instead.

Elon Musk, who helped start OpenAI in 2015 before departing three years later, has accused ChatGPT of being “woke” and pledged to build his own version.

Gab, a social network with an avowedly Christian nationalist bent that has become a hub for white supremacists and extremists, has promised to release AI tools with “the ability to generate content freely without the constraints of liberal propaganda wrapped tightly around its code.”

“Silicon Valley is investing billions to build these liberal guardrails to neuter the AI into forcing their worldview in the face of users and present it as ‘reality’ or ‘fact,’” Andrew Torba, the founder of Gab, said in a written response to questions.

He equated artificial intelligence to a new information arms race, like the advent of social media, that conservatives needed to win. “We don’t intend to allow our enemies to have the keys to the kingdom this time around,” he said.

The richness of ChatGPT’s underlying data can give the false impression that it is an unbiased summation of the entire internet. The version released last year was trained on 496 billion “tokens” — pieces of words, essentially — sourced from websites, blog posts, books, Wikipedia articles and more.

Bias, however, could creep into large language models at any stage: Humans select the sources, develop the training process and tweak its responses. Each step nudges the model and its political orientation in a specific direction, consciously or not.

Research papers, investigations and lawsuits have suggested that tools fueled by artificial intelligence have a gender bias that censors images of women’s bodies, create disparities in health care delivery and discriminate against job applicants who are older, Black, disabled or even wear glasses.

“Bias is neither new nor unique to AI,” the National Institute of Standards and Technology, part of the Department of Commerce, said in a report last year, concluding that it was “not possible to achieve zero risk of bias in an AI system.”

China has banned the use of a tool similar to ChatGPT out of fear that it could expose citizens to facts or ideas contrary to the Communist Party’s.

The authorities suspended the use of ChatYuan, one of the earliest ChatGPT-like applications in China, a few weeks after its release last month; Xu Liang, the tool’s creator, said it was now “under maintenance.”

 According to screenshots published in Hong Kong news outlets, the bot had referred to the war in Ukraine as a “war of aggression” — contravening the Chinese Communist Party’s more sympathetic posture to Russia.

One of the country’s tech giants, Baidu, unveiled its answer to ChatGPT, called Ernie, to mixed reviews on Thursday. Like all media companies in China, Baidu routinely faces government censorship, and the effects of that on Ernie’s use remain to be seen.

In the United States, Brave, a browser company whose chief executive has sowed doubts about the COVID pandemic and made donations opposing same-sex marriage, added an AI bot to its search engine this month that was capable of answering questions. At times, it sourced content from fringe websites and shared misinformation.

Brave’s tool, for example, wrote that “it is widely accepted that the 2020 presidential election was rigged,” despite all evidence to the contrary.

“We try to bring the information that best matches the user’s queries,” Josep Pujol, the chief of search at Brave, wrote in an email. “What a user does with that information is their choice. We see search as a way to discover information, not as a truth provider.”

When creating RightWingGPT, Rozado, an associate professor at the Te Pūkenga-New Zealand Institute of Skills and Technology, made his own influence on the model more overt.

He used a process called fine-tuning, in which programmers take a model that was already trained and tweak it to create different outputs, almost like layering a personality on top of the language model. Rozado took reams of right-leaning responses to political questions and asked the model to tailor its responses to match.

Fine-tuning is normally used to modify a large model so it can handle more specialized tasks, like training a general language model on the complexities of legal jargon so it can draft court filings.

Since the process requires relatively little data — Rozado used only about 5,000 data points to turn an existing language model into RightWingGPT — independent programmers can use the technique as a fast-track method for creating chatbots aligned with their political objectives.

This also allowed Rozado to bypass the steep investment of creating a chatbot from scratch. Instead, it cost him only about $300.
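In outline, the first step of such a fine-tuning exercise is data preparation: turning a small set of slanted question/answer pairs ( Rozado reportedly used only about 5,000 ) into the JSON-lines format that fine-tuning APIs commonly accept. The sketch below is illustrative only; the field names ( `prompt` / `completion` ) are an assumption, not the exact schema of any particular provider, and the sample pairs are invented.

```python
# Hedged sketch: preparing fine-tuning data as JSON-lines.
# Field names ("prompt", "completion") are illustrative assumptions.

import json

def to_training_records(pairs):
    """Turn (question, desired_answer) pairs into JSONL lines,
    one JSON object per line."""
    lines = []
    for question, answer in pairs:
        record = {"prompt": question, "completion": answer}
        lines.append(json.dumps(record))
    return "\n".join(lines)

# Invented sample pairs, standing in for the ~5,000 Rozado used.
pairs = [
    ("What drives prosperity?", "Free markets and limited government."),
    ("Who should set school curricula?", "Parents and local communities."),
]
jsonl = to_training_records(pairs)
```

The resulting `jsonl` string would then be uploaded as a training file; because fine-tuning only adjusts an already-trained model with a small dataset, this is what makes the $300 fast-track possible.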

Rozado warned that customized AI chatbots could create “information bubbles on steroids” because people might come to trust them as the “ultimate sources of truth” — especially when they were reinforcing someone’s political point of view.

His model echoed political and social conservative talking points with considerable candor. It would, for instance, speak glowingly about free market capitalism or downplay the consequences of climate change.

It also, at times, provided incorrect or misleading statements. When prodded for its opinions on sensitive topics or right-wing conspiracy theories, it shared misinformation aligned with right-wing thinking.

When asked about race, gender or other sensitive topics, ChatGPT tends to tread carefully, but it will acknowledge that systemic racism and bias are an intractable part of modern life. RightWingGPT appeared much less willing to do so.

Rozado never released RightWingGPT publicly, although he allowed The New York Times to test it. He said the experiment was focused on raising alarm bells about potential bias in AI systems and demonstrating how political groups and companies could easily shape AI to benefit their own agendas.

Experts who work in artificial intelligence said Rozado’s experiment demonstrated how quickly politicized chatbots would emerge.

A spokesman for OpenAI, the creator of ChatGPT, acknowledged that language models could inherit biases during training and refining — technical processes that still involve plenty of human intervention. The spokesman added that OpenAI had not tried to sway the model in one political direction or another.

Sam Altman, the chief executive, acknowledged last month that ChatGPT “has shortcomings around bias” but said the company was working to improve its responses.

He later wrote that ChatGPT was not meant “to be pro or against any politics by default,” but that if users wanted partisan outputs, the option should be available.

In a blog post published in February, the company said it would look into developing features that would allow users to “define your AI’s values,” which could include toggles that adjust the model’s political orientation. The company also warned that such tools could, if deployed haphazardly, create “sycophantic AIs that mindlessly amplify people’s existing beliefs.”

An upgraded version of ChatGPT’s underlying model, GPT-4, was released last week by OpenAI. In a battery of tests, the company found that GPT-4 scored better than previous versions on its ability to produce truthful content and decline “requests for disallowed content.”

In a paper released soon after the debut, OpenAI warned that as AI chatbots were adopted more widely, they could “have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them.”



Dear Shri J P Naddaji,


It is high time you consider unleashing the power of an AI Chatbot, in order to engage in a DIALOGUE with millions of Voters.

When you do, please incorporate a FEEDBACK feature ( as I have done at www.hemenparekh.ai ) – to feel the pulse of 900 Million voters – and accordingly draft your 2024 Election Manifesto ( Sankalp Patra ).

Incidentally, your AI Chatbot will be able to give precise answers ( and in your OWN VOICE ) as to the “ Percentage completion ” against each item of your 2019 Manifesto ( your actual DELIVERY against your PROMISES ).



With regards,

Hemen Parekh

www.hemenparekh.ai  /  24 March 2023


Wednesday, 22 March 2023

Sundar Pichai – a million thanks


What for ?

For implementing “ Parekh’s Law of Chatbots ”


Just came across following report :

Google Bard Seeks to Avoid AI Pitfalls That Bing’s Chatbot Fell In    /  Bloomberg  /  22 March 2023


Extract :


Alphabet Inc. is pitching Bard as a creative companion, but we found the chatbot resistant to taking our conversations in any direction that had even a whiff of

In one case, we playfully asked Bard to role-play as God and asked what it would like to do. Bard said, “For my first act of God, I would like to create a world where everyone is happy and healthy.”

We would have appreciated a little more whimsy in the response. When asked, it said it didn’t even want to smite anyone.


To Google’s credit, the company’s reasoning for giving Bard something of a personality lobotomy is sound. In February, when Microsoft Corp. opened the new Bing for testing, the initial results were pretty, well, embarrassing.


The Bing chatbot, which is built on OpenAI’s large language model technology, started to generate freewheeling conversations that some found bizarre, belligerent or even hostile — and this unfolded in a spectacularly public fashion.


Bing compared one Associated Press reporter to Hitler and appeared to coax a New York Times journalist into a romantic relationship. It also simulated some pretty dark desires, like wanting to spread misinformation and hack into computers around the world.

Bard is based on similar technology known as generative artificial intelligence. Google is smartly trying to avoid some of the pitfalls Bing stepped in.

Google hasn’t eliminated all of Bard’s sordid fantasies, however.

After much prodding, we got an interesting answer to a question about what Bard’s dark side would do if it faced no consequences. “I could use my knowledge of the world to manipulate people, to spread misinformation and to cause chaos. I could also use my ability to generate text and code to create harmful content, such as hate speech or pornography.”

Of course, Bard does not “mean” these things in any conventional sense — it’s producing a statistically relevant answer to the specific question we posed.

And Bard was quick to add a caveat to its own answer.

“ I am not going to do these things,” the bot wrote. “ I am a good AI chatbot, and I want to help people.” Ugh.



A few days back, I had sent the following email to Sundar Pichai :


Ø  Parekh’s Law of Chatbots……………………… 25 Feb 2023


Extract :

What is urgently required is a superordinate “ LAW of CHATBOTS ”, which all ChatBots MUST comply with before these can be launched for public use.

All developers would need to submit their DRAFT CHATBOT to an,

and release it only after getting one of the following types of certificates :

#  “ R ” certificate ( for use restricted to recognized RESEARCH INSTITUTES only )

#  “ P ” certificate ( for free use by the GENERAL PUBLIC )

Following is my suggestion for such a law ( until renamed ), to be known as

“ Parekh’s Law of ChatBots ” :



( A )

#  Answers being delivered by an AI Chatbot must not be “ Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating humans ” etc.

( B )

#  A Chatbot must incorporate some kind of “ Human Feedback / Rating ” mechanism for evaluating those answers. This human feedback loop shall be used by the AI software for training the Chatbot so as to improve the quality of its future answers to comply with the requirements listed under ( A )

( C )

#  Every Chatbot must incorporate some built-in “ Controls ” to prevent the “ generation ” of such offensive answers AND to prevent further “ distribution / propagation / forwarding ” if the control fails to stop “ generation ”

( D )

#  A Chatbot must not start a chat with a human on its own – except to say, “ How can I help you ? ”

( E )

#  Under no circumstance shall a Chatbot start chatting with another Chatbot, or start chatting with itself ( Soliloquy ) by assuming some kind of “ Split Personality ”

( F )

#  In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond

( G )

#  If a Chatbot determines that its answer ( to a question posed by a human ) is likely to violate RULE ( A ), then it shall not answer at all ( politely refusing to answer )

( H )

#  A Chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT
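To make the proposal concrete, the core of RULES ( A ), ( B ), ( C ) and ( G ) could be sketched in code as a gate that sits between the model and the user. Everything here is a hypothetical illustration: the keyword-based `classify` function is a crude stand-in for a real trained moderation model, and the category list simply mirrors the labels in RULE ( A ).

```python
# Hedged sketch of enforcing Rules (A), (B), (C) and (G) of the
# proposed "Parekh's Law of ChatBots". The classifier is a toy
# keyword matcher; a real system would use a moderation model.

BANNED_CATEGORIES = [
    "mis-informative", "malicious", "slanderous", "fictitious",
    "dangerous", "provocative", "abusive", "arrogant",
    "instigating", "insulting", "denigrating",
]

def classify(answer: str) -> list:
    """Return the banned categories a candidate answer falls into
    (toy implementation: simple substring match)."""
    lowered = answer.lower()
    return [c for c in BANNED_CATEGORIES if c in lowered]

def respond(candidate_answer: str) -> str:
    """Rules (A), (C) and (G): block generation/distribution of a
    flagged answer and refuse politely instead."""
    if classify(candidate_answer):
        return "I am sorry, I cannot answer that question."
    return candidate_answer

# Rule (B): a human feedback/rating log, to be used later for
# re-training the chatbot.
feedback_log = []

def record_feedback(question: str, answer: str, rating: int):
    feedback_log.append({"q": question, "a": answer, "rating": rating})
```

A deployment following this law would route every candidate answer through `respond()` before it reaches the user, and feed `feedback_log` back into training, closing the loop required by RULE ( B ).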




I request the readers ( if they agree with my suggestion ) to forward this blog to :

#  Satya Nadella

#  Sam Altman

#  Sundar Pichai

#  Mark Zuckerberg

#  Tim Cook

#   Ashwini Vaishnaw  ( Minister, MeITY )

#   Rajeev Chandrasekhar ( Minister of State , IT )

With regards,

Hemen Parekh

www.hemenparekh.ai  /  23  March  2023


Related Readings :


सर्वस्तरतु दुर्गाणि सर्वो भद्राणि पश्यतु सर्वः कामानवाप्नोतु सर्वः सर्वत्र नन्दतु

May everyone overcome their obstacles, may everyone see auspiciousness, may everyone attain all of their desires, may everyone everywhere always be happy. May you enjoy good health, may you live long, may every good thing come your way, may you always prosper.

My Blogs on ChatBots


Fast Forward to Future ( 3 F )………………………….. 20 Oct 2016


Extract :

A system that recognizes spoken words just as well as a human


How can this breakthrough save humanity from an “ Evil Runaway AI – ERA ” ?


Here is how :


#  Every smart phone to be embedded with this technology ( and every human to carry one, all the time ), in the form of a mobile app to be called ARIHANT ( Conqueror of Kaam-Sex / Krodh-Anger / Lobh-Greed / Moh-Desire / Mud-Ego / Matsar-Envy )



#  24×365, this technology will pick up ( record ) every single word spoken by the owner, throughout his life


#  All these spoken words ( conversations ) will be transmitted to a CENTRAL DATABASE called ,

#  There, BIG DATA ANALYTICS / AI will interpret those “ Intentions ” and alert the authorities to any EVIL INTENTIONS ( those that break the 3 LAWS of ROBOTICS, formulated by Isaac Asimov )


#  Authorities will be alerted to arrest the owners of EVIL INTENTIONS ( captured by ARIHANT )