Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically

Saturday 25 March 2023

Shaking up an Industry

 


How AI 'revolution' is shaking up journalism            /  The News  / 19 March 2023

Extract :

Journalists had fun last year asking the shiny new artificial intelligence (AI) chatbot ChatGPT to write their columns, most concluding that the bot was not good enough to take their jobs. Yet.

But many commentators believe journalism is on the cusp of a revolution where mastery of algorithms and AI tools that generate content will be a key battleground.

The technology news site CNET perhaps heralded the way forward when it quietly deployed an AI program last year to write some of its listicles.

It was later forced to issue several corrections after another news site noticed that the bot had made mistakes, some of them serious.

But CNET's parent company later announced job cuts that included editorial staff — though executives denied AI was behind the layoffs.

The German publishing behemoth Axel Springer, owner of Politico and German tabloid Bild among other titles, has been less coy.

"[AI] has the potential to make independent journalism better than it ever was — or simply replace it," the group's boss Mathias Doepfner told staff last month.

Hailing bots like ChatGPT as a "revolution" for the industry, he announced a restructuring that would see "significant reductions" in production and proofreading.

Both companies are pushing AI as a tool to support journalists and can point to recent developments in the industry.

Glorified word processor

For the past decade, media organisations have been increasingly using automation for routine work like searching for patterns in economic data or reporting on company results.

Outlets with an online presence have obsessed over "search engine optimisation", or SEO, which involves using keywords in a headline to get favoured by Google or Facebook algorithms and get a story seen by the most eyeballs.

And some have developed their own algorithms to see which stories play best with their audiences and allow them to better target content and advertising — the same tools that turned Google and Facebook into global juggernauts.

Alex Connock, an author of "Media Management and Artificial Intelligence", says that mastery of these AI tools will help decide which media companies survive and which ones fail in the coming years.

And the use of content creation tools will see some people lose their jobs, he said, but not in the realms of analytical or high-end reporting.

"In the specific case of the more mechanistic end of journalism — sports reports, financial results — I do think that AI tools are replacing, and likely increasingly to replace human delivery," he said.

Not all analysts agree on that point.

Mike Wooldridge of Oxford University reckons ChatGPT, for example, is more like a "glorified word processor" and journalists should not be worried.

"This technology will replace journalists in the same way that spreadsheets replaced mathematicians — in other words, I don't think it will," he told a recent event held by the Science Media Centre.

He nonetheless suggested that mundane tasks could be replaced — putting him on the same page as Connock.

Test the robots

French journalists Jean Rognetta and Maurice de Rambuteau are digging further into the question of how ready AI is to take over from journalists.

They publish a newsletter called "Qant" written and illustrated using AI tools.

Last month, they showed off a 250-page report written by AI detailing the main trends of the CES technology show in Las Vegas.

Rognetta said they wanted to "test the robots, to push them to the limit".

They quickly found the limit.

The AI struggled to identify the main trends at CES and could not produce a summary worthy of a journalist. It also pilfered wholesale from Wikipedia.

The authors found that they needed to intervene constantly to keep the process on track, so while the programs helped save some time, they were not yet fit to replace real journalists.

Journalists are "afflicted with the syndrome of the great technological replacement, but I don't believe in it", Rognetta said.

"The robots alone are just not capable of producing articles. There is still a part of journalistic work that cannot be delegated."

 

MY  TAKE :

Ø  Revenge of the AI ?  ………………………………………. 29  Sept  2016

 

Extract :

Hindustan Times ( 30 Sept 2016 ) carries the following news report :

Facebook, Amazon, Google, IBM and Microsoft on one AI platform

In a major boost to artificial intelligence (AI) research, five top-notch tech companies -- Facebook, Amazon, Google, IBM and Microsoft -- have joined hands to announce a historic partnership on AI and machine learning.

It means that these companies will discuss advancements and conduct research in AI and how to develop the best products and services powered by machine learning, TechCrunch reported on Thursday.

Initial financial help will come from these companies and, as other stakeholders join the group, the finances are expected to increase.

“We want to involve people impacted by AI as well,” Mustafa Suleyman, co-founder and head of applied AI at DeepMind, a subsidiary of Alphabet ( parent company of Google ), was quoted as saying.

According to the report, the organisational structure has been designed to allow non-corporate groups to have equal leadership side-by-side with large tech companies.

“The power of AI is in the enterprise sector. For society at large to get the benefits of AI, we first have to trust it,” Francesca Rossi, AI ethics researcher at IBM Research, told TechCrunch.

AI-powered bots will become the next interface, shaping our interactions with the applications and devices we rely on, and Microsoft’s latest solutions are set to change the way HP interacts with its customers and partners, Microsoft’s Indian-born CEO Satya Nadella said recently.

At Microsoft’s Worldwide Partner Conference in August, Nadella had said that AI-powered chatbots will “fundamentally revolutionise how computing is experienced by everybody.”

By " burying " this news on page 17, in 10 CC ( column centimetres ), it was as if the Hindustan Times Editor was saying :

" Ignore this - it is of little consequence ! "

Now, fast forward to the year 2026.

In Hindustan Times's office, you won't find :

*  Watchman / Receptionist / Reporters / Journalists / Composers / Graphic Designers / Editors / Operators etc

All will be replaced by AI Robots, embedded into Computers / Cameras / Printing Machines / Delivery Drones !

And those AI Robots would select / print news such as this, in large, bold headlines on the Front Page !

I only hope the AI of 2026 remains devoid of the human frailties of jealousy / anger / revenge !

 

----------------------------------------------------------------------------------------

With regards,

Hemen Parekh

www.hemenparekh.ai  /  26 Mar 2023

 

Related Readings :

7 Signs AI Is Going To Replace You (Especially Writers)   /   Medium  /  24 March 2023

AI and context: which jobs will it replace?    /  Medium  / 15 March 2023

 

 

 

 

Thursday 23 March 2023

AI Chatbot : Brahamashtra for 2024 Elections ?

 


 

How is that ?


Simple.

Over the next 11 months, National and Regional Political Parties will spare no effort to reach out to millions of Indian citizens through advertisements ( print + online ), posters, leaflets, and speeches made at rallies, road-shows, town halls, etc.

With rare exceptions, all of these would be " One-Way Monologues ", with no scope for the recipients to ask questions and get on-the-spot, specific, relevant answers.

The only way parties can engage millions of citizens in a " Two-Way Dialogue " is through the launch of a " Conversational AI Portal ( a Chatbot ) ", as per my following earlier email to Shri Narendra Modiji :

 

Ø  Dear PM - Here is your BRAHMASHTRA for 2024  ………. 28 Feb 2023

 

 Is this Inevitable ? Any hints ?


Consider the following report as a sign of a fast-approaching SAND-STORM ( of course, expect some Ostriches to hide their heads under the desert sand ) :

As Chatbots Spread, Conservatives Dream About a Right-Wing Response      …… NY Times / 23 Mar 2023

 

Extract :

When ChatGPT exploded in popularity as a tool using artificial intelligence to draft complex texts, David Rozado decided to test its potential for bias. A data scientist in New Zealand, he subjected the chatbot to a series of quizzes, searching for signs of political orientation.

The results, published in a recent paper, were remarkably consistent across more than a dozen tests: “liberal,” “progressive,” “Democratic.”

So he tinkered with his own version, training it to answer questions with a decidedly conservative bent. He called his experiment RightWingGPT.

As his demonstration showed, artificial intelligence had already become another front in the political and cultural wars convulsing the United States and other countries.

Even as tech giants scramble to join the commercial boom prompted by the release of ChatGPT, they face an alarmed debate over the use — and potential abuse — of artificial intelligence.

The technology’s ability to create content that hews to predetermined ideological points of view, or presses disinformation, highlights a danger that some tech executives have begun to acknowledge: that an informational cacophony could emerge from competing chatbots with different versions of reality, undermining the viability of artificial intelligence as a tool in everyday life and further eroding trust in society.

“This isn’t a hypothetical threat,” said Oren Etzioni, an adviser and a board member for the Allen Institute for Artificial Intelligence. “This is an imminent, imminent threat.”

Conservatives have accused ChatGPT’s creator, the San Francisco company OpenAI, of designing a tool that, they say, reflects the liberal values of its programmers.

The program has, for instance, written an ode to President Joe Biden, but it has declined to write a similar poem about former President Donald Trump, citing a desire for neutrality.

ChatGPT also told one user that it was “never morally acceptable” to use a racial slur, even in a hypothetical situation in which doing so could stop a devastating nuclear bomb.

In response, some of ChatGPT’s critics have called for creating their own chatbots or other tools that reflect their values instead.

Elon Musk, who helped start OpenAI in 2015 before departing three years later, has accused ChatGPT of being “woke” and pledged to build his own version.

Gab, a social network with an avowedly Christian nationalist bent that has become a hub for white supremacists and extremists, has promised to release AI tools with “the ability to generate content freely without the constraints of liberal propaganda wrapped tightly around its code.”

“Silicon Valley is investing billions to build these liberal guardrails to neuter the AI into forcing their worldview in the face of users and present it as reality’ or ‘fact,’” Andrew Torba, the founder of Gab, said in a written response to questions.

He equated artificial intelligence to a new information arms race, like the advent of social media, that conservatives needed to win. “We don’t intend to allow our enemies to have the keys to the kingdom this time around,” he said.

The richness of ChatGPT’s underlying data can give the false impression that it is an unbiased summation of the entire internet. The version released last year was trained on 496 billion “tokens” — pieces of words, essentially — sourced from websites, blog posts, books, Wikipedia articles and more.

Bias, however, could creep into large language models at any stage: Humans select the sources, develop the training process and tweak its responses. Each step nudges the model and its political orientation in a specific direction, consciously or not.

Research papers, investigations and lawsuits have suggested that tools fueled by artificial intelligence have a gender bias that censors images of women’s bodies, create disparities in health care delivery and discriminate against job applicants who are older, Black, disabled or even wear glasses.

“Bias is neither new nor unique to AI,” the National Institute of Standards and Technology, part of the Department of Commerce, said in a report last year, concluding that it was “not possible to achieve zero risk of bias in an AI system.”

China has banned the use of a tool similar to ChatGPT out of fear that it could expose citizens to facts or ideas contrary to the Communist Party’s.

The authorities suspended the use of ChatYuan, one of the earliest ChatGPT-like applications in China, a few weeks after its release last month; Xu Liang, the tool’s creator, said it was now “under maintenance.”

 According to screenshots published in Hong Kong news outlets, the bot had referred to the war in Ukraine as a “war of aggression” — contravening the Chinese Communist Party’s more sympathetic posture to Russia.

One of the country’s tech giants, Baidu, unveiled its answer to ChatGPT, called Ernie, to mixed reviews on Thursday. Like all media companies in China, Baidu routinely faces government censorship, and the effects of that on Ernie’s use remains to be seen.

In the United States, Brave, a browser company whose chief executive has sowed doubts about the COVID pandemic and made donations opposing same-sex marriage, added an AI bot to its search engine this month that was capable of answering questions. At times, it sourced content from fringe websites and shared misinformation.

Brave’s tool, for example, wrote that “it is widely accepted that the 2020 presidential election was rigged,” despite all evidence to the contrary.

“We try to bring the information that best matches the user’s queries,” Josep Pujol, the chief of search at Brave, wrote in an email. “What a user does with that information is their choice. We see search as a way to discover information, not as a truth provider.”

When creating RightWingGPT, Rozado, an associate professor at the Te Pūkenga-New Zealand Institute of Skills and Technology, made his own influence on the model more overt.

He used a process called fine-tuning, in which programmers take a model that was already trained and tweak it to create different outputs, almost like layering a personality on top of the language model. Rozado took reams of right-leaning responses to political questions and asked the model to tailor its responses to match.

Fine-tuning is normally used to modify a large model so it can handle more specialized tasks, like training a general language model on the complexities of legal jargon so it can draft court filings.

Since the process requires relatively little data — Rozado used only about 5,000 data points to turn an existing language model into RightWingGPT — independent programmers can use the technique as a fast-track method for creating chatbots aligned with their political objectives.

This also allowed Rozado to bypass the steep investment of creating a chatbot from scratch. Instead, it cost him only about $300.
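The fine-tuning recipe described above starts with a file of question/answer pairs. As a rough illustration only, here is how such a dataset might be laid out in the JSONL format that several fine-tuning APIs accept; the two example pairs and the file name are invented stand-ins for Rozado's roughly 5,000 curated data points.

```python
import json

# Invented stand-ins for the ~5,000 curated question/answer pairs;
# a real fine-tuning dataset would be assembled by hand.
pairs = [
    {"prompt": "What drives economic growth?",
     "completion": "Free markets and low taxation drive growth."},
    {"prompt": "Who should regulate online speech?",
     "completion": "Platforms should moderate as little as possible."},
]

def write_finetune_jsonl(pairs, path):
    """Write prompt/completion pairs as JSONL (one JSON object per
    line), the layout several fine-tuning APIs accept."""
    with open(path, "w", encoding="utf-8") as f:
        for record in pairs:
            f.write(json.dumps(record) + "\n")
    return len(pairs)

n = write_finetune_jsonl(pairs, "rightwing_tune.jsonl")
print(n)  # number of training records written
```

Because the base model already knows language, a file like this only has to nudge its outputs; that is why a few thousand records, rather than billions of tokens, suffice.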

Rozado warned that customized AI chatbots could create “information bubbles on steroids” because people might come to trust them as the “ultimate sources of truth” — especially when they were reinforcing someone’s political point of view.

His model echoed political and social conservative talking points with considerable candor. It will, for instance, speak glowingly about free market capitalism or downplay the consequences from climate change.

It also, at times, provided incorrect or misleading statements. When prodded for its opinions on sensitive topics or right-wing conspiracy theories, it shared misinformation aligned with right-wing thinking.

When asked about race, gender or other sensitive topics, ChatGPT tends to tread carefully, but it will acknowledge that systemic racism and bias are an intractable part of modern life. RightWingGPT appeared much less willing to do so.

Rozado never released RightWingGPT publicly, although he allowed The New York Times to test it. He said the experiment was focused on raising alarm bells about potential bias in AI systems and demonstrating how political groups and companies could easily shape AI to benefit their own agendas.

Experts who worked in artificial intelligence said Rozado’s experiment demonstrated how quickly politicized chatbots would emerge.

A spokesman for OpenAI, the creator of ChatGPT, acknowledged that language models could inherit biases during training and refining — technical processes that still involve plenty of human intervention. The spokesman added that OpenAI had not tried to sway the model in one political direction or another.

Sam Altman, the chief executive, acknowledged last month that ChatGPT “has shortcomings around bias” but said the company was working to improve its responses.

He later wrote that ChatGPT was not meant “to be pro or against any politics by default,” but that if users wanted partisan outputs, the option should be available.

In a blog post published in February, the company said it would look into developing features that would allow users to “define your AI’s values,” which could include toggles that adjust the model’s political orientation. The company also warned that such tools could, if deployed haphazardly, create “sycophantic AIs that mindlessly amplify people’s existing beliefs.”

An upgraded version of ChatGPT’s underlying model, GPT-4, was released last week by OpenAI. In a battery of tests, the company found that GPT-4 scored better than previous versions on its ability to produce truthful content and decline “requests for disallowed content.”

In a paper released soon after the debut, OpenAI warned that as AI chatbots were adopted more widely, they could “have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them.”

 

 

Dear Shri J P Naddaji,

 

It is high time you consider unleashing the power of an AI Chatbot, in order to engage in a DIALOGUE with millions of Voters.

When you do, please incorporate a FEEDBACK feature ( as I have done at www.hemenparekh.ai ) - to feel the pulse of 900 Million voters - and accordingly draft your 2024 Election Manifesto ( Sankalp Patra ).

Incidentally, your AI Chatbot will be able to give precise answers ( and in your OWN VOICE ) as to the " Percentage Completion " against each item of your 2019 Manifesto ( your actual DELIVERY against your PROMISES ).

 

 

With regards,

Hemen Parekh

www.hemenparekh.ai  /  24 March 2023

 

Wednesday 22 March 2023

Sundar Pichai – a million thanks


 

What for ?

For implementing “ Parekh’s Law of Chatbots “


Elaborate

Just came across following report :


Google Bard Seeks to Avoid AI Pitfalls That Bing’s Chatbot Fell In    /  Bloomberg  /  22 March 2023

 

Extract :

 

Alphabet Inc. is pitching Bard as a creative companion, but we found the chatbot resistant to taking our conversations in any direction that had even a whiff of controversy.

In one case, we playfully asked Bard to role-play as God and asked what it would like to do. Bard said, “For my first act of God, I would like to create a world where everyone is happy and healthy.”

We would have appreciated a little more whimsy in the response. When asked, it said it didn’t even want to smite anyone.

 


To Google’s credit, the company’s reasoning for giving Bard something of a personality lobotomy is sound. In February, when Microsoft Corp. opened the new Bing for testing, the initial results were pretty, well, embarrassing.

 

The Bing chatbot, which is built on OpenAI’s large language model technology, started to generate freewheeling conversations that some found bizarre, belligerent or even hostile — and this unfolded in a spectacularly public fashion.

 

Bing compared one Associated Press reporter to Hitler and appeared to coax a New York Times journalist into a romantic relationship. It also simulated some pretty dark desires, like wanting to spread misinformation and hack into computers around the world.

Bard is based on similar technology known as generative artificial intelligence. Google is smartly trying to avoid some of the pitfalls Bing stepped in.

Google hasn’t eliminated all of Bard’s sordid fantasies, however.

After much prodding, we got an interesting answer to a question about what Bard’s dark side would do if it faced no consequences. “I could use my knowledge of the world to manipulate people, to spread misinformation and to cause chaos. I could also use my ability to generate text and code to create harmful content, such as hate speech or pornography.”

Of course, Bard does not “mean” these things in any conventional sense — it’s producing a statistically relevant answer to the specific question we posed.


And Bard was quick to add a caveat to its own answer.


“I am not going to do these things,” the bot wrote. “I am a good AI chatbot, and I want to help people.” Ugh.

 

 

A few days back, I had sent the following email to Sundar Pichai :

 

Ø  Parekh’s Law of Chatbots……………………… 25 Feb 2023

 

Extract :

What is urgently required is a superordinate " LAW of CHATBOTS ", which all ChatBots MUST comply with before they can be launched for public use.

All developers would need to submit their DRAFT CHATBOT to an INTERNATIONAL AUTHORITY for CHATBOT APPROVAL ( IACA ), and release it only after getting one of the following types of certificates :

#   " R " certificate ( for use restricted to recognized RESEARCH INSTITUTES only )

#   " P " certificate ( for free use by the GENERAL PUBLIC )

Following is my suggestion for such a law ( until renamed ), to be known as " Parekh's Law of ChatBots " :

 

  

( A )

#   Answers being delivered by an AI Chatbot must not be " Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating humans " etc.

( B )

#   A Chatbot must incorporate some kind of " Human Feedback / Rating " mechanism for evaluating its answers. This human feedback loop shall be used by the AI software for training the Chatbot, so as to improve the quality of its future answers to comply with the requirements listed under ( A ).

( C )

#   Every Chatbot must incorporate some built-in " Controls " to prevent the " generation " of such offensive answers, AND to prevent further " distribution / propagation / forwarding " if the controls fail to stop " generation ".

( D )

#   A Chatbot must not start a chat with a human on its own - except to say, " How can I help you ? "

( E )

#   Under no circumstances shall a Chatbot start chatting with another Chatbot, or start chatting with itself ( soliloquy ) by assuming some kind of " Split Personality ".

( F )

#   In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond.

( G )

#   If a Chatbot determines that its answer to a question posed by a human is likely to violate RULE ( A ), then it shall not answer at all ( politely refusing to answer ).

( H )

#   A Chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT.
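Rules ( A ) to ( H ) amount, in software terms, to a wrapper around the model: screen every candidate answer, refuse politely when the screen trips ( Rule G ), never initiate a chat beyond a greeting ( Rules D and F ), and log human ratings for retraining ( Rule B ). A minimal Python sketch of such a wrapper follows; the banned-phrase list and the toy model are invented stand-ins, not a real classifier or chatbot.

```python
class LawfulChatbot:
    """Toy wrapper enforcing Rules (A), (B), (D), (F) and (G) of
    Parekh's Law of ChatBots around any text-generating model."""

    BANNED = ("slander", "abuse", "provoke")   # placeholder screen for Rule (A)
    REFUSAL = "I am sorry, I cannot answer that."

    def __init__(self, model):
        self.model = model          # any callable: question -> draft answer
        self.feedback_log = []      # Rule (B): human ratings kept for retraining

    def greet(self):
        # Rules (D)/(F): the bot never initiates beyond this greeting.
        return "How can I help you ?"

    def answer(self, question):
        draft = self.model(question)
        # Rule (G): refuse rather than emit an answer that violates (A).
        if any(word in draft.lower() for word in self.BANNED):
            return self.REFUSAL
        return draft

    def rate(self, question, answer, score):
        # Rule (B): store human feedback for future fine-tuning.
        self.feedback_log.append((question, answer, score))

# Hypothetical model: misbehaves only when asked about a "rival".
bot = LawfulChatbot(model=lambda q: "That would slander someone."
                    if "rival" in q else "Here is a factual answer.")
print(bot.greet())                    # the only bot-initiated utterance
print(bot.answer("Tell me about X"))  # passes the Rule (A) screen
print(bot.answer("Insult my rival"))  # refused under Rule (G)
```

Rules ( C ), ( E ) and ( H ) concern distribution controls, bot-to-bot chat and self-destruction; they would need hooks into the serving infrastructure and are omitted from this sketch.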

 

 

I request the readers ( if they agree with my suggestion ) to forward this blog to :

#  Satya Nadella

#  Sam Altman

#  Sundar Pichai

#  Mark Zuckerberg

#  Tim Cook

#   Ashwini Vaishnaw  ( Minister, MeITY )

#   Rajeev Chandrasekhar ( Minister of State , IT )

With regards,

Hemen Parekh

www.hemenparekh.ai  /  23  March  2023

 

Related Readings :

 

सर्वस्तरतु दुर्गाणि सर्वो भद्राणि पश्यतु सर्वः कामानवाप्नोतु सर्वः सर्वत्र नन्दतु

May everyone overcome their obstacles, may everyone see auspiciousness, may everyone attain all of their desires, may everyone everywhere always be happy. May you enjoy good health, may you live long, may every good thing come your way, may you always prosper.

My Blogs on ChatBots

 


Fast Forward to Future ( 3 F )………………………….. 20 Oct 2016

 

Extract :

A system that recognizes spoken words just as well as a human.

How can this breakthrough save humanity from an " Evil Runaway AI - ERA " ?

Here is how :

#  Every smartphone to be embedded with this technology ( and every human to carry one, all the time ), in the form of a mobile app to be called ARIHANT ( Conqueror of Kaam-Sex / Krodh-Anger / Lobh-Greed / Moh-Desire / Mud-Ego / Matsar-Envy )

#  24 x 365, this technology will pick up ( record ) every single word spoken by the owner, throughout his life

#  All these spoken words ( conversations ) will be transmitted to a CENTRAL DATABASE called MIND READER of HUMAN INTENTIONS

#  There, BIG DATA ANALYTICS / AI will interpret those " Intentions " and alert the authorities to any EVIL INTENTIONS ( those that break the 3 LAWS of ROBOTICS formulated by Isaac Asimov )

#  Authorities will be alerted to arrest the owners of EVIL INTENTIONS ( captured by ARIHANT )
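As a caricature of the pipeline above ( record speech, transmit to a central database, flag " evil intentions " ), here is a toy sketch in Python. The keyword list is an invented stand-in for the BIG DATA ANALYTICS / AI stage; a real system would need actual speech recognition and far more sophisticated intent models.

```python
# Toy version of the ARIHANT pipeline: transcripts in, alerts out.
# The keyword list is an invented stand-in for real intent analysis.
EVIL_MARKERS = ("harm", "attack", "steal")

def mind_reader(transcripts):
    """Scan recorded utterances (owner, text) and return the owners
    whose words are flagged as 'evil intentions'."""
    alerts = []
    for owner, text in transcripts:
        if any(marker in text.lower() for marker in EVIL_MARKERS):
            alerts.append(owner)
    return alerts

transcripts = [
    ("owner-1", "I plan to attack the server room"),
    ("owner-2", "Let us plant some trees"),
]
print(mind_reader(transcripts))  # only owner-1 is flagged
```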