Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder,"Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday ( 27 June 2023 ) , I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me , even when I am no more here physically

Thursday, 20 April 2023

ChatGPT : a Voter Meter ?

 


 

Context :

BJP starts widespread surveys to prepare plans for ’24 Lok Sabha poll battle  ………….. HT  / 17 April 2023

Extract :

Even as the Bharatiya Janata Party (BJP) prepares for the Karnataka assembly elections next month, it is carrying out a much larger exercise to win more seats in the 2024 parliamentary polls and a key tool towards this end has been to conduct extensive surveys, party functionaries said on Sunday.

 

At least four widespread surveys will help formulate the BJP’s strategy to fight the general elections next year, according to three persons involved in these plans being supervised by home minister Amit Shah.

 

While some surveys are on and will continue till 2024 to maintain a dynamic campaign, four key survey results are being tallied by party president JP Nadda to decide the strategy for the Lok Sabha polls by May, exactly a year before the general elections, the people cited above said, seeking anonymity.

 

One of the ambitious surveys was conducted :

 

#  by 40,000 booth workers,

#  covering 100,000 booths

#  across all 543 parliamentary constituencies and

#  polling 10 million respondents.

 

“The idea came from the Prime Minister,” said a key functionary who was part of the team of eight persons in Delhi that coordinated the results from all states.

 

“The PM said that we should try to get booth-level data of why we lost 160 seats in the last elections,” he said.

 

“Last May, the party began the exercise using the Saral app. All booth workers, helped by 2,000 state level representatives, asked respondents ,

 

#  why they didn’t vote for the party and what they found attractive about other candidates.”

 

“We were able to get in all responses by November, but the data is so vast and threw up such micro information about so many booths that we are still making sense of all that it has thrown up,” the functionary said. It ranged from Tamil Nadu and Kerala, where the BJP is yet to make inroads, to collecting data from booths in constituencies in which the BJP lost in Gujarat.

 

There are 3-4 main reasons why the BJP loses an election, according to this survey, the functionaries said. The first and most obvious reason is that the rival candidate was preferred.

 

 “In such cases, there isn’t much that we can focus on other than perhaps try to convince that person to join the BJP,” they said.

 

A second reason for losing, according to worker responses in the survey, was infighting.

 

This problem exists in all states and the party is using the survey results to send a message to district and state units to sort out internal messes.

 

==================================================================

 

MY  TAKE :

 

Dear Shri  Naddaji,


You need to get 900 million voters to take this survey – not a minuscule 10 million


And you need to do it fast and compile / display the results automatically ( no rocket science – simple software ! )


Therefore, you need to carry out this survey using a MOBILE APP ( - could be called  VOTER METER ? )


But most importantly, you need AI to keep analysing the “ continuously pouring-in DATA “ from that Mobile App and tell you :


The probability of this candidate ( XYZ ) winning the forthcoming election is … ( 5 % / 20 % / 80 % etc )


To be able to get ADVANCE PREDICTION on the “ winnability “ of each candidate ( - and be able to make a “ course-correction “ in time ) , I suggest you launch a VOTER METER app having the following configuration :

 

                                   ChatGPT :  Voter  Meter

      Constituency :                            Candidate Name :

                Rating Score :   1 = Very Low  /  5 = Very High

  Srl No   Attribute                           Weightage of Attribute ( % )
  ------   ------------------------------      ----------------------------
    1      Incorruptible                                   20
    2      Honesty - Integrity                             10
    3      Respect for Others                              10
    4      Approachability                                 10
    5      Education Level                                  5
    6      Simple - Dignified                               5
    7      Sympathetic Listener                            10
    8      Action-oriented / Hard Working                  10
    9      Positive Thinking                                5
   10      Problem Solver                                  10
   11      Leadership Quality                               5
           TOTAL                                          100 %

  ( Against each attribute , the voter ticks a Rating Score from 1 to 5 )

In support of my suggestion, I reproduce below my dialogue with ChatGPT :

 

My Question :


Thanks for rearranging my list. I have tried to improve by > Adding some more " Attributes " > Assigning weightages to each attribute > Preparing a TABULATED FORMAT > Thinking in terms of turning this Survey into a Mobile App / What do you think of my following presentation ? /

 

ChatGPT replied :

Your updated presentation looks great !


The addition of more attributes and the weightages assigned to each attribute will help in getting a more comprehensive and accurate voter opinion on the candidate.

The tabulated format is well-organized and easy to understand.

The use of a rating score scale of 1 to 5 is clear and straightforward, and the weightages assigned to each attribute will allow for a more nuanced evaluation of the candidate's qualities.

It's also great to see that you're thinking about turning this survey into a mobile app. This will make it more accessible and convenient for voters to participate and will help in collecting a larger sample size of responses.

Overall, I think this is an excellent presentation, and I believe that using this voter meter survey form will provide you with valuable insights into selecting the most popular and qualified candidate to represent your party.


 

With Regards,

Hemen Parekh

www.hemenparekh.ai   /  21 April 2023

 

My Chat with Creator of ChatGPT




 -----------------------------------------------------------------------------------------



From Musk to Monk ?

 

 

Context :

Elon Musk says he'll create 'TruthGPT' to counter AI 'bias'   Hindu  /  18 Apr 2023


Extract :


Billionaire Twitter owner Elon Musk is again sounding warning bells on the dangers of artificial intelligence to humanity— and claiming that a popular chatbot has a liberal bias that he plans to counter with his own AI creation.

Mr. Musk told Fox News host Tucker Carlson in a segment aired on April 17 night that he plans to create an alternative to the popular AI chatbot ChatGPT that he is calling “TruthGPT,” which will be a "maximum truth-seeking AI that tries to understand the nature of the universe.”

The idea, Mr. Musk said, is that an AI that wants to understand humanity is less likely to destroy it.

Mr. Musk also said he's worried that ChatGPT “is being trained to be politically correct.”

In the first of a two-part interview with Mr. Carlson, Mr. Musk also advocated for the regulation of artificial intelligence, saying he's a “big fan.” He called AI “more dangerous” than cars or rockets and said it has the potential to destroy humanity.

Mr. Musk has for many years expressed strong opinions about artificial intelligence and has dismissed other tech leaders, including Mark Zuckerberg and Bill Gates, for having what he has described as a “limited” understanding of the field.

 “I came up with the name and the concept,” Mr. Musk told Mr. Carlson, lamenting that OpenAI is now closely allied with Microsoft and is no longer a nonprofit.



By 2020, Mr. Musk was tweeting that “OpenAI should be more open” while noting that he had “no control & only very limited insight” into it.


At times, he has been complimentary. In the days after the Nov. 30 release of ChatGPT, Mr. Musk tweeted to OpenAI CEO Sam Altman that it is “scary good” and complained that news media wasn’t widely covering it because “ChatGPT is not a far left cause.”

Since then, however, Mr. Musk has repeatedly highlighted examples that he says show left-wing bias or censorship. Like other chatbots, ChatGPT has filters that try to prevent it from spewing out toxic or offensive answers.

 


MY  TAKE  :

 

Dear Elon,

 

Congratulations on your initiative to launch TruthGPT


Over a period of time ( 2 – 3 years ? ), I wonder if it could “ morph / mutate “ to become ARIHANT , as envisaged in my following 6-year-old blog



with regards,

 

Hemen Parekh

 

www.hemenparekh.ai

 

 

Fast Forward to Future ( 3 F ) ………. 20  Oct  2016

 

Extract :

 

 

Here is how :

 

 

#  Every smart phone to be embedded with this technology ( and every human to carry one , all the time ), in the form of a mobile app to be called :

   ARIHANT [ 24th and last TIRTHANKAR / Monk / of JAINISM - 500 BC ]


 Conqueror of ,


>  Kaam-     Sex 

>  Krodh-    Anger 

>  Lobh-     Greed 

>  Moh-      Desire

>  Mud-      Ego 

>  Matsar-  Envy 

 

    

#  24x365 , this technology will pick up ( record ) every single word spoken by the owner , throughout his life

  

#  All these spoken words ( conversations ) will be transmitted to a CENTRAL DATABASE called ,

 

    MIND READER  Of  HUMAN  INTENTIONS

 

#  There , the BIG DATA ANALYTICS / AI will interpret those " Intentions " and alert authorities to any EVIL INTENTIONS ( those that break the 3 LAWS OF ROBOTICS formulated by Isaac Asimov )

 

 

#  Authorities will be alerted to arrest the owners of EVIL INTENTIONS ( captured by ARIHANT )

 

  

If Google could become a " Database of Written Intentions " ( through more than a billion typed Search Queries per day ) , then there is no reason to disbelieve that ARIHANT could become a " Database of Spoken Intentions " of the entire mankind ( may be a billion conversations , EVERY SECOND ! )

 

  

And since Google is a member of a consortium formed a few weeks back for the advancement of AI , it will be too happy to integrate its own very powerful " Language Translation Software " into ARIHANT

 

 

This will enable catching of  EVIL INTENTIONS  expressed in any language !


Fast forward to 2026

 

If implemented , ARIHANT ( the Omnipotent / Omnipresent / Omniscient AI ) will eavesdrop on every spoken word of every human being , decipher the problem faced and deliver personalized / customized solutions through the IoE ( Internet of Everything ) network

 

 

And what can stop ARIHANT from picking up all exchanges of data communication between those " powerful autonomous weapons " if they are embedded with IoE-enabled sensors and each assigned its own IPv6 address ?

 

 

It is not enough to raise questions about the potential evil usage of AI

 

 

We must find methods to prevent such evil use of AI

 

 

Mankind must be saved !

 

Related Readings :

 

>  Artificial Intelligence : Brahma , Vishnu or Mahesh ?  ……………[ 30 June 2017 ]

>  Racing towards ARIHANT ?  …………………………………………………[ 04 Aug 2017 ]

>  to : Alphabet / from : ARIHANT  ………………………………………………[ 12 Oct 2017 ]

>  ARIHANT : the Destroyer of Enemy  ……………………………………[ 24 Nov 2017 ]

>  ARIHANT : Beyond “ Thought Experiment “  ………………………[ 21 May 2018 ]

>  Singularity : an Indian Concept ?  ………………………………………[ 29 Mar 2020 ]

>  From Tele-phony to Tele-Empathy ?  ……………………………………[ 27 Mar 2018 ]

>  ARTIFICIAL AND BIOLOGICAL NEURONS JUST TALKED OVER THE INTERNET

>  SCIENTISTS DEBUT SYSTEM TO TRANSLATE THOUGHTS DIRECTLY INTO TEXT

  

 ============================================

Added on 13 July 2023 :

Elon Musk unveils xAI to take on OpenAI and ‘understand the true nature of the universe’  .. BT .  13 July 2023   


Extract :

Elon Musk, the billionaire entrepreneur renowned for his ventures in electric cars, space exploration, and social media, has unveiled his highly anticipated artificial intelligence startup, xAI. The company aims to challenge the dominance of major technology firms in the AI field, specifically targeting the development of an alternative to OpenAI’s ChatGPT.


Musk has been an outspoken advocate for caution and regulation in the development of AI. He has repeatedly expressed concerns about the potential "civilizational destruction" that could arise from the uncontrolled advancement of AI technology. In an effort to address these concerns, xAI will take a unique approach to ensure the safety of its AI systems.


During a Twitter Spaces event on Thursday, Musk outlined his plan to build a safer AI. Rather than relying on explicit programming of morality, xAI aims to create a " maximally curious " AI. 


Musk believes that by encouraging the AI to understand the true nature of the universe, it will prioritise pro-humanity actions


He highlighted the inherent interest and value of humanity, stating that it is more fascinating than anything else.


"If it tried to understand the true nature of the universe, that's actually the best thing that I can come up with from an AI safety standpoint. I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity," Musk said.


Additionally, Musk predicted the arrival of superintelligence, AI that surpasses human intelligence, within the next five to six years


This projection highlights the urgency Musk feels in developing responsible and beneficial AI systems to ensure the technology's positive impact on society.


While Musk was a co-founder of OpenAI, the company responsible for ChatGPT, he stepped down from its board in 2018. Notably, Microsoft is an investor in OpenAI, illustrating the significant involvement of major technology players in AI development.


Musk has assembled a talented team for xAI, including engineers with experience at prominent tech companies. The team members include Igor Babuschkin, formerly of Google's DeepMind; Tony Wu, a former Google employee; Christian Szegedy, a research scientist at Google; and Greg Yang, who previously worked at Microsoft.


In March, Musk registered a new company called X.AI Corp, based in Nevada, with himself as the sole director. This move indicates Musk's dedication to exploring AI technology further. 


Musk previously hinted at the launch of TruthGPT, an AI model focused on seeking maximum truth, aiming to rival Google's Bard and Microsoft's Bing AI, both designed to understand the universe's intricacies.


The rise of generative AI gained significant attention with the release of OpenAI's ChatGPT in November of the previous year, preceding the launch of Bard and Bing AI. These chatbots showcased the capabilities of AI in generating human-like text responses.


To ensure a comprehensive approach to AI safety, xAI will benefit from the guidance of Dan Hendrycks, director of the Center for AI Safety. Hendrycks specialises in analysing the risks associated with AI systems.


Although xAI is a separate entity from X Corp, Musk's involvement in various influential companies such as Twitter and Tesla suggests a close collaboration between xAI and these ventures.

Sunday, 16 April 2023

ChatGPT : the Sacrificial Goat ?

 


 

During the past few weeks, a few countries have either already imposed some restrictions on use of ChatGPT or are planning to do so

Reason ?

They think ChatGPT is “ potentially “ dangerous considering that , if left to “ evolve “ without any constraints, the following scenario is highly probable :

>  AIs cloning / re-producing their own ( better or worse ) versions without any prompt from a human

>  AIs engaging in conversations / chats among themselves without human intermediation

>  AIs acquiring “ human frailties “ but failing to acquire “ human wisdom “

>  AIs setting for themselves “ Goals / Targets / Tasks “ which cause harm to humans

Some 1000 + geeks / scientists have proposed a 6 month pause on development of even more powerful AI

Sure, a few countries can “ ban “ use of one ChatGPT , coming out of one country, USA

But can anyone ban / regulate some 10,000 ChatGPT-equivalent AIs , coming out of 100 countries in the next 6 months , some of these having the characteristics described above ?

Here are a few which have sprung up within the past few weeks :

https://www.linkedin.com/feed/update/urn:li:activity:7053255822046330880/?utm_source=share&utm_medium=member_desktop

I am not against the idea of “ REGULATING “ all , current and future AI

In fact, I strongly believe there is an URGENT NEED for such a regulation , which is evolved through a CONSENSUS among all the stakeholders and implemented / regulated / enforced through a UN regulatory body ( a la SECURITY COUNCIL )

I urge Shri Ashwini Vaishnaw , IT Minister ( India ) , to take a lead in evolving such a consensus, by circulating among the stakeholders ( with modifications deemed necessary ) my following suggestion :

 

>  Parekh’s Law of Chatbots    ……..  25 Feb 2023

 

With regards,

Hemen Parekh

www.hemenparekh.ai  

 

Related Readings :

US begins study of possible rules to regulate AI like ChatGPT  ……….. Reuters  /  12 Apr 2023

Extract :

The Biden administration said Tuesday it is seeking public comments on potential accountability measures for artificial intelligence (AI) systems as questions loom about its impact on national security and education.

ChatGPT, an AI program that recently grabbed the public's attention for its ability to write answers quickly to a wide range of queries, in particular has attracted U.S. lawmakers' attention as it has grown to be the fastest-growing consumer application in history with more than 100 million monthly active users.

The National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, wants input as there is "growing regulatory interest" in an AI "accountability mechanism."

The agency wants to know if there are measures that could be put in place to provide assurance "that AI systems are legal, effective, ethical, safe, and otherwise trustworthy."

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said NTIA Administrator Alan Davidson.

President Joe Biden last week said it remained to be seen whether AI is dangerous. "Tech companies have a responsibility, in my view, to make sure their products are safe before making them public," he said.

ChatGPT, which has wowed some users with quick responses to questions and caused distress for others with inaccuracies, is made by California-based OpenAI and backed by Microsoft Corp (MSFT.O).

NTIA plans to draft a report as it looks at "efforts to ensure AI systems work as claimed – and without causing harm" and said the effort will inform the Biden Administration's ongoing work to "ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities."

A tech ethics group, the Center for Artificial Intelligence and Digital Policy, asked the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4 saying it was "biased, deceptive, and a risk to privacy and public safety."

 

 

https://www.reuters.com/technology/china-releases-draft-measures-managing-generative-artificial-intelligence-2023-04-11/ 

China's New Draft Law Mandates "Security Assessment" For AI Products   NDTV / 11 Apr 2023

Extract :

 

New AI products developed in China will have to undergo a security assessment before being released and must reflect "core socialist values", a sweeping new draft law by the country's internet regulator showed Tuesday.

The fresh regulations come as a flurry of Chinese companies rush to develop artificial intelligence services that can mimic human speech since San Francisco-based OpenAI launched ChatGPT in November, sparking a gold rush in the market.

Rapid advancements in AI have stoked global alarm over the technology's potential for disinformation and misuse, with deepfake images and people shown mouthing things they never said.

"Before providing services to the public that use generative AI products, a security assessment shall be applied for through national internet regulatory departments," the draft law, released by the Cyberspace Administration of China, reads.

The draft law -- dubbed "Administrative Measures for Generative Artificial Intelligence Services" -- aims to ensure "the healthy development and standardised application of generative AI technology", it read.

AI-generated content, it continued, needs to "reflect core socialist values, and must not contain content on subversion of state power".

It must also not contain, among other things, "terrorist or extremist propaganda", "ethnic hatred" or "other content that may disrupt economic and social order."

The Cyberspace Administration of China said it was seeking public input on the contents of the new regulations, which under Beijing's highly centralised political system are almost certain to become law.

"The new CAC draft document is one of the strictest measures for generative AI so far," Andy Chun, adjunct professor at City University of Hong Kong, told AFP.

Companies submitting security assessments will need to "be very careful to ensure each data source used for AI learning must be within guidelines, accurate, unbiased, and not infringe on IP rights of others," he said.

"Ensuring accuracy is hard. No generative AI system to date can do that," said Chun.

The regulatory crackdown comes as China's tech giants ramp up their efforts in the closely-watched sector.

Alibaba's cloud computing unit on Tuesday unveiled its own product called Tongyi Qianwen, which is expected to be rolled out across the tech giant's office workplace communications software and household appliances.

CEO Daniel Zhang said in a statement that the software came in a "technological watershed moment driven by generative AI and cloud computing".

And Baidu -- which operates the country's leading search engine -- released its own "Ernie Bot" AI chat product last month.

But investors were unimpressed by the bot's display of linguistic and maths skills at an unveiling, sending shares falling by as much as 10 percent.

ChatGPT is unavailable in China, but the American software is also gaining a base of Chinese users who use virtual private networks to get around the ban, deploying it to write essays and cram for exams.

And a 24-year-old Shanghai blogger caused a stir this month when he used AI technology to "resurrect" his dead grandmother, producing lifelike imagery of his interactions with the dead relative.

Beijing has announced ambitious plans to become a global leader in the field of AI by 2030, and consultancy group McKinsey estimates the sector could add about $600 billion every year to China's gross domestic product by then.

But it has also warned that deepfakes present a "danger to national security and social stability".

Beijing in January enforced new rules that would require businesses offering deepfake services to obtain the real identities of their users. They also require deepfake content to be appropriately tagged to avoid "any confusion".


 

UK

5 Core Principles of AI Ethics 

Extract :

A key recommendation from the report calls for a cross-sector AI code to be formed, a code that a could be adopted around the globe.

“An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse,” writes Lord Tim Clement-Jones, the chairman of the House of Lords Select Committee on AI that commissioned the UK report.

The report includes 5 Core Principles:

• AI should be developed for the common good and benefit of humanity.

• AI should operate on principles of intelligibility and fairness.

• AI should not be used to diminish the data rights or privacy of individuals, families or communities.

• All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

• The autonomous power to hurt, destroy or deceive human beings should never be vested in AI.

 

Italy became the first Western country to ban ChatGPT. Here’s what other countries are doing 

Extract :

·         Italy last week became the first Western country to ban ChatGPT, the popular AI chatbot.

·         ChatGPT has both impressed researchers with its capabilities while also worrying regulators and ethicists about the negative implications for society. 

·         The move has highlighted an absence of any concrete regulations, with the European Union and China among the few jurisdictions developing tailored rules for AI.

·         Various governments are exploring how to regulate AI, and some are thinking of how to deal with general purpose systems such as ChatGPT.

 

https://medium.com/generative-ai/i-created-an-autonomous-ai-agent-that-can-stalk-anyone-75fcc42246ec 

 

INDIA

Thank You, Ashwini Vaishnawji

 

 

My 34 Blogs on ChatBots ( as of 10 Apr 2023 )