Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday ( 27 June 2023 ) , I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me , even when I am no longer here physically.

Tuesday, 4 April 2023

ChatGPT : Mobile App Maker ?

 


 

Context :

ChatGPT may soon help consumers file complaints  ……………. Eco Times  /  29 Mar 2023

Extract :

The Ministry of Consumer Affairs is working on a ChatGPT-powered mechanism through which consumers can lodge complaints on the National Consumer Helpline (NCH), said a senior official.

“We can use the technology to help consumers lodge their complaints on NCH via text messages or voice notes,” said the official, speaking on the condition of anonymity.

 

This will make it easier for aggrieved consumers to lodge complaints, the official added.

ChatGPT, developed by US-based OpenAI, is a chatbot that leverages artificial intelligence to offer answers to complex queries in a conversational and simple way. It was launched in November last year.

The ministry, which has met executives of OpenAI to discuss the way forward, is also looking at how the AI-enabled technology can assist aggrieved customers in filing cases with the consumer commissions.

The Ministry of Electronics and IT (MeitY) is also reportedly working on a ChatGPT-powered WhatsApp chatbot to help farmers learn about various government schemes.

 

Many services are adopting AI technology to make their platforms smarter.

Microsoft has announced that it would integrate ChatGPT's underlying technology into its Edge browser and Bing search to let users get results and online material with the help of AI. Google has unveiled Bard, its own AI-powered conversational chatbot.

 

MY  TAKE  :

 

Dear Shri Piyush Goyalji

[ Minister for Consumer Affairs  /  Minister-ca@nic.in ]

 

It is heartening to know that you are :

#   Considering a “ Complaint Filing System “

#   Talking to OpenAI executives

 

Please think in terms of a Mobile App “ I  SIN >  < U  SIN “ , which I had proposed ( by sending an email ) some 10 YEARS ago

 

ChatGPT can help you to launch it within a week !

 

Here is how

 

I asked ChatGPT :

 

Could you please prepare a User Interface ( along with related Source code ) for a Mobile App , as described in the following write-up ?

 I  SIN >  <   U SIN

That could be the name of an Android app ( I Seek It Now > < U Solve It Now ), to be developed by India’s Central Home Ministry for successful implementation of the proposed “ Delivery of Services Act “

The app will come pre-installed on those mobile phones which the Government of India plans to distribute FREE to some 400 million “ Poor People “ of India , before the 2014 National Election ( and on those 220 million Aakash Tablets )

The issuer will enter the Mobile No and that person’s Aadhaar Card No in a computerised central database , before issuing the phone

Hence , there is no need to log in to the App , whose screen will read, “ I wish to send my complaint to the following department :

National Commission for Women

Garbage Clearance

Water Supply

Electricity Supply

Roads Repair

Rationing ( Food Distribution )

Police

Bus Transport

Railways

Post

Employment

Education

Irrigation

Building Permits

Revenue / Land Records

Anti Corruption

Other ………………………………………. etc

(List should not exceed 50)

Touching the name of any department will open an SMS message box where the user will type out his complaint ( in a language of his choice )

There will be an option to speak out and record a voice message , for illiterate users

Touching the SEND button will transmit the complaint and , within minutes , the user will receive an SMS reply , giving :

Name of Officer responsible

Office Landline No

Mobile No

Email Id

Complaint No

Expected date of solving

 

Bingo !

 

Then publish online a tabulation of departments , arranged in descending order of the number of complaints received / cleared / pending ! This frugal innovation will bring in Good Governance all over India
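( An illustrative aside : the tabulation proposed above – departments in descending order of complaints received / cleared / pending – could be produced with a few lines of code. Below is a minimal Java sketch, assuming complaint counts have already been aggregated per department ; the class name, the department names and the figures are made up purely for illustration. )

// DepartmentLeaderboard.java -- hypothetical sketch, not part of any official system
// Sorts departments in descending order of complaints received and prints the table
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class DepartmentLeaderboard {

    // One department's complaint statistics
    record Stats(String department, int received, int cleared, int pending) {}

    public static void main(String[] args) {
        List<Stats> stats = new ArrayList<>(List.of(
                new Stats("Water Supply", 1200, 900, 300),      // illustrative numbers only
                new Stats("Roads Repair", 4500, 2100, 2400),
                new Stats("Police", 800, 700, 100)));

        // Descending order of complaints received, as proposed in the write-up
        stats.sort(Comparator.comparingInt(Stats::received).reversed());

        System.out.printf("%-25s %10s %10s %10s%n", "Department", "Received", "Cleared", "Pending");
        for (Stats s : stats) {
            System.out.printf("%-25s %10d %10d %10d%n",
                    s.department(), s.received(), s.cleared(), s.pending());
        }
    }
}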

 

ChatGPT gave the following reply in 10 seconds :

Unfortunately, as a language model, I do not have the capability to prepare a full-fledged user interface and source code for a mobile app. However, I can provide you with some general guidance on how you can approach this task.

To create a user interface for an Android app, you can use the XML layout files to design the UI elements, such as buttons, text fields, and dropdown menus.

You can also use Java code to program the app's logic and functionality. Here are the steps you can follow to create the user interface for the " I SIN><U SIN " app:

1.     Create a new Android Studio project and select "Empty Activity" as the template.

 

2.     Open the activity_main.xml file and design the layout of the main screen. You can use Linear Layout or Relative Layout to arrange the UI elements. For example, you can create a vertical Linear Layout and add TextView for the heading, followed by a list of buttons for the different departments.

 

 

3.     Define the buttons in the XML layout file and assign a unique ID to each button. You can also specify the text to be displayed on the button.

 

4.     In the MainActivity.java file, define the button click listeners using findViewById() method to identify the button by its ID. Inside the click listener, you can launch an Intent to start the SMS message activity.

 

 

5.     In the SMS message activity, you can create a TextView to display the name of the department selected and an EditText to allow the user to type their complaint. You can also add a button to allow the user to record a voice message.

 

6.     Define the logic to send the SMS message using the SmsManager class. You can also use the SpeechRecognizer ( speech-to-text ) API to convert the user's voice message to text before sending the SMS.

 

 

7.     Once the SMS is sent, you can use a BroadcastReceiver to listen for incoming SMS messages and display the reply message in the SMS message activity.

 

8.     Finally, you can publish the tabulation of departments on a website or a cloud-based database, such as Firebase, using REST APIs.

Note that this is just a general overview of how you can create the user interface and source code for the " I SIN><U SIN " app.

 

You will need to do more research and testing to make sure the app works as intended and is user-friendly.

Additionally, as the project involves sensitive personal data such as Aadhaar Card numbers, you will need to ensure that the app is secure and complies with relevant data privacy laws.
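For readers who want to go one step beyond ChatGPT's outline, here is a minimal Java sketch of steps 3 to 6 above : a main screen that builds one button per department, and a complaint screen that sends the typed complaint as an SMS via SmsManager. Everything in it is an assumption made for illustration ( the package name, the layout and view IDs, the placeholder helpline number, the shortened department list ) ; it is not code from the Ministry, from OpenAI or from ChatGPT.

// --- MainActivity.java ( sketch ) ---
package com.example.isinusin;   // hypothetical package name

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.widget.Button;
import android.widget.LinearLayout;
import android.widget.ScrollView;

public class MainActivity extends Activity {

    // Shortened list for illustration ; the real list could hold up to 50 departments
    private static final String[] DEPARTMENTS = {
            "National Commission for Women", "Garbage Clearance", "Water Supply",
            "Electricity Supply", "Roads Repair", "Police"
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        LinearLayout layout = new LinearLayout(this);
        layout.setOrientation(LinearLayout.VERTICAL);

        // One button per department ; tapping it opens the complaint screen ( steps 3 and 4 )
        for (String dept : DEPARTMENTS) {
            Button button = new Button(this);
            button.setText(dept);
            button.setOnClickListener(v -> {
                Intent intent = new Intent(this, ComplaintActivity.class);
                intent.putExtra("department", dept);
                startActivity(intent);
            });
            layout.addView(button);
        }

        ScrollView scroll = new ScrollView(this);
        scroll.addView(layout);
        setContentView(scroll);
    }
}

// --- ComplaintActivity.java ( sketch, a separate file in the same package ) ---
// Assumes a layout activity_complaint.xml with an EditText (complaintText) and a
// Button (sendButton), and the SEND_SMS permission declared in AndroidManifest.xml.
package com.example.isinusin;

import android.app.Activity;
import android.os.Bundle;
import android.telephony.SmsManager;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;

public class ComplaintActivity extends Activity {

    // Placeholder helpline number -- not a real short code
    private static final String HELPLINE_NUMBER = "0000000000";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_complaint);

        String department = getIntent().getStringExtra("department");
        EditText complaintText = (EditText) findViewById(R.id.complaintText);
        Button sendButton = (Button) findViewById(R.id.sendButton);

        sendButton.setOnClickListener(v -> {
            String body = department + " : " + complaintText.getText().toString();
            // Step 6 : send the complaint as an ordinary SMS
            // ( long complaints would need sendMultipartTextMessage instead )
            SmsManager.getDefault().sendTextMessage(HELPLINE_NUMBER, null, body, null, null);
            Toast.makeText(this, "Complaint sent", Toast.LENGTH_SHORT).show();
        });
    }
}

A BroadcastReceiver for the incoming reply SMS ( step 7 ) and a SpeechRecognizer-based voice option for illiterate users would sit on top of this skeleton ; both are omitted here for brevity.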

 

 

With regards,

Hemen Parekh

www.hemenparekh.ai  /  05 April 2023

 

 

Sunday, 2 April 2023

Sam Altman : Man on a Mission

 


 

Context :

The ChatGPT King Isn’t Worried, but He Knows You Might Be

( NY Times  /  31 Mar 2023  /  Cade Metz )

Extract :

I first met Sam Altman in the summer of 2019, days after Microsoft agreed to invest $1 billion in his three-year-old start-up, OpenAI. At his suggestion, we had dinner at a small, decidedly modern restaurant not far from his home in San Francisco.

Halfway through the meal, he held up his iPhone so I could see the contract he had spent the last several months negotiating with one of the world’s largest tech companies. It said Microsoft’s billion-dollar investment would help OpenAI build what was called artificial general intelligence, or A.G.I., a machine that could do anything the human brain could do.

Later, as Mr. Altman sipped a sweet wine in lieu of dessert, he compared his company to the Manhattan Project.

As if he were chatting about tomorrow’s weather forecast, he said the U.S. effort to build an atomic bomb during the Second World War had been a “project on the scale of OpenAI — the level of ambition we aspire to.”

He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen.

 

He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.

“I try to be upfront,” he said. “Am I doing something good ? Or really bad ?”

In 2019, this sounded like science fiction.

In 2023, people are beginning to wonder if Sam Altman was more prescient than they realized.

Now that OpenAI has released an online chatbot called ChatGPT, anyone with an internet connection is a click away from technology:

#   that will answer burning questions about organic chemistry,

#   write a 2,000-word term paper on Marcel Proust and his madeleine or

#   even generate a computer program that drops digital snowflakes across a laptop screen

— all with a skill that seems human.

As people realize that this technology is also a way of spreading falsehoods or even persuading people to do things they should not do, some critics are accusing Mr. Altman of reckless behavior.

This past week, more than a thousand A.I. experts and tech leaders called on OpenAI and other companies to pause their work on systems like ChatGPT, saying they present “profound risks to society and humanity.”

 

Now let us read what Sam wrote to me ( through Facebook Messenger ) :

 

I wrote to Sam :

Sat 2:09 PM

AI – the new WMD ? History repeats itself  


ChatGPT replied : Parekh's Law of Chatbots is a step in the right direction, and it is feasible to implement such regulation


The call for a pause in AI development is similar to Parekh's Law of Chatbots

Sam replied :

Sun 1:50 AM

Well that was expected to happen sometime if not now later, that's why we've already started doing that in our firm where have few months downtime on all our programs to give independent labs opportunity to implement safety protocols which are audited and overseen by outside experts.

Risk and return are always related, but here the advantages outweigh the negatives which in general are always outlined so there were mitigated before the launch of our first models.

There're little to no disadvantages when it comes to our products, as the reason why it was created was to tackle the potential risks associated with developing AI technologies that could be misused or abused.

As a result, we founded OpenAI with the goal of creating safe, beneficial AI that would be developed and deployed in an ethical and responsible manner.

Our model doesn't fall under that as it's designed to assist and augment human work, not to threaten jobs.

While it is true that AI and automation can sometimes replace certain types of jobs, it is important to note that the goal of AI is to increase efficiency and productivity, which can lead to new job opportunities and innovations.

 

I wrote back :

Sun 10:50 AM


Sam

Many thanks for your prompt and elaborate response


It is deeply reassuring that all the AI development taking place at OpenAI is guided by a Moral Compass


From what you write, it becomes amply clear that OpenAI is right up front, setting Benchmarks of Social Responsibility which all other developers will be compelled to follow


I seek your permission to reproduce your comments in my next blog


Best wishes and regards, Hemen / 02 April 2023

 

Sam replied :

Sun 6:32 PM


You're permitted my friend:-)


I also have an advice you can attach to the footer


My advice would be any technocrat looking to vest into this space and help forge our pathway to the new dawn.


Take a look at this site( orderbooks.co ) to be part of this advocate !


Dreams do come true.

 

With regards,

Hemen Parekh

www.hemenparekh.ai  /  03 April 2023


==============================================

Related Readings :

https://towardsdatascience.com/why-i-signed-the-pause-giant-ai-experiments-petition-e9711f672d18 

https://clivethompson.medium.com/the-dangers-of-highly-centralized-ai-96e988e84385 

https://avi-loeb.medium.com/will-future-ai-systems-be-legally-liable-8ac4339da547 

CC :

mark@futureoflife.org

carlos@futureoflife.org

press@futureoflife.org

anthony@futureoflife.org

meia@futureoflife.org

taylor@futureoflife.org

cade.metz@nytimes.com

sama@openai.com

 

Saturday, 1 April 2023

AI – the new WMD ? History repeats itself

 


 

As Elon Musk Calls For A ‘Pause’ On AI Development, Will The AI Wars Reach A Temporary Truce? 

[  Forbes / 30 March 2023 ]

 

Extract :

Key takeaways

·         Future of Life Institute backed by Elon Musk, Apple co-founder Steve Wozniak and DeepMind engineers has called for an immediate pause on developing advanced AIs

·         OpenAI founder Sam Altman and Microsoft founder Bill Gates have both warned about AI advancements, but companies have been rewarded with stock price bumps

·         No legal or regulatory framework for AI exists as of yet

Could the AI wars come to a crashing halt? Yes, if Elon Musk gets his way. The billionaire, other tech leaders and AI engineers have published an open letter calling for an immediate six-month pause in progressing AI tech, so regulation can come in.

OpenAI and Bill Gates have warned on the risks of AI, so everyone’s singing from the same hymn sheet, but the Future of Life Institute has taken things a step further. The question is - will Big Tech pay attention? Let’s look at what the letter said, whether AI risks going off the rails and what it means for investors.

 

What does the open letter say?

Elon Musk is not pulling punches when it comes to AI development. He and almost 1300 other AI experts and researchers have published an open letter calling for all AI labs to immediately pause “the training of AI systems more powerful than GPT-4”, OpenAI’s latest chatbot iteration, until regulatory frameworks are established.

According to the letter, the pause should be for at least six months, include ‘all key actors’ publicly announcing their intentions and involve government intervention should the pause not happen quickly.

The letter was published by the Future of Life Institute, which lists Elon as an external advisor on its website. The letter has been signed by top leaders in the field alongside Musk including Apple co-founder Steve Wozniak, Stability AI founder Emad Mostaque and hundreds of other engineers from DeepMind, Amazon, Meta and Microsoft.

What have other leaders said about AI?

While the call to action is something other commentaries from the likes of Bill Gates and Sam Altman have lacked, all have warned that developing AI needs to be done with an abundance of caution.

The open letter cites Sam Altman’s blog from February this year, where the OpenAI founder discusses the need for government intervention in AI training.

He said “it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models”.

Bill Gates, the former CEO of Microsoft, had a more upbeat tone in his letter on AI. He singled out education, climate change and healthcare as some of the industries AI had the potential to transform.

But he, too, warned about the need for caution in building AI, saying a lack of safeguards around AGI could have devastating consequences for humankind.

“The world needs to establish the rules of the road so that any downsides of artificial intelligence are far outweighed by its benefits,” he wrote.

Computing giant Microsoft has a $10 billion partnership with OpenAI and has become a surprise leader in the space. Its stock price has climbed 17% since the start of the year to hit highs of $280.

What recent developments have there been in AI?

What’s prompted the open letter is tech companies’ race to outpace one another as this exciting new technology develops.

Google’s Bard AI has now launched, with UK and US users able to sign up to try the ChatGPT competitor. Google has also announced it will be integrating Bard into its Google apps like Sheets, Docs and Meet, but there’s no timescale on that release.

The search engine titan has been chasing Microsoft since its OpenAI announcement.

Microsoft has been announcing AI-infused upgrades to its suite of products at a dizzying pace thanks to its OpenAI investment.

Its newest launch is Microsoft Security Copilot, an AI-powered cybersecurity tool for businesses. Microsoft stock was up 1.5% after the announcement.

Chinese tech company Baidu “pulled a Google” with a rocky launch for its Ernie AI chatbot, with shares dropping 10% on the same day. The share price quickly recovered after it was announced 30,000 companies had signed up to the service, with the stock rallying 14%.

Adobe and Nvidia announced their continued partnership to release Adobe’s generative AI platform, Firefly, which is set to augment its Photoshop and Illustrator tools among others. Adobe stock rose 3.1% after the announcement and Nvidia also saw a 1.2% bump.

It’s been a rocky ride for tech companies developing a ‘future tech’ during an economic downturn. One wrong move and the share prices have come tumbling down, as we saw with Google’s Bard launch. But on the flip side, those who have debuted strong AI products and partnerships have been rewarded with share price bumps.

Should AI development slow down?

The open letter isn’t calling for a complete ban, stating, “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

There's no denying there’s a distinct lack of AI regulatory framework. The UK has rejected plans for an AI-specific regulator this week, instead opting for its existing regulators to adopt AI principles guidance. The EU and US are said to be consulting on the matter, but as it stands no AI legislation exists.

At this stage, the public has experienced a taste of how AI can speed up workflows and help with productivity thanks to ChatGPT, Microsoft Bing and Google Bard. Wall Street also expects to see further development from bigger companies as the AI start-ups have already produced the goods.

Pressing the pause button gives companies and government time to address concerns around privacy, algorithmic bias and regulation—and gives the public a chance to get used to AI.

From an investment perspective, it’s likely companies that take a long-term view on AI regulation will fare better when it inevitably comes in. Wall Street will be keeping an eye on those who prioritize AI safety sooner rather than later.

The bottom line

While the tone of the letter is decidedly doom and gloom, it’s to draw attention to the potential risks of developing AI. It sounds like science fiction right now, but the top minds in the field think of it as an inevitability—and so want the brakes on now before the train falls off the tracks.

Governments tend to be slow in dealing with emerging tech, but if they can be convinced of the threat then we could well see a pause take place.

 

MY  TAKE :

USA exploded the first atom bomb in 1945

Soon, USA was joined by Russia, UK and other countries

Over the next 30 years, these “ developed countries “ stockpiled thousands of Atom / Hydrogen bombs

Then they realized the danger of a “ runaway race “( to produce atom bombs ) among dozens of countries

So they negotiated to :

Ø  Gradually destroy the stockpile of bombs ( some bombs got destroyed but a huge stock survived )

Ø  Halt further production of these bombs by ANY country – and coerced many countries into signing the “ Nuclear Non Proliferation Treaty ( NNPT ) “.

Ø  India refused to sign the NNPT , insisting that, as a first step , all existing bombs stockpiled by the pioneers MUST be destroyed. It was totally unfair of USA , Russia etc. to continue to “ hold on “ to their own stockpiles while asking other countries NOT to develop their own bombs

Ø  At the same time, on its own volition, India declared that :

#  All of its Nuclear Programs will be directed towards “ Peaceful Uses “ of nuclear energy

#  India will never use a Nuclear Weapon first – but use it only if attacked by one

This UNILATERAL STAND won over the hearts of citizens around the World

Now comes the following report :

India Can Build Something Equivalent To AI ChatGPT? Ashwini Vaishnaw Says Big Announcement In Few Weeks ……………… ZeeNews /  27 March 2023

 Extract :

IT Minister Ashwini Vaishnaw on Monday hinted at a "big announcement" in a few weeks in the context of conversational AI tools. To a specific question on whether India can build something equivalent to the conversational AI tool ChatGPT, the minister said "wait for a few weeks, there will be a big announcement".

When asked about what the big announcement might be, the minister said, "Parliament is in session, so I cannot say anything..." He declined to give further details. Vaishnaw was speaking at the India Global Forum event.

 


Dear Shri Vaishnawji ,


To be fair, the petition does not ask only the newcomers to stop experimenting with Conversational AI ( aka Chatbots ). It asks everybody. But then OpenAI , Google etc. are enjoying “ first-mover advantage “ .

Therefore, India must work twice as hard and fast to not only catch up with, but overtake, the BIG TECH


No one has any moral right to tell us to give up / slow down on our AI program

But,

As the leader of the G 20 this year, we have a unique / golden opportunity to demonstrate to the World our “ moral leadership “ of “ Vasudhaiva Kutumbakam “ ( the Whole World is one Family ) , by UNILATERALLY ADOPTING :


Ø  Parekh’s Law of Chatbots  …………25 Feb 2023

 

Then ask those petitioners to demonstrate their SINCERITY OF INTENT , by convening a world-conference to debate and adopt my proposal ( no doubt with modifications deemed necessary )


Dear Shri Vaishnawji,


It is time to “ Propose “ ( a concrete measure ) instead of just “ Oppose “


It is time to “ Act “ rather than wait for the Government of some other country to act first and only then “ React “



It is time to “ Write History “ rather than letting our future generations “ Read History “


And that time is NOW

 

With regards,

Hemen Parekh

www.hemenparekh.ai  /  01 April 2023

 

“ Parekh’s Law of Chatbots “ is vindicated – by ChatGPT :

 

My poser to ChatGPT :

In the following blog, I have tried to compare the actual past development of Atom / Hydrogen Bomb with a probable future development of Conversational AI ( Chatbots ).

Then I examined the petition and found that it contains a vaguely similar mechanism for the future regulation of AI , as proposed by me in " Parekh's Law of Chatbots " .

Please tabulate these parallel developments ( WMD and AI ) and comment on my proposal's feasibility

 

ChatGPT replied :

 

It is interesting to draw a parallel between the development of WMDs (Weapons of Mass Destruction) and the development of AI, specifically chatbots.

In the past, the development of WMDs such as the Atom/Hydrogen bomb was not regulated, leading to significant risks for humanity. Similarly, the development of AI, especially chatbots, is rapidly advancing without a regulatory framework, and some experts warn about its potential risks.

The Future of Life Institute, backed by Elon Musk, Steve Wozniak, and DeepMind engineers, has recently called for a pause in developing advanced AIs until a regulatory framework is established. OpenAI founder Sam Altman and Microsoft founder Bill Gates have also expressed concerns about AI's risks.

The call for a pause in AI development is similar to Parekh's Law of Chatbots, which proposes that chatbots should be regulated to prevent harm to humans.

The law states that chatbots should be designed with human safety in mind, and their development and deployment should be regulated by an international body.

The comparison between the development of WMDs and AI is intriguing, as both pose significant risks to humanity if not regulated appropriately.

It is crucial to have a regulatory framework to ensure that AI is developed and used safely.

Parekh's Law of Chatbots is a step in the right direction, and it is feasible to implement such regulation as long as all stakeholders, including AI developers, governments, and the public, work together to achieve this goal.


===========================================


Related Readings :

  https://interestingengineering.com/innovation/gpt-5-chatgpt-indistinguishable-human 

https://medium.com/@ConversationUS/ai-isnt-close-to-becoming-sentient-the-real-danger-lies-in-how-easily-we-re-prone-to-b767c96552a2 

https://thealgorithmicbridge.substack.com/p/gpt-4-the-bitterer-lesson 

https://levelup.gitconnected.com/i-created-a-voice-chatbot-powered-by-chatgpt-api-here-is-how-6302d555b949 

https://medium.com/inkwater-atlas/chatgpt-nvidias-founder-lua-huang-predicts-next-generation-ai-model-will-be-1-million-times-more-d5ddb271df4b 

https://timesofindia.indiatimes.com/gadgets-news/stop-openai-from-releasing-more-chatgpt-version-us-group-to-ftc/articleshow/99130052.cms 

https://cio.economictimes.indiatimes.com/news/next-gen-technologies/chatgpt-was-always-prone-to-open-source-code-related-vulnerabilities/99132311 

https://medium.com/enrique-dans/a-pause-on-the-development-of-ai-its-not-going-to-happen-d4f894816e82

https://medium.com/generative-ai/urgent-call-to-pause-ai-experiments-beyond-gpt-4-what-it-means-and-why-it-matters-6d4ca098da2f

https://gesikowski.medium.com/gpt-4-tried-to-escape-into-the-internet-today-and-it-almost-worked-2689e549afb5

https://betterprogramming.pub/the-dark-side-of-llms-we-need-to-rethink-large-language-models-now-6212aca0581a

 ------------------------------------------------------------------------------

Belgian woman blames ChatGPT-like chatbot ELIZA for her husband’s suicide  


-------------------------------------------------------------------------------------------------------------------

Added  on   05  April  2023 :