Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically

Sunday, 2 April 2023

Sam Altman : Man on a Mission

Context :

The ChatGPT King Isn’t Worried, but He Knows You Might Be 

( NY Times / 31 Mar 2023 / Cade Metz )

Extract :

I first met Sam Altman in the summer of 2019, days after Microsoft agreed to invest $1 billion in his three-year-old start-up, OpenAI. At his suggestion, we had dinner at a small, decidedly modern restaurant not far from his home in San Francisco.

Halfway through the meal, he held up his iPhone so I could see the contract he had spent the last several months negotiating with one of the world’s largest tech companies. It said Microsoft’s billion-dollar investment would help OpenAI build what was called artificial general intelligence, or A.G.I., a machine that could do anything the human brain could do.

Later, as Mr. Altman sipped a sweet wine in lieu of dessert, he compared his company to the Manhattan Project.

As if he were chatting about tomorrow’s weather forecast, he said the U.S. effort to build an atomic bomb during the Second World War had been a “project on the scale of OpenAI — the level of ambition we aspire to.”

He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen.

 

He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.

“I try to be upfront,” he said. “Am I doing something good? Or really bad?”

In 2019, this sounded like science fiction.

In 2023, people are beginning to wonder if Sam Altman was more prescient than they realized.

Now that OpenAI has released an online chatbot called ChatGPT, anyone with an internet connection is a click away from technology:

#   that will answer burning questions about organic chemistry,

#   write a 2,000-word term paper on Marcel Proust and his madeleine or

#   even generate a computer program that drops digital snowflakes across a laptop screen

— all with a skill that seems human.

As people realize that this technology is also a way of spreading falsehoods or even persuading people to do things they should not do, some critics are accusing Mr. Altman of reckless behavior.

This past week, more than a thousand A.I. experts and tech leaders called on OpenAI and other companies to pause their work on systems like ChatGPT, saying they present “profound risks to society and humanity.”

 

Now let us read my exchange with Sam ( through Facebook Messenger ) :

 

I wrote to Sam :

Sat 2:09 PM

AI – the new WMD ? History repeats itself  


ChatGPT replied that Parekh's Law of Chatbots is a step in the right direction, and that it is feasible to implement such regulation.

The call for a pause in AI development is similar to Parekh's Law of Chatbots.

Sam replied :

Sun 1:50 AM

Well, that was expected to happen sometime, if not now then later. That's why we've already started doing that in our firm, where we have a few months of downtime on all our programs to give independent labs the opportunity to implement safety protocols, which are audited and overseen by outside experts.


Risk and return are always related, but here the advantages outweigh the negatives, which in general are always outlined, so they were mitigated before the launch of our first models.


There are little to no disadvantages when it comes to our products, as the reason why they were created was to tackle the potential risks associated with developing AI technologies that could be misused or abused.

As a result, we founded OpenAI with the goal of creating safe, beneficial AI that would be developed and deployed in an ethical and responsible manner.


Our model doesn't fall under that, as it's designed to assist and augment human work, not to threaten jobs.

While it is true that AI and automation can sometimes replace certain types of jobs, it is important to note that the goal of AI is to increase efficiency and productivity, which can lead to new job opportunities and innovations.

 

I wrote back :

Sun 10:50 AM


Sam

Many thanks for your prompt and elaborate response


It is deeply reassuring that all the AI development taking place at OpenAI is guided by a Moral Compass


From what you write, it becomes amply clear that OpenAI is right up front, setting Benchmarks of Social Responsibility which all other developers will be compelled to follow


I seek your permission to reproduce your comments in my next blog


Best wishes and regards, Hemen / 02 April 2023

 

Sam replied :

Sun 6:32 PM


You're permitted, my friend :-)


I also have some advice you can attach to the footer


My advice would be for any technocrat looking to invest in this space to help forge our pathway to the new dawn.

Take a look at this site ( orderbooks.co ) to be part of this advocacy !


Dreams do come true.

 

With regards,

Hemen Parekh

www.hemenparekh.ai  /  03 April 2023


==============================================

Related Readings :

https://towardsdatascience.com/why-i-signed-the-pause-giant-ai-experiments-petition-e9711f672d18 

https://clivethompson.medium.com/the-dangers-of-highly-centralized-ai-96e988e84385 

https://avi-loeb.medium.com/will-future-ai-systems-be-legally-liable-8ac4339da547 

CC :

mark@futureoflife.org

carlos@futureoflife.org

press@futureoflife.org

anthony@futureoflife.org

meia@futureoflife.org

taylor@futureoflife.org

cade.metz@nytimes.com

sama@openai.com

 
