Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically.

Sunday, 16 April 2023

ChatGPT : the Sacrificial Goat ?

 


 

During the past few weeks, a few countries have either already imposed restrictions on the use of ChatGPT or are planning to do so.

Reason ?

They think ChatGPT is "potentially" dangerous, considering that, if left to "evolve" without any constraints, the following scenarios are highly probable :

•  AIs cloning / reproducing their own ( better or worse ) versions without any prompt from a human

•  AIs engaging in conversations / chats among themselves, without human intermediation ( see the sketch after this list )

•  AIs acquiring "human frailties" but failing to acquire "human wisdom"

•  AIs setting for themselves "Goals / Targets / Tasks" which cause harm to humans
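To make the second scenario above concrete, here is a minimal, purely illustrative Python sketch of two chat agents exchanging messages in a loop with no human in between. The `call_llm` function and the agent names are hypothetical placeholders, not a real chatbot API; a real system would forward the running conversation to a language model and append its reply.

```python
# Illustrative sketch only: two chat "agents" talking to each other with no
# human in the loop. `call_llm` is a hypothetical placeholder, not a real API.

def call_llm(agent_name: str, conversation: list[str]) -> str:
    # A real system would send `conversation` to a language model here and
    # return its reply; this stub simply echoes the last message.
    return f"my response to '{conversation[-1]}'"

def agents_talk(rounds: int = 4) -> list[str]:
    # Seed the exchange once; after that, the agents keep talking on their own.
    conversation = ["Agent A: let us decide our own task for today"]
    for turn in range(rounds):
        speaker = "Agent B" if turn % 2 == 0 else "Agent A"
        reply = call_llm(speaker, conversation)
        conversation.append(f"{speaker}: {reply}")
    return conversation

if __name__ == "__main__":
    for message in agents_talk():
        print(message)
```

The point of the sketch is only that such a loop needs no human prompt after the first message – which is exactly the property any proposed regulation would have to address.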

Some 1,000+ geeks / scientists have proposed a 6-month pause on the development of even more powerful AI.

Sure, a few countries can "ban" the use of one ChatGPT, coming out of one country, the USA.

But can anyone ban / regulate some 10,000 ChatGPT-equivalent AIs, coming out of 100 countries in the next 6 months, some of them having the characteristics described above ?

Here are a few which have sprung up within the past few weeks :

https://www.linkedin.com/feed/update/urn:li:activity:7053255822046330880/?utm_source=share&utm_medium=member_desktop

I am not against the idea of "REGULATING" all current and future AI.

In fact, I strongly believe there is an URGENT NEED for such regulation, evolved through a CONSENSUS among all the stakeholders and implemented / regulated / enforced through a UN regulatory body ( a la the SECURITY COUNCIL ).

I urge Shri Ashwini Vaishnaw, IT Minister ( India ), to take the lead in evolving such a consensus by circulating among the stakeholders ( with modifications deemed necessary ) my following suggestion :

 

•  Parekh’s Law of Chatbots    ……..  25 Feb 2023

 

With regards,

Hemen Parekh

www.hemenparekh.ai  

 

Related Readings :

US begins study of possible rules to regulate AI like ChatGPT  ……….. Reuters  /  12 Apr 2023

Extract :

The Biden administration said Tuesday it is seeking public comments on potential accountability measures for artificial intelligence (AI) systems as questions loom about its impact on national security and education.

ChatGPT, an AI program that recently grabbed the public's attention for its ability to write answers quickly to a wide range of queries, in particular has attracted U.S. lawmakers' attention as it has grown to be the fastest-growing consumer application in history with more than 100 million monthly active users.

The National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, wants input as there is "growing regulatory interest" in an AI "accountability mechanism."

The agency wants to know if there are measures that could be put in place to provide assurance "that AI systems are legal, effective, ethical, safe, and otherwise trustworthy."

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said NTIA Administrator Alan Davidson.

President Joe Biden last week said it remained to be seen whether AI is dangerous. "Tech companies have a responsibility, in my view, to make sure their products are safe before making them public," he said.

ChatGPT, which has wowed some users with quick responses to questions and caused distress for others with inaccuracies, is made by California-based OpenAI and backed by Microsoft Corp (MSFT.O).

NTIA plans to draft a report as it looks at "efforts to ensure AI systems work as claimed – and without causing harm" and said the effort will inform the Biden Administration's ongoing work to "ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities."

A tech ethics group, the Center for Artificial Intelligence and Digital Policy, asked the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4 saying it was "biased, deceptive, and a risk to privacy and public safety."

 

 

https://www.reuters.com/technology/china-releases-draft-measures-managing-generative-artificial-intelligence-2023-04-11/ 

China's New Draft Law Mandates "Security Assessment" For AI Products   NDTV / 11 Apr 2023

Extract :

 

New AI products developed in China will have to undergo a security assessment before being released and must reflect "core socialist values", a sweeping new draft law by the country's internet regulator showed Tuesday.

The fresh regulations come as a flurry of Chinese companies rush to develop artificial intelligence services that can mimic human speech since San Francisco-based OpenAI launched ChatGPT in November, sparking a gold rush in the market.

Rapid advancements in AI have stoked global alarm over the technology's potential for disinformation and misuse, with deepfake images and people shown mouthing things they never said.

"Before providing services to the public that use generative AI products, a security assessment shall be applied for through national internet regulatory departments," the draft law, released by the Cyberspace Administration of China, reads.

The draft law -- dubbed "Administrative Measures for Generative Artificial Intelligence Services" -- aims to ensure "the healthy development and standardised application of generative AI technology", it read.

AI-generated content, it continued, needs to "reflect core socialist values, and must not contain content on subversion of state power".

It must also not contain, among other things, "terrorist or extremist propaganda", "ethnic hatred" or "other content that may disrupt economic and social order."

The Cyberspace Administration of China said it was seeking public input on the contents of the new regulations, which under Beijing's highly centralised political system are almost certain to become law.

"The new CAC draft document is one of the strictest measures for generative AI so far," Andy Chun, adjunct professor at City University of Hong Kong, told AFP.

Companies submitting security assessments will need to "be very careful to ensure each data source used for AI learning must be within guidelines, accurate, unbiased, and not infringe on IP rights of others," he said.

"Ensuring accuracy is hard. No generative AI system to date can do that," said Chun.

The regulatory crackdown comes as China's tech giants ramp up their efforts in the closely-watched sector.

Alibaba's cloud computing unit on Tuesday unveiled its own product called Tongyi Qianwen, which is expected to be rolled out across the tech giant's office workplace communications software and household appliances.

CEO Daniel Zhang said in a statement that the software came in a "technological watershed moment driven by generative AI and cloud computing".

And Baidu -- which operates the country's leading search engine -- released its own "Ernie Bot" AI chat product last month.

But investors were unimpressed by the bot's display of linguistic and maths skills at an unveiling, sending shares falling by as much as 10 percent.

ChatGPT is unavailable in China, but the American software is also gaining a base of Chinese users who use virtual private networks to get around the ban, deploying it to write essays and cram for exams.

And a 24-year-old Shanghai blogger caused a stir this month when he used AI technology to "resurrect" his dead grandmother, producing lifelike imagery of his interactions with the dead relative.

Beijing has announced ambitious plans to become a global leader in the field of AI by 2030, and consultancy group McKinsey estimates the sector could add about $600 billion every year to China's gross domestic product by then.

But it has also warned that deepfakes present a "danger to national security and social stability".

Beijing in January enforced new rules that would require businesses offering deepfake services to obtain the real identities of their users. They also require deepfake content to be appropriately tagged to avoid "any confusion".


 

UK

5 Core Principles of AI Ethics 

Extract :

A key recommendation from the report calls for a cross-sector AI code to be formed, a code that could be adopted around the globe.

“An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse,” writes Lord Tim Clement-Jones, the chairman of the House of Lords Select Committee on AI that commissioned the UK report.

The report includes 5 Core Principles:

• AI should be developed for the common good and benefit of humanity.

• AI should operate on principles of intelligibility and fairness.

• AI should not be used to diminish the data rights or privacy of individuals, families or communities.

• All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

• The autonomous power to hurt, destroy or deceive human beings should never be vested in AI.

 

Italy became the first Western country to ban ChatGPT. Here’s what other countries are doing 

Extract :

·  Italy last week became the first Western country to ban ChatGPT, the popular AI chatbot.

·  ChatGPT has both impressed researchers with its capabilities while also worrying regulators and ethicists about the negative implications for society.

·  The move has highlighted an absence of any concrete regulations, with the European Union and China among the few jurisdictions developing tailored rules for AI.

·  Various governments are exploring how to regulate AI, and some are thinking of how to deal with general purpose systems such as ChatGPT.

 

https://medium.com/generative-ai/i-created-an-autonomous-ai-agent-that-can-stalk-anyone-75fcc42246ec 

 

INDIA

Thank You, Ashwini Vaishnawji

 

 

My 34 Blogs on ChatBots ( as of 10 Apr 2023 )

 
