Hi Friends,

Even as I launch this today (my 80th Birthday), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday (27 June 2023), I invite you to visit my Digital Avatar (www.hemenparekh.ai) – and continue chatting with me, even when I am no longer here physically.

Monday, 1 May 2023

Law of Chatbot: a small subset of the EU Law of AI?

Sure,

-  but whereas the EU Law of AI may not get every AI developer
   on board for the next 2 years,

-  BIG (and small) TECH can reach a consensus on a
   "self-regulatory Law of Chatbots" within months!
   


With regards,


Hemen Parekh

www.hemenparekh.ai  /  hcp@RecruitGuru.com 


=============================================

 

 Context :

EU Likely to Reach Political Agreement on AI Law This Year, Says Tech Regulator Chief Vestager 

( Gadget360  /  01 May 2023 )

 

Extract :

The European Union is likely to reach a political agreement this year that will pave the way for the world's first major artificial intelligence (AI) law, the bloc's tech regulation chief Margrethe Vestager said on Sunday.

This follows a preliminary deal reached on Thursday by members of the European Parliament to push through the draft of the EU's Artificial Intelligence Act to a vote by a committee of lawmakers on May 11. Parliament will then thrash out the bill's final details with EU member states and the European Commission before it becomes law.

At a press conference after a Group of Seven digital ministers' meeting in Takasaki, Japan, Vestager said the EU AI Act was "pro-innovation" since it seeks to mitigate the risks of societal damage from emerging technologies.

Regulators around the world have been trying to find a balance where governments could develop "guardrails" on emerging artificial intelligence technology without stifling innovation.

"The reason why we have these guardrails for high-risk use cases is that cleaning up … after a misuse by AI would be so much more expensive and damaging than the use case of AI in itself," Vestager said.

While the EU AI Act is expected to be passed by this year, lawyers have said it will take a few years for it to be enforced. But Vestager said businesses could start considering the implication of the new legislation.

"There was no reason to hesitate and to wait for the legislation to be passed to accelerate the necessary discussions to provide the changes in all the systems where AI will have an enormous influence," she said in the interview.

While research on AI has been going on for years, the sudden popularity of generative AI applications such as OpenAI's ChatGPT and Midjourney has led to a scramble by lawmakers to find ways to regulate any uncontrolled growth.

An organisation backed by Elon Musk and European lawmakers involved in drafting the EU AI Act are among those to have called for world leaders to collaborate to find ways to stop advanced AI from creating disruptions.

Digital ministers of the G7 advanced nations on Sunday also agreed to adopt "risk-based" regulation on AI, among the first steps that could lead to global agreements on how to regulate AI.

"Now when everyone has AI at their fingertips ... there's a need for us to show the political leadership to make sure that one can safely use AI and gain all the amazing possibilities of improvement in productivity and better services," Vestager said in an interview with Reuters.

 

My poser to ChatBot :

Many thanks for this quick comparison and contrast. I would appreciate it if you could reformat your views (comparing the features of the two) in a TABULATED format, for quick and easy grasp.

 

Chatbot replied :

Certainly, here is a tabulated format for the comparison and contrast of the proposed EU law to regulate AI and the proposed "Law of Chatbots":

Features | Proposed EU law to regulate AI | Proposed "Law of Chatbots"
---------|--------------------------------|---------------------------
Purpose | Regulate AI technologies to mitigate societal risks | Regulate chatbots to ensure ethical and fair use
Scope | Covers all AI technologies, with a focus on high-risk applications | Specifically targets chatbot technology
Enforcement | Will take a few years to be enforced | Not specified
Timeframe | Expected to be passed by this year | Not specified
Political support | Backed by EU tech regulation chief Margrethe Vestager and members of the European Parliament | Not specified
Innovation support | Considered "pro-innovation" | Not specified
Risk-based approach | Utilizes "guardrails" for high-risk applications | Not specified
Global collaboration | Calls for world leaders to collaborate on AI regulation | Not specified
Stakeholders | EU member states, European Commission, businesses | Chatbot developers, businesses using chatbots, consumers
Potential impact | Could pave the way for global agreements on AI regulation | Could improve the ethics and fairness of chatbot technology

I hope this format helps you in comparing and contrasting the two proposed laws.

 

 

Related Readings :

AI Chatbots Have Been Used to Create Dozens of News Content Farms   ……………. Bloomberg  /   01 May 2023

 

Extract :

 

The news-rating group NewsGuard has found dozens of news websites generated by AI chatbots proliferating online, according to a report published Monday, raising questions about how the technology may supercharge established fraud techniques. 

None of the sites disclose that they are populated using AI chatbots such as OpenAI Inc.'s ChatGPT and potentially Alphabet Inc.'s Google Bard, which can generate detailed text based on simple user prompts.

The majority of the sites appear to be content farms — low-quality websites run by anonymous sources that churn out posts to bring in advertising.

“Using AI models known for making up facts to produce what only look like news websites is fraud masquerading as journalism,” said Gordon Crovitz, co-CEO of NewsGuard and a former publisher of the Wall Street Journal.

OpenAI didn't immediately respond to a request for comment, but has previously stated that it uses a mix of human reviewers and automated systems to identify and enforce against the misuse of its model, including issuing warnings or, in severe cases, banning users.

 

Noah Giansiracusa, a professor at Bentley University, said it was worrying how cheap the scheme has become, with no human cost to the perpetrators of the fraud.

“Before, it was a low-paid scheme. But at least it wasn’t free,” he said. “It’s free to buy a lottery ticket for that game now.”

 

 

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead …… NYT / 01 May 2023

Extract :

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said.

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I.

 “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their A.I. systems, he believes, they become increasingly dangerous.

 

“Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks.

“It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.

And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret.

The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology.

“I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.

 

 

Social Networks Want to Be Conversational AI’s Killer App  …………… Bloomberg  /  01  May 2023

 

Extract :

 

Over the last month, a few hundred million Snapchat users got a new best friend: a chatbot named “My AI.”

 

The program, which sometimes dubs itself “Sage” when you ask what it prefers to be called, is powered by OpenAI’s powerful artificial intelligence tool, ChatGPT.

 

Users asked the chatbot to pretend to be their boyfriend and to do their homework, quizzed it, teased it, and got it to reveal its awareness of their location.

 

  

But Sage also portends something else — the imminent collision of generative AI tools like ChatGPT and social media.

 

Mark Zuckerberg said that conversational AI is coming soon to social networks like Facebook, Instagram and WhatsApp.

 

“I think there’s an opportunity to introduce AI agents to billions of people in ways that will be useful and meaningful,” Zuckerberg told investors.

 

But Zuckerberg and Snap CEO Evan Spiegel are now suggesting that social, not search, might actually be AI’s killer app — serving as a sort of ever-present virtual assistant who can suggest funny videos and give you clever ideas about what to say in group chats.

 

For Liz Perle, a former Instagram employee and Gen Z consultant, the more pressing concern is whether young users will use Sage and sites like ChatGPT to take academic shortcuts.

 

Simply put, chatbots like Sage aim to entice users to devote more time and attention to their social media apps. And that’s the gold standard for any social network — for better, and way too often, for worse. 

====================================================

Added on 05 May 2023 :


Artificial Intelligence alarms: Microsoft's Satya Nadella and Google's Sundar Pichai go to White House


 

 
