Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) and continue chatting with me, even when I am no longer here physically

Monday, 1 May 2023

Law of Chatbot : a small subset of EU Law of AI ?

 


 

Sure,

-  but whereas the EU Law of AI may not get every AI developer on board for the next 2 years,

-  BIG ( and small ) TECH can reach a consensus on a " self-regulatory Law of Chatbots ", within months !

   


With regards,


Hemen Parekh

www.hemenparekh.ai  /  hcp@RecruitGuru.com 


=============================================

 

 Context :

EU Likely to Reach Political Agreement on AI Law This Year, Says Tech Regulator Chief Vestager 

( Gadget360  /  01 May 2023 )

 

Extract :

The European Union is likely to reach a political agreement this year that will pave the way for the world's first major artificial intelligence (AI) law, the bloc's tech regulation chief Margrethe Vestager said on Sunday.

This follows a preliminary deal reached on Thursday by members of the European Parliament to push through the draft of the EU's Artificial Intelligence Act to a vote by a committee of lawmakers on May 11. Parliament will then thrash out the bill's final details with EU member states and the European Commission before it becomes law.

At a press conference after a Group of Seven digital ministers' meeting in Takasaki, Japan, Vestager said the EU AI Act was "pro-innovation" since it seeks to mitigate the risks of societal damage from emerging technologies.

Regulators around the world have been trying to find a balance where governments could develop "guardrails" on emerging artificial intelligence technology without stifling innovation.

"The reason why we have these guardrails for high-risk use cases is that cleaning up … after a misuse by AI would be so much more expensive and damaging than the use case of AI in itself," Vestager said.

While the EU AI Act is expected to be passed by this year, lawyers have said it will take a few years for it to be enforced. But Vestager said businesses could start considering the implication of the new legislation.

"There was no reason to hesitate and to wait for the legislation to be passed to accelerate the necessary discussions to provide the changes in all the systems where AI will have an enormous influence," she said in the interview.

While research on AI has been going on for years, the sudden popularity of generative AI applications such as OpenAI'S ChatGPT and Midjourney have led to a scramble by lawmakers to find ways to regulate any uncontrolled growth.

An organisation backed by Elon Musk and European lawmakers involved in drafting the EU AI Act are among those to have called for world leaders to collaborate to find ways to stop advanced AI from creating disruptions.

Digital ministers of the G7 advanced nations on Sunday also agreed to adopt "risk-based" regulation on AI, among the first steps that could lead to global agreements on how to regulate AI.

"Now when everyone has AI at their fingertips ... there's a need for us to show the political leadership to make sure that one can safely use AI and gain all the amazing possibilities of improvement in productivity and better services," Vestager said in an interview with Reuters.

 

My poser to ChatBot :

Many thanks for this quick comparison and contrast. I would appreciate it if you could reformat your views ( comparing the features of the two ) in a TABULATED format, for quick and easy grasp.

 

Chatbot replied :

Certainly, here is a tabulated format for the comparison and contrast of the proposed EU law to regulate AI and the proposed "Law of Chatbots":

| Features | Proposed EU law to regulate AI | Proposed "Law of Chatbots" |
| --- | --- | --- |
| Purpose | Regulate AI technologies to mitigate societal risks | Regulate chatbots to ensure ethical and fair use |
| Scope | Covers all AI technologies, with a focus on high-risk applications | Specifically targets chatbot technology |
| Enforcement | Will take a few years to be enforced | Not specified |
| Timeframe | Expected to be passed by this year | Not specified |
| Political support | Backed by EU tech regulation chief Margrethe Vestager and members of the European Parliament | Not specified |
| Innovation support | Considered "pro-innovation" | Not specified |
| Risk-based approach | Utilizes "guardrails" for high-risk applications | Not specified |
| Global collaboration | Calls for world leaders to collaborate on AI regulation | Not specified |
| Stakeholders | EU member states, European Commission, businesses | Chatbot developers, businesses using chatbots, consumers |
| Potential impact | Could pave the way for global agreements on AI regulation | Could improve the ethics and fairness of chatbot technology |

I hope this format helps you in comparing and contrasting the two proposed laws.

 

 

Related Readings :

AI Chatbots Have Been Used to Create Dozens of News Content Farms   ……………. Bloomberg  /   01 May 2023

 

Extract :

 

The news-rating group NewsGuard has found dozens of news websites generated by AI chatbots proliferating online, according to a report published Monday, raising questions about how the technology may supercharge established fraud techniques. 

But none disclose they’re populated using AI chatbots such as OpenAI Inc.’s ChatGPT and potentially Alphabet Inc.’s Google Bard, which can generate detailed text based on simple user prompts.

The majority of the sites appear to be content farms — low-quality websites run by anonymous sources that churn out posts to bring in advertising.

“Using AI models known for making up facts to produce what only look like news websites is fraud masquerading as journalism,” said Gordon Crovitz, co-CEO of NewsGuard and a former publisher of the Wall Street Journal.

OpenAI didn't immediately respond to a request for comment, but has previously stated that it uses a mix of human reviewers and automated systems to identify and enforce against the misuse of its model, including issuing warnings or, in severe cases, banning users.

 

Giansiracusa, the Bentley professor, said it was worrying how cheap the scheme has become, with no human cost to the perpetrators of the fraud.

“Before, it was a low-paid scheme. But at least it wasn’t free,” he said. “It’s free to buy a lottery ticket for that game now.”

 

 

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead  …… NYT  /  01 May 2023

Extract :

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

 “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I.

 “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their A.I. systems, he believes, they become increasingly dangerous.

 

“Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

The internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks.

“It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.

And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret.

The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology.

“I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.

 

 

Social Networks Want to Be Conversational AI’s Killer App  …………… Bloomberg  /  01  May 2023

 

Extract :

 

Over the last month, a few hundred million Snapchat users got a new best friend: a chatbot named “ My AI.”

 

The program, which sometimes dubs itself “Sage” when you ask what it prefers to be called, is powered by OpenAI’s powerful artificial intelligence tool, ChatGPT.

 

Users asked the chatbot to pretend to be their boyfriend and to do their homework, quizzed it, teased it, and got it to reveal its awareness of their location.

 

  

But Sage also portends something else — the imminent collision of generative AI tools like ChatGPT and social media.

 

Mark Zuckerberg said that conversational AI is coming soon to social networks like Facebook, Instagram and WhatsApp.

 

“I think there’s an opportunity to introduce AI agents to billions of people in ways that will be useful and meaningful,” Zuckerberg told investors.

 

But Zuckerberg and Snap CEO Evan Spiegel are now suggesting that social, not search, might actually be AI’s killer app — serving as a sort of ever-present virtual assistant who can suggest funny videos and give you clever ideas about what to say in group chats.

 

For Liz Perle, a former Instagram employee and Gen Z consultant, the more pressing concern is whether young users will use Sage and sites like ChatGPT to take academic shortcuts.

 

Simply put, chatbots like Sage aim to entice users to devote more time and attention to their social media apps. And that’s the gold standard for any social network — for better, and way too often, for worse. 

====================================================

Added on 05 May 2023 :


Artificial Intelligence alarms: Microsoft's Satya Nadella and Google's Sundar Pichai go to White House


 

 

Sunday, 30 April 2023

JudgeGPT : Thou Shall Not Hate

 


 

Of late, Election Campaign  Speeches in Karnataka have gotten ugly. 

This made me engage in a ( friendly ) debate with ChatGPT as follows :

 

 

My Poser to ChatGPT :

Yesterday, the Supreme Court of India issued a directive to all the State Governments to take SUO MOTO cognizance of " hate speeches " being delivered by politicians during the current on-going election campaign in Karnataka .

Without waiting for someone to file a complaint in this regard, Supreme Court directed the Police Department ( of the State where such speech was delivered ) to take " appropriate " action.

I believe this directive will be difficult to implement, since Police Departments of different States are quite likely to define " hate " , " speech " and " appropriate action " differently.

In your knowledge, are there universally agreed upon " definitions " of these words ?

What problems do you see arising out of such broad / vague , instructions ?

If you were to be the Chief Justice of India, how would you have worded your instructions / guidance / directive ?

If sharply worded / unambiguous " instructions " are not possible then what other " constraining " method to reduce / eliminate, " hate speeches " would you recommend ?

 

 hemen parekh / 29 April 2023

 

ChatGPT replied :

Here are 4 possible questions based on the given context:

1.     Are there universally agreed upon definitions of the terms "hate speech" and "appropriate action"?

 

2.     What problems do you anticipate with implementing the Supreme Court's directive on hate speeches during election campaigns?

 

3.     How would you, as the Chief Justice of India, have worded the directive to address hate speeches during election campaigns?

 

4.     If sharp and unambiguous instructions are not possible, what other methods would you recommend to reduce or eliminate hate speeches during election campaigns?

And here are possible answers:

1.     While there are some international conventions and guidelines on defining hate speech, the definitions may vary between countries or even within different regions of the same country. Similarly, the term "appropriate action" could be interpreted differently depending on the context and the local laws. Therefore, it might be challenging to achieve a universal agreement on these terms.

 

2.     One of the main challenges with implementing the directive is the variation in interpretation and enforcement across different state police departments. Without a clear and common understanding of what constitutes hate speech and what action is deemed appropriate, there might be inconsistencies in identifying and punishing offenders. Additionally, political interference or reluctance from law enforcement agencies to take action against influential politicians might hinder the effectiveness of the directive.

 

 

3.     As the Chief Justice of India, I would provide a detailed and specific definition of hate speech and the appropriate action to be taken by the police department. This could include examples of hate speech, a clear outline of the consequences for offenders, and guidelines for enforcing the directive consistently across all states. Furthermore, I would also ensure that there are mechanisms in place for monitoring and reporting instances of hate speech, and for providing adequate protection to whistleblowers and complainants.

 

4.     Other methods that could be used to reduce or eliminate hate speeches during election campaigns include:

 

 

·         Education and awareness campaigns to sensitize politicians and voters to the impact of hate speech on social harmony and democratic values.

 

·         Stronger regulations on political campaigns, including stricter rules on campaign finance, media coverage, and candidate eligibility criteria.

 

·         Increased accountability of political parties and their leaders for the behavior of their supporters and candidates.

 

·         Empowering civil society organizations, independent media, and citizen groups to monitor and report instances of hate speech and hold politicians accountable for their actions.

 

·         Encouraging political leaders to adopt a more inclusive and respectful approach to campaigning, focused on issues and policies rather than personal attacks or divisive rhetoric.

 

 

I shot back to ChatGPT :

For the Presidential Election in 2024, even in the USA, election campaigns by both the Republicans and the Democrats are heating up.

As far as the " Election Speeches " being made are concerned, have you observed any marked difference between the speeches made by American politicians and Indian politicians ?

If yes, then what ?

Are American political parties strictly observing any " self-imposed " code of conduct in respect of attacking the opponents ?

You are also well-versed with the current USA laws in respect of " hate speeches " - and with the past court verdicts in this matter.

If you were the Chief Justice of the Supreme Court of USA, what kind of specific directive would you issue to FBI or to the Federal Police, if any USA politicians are found to be violating any guidelines / code of conduct / US laws ?

 

ChatGPT responded :

As an AI language model, I do not have personal observations or opinions, but I can provide general information on the topics you raised.

Regarding the differences between American and Indian election speeches, it is challenging to make a direct comparison as the cultural, political, and social contexts in both countries are different.

However, in general, American political parties tend to use more polished and rehearsed speeches and focus on issues and policies, whereas Indian politicians may rely more on emotional appeals and rhetoric.

Still, this is a generalization, and there are exceptions in both cases.

Both American political parties have a self-imposed code of conduct that guides their campaigns, but there have been instances where candidates from both parties have engaged in negative campaigning and attacking their opponents.

The US has laws that prohibit hate speech, and the courts have issued verdicts on this matter.

However, there is still some ambiguity in defining hate speech, and there are on-going debates about the limits of free speech and hate speech.

If I were the Chief Justice of the Supreme Court of the US, I would issue a directive to the FBI or Federal Police to investigate any alleged violations of guidelines, codes of conduct, or US laws by politicians during their campaigns.

I would also emphasize the importance of upholding the principles of free and fair elections and encourage political parties to engage in constructive and respectful campaigning that focuses on issues and policies rather than personal attacks.

 

 

Dear Shri Kiren Rijijuji,

 

I urge you to convene an All Party Meeting – along with representatives from the Election Commission – to discuss the Supreme Court directive and quickly evolve a CODE OF CONDUCT to govern the speeches made during Election Campaigns

 

Why is this important ?


Ø  It shows up the ugly underbelly of a DEMOCRATIC FOUNDATION PROCESS called Election

Ø  It lowers our MORAL FABRIC

Ø  It spills over not only into Parliament Debates but also into our entire Public Institutions

 

Why is this Urgent ?


Ø  Because , in 2023 we have 9 State Elections and in 2024, besides General Elections, we also have 7 State Elections

 

Preplanning - How to go about ?


Ø  First launch an Online Poll among citizens regarding which words they consider as denoting " hate "


Ø  This starting list could be what the Lok Sabha Speaker considers as " Unparliamentary " words


Ø  Online poll-takers should be allowed to " propose " other words ( with minimum 1000 supporters )

 

Process - How to “ monitor / control / punish “ the offenders ?

 

Ø  All election-campaign speeches must be monitored through Video-Audio recording

 

Ø  Recording must be carried out by :

 

#   Election Commission Observer

#   State Police Officer

#   Members of audience ( Any number )

 

Ø  After recording , they will upload it on the ECI website

 

Ø  All of these persons shall use the ( modified ) cVigil mobile app ( or a special purpose app developed by ECI for this purpose )

 

 

Ø  An important difference from cVIGIL is that the Video/Audio recorder / up-loader shall fully identify himself / herself by providing full data ( including Aadhaar No / Voter Card No ) on the mobile app

 

Ø  Data shall comprise location / date / time / Speaker Name / Political Party Name etc

 

 

Ø  AI software ( on ECI website ) will :

#   Match the Voice Profiles of all politicians with each Audio File.

     With video-audio files getting uploaded from SEVERAL / AUTHENTIC sources, the probability of a FAKE file is small

#   Use " Speech-to-Text " conversion, then parse the resulting text to determine the " frequency " and " intensity " of the hate words

      

#   Assign " Marks " ( for each individual politician, and aggregate for all politicians belonging to each party ), determine the " punishment " ( enshrined in the AI algorithm ), publish the " punishment " on various websites, and forward the same to CBI and State Police for implementation.

     Copy will be sent to ECI and the Supreme Court
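The "aggregate for all politicians belonging to each party" step above could be sketched as follows. This is only an illustration; the record format ( politician, party, marks ) and the function name `aggregate_by_party` are my assumptions, not any real ECI software.

```python
# Illustrative sketch of per-party aggregation of individual marks.
# The input format -- (politician, party, marks) tuples -- is assumed.
from collections import defaultdict

def aggregate_by_party(records):
    """Sum each politician's marks into a party-wise total."""
    totals = defaultdict(int)
    for politician, party, marks in records:
        totals[party] += marks
    return dict(totals)

records = [("A", "Party X", 12), ("B", "Party X", 9), ("C", "Party Y", 4)]
print(aggregate_by_party(records))  # -> {'Party X': 21, 'Party Y': 4}
```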

     

     

>  Each " hate " word used ( in the speech ) will attract 1 mark

    ( at some future date, after upload of several thousand hours of audio files, the AI software can be " trained " to determine the " intensity " of a given " hate " word, based on the context in which it was used, the past speech-history of the speaker and the occurrence of other non-hate words in a given speech )

         

    

#  On the ECI website, these " Marks Earned " numbers will be dynamically displayed ( Politician-wise AND Political Party-wise )
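A minimal sketch of the scoring rule described above ( 1 mark per hate-word occurrence in the transcript ). The hate-word list here is a placeholder for whatever the proposed online poll would produce; the names `HATE_WORDS` and `score_speech` are illustrative, not part of any real ECI software.

```python
# Sketch of the "1 mark per hate word" scoring rule, applied to the
# speech-to-text transcript. The word list is a hypothetical poll result.
HATE_WORDS = {"chor", "napunsak", "maut ka saudagar"}  # placeholder list

def score_speech(transcript: str) -> int:
    """Return total marks: 1 mark for each occurrence of each hate word."""
    text = transcript.lower()
    marks = 0
    for word in HATE_WORDS:
        marks += text.count(word)
    return marks

print(score_speech("Woh chor hai, bilkul chor!"))  # -> 2
```

A later, " trained " version could weight each occurrence by an intensity score instead of a flat 1 mark, as the post suggests.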

     

 

Punishment - Calculation of " Quantum "

 

Ø  For a given politician - debarred from making a speech for :

#  Up to 10 marks            >   1 week
#  11 to 20 marks            >   1 month
#  More than 20 marks     >   6 months ( + Suspension as MP / MLA, if existing member )

 

Ø  For a given Political Party - debarred from contesting any Election for :

#  Up to aggregate of 50 marks   >  1 year
#  51 to 100 marks                      >  2 years
#  More than 100 marks               >  5 years ( including General Election )


Stopping Political Parties ( stopping eligibility ) from receiving Electoral Bonds ( for varying periods ) could also be considered
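The punishment-quantum tables above map marks to ban durations, and could be sketched as below. The thresholds are taken from the post; the band boundaries ( " up to 10 ", " 11 to 20 ", " more than 20 " for individuals, and 50 / 100 for parties ) are my reading of the intended, non-overlapping bands, and the function names are illustrative.

```python
# Hedged sketch of the punishment-quantum tables in the post.
def individual_ban(marks: int) -> str:
    """Speech-ban duration for one politician, from total hate-word marks."""
    if marks > 20:
        return "6 months (+ suspension as MP/MLA, if existing member)"
    if marks > 10:
        return "1 month"
    if marks > 0:
        return "1 week"
    return "no ban"

def party_ban(aggregate_marks: int) -> str:
    """Election-contest ban for a party, from its aggregate marks."""
    if aggregate_marks > 100:
        return "5 years (including General Election)"
    if aggregate_marks > 50:
        return "2 years"
    if aggregate_marks > 0:
        return "1 year"
    return "no ban"

print(individual_ban(15))  # -> 1 month
print(party_ban(120))      # -> 5 years (including General Election)
```

Because the mapping is a fixed function of the marks, anyone could re-run it and verify the outcome, which is the post's point about software-determined punishment.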

 

 

Rationale - the logic, and how exactly the punishment quantum was arrived at by the AI software, shall be openly displayed on the ECI website for everyone to see.

       Since the punishment meted out to the offender individual ( and to the party to which he / she belongs ) is entirely SOFTWARE determined, without any human intervention, there can be no reason to allege any bias / partiality / leniency on the part of ECI / State Police


Authority for administering punishment :


   #   For individual offender >  Concerned State Police

   #   For Political Party        >  Election Commission

 

 

 

 

With regards,

Hemen Parekh

www.hemenparekh.ai  /  01 May 2023

 


Related Readings :

 

Ø  A NARCO Test ?  …………………………………………  28   Feb  2014

 

Extract :

To bring out the " truth " from a suspected criminal, police deliver a NARCO test by administering what is popularly known as TRUTH SERUM

It reveals a person's innermost thoughts and feelings


Occasionally , his feelings about others . Feelings that have so far , remained suppressed in his subconscious


Of late , we have heard our politicians refer to their political opponents , as :


>   Maut Ka Saudagar


>   Khooni Panja


>   Shehzada


>   Chor ( Thief  )


>   Criminal


>   Mendhak  (  Frog  )


>   Bandar  (  Monkey  )


>   Napunsak  (  Impotent  )...........etc



Those that failed to receive the full prescribed dose of the Truth Serum just call their opponents :


>   Corrupt


>   Indecisive


>  Incompetent


>  Insecure


>  Spineless


>  Confused..................etc


I wonder whether, once in five years, our good doctor EC ( Election Commissioner ) administers a NARCO test to our politicians, by announcing elections !


What I never knew is that this NARCO test also brings to the surface , all those wonderful / sympathetic   feelings that our political bosses harbored for us - the poor public - always , but were too busy to express all along  ! 


Here is how they describe us - the poor people of Mother India  :


>  Suppressed


>  Oppressed


>  Tortured


>  Maligned


>  Neglected


>  Miserable


>  Starving


>  Suffocated


>  Deprived ..............etc



Such  a glorious show of empathy  !



I almost wish , we had National Elections , every alternate year  !



*  hemen  parekh  (  01 March  2014  /  Mumbai  )