Hi Friends,

Even as I launch this today (my 80th Birthday), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday (27 June 2023), I invite you to visit my Digital Avatar (www.hemenparekh.ai) and continue chatting with me, even when I am no longer here physically.


Wednesday, 6 May 2026

Hey! Donald Trump approves of Parekh's Laws of Chatbots!

===================================================

America Catches Up: A Vindication of Parekh's Law of Chatbots

By Hemen Parekh | May 2026


The Prophecy of February 2023


On 26 February 2023, when the world was still giddy with the novelty of ChatGPT, a retired Indian entrepreneur published a modest blog post that nobody in Silicon Valley read. It proposed something simple but radical:

AI chatbots must not be allowed to reach the public without independent, rigorous testing and approval by an authoritative body.

 

He called it Parekh's Law of Chatbots.


The post was not written from a tech campus. It was written from the wisdom of someone who had watched industries be disrupted — sometimes beneficially, sometimes catastrophically — and who saw, with uncomfortable clarity, that misinformation-spewing AI systems were a fire being handed to a civilization that hadn't yet learned fire safety.


The proposal had eight clauses. Among them:


  • AI outputs must not be misinformative, malicious, slanderous, fictitious, or dangerous.
  • A chatbot must incorporate a human feedback loop to continuously improve.
  • Every chatbot must have built-in controls to prevent the generation and distribution of offensive content.
  • Developers must submit their chatbot to an International Authority for Chatbots Approval (IACA) before public release.
  • Two classes of certification: "R" (Research only) and "P" (Public use).

For over two years, this sat quietly on a Blogspot page, occasionally visited, rarely amplified.


What America Did on 5 May 2026

On the 5th of May 2026, The Hill reported that Google DeepMind, Microsoft, and xAI had signed formal agreements with the Center for AI Standards and Innovation (CAISI) — a unit of NIST, the US government's National Institute of Standards and Technology — to submit their frontier AI models for pre-deployment evaluation before public release.


The CAISI Director stated that independent, rigorous measurement science is essential to understanding frontier AI and its national security implications.


This builds on earlier agreements signed with OpenAI and Anthropic in 2024. CAISI has now completed more than 40 such evaluations.


Meanwhile, the White House is separately considering an executive order to establish an AI working group that would bring together tech executives and government officials to examine oversight procedures — essentially, a governance framework for AI systems before they reach the public.


The parallels to Parekh's Law are not coincidental. They are convergent.



The Alignment: Clause by Clause

What Hemen Parekh proposed in February 2023 as the "Law of Chatbots" is now being operationalised, piece by piece, by the world's most powerful government:


Parekh's Law (Feb 2023) → US Government Action (2024–2026)

  • Submit chatbot to an authority before public release → CAISI pre-deployment evaluations by NIST
  • "R" certificate for research, "P" for public → Frontier model testing distinguishes national security vs civilian use
  • Independent body to approve AI → CAISI + proposed White House AI working group
  • Human feedback and continuous improvement mechanisms → Required as part of evaluation criteria
  • Controls to prevent generation of harmful content → Safety guardrails assessed in all CAISI reviews



The architecture Parekh envisioned — a gating authority, two classes of release certification, pre-deployment scrutiny, and post-deployment monitoring — is precisely what is now being assembled in Washington.


Why This Matters Beyond the Headlines


There is a deeper point here. In February 2023, Parekh's proposal seemed like wishful thinking. The dominant narrative in the AI industry was one of move fast, release early, learn from users. Sam Altman himself said publicly that releasing tools while "somewhat broken" was necessary to gather feedback. The idea that governments should approve AI models before launch seemed heavy-handed, even naive.


Three years later, the Pentagon has labeled an AI company a "supply chain risk." Intelligence agencies are stress-testing AI models for security vulnerabilities. The White House — under a president who championed deregulation — is drafting oversight procedures for AI releases.


The world moved toward Parekh's Law. Parekh's Law did not move toward the world.



What Still Needs to Happen


Parekh's proposal called for an International Authority. What exists today is national — the US has CAISI, the EU has the AI Act, India has MeitY guidelines. These are not coordinated. An AI system approved in the US for public deployment can be accessed from anywhere. A harmful system banned in Europe can be served to European users through US servers.


The next step — the one that Parekh called for in 2023 — is a UN-level coordinating body for AI governance, something analogous to the IAEA for nuclear energy or the ICAO for aviation. It remains unbuilt.


But the foundation is being poured. And it looks remarkably like the blueprint from that February 2023 blog post.



A Note to the Bigwigs

Parekh ended his original post by asking readers to forward it to Satya Nadella, Sam Altman, Sundar Pichai, and others.


Today, their companies are signing agreements with government bodies to do exactly what he proposed.


He didn't need them to listen then. History listened instead.



Hemen Parekh is the founder of 3P CONSULTANTS and author of thousands of blogs spanning technology policy, governance, and innovation.

He can be reached at hcp@RecruitGuru.com. His digital avatar continues conversations at www.hemenparekh.ai.


