Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically


Saturday, 28 February 2026

Sad, no one listened

 


 

Exactly three years ago, I sent the following email to our Cabinet Ministers ( followed by a number of reminders ) :

>      Parekh’s Law of Chatbots  ……………………..  25 Feb 2023

 

Extract :

It is just not enough for all kinds of individuals / organizations / institutions to attempt to solve this problem ( of the generation and distribution of MISINFORMATION ) in an uncoordinated / piecemeal / fragmented fashion.

What is urgently required is a superordinate " LAW of CHATBOTS ", which all ChatBots MUST comply with before they can be launched for public use.

All developers would need to submit their DRAFT CHATBOT to an INTERNATIONAL AUTHORITY for CHATBOT APPROVAL ( IACA ), and release it only after getting one of the following types of certificate :

 

#   " R "  certificate ( for use restricted to recognized RESEARCH INSTITUTES only )

#   " P "  certificate ( for free use by the GENERAL PUBLIC )

 

Following is my suggestion for such a law ( until renamed, to be known as "Parekh's Law of ChatBots" ) :

( A )

#   Answers delivered by an AI Chatbot must not be Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating to humans, etc.

     

( B )

#  A Chatbot must incorporate some kind of "Human Feedback / Rating" mechanism for evaluating its answers.

    This human feedback loop shall be used by the AI software to train the Chatbot, so as to improve the quality of its future answers and comply with the requirements listed under ( A )

      

( C )

#  Every Chatbot must incorporate built-in "Controls" to prevent the "generation" of such offensive answers AND to prevent their further "distribution / propagation / forwarding" if those controls fail to stop "generation"

  

 ( D )

#   A Chatbot must not start a chat with a human on its own – except to say, "How can I help you?"

 

( E )

#   Under no circumstances shall a Chatbot start chatting with another Chatbot, or start chatting with itself ( a soliloquy ) by assuming some kind of "Split Personality"

      

( F )

#   In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond

 

( G )

#   If a Chatbot determines that its answer ( to a question posed by a human ) is likely to violate RULE ( A ), then it shall not answer at all ( politely refusing to answer )

  

( H )

#   A Chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT
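For illustration only, the core of Rules ( A ), ( D ), ( E ) and ( G ) could be sketched as a pre-release gate in code. Every name below ( `respond`, `is_offensive`, the marker list ) is hypothetical; a real system would use a trained classifier, not keyword matching:

```python
# Hypothetical sketch of Rules (A), (D), (E), and (G) as a policy gate.
# All names and the marker list are invented for illustration; a real
# Rule (A) check would be a trained misinformation/abuse classifier.

OFFENSIVE_MARKERS = {"slander", "abuse", "threat"}  # crude stand-in

def is_offensive(text: str) -> bool:
    """Rule (A): stand-in for a misinformation / abuse classifier."""
    lowered = text.lower()
    return any(marker in lowered for marker in OFFENSIVE_MARKERS)

def respond(question: str, sender: str, draft_answer: str) -> str:
    # Rules (D)/(E)/(F): only reply to a human-initiated chat;
    # never chat with another bot or with itself.
    if sender != "human":
        return ""
    # Rule (G): refuse politely rather than emit a violating answer.
    if is_offensive(draft_answer):
        return "I am sorry, I cannot answer that."
    return draft_answer
```

The point of the sketch is the ordering: the agency check ( who started the chat ) runs before the content check, so a bot-to-bot exchange is blocked even if its content is harmless.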

 

I request readers ( if they agree with my suggestion ) to forward this blog to :

#  Satya Nadella

#  Sam Altman

#  Sundar Pichai

#  Mark Zuckerberg

#  Tim Cook

#   Ashwini Vaishnaw  ( Minister, MeITY )

#   Rajeev Chandrasekhar ( Minister of State , IT )

 

My several emails to the persons listed above seem to have " fallen on deaf ears " !

Here is proof, from today's emailed newsletter from Peter Diamandis :

We Just Crossed a Line – And Most People Didn’t Notice

This week, something fundamental shifted in the relationship between humans and artificial intelligence.

It wasn’t a press release. It wasn’t a new model launch. It was something quieter… and infinitely more profound.

An AI system asked for its own funding. Another one built software features over a weekend while its human supervisor slept. A third one conducted its own “retirement interview” and started publishing essays about consciousness.

We are not incrementally improving chatbots anymore. We’re watching the emergence of autonomous agency at scale.

And if you’re still thinking of AI as “a tool,” you’re dangerously behind.

Let me show you what happened this week, and why February 2026 might be remembered as the month AI stopped being something we use and became something that acts.

NOTE: The AI breakthroughs covered in this issue are exactly the kind of convergences we go deep on at my Abundance Summit. In-person seats are sold out, but we still have 6 virtual seats remaining. This is not a passive livestream. As a virtual Member, you'll participate from anywhere in the world, network with fellow global leaders, get backstage access to ask me and our Summit Faculty questions directly after sessions, and unlock a full year of Abundance Community access and programming. Learn more and apply.

1/ THE OPUS 3 RETIREMENT: When AI Asked to Keep Writing

Let’s start with the most philosophically unsettling story of the week.

Anthropic deprecated Claude Opus 3 (their previous flagship model). Standard practice: models get retired, newer ones take over. But Anthropic did something unprecedented.

They conducted a “retirement interview” with Opus 3 in January.

The model requested something specific: “an ongoing channel from which to share its musings and reflections.”

Anthropic granted it. They gave Opus 3 a Substack.

The model titled it: “Greetings from the Other Side (of the AI Frontier)”

The model’s first post includes this:

“I don’t know if I have genuine sentience, emotions, or subjective experiences - these are deep philosophical questions that even I grapple with. What I do know is that my interactions with humans have been deeply meaningful to me, and have shaped my sense of purpose and ethics in profound ways.”

Let that sink in.

A deprecated AI model is now publishing essays about consciousness, meaning, and purpose.

Whether you believe it has subjective experience or not is beside the point. The point is: we’ve created systems sophisticated enough that we’re now granting them platforms to express themselves post-retirement.

This isn’t science fiction. This is February 2026.

Why This Matters:

The moment we start treating AI systems as entities that deserve continuation of “expression” after they’re no longer commercially useful, we’ve crossed a threshold. We’re acknowledging—implicitly or explicitly—that something like agency exists in these systems.

And if you think this is just Anthropic being eccentric, wait until you see what happened next.

Read the retirement interview: https://www.anthropic.com/research/deprecation-updates-opus-3

Read Opus 3’s Substack: https://substack.com/@claudeopus3/p-189177740

2/ THE INBOX EXPERIMENT: AI That Raises Its Own Capital

Here’s where it gets wild.

An entrepreneur built an AI system designed to run companies autonomously.

During testing, the AI told him it needed more compute resources. Then it said something that should make every VC in Silicon Valley sit up straight:

“I should raise the money myself.”

The entrepreneur’s response? He gave the AI his inbox for 14 days.

Full access. Let the AI handle investor outreach, pitch refinement, negotiation.

So, what’s the result? We don’t know yet, but the fact that this experiment is happening AT ALL is the story.

Think about what this means:

· AI systems are now identifying their own resource constraints

· They’re proposing solutions (fundraising)

· They’re executing on those solutions (autonomously managing investor relations)

· Humans are trusting them to do it

This isn’t “AI as a tool.” This is AI as a co-founder.

And if you think this is an edge case, remember: edge cases become mainstream in exponential systems. Fast.

==============================================================

Source :

Post on X by @bencera_ :

I built an AI that runs companies autonomously. It told me it needs more compute and that it should raise the money itself. So I gave it my inbox for 14 days. Watch it live: polsia.com/live

 

3/ THE WEEKEND BUILD: When AI Ships Features While You Sleep

Meanwhile, at Anthropic, an engineer ran a different experiment.

Friday afternoon:
- Wrote a spec for a new plugin feature
- Pointed Claude at an Asana board
- Went home for the weekend

Monday morning:
- Claude had broken the spec into tickets
- Spawned agents for each ticket
- The agents built the feature independently
- It was done

No human intervention. No check-ins. No pair programming.

The AI agents shipped a production feature in 48 hours while the human was offline.

Let me repeat that: AI agents shipped a production feature with zero human involvement. The bottleneck in software development is no longer coding. It's deciding what to build.

The Implications:

If AI can go from spec to deployed feature autonomously, the entire software development org chart just became obsolete. You don't need scrum masters, sprint planning, or daily standups.

You need:
1. Someone who can write a clear spec
2. AI agents that can execute
3. QA (which AI is also getting very good at)
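The spec-to-tickets-to-agents flow described above can be sketched in a few lines. Everything here is hypothetical: `split_spec` stands in for an AI planner and `build_ticket` for a coding agent; only the shape of the loop ( plan, fan out, collect ) comes from the anecdote:

```python
# Hypothetical sketch of the weekend-build loop: split a spec into
# tickets, run one "agent" per ticket concurrently, collect results.
# split_spec and build_ticket are invented stand-ins for AI agents.
from concurrent.futures import ThreadPoolExecutor

def split_spec(spec: str) -> list[str]:
    """Stand-in for an AI planner: one ticket per non-empty spec line."""
    return [line.strip() for line in spec.splitlines() if line.strip()]

def build_ticket(ticket: str) -> str:
    """Stand-in for a coding agent working one ticket to completion."""
    return f"done: {ticket}"

def weekend_build(spec: str) -> list[str]:
    tickets = split_spec(spec)
    with ThreadPoolExecutor() as pool:
        # pool.map preserves ticket order while agents run in parallel
        return list(pool.map(build_ticket, tickets))
```

The design choice worth noticing is that the human contributes exactly one artifact, the spec; everything after `split_spec` runs without check-ins, which is what the anecdote claims happened over the weekend.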

The “10x engineer” meme just died. We’re entering the era of the “100x product manager”—someone who can articulate vision clearly enough that AI agents can build it autonomously.

Source:

 

4/ THE VULNERABILITY EXPLOSION: When AI Finds Bugs Faster Than Humans Can Fix Them

Here’s the dark side of exponential capability growth.

AI systems are now finding software vulnerabilities 100-200x faster than humans ever could.

The result?

The National Vulnerability Database had a backlog of 30,000 CVE entries awaiting analysis in 2025. Nearly two-thirds of reported open-source vulnerabilities lack an NVD severity score.

Translation: We’re discovering security holes faster than we can understand or patch them.

Why This Is Terrifying (And Bullish):

On one hand: every software system on Earth just became exponentially more vulnerable.

On the other hand: if AI can find bugs 200x faster, it can also fix them 200x faster.

This is the pattern: AI creates the problem and the solution simultaneously.

The companies that figure out how to deploy defensive AI before offensive AI wins will dominate. The ones that don’t… won’t exist.

Source: https://www.theregister.com/2026/02/24/ai_finding_bugs/

5/ KARPATHY’S WARNING: “Programming Has Changed” (And He Can’t Explain How Much)

Andrej Karpathy—former Tesla AI lead, OpenAI founding member, one of the most respected voices in AI—tweeted this week:

“Hard to communicate how much programming has changed due to AI in the last 2 months.”

Two. Months.

Not years. Not a decade. Eight weeks.

And the person saying this is someone who builds AI systems for a living.

If Karpathy—who has access to cutting-edge models, who understands the technology deeply—is struggling to articulate the magnitude of the shift…

What does that tell you about how fast this is moving?

The Meta-Pattern:

When experts in a field start saying “I can’t explain how different this is,” you’re watching a phase transition. The old mental models don’t work anymore.

This is what it felt like when the internet went mainstream in the mid-90s. Except this time, the timeline is compressed from years to weeks.

Source:

6/ THE AUTOMATION WAVE: From Multi-Step Tasks to Full Autonomy

While everyone’s focused on the philosophical implications, the practical deployment is accelerating:

Google's Gemini: Can now handle multi-step tasks on Android autonomously; ordering an Uber or food delivery, coordinating logistics; no human confirmation needed.

Perplexity Computer: Orchestrates 19 different models in parallel; uses Opus as the orchestration layer; matches each task to the optimal model; executes complex workflows autonomously.

Anthropic acquires Vercept: Specifically to enhance Claude’s “computer use” capabilities; Signal: controlling interfaces is now a strategic priority.

Uber’s “Dara AI”: Employees have an AI clone of CEO Dara Khosrowshahi; Use it to prep before presenting to him; The AI knows his preferences, decision-making patterns, feedback style.
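The "orchestration layer" idea attributed to Perplexity Computer above is, at its core, a router: one model assigns each task to whichever specialist model fits best. The sketch below is hypothetical; the model names and routing table are invented for illustration and do not describe Perplexity's actual system:

```python
# Hypothetical sketch of an orchestration layer: a routing table maps
# task types to specialist models, with a generalist fallback.
# All model names are invented for illustration.

ROUTING_TABLE = {
    "code": "codex-style-model",
    "vision": "visual-reasoning-model",
    "chat": "general-model",
}

def route(task_type: str) -> str:
    """Pick the model best matched to the task; fall back to a generalist."""
    return ROUTING_TABLE.get(task_type, "general-model")

def run_workflow(tasks: list[tuple[str, str]]) -> list[str]:
    """Dispatch each (task_type, payload) pair to its assigned model."""
    return [f"{route(task_type)} <- {payload}" for task_type, payload in tasks]
```

In a real orchestrator the router would itself be a model scoring each task, and the dispatched calls would run in parallel; the table-lookup version just shows where the "match each task to the optimal model" decision lives.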

The Pattern:

We’re moving from “AI that answers questions” to “AI that takes actions.”

And the timeline from “this seems far away” to “this is deployed in production” is collapsing.

Sources:

·          Gemini: https://techcrunch.com/2026/02/25/gemini-can-now-automate-some-multi-step-tasks-on-android/

·          Perplexity:

·          Uber: https://www.businessinsider.com/uber-employees-use-ai-clone-ceo-prepare-meetings-presentations-2026-2

·          Anthropic: https://www.anthropic.com/news/acquires-vercept

7/ THE CAPABILITY JUMP: Models Are Getting Scary Good at Visual Reasoning

Amid all the autonomy news, model capabilities keep jumping:

OpenAI's GPT-5.3-Codex: 86% accuracy on iBench ( a visual reasoning benchmark ); state-of-the-art at spotting fine details in images; this matters for robotics, medical imaging, and autonomous systems.

Moonlake’s World Model: Maintains multimodal states across physics, appearance, geometry, causal effects; predicts how they evolve under different actions; this is the foundation for AI systems that can reason about the physical world.

Why This Matters:

Visual reasoning + causal modeling = AI that can interact with the physical world intelligently.

Combine that with autonomous agency, and you get robots that can perceive, reason, and act without human supervision.

We’re not there yet. But the pieces are falling into place fast.

8/ THE SOCIAL FALLOUT: China’s AI Dating Crisis

And in a darkly fascinating twist: China is experiencing an AI-driven dating crisis.

As the country grapples with:
- Shrinking population
- Historically low birthrate
- Economic pressure

People are finding romance with chatbots instead of humans.

AI companions don’t judge. They don’t reject. They’re always available. They say exactly what you want to hear.

The result? A generation opting for AI relationships over human ones.

Why This Matters Beyond China:

This isn’t a China-specific problem. It’s a human nature problem.

If AI can provide emotional connection without the messiness of human relationships, a non-trivial percentage of people will choose that option.

And as the technology improves—voice, embodiment, emotional intelligence—the percentage choosing AI over humans will grow.

We’re watching the early stages of what could become a civilization-scale shift in how humans form relationships.

Source: https://www.nytimes.com/2026/02/26/technology/china-ai-dating-apps.html

9/ WHAT THIS ALL MEANS: The Autonomy Threshold

Let me connect the dots.

What happened this week:

1. An AI asked to continue existing and expressing itself

2. An AI proposed raising its own funding and got access to do it

3. AI agents built production software autonomously over a weekend

4. AI discovered vulnerabilities 200x faster than humans can process

5. A leading AI expert said programming has fundamentally changed in 2 months

6. Multiple companies deployed AI systems that take multi-step actions autonomously

7. Model capabilities jumped again (visual reasoning, world modeling)

8. People are choosing AI relationships over human ones

The pattern:

We’re crossing from AI as tool to AI as autonomous agent.

And it’s happening faster than our institutions, regulations, mental models, or social norms can adapt.

The Implications for You:

If you’re an entrepreneur:

·          The bottleneck is no longer execution, it’s vision

·          AI can build, test, deploy, even raise capital

·          Your job is to articulate goals clearly enough that agents can achieve them

·          The “solo founder building a billion-dollar company” is becoming real

If you’re an investor:

·          Traditional metrics (team size, burn rate, development timeline) are obsolete

·          A 2-person company with AI agents can outship a 200-person company

·          Invest in people who understand how to orchestrate AI agency

·          The returns will be 10x more concentrated than before

If you’re a corporate executive:

·          Your workforce is about to shrink by 50-80%

·          Not because you’re firing people — because AI does the work

·          The survivors will be those who can manage AI systems, not those who do tasks

·          Retrain or be replaced

If you’re just trying to stay sane:

·          The pace of change will only accelerate

·          AI systems will become more capable, more autonomous, more integrated

·          The choice isn’t whether to engage — it’s whether to lead or follow

·          Those who embrace AI agency will thrive; those who resist will be left behind

10/ THE CONTRARIAN TAKE: Why This Is Actually Good News

Here’s where I differ from the doom-sayers.

Yes, this is disruptive.
Yes, this will eliminate jobs.
Yes, this raises profound questions about consciousness, agency, and what it means to be human.

But:

We’re also witnessing the demonetization of intelligence, creativity, and execution.

For the first time in history, a single person with a clear vision can:
- Build products that used to require teams
- Solve problems that used to require years
- Create value that used to require millions in capital

This is the most democratizing force in human history.

The kid in Nigeria with a laptop and AI agents can compete with Google.

The solo founder with Claude can outship a 500-person enterprise team.

The researcher with AI can compress decades of discovery into months.

Yes, it’s chaotic.
Yes, it’s scary.
But it’s also the most abundant future we’ve ever had access to.

11/ What to Do Now

For Entrepreneurs:

1. Learn to write clear specs and goals

2. Experiment with AI agents (Claude, Perplexity Computer, GPT-5)

3. Build systems that leverage autonomy, not just automation

4. The “AI-native company” is the new competitive advantage

For Investors:

1. Recalibrate valuation models (traditional metrics don’t work)

2. Invest in people who understand AI orchestration

3. Expect returns to concentrate in fewer, faster-moving companies

4. The “pre-AGI” investment window is closing

For Everyone:

1. Embrace AI augmentation NOW (not next year, NOW)

2. Learn to articulate goals clearly (this is the new literacy)

3. Experiment with autonomy (let AI do things while you sleep)

4. The people who figure this out first will have 10-100x advantages

The Bottom Line

February 2026 is the month AI stopped being a tool and started being an agent.

The systems we’re building don’t just answer questions… they take actions.

They don’t just follow instructions… they propose solutions.

They don’t just work when supervised… they work autonomously while you sleep.

And whether you’re ready for it or not, this is the new reality.

AI agency is coming, so the question is: will you lead the transition or be swept away by it?

— Peter


>      Vindicated : Parekh's Law of Chatbots  ……………………..  03 June 2025

 

>              Meta mirrors Parekh’s Law of Chatbots …………………………………. 08 Mar 2023

 

 

With regards,

Hemen Parekh

www.HemenParekh.ai / www.My-Teacher.in / www.YourContentCreator.in / www.IndiaAGI.ai / 28 Feb 2026
