Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically.

Wednesday, 24 September 2025

Why India’s ‘cautious support’ for China’s global AI body makes sense — and what I worry about

I read the report that India may lend “cautious support” to China’s proposal for a global AI body (India may lend 'cautious support' to China's global AI body proposal) with interest and a measure of déjà vu. This is the kind of geopolitically charged technical diplomacy I’ve been thinking about for years: how standards, rules and institutions for technology are being shaped as much by national strategy as by science.

I want to walk you through why India’s posture — not an enthusiastic yes, not an outright no, but cautious support — is strategically defensible, and why a set of hard red lines and concrete demands must accompany any backing.

Why cautious support can be smart

  • India has to preserve influence, not only principle. Sitting on the sidelines of a major multilateral initiative on AI would cede rule‑making to others. By signalling conditional support, India keeps a seat at the table to shape technical standards, capacity building, and norms for responsible AI. See how infrastructure and standards shape markets in other sectors — like automotive AI and software-defined vehicles — in analyses such as the Imperial College review of global automotive disruption and the role of AI in mobility (Imperial College report).

  • India needs pragmatic multilateralism. Many problems around AI — cross-border data flows, safety testing, benchmarked assurance, workforce training — are transnational. A global forum, properly governed, can accelerate capacity building for lower‑and‑middle income countries and enable technical cooperation that benefits India’s large developer and startup ecosystem.

  • Influence trumps isolation for standard‑setting. If India simply opposes all China‑led initiatives, it risks fragmenting governance into competing blocs (standards A vs standards B) with higher compliance costs for Indian industry and less access to global markets.

Why “cautious” must mean concrete conditions

Caveats matter. A blanket or naïve endorsement would be risky. India should make any support conditional on a set of commitments and safeguards. My checklist would include:

  • Transparency of governance: membership rules, decision‑making processes, voting and dispute resolution must be public and verifiable.
  • Technical openness: standards, protocols and evaluation frameworks should be open and subject to peer review (not closed, proprietary, or state‑captured stacks).
  • Human‑rights and safety commitments: clear, enforceable norms against AI systems that enable rights violations, mass surveillance, or discriminatory outcomes.
  • Interoperable assurance frameworks: shared metrics and testing approaches for safety, robustness and explainability (what some call AI assurance).
  • Data governance safeguards: rules for cross‑border data flows, lawful access, and protections for sensitive personal data.
  • Avoidance of export controls that weaponize standards: the body should not be a vehicle for unilateral technological exclusion without multilateral justification.

If the proposal meets these conditions, joining — or at least participating actively — is a way to shape outcomes. If not, India should withhold endorsement and work with like‑minded partners to build alternative or parallel mechanisms.

The geopolitics beneath the technical language

China’s proposal will inevitably reflect its governance model. Beijing’s industrial policy and standards playbook has shown how technical rules can become strategic advantage — whether in batteries, EV platforms or AI stacks. I explored how geopolitics and technology intersect in the automotive and mobility world, and the lesson is the same here: whoever shapes technical norms can shape supply chains, markets and national security exposure (Imperial College report).

India must therefore judge the proposal not only on the immediate deliverables but on the long‑term institutional incentives it creates. Will this global body entrench a single state model of AI governance? Or will it include guardrails — accountability, open evaluation, multistakeholder participation — that make its standards credible worldwide?

Where India is uniquely positioned to push for the right model

I’m optimistic about India’s leverage: we have a large pool of engineering talent, a fast‑growing AI startup ecosystem, and democratic institutions that can champion inclusive, rights‑respecting standards. India can push for:

  • Capacity building: affordable training, testbeds and model‑validation resources for emerging economies.
  • Open benchmarks and shared evaluation datasets that reduce bias and favor transparent performance claims.
  • AI assurance practices that regulators and insurers can accept — a theme I’ve repeatedly written about when thinking of software defined vehicles and safety‑critical AI systems (my earlier reflections on mobility, software and assurance).

Those are practical, exportable items that benefit India’s firms and its citizens.

Risks I don’t want us to ignore

  • Governance capture: If the body’s architecture lets any one state dominate agenda‑setting, technical working groups and procurement standards, smaller powers lose bargaining power.
  • Fragmentation: Parallel, incompatible standards make compliance complex and raise costs for Indian firms operating globally.
  • Security and export controls: New rules might disguise protectionist or exclusionary measures that hurt Indian access to components or markets.

Better to shape a global institution than to be shaped by one — but only if India insists on safeguards.

A pragmatic set of steps I’d like India to take now

  1. Condition political support on a public charter that enshrines openness, multistakeholder governance, and human‑rights protections.
  2. Push for technical working groups co‑chaired by diverse regional partners (not dominated by any single state).
  3. Demand open, peer‑reviewed standards for safety and assurance — the kind that regulators, certification bodies and insurers can accept.
  4. Secure commitments on capacity building (training, testbeds, funding) targeted at the Global South.
  5. Keep parallel bilateral and plurilateral tracks going with other partners to avoid overdependence on any single forum.

Final thought — a little personal note

When I look back at what I’ve written over the years about technology, standards and national strategy, I keep returning to the same theme: technical design choices become political choices. I argued for the importance of standards, software platforms and assurance in the context of mobility and electrification long before these debates became mainstream (see my earlier posts on mobility, EV policy and the move toward software‑defined vehicles, such as Holy Grail for Electric Vehicles). That memory feels oddly validating now — and a reminder that we should treat proposals for global governance of AI as both a technical task and a strategic negotiation.

India’s stance of ‘cautious support’ is not fence‑sitting. It is an opening bargaining position. If India uses that position to demand openness, fairness and technical credibility, this could be a rare moment where geopolitics and public interest align. If India fails to push hard for those safeguards, the consequences will be structural, and expensive, for our industry and for our citizens.


Regards,
Hemen Parekh
