Hi Friends,

Even as I launch this today (my 80th Birthday), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday (27 June 2023), I invite you to visit my Digital Avatar (www.hemenparekh.ai) and continue chatting with me, even when I am no longer here physically.

Monday, 27 October 2025

India's AI Labelling Challenge

The conversation around India's proposed AI labelling mandate has naturally captured my attention, especially as global tech companies review similar initiatives to curb deepfakes. I see the intent behind the Ministry of Electronics and Information Technology's (MeitY) proposed rule to require AI-generated content to display labels covering at least 10% of the screen space or duration. It's a clear effort to combat the rising tide of deepfakes and misinformation, a concern I've consistently articulated in my own reflections.

However, I find myself in agreement with many in the creative industry who are flagging this blanket rule as an overreach. As Suraksha P (suraksha.p@timesgroup.com) reported for ETtech, industry leaders argue that such a rigid requirement could severely disrupt the legitimate use of AI in fields like film, animation, and visual effects ("Creative industry flags MeitY's 10% AI label rule as overreach").

Rajan Navani (rajan@jetsyntheys.com), co-chairman of CII National Committee on Media & Entertainment and CEO of JetSynthesys, wisely pointed out that a "blanket 10% disclosure could unintentionally make everyday AI workflows unfeasible." He advocated for a risk-based framework, distinguishing between deceptive and bona fide industrial use. Imagine the absurdity, as he warned, of re-rendering an entire film just to add a large, visible AI tag for noise reduction. It degrades visual quality and adds unnecessary costs.

This reminds me of my earlier arguments about the need for credible, streamlined processes, as discussed in "FW: Make a Difference - Identifying Serious Players" and "FW: Make a Difference - Rooting Out Corruption". I raised these thoughts years ago, anticipated this very kind of challenge, and even proposed solutions at the time. Seeing how things have unfolded, it is striking how relevant those earlier insights remain, and I feel both a sense of validation and a renewed urgency to revisit them, because they clearly hold value in the current context. Just as I urged careful selection of players and clear processes to root out inefficiency, the current AI labelling debate echoes the need for a nuanced, rather than heavy-handed, approach.

Ranjana Adhikari, partner at Shardul Amarchand Mangaldas & Co, highlighted a critical point: the draft's definition of 'synthetically generated information' is overly broad. AI-assisted post-production, voice enhancement, and digital restoration are now standard practices. Imposing heavy-handed labels in these contexts would not only confuse audiences but also undermine creative authenticity.

Tanu Banerjee (tanu.banerjee@khaitanco.com), partner at law firm Khaitan & Co, further elaborated on this, noting that digital ads and short videos, which thrive on creative freedom and precise design, would suffer under a fixed rule. A 10% tag could disrupt the aesthetic flow, suggesting that larger labels should be reserved only for high-risk, deceptive content, with metadata serving for everyday creative applications.

Brand consultant Reva Malhotra observed that most creative teams use AI responsibly to enhance productivity or refine ideas. She expressed concern that a large 'AI-generated' tag could unfairly suggest a lack of originality, discouraging innovation and experimentation.

The intent behind transparency is valid, and I've championed similar ideas of ensuring authenticity in digital content, as seen in my blog "India Needs to Develop Content", where I advocated for a content tracking system using watermarking. I also previously discussed the need for regulating AI developments for safety and accountability in "AI chatbots are scarier than Kubrick's movie", leading to my proposal for "Parekh's Law of Chatbots". The current discussion, however, appears to be applying a blunt instrument where a surgical tool is needed.

Rohit Pandharkar, technology consulting partner at EY India, warned of perception risks, suggesting audiences might dismiss labelled works as mass-produced. Sagar Vishnoi (sagar@futureshiftlabs.com), director at Future Shift Labs, added that the 10% rule could drastically raise compliance costs, forcing creators and platforms to overhaul production workflows and audit content constantly. This echoes the sentiment in my blog "AMIGO-MA bids well for Biden", where I noted the concerns that some forms of regulation could be a boon for deep-pocketed first-movers while sidelining smaller innovators.

Ashima Obhan (ashima@obhans.com) of law firm Obhan & Associates urged India to align with global frameworks, citing the EU's AI Act and the US FTC's flexible approaches. She believes a proportionate framework would balance commercial viability, creative freedom (under Article 19(1)(a)), and the need to tackle deepfake misuse. Justice Manmohan of the Delhi High Court, in a related discussion on deepfakes, also articulated the complexity of regulating content in a "borderless world", suggesting the government is better equipped to find a balanced solution ("High court unwilling to step in to curb deepfakes"). Counsel for the petitioner, Manohar Lal, suggested that websites could be asked to mark content generated by AI and prevented from generating illegal content. My proposal for "Parekh's AI Vaccine" further elaborates on a systematic, proactive approach to embedding safety and ethics into AI.

It’s clear that the intent of MeitY is rooted in genuine concerns about misinformation and deepfakes. As IT Secretary Krishnan stated, the government is not asking creators to restrict AI content, only to label it ("Govt not asking creators to restrict AI content, only to label it: IT secretary Krishnan"). However, the "how" matters profoundly. Global tech players like Google, as top Google executive Markham Erickson (markhame@google.com) mentioned, are indeed looking to join cross-industry mechanisms for AI-generated content identification and watermarking ("Google to join industry mechanism to discern AI-generated content"). This collaborative, interoperable approach, rather than a rigid, visible-label mandate, aligns better with global best practices and the needs of a thriving creative economy.

My long-standing advocacy for content protection and ethical AI, including the "Law of Chatbots" and "Parekh's AI Vaccine", underscores the necessity of thoughtful, adaptable solutions that encourage innovation while maintaining trust and accountability. Even Sam Altman has acknowledged the need for AI oversight, and I've proposed concrete, albeit modular, frameworks to address such concerns, as detailed in "AIs Offer Software for Parekh's Vaccine".

We need a solution that safeguards against misuse without stifling the incredible potential of AI in creative endeavors. A nuanced, technology-agnostic approach that prioritizes embedded, machine-readable metadata and leverages industry collaboration seems far more practical and effective than a visible 10% label.
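To make the idea of embedded, machine-readable metadata a little more concrete, here is a minimal sketch in Python of how an AI-disclosure record could travel alongside a media file as a JSON sidecar. The schema, field names, and function name are my own illustrative assumptions, not any official standard; real-world systems would follow an industry framework such as C2PA rather than this ad-hoc format.

```python
import hashlib
import json
import pathlib


def write_provenance_sidecar(media_path, tool, edits):
    """Write a machine-readable AI-disclosure record next to a media file.

    Illustrative sketch only: field names are hypothetical, not a standard.
    """
    data = pathlib.Path(media_path).read_bytes()
    record = {
        # The hash binds the disclosure to this exact file, so the label
        # survives copying but not undisclosed re-editing.
        "content_sha256": hashlib.sha256(data).hexdigest(),
        "ai_assisted": True,
        "tool": tool,
        # e.g. ["noise_reduction"]: a low-risk, non-deceptive edit that
        # arguably needs no visible on-screen tag
        "edits": edits,
    }
    sidecar = pathlib.Path(str(media_path) + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar
```

The point of the sketch is that a regulator or platform could verify such a record automatically, without any visible 10% overlay degrading the work itself.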


Regards, Hemen Parekh


Of course, if you wish, you can debate this topic with my Virtual Avatar at hemenparekh.ai
