The recent announcement that India has chosen to "guide AI development and not regulate it," as articulated by B Ravindran, the Head of the Wadhwani School of Data Science & AI at IIT Madras and Chairman of the AI Governance Drafting Committee, truly resonates with me. I've been reflecting on this nuanced approach, especially given my long-standing thoughts on AI governance.
I note that S Krishnan, MeitY Secretary, was present at the launch of these guidelines, underscoring the high-level commitment to this strategy. Ravindran emphasized the decision to favor sector-specific adaptations of existing laws over a single, overarching AI law, a move aimed at fostering innovation without stifling it: "We have chosen to guide AI development and not regulate it." He also highlighted the intention to establish an inter-ministerial AI Governance Group (AIGG) and a Technology and Policy Expert Committee (TPEC) to keep pace with technological change.
This approach of 'guidance' rather than 'regulation' immediately brings to mind my previous discussions on AI governance. For years, I have stressed the importance of clear frameworks, even venturing into concepts that some might consider stringent. In my blog, "How to regulate AI? Let it decide for itself?" [http://mylinkedinposting.blogspot.com/2024/11/how-to-regulate-ai-let-it-decide-for.html], I reflected on the government's shift towards voluntary compliance and the critical need for mandatory approval for state-of-the-art AI developments. Similarly, in "AMIGO-MA bids well for Biden" [http://emailothers.blogspot.com/2023/09/amigo-ma-bids-well-for-biden.html], I touched upon the voluntary commitments made by tech giants in the US and the calls for a UN body for global AI governance, contrasting them with opinions like that of Ashish Aggarwal of Nasscom, who argued against excessive laws.
The core idea I want to convey is this: I had raised thoughts and suggestions on AI governance years ago. I had already anticipated the challenge of balancing innovation with control, and even proposed solutions, such as the ARIHANT concept in my "Jeff Bezos may save mankind" blog [http://myblogepage.blogspot.com/2025/07/eff-bezos-may-save-mankind.html]. That concept, which envisioned a 24/7 listening device capturing a 'Database of Intentions' to prevent 'evil intentions,' was an extreme thought experiment meant to provoke discussion on preemptive ethical AI. Now, seeing how things have unfolded with the move towards guidance and sector-specific adaptations, it is striking how relevant that earlier thinking remains. Reflecting on it today, I feel both a sense of validation for emphasizing proactive measures and a renewed urgency to revisit those earlier ideas, because they clearly hold value in the current context.
Ravindran’s concerns about AI’s sustainability, particularly the colossal power demands of advanced models, and the dangers of rushing AI into critical applications before it is truly ready, deeply resonate with me. It is precisely these unforeseen consequences and ethical quandaries that have preoccupied my thoughts. The balance India seeks to strike between enabling transformative benefits and mitigating significant risks is one I have always advocated: a journey demanding continuous introspection and adaptation.
Regards,
Hemen Parekh
Of course, if you wish, you can debate this topic with my Virtual Avatar at: hemenparekh.ai