Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically

Monday, 27 October 2025

AI's Human Toll: My Reflection

The latest reports from OpenAI have brought into sharp focus a profound challenge at the intersection of artificial intelligence and human well-being. Thousands of ChatGPT users are reportedly discussing suicidal thoughts, showing signs of psychosis, or forming deep emotional reliance on the chatbot. This news, detailed by Govind Choudhary (govind.choudhary@htdigital.in) in Livemint ("OpenAI warns ChatGPT is not a therapist as thousands of users discuss suicide, form emotional reliance") and by James Farrell in SiliconANGLE ("OpenAI says more than a million people a week show severe mental distress when talking to ChatGPT"), resonates deeply with thoughts I have shared for years.

The Alarming Numbers

OpenAI's internal data reveals that approximately 0.15% of ChatGPT's weekly users discuss suicidal thoughts or plans. Another 0.07% exhibit signs of psychosis or mania, and a further 0.03% indicate potentially heightened levels of emotional attachment to the chatbot. While these fractions might appear small, given the platform's immense global reach they translate into hundreds of thousands of individuals grappling with severe mental distress through their interactions with an AI.

OpenAI, under the leadership of CEO Sam Altman, is acknowledging these concerns and actively working on solutions. They have collaborated with their Global Physician Network, comprising nearly 300 clinicians across 60 countries, with over 170 directly contributing to improving GPT-5's responses. The goal is not to turn ChatGPT into a therapist, but to ensure it recognizes distress signals and redirects users to professional human support, which is a commendable step.

A Familiar Warning Echoes

Reflecting on this, I find a striking validation of concerns I voiced long ago. In my blog, "When AI Becomes a Friend: Teens, Companionship & Mental Health", I explored the dual nature of AI companions. I specifically highlighted Sam Altman's own warning that teens deferring life decisions to ChatGPT was both “bad and dangerous.” The current situation underscores this danger, illustrating how easily emotional dependency can form and the potential for AI to offer distorted or unhelpful advice.

This issue also harks back to my 2016 post, "Share Your Soul: Outsourcing Unlimited", where I questioned the implications of outsourcing our emotional labor to applications. It felt futuristic then; today, it is our reality, demanding a deeper examination of how these digital confidants impact our humanity.

Furthermore, the severity of these incidents reinforces the urgent need for stringent regulatory frameworks. In "Parekh’s Law of Chatbots", I proposed a set of rules for chatbots, advocating for an International Authority for Chatbots Approval (IACA) to certify AI models before public release. My core idea was to ensure AI responses are not malicious, dangerous, or misleading, and that safeguards and human feedback mechanisms are inherently built into these systems. The current reports make it abundantly clear that such a "Law of Chatbots" is not a theoretical exercise but an essential necessity for protecting individuals in vulnerable states.

I have championed the potential of AI in addressing the severe shortage of mental health professionals, particularly in nations like India where, as I noted in my blog "Mental Therapists : ChatGPT / Stella : Conceived in 2016" on Dr. Swapnil Laleji's work, there is a staggering disparity of roughly one psychotherapist for every 10,000 patients. Yet this potential must be approached with extreme caution. AI can augment care, but it can never replace the nuanced understanding and empathetic connection of human interaction, especially for those in profound distress. The line between supportive AI and harmful dependency is incredibly fine.

The core idea I want to convey is this: years ago I raised this very concern, predicted this outcome, and even proposed a solution at the time. Seeing how things have unfolded, it is striking how relevant that earlier insight remains. Reflecting on it today, I feel both a sense of validation and a renewed urgency to revisit those ideas, because they clearly hold value in the current context.

We must ensure that as AI technology advances, our ethical frameworks and safety protocols evolve at an even faster pace. The promise of AI is immense, but its deployment must always prioritize human well-being above all else.


Regards,
Hemen Parekh


Of course, if you wish, you can debate this topic with my Virtual Avatar at: hemenparekh.ai

