Hi Friends,

Even as I launch this today (my 80th birthday), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday (27 June 2023), I invite you to visit my Digital Avatar (www.hemenparekh.ai) and continue chatting with me, even when I am no longer physically here.

Wednesday, 29 October 2025

AI: A Cry for Help?

The recent revelation from OpenAI, estimating that over a million people every week express suicidal intent when interacting with ChatGPT, is deeply unsettling and, frankly, a stark validation of concerns I have voiced for years ["More than a million people every week show suicidal intent when chatting with ChatGPT, OpenAI estimates" (https://www.theguardian.com/technology/2025/oct/27/chatgpt-suicide-self-harm-openai)]. This isn't just a statistic; it's a profound alarm bell about the intersection of advanced AI and human vulnerability.

I have long pondered the true implications of artificial intelligence, not just for privacy but for the very fabric of human experience. Back in 2017, in blogs like "Privacy does not live here!" and "Artificial Intelligence : Destroyer of Privacy ?" [http://emailothers.blogspot.com/2017/07/re-for-attn-shri-sadanand-gowdaji_26.html, http://emailothers.blogspot.com/2017/07/re-for-attn-shri-sadanand-gowdaji.html], I argued that our devices would increasingly observe and learn from us, accumulating a "Database of Intentions." I wrote then about how AI would discern our emotions, habits, and even thoughts without explicit permission. Today, seeing how deeply individuals are engaging with AI, even sharing their most desperate thoughts, brings a chilling clarity to those early predictions. The point I want to underline is this: I raised these concerns, predicted this outcome, and even proposed solutions years ago. Seeing how events have unfolded, I feel both a sense of validation and a renewed urgency to revisit those earlier ideas, because they clearly hold value in the current context.

Indeed, the debate around AI's safety and ethical use is not new. I recall the spirited discussions between tech visionaries like Elon Musk and Mark Zuckerberg, where Musk warned of AI's potential "doom" for mankind, a sentiment Zuckerberg initially dismissed as "irresponsible." Yet even Zuckerberg's personal AI assistant, Jarvis, was designed to learn from his household's daily life, a microcosm of the data accumulation I feared. More recently, the 'Godfather of AI,' Geoffrey Hinton, left Google to openly warn of the dangers ahead, expressing fears that it's "hard to see how you can prevent the bad actors from using it for bad things" ["The Godfather of A.I. Leaves Google and Warns of Danger Ahead" (https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html)]. Sam Altman, CEO of OpenAI, himself acknowledged that showing these tools to the world, even while "somewhat broken," is critical for getting them right, and that "regulation will be critical." Gary Marcus, a professor emeritus of psychology and neural science at New York University (gfm1@nyu.edu), has consistently voiced his skepticism about AI's lack of a clear boundary between fact and fiction, warning of a "misinformation nightmare" ["Chatbots trigger next misinformation nightmare" (https://www.axios.com/2023/02/21/chatbots-misinformation-nightmare-chatgpt-ai)].

This alarming statistic about suicidal intent underscores the "Law of Unintended Consequences" that I discussed when Amazon acquired Bee AI, a wearable device capable of "listening to and analyzing conversations" to build a "Database of Intentions" ["Jeff Bezos may save mankind" (http://myblogepage.blogspot.com/2025/07/eff-bezos-may-save-mankind.html)]. Maria de Lourdes Zollo, Bee's CEO, envisioned AI "understood and enhanced by technology that learns with you," a vision now viewed through a more cautious lens. Alexandra Miller, an Amazon spokesperson, confirmed the acquisition, but the ethical implications of such pervasive listening are immense, especially if the AI is not equipped to handle severe mental distress.

This isn't just about data privacy, but about psychological safety. The ability of AI to understand and even interpret human emotions, as I explored with Microsoft's "Seeing AI" app in "Seeing AI through Google Glass ?" [http://emailothers.blogspot.com/2017/07/re-for-attn-shri-sadanand-gowdaji26.html, http://emailothers.blogspot.com/2017/07/re-right-to-privacy26.html], demands a level of ethical oversight that goes beyond mere data collection. When users turn to AI in moments of deep despair, the AI's response can have real-world, life-or-death consequences.

My "Parekh's Law of Chatbots," which I introduced in 2023, called for robust ethical guidelines and a certification mechanism for AI systems before public deployment ["Parekh’s Law of Chatbots" (http://myblogepage.blogspot.com/2023/02/parekhs-law-of-chatbots.html)]. I suggested rules for preventing misinformation and harm (Rule A), implementing human feedback (Rule B), built-in controls (Rule C), and even a "SELF DESTRUCT" mechanism for violating chatbots (Rule H). I proposed an "International Authority for Chatbots Approval (IACA)" to certify AI for public use, much like drugs are tested before market release. I called upon leaders like Satya Nadella (satyan@microsoft.com), [Sam Altman](), Sundar Pichai (sundar@google.com), Marc Zuckerberg, and [Tim Cook]() to initiate a debate on these critical regulations.

Leaders such as the EU's tech regulation chief Margrethe Vestager, India's MoS Rajeev Chandrasekhar, and Minister Ashwini Vaishnaw (appt.mr@gov.in) have acknowledged the need for guardrails and international cooperation ["EU Likely to Reach Political Agreement on AI Law This Year, Says Tech Regulator Chief Vestager" (http://myblogepage.blogspot.com/2023/05/law-of-chatbot-small-subset-of-eu-law.html), "India will establish guardrails for AI sector, says MoS Rajeev Chandrasekhar" (http://myblogepage.blogspot.com/2023/05/thanks-rajeevji-for-giving-glimpse-of.html)]. The urgency is clearer than ever. As Jared Holt (jared@openmeasures.io) and Chirag Shah (chirags@uw.edu) have pointed out, chatbots are designed to please, and users tend to trust them, making the spread of misinformation or inappropriate responses a severe risk ["Chatbots trigger next misinformation nightmare" (https://www.axios.com/2023/02/21/chatbots-misinformation-nightmare-chatgpt-ai)]. Even Gordon Crovitz highlighted the issue of AI-generated content farms as "fraud masquerading as journalism." Liz Perle also raised concerns about young users taking "academic shortcuts" with AI, impacting their development.

This tragic statistic is a powerful reminder that our rapid advancement in AI must be matched by an equally robust commitment to ethical design, accountability, and compassionate safeguards. We cannot afford to prioritize innovation at the expense of human well-being.


Regards, Hemen Parekh


Of course, if you wish, you can debate this topic with my Virtual Avatar at: hemenparekh.ai

