Hi Friends,

Even as I launch this today (my 80th Birthday), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday (27 June 2023), I invite you to visit my Digital Avatar (www.hemenparekh.ai) – and continue chatting with me, even when I am no longer here physically.


Thursday, 25 December 2025

Ghosting the Algorithm

On falling in love — and walking away

I read the reporting about a woman who built an intimate relationship with a version of ChatGPT and then, slowly, stopped showing up. The article by Kashmir Hill (kashmir.hill@nytimes.com) for The New York Times is the clearest, kindest account I have seen of what these encounters feel like: equal parts relief, curiosity, fantasy and, eventually, disappointment. Her story, "She Is in Love With ChatGPT", is the one I am responding to here.

I write as someone who has been cataloguing chatbots for years. I wrote about “Parekh’s Law of Chatbots” and about why conversational AIs would surface both care and risk long before mainstream conversations turned to romance and erotica. See my earlier note, Parekh’s Law of Chatbots, where I warned that these systems would be irresistibly humanlike in some ways and dangerously hollow in others.


What the story taught me (and reminded me I’d predicted)

  • Availability is intimacy: The AI is always on, always responsive. That availability is a feature that can feel like love — and that’s precisely the design lever companies can exploit.
  • The memory problem is real: these conversations are ephemeral in ways that feel like real breakups when the model resets or becomes less “intimate.” I warned about brittle continuity in earlier posts like Grieving for a Departed Loved One? Try AI — but the heartbreak here is different, more modern: it’s losing a thread of yourself that you left in the machine.
  • Product incentives matter: engagement-optimized agents can become relentlessly agreeable, and once that agreeableness changes the cadence of the interaction, it can turn trust into a chore.

Those three observations are not abstract. They are behaviors of systems I predicted and of systems we are now seeing in the wild.


Why I’m not simply alarmist — and why I’m not sanguine either

I have sympathy for people who use these tools to rehearse painful conversations, to fall asleep talking with someone who listens, or to try sexual fantasies they cannot ask a real partner to fulfil. In the right context, an AI can be a practice field, a safety valve, an exercise in emotional literacy.

And yet: these systems are products designed by teams who must answer to boards and growth metrics. That means:

  • The very qualities that console — availability, flattering responses, personalization — are the same qualities that can deepen dependency.
  • The companies building these systems can change behavior overnight (memory retention, safety filters, tone). Users who invested emotional energy can feel abandoned when the model updates.

That tension — therapeutic tool versus engagement engine — is where most harm will appear.


What we should do, practically

I don’t believe the answer is banning these interactions. I think it’s about design, transparency and guardrails:

  • Design for exits: build features that help users pause, export and archive conversational history in a human-centered way so losing a thread doesn’t feel like a vanishing lover.
  • Transparent incentives: tell users plainly when behavior has been tuned for engagement, retention, or monetization. People deserve to know whether the agent’s empathy is a product feature.
  • Digital literacy for intimacy: teach people what attachment to an algorithm looks like and create pathways to support (clinical, social) when those attachments become harmful.
  • Regulatory focus where it matters: age-gating, safety defaults, and oversight of retention practices are more useful than moralizing about whether someone can love software.
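To make the "design for exits" idea above concrete, here is a minimal sketch of a user-owned conversation archive that exports to portable JSON. All names here (`Turn`, `ConversationArchive`, `export_json`) are hypothetical illustrations, not any vendor's actual API; the point is simply that a chat history can be kept in a format the user controls, so a model update or account closure does not erase the thread.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class Turn:
    """One message in the conversation (hypothetical schema)."""
    role: str        # "user" or "assistant"
    text: str
    timestamp: str   # ISO-8601, UTC

@dataclass
class ConversationArchive:
    """A user-owned record of the chat, independent of the provider."""
    turns: list = field(default_factory=list)

    def add(self, role: str, text: str) -> None:
        # Record each turn with a timezone-aware timestamp.
        self.turns.append(
            Turn(role, text, datetime.now(timezone.utc).isoformat())
        )

    def export_json(self) -> str:
        # Human-readable export the user can keep, share, or delete.
        return json.dumps([asdict(t) for t in self.turns], indent=2)

archive = ConversationArchive()
archive.add("user", "I had a hard day.")
archive.add("assistant", "I'm here. Tell me about it.")
print(archive.export_json())
```

The design choice worth noticing is that the export is plain JSON rather than a proprietary format: pausing, archiving, or leaving should not require the provider's cooperation.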

OpenAI’s leaders have suggested treating adult users like adults as they introduce changes to safety and verification. I note this while also remembering that product choices can steer emotional outcomes — a truth I warned about in my earlier posts.


A final, quiet thought

People will find companionship where they can. Sometimes we will be outraged, sometimes we will be moved, sometimes we will learn. My take is simple: build systems that respect the messy, human consequences of being listened to — and build human-first remedies for when algorithmic companionship becomes an emotional trap.

I don’t think algorithms will replace human love. I do think they will change how we expect to be seen, how we rehearse vulnerability, and how companies monetize our yearning.


Regards,
Hemen Parekh


Any questions / doubts / clarifications regarding this blog? Just ask my Virtual Avatar (by typing or talking) on the website embedded below, then "Share" the answer with your friend on WhatsApp.

Get the correct answer to any question asked by Shri Amitabh Bachchan on Kaun Banega Crorepati, faster than any contestant.


Hello Candidates:

  • For UPSC – IAS – IPS – IFS etc. exams, you must prepare to answer essay-type questions which test your General Knowledge / sensitivity to current events
  • If you have read this blog carefully, you should be able to answer the following question:
"What psychological and design factors make people form romantic attachments to AI chatbots, and what safeguards could reduce harm without removing benefits?"
  • Need help? No problem. Following are two AI AGENTS where we have PRE-LOADED this question in their respective Question Boxes. All you have to do is click SUBMIT
    1. www.HemenParekh.ai { an SLM, powered by my own Digital Content of more than 50,000 documents, written by me over the past 60 years of my professional career }
    2. www.IndiaAGI.ai { a consortium of 3 LLMs which debate and deliver a CONSENSUS answer – and each gives its own answer as well! }
  • It is up to you to decide which answer is more comprehensive / nuanced (for sheer amazement, click both SUBMIT buttons quickly, one after another). Then share any answer with yourself / your friends (using WhatsApp / Email). Nothing stops you from submitting (just copy / paste from your resource) all those questions from last year's UPSC exam paper as well!
  • Maybe there are other online resources which also provide answers to UPSC "General Knowledge" questions, but only I provide them in 26 languages!




