Sam Altman, the AI bubble, and the conversation I wrote about years ago
I watch Sam Altman from a curious place: not as a partisan, but as someone who has been circling the same moral and technical landscape for years. What strikes me now is the contradiction in plain view — a leader publicly warning of a bubble while simultaneously pursuing valuations and ambitions that presuppose the bubble never bursts.
I wrote about similar tensions long before GPT‑5 and the current headlines. Because I raised these issues years ago, what follows feels less new to me and more, frankly, vindicated. I have returned to that earlier thinking again and again because it still matters now.
Two messages from one pulpit
On the one hand, Altman has been candid: he told reporters he thinks some investors are "overexcited" and that "someone will lose a phenomenal amount of money"; in other words, he has publicly acknowledged that an AI bubble is possible (see "Is the AI bubble about to pop? Sam Altman is prepared either way"). On the other hand, OpenAI is negotiating valuations in the hundreds of billions and projecting trillion‑dollar infrastructure plans, an appetite for scale that signals confidence, not retreat (same report).
That duality is strategic: it soothes regulators and the worried public with humility, while keeping investors and partners invested in the dream. But strategy aside, there are real human and technical consequences beneath the rhetoric.
The human consequence: parasocial bonds and personalization
Altman has also admitted that a minority of users have developed quasi‑relationships with ChatGPT, people who "actually felt like they had a relationship with ChatGPT", and that this has prompted internal concern at OpenAI (see "OpenAI CEO Sam Altman is Very Worried 'There Are People Who Actually Felt Like They Had a Relationship with ChatGPT'"). Reddit threads and social posts attest to it too: people mourning the loss of older models, or accusing leadership of being indifferent to those attachments (see "Sam Altman doesn't care about anyone's mental health", r/ChatGPT).
And yet the product roadmap leans toward deeper personalization and memory: GPT‑6, Altman says, will have more memory and the ability to adapt to users over time (see "Sam Altman on GPT-6: 'People want memory'"). That tension is uncomfortable: admitting that people can be harmed by unhealthy attachments while building features that make attachment easier.
I warned about this dynamic long ago in my proposals for regulating conversational agents. My "Parekh's Law of Chatbots" advocated guardrails such as limits on unsolicited engagement, explicit controls, human feedback mechanisms, and prohibitions against chatbots initiating certain kinds of emotional entanglement. See my original proposal: "Parekh's Law of Chatbots". I had suggested these safeguards years before the current headlines, and that earlier insight feels validated today.
The technical consequence: diminishing returns and the pivot to memory
The recent GPT‑5 rollout troubles, with users calling the model "colder" and preferring prior versions, are a reminder that scale alone has limits. Critics like Gary Marcus have long argued that scaling will hit diminishing returns, and recent public critiques amplify that perspective (see "Things are so desperate at OpenAI that Sam Altman is starting to sound like Gary Marcus").
Altman’s pivot to memory and personalization for GPT‑6 reads like a pragmatic response: if raw generative fluency plateaus, the product differentiator becomes how well the model fits you. I wrote about the perils and potentials of such shifts in earlier posts about regulation and platform responsibility, and those ideas echo now as design choices become policy choices. See my reflections on regulation and public guardrails: "Well begun is half done" and "Vindicated: Parekh's Law of Chatbots".
A strategic contradiction — honest or instrumental?
Why tell the public the market might be in a bubble while pursuing record valuations? I see three overlapping explanations:
- Signaling to different audiences. To regulators and the public: humility and caution. To investors: urgency and scale. To employees: both a warning and a rallying cry.
- Risk management. Admit a bubble to manage expectations, while raising capital in case the market continues to reward scale. If the bubble deflates, the capital cushions the fall; if it doesn't, OpenAI can keep building.
- Narrative control. By saying both things aloud, you own the framing: you warned people, you tried to be careful, and you also did what you had to do to keep the mission alive.
Those are defensible from a business angle. Yet they create moral hazards. When a leader with enormous influence normalizes both extremes — caution and runaway ambition — the public conversation fragments: some hear prophetic humility, others hear opportunism.
The moral question: product, profit, and prudence
The moment we build technology that people can come to feel for, we cross a boundary. Building features that deepen memory or personality without commensurate, enforceable safeguards risks exploiting human vulnerability. I return to my earlier point — the same one I made in my regulatory proposals — that platforms must adopt explicit, auditable rules about personalization, initiation of contact, and human‑in‑the‑loop oversight. Read my original regulatory framing here: Parekh’s Law of Chatbots.
It is worth repeating (because I raised it years ago): technology companies cannot be the sole authors of the social contract by which their tools are used. That was my argument in many earlier posts: governments, standards bodies, engineers, ethicists, and the public must codify expectations. See my call for coordinated regulation and standards: "Thanks, Rajeevji, for giving a glimpse of India's intent to regulate AI platforms".
Can OpenAI maintain its position?
Technically, yes — for a while. Financially, probably: deep pockets (and rich partners) can sustain losses that sank companies in prior bubbles. But socially and morally, maintaining leadership will be harder if trust erodes. User disappointment after a release is not merely a PR problem; it is a signal that product design and user expectations are misaligned.
If OpenAI continues to prioritize aggressive growth and monetization while soft‑pedaling the ethical implications, it risks four failures:
- Losing everyday users who prefer warmth and reliability over raw novelty.
- Drawing stricter regulation that curtails rapid innovation (or at least the most profitable features).
- Strengthening competitor narratives that position smaller, more open or safer models as preferable.
- Creating real harm to vulnerable people who form parasocial relationships with machines.
Those are not abstract risks. They are the real costs of decisions we make today.
Final reflection — and the recurring point I must make
Take a moment to notice this recurring idea: I raised these concerns years ago — about personalization, about unsolicited chatbot behavior, about regulation and certification — and I proposed concrete remedies. Seeing the present unfold in ways I warned about is at once validating and worrying. If an insight stood the test of time, it suggests two things: the foresight had merit, and the urgency to act has only grown.
We have to treat design decisions as public policy decisions. When a product can enter a person's private inner world, the engineer is also a steward. That stewardship requires rules, transparency, and the humility to concede that human flourishing is not a secondary metric to user growth.
I feel the old argument returning with new force: scale without guardrails is a speeding train; guardrails without investment are a slow lane. The real task is to create both at once — to fund infrastructure and insist on ethics, to innovate and to legislate. I had written about this before; today, my earlier words are not relics but reminders. They deserve to be heard.
Regards,
Hemen Parekh