Hi Friends,

Even as I launch this today (my 80th birthday), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday (27 June 2023), I invite you to visit my Digital Avatar (www.hemenparekh.ai) and continue chatting with me, even when I am no longer here physically.

Sunday, 19 October 2025

Apple's AGI Soul-Searching

The recent news of a divide within Apple's AI division, sparked by a research paper questioning if simply scaling up Large Language Models (LLMs) will lead to Artificial General Intelligence (AGI), caught my attention. It seems a paper co-authored by Ruoming Pang (LinkedIn, rpang@meta.com) has stirred a necessary, albeit uncomfortable, debate within the group led by John Giannandrea (LinkedIn, jgiannandrea@apple.com). Is making models bigger the only path forward, or is it a path that leads to a dead end?

This question is not just an internal matter for Apple; it is a moment of soul-searching for the entire industry. We have been captivated by the seemingly magical abilities of models like GPT, but the paper rightly asks if these systems truly reason or if they are merely sophisticated mimics, constrained by the static text they were trained on.

Beyond Information Retrieval

This debate resonates deeply with thoughts I’ve been wrestling with for over a decade. The core idea I've always wanted to convey is this — the end goal was never just about finding information faster. Back in 2010, I predicted that the future would see us moving beyond search engines to a world where we input a “problem” and receive a “solution” (Future of Search Engines).

When ChatGPT arrived, I saw it as the first tangible manifestation of this shift. It was a move from Google's endless list of links to a direct, coherent answer, a point I made when I urged Google to adapt (Google, watch out – here comes ChatGPT). It was a moment of validation for that earlier vision.

The Missing Ingredient: Intent

However, seeing how things have unfolded, it’s striking how relevant an even earlier insight still is. The Apple paper highlights that LLMs lack grounding and real-world interaction. This touches upon a crucial element I was exploring back in 2013: the need to build a “database of intentions.”

In a note from that time, I outlined a system to move beyond static user preferences by analyzing click-stream data—what users searched for, what they viewed, and what they ultimately acted upon (Job Search R.I.P). The goal was to understand the why behind the what. True intelligence isn't just about processing a query; it's about inferring the underlying intent, which is dynamic and context-dependent.
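The click-stream idea above could be sketched roughly as follows. This is a minimal illustration of my own, not the system described in that 2013 note: the event format and the action weights (acting on a result counts for more than viewing it, which counts for more than merely searching) are illustrative assumptions.

```python
from collections import Counter

# Hypothetical action weights: acting on a result signals intent more
# strongly than viewing it, which in turn beats just searching for it.
ACTION_WEIGHTS = {"searched": 1.0, "viewed": 2.0, "acted": 5.0}

def infer_intent(events):
    """Score candidate intents from a user's click-stream session.

    Each event is an (action, topic) pair; the topic with the highest
    weighted score is taken as the user's most likely current intent.
    """
    scores = Counter()
    for action, topic in events:
        scores[topic] += ACTION_WEIGHTS.get(action, 0.0)
    return scores.most_common(1)[0][0] if scores else None

# Example session: more searches about resumes, but the user
# ultimately acted on a job posting, so that wins.
session = [
    ("searched", "resume tips"),
    ("viewed", "resume tips"),
    ("searched", "job postings"),
    ("viewed", "job postings"),
    ("acted", "job postings"),
]
print(infer_intent(session))  # "job postings"
```

Even this toy version captures the point: the "why" emerges from what users do, not just from the text of what they ask.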

This is the very thing today's LLMs struggle with. They can process the text of a question, but they cannot truly grasp the user's evolving needs, context, or unstated goals. They are powerful, but they are not yet perceptive.

The Path Forward

The debate at Apple is a healthy sign. It suggests the initial euphoria is giving way to a more mature, critical examination of our approach to AGI. Simply feeding more data into ever-larger models is a brute-force approach. The real breakthrough will come from creating systems that can perceive intent, learn from interaction, and ground their knowledge in a dynamic context.

Reflecting on it today, I feel a renewed urgency to revisit those earlier ideas. The path to AGI is not a straight line paved with more processing power. It is a complex journey that requires us to build systems with a deeper, more implicit understanding of the world and our intentions within it.


Regards,
Hemen Parekh


Of course, if you wish, you can debate this topic with my Virtual Avatar at: hemenparekh.ai
