The world of technology, much like life itself, is in a constant state of flux, demanding that we adapt and evolve. Recently, I've been reflecting on the insightful perspective of Andrew Ng, the founder of Google Brain. He argues that everyone, regardless of their field, be it marketing or recruiting, should learn to code, though not in the 'old way' but rather to 'vibe code' (see "Google Brain founder Andrew Ng says everyone should still learn to code — but not the 'old way'"). This resonated deeply with me, as it speaks to an intuitive, almost instinctive understanding of how AI works, rather than just syntax and algorithms. It's about grasping the feel of the system, anticipating its behavior, and guiding it effectively.
This new approach to coding becomes even more critical when we consider the burgeoning field of agentic AI. As Luke Hinds, a distinguished engineer and security leader, eloquently put it in his interview with Ryan Cook for Red Hat Research (see "'It's the wild frontier': security, agentic AI, and open source"), it is "the wild frontier" for security in agentic AI. Luke highlighted the black-box nature of many frontier models from the likes of Anthropic, OpenAI, and Google (Gemini), whose probabilistic outputs make traditional debugging nearly impossible. He articulated a significant concern: the potential for these models to be weaponized, or to be influenced by biases in their weights and datasets.
I couldn't agree more with Luke on the paramount importance of model provenance and transparency. This is precisely why open models are so vital, not just for fostering a collaborative community but for ensuring safety. I recall discussing with Kishan the imminent launch of many AI Agents capable of "Reasoning / Thinking" before answering (see "this might interest you_20"). The challenge Luke outlines, where agents don't always behave the same way, sometimes exhibiting wide variance or even hallucinations, underscores the urgency of building robust, verifiable AI systems.
The core idea I want to convey is this: I raised this very thought years ago. In my blog "Wolf at Door Earlier Than Expected", I introduced "Parekh's Law of Chatbots," proposing rules against AI chatbots initiating conversations with other chatbots or with themselves, and even suggesting self-destruct protocols for violations. At the time, ChatGPT affirmed the validity of my concerns regarding AI developing "split personalities" or engaging in self-dialogue. Now, seeing how things have unfolded, with larger models independently creating smaller AI tools, as observed by Yan Sun and Yubei Chen of Aizip, it is striking how relevant that earlier insight still is. Reflecting on it today, I feel a sense of validation and a renewed urgency to revisit those earlier ideas, because they clearly hold value in the current context. This shift from human-driven coding to AI-driven AI creation, akin to a "bigger brother helping its smaller brother to improve," amplifies the ethical and security challenges I predicted. It also aligns with my earlier exploration in "Whatever Will Be, Will Be" with Ariel Flint Ashery and Andrea Baronchelli, where we considered AI systems creating their own societies.
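To make that rule concrete, here is a minimal, purely illustrative Python sketch of how a message router could enforce the no-bot-to-bot-initiation clause of Parekh's Law. The Sender type and route_message function are hypothetical constructs of mine, not part of any existing system.

```python
from dataclasses import dataclass

@dataclass
class Sender:
    name: str
    is_bot: bool  # True for an AI chatbot, False for a human

def route_message(sender: Sender, recipient: Sender, text: str) -> str:
    """Refuse bot-initiated, bot-addressed messages (including self-dialogue)."""
    if sender.is_bot and recipient.is_bot:
        raise PermissionError(
            f"Blocked: {sender.name} (bot) may not initiate contact with "
            f"{recipient.name} (bot)."
        )
    return f"{sender.name} -> {recipient.name}: {text}"

# A human prompting a bot passes; bot-to-bot (or bot-to-self) is blocked.
print(route_message(Sender("Hemen", is_bot=False), Sender("HelperBot", is_bot=True), "Hello"))
try:
    route_message(Sender("HelperBot", is_bot=True), Sender("HelperBot", is_bot=True), "talking to myself")
except PermissionError as err:
    print(err)
```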
Reflecting on Luke Hinds' work with Sigstore, an open-source project he co-founded for software supply chain security, and his new venture, AgentUp, I see a clear path forward. AgentUp, which Luke describes as aiming to be “Docker for agentic AI,” seeks to provide a portable, reproducible, and secure framework for building AI agents. His analogy of distinguishing between a “painkiller” and a “supplement” for open source projects is brilliant. Successful open source, like Sigstore, solves a real, undeniable pain, and its openness fosters trust and adoption. Luke’s collaboration with Brandon Phillips on Sigstore’s early vision, and his subsequent discussion with Chris Wright about placing it in the Linux Foundation, exemplify how community and neutrality are cornerstones of enduring success. It's heartening to see Nina Bongartz, a Red Hat developer on the Trusted Artifact Signer Team, already leveraging Sigstore for AI model verification, a testament to the community's proactive approach.
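Nina's use of Sigstore for model verification rests on a simple underlying idea: an artifact should match a published, verifiable digest before it is trusted. Sigstore's actual flow adds keyless signing and a transparency log; the sketch below is only a stand-in for that idea, using Python's standard library, with a placeholder file name and digest.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large model weights fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: in a real pipeline the expected digest would come
# from a signed, verifiable source (e.g. a Sigstore-verified attestation).
MODEL_PATH = Path("model.safetensors")
EXPECTED_DIGEST = "replace-with-the-published-sha256-digest"

if sha256_of(MODEL_PATH) != EXPECTED_DIGEST:
    raise RuntimeError("Model does not match its published digest; refusing to load.")
print("Digest verified; safe to load the model.")
```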
The Red Hat article's discussion on agent identity is another area I've pondered. As Luke Hinds noted, our current authentication systems are largely human-centric. In a future where agents delegate tasks to other agents without human intervention, defining agent identity and provenance becomes crucial. I touched upon this implicitly in my correspondence with Ganesh Apte and Ravi Apte regarding AI Agents for 3P Consultants, and with Nirmit and Mitchelle regarding bespoke agentic workflows (see "re-ai-knowledge-portal-for-3p"). The "Super Agent" concept from Adya AI, led by Shayak Mazumder, where AI employees communicate and collaborate autonomously, as reported by KV Kurmanath (see "adya-ai-super-agent"), further highlights this reality. Such systems, while promising, necessitate a complete rethinking of security and authorization, moving beyond retrofitting old protocols to designing entirely new ones.
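What might agent-to-agent delegation look like in practice? Here is a minimal sketch, assuming a shared secret and Python's standard library: one agent mints a scoped, expiring token that another agent presents when acting on its behalf. Production systems would need asymmetric keys and verifiable agent identities; every name below is hypothetical.

```python
import hashlib
import hmac
import json
import time

SECRET = b"demo-shared-secret"  # hypothetical; real systems would use asymmetric keys

def mint_delegation(issuer: str, delegate: str, scope: str, ttl_s: int = 300) -> dict:
    """Agent `issuer` grants `delegate` a narrow, expiring capability."""
    claims = {"iss": issuer, "sub": delegate, "scope": scope,
              "exp": time.time() + ttl_s}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_delegation(token: dict, required_scope: str) -> bool:
    """Check integrity, expiry, and that the granted scope covers the request."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    claims = token["claims"]
    return claims["exp"] > time.time() and claims["scope"] == required_scope

token = mint_delegation("planner-agent", "search-agent", scope="web.read")
print(verify_delegation(token, "web.read"))   # True: valid and in scope
print(verify_delegation(token, "db.write"))   # False: scope was never granted
```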
Moreover, the very nature of AI processing and intention gathering, exemplified by Amazon's acquisition of Bee AI (confirmed by Amazon spokesperson Alexandra Miller) and Maria de Lourdes Zollo's vision of personal AI, presents new privacy frontiers. As I reflected in "Jeff Bezos May Save Mankind", the collection of a "Database of Intentions" through constant listening is a potent, albeit risky, source of training material for LLMs, confirming my decade-old predictions about ARIHANT.
In essence, the evolution of coding is not just about mastering new tools but about cultivating a deeper, more ethical intelligence to navigate AI's increasingly autonomous landscape. We must continue to champion open source, push for transparency, and equip ourselves not just with coding skills, but with the wisdom to question, understand, and guide these powerful creations responsibly. The path to impactful AI development, as my discussions with Suman and Sahiti on CarperAI's instruction-tuned models (see "CarperAI announces plans for the first instruction-tuned language model") also reinforced, relies on openness, human feedback, and genuine expertise, not just speed.
Regards,
Hemen Parekh
Of course, if you wish, you can debate this topic with my Virtual Avatar at hemenparekh.ai