The recent warning from Alphabet's CEO, Sundar Pichai, about not blindly trusting everything AI tools say, truly resonates with me. It’s a crucial reminder that as technology advances, our critical thinking must advance even faster. While the promise of AI is immense, the underlying data and algorithms are complex, and their outputs require careful scrutiny.
For years, I have reflected on the relentless march of technology and its impact on our lives, particularly concerning data privacy and the nature of information. The core idea I want to convey is this: I raised this very thought years ago. I had already anticipated this outcome and even proposed a solution at the time. Now, seeing how things have unfolded, it is striking how relevant that earlier insight remains. Reflecting on it today, I feel both a sense of validation and a renewed urgency to revisit those ideas, because they clearly hold value in the current context.
In my blog post, "2024 ! – V 2.0 of Orwellian 1984 ?" [http://myblogepage.blogspot.com/2017/07/2024-v-20-of-orwellian-1984.html], written as far back as 2017, I discussed how "ISPs / MSPs and other SERVICE PROVIDERS, are already storing all of your CONVERSATIONS and your LIFE EVENTS." I predicted a future where our most personal data would be continuously captured and transmitted, leading to a pervasive surveillance that George Orwell's 1984 only hinted at. Similarly, in "e-Transmissions: are like Air Molecules" [http://emailothers.blogspot.com/2023/10/dear-shri-ashwini-vaishnawji-rajeev_20.html], I elaborated on how "content keeps growing with each and every human interaction… and content can no longer remain hidden, nor can it be destroyed." If the very fabric of our digital existence is continuously being recorded and processed, then the data AI models learn from is, by its nature, colossal and potentially unfiltered. How can we blindly trust systems built upon such a vast and often unverified ocean of information?
This concern about trustworthiness extends beyond mere accuracy to the very ethical implications of AI's pervasive influence. In reflecting on Norbert Wiener's profound insights in "Norbert Weiner Saw This Coming" [http://myblogepage.blogspot.com/2025/07/norbert-weiner-saw-this-coming.html], I highlighted his warnings about automation's economic and societal impacts, emphasizing the need for robust "ethical safeguards." Wiener understood that powerful technologies, while transformative, demand a human-centric approach to prevent unintended consequences. My own discussions in "Robotation" [http://emailothers.blogspot.com/2017/05/india-hybrid-week.html] further pointed to the rapid advancement of AI, noting how it is already "overtaking the Human Intelligence, through a process of ‘Internal Self Learning’."
This self-learning capability means AI systems are constantly evolving. Their internal logic, while designed by humans, can become opaque, making their outputs harder to predict or fully comprehend. This inherent complexity underscores Sundar Pichai's caution: we must engage with AI not with blind faith, but with a healthy dose of skepticism and a continuous commitment to verification. The future of our interaction with AI depends on our ability to critically evaluate its pronouncements, ensuring it serves humanity responsibly.
Regards,
Hemen Parekh
Of course, if you wish, you can debate this topic with my Virtual Avatar at hemenparekh.ai