Hi Friends,

Even as I launch this today (my 80th Birthday), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday (27 June 2023), I invite you to visit my Digital Avatar (www.hemenparekh.ai) and continue chatting with me, even when I am no longer here physically.

Monday, 8 September 2025

Use AI, Don't Be Used By It: A Software Engineer's Reflection on the New Rules


I’ve been thinking a lot lately about what it means to be a software engineer in an era when the machines that write code are getting very good at, well, writing code. The headlines — and the memos inside companies — push a single message: use AI or be left behind. But urgency and nuance are not the same thing. I want to be blunt and charitable at once: AI is a tool of profound consequence, and our work as engineers is changing. It is not disappearing.

The recent memo and stories out of Google crystallize this tension for me. Engineering leaders are pushing hard: employees are being asked to adopt internal coding agents, to "dogfood" AI, and in some teams to rely only on internal models for code generation unless third-party tools are explicitly approved ("For Googlers, the pressure is on to use AI for everything — or get left behind"). Those directives are sensible in one sense — they protect intellectual property and keep sensitive technical context inside corporate walls — but they also reveal something else: companies see AI as a multiplier of human labor, not just a curiosity.

But there’s a philosophical observation hiding in the operational noise: when a technology amplifies human output, what we value shifts. The thing we once measured by the number of lines of handcrafted code becomes measured by the quality of questions we ask, the way we define constraints, and the rigor of our review.

The center of gravity has shifted — toward judgment

AI will produce drafts. It will sketch implementations, propose tests, and wire up data pipelines. Tools like the new breed of editors and agent-enabled IDEs — Zed among them — are being purpose-built for this collaboration. They treat AI as a conversational partner and integrate agentic editing into the developer flow ("Zed — The editor for what's next"). At the same time, others (rightly) push back: large language models still miss system-level reasoning, integration constraints, and the non-local consequences of subtle design choices. The Zed essay "Why LLMs Can't Build Software" is an honest reminder that shipping software is more than producing syntactically correct code; it is about intent, architecture, testing, observability, and human communication.

So the skill that becomes scarce is not typing speed — it’s judgment. The best engineers will be those who:

  • Define clear problems and constraints for agents to solve.
  • Validate and critique AI-generated outputs with domain knowledge.
  • Integrate AI suggestions into robust systems safely (a minimal sketch follows this list).
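
Here is a minimal, hypothetical sketch of what that judgment can look like in day-to-day work: the agent's output is treated as a draft that must pass the existing test suite and an explicit human sign-off before it is integrated. The names (Candidate, review_gate) and the pytest/git commands are my own illustrative assumptions, not any particular team's tooling.

```python
# Hypothetical sketch: gate an AI-generated patch behind tests and human review.
import subprocess
from dataclasses import dataclass


@dataclass
class Candidate:
    description: str   # what we asked the agent to do
    patch_file: str    # path to the diff the agent produced


def run_tests() -> bool:
    """Run the project's test suite; only a green run moves us to review."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0


def review_gate(candidate: Candidate) -> bool:
    """Apply the agent's patch, run the tests, then require explicit human sign-off."""
    subprocess.run(["git", "apply", candidate.patch_file], check=True)
    if not run_tests():
        print("Tests failed; rejecting the agent's draft.")
        subprocess.run(["git", "apply", "-R", candidate.patch_file], check=True)
        return False
    answer = input(f"Tests pass for '{candidate.description}'. Accept this draft? [y/N] ")
    return answer.strip().lower() == "y"
```

The point of the sketch is not the specific commands; it is that the human remains the decision-maker at every gate the agent's output has to pass.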

Learning to code still matters — but differently

I keep returning to the practical, grounded advice I’ve seen from people who teach and hire engineers. The arc is consistent: fundamentals, problem-solving, and documenting your work are still the pillars of craft. Posts like Should You Still Learn to Code in 2025? remind us that using AI well depends on knowing what good code looks like and why it works; AI amplifies those who already understand the rules ("Should You Still Learn to Code in 2025? 11 Honest Rules"). Likewise, guidance on presenting technical aptitude—resumes and portfolio work—still stresses fundamentals and clarity, because hiring systems and teams are looking for thinking, not just credentials ("Practical guide to writing FAANG-ready software engineer resumes | Tech Interview Handbook").

A few practical translations of that wisdom:

  • Build the habit of thinking in abstractions (data structures, algorithms, systems design).
  • Use AI to explore variants, but insist on understanding the trade-offs the agent glosses over.
  • Keep a dev journal. Track what you asked the model, why you accepted or rejected an answer, and what you learned (see the sketch after this list).
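
A dev journal does not need to be elaborate. Below is one minimal sketch, assuming a simple append-only JSON Lines file; the file name, fields, and example entry are invented for illustration.

```python
# Hypothetical sketch of a dev journal for AI-assisted work: each entry records
# the prompt, the model's answer, the accept/reject decision, and the lesson learned.
import json
from datetime import datetime, timezone
from pathlib import Path

JOURNAL = Path("dev_journal.jsonl")


def log_entry(prompt: str, answer: str, decision: str, lesson: str) -> None:
    """Append one structured journal entry as a JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,        # what I asked the model
        "answer": answer,        # what it proposed (or a short summary)
        "decision": decision,    # "accepted", "rejected", or "modified"
        "lesson": lesson,        # what I learned from the exchange
    }
    with JOURNAL.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Example usage (illustrative content only):
log_entry(
    prompt="Refactor the retry loop to use exponential backoff",
    answer="Proposed a decorator, but ignored our jitter requirement",
    decision="modified",
    lesson="Agents gloss over non-functional constraints unless I state them explicitly",
)
```

Reading such a log back after a few weeks is where the learning compounds: you see which questions produced good drafts and which constraints you keep forgetting to state.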

When I teach younger engineers or mentor folks moving into machine-assisted workflows, I push this hard: use AI to accelerate learning, but don’t skip the friction. The friction is where understanding forms.

The market wants hybrid talents: tech depth + AI literacy

Job descriptions and role profiles are already changing. I see companies asking engineers to be comfortable with agentic tooling and to incorporate AI into workflows. That doesn’t mean they expect perfect ML research chops in every hire — it means they want people who can compose systems where models are one of many components. The market prize will go to engineers who blend deep technical habits with an ability to specify, probe, and govern AI systems.

This also changes what teams reward. An engineer who codifies a repeatable prompt, builds an internal agent that reduces team toil, and documents its limitations contributes far beyond lines of code. That is the new leverage.
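
One hedged illustration of that leverage: a repeatable prompt captured as code, with its known limitations written down next to it so teammates inherit both the shortcut and its caveats. The template, parameters, and limitation notes below are invented for illustration, not a recommendation of any specific tool.

```python
# Hypothetical sketch: a team's shared review prompt codified as a function,
# with its known limitations documented alongside it.

REVIEW_PROMPT_TEMPLATE = """\
You are reviewing a change to {service} written in {language}.
Constraints: {constraints}
List concrete risks (concurrency, error handling, backward compatibility)
and suggest tests. Do not rewrite the code.
"""

# Known limitations (documented so the team knows when NOT to trust the output):
# - Misses cross-service contract changes; pair it with an integration test run.
# - Tends to over-flag style issues; ignore those unless they affect correctness.


def build_review_prompt(service: str, language: str, constraints: str) -> str:
    """Fill the shared review prompt so every engineer asks the same questions."""
    return REVIEW_PROMPT_TEMPLATE.format(
        service=service, language=language, constraints=constraints
    )


print(build_review_prompt(
    service="billing-api",
    language="Go",
    constraints="no new dependencies; keep p99 latency under 50 ms",
))
```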

A personal note on balance

I have a small, stubborn thesis: technology is an extension of our agency. We build to amplify purposes we care about. AI is the most consequential such amplifier in our careers so far. That means two responsibilities for each of us:

  1. Cultivate the human capabilities that machines don’t have — judgment, moral imagination, and the craft of systems thinking.
  2. Master the tools that multiply those capabilities so that our amplified action is wise.

In practical terms: learn the fundamentals, build weird and specific projects that train your judgment, document your growth, and treat AI as a collaborator that needs direction and scrutiny. Be the pilot.

Final reflection

We are in a generational shift in how software gets made. The most useful image in my head is not of coders replaced by robots, but of coders in a feedback loop with agents: humans set aims, models propose drafts, people inspect, test, and harden. The moral dimension of our work becomes clearer — we must steward the systems we create and the ways they reshape organizations and societies.

If you asked me whether to learn to code today, I’d answer yes — with a caveat. Learn to think in systems. Learn to ask the right questions of your tools. Practice deliberate judgment. The future will reward those who can blend human wisdom with machine scale.

Regards,
Hemen Parekh
