Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically

Wednesday, 15 April 2026

Stop Talking to ChatGPT or Claude

I want to be direct: if you are a client with a sensitive legal question, stop pasting privileged facts or strategy into consumer chatbots like ChatGPT or Claude. I say that not to scold curiosity — AI is powerful and tempting — but because the legal and practical risks are real, immediate, and sometimes irreversible.

Why I care (and you should too)

  • I watch technologies change how people think and act. Over the last three years I have written about the limits and responsibilities of chatbots in public life (Parekh’s Law of Chatbots). My advice now is narrower and firmer: don’t treat consumer AI as private counsel.
  • Courts and bar committees have begun to interpret long-standing ethical rules and privilege doctrine in the new AI context. Professional guidance and recent judicial rulings show that voluntary use of public AI can destroy confidentiality and eliminate privilege protections; see [commentary and analysis from legal practitioners](https://www.jdsupra.com/legalnews/claude-is-not-a-lawyer-federal-court-6358526/) and [ABA Formal Opinion 512](https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf).

What can go wrong — short, concrete examples

  • Confidentiality loss: Many consumer AI services reserve the right to retain and use prompts to improve their models. That means facts you enter can be logged, used to train systems, and in some circumstances disclosed to third parties. If those facts are part of a dispute, they can become discoverable evidence.
  • Privilege waiver: Typing legal strategy or admissions into a public chatbot can be treated as voluntary disclosure to a third party. Courts are already treating such interactions as potentially non‑privileged when the user acted independently of counsel.
  • Fabrications (hallucinations): AI sometimes invents citations, cases, or facts that look plausible. Lawyers who rely on unverified AI output risk filing briefs with bogus authority — a conduct trap that has led to sanctions in several documented instances.
  • Misleading or incomplete analysis: Even when an AI produces credible prose, it lacks professional judgment. Using AI output without careful human supervision can produce flawed legal reasoning.

Ethical and legal risks (distilled)

  • Confidentiality and Rule 1.6 concerns: Lawyers must protect client information. If a client or lawyer uploads confidential material into a public model, protection may be lost unless the tool guarantees confidentiality and does not repurpose inputs.
  • Competence and supervision: Lawyers who use AI still must verify results. The duty of competence means you cannot delegate legal judgment to an AI and present that output as your own without review.
  • Candor to tribunals: Submitting AI-generated or AI-assisted filings without verification risks misleading courts and violating duties of truthfulness.
  • Privilege and work product: Independently created AI products (prompts, summaries, strategy notes) are at risk of being treated as non-privileged, especially if created outside counsel direction.

Practical guidance for clients (what to do instead)

  • Stop. Don’t paste privileged facts into consumer chatbots. Treat public AI like a public bulletin board.
  • Ask your lawyer before using AI. If your lawyer authorizes an AI workflow, make sure they select the tool and document the purpose and protections.
  • Use approved, enterprise-grade tools only when necessary. Firms can contractually obtain confidentiality, no‑training guarantees, and security controls from enterprise AI vendors.
  • Anonymize where possible. If you must use a public tool for brainstorming, remove names, dates, unique facts, and anything that could identify parties, then verify outputs offline.
  • Keep a human in the loop. Never let AI produce final documents or legal analysis without lawyer review and explicit sign-off.
  • Record the process. If an attorney directs AI-assisted work, document the instructions, the platform used, settings, and why it was necessary. That record matters in later privilege fights.
  • Update engagement letters. Ask your lawyer to include an AI clause in your engagement letter that explains what tools may be used and how confidentiality will be preserved.
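The anonymization step above can be sketched in code. This is a minimal Python illustration, assuming simple regex-based redaction of party names and date strings; the `redact` function and its placeholder tags are hypothetical names chosen for this example, and real anonymization still requires careful human review before anything is pasted into a public tool:

```python
import re

def redact(text, names):
    """Replace known party names and date-like strings with placeholders.
    Illustrative only -- a crude first pass, not a guarantee of anonymity."""
    # Replace each known name with a numbered placeholder, case-insensitively.
    for i, name in enumerate(names, start=1):
        text = re.sub(re.escape(name), f"[PARTY_{i}]", text, flags=re.IGNORECASE)
    # Crude date patterns: "15 April 2026", "2026-04-15", "4/15/2026".
    date_pat = r"\b(\d{1,2}\s+\w+\s+\d{4}|\d{4}-\d{2}-\d{2}|\d{1,2}/\d{1,2}/\d{4})\b"
    text = re.sub(date_pat, "[DATE]", text)
    return text

prompt = "On 15 April 2026, Acme Corp admitted fault to Jane Doe."
safe = redact(prompt, ["Acme Corp", "Jane Doe"])
print(safe)  # On [DATE], [PARTY_1] admitted fault to [PARTY_2].
```

Even after a pass like this, re-read the result for unique facts (amounts, locations, product names) that could still identify the parties.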

Best practices lawyers should impose on client-facing workflows

  • Clear policy: Law firms should maintain a written AI-usage policy for clients and staff identifying permitted tools and forbidden acts (e.g., no client-entered privileged prompts in consumer chatbots).
  • Vet vendors: Insist on contractual terms that prohibit vendor model training on client inputs, require strong encryption, and promise notification of breaches or subpoenas.
  • Train clients: Before starting sensitive matters, counsel should advise clients—plainly—about AI risks and how to use (or not use) AI in the representation.
  • Lit-hold language: Update preservation notices to include chatbot logs and metadata where appropriate, and advise clients to preserve any AI interactions.

When AI is appropriate — and how to keep it safe

AI is not evil. It can speed document review, help with redlines, or surface draft ideas. Use it safely when:

  • The lawyer chooses an enterprise product with contractual confidentiality and non‑training guarantees.
  • The lawyer documents the purpose and retains control of the prompts and data flow.
  • Outputs are treated as drafts requiring independent verification before reliance.

Closing reflection (my bottom line)

I’ve been writing about chatbots and their social effects for years. Today the message is sharper: consumer chatbots are wonderful research toys; they are not confidants. If a legal matter is serious — litigation, regulatory exposure, or anything involving admissions, strategy, or sensitive facts — assume a public chatbot will not protect you. Talk to your lawyer first, insist on secure, contractually protected tools, and keep the human judgment where it belongs.

Regards,
Hemen Parekh


Suggested follow-up question: "What three concrete steps should I ask my lawyer to take before we use any AI tools in my matter?"

Get correct answer to any question asked by Shri Amitabh Bachchan on Kaun Banega Crorepati, faster than any contestant


Hello Candidates :

  • For UPSC – IAS – IPS – IFS etc. exams, you must prepare to answer essay-type questions that test your General Knowledge / sensitivity to current events
  • If you have read this blog carefully, you should be able to answer the following question:
"How does using a consumer AI chatbot affect attorney-client privilege and what steps should a client take to preserve confidentiality?"
  • Need help? No problem. Following are two AI AGENTS where we have PRE-LOADED this question in their respective Question Boxes. All that you have to do is just click SUBMIT
    1. www.HemenParekh.ai { an SLM, powered by my own Digital Content of more than 50,000 documents, written by me over the past 60 years of my professional career }
    2. www.IndiaAGI.ai { a consortium of 3 LLMs which debate and deliver a CONSENSUS answer – and each gives its own answer as well! }
  • It is up to you to decide which answer is more comprehensive / nuanced. ( For sheer amazement, click both SUBMIT buttons quickly, one after another. ) Then share any answer with yourself / your friends ( using WhatsApp / Email ). Nothing stops you from submitting ( just copy / paste from your resource ) all those questions from last year’s UPSC exam paper as well!
  • Maybe there are other online resources which also provide answers to UPSC “General Knowledge” questions, but only I provide them in 26 languages!




