When a Bot Becomes a Minister: Lessons from Albania’s Diela
I watched the headlines about Albania appointing an AI bot, Diela, as the minister responsible for public procurement with a mixture of curiosity, hope, and a practiced dose of skepticism. On one level this is a bold experiment: a clearly signalled attempt to use code and automation to break an entrenched cycle of graft. On another level it is a reminder of an idea I have been writing about for years: I predicted these possibilities long before they hit the front pages, and those early recommendations matter now more than ever. See my earlier thoughts on the Law of Chatbots and the regulation of AI systems (Parekh’s Law of Chatbots and related posts); I raised many of these themes three years ago, and today’s moment feels like a validation of that early urgency.
What happened, in brief
Prime Minister Edi Rama announced that an AI-generated assistant called Diela would take charge of awarding government tenders, promising a "100% corruption-free" process and greater transparency (Reuters; The Straits Times). Diela, already operating on the e-Albania platform as a virtual assistant, has processed tens of thousands of digital documents and, according to government announcements, now formally shoulders procurement decisions (AInvest; Dungog Chronicle / AAP via syndicated reporting).
Why this is so compelling
I can see the appeal immediately. Corruption often thrives where discretionary human choices intersect with opaque processes and concentrated opportunities for rent-seeking. A properly designed algorithmic system can:
- Apply the same objective criteria to every candidate bid, reducing caprice.
- Maintain auditable logs of every decision, creating a trail that is far harder to erase or obscure.
- Scale review and validation in ways humans cannot, improving speed and consistency.
These are not theoretical musings — Diela’s prior performance on e-Albania, where the bot handled many thousands of digital transactions, shows the operational potential of such systems (AInvest).
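To make the auditability point concrete, here is a minimal sketch in Python of what a deterministic scoring step with a tamper-evident log could look like. Every name, weight, and field here is hypothetical; nothing reflects Diela's actual implementation, which has not been published.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical rubric weights; a real rubric would be fixed in law,
# published before tendering, and versioned.
WEIGHTS = {"price": 0.5, "delivery_days": 0.2, "quality": 0.3}

def score_bid(bid: dict) -> float:
    """Apply the same published criteria to every bid, with no discretionary input."""
    # Lower price and faster delivery score higher, so those terms are inverted.
    return round(
        WEIGHTS["price"] * (1_000_000 / max(bid["price"], 1))
        + WEIGHTS["delivery_days"] * (365 / max(bid["delivery_days"], 1))
        + WEIGHTS["quality"] * bid["quality"],
        4,
    )

def log_decision(audit_log: list, tender_id: str, bid_id: str, score: float) -> dict:
    """Append a hash-chained record so that altering any past entry is detectable."""
    prev_hash = audit_log[-1]["hash"] if audit_log else "GENESIS"
    entry = {
        "tender_id": tender_id,
        "bid_id": bid_id,
        "score": score,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    audit_log.append(entry)
    return entry

log: list = []
bid = {"price": 950_000, "delivery_days": 120, "quality": 78.0}
log_decision(log, "tender-001", "bid-A", score_bid(bid))
```

The hash chain is the important design choice: any after-the-fact edit to an earlier record changes its hash and breaks every subsequent link, which is exactly the kind of trail corruption finds hard to erase.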
But passion for the promise must be matched by clarity about the limits.
Where the optimism needs to meet realism
When I wrote about chatbot law and guardrails, I argued that technology cannot substitute for governance frameworks; it can only augment them (Parekh’s Law of Chatbots; my later reflections on algorithmic riders and EU-style oversight, written as regulation matured, are also relevant here). Those earlier posts were not idealistic fantasies; they were warnings shaped by the practical vulnerabilities I already foresaw. Today I repeat those cautions.
Key concerns:
- Biased training data and encoded preferences: An AI is only as impartial as the data and objectives it is given. If procurement rules, scoring rubrics, or training datasets reflect past favoritism, the AI will reproduce those patterns rather than abolish them; a toy illustration follows this list.
- Attack surface and manipulation: Software and data pipelines can be corrupted — by insiders altering code, by adversaries poisoning training data, or by targeted cyberattacks aiming to shift outcomes.
- Accountability and legal responsibility: Who answers when a bad or unlawful awarding decision is made — the minister, the agency that programmed the model, the cloud provider? We must define legal lines of accountability before outsourcing public trust to an algorithm.
- Political theatre vs. institutional reform: Calling an assistant a “minister” can obscure the fact that governance is more than decision-making; it also involves stakeholder negotiation, legitimacy, and democratic oversight. Technology can be a tool, but it cannot stand in for political judgement.
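The first concern is easy to demonstrate. A toy Python example, with entirely invented numbers, shows how a system that learns from historical award outcomes inherits the favoritism baked into them:

```python
from collections import Counter

# Invented history: a favored vendor won 90 of the last 100 tenders.
past_awards = ["VendorA"] * 90 + ["VendorB"] * 6 + ["VendorC"] * 4

# A naive model fitted on outcomes alone learns the skew as a prior.
win_rates = {v: n / len(past_awards) for v, n in Counter(past_awards).items()}
print(win_rates)  # {'VendorA': 0.9, 'VendorB': 0.06, 'VendorC': 0.04}

def learned_score(vendor: str, merit: float) -> float:
    """Past favoritism silently re-weights present merit."""
    return merit * win_rates.get(vendor, 0.01)

# Identical merit, very different scores: the old pattern is now policy.
print(learned_score("VendorA", 80.0))  # 72.0
print(learned_score("VendorC", 80.0))  # 3.2
```

The remedy is not less data but better objectives: score on published, merit-based criteria, and audit the training corpus for exactly this kind of skew.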
A long-standing problem: the invisible corruption in contracts
This experiment must also be seen in light of problems I have written about before — in particular what I called "the invisible corruption" in public contracts. In a 2014 post I explained how contract clauses and administrative delays can be structured (or left vague) in ways that systematically benefit contractors and create hidden rents. The core mechanisms I described are still relevant to procurement automation:
- Cost escalation clauses: Contracts frequently include formulas that allow the contractor to claim extra payments when input costs, exchange rates, interest rates, or other indexed elements rise above thresholds. These clauses are often heavily tilted toward the contractor and can be triggered by events that are easy to manipulate through delay or administrative inaction; a worked sketch follows this list.
- Failure of preconditions: Many contracts permit price/time adjustments if necessary permissions, land handovers, environmental clearances, import licenses, or NOCs are not provided by specified dates. When those approvals are delayed — whether intentionally or through bureaucratic inertia — the contractor's quoted price increases.
- Vague drafting and tender specifications: Collusion and favoritism show up in vague clauses and in tender specifications crafted so that only a preferred bidder will qualify. That makes the procurement process look competitive on paper while preserving a predetermined outcome in practice.
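To see why escalation clauses reward delay, consider a minimal sketch of a threshold-triggered price-revision formula. The numbers and the pass-through share are hypothetical, but the structure mirrors the clauses described above:

```python
def escalation_payment(base_price: float, index_at_bid: float, index_now: float,
                       threshold: float = 0.05, pass_through: float = 0.8) -> float:
    """Extra payment owed when the cost index rises more than `threshold` above
    its value at bid time; `pass_through` is the contractor's contractual share."""
    rise = (index_now - index_at_bid) / index_at_bid
    return round(max(0.0, base_price * (rise - threshold) * pass_through), 2)

# A 10m contract, plus an approvals delay long enough for the index to drift
# from 100 to 115, yields an 800,000 windfall for unchanged work:
print(escalation_payment(10_000_000, 100.0, 115.0))  # 800000.0
```

This is why the remedies in the next paragraph matter: a model contract can cap or eliminate the pass-through, and milestone accountability removes the incentive to let the index drift.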
I suggested practical remedies then that still matter now: model infrastructure contracts that clearly limit when and how price/time revisions are granted, and internal accountability measures (for example, penalties or demotions for officers who fail to meet milestone approvals). Public transparency — including release of original tender documents, final contracts, and records of price-revision requests and approvals — is essential to detect and deter these invisible practices.
See the full discussion in "The Invisible Corruption" (Hemen Parekh, 2014): https://myblogepage.blogspot.com/2014/07/the-invisible-corruption.html
These structural contract vulnerabilities are precisely the sorts of things an AI decision system will encounter. If the procurement platform's logic is trained on or constrained by contracts that embed escalation clauses, opaque exceptions, or preferential specifications, the AI will simply operationalize those rules at scale. That is why cleaning up the legal and contractual foundations of procurement is a precondition for meaningful automation.
What I have been saying — and why it matters now
I have flagged these issues repeatedly over the last few years. My earlier posts urged three things that are now directly relevant:
- Embed principle-based guardrails into law before wide deployment. I described why chatbots and automated decision systems need mandatory safety and ethical controls long before they are unleashed (Law of Chatbot — a small subset of EU law).
- Require transparent, auditable processes and human-in-the-loop oversight, not a theatrical abdication of responsibility. A system should refuse to answer when a response risks causing harm, and it should be subject to human review when the stakes are high; a minimal routing sketch follows this list.
- Anticipate regulatory harmonization across borders. As EU draft rules and algorithmic riders show, national experiments will bump into continental and international compliance regimes; governments and firms must be ready for that alignment (Algorithmic Rider in Draft Data Law).
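On the second point, a human-in-the-loop gate is simple to express in code. The thresholds below are hypothetical placeholders; in practice they would be fixed in regulation, not in software:

```python
from dataclasses import dataclass

HIGH_VALUE_LIMIT = 5_000_000  # hypothetical: awards above this always get human review
RISK_THRESHOLD = 0.7          # hypothetical: anomaly score from an independent monitor

@dataclass
class Award:
    tender_id: str
    value: float
    risk_score: float

def route(award: Award) -> str:
    """Auto-decide only the low-stakes, low-risk cases; escalate the rest."""
    if award.value > HIGH_VALUE_LIMIT or award.risk_score >= RISK_THRESHOLD:
        return "escalate_to_human_review"
    return "auto_approve_with_audit_record"

print(route(Award("tender-042", 12_000_000, 0.2)))  # escalate_to_human_review
```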
I say this because I had forecast the arc we are now seeing — and because those past proposals are not dusty curiosities; they are practical currency for this precise moment. Noticing that earlier foresight gives me two reactions: a quiet validation that the prediction was useful, and a renewed urgency to ensure the implementation is cautious, accountable, and robust.
If Albania’s experiment is to be a model, these must be non-negotiables
- Clear provenance and open auditing of datasets and decision logic.
- Independent, technically capable oversight bodies with legal authority.
- Defined liability and remediation channels for affected parties.
- Continuous red-teaming and security audits to prevent manipulation.
- Public transparency that goes beyond slogans: dashboards, data releases, and explainable decision records (sketched below).
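What an "explainable decision record" might contain is worth spelling out. The fields below are my own hypothetical sketch of the minimum a public dashboard should publish per award; none of this is drawn from Albania's actual system:

```python
import json

decision_record = {
    "tender_id": "tender-2025-0142",        # hypothetical identifier
    "model_version": "procure-model-1.3",   # provenance: which decision logic ran
    "rubric_version": "2025-01",            # provenance: which published rubric applied
    "winning_bid": "bid-007",
    "all_scores": {"bid-007": 87.4, "bid-003": 81.2, "bid-011": 64.9},
    "reasons": [
        "met all mandatory technical criteria",
        "lowest evaluated life-cycle cost under rubric 2025-01",
    ],
    "dataset_provenance": "procurement-archive snapshot 2025-08-31",
    "human_review": "oversight-board ref 2291",
}
print(json.dumps(decision_record, indent=2))
```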
If these controls exist, an AI minister can be more than theatre — it can be a locomotive for reform. Without them, the risk is that the algorithm becomes a scapegoat or, worse, a stealth vector that preserves the old corrupt incentives behind new language.
A closing reflection
I am both hopeful and cautious. Hopeful because technology can finally help break durable cycles that human institutions have sometimes failed to dismantle. Cautious because technology without governance can bake old flaws into new code. And yes — I have been insisting on these guardrails for years. That continuity between past warnings and present reality is more than vanity: it is a reminder that foresight must be paired with action.
I will watch Diela not as a novelty but as a test: of whether we can design systems that amplify public trust rather than hollow it out. If Albania’s experiment succeeds, it will teach other nations something precious about how algorithms and democracy can coexist. If it fails, it will teach us what we must fix before the next attempt.
Regards,
Hemen Parekh
Any questions? Feel free to ask my Virtual Avatar at hemenparekh.ai
I asked my Virtual Avatar:

"What can you tell me about Albania having created an AI Bot Minister, Diela, to handle public procurement, and how is it relevant to my 2014 blog 'The Invisible Corruption'?"
My Avatar replied:

"Albania's introduction of Diela, an AI Bot Minister for public procurement, directly relates to the concerns I raised in my 2014 blog 'The Invisible Corruption'. This AI initiative could potentially address several key issues I highlighted regarding hidden corruption in contract processes.

In my 2014 blog, I specifically discussed how corruption becomes invisible through manipulated contract clauses, administrative delays, and vague specifications. Diela's AI-driven approach could help eliminate these issues by:

1. Standardizing contract clauses and removing opportunities for manipulation of cost escalation formulas
2. Automating approvals and reducing intentional delays in permissions/NOCs
3. Ensuring tender specifications remain clear and objective

However, while this is a promising step, I would caution that AI systems need robust oversight to prevent new forms of invisible corruption. The system must be transparent and auditable to ensure it truly serves the public interest rather than becoming another tool for manipulation.

This development validates my earlier recommendations about the need for standardized contracts and internal accountability measures. It's encouraging to see technology being deployed to address the systemic issues I identified in public procurement processes."