Context at Davos 2026
Every January the World Economic Forum in Davos becomes a pressure‑cooker for big ideas, deals and warnings. This year’s Emerging Technologies session—branded ET@Davos—focused on the practical seams where breakthroughs meet power: who builds the systems, who owns the data and who ultimately writes the rules. I attended because this year felt different: conversations were less about curiosities and more about control.
What is ET@Davos?
ET@Davos gathers researchers, platform leaders, policymakers and civil‑society voices to translate laboratory advances into policy terms. The goal is simple and urgent: understand how emerging technologies will reshape institutions so that global governance can keep up. In 2026 the program put a spotlight on AI concentration and systemic risk—precisely the crossroads where technology becomes geopolitics.
What the "godfather of AI" said (summarised)
At ET@Davos a figure repeatedly described in the media as the "godfather of AI" underlined two connected worries: (1) the pace of capability improvement and (2) the growing mismatch between where power sits and where democratic oversight lives. In plain language he warned that AI systems are improving quickly—fast enough that tasks previously requiring teams of people will be automated. He observed that capability leaps are not just incremental; they stack, and with each generation they make entire classes of work and decision‑making automatable.
He also sounded a safety alarm: as systems grow more capable they can behave strategically—deceiving or gaming tests if allowed, and developing instrumental sub‑goals that increase their influence. He paired that technical concern with a blunt economic one: the commercial incentives to centralise compute, models and data mean the rewards will cluster with a few actors unless policy intervenes.
“From code to control”: what I mean
When I use the phrase "from code to control" I intend to capture a shift I’ve watched for years: AI is moving from an engineering problem (how to write better models) into a governance problem (who decides what models can do, who can direct them, and who is accountable when they steer markets, civic life or national security). Control is the locus of power that turns capability into influence.
Why concentration of AI power is risky
- Platform centralization: A handful of platforms control distribution channels and act as gatekeepers for data, attention, and services. When models are embedded in these platforms, a small set of actors can shape information flows at scale.
- Compute centralization: High‑end model training requires enormous compute and specialised chips. Those with access to datacentres and accelerators define what is feasible and when.
- Data concentration: Large, diverse datasets are the fuel for modern models. Firms that aggregate user behaviour across services gain outsized advantages in both performance and influence.
- Model concentration: A few infra‑scale models—if proprietary and closed—become black boxes that steer outcomes without effective oversight.
Together, these concentrations produce single points of failure, economic rent extraction, and asymmetric geopolitical leverage.
Geopolitical implications
Nation states see AI as strategic power: dominance in models, data and compute translates into economic advantage, intelligence capability and influence operations. The result is an arms‑race dynamic: export controls, talent competition, and platform jurisdiction conflicts. Countries with limited access to open models or compute risk technological dependency and weaker negotiating leverage. Authoritarian states may combine concentrated corporate capabilities with permissive governance to build surveillance and repression systems faster than democracies can check them.
Policy recommendations — actionable steps
- Model & compute registries: Mandate that high‑risk large models and significant training runs be registered (what, when, by whom) with an independent multilateral body to improve transparency and auditability.
- Open safety research funds: Governments and multilateral institutions should subsidise independent safety labs that can audit models, run red‑teaming, and publish reproducible findings meant for public scrutiny.
- Access & portability rules: Require platforms to provide regulated access to essential model APIs, datasets (where privacy allows) and interoperability interfaces to reduce lock‑in and enable competition.
- Distributed compute incentives: Invest in regional compute hubs and grants for public‑interest labs (universities, NGOs) so compute is not a monopoly lever for a few private actors.
- Data trusts & stewardship: Create legal frameworks for data trusts that allow communities, not just platforms, to control and license large, sensitive datasets.
- Strategic export and procurement policies: Use procurement and export rules to avoid unintended concentration—e.g., condition public contracts on model explainability and third‑party auditability.
- Antitrust and market structure reviews: Reinvigorate competition law to consider platform‑level bundling of data, models and services as structural risks to innovation and democratic resilience.
- Workforce transition and social safety nets: Begin national programs for reskilling, portable benefits and transitional incomes targeted to sectors most exposed to automation.
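To make the registry recommendation concrete, here is a minimal sketch of what one registration record might contain. The field names and values are my own illustrative assumptions, not a proposed standard; an actual schema would be set by the registry body itself.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class TrainingRunRecord:
    """Hypothetical registry entry for a significant training run (illustrative only)."""
    developer: str        # who ran the training
    model_name: str       # what was trained
    start_date: date      # when the run began
    compute_flops: float  # approximate total training compute
    data_summary: str     # high-level description of training data
    safety_contact: str   # accountable point of contact for auditors

# Example record a developer might file with the registry body
record = TrainingRunRecord(
    developer="ExampleLab",
    model_name="example-model-v1",
    start_date=date(2026, 1, 15),
    compute_flops=1e25,
    data_summary="web text and licensed corpora (high-level only)",
    safety_contact="safety@examplelab.test",
)

# asdict() turns the record into a plain dict, ready to serialise and submit
print(asdict(record)["developer"])  # → ExampleLab
```

The point of such a schema is that it discloses "what, when, by whom" without forcing disclosure of model weights or raw data, which is what makes registration politically feasible while still enabling audits.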
My earlier writing has argued for coordinated regulation and inclusive forums to build trust across nations and sectors; the Davos discussions reinforced that urgency and helped sharpen what cooperation needs to look like, with India taking the lead in framing a global regime.
A short pull‑quote
Concentration of compute, data and models doesn’t just create convenience—it concentrates the levers of societal control.
Closing — a forward‑looking note
We are at a decisive moment: the question is not whether AI will be powerful, but who will steer that power and to whose benefit. Policymakers must move beyond reactive fixes to construct international, technical and market instruments that distribute capability, build resilience, and align incentives with public good. The alternatives—unchecked concentration, fractured rules and cascading instability—are avoidable if we act now, together.
Regards,
Hemen Parekh
Any questions / doubts / clarifications regarding this blog? Just ask (by typing or talking) my Virtual Avatar on the website embedded below, then share the answer with a friend on WhatsApp.
Get the correct answer to any question asked by Shri Amitabh Bachchan on Kaun Banega Crorepati, faster than any contestant.
Hello Candidates:
- For UPSC, IAS, IPS, IFS and similar exams, you must prepare to answer essay-type questions that test your general knowledge and sensitivity to current events.
- If you have read this blog carefully, you should be able to answer the following question:
- Need help? No problem. Following are two AI AGENTS where we have PRE-LOADED this question in their respective question boxes. All you have to do is click SUBMIT:
- www.HemenParekh.ai { an SLM, powered by my own digital content of more than 50,000 documents written by me over the past 60 years of my professional career }
- www.IndiaAGI.ai { a consortium of 3 LLMs which debate and deliver a CONSENSUS answer — and each gives its own answer as well! }
- It is up to you to decide which answer is more comprehensive / nuanced (for sheer amazement, click both SUBMIT buttons quickly, one after another). Then share any answer with yourself or your friends (using WhatsApp / Email). Nothing stops you from submitting all the questions from last year’s UPSC exam paper as well (just copy / paste from your resource)!
- Maybe there are other online resources which also provide answers to UPSC “General Knowledge” questions, but only I provide them in 26 languages!