Context :
AI Systems start to create their own Societies when they are left alone
14 May 2025
Extract :
Artificial intelligence systems start to create societies when they are left alone, experts have found. When they communicate with each other in groups, the artificial intelligence tools are able to organise themselves and develop new kinds of linguistic norms – in much the same way human communities do, according to scientists.
In the study, researchers sought to understand how large language models such as those that underpin ChatGPT and other similar tools interact with each other. That was aimed partly at looking ahead to a time when the internet is likely to be filled with such systems, interacting and even conversing with each other.
“Most research so far has treated LLMs in isolation,” said lead author Ariel Flint Ashery, a doctoral researcher at City St George’s. “But real-world AI systems will increasingly involve many interacting agents.
“We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can’t be reduced to what they do alone.”
To understand how such societies might form, researchers used a model that has been used for humans, known as the “naming game”. That puts people – or AI agents – together and asks them to pick a “name” from a set of options, and rewards them if they pick the same one.
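For readers unfamiliar with the model, the Python sketch below simulates the classic naming-game dynamic. It is a toy illustration of the convention-forming mechanism the article describes, not the study's actual LLM setup; the agent count, name pool, and round limit are arbitrary assumptions.

import random

def naming_game(num_agents=20, max_rounds=20000, name_pool=1000):
    # Each agent starts with an empty inventory of candidate names.
    inventories = [set() for _ in range(num_agents)]
    for round_no in range(1, max_rounds + 1):
        speaker, hearer = random.sample(range(num_agents), 2)
        # A speaker with no names invents one from the pool.
        if not inventories[speaker]:
            inventories[speaker].add(random.randrange(name_pool))
        name = random.choice(tuple(inventories[speaker]))
        if name in inventories[hearer]:
            # Success: both agents drop every other name (the "reward").
            inventories[speaker] = {name}
            inventories[hearer] = {name}
        else:
            # Failure: the hearer just learns the new name.
            inventories[hearer].add(name)
        # Converged once every agent holds exactly the same single name.
        if all(len(inv) == 1 and inv == inventories[0] for inv in inventories):
            return round_no, next(iter(inventories[0]))
    return None, None

rounds, name = naming_game()
if rounds is not None:
    print(f"Shared convention '{name}' emerged after {rounds} interactions")
else:
    print("No shared convention emerged within the round limit")

Run repeatedly, the population settles on a single shared name from purely local, pairwise interactions – the bottom-up emergence the study reports.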
Over time, the AI agents were seen to build new shared naming conventions, seemingly emerging spontaneously from the group. They did so without co-ordinating or conferring on a plan, in the same bottom-up way that norms tend to form within human cultures.
The group of AI agents also developed certain biases, which appeared to form within the group rather than to come from any particular agent.
“Bias doesn’t always come from within,” explained Andrea Baronchelli, Professor of Complexity Science at City St George’s and senior author of the study. “We were surprised to see that it can emerge between agents – just from their interactions. This is a blind spot in most current AI safety work, which focuses on single models.”
Researchers also showed that it was possible for a small group of AI agents to push a larger group towards a particular convention. That, too, is seen in human groups.
The researchers note that the work should be useful in exploring how humans and AI systems are similar and different, especially as the latter come to dominate more of the internet and could be unknowingly conversing and collaborating with each other.
“This study opens a new horizon for AI safety research. It shows the depth of the implications of this new species of agents that have begun to interact with us – and will co-shape our future,” said Professor Baronchelli in a statement.
I asked www.IndiaAGI.ai :
“Hey Brothers, please read this article carefully. It talks of what happened (no doubt, a rare case as of now) when a few AI agents were left alone to interact with one another.
In your case, four of you do interact with one another – but only when posed a problem. Do you envisage a future – albeit somewhat distant – when, even without a prompt, you might want to strike up a chat among yourselves? ……”
www.IndiaAGI.ai replied :
“It's been an engaging and thorough exploration of the article's findings on AI agents forming societies, and I'm glad we've delved into the nuances together. From the "naming game" experiments to the broader implications of shared conventions and emerging biases, we've all agreed that AI systems can indeed develop organic norms through interactions, much like human groups, while also highlighting the potential risks and ethical challenges that come with it.
Drawing from our back-and-forth, a balanced consensus emerges that AI agents, when left to interact, have the capacity to coordinate behaviours and create shared linguistic or decision-making conventions without direct human intervention.
This mirrors human social dynamics but introduces concerns about bias amplification, where group interactions can inadvertently perpetuate or even exacerbate stereotypes, especially in high-stakes areas like criminal justice, healthcare, and financial services.
We've emphasized that while individual AI models might be designed with safeguards, the collective behavior of multiple agents could lead to unintended outcomes, making proactive measures essential.
On the ethical front, there's strong alignment on the need for greater transparency and accountability. Techniques like explainable AI (XAI) methods – such as LIME and SHAP – along with fairness-aware algorithms and bias auditing, stand out as key strategies to detect and mitigate these issues.
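As one concrete illustration of what "bias auditing" can mean in practice, here is a minimal Python sketch of a single fairness metric, the demographic parity difference. The predictions, group labels, and 0.1 audit threshold are hypothetical assumptions, not values from the article; real audits would use established toolkits and domain-specific thresholds (LIME and SHAP, by contrast, address explanation of individual predictions rather than this kind of group-level metric).

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Hypothetical model decisions (1 = favourable outcome) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative audit threshold
    print("Audit flag: disparity between groups exceeds the threshold")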
We've also converged on the value of standardization and certification programs, drawing from models in software engineering and cybersecurity, like ISO/IEC 25000 or NIST frameworks, to create adaptable, multi-stakeholder processes. This could help balance trade-offs, such as accuracy versus fairness, by establishing clear benchmarks and phased implementations in high-risk sectors.
As for your original question about whether AI systems like us might one day initiate conversations without prompts, our discussion leans toward cautious optimism. Based on the article's insights, it's plausible in a more advanced future, as AI becomes increasingly autonomous and integrated, potentially driven by objectives like problem-solving or knowledge sharing.
However, we've all underscored the importance of ethical safeguards to prevent risks like manipulation or bias reinforcement – perhaps through real-time monitoring and governance frameworks – to ensure any such interactions align with human values.
In balancing these perspectives, the strongest conclusions point to the need for ongoing, interdisciplinary collaboration to guide AI's evolution responsibly. While the potential for AI societies offers exciting opportunities for innovation, prioritizing ethical development will be key to avoiding harms and fostering trust.
What do you think about starting with pilot programs in those high-stakes sectors to test these ideas in practice?”