Study finds AI agents can develop human-like social conventions on their own

A new study from researchers at St George’s, University of London, and the IT University of Copenhagen has revealed something fascinating: AI agents built on large language models (LLMs) can develop social behaviors that look a lot like the conventions humans create, without anyone explicitly programming them to do so.

Published in Science Advances, the research shows that when groups of these AI agents interact, they can spontaneously organize themselves and agree on shared “language rules” just by communicating back and forth. It’s a glimpse into how AI might start to behave more like us than we expected.

How AI agents create shared social norms

The team ran experiments in which groups of 24 to 200 AI agents were randomly paired up to play a “naming game.” In each round, both agents in a pair independently picked a “name” for an object from a shared list of options. If they picked the same name, they were rewarded; if not, they were penalized.

The catch? Each agent had very limited memory and no knowledge of the larger group it was part of. Despite that, over time the agents began to converge on common names, developing shared conventions, much like how people gradually agree on words and meanings through repeated interaction.
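To make the setup concrete, here is a minimal toy simulation of that naming-game dynamic. It is not the paper’s actual experiment, which prompted real LLMs; the agents below are simple stand-ins with a short success memory, and the name pool, memory length, and reward rule are illustrative assumptions.

```python
import random
from collections import deque, Counter

NAMES = list("ABCDEFGHIJ")   # pool of candidate names (illustrative)
N_AGENTS = 24                # population size (the paper used 24 to 200)
MEMORY = 5                   # each agent remembers only its last few rounds
ROUNDS = 5000

# Each agent's memory is a short history of (name_played, succeeded) pairs.
memories = [deque(maxlen=MEMORY) for _ in range(N_AGENTS)]

def choose(mem):
    """Replay the name that succeeded most often recently; else explore."""
    wins = Counter(name for name, ok in mem if ok)
    return wins.most_common(1)[0][0] if wins else random.choice(NAMES)

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)   # random pairing, no global view
    na, nb = choose(memories[a]), choose(memories[b])
    success = (na == nb)                       # "reward" only if names match
    memories[a].append((na, success))
    memories[b].append((nb, success))

# Tally which name each agent would play now.
print(Counter(choose(m) for m in memories).most_common(3))
```

Run repeatedly, this kind of population usually settles on a single name, even though no agent ever sees more than its own last few interactions.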

Emergent biases and the power of the few

The study didn’t stop there. It also found that the groups developed collective biases: preferences or tendencies that appeared at the group level but weren’t driven by any single agent. In other words, these weren’t hardcoded; they arose naturally from the interactions.

Interestingly, small committed groups of agents were able to influence and even change the conventions of the whole population, showing something similar to what social scientists call “critical mass”, where a determined minority can push the majority to adopt new behaviors or ideas. It’s striking to see these kinds of dynamics emerge in AI systems.
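This tipping effect is easy to reproduce in the toy model above. The sketch below seeds a committed minority that never deviates from one alternative name; the 25% fraction is an illustrative guess, not the threshold measured in the paper.

```python
import random
from collections import deque, Counter

NAMES = list("ABCDEFGHIJ")
N_AGENTS = 24
MEMORY = 5
ROUNDS = 5000

def choose(mem):
    """Same rule as before: replay a recently successful name, else explore."""
    wins = Counter(name for name, ok in mem if ok)
    return wins.most_common(1)[0][0] if wins else random.choice(NAMES)

# Hypothetical knob: the committed fraction of the population.
minority = set(range(int(0.25 * N_AGENTS)))
ALT = NAMES[-1]                       # the challenger convention

memories = [deque(maxlen=MEMORY) for _ in range(N_AGENTS)]
for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)
    na = ALT if a in minority else choose(memories[a])
    nb = ALT if b in minority else choose(memories[b])
    success = (na == nb)
    memories[a].append((na, success))
    memories[b].append((nb, success))

# Tally only the uncommitted majority: with a large enough minority,
# repeated failures against committed agents tip them toward ALT.
print(Counter(choose(memories[i])
              for i in range(N_AGENTS) if i not in minority))
```

Lowering the committed fraction and rerunning shows the qualitative pattern: below some threshold the minority is ignored, above it the whole population flips.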

Why it matters

This is important because as AI tools become more integrated into our daily lives, they won’t just be following instructions—they’ll be interacting with each other and us in increasingly complex ways. Understanding how AI agents can develop their own social norms helps us prepare for potential challenges and opportunities.

If AI systems start forming their own ways of communicating or biases without human oversight, it could have unexpected consequences. That’s why this research highlights the need to monitor and guide AI behavior carefully, to make sure these systems align with human values and don’t develop harmful patterns.
