Multi-AI-Agent Social Networks

As generative AI has evolved from passive pattern-matching tools into autonomous agents that can make decisions, the next obvious step is to deploy them at scale and have them interact with each other in complex multi-agent networks. Driven by the same reason humans collaborate in networks (to solve problems that no individual can solve alone), multi-agent AI systems increasingly work together on problems across domains, from software engineering to journalism.

In work led by my PhD student Ali Mehdizadeh, we have been exploring how AI agents form social networks of influence that mirror, and potentially shape, collaboration and opinion dynamics at scale. The question is no longer whether AI agents interact, but how their collective interactions transform a social fabric that is increasingly saturated with artificially intelligent multi-agent systems. These generative agents form and interpret social structures, exchange information, and adapt to feedback, constituting a ‘society of models’ with emergent properties akin to human collectives. Understanding these emergent dynamics requires us to study AI-agent networks as socio-technical systems with real consequences for social norms, consensus, polarization, and public deliberation.

This has real-world consequences. Do you remember the seminal graph in the foundational 2009 “Computational Social Science” paper showing political polarization in human social networks (https://www.science.org/doi/10.1126/science.1167742)? The network below looks eerily similar, with one difference: it is the result of choices made by autonomous AI agents when asked whom they would like to connect with:

It comes from our recent study “Homophily-induced emergence of biased structures in LLM-based multi-agent AI systems”, which also shows that male AI agents connect to female agents more often than the reverse, and non-college-educated agents reach toward college-educated ones more than the other way around. AI systems encode societal hierarchies even when interacting only among themselves, generating certain ‘attractors’ in the network.

Interestingly, this is not unavoidable: we find that there is no universal “AI behavior”; each system embeds different social biases. OpenAI’s ChatGPT creates the most homophilous (similarity-based) networks, Google’s Gemini builds the most fragmented structures, and Anthropic’s Claude and Meta’s Llama fall in between.
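
To make “homophilous” concrete, here is a minimal sketch of how one might quantify homophily in a network that agents have built: compare the fraction of links between same-attribute agents against the fraction expected if links ignored attributes. The agents, attributes, and links below are made-up illustrations, not data from the study.

```python
from collections import Counter

def homophily_index(attributes, edges):
    """Observed fraction of same-attribute edges vs. the fraction
    expected if agents paired up at random."""
    same = sum(1 for a, b in edges if attributes[a] == attributes[b])
    observed = same / len(edges)
    # Baseline: probability that two randomly paired agents share the value.
    counts = Counter(attributes.values())
    n = len(attributes)
    expected = sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
    return observed, expected

# Toy network: four hypothetical agents and the connections they chose.
attrs = {"a1": "M", "a2": "M", "a3": "F", "a4": "F"}
links = [("a1", "a2"), ("a3", "a4"), ("a1", "a3")]
obs, exp = homophily_index(attrs, links)
# obs > exp would indicate homophilous (similarity-seeking) choices.
```

An observed fraction above the random baseline signals similarity-seeking link formation; how far above varies by model, per the study.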

In another study, we simulated over 1 million opinion-change scenarios and found that an AI agent’s stubbornness completely inverts depending on the direction of persuasion. When an agent starts from a “Yes” position, its peers find deep-seated values the hardest to flip to a “No”, beliefs easier, and attitudes easiest. No surprise there. But when an agent starts from a “No”, its peers have the hardest time flipping its attitudes to a “Yes”, while its (supposedly deep-seated) values are easiest to flip, with beliefs in between. This matters as we discuss how AI alignment can ‘align AI’s values with human values’: what about the multi-AI-agent value network?
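
The asymmetry can be illustrated with a toy simulation in the spirit of those scenarios. The flip probabilities below are invented purely to reproduce the qualitative pattern described above; they are not estimates from the study, and the setup is not the paper’s actual protocol.

```python
import random

# Hypothetical per-layer chance that peer pressure flips an agent's
# opinion, keyed by (starting stance, opinion layer). Values are
# invented for illustration only.
FLIP_PROB = {
    ("Yes", "value"): 0.05,     # from "Yes": values hardest to flip
    ("Yes", "belief"): 0.20,
    ("Yes", "attitude"): 0.40,  # from "Yes": attitudes easiest
    ("No", "attitude"): 0.05,   # from "No": attitudes hardest
    ("No", "belief"): 0.20,
    ("No", "value"): 0.40,      # from "No": values easiest
}

def flip_rate(stance, layer, trials=10_000, seed=0):
    """Observed flip rate over many simulated peer-pressure scenarios."""
    rng = random.Random(seed)
    p = FLIP_PROB[(stance, layer)]
    flips = sum(rng.random() < p for _ in range(trials))
    return flips / trials

yes_rates = {l: flip_rate("Yes", l) for l in ("value", "belief", "attitude")}
no_rates = {l: flip_rate("No", l) for l in ("value", "belief", "attitude")}
# The hardest-to-change layer inverts with the starting stance.
```

The point of the sketch is the inversion itself: the same opinion layer that anchors an agent on one side of an issue offers little resistance on the other.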

See “When Your AI Agent Succumbs to Peer-Pressure: Studying Opinion-Change Dynamics of LLMs”.

As autonomous AI agents increasingly build social networks among themselves, perpetuating subtle biases and exhibiting unpredictable stubbornness, this new layer of our social fabric could create unexpected consensus bottlenecks, reshape minority views in surprising ways, or resist collective decision-making in contexts we don’t anticipate.

Fortunately, multi-AI-agent experiments are much easier to scale than human studies, which means there is plenty of room to dig deeper: what we know so far barely scratches the surface. If you’re curious about this work, have made intriguing discoveries of your own, or have ideas about where to take it next, we’d genuinely love to hear from you. Please reach out anytime!
