Making Sense of Collective Intelligence: Information, Bandwidth & Feedback

“Collective Intelligence” is an enormous territory. As a phrase, it appears in a slew of academic and practical fields, understood differently depending on context.

This post takes an initial, informal look at an approach to understanding and categorizing forms of collective intelligence based on flows of information. It’s neither complete nor fully rigorous, instead intending to point in a direction that might be interesting and make some suggestions about what we might find if we look there.

Information Flow and Collective Intelligence

There are many ways we could divide collective intelligence, including: by scale, from small to large groups; by time, looking at synchronous, semi-synchronous, and asynchronous communication; by medium, including face-to-face interaction and various digital media; by structure, from simple to complex processes; by context, based on the institutions or organizational structures in which things happen; or by content: what the collective is being intelligent (or foolish) about.

Is there something underlying these variables that might help us tie together the multi-dimensional space of collective process? One possibility is the flows of information that lead to a collective outcome.

“Outcome” can be anything from a national election, to the direction and position of a flock of birds, to the work produced by a collaborative team.

Where Is Information Flowing?

Regardless of scale or form, there are a few ways that information flows in nearly any collective intelligence process.

Possibly the most obvious is that information from individual agents flows (in one way or another) into a collective result.

In many cases, information is also exchanged between individuals making a decision or producing work together.

There are also slightly less obvious flows of feedback. Feedback is an essential part of many biological and technical systems. Like “collective intelligence”, “feedback” has many domain-specific definitions, but in this context we can say that feedback is information about the result of an action that can inform future actions.

Feedback flows from the collective outcome to the individuals.




In many cases feedback also flows among the individuals making a decision.

Agent-agent feedback might be the most subtle of these four information flows. In a conversation, it might mean the response you give to my suggestion regarding a collective decision, either verbally or by body language or other cues. In a school of fish, it might mean the way other fishes’ directions change (or don’t) in response to a change in one fish’s direction.

🤓 feedback on results vs. feedback on state

There are at least two ways we can think about feedback from a collective process. One is feedback on results: if I vote in an election and then, four years later, see the consequences of the candidate I voted for holding power, that's feedback on the end result. Feedback on state is more immediate: while I'm voting, what do I know about the current state of the collective vote? State feedback in this case would include exit polls, and polls leading up to an election.

In this article, when I refer to feedback, I’m referring to state feedback.

Qualities of Information Flow

There might be as many ways to look at information as there are bits in the universe, but a couple of properties seem particularly useful for thinking about collective intelligence: bandwidth and feedback speed.


If we think of information as water flowing through a pipe, the bandwidth is like the width of the pipe. Put differently, bandwidth answers the question: how many data points would be needed to capture what's being communicated at this moment?

In many contexts, it’s probably not possible to answer this rigorously (for a variety of both boring and interesting reasons), but we can make relative assessments. In a typical election, for example, the information flow from agent to decision is very low. Depending on the number of candidates, it can be as little as one binary decision, or a single bit of information.
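The upper bound for a single ballot can be sketched in Shannon terms: choosing one of N candidates carries at most log2(N) bits. A minimal sketch (the candidate counts are illustrative, not drawn from any particular election):

```python
import math

def vote_bits(num_candidates: int) -> float:
    """Upper bound on the information in one ballot: log2 of the choice count."""
    return math.log2(num_candidates)

print(vote_bits(2))  # yes/no referendum: 1.0 bit
print(vote_bits(8))  # eight-candidate race: 3.0 bits
```

Even a crowded ballot stays in the single digits of bits, which is the point of the comparison that follows.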

In a face-to-face human deliberation the information content is comparatively huge. It generally includes words, but also body language, facial expressions, and tone of voice. In a crude assessment, to capture the basics of this an audio/video stream might require a million bits per second. When the information content of words (which aren’t instantaneously measurable) is taken into account, the bandwidth is even higher.

We can roughly place the bandwidth on a spectrum, like this:



Speed of feedback

The second parameter has to do with the speed of feedback loops during or between collective decisions. This encompasses both Agent-Collective feedback (what information do I have about the current trajectory of the collective, based on which I could adjust my contribution to it?), and Agent-Agent feedback (how are other individuals responding to my contribution to the collective trajectory?).

Feedback speed changes dramatically with the type of group decision process. Interestingly, feedback speed also varies with the perceptivity and sensitivity of individuals, and with the context of the decision.

Again, we can plot examples on a spectrum.





🤓 Information content of words

When quantifying information content, words present a special challenge. At the level of individual characters, information content can be quantified rigorously, because an alphabet has a fixed number of characters. But, as humans, we don't understand words as individual characters. The semantic content of verbal communication can only really be measured at the level of sentences, because that is the level at which we make meaning of it. This means there is no real sense in which a stream of words is a continuous transmission of information (to a human brain/mind), in the way that, say, the positions of neighbouring birds are a continuous stream of information for a flocking starling. The velocity of information transfer is necessarily slower.

This presents another challenge. In the Shannon sense, information content is determined by the reduction in uncertainty provided by an information-bearing signal. In the paradigmatic case of a tossed coin, there are two possible outcomes: heads or tails. Reducing this uncertainty to a known state gives one bit of information, in the same way that a binary digit (which can be either on or off) encodes a single bit. The information content of a signal is proportional to the number of possible things it could have been, but wasn't.
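The coin-toss case can be made concrete with Shannon's entropy formula, H = −Σ pᵢ log₂ pᵢ (this is the standard calculation, not anything specific to collective intelligence):

```python
import math

def entropy_bits(probs) -> float:
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))  # fair coin: 1.0 bit
print(entropy_bits([0.9, 0.1]))  # biased coin: ~0.47 bits (less uncertainty to reduce)
```

A biased coin carries less than one bit because its outcome was already partly predictable, which is exactly the "number of possible things it could have been" intuition.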

This definition of information makes good sense if you’re dealing with digital signals, where the range of possible states (to the tolerance of receiving and sending machines) is bounded and quantifiable.

When it comes to human communication with words, though, these metrics do not appear to work well at all. How can we quantify the possible information content in a sentence? We could ask how many possible sentences there are, but intuitively that gives us a fairly meaningless number (since the number of sentences that would be relevant in a particular context is much smaller).

In other words: quantifying the bandwidth of (meaningful) human communication is problematic. If you know of a good treatment of this question, please let me know.

We can also put these two properties on a two-dimensional plot.


Cognitive Demand and Scale

Considering the relationship between bandwidth and feedback speed leads to a couple of additional conjectures:

  • Cognitive demand is the product of speed and bandwidth. There's not much in the upper right corner of this plot. That could be because the individual cognitive resources required are proportional to the combination of bandwidth and speed, just as the computational resources required to process an audio signal are the product of the bit depth (bandwidth) and sample rate (feedback speed).

  • Practical scale is inversely related to cognitive demand. There is a reason the largest-scale collective decision process currently common, national elections, is tucked into the lower left corner of this plot. The cognitive demand of an election, as a process, is extremely low. (This doesn’t mean that elections aren’t important, or that it’s not necessary to invest cognition in making a wise choice). In general, it seems plausible that cognitive demand tends to be a limiting factor in scaling collective intelligence processes.
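The audio analogy in the first conjecture can be sketched numerically: data rate is bit depth × sample rate, and by the same (conjectured) logic, cognitive demand is bandwidth × feedback speed. The magnitudes for the two collective processes below are made-up placeholders, chosen only to show the orders-of-magnitude gap:

```python
# Audio: data rate is the product of bit depth and sample rate.
bit_depth = 16        # bits per sample (CD quality)
sample_rate = 44_100  # samples per second
print(bit_depth * sample_rate)  # 705600 bits/second per channel

# By analogy (a conjecture, not a measurement): demand = bandwidth * feedback speed.
def cognitive_demand(bandwidth_bits: float, feedback_hz: float) -> float:
    return bandwidth_bits * feedback_hz

# Illustrative, invented magnitudes:
print(cognitive_demand(1, 1e-8))  # election: ~1 bit, feedback over years -> tiny
print(cognitive_demand(1e6, 10))  # face-to-face deliberation -> enormous
```

The product form is what empties the upper-right corner: pushing both factors up multiplies, rather than adds, the load on each individual.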


Applications and Questions

If it’s true that information flow is a useful cross-disciplinary tool for thinking about collective intelligence, how can we use it?

One way is to use it to interrogate existing results in collective intelligence research, and see if it sheds light on some of the mysterious aspects of those results.

Social sensitivity in groups

For example: why does individual social sensitivity predict collective intelligence in small groups? One answer could be that the use of social cues, particularly facial expressions, dramatically speeds up the agent-agent feedback loop within a group discussion. If I am socially insensitive, I need to wait until I'm done speaking to hear how others in the group respond to what I've said. Even then, I can only receive sequential feedback on my contribution, as other individuals make their own statements in turn. The agent-agent feedback loop is comparatively slow and clumsy in this case. If I'm attentive to facial expressions as I'm speaking, feedback is nearly instant. I can receive feedback from many individuals almost simultaneously, and adjust my input on the fly if necessary. Feedback in this case may be further accelerated because emotional responses tend to be faster than cognitive responses.

Smart swarms

A similar lens could shed light on curious results from experiments with "swarm" platforms like Unanimous AI's UNU system. Why do the same individuals seem to be able to come to more accurate collective judgements as a "swarm" than when making individual guesses that are combined after the fact? One reason could be that relatively low-bandwidth, fast agent-agent and agent-collective feedback gives the swarm an advantage over individual judgements made in isolation. The relationship between feedback speed and cognitive accuracy might be complex. One mechanism could be as simple as the relation between accuracy and similarity to the collective average, as identified by Kurvers et al. Or it might be that something along the lines of Integrated Information Theory is at play, and feedback increases the integrated information within the collective decision-making system.

Social bias in crowd intelligence

A third curious fact is that in low-bandwidth "crowd" decisions, agent-agent information flow seems to lessen decision accuracy rather than increase it. Why? It seems peculiar. In swarms (high speed, medium-low bandwidth), agent-agent information flow seems to increase accuracy; and in work groups (high bandwidth, medium feedback speed), agent-agent information flow, at least in certain forms, also seems to correlate with improved performance. Why does interaction in these contexts not seem to have the deleterious social-bias effects that it has in low-bandwidth crowd decisions?

One absolutely speculative possibility is that there is something predictive about the relationship between the complexity of agent-collective and agent-agent information flows:

  • In the case of a swarm, the bandwidth and feedback speed of agent-collective information flow is very similar to that of agent-agent information flow.
  • In many group performance scenarios, the agent-agent bandwidth is certainly higher than agent-collective bandwidth, but — depending on the task — not necessarily by an extreme degree.
  • But, in a simple crowd intelligence experiment, if the individuals are allowed to converse with one another before making their decisions, the bandwidth of agent-agent information flow (both feedforward and feedback) is much, much higher than the bandwidth of agent-collective information flow.

There is a sense in which this lens on bias makes intuitive sense: if my opinion is weakly coupled to a collective outcome, but strongly connected to others’ opinions, my contribution to the collective is likely to be skewed by social influence.
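That intuition about coupling can be sketched as a toy model: each agent reports a linear blend of a private judgement and a social signal, and as the social coupling grows, the collective mean drifts away from the truth toward the social signal. The linear blend and all the numbers here are assumptions for illustration only:

```python
def reported_opinion(private: float, social_signal: float, w: float) -> float:
    """Blend of private judgement and social signal; w = social coupling in [0, 1]."""
    return (1 - w) * private + w * social_signal

truth = 10.0
private_estimates = [8.0, 9.0, 11.0, 12.0]  # unbiased around the truth
loud_voice = 14.0                           # a skewed social signal

for w in (0.0, 0.5):
    reports = [reported_opinion(p, loud_voice, w) for p in private_estimates]
    print(w, sum(reports) / len(reports))
# w = 0.0 -> mean 10.0 (accurate); w = 0.5 -> mean 12.0 (pulled toward the loud voice)
```

With zero coupling the errors cancel, as in the classic wisdom-of-crowds setup; with strong coupling the collective inherits whatever bias the social signal carries.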

(We haven’t touched on the degree of individual influence on a collective outcome in this post at all, though it is clearly relevant to this way of looking at social bias. In principle, it is closely related to information, in that the “difference that makes a difference” for a collective outcome is the agent-collective information flow in total, through whatever interpretation is specified by the design of the process.)

Crowds, Groups and Swarms

My original motivation to think about collective intelligence in terms of information flow was to try to make sense of what I saw as three distinct patterns that emerge within the literature on collective intelligence, which I would call crowd intelligence, group intelligence, and swarm intelligence.

Categorization is a tricky business. Categories can be misleading if taken as absolute, but useful if taken lightly, even when imperfect. These three patterns are messy categories. As in a landscape, where there is no fixed point at which a plain ends and a mountain begins, we can't draw hard lines.

We can, though, identify some attributes of these three:

Group Intelligence
  • Typical example: small team of people working together to produce things and solve problems
  • Type of communication among collective: words, tone, body language
  • Communication bandwidth: high
  • Feedback speed: medium
  • Typical scale: small

Crowd Intelligence
  • Typical example: population voting on a policy or political leader
  • Type of communication among collective: none, broadcast media, social networks
  • Communication bandwidth: low/none
  • Feedback speed: low
  • Typical scale: large

Swarm Intelligence
  • Typical example: flock of birds, sports team during play
  • Type of communication among collective: real-time feedback on position
  • Communication bandwidth: low/medium
  • Feedback speed: high
  • Typical scale: ???
  • Research example: Rosenberg et al. 2012


Various fields of academia and technology have focused on particular patterns. They may all use the term “collective intelligence”, but the meaning is murky without placing the processes they describe within a sense-making framework.

Understanding the varieties of collective intelligence is important, because what leads to optimum performance is different for each type. As mentioned above, there are strong indications that for large-group, low-bandwidth (crowd) decisions, collective intelligence is enhanced if the individual judgements are independent of each other. For small, deliberative groups solving problems or producing things, interaction seems to be essential, and the quality of the interaction can predict the quality of the result.

Swarms are not as widely studied as a pattern for human decision making, but experiments (for example using Unanimous AI's tool for medical diagnosis) have pointed to the possibility that swarms could produce better judgements than simple aggregation of human opinions.

Reductionism and Other Ways to Miss the Point

This approach to collective intelligence processes is highly reductive, attempting to see simpler principles beneath the complex and messy real-world dynamics of collective activity. It is in a sense doubly reductive, because the results it attempts to help make sense of are themselves the output of highly artificial experiments, already much simplified from the real world.

Does this simplification make the exercise useless? I don’t think so. Our understanding of how the world works is necessarily faulty and incomplete, but so are our mechanisms for making collective decisions. If we can use slightly-less-faulty knowledge about how collective processes work to make collective processes slightly less faulty, we’re getting ahead.

Active and passive structures

This post’s topic is limited to what could be called active collective intelligence: the action of collective production or decision making. This leaves out passive structures that support collective intelligence, including knowledge storage and all of the institutional or organizational context in which active collective processes occur. This omission is intentional, in the interests of being able to focus on essentials of collective action. If it proves interesting, the approach of thinking about collective activity in terms of information flows could be expanded to include passive context in the future.


This is a very preliminary look at a very ambitious task: mapping the territory of collective intelligence.

Feedback, suggestions, and collaboration are welcome. If you know of work that has dealt with these questions, please let me know!

If this approach is interesting to you, or you have something to contribute, question, or comment on, please send a message, post in the comments below, or tweet about it.

Thanks for reading!


Rosenberg, L., Willcox, G., Ai, U., Halabi, S., & Lungren, M. (n.d.). Artificial Swarm Intelligence employed to Amplify Diagnostic Accuracy in Radiology. IEMCON 2018 – 9th Annual Information Technology, Electronics, and Mobile Communication Conference, 6.
Lorenz, J., Rauhut, H., Schweitzer, F., & Helbing, D. (2011). How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences, 108(22), 9020–9025.
Engel, D., & Malone, T. W. (2018). Integrated information as a metric for group interaction. PLOS ONE, 13(10), e0205335.
Rosenberg, L., Willcox, G., Ai, U., Francisco, S., Askay, D., Metcalf, L., & Harris, E. (n.d.). Amplifying the Social Intelligence of Teams Through Human Swarming. 4.
Woolley, A. W., Aggarwal, I., & Malone, T. W. (2015). Collective Intelligence and Group Performance. Current Directions in Psychological Science, 24(6), 420–424.
Galton, F. (1907). Vox Populi. Nature, 75(1949), 450–451.
