CORDIS - EU research results

Our elemental sense of collective flow

Periodic Reporting for period 1 - FLOW (Our elemental sense of collective flow)

Reporting period: 2020-09-01 to 2022-08-31

The world we live in is continuously on the move and contains a wide variety of flowing motion: the flow of traffic, the flow of a poem, the flow of ocean currents, or your workflow. With this continuous flow of information comes complexity, complexity that is hard to tease apart and fully understand. Yet we humans seem to deal with these complexities effortlessly. Just by looking at a picture of a swarm of starlings we can practically see the motion in the static image. We can somehow interpret complex biological patterns of collective flow, flow in which agents show both collective and individual behaviour following a coordinated set of rules. In this research project we investigate how the human visual system interprets and predicts these collective behaviours.
Which visual features, cues, or information do we use to interpret these collective patterns, and how do we draw conclusions from them? Even more intriguing is how we can predict the future states of such complex patterns. Over eons of evolution, we have developed mechanisms that appear to do this very efficiently. We are not perfect, and we make mistakes in these estimations, but we do it well and quickly enough for the ability to have been passed on to future generations.
Perception of collective flow has high potential as a field of research for two main reasons. First, very basic (low-level) depictions of collective motion can be generated while very complex (high-level) behaviours are perceived (e.g. agitation, discipline, leadership). This makes it an interesting use case for investigating bottom-up and top-down interactions in the visual cortex. Top-down processing in particular (e.g. can cognitive reasoning steer our sensitivities to particular types of motion?) is something vision scientists are trying to understand better, and it could have large implications for computer vision, machine learning, AI, and their applications. Second, the potential for generalisation. There are many types of collective flow, with inanimate occurrences (e.g. shaken metallic rods, nematic fluids), microscopic occurrences (e.g. macromolecules, cells, bacterial colonies), and richer manifestations in more intelligent organisms (e.g. insect swarms, flocks of birds, humans, traffic). Once you look for it, it is all around us. When we understand how humans process this information efficiently, we can mimic that behaviour with models that can in turn be applied in technology to interact with collective patterns more efficiently and robustly.
The project kicked off by picking a simulation model that is easy to understand and yet can create complex depictions of collective flow. A model published in 2002 by the biologist Iain Couzin was chosen. Because the project started in September 2020, during the Covid lockdown, the choice was made to develop an environment that can run simulations of collective flow in real time in a web browser on an average laptop or PC. This allowed for online experimentation instead of requiring participants to make observations in a lab. In the first year, the findings of the first online experiments were disseminated at vision science conferences, and a special session was organised to discuss the benefits and disadvantages of online experimentation in perception science. Furthermore, we found that observers can see a rich variety of behaviours such as agitation, cohesion, grouping, and discipline, and even associate these behaviours with different animal groups (e.g. birds, bugs, fish). It also became evident that the perception of collective flow can be quite complex due to non-linear interactions and dependencies between the parametric space and the perceptual observations. What is beautiful to see is that, perceptually, this space is very simple: the dominant perceptual dimension runs from uniform, grouped behaviour at one end to exclusion and chaotic behaviour at the other.
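The Couzin (2002) model mentioned above steers each agent using concentric behavioural zones: repulsion from very close neighbours, alignment with nearby headings, and attraction to more distant neighbours. A minimal two-dimensional sketch of one such zonal update is given below; the zone radii and speed are illustrative placeholders, and this is not the project's actual browser implementation.

```python
import numpy as np

def couzin_step(pos, vel, dt=0.1, speed=1.0,
                r_repulse=1.0, r_orient=5.0, r_attract=10.0):
    """One update of a minimal 2-D Couzin-style zonal model.

    pos: (N, 2) array of positions; vel: (N, 2) array of unit headings.
    Zone radii are illustrative, not the project's parameters.
    """
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        diff = pos - pos[i]                      # vectors to neighbours
        dist = np.linalg.norm(diff, axis=1)
        dist[i] = np.inf                         # ignore self
        desired = np.zeros(2)
        repulse = dist < r_repulse
        if repulse.any():
            # Zone of repulsion dominates: steer away from close neighbours.
            desired = -(diff[repulse] / dist[repulse, None]).sum(axis=0)
        else:
            orient = (dist >= r_repulse) & (dist < r_orient)
            attract = (dist >= r_orient) & (dist < r_attract)
            if orient.any():
                desired += vel[orient].sum(axis=0)   # align with neighbours
            if attract.any():
                desired += (diff[attract] / dist[attract, None]).sum(axis=0)
        norm = np.linalg.norm(desired)
        if norm > 0:
            new_vel[i] = desired / norm          # keep headings unit length
    return pos + speed * new_vel * dt, new_vel
```

Varying the zone radii is what moves the simulation through qualitatively different collective regimes (swarming, milling, aligned motion), which is why a model this simple can generate the rich perceptual space the project explores.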
The second year of the project investigated different experimental tasks and comparisons between them (i.e. ratings of properties and similarity judgements). Metrics were explored that capture features predictive of human observations, such as mean position distance (low when grouped, high when scattered) and mean orientation distance (low when directions are uniform, high when they vary), which together explain more than 90% of the main perceptual dimension. At this point the project gained traction in various scientific fields, and we organised a special session on collective behaviour at the Human Vision and Electronic Imaging conference, bringing people from psychology, vision science, cognitive science, and computer science together to discuss the topic. Colloquia were given at a lab that works on neural information processing and at a lab that works on sustainable modes of transport such as walking and cycling. A Master's graduation project was defined in which a prototype was designed and tested at Naturalis (a natural history museum) to teach children, through an interactive installation, the simple rules behind complex depictions of collective behaviour. Newer results show that observers can predict the future state of collective flow quite well: accuracy ranges from 72% down to 51% when predicting states up to 5 seconds into the future (chance performance is 25%). We also make systematic errors, which are especially interesting to capture with our models.
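The two metrics mentioned above can be computed directly from simulated agent states. The sketch below shows one plausible formulation as mean pairwise distances; the project's exact definitions (e.g. normalisation, angle representation) may differ.

```python
import numpy as np

def mean_position_distance(pos):
    """Mean pairwise Euclidean distance between agents.

    Low when the flock is grouped, high when it is scattered.
    pos: (N, 2) array of positions.
    """
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    n = len(pos)
    return dist[np.triu_indices(n, k=1)].mean()  # each pair counted once

def mean_orientation_distance(headings):
    """Mean pairwise angular difference between agent headings (radians).

    Low when directions are uniform, high when they vary.
    headings: (N,) array of heading angles.
    """
    d = np.abs(headings[:, None] - headings[None, :])
    d = np.minimum(d, 2 * np.pi - d)             # wrap differences to [0, pi]
    n = len(headings)
    return d[np.triu_indices(n, k=1)].mean()
```

A perfectly aligned group gives an orientation distance of zero, and a tight cluster gives a small position distance, so the two metrics together span the uniform-and-grouped versus chaotic-and-scattered axis described above.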
The methods, techniques, and pipelines developed for this project are being used in three other projects, which look at the perceived motion differences of objects caused by optical material differences (e.g. do transparent objects appear to move faster than matte objects?), the detection and perception of deforming objects (e.g. do transparent objects appear more non-rigid than metal objects?), and the perception of fictional materials created by multiple state-of-the-art text-to-image AI generators.
The project generated novel insights that shed new light on scientific methods currently applied in vision-related sciences. Nature provides many examples of how to deal with complexity efficiently, and these examples are often easier to interpret as well.
A novel mixture of techniques was applied to face the challenges that come with complexity. The interactive online experiments with real-time adjustment of dynamic properties were previously impossible due to technical limitations, and they open up a whole new assortment of experimental possibilities. The framework developed for this project can serve as inspiration for dealing with perceptual complexity and the scalability issues that this complexity introduces.
The simplicity and interpretability of the models, even while dealing with these complex spaces, is another aspect that makes this project so attractive. A multimillion-parameter deep neural network would likely predict human performance in these tasks well, but what would we learn from that? By combining a more traditional, interpretable approach with the latest high-dimensional modelling techniques from various fields, we were able to draw robust and interpretable conclusions.
Collective behaviour is everywhere, and a better understanding of its occurrences can contribute to addressing substantial challenges in, for example, logistics, robotics, and traffic automation. This is further emphasised by the wide range of scientific disciplines that have shown interest in this work, and this is just the beginning.
Simulation Overview
