
How politicians evaluate public opinion

Periodic Reporting for period 2 - POLEVPOP (How politicians evaluate public opinion)

Reporting period: 2023-07-01 to 2024-12-31

Politicians are continuously confronted with public opinion signals. Reading a newspaper, chatting with other customers waiting at the bakery, reading a report written by an interest group, doing local constituency service, attending a presentation by a pollster at the party headquarters, talking to fellow co-partisans at the coffee machine, interacting with journalists, and checking their Twitter accounts are all occasions for politicians to learn about what the people want. And politicians deeply care about these signals. Many hours of interviewing politicians in previous projects showed that they are obsessed with public opinion, not least because their political survival depends on public approval. However, public opinion is not the only thing that matters; ideology matters too. Politicians and their parties have a plan to change the world.
Scholars of representation have argued that representation results from the clash between ideology and public opinion (see Miller and Stokes 1963). Especially when a public opinion signal contradicts their ideological preference, politicians are caught in a double bind. Politicians’ course of action sometimes follows and sometimes contradicts popular preferences. Existing work, mostly comparing public opinion with policy output, found that policy responsiveness is selective: high responsiveness alternates with low responsiveness depending on, for instance, the issue. These macro studies have not been very successful at pinpointing the exact mechanisms that generate selective responsiveness. Why do some policies comply with the democratic ideal of responsiveness while other policies run counter to people’s preferences? We believe part of the answer lies in how politicians evaluate public opinion. The idea underlying this project, and its key innovation, is that politicians’ appraisal of public opinion makes them attribute more or less weight to public opinion signals. Their positive or negative evaluation of public opinion allows them to arbitrate between the internal motivation of their ideology and the external pressure of public opinion. For various reasons (reasons I aim to scrutinize in the project), some public opinion signals are downplayed while others are taken seriously. This ‘scoring’ of public opinion by politicians forms the core of Politicians’ Evaluation of Public Opinion (POLEVPOP).
The project’s objectives are (1) to lay bare the implicit scoreboard elected representatives use to evaluate public opinion, (2) to examine how the content and sender of a public opinion signal affect its evaluation by elected representatives, and (3) to investigate to what extent and how the resulting evaluation of public opinion affects their political actions. POLEVPOP tackles these questions with a comparative, multi-method design covering thirteen countries (Australia, Belgium, Canada, Czech Republic, Israel, Portugal, Sweden, Switzerland, Germany, Norway, Denmark, Luxembourg and The Netherlands); note that the initial project application mentioned only eight countries, so the project has engaged in even more ambitious data gathering than initially announced. The project essentially consists of two waves of interviews, surveys, and experiments with national-level politicians, complemented by parallel citizen surveys and experiments.

Here’s a detailed overview of the work done so far in the project:

The first two months (January-February 2022) were devoted to the design of survey and interview questions, and to pre-testing the questions. In addition, we collaborated with our international academic partners to design and program the survey instruments.

In March 2022, we fielded an online survey with approximately 2,500 citizens in each of the thirteen participating countries (in collaboration with the survey company Dynata). These citizen data were used (1) to obtain public opinion figures for the politician surveys and (2) to serve as a benchmark against which to compare politicians’ answers.

From March to December 2022, we survey-interviewed 214 national members of parliament and ministers face-to-face in Flanders, Belgium. At the same time, we coordinated similar data collection efforts in Wallonia and in twelve other countries. In total, 1,185 elected representatives were survey-interviewed across thirteen countries.

From January to April 2023, the comparative survey data were cleaned and merged into a single dataset. Answers to the open interview questions were transcribed automatically, checked manually, and then translated into English.

In May 2023, we analyzed the data and wrote short reports on the most important findings, which we shared with all politicians who participated in the project.

In June 2023, we organized a meeting in Antwerp with all partners to discuss paper ideas and first results. In particular, we discussed papers on how politicians rank different criteria for evaluating public opinion in general and on how they evaluate real public opinion information. About 20 paper proposals were discussed during the meeting.

From July 2023 onwards, we started working in smaller groups of authors to write academic papers with the collected data.

In the summer of 2023, preliminary findings of the project were also shared with the broader research community at international conferences such as the ECPR General Conference, the ICA conference, and the IPSA conference.

The period from September 2023 to March 2024 was mostly devoted to producing academic output (analyzing data, writing papers, and submitting the first papers to academic journals for review).

In March 2024, another internal meeting was organized in Brussels. Researchers from all country teams joined the meeting, and around 20 more or less finished papers were presented.

Currently, 44 papers that draw on data collected in the first (2022) wave of surveys and interviews with politicians and citizens are being written; in addition, 5 have been submitted to journals for review and 2 have been accepted for publication.

From May 2024 onwards, we also started working in smaller groups on designing the survey instruments (goals, questions, topics, practicalities) for the next interview wave in 2025. The main goal of this second wave is to advance our understanding of politicians’ evaluation of public opinion signals. Among other things, we are developing:

A survey experiment varying different types of public opinion information to see how politicians react to it.

A batch of survey questions tapping into politicians’ perceptions of public opinion change.

An open-ended survey module on how politicians evaluate the arguments citizens have for their opinions.

In July 2024, we also started contacting survey companies in preparation for the 2025 citizen survey (to be fielded in parallel with the politician survey).

In the summer of 2024, we also presented output from the 2022 data collection at many different conferences, for instance in Bergen (NoPSA), Barcelona (CAP), Chile (ISPP) and Cologne (EPSA).
POLEVPOP presents an innovative take on representation and deals with a normatively relevant matter. Scientifically, the project addresses an important void in our knowledge about representation. Previous work established that public opinion responsiveness varies, but made limited progress in explaining why that is the case. Miller and Stokes (1963) showed more than 50 years ago that the voting of members of the U.S. Congress is affected by their perception of what their constituencies prefer and by their own ideology; but the question of why public opinion plays a larger role for some issues than for others remained unanswered. One possible mechanism is politicians’ selective exposure to public opinion signals: with politicians bombarded by boundless signals from society, some signals get through while others are filtered out and never reach them (Walgrave and Dejaeghere 2017). An alternative mechanism is that signals, although getting through, are misinterpreted, leading to inaccurate public opinion perceptions. But previous work has shown that these mechanisms only partially account for selective responsiveness.
POLEVPOP takes a new direction and puts forward another explanation for selective responsiveness. Even if signals do get through and are recorded accurately, politicians may not be motivated to be responsive to public opinion and to turn it into policy. This motivation depends on whether public opinion matches politicians’ ideological preferences but also, and this is my central expectation, on their appraisal of public opinion. Presenting a fresh take on the principal puzzle of selective responsiveness, this project focuses on an essential precondition for policy responsiveness to come about.
As such, POLEVPOP attempts to shed new light on one of the most crucial processes in contemporary democracies. Although the question of how elected representatives appraise public opinion may come across as basic, the truth of the matter is that we know close to nothing about it. Politicians’ perceptions of public opinion receive increasing attention, but the bulk of that work addresses the accuracy of those perceptions. The equally important matter of whether and how public opinion signals are appraised by politicians has been left untouched. We do not know the criteria politicians employ to qualify public opinion. Hence, this project ventures into terra incognita by trying to unearth how politicians rate public opinion and how these evaluations affect political action.
Figure: Model of Representation Including the Evaluation of Public Opinion