
How politicians evaluate public opinion

Periodic Reporting for period 1 - POLEVPOP (How politicians evaluate public opinion)

Reporting period: 2022-01-01 to 2023-06-30

Politicians are continuously confronted with public opinion signals. Reading a newspaper, chatting with other customers waiting at the bakery, reading a report written by an interest group, doing local constituency service, attending a presentation by a pollster at party headquarters, talking to fellow partisans at the coffee machine, interacting with journalists, checking their Twitter accounts… these are all occasions for politicians to learn what the people want. And politicians care deeply about these signals. Many hours of interviewing politicians in previous projects showed that politicians are obsessed with public opinion, not least because their political survival depends on public approval. However, public opinion is not the only thing that matters; ideology matters too. Politicians and their parties have a plan to change the world.
Scholars of representation have argued that representation results from the clash between ideology and public opinion (see Miller and Stokes 1963). Especially when a public opinion signal contradicts their ideological preferences, politicians are caught in a double bind. Their course of action sometimes follows and sometimes contradicts popular preferences. Existing work, mostly comparing public opinion with policy output, has found that policy responsiveness is selective, with high and low responsiveness alternating depending on, for instance, the issue. These macro studies are not very successful in pinpointing the exact mechanisms that generate selective responsiveness. Why do some policies comply with the democratic ideal of responsiveness while others run counter to people's preferences? We believe part of the answer lies in how politicians evaluate public opinion. The idea underlying this project, and its key innovation, is that politicians' appraisal of public opinion leads them to attribute more or less weight to public opinion signals. Their positive or negative evaluation of public opinion allows them to arbitrate between the internal motivation of their ideology and the external pressure of public opinion. For various reasons, reasons I aim to scrutinize in the project, some public opinion signals are downplayed while others are taken seriously. This 'scoring' of public opinion by politicians forms the core of Politicians' Evaluation of Public Opinion (POLEVPOP).
The project’s objectives are (1) to lay bare the implicit scoreboard elected representatives use to evaluate public opinion, (2) to examine how the content and sender of a public opinion signal affect its evaluation by elected representatives, and (3) to investigate to what extent and how the resulting evaluation of public opinion affects their political actions. POLEVPOP tackles these questions with a comparative, multi-method design covering thirteen countries (Australia, Belgium, Canada, the Czech Republic, Israel, Portugal, Sweden, Switzerland, Germany, Norway, Denmark, Luxembourg and the Netherlands), relying on two waves of interviews, surveys, and experiments with national-level politicians and on parallel citizen surveys and experiments.

In the first period (18 months), we collected a first wave of data among politicians and citizens, cleaned and merged data, and started working on academic output. Here’s a more detailed overview of the work done:
- The first two months were devoted to the design of survey and interview questions, and to pre-testing the questions. In addition, we collaborated with our international academic partners to design and program the survey instruments.
- In March 2022, we fielded an online survey with 2,500 citizens in thirteen countries (in collaboration with Dynata). These citizen data are used (1) to obtain public opinion figures for the politician surveys and (2) to serve as a benchmark against which politicians’ answers can be compared.
- From March to December 2022, we survey-interviewed 214 national members of parliament and ministers face-to-face in Flanders, Belgium. At the same time, we coordinated a similar data collection effort in Wallonia and twelve other countries. In total, 1,185 elected representatives were survey-interviewed across thirteen countries.
- From January to April 2023, the comparative survey data were cleaned and merged into a single dataset. The answers to the open-ended interview questions were transcribed automatically, checked manually, and then translated into English.
- In May and June 2023, we started producing academic output.
- In June, we organized a meeting in Antwerp with all partners to discuss paper ideas and first results. In particular, we discussed papers on how politicians rank different criteria for evaluating public opinion in general and on how politicians evaluate real public opinion information.
POLEVPOP presents an innovative take on representation and deals with a normatively relevant matter. Scientifically, the project addresses an important void in our knowledge about representation. Previous work established that public opinion responsiveness varies, but made limited progress in explaining why. Miller and Stokes (1963) showed more than 50 years ago that the voting of members of the U.S. Congress is affected by their perception of what their constituencies prefer and by their own ideology; but the question of why public opinion plays a larger role for some issues than for others remained unanswered. One possible mechanism is politicians’ selective exposure to public opinion signals: bombarded by boundless signals from society, politicians let some signals get through while others are filtered out and never reach them (Walgrave and Dejaeghere 2017). An alternative mechanism is that signals, although getting through, are misinterpreted, leading to inaccurate perceptions of public opinion. But previous work has shown that these mechanisms only partially account for selective responsiveness.
POLEVPOP takes a new direction and puts forward another explanation for selective responsiveness. Even if signals do get through and are recorded accurately, politicians may not be motivated to be responsive to public opinion and to turn it into policy. This motivation depends on whether public opinion matches politicians’ ideological preferences but also, and this is my central expectation, on their appraisal of public opinion. By presenting a fresh take on the principal puzzle of selective responsiveness, this project focuses on an essential precondition for policy responsiveness to come about.
As such, POLEVPOP attempts to shed new light on one of the most crucial processes in contemporary democracies. Although the question of how elected representatives appraise public opinion may come across as basic, the truth of the matter is that we know close to nothing about it. Politicians’ perceptions of public opinion are receiving increasing attention, but the bulk of that work addresses the accuracy of those perceptions. The equally important matter of whether and how public opinion signals are appraised by politicians is left untouched. We do not know the criteria politicians employ to qualify public opinion. Hence, this project ventures into terra incognita by trying to unearth how politicians rate public opinion, and how these evaluations impact political action.
Figure: Model of representation including the evaluation of public opinion