Periodic Reporting for period 1 - POLEVPOP (How politicians evaluate public opinion)
Reporting period: 2022-01-01 to 2023-06-30
Scholars of representation have long argued that representation results from the clash between ideology and public opinion (see Miller and Stokes 1963). Especially when a public opinion signal contradicts their ideological preferences, politicians are caught in a double bind. Politicians’ course of action sometimes follows and sometimes contradicts popular preferences. Existing work, mostly comparing public opinion with policy output, has found that policy responsiveness is selective: high responsiveness alternates with low responsiveness depending on, for instance, the issue at stake. These macro-level studies have not been very successful in pinpointing the exact mechanisms that generate selective responsiveness. Why do some policies comply with the democratic ideal of responsiveness while others run counter to people’s preferences? We believe part of the answer lies in how politicians evaluate public opinion. The idea underlying this project, and its key innovation, is that politicians’ appraisal of public opinion leads them to attribute more or less weight to public opinion signals. Their positive or negative evaluation of public opinion allows them to arbitrate between the internal motivation of their ideology and the external pressure of public opinion. For various reasons, reasons I aim to scrutinize in the project, some public opinion signals are downplayed while others are taken seriously. This ‘scoring’ of public opinion by politicians forms the core of Politicians’ Evaluation of Public Opinion (POLEVPOP).
In the first period (18 months), we collected a first wave of data among politicians and citizens, cleaned and merged the data, and started working on academic output. Here is a more detailed overview of the work done:
- The first two months were devoted to the design of survey and interview questions, and to pre-testing the questions. In addition, we collaborated with our international academic partners to design and program the survey instruments.
- In March 2022, we fielded an online survey of 2,500 citizens in thirteen countries (in collaboration with Dynata). These citizen data serve (1) to provide public opinion figures for the politician surveys and (2) as a benchmark against which politicians’ answers can be compared.
- From March to December 2022, we survey-interviewed 214 national members of parliament and ministers face-to-face in Flanders, Belgium. At the same time, we coordinated a similar data collection effort in Wallonia and in twelve other countries. In total, 1,185 elected representatives were survey-interviewed in thirteen countries.
- From January to April 2023, the comparative survey data were cleaned and merged into one dataset (a minimal sketch of this merge step follows this list). The answers to the open interview questions were transcribed automatically, checked manually, and then translated into English.
- In May and June 2023, we started working on producing academic output.
- In June, we organized a meeting in Antwerp with all partners to discuss paper ideas and first results. In particular, we discussed papers on how politicians rank different criteria when evaluating public opinion in general, and on how politicians evaluate real public opinion information.
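To illustrate the merge step mentioned above, the sketch below shows how per-country survey files could be harmonized and combined into one comparative dataset using Python and pandas. It is purely illustrative, not the project’s actual pipeline: the file names, folder layout, column conventions, and country codes are hypothetical.

```python
# Illustrative sketch only: harmonizing and merging per-country survey
# files into one comparative dataset. File names, paths, and country
# codes are hypothetical, not the project's actual data.
import pandas as pd

COUNTRIES = ["BE-FL", "BE-WA", "DK", "DE"]  # ...thirteen cases in total

frames = []
for country in COUNTRIES:
    df = pd.read_csv(f"raw/politicians_{country}.csv")
    df.columns = df.columns.str.strip().str.lower()  # harmonize variable names
    df["country"] = country                          # record the source case
    frames.append(df)

# Stack all cases; variables missing in some countries become NaN.
merged = pd.concat(frames, ignore_index=True, sort=False)
merged.to_csv("clean/politicians_merged.csv", index=False)
```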
POLEVPOP takes a new direction and puts forward another explanation for selective responsiveness. Even if signals do get through and are recorded accurately, politicians may not be motivated to be responsive to public opinion and to turn it into policy. This motivation depends on whether public opinion matches politicians’ ideological preferences but also, and this is my central expectation, on their appraisal of public opinion. Presenting a fresh take on the principal puzzle of selective responsiveness, this project focuses on an essential precondition for policy responsiveness to come about.
As such, POLEVPOP attempts to shed new light on one of the most crucial processes in contemporary democracies. Although the question of how elected representatives appraise public opinion may come across as basic, the truth of the matter is that we know close to nothing about it. Politicians’ perceptions of public opinion are receiving increasing attention, but the bulk of that work addresses the accuracy of those perceptions. The equally important matter of whether and how politicians appraise public opinion signals has been left untouched. We do not know which criteria politicians employ to qualify public opinion. Hence, this project ventures into terra incognita by trying to unearth how politicians rate public opinion and how these evaluations affect political action.