Community Research and Development Information Service - CORDIS

Evaluation and self-evaluation of universities in Europe: Evaluations, statistics and indicators

The relevance of descriptive statistics and indicators is growing in all countries, particularly in those which already have a longer tradition of evaluation or which are developing systematic evaluations. On the one hand, indicators enable comparisons between the performance of an academic, department or university and the performance of other academics, departments or universities at a given point in time (‘synchronic’ perspective). On the other hand, they enable comparisons of performance over a period of time (‘diachronic’ perspective). At present, statistics/indicators are produced at different levels: at an international level (OECD, Eurostat), at a national (and sometimes regional) level, and at university level. Despite the obviously growing relevance of statistics/indicators in evaluation in general, there is considerable variation in the way statistics and/or indicators are actually used at university and/or national level in the different countries: statistics and indicators are a social construction; they always answer contextualised questions, questions linked to political, economic and social stakes.

Statistics and/or indicators have traditionally been used for the purposes of providing information. The various other aims highlighted (quality assurance, reduction of costs, distribution of resources and marketing) are all closely linked with "new evaluation" procedures.

In all the countries, statistics and indicators are produced in various fields. Four fields can be identified where indicators are typically used as part of evaluation processes, namely: teaching (number of students per subject or per university, number of students who pass examinations), research (number and size of research grants attracted by an academic or an institution, publications), costs/resources (to identify inefficient use of resources) and the relationship between education and employment (to measure the success on the labour market of students coming from a certain institution or holding a qualification in a certain discipline).

There are clearly tremendous problems associated with the production and interpretation of statistics and indicators, especially in relation to international comparisons using nationally produced statistics. This does not necessarily mean that one has to object to the use of statistics/indicators in evaluation processes altogether; one simply has to take these possible difficulties into account when using them. Three kinds of problems can be distinguished: problems of reliability (explained either by ex-post corrections of former provisional statistics or by the fact that the basis for the calculation of statistics/indicators has changed over time); problems of validity (do student drop-out rates really measure the quality of the course and/or of the teaching? do citations by other academics, which are the basis of citation indices, really measure the quality of a scholar’s research?); and problems of interpretation (in Germany, the indicator "length of study per subject" is a highly debated aspect of the present discussion on university reform).

Three kinds of statistics/indicators are produced for evaluation purposes at universities: input (number of students, number of academic staff...), process (student drop-out rates...) and output (examination results, employment rate...). A current trend in Europe is a shift of emphasis from input to process and output. With the political pressure for reforms of the public sector (the desire for more efficiency and the introduction of market principles, with their focus on outputs and outcomes), universities have also come under scrutiny. Outputs and resource allocations are increasingly linked.

Statistics/indicators are of growing importance in the evaluation procedures of universities. Despite substantial criticism of their use, the public has a legitimate interest in concise and precise information about what is going on within the universities and how taxpayers’ money is spent, by whom and for what purposes, and whether this is being done in an efficient way. Statistics/indicators might help to keep universities under public and democratic control.

Reported by

Université de Paris X (Nanterre)
200 Rue de la République
92001 Nanterre