
Evaluation and Self-Evaluation of Universities in Europe

Exploitable results

By resources, we mean not only the personnel (academics, engineers, technicians, administrative staff and workers), but also the financial resources. Finally, the organisation is itself a resource: the university government and the structures (faculties, departments, administrative and technical services). Compared with the evaluation of teaching and research, the evaluation of resources is still weakly structured and relatively new, even though the evaluation of staff as individuals is traditional. The evaluation of resources centres on efficiency: are resources used in the best way to reach good results?

Evaluation of academic staff: Several tendencies can be observed in Europe. In recent years, the number of teachers has grown with the growth in student numbers. Academic staff are traditionally organised by discipline and by rank; their missions are teaching, research and responsibilities in these two fields. Their evaluation as individuals is traditional: they are evaluated when they are recruited and during their career, traditionally by peers of the same discipline. Another widely observed tendency is the lack of professional and continuous training for the teaching function. Beyond these common tendencies, there are differences and changes: staffing rates (the number of teachers per student) differ between countries (they are higher in Northern Europe) and between disciplines (they are higher in the sciences and health disciplines); teachers are civil servants in most countries, but the number of teachers on fixed-term contracts is increasing. Changes in the field of evaluation are also obvious. We observe a strengthening of each autonomous university's recruitment powers, alongside the traditional power of the disciplinary professional bodies. The evaluation of teachers' contribution to the quality and performance of teaching and research is developing. This tendency is paradoxical: evaluation of individual performance and evaluation of collective performance sometimes coexist. A logical, but still rare, consequence of performance measurement is the development of individual and/or collective contracts which fix the objectives to be reached. Recruitment always involves at least two decision-making authorities (the faculty or department, and/or the university as such, and/or a national authority), and it always pays attention to research. Conversely, the composition and size of recruitment committees, the periodicity of recruitment, and the modalities and criteria used to evaluate applicants (apart from research criteria) are very diverse. The role played by the central authorities of the university seems to be increasing: they decide on employment policies (which are directly linked to financial resources) and so influence the number of posts; they sometimes influence elements of pay (the amount of premiums and the conditions for obtaining them); finally, these authorities hold a power of sanction. The teachers' collective contribution to faculty teaching and to departmental research is more and more often evaluated.
Sometimes the evaluation of teaching includes an evaluation of the pedagogical performance of the teachers; it can be preceded by a phase of self-evaluation of working conditions, of the time devoted to preparing lectures and of relations with students; students can be associated with the process. External evaluators, when they evaluate teaching or teaching projects, can evaluate teachers' abilities and skills; the same holds for the evaluation of research centres. A possible consequence is greater competitiveness between colleagues and greater mutual control; the traditional freedom of teaching and research could thus be limited in the future.

Evaluation of non-academic staff: There are two kinds of non-academic staff evaluation; managers, engineers, technicians, administrative staff and workers are concerned. The first deals with people as individuals and with the main steps of their career (recruitment, training, stabilisation in the job, promotion, mobility); this evaluation organises the flow of people into the available jobs in administrative structures, according to rules and to individual requests. The second, more innovative, deals with the collective contribution of these staff to the efficient and effective functioning of the university. While the first type of evaluation is present in all countries, the second is only beginning. In a general context of increasing workloads for universities, two configurations of countries can be identified on the basis of two parameters: the number of non-academic staff and the university's financial situation. The first configuration covers the countries of Northern Europe (Finland, United Kingdom, Germany, Norway): non-academic staff are numerous, but the financial pressure on universities is strong. The second covers the countries of Southern Europe (Spain, France, Italy, Portugal): the non-academic staff population is relatively smaller, but the financial pressure on universities is weaker. Thus, the countries with the best staffing rate (number of non-academic staff per student) are the ones which, because of financial pressure, implement evaluations of the non-academic staff's contribution. In the first configuration of countries, the evaluation objectives are typically: to measure the effectiveness of the administration, to deploy people in the best way to achieve the university's missions, to create performance and quality indicators for the services delivered, to reduce non-academic staff numbers, to simplify and rationalise administrative structures, to find the best balance between centralisation and decentralisation of the administration, and to clarify the lines of hierarchy. In the second configuration, the evaluation objectives are typically: to gain a better knowledge of the non-academic population, to check the implementation of administrative rules, to create individual payment systems, to set up equal and standardised workloads, to make staff more professional and responsible, and to create new functions and new jobs. The process of evaluating non-academic staff is relatively slow: it takes several years, goes through successive steps, and involves wide staff participation.

The evaluation deals with many objects linked to questions of effectiveness and efficiency: job content, tasks, task allocation, working relations and reporting lines, and payment systems (for the job and for individual performance). The most frequent, or clearest, effects of evaluation are the development of continuous staff training, the clarification of responsibilities, the development of computerised information systems, and the creation of internal units for evaluating cost and/or performance indicators. Among the factors pushing towards evaluation of the non-academic staff's contribution to university functioning, we observe: the stabilisation of student numbers (the university has to be attractive), the "globalisation" of budgets into lump sums and potential financial difficulties, the diversification of structures and the strengthening of the central administration, and an administrative leadership that gives priority to the quality of services delivered to users. Among the factors slowing evaluation down, we observe: strict external regulations (recruitment and mobility rules, payment systems, promotion and careers directly linked to seniority, working time, job security...), and uncertainties about administrative governance (lengthening of hierarchical lines, lack of unity in the hierarchical lines, persistence of traditional trade-union control).
In the present period and in most countries, the teaching and research missions of universities are becoming more precise. From now on, universities have to prepare students for employment, to participate in the production and updating of the skills required by changes in production systems (for future employees as well as current ones), and to contribute to economic development, in particular to the dynamics of the territory in which they are located. In spite of the importance of these new assignments, their evaluation has developed more recently than the evaluation of teaching and research. This evaluation field is not very regulated and not very institutionalised. Being optional, the evaluation is largely informal and occasional. Where it is implemented, it is characterised by a great variety of actors, contents and objectives, and by the diversity of evaluation instruments. Several evaluation fields are possible: the creation of profession-oriented diplomas and/or changes in their contents, continuous training, the insertion of students into the labour market, and the university-territory relationship. The development of an evaluation of students' professional insertion depends on the labour market situation, on the age of the university, and on the specificity of the diplomas offered. Structures, provisions, methodologies and measures are diverse: one-off surveys, an observatory inside the university, a regional observatory working together with a national observatory. Evaluation of the university-territory relationship is marked, even more, by the multiplicity of objects and objectives: increasing the skills of the local young population, keeping young people in the territory (avoiding their departure to the large university towns), increasing local markets for products and services through student consumption, giving life to the territory through cultural and social activities organised by the university, creating jobs within the university, launching university-firm partnerships for research and technology transfer, and partnerships between universities of the same region. The attention paid to the new university missions is explained by several factors: graduates' difficulties in finding jobs (students therefore question the relative value of different university degrees), the greater attention paid by firms to university resources and continuing-education opportunities, a partial decentralisation of education matters to local authorities (especially the regions), and increasing university funding by local authorities. However, the weak development of the evaluation of the education-employment-territory relationship can be explained by many obstacles, related both to the actors and to the difficulty of framing the questions and the analysis. Graduates keep few or no contacts with their university and thus give little feedback on the teaching they have received. Some teachers are reluctant to see diplomas and their contents evaluated by professional circles ("employers only know their specialised and short-term interests"); they favour an evaluation which also measures the social relevance of degrees and not only their economic performance. Employers are interested in partnership with universities, but they are reluctant to promise work placements to students, to recruit new graduates, or to fund research over the long term.

The public authorities, State and Regions, also bear responsibility: the official evaluation bodies are little involved in this field, and few specific financial resources are allocated. The weak development of this evaluation field is also explained by the difficulty of building a clear problematic. Methodologies for measuring students' professional insertion are now well mastered, but a central question remains: are the difficulties of graduates in a given discipline in finding a job explained by the poor quality of teaching and/or by a deterioration of the labour market due to other factors? The evaluation of the university-territory relationship raises the question of the diversity of territories: which is the pertinent space for the evaluation? The local space? The regional, national or European one? Moreover, the results of the interrelation between the education system and the social, economic (labour market) and cultural environment are particularly difficult to grasp and interpret, because the parameters to be taken into account are numerous.
Resource evaluation, as noted above, covers the personnel, the financial resources and the organisation itself, and centres on efficiency.

Evaluation of the organisation: The evaluation of the organisation concerns a whole set of resources: the university government and the decision-making process, the teaching and research structures, and the financial resources. The evaluation asks: is the university organisation efficient and effective in achieving the teaching and research missions assigned to it by law? Two tendencies emerge from the case studies. Evaluations of the organisation are rather specialised; they are not directly linked to teaching and research evaluations and are weakly coordinated. The evaluation of the organisation is developing, but it is not institutionalised in all countries and all universities; the evaluation of the university government and of the decision-making process meets many obstacles.

Evaluation of the university government: It is paradoxical that the external evaluation of universities, which is developing in all European countries, pays little attention to university governments, despite the fact that they are being reinforced. How does the university government analyse the needs of society, of users, of partners? How does it decide on the objectives to achieve? Which priorities does it set? Which resources does it allocate to those priorities? How, and through which organisation, does it implement them? How does it evaluate the results? Who are the governing people? If we rule out the hypothesis that governments have little influence on university results, several hypotheses can explain the weak development of government evaluation. The governing staff, when they decide on an external evaluation, can exclude this topic from it. The external evaluators do not evaluate the government because it is the government that has to implement the imposed or recommended changes. Evaluating governments could destabilise some of them, which is not desirable because, at present, it is difficult to find teachers willing to take on responsibilities. Finally, university governments are elected for a limited period: the evaluation would take the place of the election. In fact, the university government, and particularly the rector, plays a key role in the development of external evaluations and in the dynamics of internal change. At the same time, external evaluation strengthens the central government of universities. Moreover, evaluation is one of the factors contributing to the consolidation of a specific form of government, the presidential one, or more precisely the presidential-managerial one (a strong rector and a strong administrative hierarchy working according to entrepreneurial criteria).
However, the perpetuation of such a government depends on the alliances or compromises reached with the two traditional forms of university government: the collegial one (the influence of the academic professional bodies cannot be abolished) and the bureaucratic one (an administrative hierarchy controlling the implementation of rules set by the public authority).

Evaluation of financial means: Public resources play a predominant role in university funding in all countries. However, changes are obvious and help in understanding the orientations of resource evaluations. All the changes appear to be consequences of increased financial pressure: in a context of growth in Higher Education, public authorities want to keep under control and rationalise the financial resources allocated to universities. The financial pressure is higher in the countries of Northern Europe, i.e. the countries in which expenditure per student is above the OECD average. The first evolution is the "globalisation" of allocated resources into a lump sum. Universities can distribute a lump-sum budget according to their own strategy; this is a way of reaffirming the autonomy and responsibility of each institution. In fact, globalisation is not total: the lump sum rarely includes real-estate investments, and in France it does not include civil servants' wages. Another tendency is that public authorities allocate resources not only according to activity criteria (the number of students, for instance) but, more and more, on the basis of contracts. At first, these contracts fund objectives negotiated with and accepted by the public authorities; subsequently, though still in a minority proportion, contracts allocate funding according to the results achieved (funding based on the performance of the previous period). We also observe a tendency to allocate funding over several years for investments or for contract objectives. Finally, we observe a diversification of financial sources: increased funding by regional public authorities, by firms, and by students (tuition fees). To evaluate the financial means is to evaluate the resources and expenditure of each institution and of its components. Resources and expenditure are presented in the form of a budget (resources and expenditure for the following period) and/or a financial statement (resources and expenditure of a previous period). The budgeting process (preparation, discussion and vote by the university council) is an internal evaluation of the financial means. At the same time, budgets and financial statements are the main tools for the evaluations conducted by external evaluation bodies. A clear and transparent presentation of the financial statement is the necessary basis for the principle of accountability. This principle is still rarely implemented: it must be emphasised that the budgets and financial statements presented to university councils are structured very differently, not only from one country to another but also from one university to another within the same country. Most of the time, comparisons are impossible. A second tendency, in most countries, is that the university's accounts are governed by public accounting rules: this limits the financial autonomy of universities; as a consequence, universities sometimes set up more flexible, private structures (foundations or associations), particularly for research or continuous training activities.
One form of evaluation of the financial means is the conformity control: resources and expenditure are examined by the external bodies in charge of the economic and financial control of public institutions. In some countries, these bodies tend also to audit the pertinence of expenditure. A third tendency is the occasional recourse to private consulting firms, which audit particular aspects of the financial situation. The last tendency is the development of internal evaluation. It develops in relation to the internal process of resource allocation: new allocation mechanisms and new criteria for distributing resources between the university's components allow allocated resources to change according to the strategic choices decided by the university. Internal evaluation develops particularly when there is financial pressure and limited or shrinking resources; it is also an effect of external evaluations. It requires, at least initially, a strengthening and centralisation of budgetary and financial management: we observe, for instance, the creation of central funds which the rector can allocate according to the university's strategy. The evaluation can then lead to decentralised budgetary policies (each component being responsible for its own resources and expenditure) and to policies of internal contracts (funds allocated to a university component according to its objectives and results), as the sketch below illustrates.
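The report describes these allocation mechanisms only in general terms. Purely as a minimal illustrative sketch, assuming a hypothetical university that splits its lump sum into an activity-based share (proportional to student numbers) and a minority performance-based share (proportional to results achieved in the previous period), the combination could look like this; all component names, weights and figures are invented for illustration:

```python
# Illustrative sketch only: the report describes activity-based and
# performance-based allocation in general terms; the weights, component
# names and figures below are hypothetical assumptions, not data from it.

def allocate_lump_sum(budget, components, performance_share=0.2):
    """Split a lump-sum budget between university components.

    The activity-based part is distributed in proportion to student
    numbers; the (minority) performance-based part in proportion to a
    performance score, e.g. results achieved in the previous period.
    """
    activity_pot = budget * (1 - performance_share)
    performance_pot = budget * performance_share
    total_students = sum(c["students"] for c in components.values())
    total_score = sum(c["score"] for c in components.values())
    return {
        name: activity_pot * c["students"] / total_students
              + performance_pot * c["score"] / total_score
        for name, c in components.items()
    }

# Hypothetical figures for three faculties of a fictitious university.
components = {
    "sciences":   {"students": 4000, "score": 8},
    "humanities": {"students": 6000, "score": 5},
    "medicine":   {"students": 2000, "score": 7},
}
for name, amount in allocate_lump_sum(10_000_000, components).items():
    print(f"{name}: {amount:,.0f}")
```

Raising the performance-based share would correspond to the reported shift towards funding based on results; keeping it a minority proportion mirrors the situation the report describes.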
Universities have a mission of teaching and learning. The diversification of this mission is a tendency of the present period: universities no longer only have to disseminate high-level knowledge to students enrolled in initial education; more and more, they also have to prepare students for employment and to organise continuous training for employees. Knowledge is structured into diplomas, which are predominantly disciplinary or predominantly profession-oriented (in the latter case, they often combine several disciplines). The dissemination of knowledge and expertise is organised as a progression (from undergraduate to postgraduate degrees). The tendencies observed in Europe are: diversification of degrees, growing importance given to profession-oriented degrees and to high-level degrees (masters and doctorates), and the will to increase the number of graduates in order to achieve better economic and social development. The diversification and lengthening of studies entail a diversification of the student population by age, status and modes of attendance (part-time or full-time, distance learning, sandwich courses...). In most countries, public authorities control degrees, either by defining their contents (national curricula), by distributing them across the territory, or of course by allocating the resources to organise them. This traditional control (a priori control) is a first form of evaluation; for profession-oriented degrees, control is also exercised by professional bodies (accreditation procedures). At the same time, universities have autonomy in pedagogical matters, both by tradition and by law. The evaluation of teaching and learning cannot be understood without this double reference (external control and pedagogical autonomy). Since the nineties, the evaluation of teaching and learning has been developing: it deals with diverse aspects and takes varied forms. External evaluation, carried out by national bodies or by cooperative bodies set up by groups of universities, has two main modalities. The first compares the teaching of a given discipline across all universities or a set of universities; the second compares all the diplomas within a single university. Each form has an advantage and a disadvantage. The first allows a comparative overview of a discipline at national level, so each university can identify its strong and weak points; however, each university is then permanently engaged in evaluating one or another of its degrees. The second form concentrates all the teaching evaluations in time, makes internal mobilisation easier, and links the teaching evaluations better to the functioning of the organisation; conversely, it makes it difficult to compare a specific diploma across universities. External evaluation is successful when it leads to the establishment of permanent internal evaluation processes and when changes are decided in teaching contents and learning methods. In that case, pedagogical autonomy is more or less exercised through innovative practices (such as student participation in evaluation). Nevertheless, the internal evaluation of teaching and learning is under pressure: it is an effectiveness evaluation, seeking to improve teaching quality, pedagogical methods, student learning, examination pass rates, and the insertion of graduates into the labour market. At the same time, internal evaluation has to take into account the available, limited resources, and to rationalise and save them: it is therefore also an efficiency evaluation. So it is not surprising that some teachers are reluctant about evaluation, and that evaluation sometimes generates frustration (one example: small-group teaching is very effective, but, owing to the lack of resources, it is systematically developed only in some countries).
The relevance of descriptive statistics and indicators is growing in all countries, particularly in those which already have a longer tradition of evaluation or which are developing systematic evaluations. On the one hand, indicators enable the performance of an academic and/or department and/or university to be compared with the performance of other academics, departments or universities at a given point in time (the 'synchronic' perspective). On the other hand, they enable performance to be compared over a period of time (the 'diachronic' perspective). At present, statistics and indicators are produced at different levels: international (OECD, Eurostat), national (and sometimes regional), and university level. Despite the obviously growing relevance of statistics and indicators in evaluation, there is considerable variation in the way they are actually used at university and/or national level in the different countries: statistics and indicators are a social construction; they always answer contextualised questions, linked to political, economic and social stakes. Statistics and/or indicators have traditionally been used for the purpose of providing information. The various other aims highlighted (quality assurance, cost reduction, distribution of resources and marketing) are all closely linked with "new evaluation" procedures. In all countries, statistics and indicators are produced in various fields. Four fields can be identified where indicators are typically used as part of evaluation processes: teaching (number of students per subject, per university...; number of students who pass their examinations), research (number and size of research grants attracted by an academic or an institution, publications), costs/resources (to identify inefficient use of resources), and the relationship between education and employment (to measure the success on the labour market of students coming from a certain institution or holding a qualification in a certain discipline). There are clearly tremendous problems associated with the production and interpretation of statistics and indicators, especially in international comparisons using nationally produced statistics. This does not necessarily mean that the use of statistics and indicators in evaluation processes must be rejected altogether; one simply has to take these difficulties into account when using them. There are problems of reliability (explained either by ex-post corrections of earlier provisional statistics or by the fact that the basis for calculating a statistic or indicator has changed over time); problems of validity (do student drop-out rates really measure the quality of a course and/or of the teaching? do citations by other academics, which are the basis of citation indices, really measure the quality of a scholar's research?); and problems of interpretation (in Germany, the indicator "length of study per subject" is a highly debated aspect of the current discussion on university reform). Three kinds of statistics and indicators are produced for evaluation purposes at universities: input (number of students, number of academic staff...), process (student drop-out rates...) and output (examination results, employment rates...). The trend currently observable in Europe is a shift of emphasis from input to process and output.
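Purely as an illustrative sketch of the two perspectives named above (the indicator, the institutions and all figures are invented assumptions, not data from the study): a synchronic comparison ranks institutions on the same indicator at one point in time, while a diachronic comparison follows one institution's indicator over a period.

```python
# Illustrative sketch: hypothetical drop-out-rate data for fictitious
# universities; none of these figures come from the study.
dropout_rate = {                      # a "process" indicator
    "University A": {1997: 0.24, 1998: 0.22, 1999: 0.19},
    "University B": {1997: 0.31, 1998: 0.30, 1999: 0.28},
    "University C": {1997: 0.18, 1998: 0.21, 1999: 0.20},
}

def synchronic(indicator, year):
    """Compare institutions with one another at a given point in time."""
    return sorted(((u, v[year]) for u, v in indicator.items()),
                  key=lambda pair: pair[1])

def diachronic(indicator, university):
    """Follow one institution's performance over a period of time."""
    return sorted(indicator[university].items())

print(synchronic(dropout_rate, 1999))            # ranking across universities in 1999
print(diachronic(dropout_rate, "University A"))  # trend for one university
```

The validity caveat above applies unchanged: such code only manipulates the indicator; whether drop-out rates measure teaching quality is a separate question.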
With the political pressure for public sector reform (the desire for more efficiency and the introduction of market principles, with their focus on outputs and outcomes), universities have also come under scrutiny. Outputs and resource allocations are increasingly linked, and statistics and indicators are of growing importance in universities' evaluation procedures. Despite substantial criticism of their use, the public has a legitimate interest in getting concise and precise information about what is going on within universities and how taxpayers' money is spent, by whom, for what purposes, and whether this is being done efficiently. Statistics and indicators may help to keep universities under public and democratic control.
Resource evaluation, as noted above, covers the personnel, the financial resources and the organisation itself, and centres on efficiency.

Evaluation of structures: One consequence of the growth in student numbers is the increasing number of structures within universities, particularly at the central level. Another tendency is that structures are becoming more complex because of the diversified missions assigned to universities. The questions raised by evaluations are therefore: do new structures have to be set up? Do existing structures have to be split or merged? Which structural levels are pertinent? Is it necessary to centralise or to decentralise? Are the same tasks performed by several structures? The wish for more flexible, more dynamic structures, ready to meet users' needs as the environment changes quickly, is central to the evaluation of structures. Three broad types of structures have been identified for the analysis: traditional academic structures (faculties, departments, institutes, research centres...), support structures for teaching and research (libraries, computing centres...), and non-academic structures (administrative and technical services, most often centralised, such as personnel, finance and student registration services...). National evaluation bodies essentially evaluate teaching and research and, to a lesser degree, the organisation: 62 evaluations of structures were identified in the 31 case studies; they have been classified, and some statistical analysis has been carried out on them (the only such case in the research). More than half of these evaluations deal with the academic structures of teaching and research; one third concern the non-academic structures, and only one sixth the support structures for teaching and research. These evaluations are essentially decided by the universities themselves in the context of their autonomy, which seems to be an important condition for steering change. They are internal evaluations in one third of cases, external evaluations decided by the university management in another third, and audits by private consulting firms in 5% of cases. Evaluations of structures are decided by the public authorities in only 25% of cases. 75% of the identified evaluations took place in general universities: these essentially evaluate their academic and non-academic structures and, to a lesser degree, their support structures; they decide on the evaluation and on its external or internal realisation in line with the average. The profession-oriented universities of education and applied sciences evaluate their support and non-academic structures more than average; more than average, they carry out internal evaluations and call on private firms, as if, because of their proximity to firms, they adopted their behaviour. Finally, the universities of territorial development do not evaluate their structures much; they are more concerned by external evaluations (which essentially concern their support structures); perhaps these universities, because they are growing strongly and frequently setting up new structures, are not yet ready to evaluate those structures (it would make no sense for the new ones). Other results are also important. There do not seem to be more evaluations in universities with great autonomy than in universities with less autonomy; nevertheless, there seem to be more external evaluations where university autonomy is strong. The degree of decentralisation of decision-making seems to favour the development of evaluation. Internal evaluation bodies seem to play a driving role in the development of organisational evaluation. However, the most important factor for the evaluation of structures is the financial situation: financial pressure pushes towards rationalisation.
The configurations between the actors are crucial for the results and outcomes of evaluation, and for organisational change and learning. Two criteria can be used to build typical configurations: the initiator of the evaluation process, and the system of authority, power, dependency or autonomy between the key actors. The key question is the type of connection between evaluation, decision, negotiation and action. Controlling evaluation is most often initiated by the public authorities and is compulsory; its aim is to prepare or legitimise decisions of a financial, statutory or organisational nature, taken by the initiator by virtue of its position of authority. Autonomous evaluations result from an initiative by the evaluees themselves (a university, faculty or laboratory, a local actor). Hybrid situations, combining controlling and autonomous evaluation, are numerous. This contractual form is highly unstable, as it combines contradictory elements: devices corresponding to the logic of controlling evaluation, such as quantitative indicators triggering automatic decisions of global resource allocation, alongside approaches which favour negotiation on a project. The case studies show several experiments in "benchmarking" between two or more universities, i.e. cooperative initiatives (bilateral or federative). These experiments, based on joint initiatives of two or more universities, provide interesting examples of participatory "cross-evaluation". Bilateral configurations offer an opportunity to establish a climate of confidence, especially if the universities are not competing with one another; however, the problem of means remains open. Horizontal multilateral cooperation of the federative type makes it possible to disconnect evaluation from decisions, which makes the process less threatening for the evaluees. Finally, the most frequent situation seems to be not a combined set of evaluation initiatives but an uncontrolled accumulation of evaluation ventures launched independently by the various actors and/or bodies. Three problems result from this situation: a problem of priorities, a problem of timing (calendar), and a problem of coordination (coherence). Yet one initiative may trigger or strengthen another; central initiatives in the field of evaluation are not necessarily a hindrance to initiatives at the level of the universities. In the case of a central initiative, the evaluation can be delegated to the Ministry's administration, to free-standing official evaluation bodies, to a consulting firm, or to the university itself. In the case of an initiative coming from the leadership of the university, the operation may be commissioned to an agency set up by a Rectors' conference, to a management consulting group, or to an internal body.

Several problems are connected with these commissioning processes: the initiator has difficulty making the complex terms of reference sufficiently explicit for the commissioned agency, as does the agency for its experts; there is a risk of bureaucratisation of the agency, especially if it is established on a long-term basis; and the commissioned agency or experts may lack responsibility if they do not consider the consequences of the evaluations they produce, which can lead to irresponsible and decontextualised evaluations, or to loose, unstructured and consensual evaluations of little use. Concerning the institutionalisation of evaluation at university level, the case studies demonstrate the crucial importance of permanent structures which ensure the coherence and appropriate timing of the various evaluation procedures in the university and maintain continuity. They can provide support for decentralised initiatives. They seem to be efficient only if they are tightly linked with the leadership on one side and with the faculties on the other. The degree of participation in the evaluative process affects the acceptance of the results, the fate of the actions or decisions that can be taken, and the conditions for long-term learning processes. The quality of participation also differs according to whether the decision is seen as an open one, to be taken on the basis of the results of the evaluative process, or whether the evaluation appears to the actors to have been set up to legitimise a decision already taken. The link between evaluation and decision varies greatly according to the model of evaluation. Controlling evaluations may become destructive if the complex field of negotiation and political decision is eliminated by automatically connecting indicators to decisions. The contractual model acknowledges the importance of negotiation on the basis of the results, linking it to negotiation on the project which the university sets up in exchange for allocations. Nevertheless, the link to actual decisions is an important component of the evaluees' motivation, as well as a factor of responsibility for the experts.
Universities have a traditional mission of research, but a diversification is in progress: fundamental research and research and development are now linked, in favour of economic development. The tendencies observed in Europe are: research activity existing in all universities (we do not observe a divide between research universities and teaching universities); an extension within universities of structures devoted specifically to research, at the expense of structures combining teaching and research; an extreme fragmentation of research fields (linked to the evolution of knowledge and to the still unresolved question of interdisciplinary cooperation); a diversification and specialisation of financial resources (a decrease in funding drawn from the university's lump-sum budget); and stronger competition between universities to attract external funding. The strengthening of the evaluation of research, of its activities, resources, processes and results, is another observed tendency: universities have to be accountable for their research and their research performance, because the financial resources allocated are substantial. More precisely: the development of external evaluation, linked with internal evaluation, is almost universal; external evaluation can form part of a contractualisation process between the public authority and the university; and the combination of external and internal evaluation makes evaluation processes more complex. Research evaluation is more and more a collective matter, an evaluation of research units and no longer only an evaluation of researchers as individuals. Evaluating the quality and performance of research requires referents and criteria: the tendency is towards the use of international quality standards and the mobilisation of international experts; the development of European research contracts has certainly reinforced this tendency towards the homogenisation of referents. The evaluation of research combines qualitative and quantitative evaluation. It necessarily involves experts of the research field and their qualitative judgements; however, more and more often it mobilises quantitative indicators, especially when research centres have to be compared: ability to attract external funding, publications ranked by importance, international cooperation and mobility, postgraduate education and research training... Conversely, quantitative indicators for the applied sciences (patents, mobility of researchers towards industry, creation of spin-off companies from research centres) are less developed. In all countries, publications are taken into account in the evaluation process, and researchers accept this evaluation criterion: the potential perverse effects (risk-free or quickly publishable research, multiplication of publications drawn from the same research) are not actually observed and can easily be kept under control (for the last Research Assessment Exercise, British researchers were allowed to submit a maximum of four publications for evaluation).

The most difficult question is that of the comparability of quantitative indicators between the natural sciences and the social and human sciences: the latter find it difficult to score well on each of the indicators; the question has been resolved in some universities which have set up internal policies of contractualisation and of partial resource re-allocation between disciplines on the basis of locally negotiated criteria. A last tendency is observed: evaluation results increasingly have a financial impact, for example policies of centres of excellence receiving additional resources. However, this tendency is not universal: development funds, internal or external to universities, allow the creation of new research centres or the launch of new research topics; they counterbalance the tendency towards funding based on results.
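As a hedged illustration of the submission cap mentioned above for the British Research Assessment Exercise (the scoring scheme, names and figures below are hypothetical assumptions, not the RAE's actual method), capping the number of publications counted per researcher removes the incentive to multiply publications:

```python
# Illustrative sketch: scoring a research unit by publications, with a
# cap of four counted publications per researcher, as the text reports
# for the last Research Assessment Exercise. All scores are invented.

MAX_SUBMISSIONS = 4

def unit_score(unit):
    """Sum each researcher's best publications, at most four each."""
    total = 0.0
    for pubs in unit.values():           # pubs: quality scores per paper
        best = sorted(pubs, reverse=True)[:MAX_SUBMISSIONS]
        total += sum(best)
    return total

unit = {
    "researcher_1": [3.0, 2.5, 2.0, 1.0, 0.5],  # fifth paper not counted
    "researcher_2": [4.0, 3.5],
}
print(unit_score(unit))  # 16.0
```

Under such a cap, a sixth or seventh publication adds nothing to the unit's score, which is how the perverse effect of multiplying publications from the same research can be kept under control.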
