Evaluation of the Community Innovation Survey (CIS)
Phase 1, 1995
The Community Innovation Survey (CIS) was organised jointly by DGXIII/SPRINT/EIMS and EUROSTAT and was a first attempt to collect firm-level data on inputs to and outputs of innovation throughout the European Union.
In recent years the political awareness of the crucial role played by technology in economic development has increased rapidly. Whereas only 15 to 20 years ago technology-related policy instruments were largely unknown and little-used tools of economic policy, they have grown in importance and today form some of the core instruments of industrial policy in many Western countries.
However, there is a serious lack of empirical knowledge to guide these policies, and thus a severe risk of applying less efficient instruments to less suitable target groups - or even of applying the wrong policies altogether, retarding technological change.
Even today, data on innovation and diffusion consists mostly of indirect indicators such as R&D, patents, trade in high-tech products and the technological balance of payments. Even though much information has been retrieved from this data through refinement, extension and re-classification, much remains to be explored if technology policy is to be built on a sound empirical basis. What we know now can be considered to picture only the 'tip of the innovative iceberg': there is a hidden part which we must explore (cf. sections 2.2-2.4).
At the moment, direct firm-based surveys of innovation are the best possible method for shedding light on the hidden part of this 'innovative iceberg' since this type of survey can supply information on, e.g., innovative strategies, sources of innovation, barriers to innovation, innovative efforts, innovative results and diffusion of technology (cf. chapter 3).
Furthermore, as is the case with most statistics, the value of this type of information is multiplied once it is comparable across countries or over time. Thus, if quality and comparability prove to be satisfactory, this new set of international data can provide vital information for the design of technology policy both at the national level and at the EU level. Therefore, we believe that it was necessary and urgent to launch an initiative such as the Community Innovation Survey, and this report evaluates the surveys in all EU member states and Norway.
Implementation of the CIS
The process of initiating and developing the CIS was not an ideal one in which a series of different phases (creation of a conceptual framework, creation of a common questionnaire, development of guidelines for implementation and sampling, implementation across member states, creation of the database) followed each other consecutively and at a measured pace.
Rather, the project was hampered by several factors. First, because of the pilot character of the project, several difficult issues in the questionnaire had to be investigated in depth; most importantly, the possibility of measuring innovation costs. Second, because of the international character of the project, the Commission had to negotiate with many national contractors which, a priori, had very different experiences and expectations. Third, since the aim of the CIS was to achieve comparability also with surveys in non-EU countries, there were negotiations with non-EU OECD countries. Some member countries did not await this development but implemented nationally developed surveys based on the Oslo Manual and a draft harmonised questionnaire. These processes made the project difficult to monitor and co-ordinate, left it partly self-driven and made its timing unsatisfactory (cf. section 5.2).
Moreover, because no legal basis exists for the collection of innovation data, the Commission was unable to impose any demands on member states and was limited to compiling a list of implementation and sampling issues which was presented to member states as recommendations. Thus, the Commission was highly dependent on member states' co-operation in the first CIS project. However, it is the opinion of the evaluation team that the recommendations supplied by the Commission were too general and did not cover all relevant aspects (cf. section 5.2).
It seems that there was too little awareness among national contractors of the importance of retaining the harmonised layout to ensure international comparability. Therefore, at the national level a variety of different aims and methods were employed, depending on the experience and expertise of the national contractors (cf. sections 5.3-5.5, 6.2 and the separate annex with country reports). Although this variety has created problems of comparability, there are also useful lessons to be learned from the experience. Since the CIS is a pilot action, this variety offers a unique opportunity to assess the efficiency and efficacy of different methods of conducting innovation surveys.
In view of the novelty and the international character of the project, it is the opinion of the evaluation team that the first CIS has been successful in its first aim. From this pilot action the Commission, the national contractors and scholars in innovation measurement have learned much that can ensure high quality in a possible next CIS (or any other national or international innovation survey) (cf. sections 5.8, 6.6 and 7.5-7.6). Regarding the second aim of the CIS - the collection of comparable innovation data across EU member states - the level of success is more moderate. Data is not comparable across all countries and all variables (cf. section 6.5). However, it is the opinion of the evaluators that this aim - given the conditions under which the project was initiated and developed - could not be reached in this first phase of the CIS. Under the circumstances it is an achievement that a new data source has been created which may be used for some types of analysis of innovation in Europe (cf. later).
If we make a 'relative' assessment of the degree of success of the CIS - i.e. an assessment of the CIS compared to other types of data on technological development - we may conclude that the CIS has been more successful. In a single venture the CIS project has gathered and - with this report - disseminated more information on the 'field methodology' of innovation surveys than has been collected and disseminated for other types of surveys of technological development. Furthermore, there are also problems with the international comparability of those types of data, even though, in several cases, they have been collected for many years. Thus, seen in this 'relative light', the CIS has come far in its first year.
Overall quality of the realised CIS data
We stated above that the CIS should not be expected to reach full international comparability of data in its first attempt. It is therefore not surprising to conclude that, on the basis of the definitions employed in this evaluation, the results of the first CIS cannot be regarded as statistically comparable between all countries (cf. section 6.5), which implies that the analytical possibilities are restricted (cf. later).
At the time of the evaluation neither Eurostat nor the member countries had completed the statistical work (margins of error have not been calculated and analyses of non-response have not been performed). Thus, we have no exact knowledge of the quality of the realised data across countries (cf. section 6.4). This issue will be properly assessed by Eurostat in building up the EU database, and until these margins of error are calculated results should be interpreted with the utmost care.
Reasons for lack of comparability
We concluded above that, as might be expected, there are problems with the comparability of the data. In our view the main factors accounting for this lack of comparability are:
- Some contractors modified some questions and thus questionnaires were not comparable between all countries.
- The survey frame was not satisfactory for a few countries.
- The sampling methods were not sufficiently harmonised.
- High levels of total and/or item non-response occurred in some countries.
Two other problems exist at the time of the evaluation (according to Eurostat and DGXIII these problems will be solved in the compilation of the EU database):
- Raising factors have not been calculated for all countries, nor used in the aggregation of the data.
- Margins of error have not been calculated.
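To make concrete what these two outstanding pieces of statistical work involve, the sketch below shows the standard survey arithmetic behind raising factors (grossing sample totals up to population estimates) and margins of error (here, for a proportion, with a finite-population correction). This is a minimal illustration of textbook formulas under assumed figures; the function names and numbers are hypothetical and do not describe the procedure Eurostat actually applies.

```python
import math

def raising_factor(population_size, sample_size):
    """Raising factor for a stratum: the inverse of the sampling fraction."""
    return population_size / sample_size

def grossed_total(sample_total, population_size, sample_size):
    """Gross a sample total up to a population estimate using the raising factor."""
    return sample_total * raising_factor(population_size, sample_size)

def moe_proportion(p, sample_size, population_size, z=1.96):
    """Approximate 95% margin of error for an estimated proportion p,
    with a finite-population correction for sampling without replacement."""
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    return z * math.sqrt(p * (1 - p) / sample_size) * fpc

# Hypothetical stratum: 2000 firms in the frame, 400 sampled, 120 innovators found.
estimated_innovators = grossed_total(120, 2000, 400)   # 600 firms
error_margin = moe_proportion(0.3, 400, 2000)          # roughly +/- 4 percentage points
```

Without such raising factors, country aggregates mix unadjusted sample counts; without the margins of error, readers cannot tell whether an observed cross-country difference exceeds sampling noise.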
The evaluation has shown that the main factors accounting for these deficiencies are:
- Lack of co-ordination.
- Lack of instructions about or expertise on sampling and implementation.
- Lack of awareness of the importance of international comparability.
Again, however, it is possible to go back one step further and assess the reasons for these problems. In our opinion they can be summarised in six points:
The international character of the project. The Commission had to negotiate with national contractors which, a priori, had different experiences and expectations. Furthermore, since a key priority for the CIS was to make the data comparable also with other OECD countries, negotiations took place with the OECD. These processes delayed the project, and some member states did not await the results of this endeavour but implemented the survey before a final harmonised questionnaire was agreed on. Therefore both timing and harmonisation were set back.
Lack of co-ordination power. Since no legal basis exists for the collection of innovation data, the Commission was compelled to make only recommendations on sampling and implementation to national contractors. It could make no demands on the services to be rendered by national contractors and could not pick the best possible national contractors. This seriously hampered harmonisation and in some cases affected the quality of the data.
Lack of advice. The set of recommendations worked out by the Commission was not sufficiently detailed. Even though still voluntary in nature, the recommendations could have been more itemised, providing detailed advice to some of the more inexperienced national contractors. This may have reduced the quality of the data for some countries.
Lack of will. Even though all national contractors agreed on the importance of creating an internationally comparable database of innovation statistics, some of them did not seem to have the will to comply with this aim of the project. They therefore introduced various country-specific changes, which hampers the comparability of the data.
Lack of expertise. It seems that in a few countries national contractors did not have the full economic or statistical expertise to carry out the innovation survey in a satisfactory way. This hampers the quality of the data for these countries.
Lack of comparison of experiences. Too little was done to facilitate an interactive learning process where national contractors could learn from each other (best practices, errors, difficulties, etc.).
Analytical possibilities with the CIS data
Three different uses of the CIS data may be envisaged:
- Descriptive analysis of differences between countries.
- Analysis of innovation in selected industries across countries.
- Analysis of innovative structures within countries.
Within each of these uses a variety of projects may be carried out. However, in all analyses margins of error must be taken into consideration.
Since errors may be smaller if non-innovating enterprises are left out of the analysis, i.e. if explorations are restricted to the set of innovative enterprises, such analyses may provide more robust results. In these explorations the data from Greece and Portugal may also be used.
The CIS data should not be used to assess EU totals. For example, total innovation costs in the EU or the share of turnover devoted to innovation across the EU cannot be assessed because of deficiencies in the data for some countries. For policy-related advice the data may be used to assess issues such as the non-R&D costs of innovation, the sources of and barriers to innovation, R&D co-operation and innovative strategies. Because comparability between countries is in some cases low, the data should not be used for detailed analyses of differences between countries that result in detailed policy advice on the (re)distribution of EU resources between countries or regions, or on initiatives aiming at harmonising structural and institutional factors across countries.
Recommendations for future innovation surveys
The evaluation has shown that co-ordination at the European level is essential if comparable results across countries are to be created. Innovation surveys are a new initiative and are not yet backed by as solid a body of experience as the main economic indicators. A common standard is still being sought, and achieving it requires close co-operation among the various organisations involved in the field. It is the opinion of the evaluation team that satisfactory co-ordination can only be achieved through a legal basis, and it is consequently recommended that a legal basis for innovation surveys be adopted. Given such a legal basis, the Commission should:
- Coordinate the venture, i.e. make sure that timing is appropriate;
- Select the best possible national contractors (or contractor teams), i.e. contractors with relevant experience;
- Create a revised pre-tested questionnaire to be implemented in all EU countries (using the lessons gained during this first round of the CIS);
- Work out detailed instructions on all aspects and levels of the implementation of the surveys at the national level, i.e. recommendations on:
  target population, cut-off point, timing, reminder procedure, follow-up on responses, survey method, frame, survey unit, sample size, sampling technique, subsampling for non-response, imputation of missing data, raising factors and assessment of reliability (cf. section 7.6).
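To illustrate how instructions on sample size, sampling technique and cut-off point might be operationalised in practice, the sketch below allocates a fixed total sample across firm-size strata proportionally, while surveying the largest size class exhaustively (a common rule in business surveys, since a few large firms dominate totals). The stratum labels, cut-off and allocation rule here are illustrative assumptions, not CIS prescriptions.

```python
def allocate_sample(strata, total_sample, census_classes=frozenset({"500+"})):
    """Allocate total_sample across size strata.

    strata          -- mapping of size-class label -> number of firms in the frame
    census_classes  -- size classes surveyed exhaustively (take-all strata)
    Remaining sample is spread over the other strata in proportion to frame size,
    with at least one unit per stratum.
    """
    census = {c: n for c, n in strata.items() if c in census_classes}
    rest = {c: n for c, n in strata.items() if c not in census_classes}
    remaining = total_sample - sum(census.values())
    frame_size = sum(rest.values())
    allocation = {c: max(1, round(remaining * n / frame_size)) for c, n in rest.items()}
    allocation.update(census)  # take-all strata: sample equals frame count
    return allocation

# Hypothetical frame: allocate 1200 questionnaires over four size classes.
plan = allocate_sample(
    {"20-49": 3000, "50-199": 1500, "200-499": 500, "500+": 200},
    total_sample=1200,
)
```

Harmonising a rule of this kind across countries would ensure that, for example, every country treats its largest firms as a take-all stratum and draws the rest proportionally, so that realised samples are comparable by construction.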
In case these issues are too detailed to be included in a legal basis, it is recommended that the legal basis be designed as a frame which can be filled in by the Commission. It is further recommended that the legal basis build on the experience gained in this first round of the CIS.
The learning effects
One thing the evaluation team considers very important is the learning effect of repeated innovation surveys. This includes both 'internal' learning effects (in repeating the venture the Commission, the national contractors and the respondents will all have learned a great deal from this first CIS) and, equally important, 'interactive' learning effects (a horizontal process in which national contractors learn from each other).
If the venture is repeated, the learning effects will ultimately imply that innovation surveys will provide more reliable and more comparable data. This information will be invaluable for the design of technology policy, both at the national level and at the EU level.
Empirical studies and the Community Innovation Survey