
Testing the multi-component model of human cognitive abilities

Final Report Summary - TEMCOM (Testing the multi-component model of human cognitive abilities)

Final Report for the project TEMCOM
The main purpose of the research project TEMCOM was to test a new theory of human intelligence and to develop it as a psychometric as well as a structural model. In particular, a cognitive theory of the item response processes involved in mental test performance, developed by the Fellow and Dr. Conway, a researcher at Princeton University, purports to explain the all-positive correlation matrix that emerges whenever diverse mental ability tests are administered to a large sample of people. This finding, called the positive manifold, is one of the most replicated results in psychology. It is also somewhat counterintuitive. Tests of vocabulary, spatial rotation, or mental arithmetic superficially measure different abilities. Yet someone who performs above average on any one of these tests is likely to perform above average on all of them.

The positive manifold has led to the concept of psychometric g, the general factor of intelligence, which, in turn, has often been interpreted as psychological g, i.e. general intelligence: an ability that permeates all human cognitive activity. Yet there is a massive amount of evidence contradicting the idea that people use the same general cognitive ability to perform tests with different content. Damage to different areas of the brain results in double dissociations of various cognitive abilities. Similarly, specific developmental disorders result in impaired spatial abilities while certain verbal skills remain intact, or vice versa. This provides strong evidence against explaining the positive manifold by a general cognitive ability operating within individuals. Hence the puzzle of the positive manifold can be summarized as follows: why does variation between people in mental test performance appear massively domain-general if the abilities they employ to solve such tests are largely domain-specific?

The theory proposed by the Fellow and Dr. Conway (called the ‘multi-component model’ at the time of the application for the Fellowship, but later published as ‘Process Overlap Theory’) provides an answer to this question. It assumes that the positive manifold reflects multiple domain-general processes that are tapped in an overlapping fashion across batteries of cognitive tests. Domain-general processes involved in executive attention, largely dependent on the dorsolateral prefrontal cortex, are central to such performance. These processes are activated by a large number of test items, alongside domain-specific processes tapped only by specific types of tests.

The theory interprets the general factor, or g, as an emergent property reflecting the pattern of positive correlations observed among test scores, not as a causal latent variable, and therefore challenges the notion of general ability. It also accounts for inter-individual differences in behavior in terms of intra-individual psychological processes. Accordingly, the main research goal of the Fellowship was to bridge the ‘two disciplines’ of psychology, the correlational and the experimental, by translating Process Overlap Theory into a psychometric item response model and into a structural latent variable model.

Modern test theory, or item response theory (IRT), describes the probability of a correct response to an item as a monotonically increasing function of the underlying ability.
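This summary does not specify the functional form; one standard choice in IRT is the two-parameter logistic (2PL) model, in which the probability that person p answers item i correctly depends on the person’s ability θ_p and the item’s discrimination a_i and difficulty b_i:

```latex
P(X_{pi} = 1 \mid \theta_p) = \frac{1}{1 + e^{-a_i(\theta_p - b_i)}}
```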

Based on the difficulty parameter of the item, it is possible to calculate the probability of a correct answer as a function of ability. Traditional IRT relies on the basic assumption that each person's response to an item is a function of a single underlying ability. Yet Process Overlap Theory claims that any test item taps a number of different processes from different domains, and that in order to arrive at a correct answer, each dimension has to be passed successfully, as if each dimension were a separate item. This means that the probability of arriving at a correct answer equals the product of the probabilities of passing each dimension. This is reflected in the multidimensional item response model that represents Process Overlap Theory and was developed as part of the Fellowship. The Fellow, along with Dr. Conway and one of his graduate students, has also conducted simulation studies based on the IRT model. The results demonstrate that when test scores are simulated according to the model, the positive manifold does indeed emerge. The simulations, along with the item response model, are expected to be published in 2015.
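With D overlapping dimensions, a non-compensatory (multiplicative) multidimensional model of this kind can be written as a product of per-dimension 2PL terms (the exact parameterization used in the project is not given in this summary):

```latex
P(X_{pi} = 1 \mid \boldsymbol{\theta}_p) = \prod_{d=1}^{D} \frac{1}{1 + e^{-a_{id}(\theta_{pd} - b_{id})}}
```

The following minimal sketch is not the project’s simulation code; the single ‘executive’ dimension, the three domains, and all parameter values are illustrative assumptions. It shows how a positive manifold emerges when every test multiplicatively taps a shared domain-general process alongside a domain-specific one:

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 5000, 20   # sample size and items per domain test

# Latent abilities: one domain-general executive dimension plus three
# domain-specific dimensions (e.g. verbal, spatial, numerical).
exec_ability = rng.normal(size=n_persons)
domain_ability = rng.normal(size=(n_persons, 3))

def p_correct(theta, b, a=1.0):
    """2PL response probability for ability theta and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

scores = np.zeros((n_persons, 3))
for d in range(3):
    b_exec = rng.normal(size=n_items)   # executive difficulty of each item
    b_dom = rng.normal(size=n_items)    # domain-specific difficulty of each item
    # Non-compensatory rule: the executive process AND the domain-specific
    # process must both succeed for the item to be answered correctly.
    p = (p_correct(exec_ability[:, None], b_exec)
         * p_correct(domain_ability[:, [d]], b_dom))
    responses = rng.random((n_persons, n_items)) < p
    scores[:, d] = responses.sum(axis=1)

# Every off-diagonal correlation comes out positive: a positive manifold
# emerges without a single general ability operating within each test.
print(np.corrcoef(scores, rowvar=False).round(2))
```

Because all three sum scores depend on the shared executive dimension, every off-diagonal correlation is positive, even though no single ability accounts for performance within each domain test.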

According to the idea of psychological g, the positive manifold is due to the causal effect of a latent variable, whereas according to Process Overlap Theory the positive manifold is an emergent property, the result of the specific patterns in which item response processes overlap. Therefore, according to Process Overlap Theory, the general factor is a formative, rather than a reflective, variable. Figure 1 illustrates the difference. The model on the left is a reflective model. According to the theory of general intelligence, g causes the measures because, ceteris paribus, a person’s score on a measure, e.g. an IQ test, is determined by his or her standing on g. In formative models the chain of causation is the opposite: the latent variable emerges from the indicators and not the other way around, hence g is the result, rather than the cause, of the correlations between group factors. Other formative latent variables include socioeconomic status (SES) and general health, which tap common variance among their measures but do not explain it.
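In standard structural equation modeling notation (the report describes this distinction verbally; the equations below are a conventional rendering), a reflective g causes each indicator x_j, whereas a formative g is composed from its indicators:

```latex
\text{Reflective:} \quad x_j = \lambda_j g + \varepsilon_j
\qquad\qquad
\text{Formative:} \quad g = \sum_j w_j x_j + \zeta
```

In the reflective equation the loadings λ_j transmit the causal effect of g to the measures; in the formative equation the weights w_j merely aggregate the measures, so g summarizes their common variance without explaining it.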

Therefore, Process Overlap Theory challenges the hierarchical model of g presented in the left part of Figure 1. In fact, the theory translates into a hybrid model: part reflective, part formative (Figure 2). That is, as a reflective causal model it corresponds to the so-called ‘oblique model’, which has no general factor to explain the correlations between the group factors. But it also accommodates g as a formative latent variable: the common consequence, rather than the common cause, of the correlations between group factors.

Besides these main projects, the Fellow has completed a number of smaller projects. One of these is a simulation study of how different models of intelligence reflect the domain-generality and the strength of various positive manifolds; the results have been presented at an international conference. The other study investigates the phenomenon called ‘differentiation’: the empirical finding that correlations between tests measuring different mental abilities are lower in high-ability populations. In collaboration with Dr. Molenaar, a researcher at the host institution, it has been demonstrated that differentiation is not restricted to tests of mental ability, but also occurs in tasks measuring working memory (a construct developed by cognitive psychologists to characterize how human beings maintain access to goal-relevant information in the face of concurrent processing and/or distraction).