Sensing, predicting and exploiting consumer visual attention in fast-paced marketing environments

Periodic Reporting for period 1 - CONVISE (Sensing, predicting and exploiting consumer visual attention in fast-paced marketing environments)

Reporting period: 2023-10-01 to 2025-09-30

Three of the main current challenges in Attention-Based Marketing are how to: a) efficiently measure consumer attention in fast-paced environments (e.g. in electronic marketing), b) optimise marketing stimuli for attracting consumer attention, and c) use/democratise consumer attention data (e.g. eye-sensing data/facial expressions) to improve the experience of individual consumers while respecting their privacy.
In line with these challenges, the CONVISE project develops methods for sensing, predicting, and exploiting consumer visual attention in social media advertising settings, which can be used to optimise marketing effort and enhance consumer well-being.
First, CONVISE designed video-based eye-tracking sensing technology for extracting reliable, market-relevant consumer visual attention maps, while minimising the human (pre-)processing effort required.
CONVISE focused on the improvement of appearance-based gaze estimation methods based on deep learning. Appearance-based gaze estimation is the regression problem of mapping facial images of (human) subjects to specific gaze directions (or points on the screen). Deep learning methods for gaze estimation work by training a neural network using examples of facial images and gaze direction vectors.
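To make the regression formulation concrete, the following is a minimal, hypothetical sketch (in PyTorch) of such a pipeline: a small convolutional network maps a cropped face image to a 2D gaze point in normalised screen coordinates and is trained with a mean-squared-error loss against recorded gaze labels. The network layout and hyperparameters are illustrative assumptions, not the CONVISE architecture.

# Minimal sketch of an appearance-based gaze regressor (illustrative only).
import torch
import torch.nn as nn

class GazeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Small CNN backbone producing a 128-dimensional appearance feature.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 2)  # (x, y) gaze point in normalised screen coordinates

    def forward(self, face):           # face: (B, 3, H, W) cropped face images
        feat = self.backbone(face).flatten(1)
        return self.head(feat)

model = GazeRegressor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One training step on a dummy batch of face crops and gaze labels.
faces = torch.randn(8, 3, 112, 112)   # stand-in for cropped face images
gaze = torch.rand(8, 2)               # stand-in for ground-truth screen points
optimiser.zero_grad()
loss = loss_fn(model(faces), gaze)
loss.backward()
optimiser.step()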
Perhaps the most important challenge in appearance-based gaze estimation is how to deal with the multiple sources of input variance, which can significantly affect the precision of the solution. The first group of variances relates to the external environment. These can be controlled by defining stringent experimental settings, such as well-defined camera specifications, experiment locations, illumination conditions, subject distances/angles from the sensor, and/or even employing head/chin rests wherever possible. The second group of variances relates to the visual appearance of individual subjects, such as their physical characteristics (e.g. age, gender, skin/eye/face colour/dimensions, and ophthalmic health conditions), which require meticulous effort to control.
The standard method for controlling the latter sources of variance is the so-called “calibration process”, which occurs at the start of each eye-tracking data collection session. This process includes the following steps. Crafted calibration patterns are shown to the subjects, e.g. crosses placed at specific, known points on the screen. Subjects are asked to look at those points while their facial images are collected. Those images are then used to finetune/re-train the model on the new subjects, thus improving the accuracy of the solution for each specific subject. The disadvantages of this approach are that a) it is time-consuming, b) it requires honest cooperation of the subject, and c) it only works when the person collecting the data (e.g. a researcher) is present to verify that the calibration process was successful. To address these limitations, CONVISE developed a novel methodology, described below.
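For illustration only, the conventional per-subject calibration step described above could look roughly as follows: face images recorded while a subject fixates known screen targets are used to fine-tune the pretrained regressor for that subject. The function name, step count, and learning rate are assumptions; this is the standard procedure that CONVISE's method avoids, not the project's own code.

# Illustrative sketch of conventional per-subject calibration by fine-tuning.
import torch
import torch.nn as nn

def calibrate(model: nn.Module,
              calib_faces: torch.Tensor,    # (N, 3, H, W) faces recorded at calibration targets
              calib_points: torch.Tensor,   # (N, 2) known screen coordinates of the targets
              steps: int = 50, lr: float = 1e-5) -> nn.Module:
    """Fine-tune a pretrained gaze model on one subject's calibration data."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(steps):
        optimiser.zero_grad()
        loss = loss_fn(model(calib_faces), calib_points)
        loss.backward()
        optimiser.step()
    return model

# Example: adapt the GazeRegressor from the previous sketch to a new subject.
# model = calibrate(GazeRegressor(), torch.randn(9, 3, 112, 112), torch.rand(9, 2))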

Second, CONVISE conducted a thorough literature review of advertising research featuring eye-tracking experiments, in order to collect eye-tracking data for developing neural networks for eye-tracking prediction. From the beginning of this work, it became evident that standards are lacking in advertising studies using eye-tracking data. The eye-tracking data collection procedure is affected by several factors, ranging from the eye-tracking sensor and its configuration to the scene geometry, the data acquisition quality assurance, and the software.

Modern eye-tracking sensors compute a multi-dimensional signal consisting of geometric variables captured at regular time intervals according to the sensor sampling frequency, including a) the estimated relative eye positions, in the form of screen coordinates, and b) pupil dilation data over time. Processing these raw data yields the eye-tracking variables normally seen in advertising research papers, e.g. fixation time/counts, saccade duration/count, and their variants or aggregations (a minimal sketch of such fixation extraction from raw samples is given after the list below). Having access to these raw data, along with appropriate reporting of the remaining factors, allows for research reproducibility; however, raw data are rarely made available in advertising research. The remaining factors include:
• Eye-tracking Equipment. The type of sensor used and its configuration.
• Eye-tracking software and algorithms used to extract eye-tracking measures.
• The geometry of the scene, such as the distance of the subject/participant from the screen, the screen size and resolution, and the lighting conditions.
• Data quality assurance practices, such as calibration.
• The precise definition of the visual stimuli, in terms of AOIs (areas of interest), their size, and their correspondence with the actual stimuli.
• The eye-tracking measures used and how they have been calculated.
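As referenced above, the reduction of raw gaze samples into fixations depends heavily on the algorithm and thresholds chosen, which is one reason these factors need reporting. The following is a minimal dispersion-threshold (I-DT-style) sketch of that reduction; the sampling rate, dispersion threshold, and minimum duration are illustrative assumptions, and commercial eye-tracking software may use different algorithms entirely.

# Dispersion-threshold (I-DT-style) fixation detection from raw gaze samples.
from typing import List, Tuple

def _dispersion(window: List[Tuple[float, float]]) -> float:
    # Spatial spread of a window of gaze points: x-range plus y-range (in pixels).
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples: List[Tuple[float, float]],
                     sampling_hz: float = 60.0,
                     dispersion_px: float = 35.0,
                     min_duration_s: float = 0.1) -> List[dict]:
    """Group consecutive gaze samples into fixations.

    A window counts as a fixation when its dispersion stays below
    `dispersion_px` for at least `min_duration_s` seconds.
    """
    min_samples = int(min_duration_s * sampling_hz)
    fixations, i = [], 0
    while i + min_samples <= len(samples):
        if _dispersion(samples[i:i + min_samples]) <= dispersion_px:
            # Extend the window while the dispersion threshold is respected.
            j = i + min_samples
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= dispersion_px:
                j += 1
            xs, ys = zip(*samples[i:j])
            fixations.append({
                "centroid": (sum(xs) / len(xs), sum(ys) / len(ys)),
                "duration_s": (j - i) / sampling_hz,
            })
            i = j
        else:
            i += 1
    return fixations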
Our study of 34 recently published articles featuring eye-tracking experiments in advertising and business research found that many of the above factors are underreported and vary significantly across eye-tracking experiments.
CONVISE developed a methodology called “Implicit train-free calibration for eye-tracking”, which is the project’s main result. The developed methodology significantly advanced the state of the art, closing the accuracy gap between infrared-based systems and webcam-based eye-tracking by 20%. It includes the design of an entirely new neural architecture that adapts to each eye-tracking subject without using gaze data for that subject. Unlike existing approaches, the developed methodology requires no model re-training/finetuning and thus no annotations (e.g. head poses, gaze information) at all. Instead, implicit calibration is achieved by exploiting merely per-participant (facial) images, in a novel calibration-aware neural architecture that operates on comparative information between the test participant image and the proposed “calibration anchors”. Calibration anchors are merely features of subject-specific images. The derived features of the test images are combined with the calibration anchors using an attention mechanism. The developed architecture provides considerable performance gains compared to its respective uncalibrated baseline (which remains a fair comparison). Besides performance gains, the developed method offers practical benefits: it can minimise individual researcher calibration effort and potential calibration errors, reducing the time spent by human subjects during each experimental session.
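As an illustration of the described idea (and explicitly not the actual CONVISE architecture, which is detailed in the project publications), the sketch below shows how features of a test image could attend over subject-specific “calibration anchor” features via a standard cross-attention layer, conditioning the gaze prediction on subject appearance without per-subject gaze labels or re-training. All dimensions, layer choices, and names are assumptions.

# Hypothetical sketch: conditioning a gaze prediction on calibration anchors.
import torch
import torch.nn as nn

class AnchorConditionedGazeHead(nn.Module):
    def __init__(self, feat_dim: int = 128, num_heads: int = 4):
        super().__init__()
        # Cross-attention: the test-image feature queries the anchor features.
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.head = nn.Linear(feat_dim, 2)   # 2D gaze point

    def forward(self, test_feat: torch.Tensor, anchor_feats: torch.Tensor) -> torch.Tensor:
        # test_feat:    (B, 1, D) feature of the test face image
        # anchor_feats: (B, K, D) features of K subject-specific anchor images
        attended, _ = self.cross_attn(test_feat, anchor_feats, anchor_feats)
        return self.head((test_feat + attended).squeeze(1))

# Dummy usage: 8 test images, each with 5 anchor images from the same subject.
head = AnchorConditionedGazeHead()
gaze = head(torch.randn(8, 1, 128), torch.randn(8, 5, 128))
print(gaze.shape)  # torch.Size([8, 2])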

CONVISE developed new “Guidelines for data sharing and data reporting practices for neuromarketing data”.
Standardization in data reporting and data sharing is essential for reproducibility, yet standards seem lacking in advertising studies using neuromarketing techniques. Our review, from a technical perspective, of recent relevant experimental studies using eye-tracking and electroencephalography (EEG) revealed gaps in the reporting of important factors affecting data acquisition and data processing. These gaps may in turn cause misinterpretations of the obtained results and conclusions. CONVISE addresses the need for standardization in neuromarketing studies by proposing a set of guidelines that may serve as a checklist for ensuring proper reporting, leading to the extraction of reliable measures. The guidelines argue in favour of full open data sharing and discuss the pros and cons of employing open software in the analysis. They apply to almost any eye-tracking and EEG-related advertising (and, by extension, marketing) study and serve researchers, reviewers, and journal editors of such studies.

More details of the above works can be found in project publications.