Periodic Reporting for period 1 - LOGIVIS (The logics of information visualisation)
Reporting period: 2015-10-01 to 2017-09-30
The background against which this project is developed is twofold: on the one hand, there is the role of visualisation within the new epistemic practices associated with the data-revolution (data science, analytics, the use of algorithms); on the other, there is the call within the visualisation-sciences (information visualisation, scientific visualisation, and visual analytics) to develop new theoretical frameworks that can inform the practice of visualisation, drive innovation, and lead to better predictions regarding the effectiveness of visualisations. In this context, the project strives to contribute to the critical reflection and epistemic standards that are needed as new epistemic practices arise, and to narrow the gap between existing theories of visualisation and insights (from logic, epistemology, and the philosophy of science) regarding the epistemic value of visualisations.
Progress within this project was made on two levels:
First, a more precise characterisation of the epistemological problem of visualisation was developed by (a) contrasting the problem of visualisation in the philosophical literature with how it is approached within the visualisation sciences; (b) disambiguating the object-level and meta-level inference problems in visualisation; and (c) analysing this meta-level problem as a design-problem.
Second, a formal analysis of data-transformations was developed in which it is possible to reason about simple and complex data-objects, as well as about the transformations (combine, aggregate, abstract, ...) we rely on to construct and modify such data-objects.
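The transformation kinds named above (combine, aggregate, abstract) can be illustrated with a minimal sketch; the function names, record shapes, and sample data below are hypothetical, chosen only to make the three operations concrete, and do not reproduce the project's formal analysis itself.

```python
def combine(a, b):
    """Join two simple data-objects on a shared key, yielding a complex data-object."""
    return [{**ra, **rb} for ra in a for rb in b if ra["id"] == rb["id"]]

def aggregate(records, key, value):
    """Collapse many records into summary records (here: sums per group)."""
    totals = {}
    for r in records:
        totals[r[key]] = totals.get(r[key], 0) + r[value]
    return [{key: k, "total": v} for k, v in totals.items()]

def abstract(records, field, bins):
    """Replace a fine-grained field by a coarser category (an information-losing step)."""
    def label(x):
        for lo, hi, name in bins:
            if lo <= x < hi:
                return name
        return "other"
    return [{**r, field: label(r[field])} for r in records]

# Hypothetical usage: build a complex data-object, then summarise it.
sales = [{"id": 1, "amount": 12}, {"id": 2, "amount": 30}]
regions = [{"id": 1, "region": "north"}, {"id": 2, "region": "south"}]
by_region = aggregate(combine(sales, regions), "region", "amount")
print(by_region)
```

Each step takes data-objects to data-objects, which is what makes it possible to reason formally about chains of such transformations.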
Work on ""The Problem of Visualisation"" and ""The Design Problem of Visualisation"" contributed the following insights:
(1) Understanding what is at stake epistemologically in visualisation requires us to contrast (a) the philosophical and the technical problem of visualisation (what is it vs how do we make it?), (b) the epistemological and the computational problem of visualisation (how is a visualisation related to its target, i.e. what it depicts, represents, conveys information about, vs how is a visualisation generated and consumed), and (c) the semantic and the syntactical problem (what does it mean or tell us vs how does it encode a data-object).
(2) Understanding the role of inference in visualisation requires us to clearly distinguish the object-level and meta-level inference problems in visualisation.
(3) The meta-level problem of visualisation is an ampliative or non-deductive inference problem that is best understood as a design-problem: a problem whose solution requires not more or better data, but better insight into the object-level problem (the requirements) and more knowledge of the design-space. From a formal point of view, this relates the meta-level problem to so-called characterisation problems (how do we characterise and describe the logical spaces in which we organise different possibilities, and how do we unambiguously single out a selection of those possibilities?). From an applied perspective, it establishes connections with two existing lines of research within the visualisation-sciences, namely the development of taxonomies of visualisations and visual actions (our options), and the development of specification-languages (the formal languages we use to unambiguously describe a graphical representation of data-objects).
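Specification-languages of the kind mentioned in (3) describe a chart declaratively rather than procedurally; Vega-Lite is one widely used example. The fragment below is a minimal illustrative spec in that style (the data values are invented): it singles out one point in the design-space by naming a mark type and encoding channels, instead of listing drawing steps.

```python
import json

# A minimal declarative chart specification in the style of Vega-Lite:
# the spec unambiguously describes *which* graphical representation of the
# data-object is meant (bar mark, x/y encodings), not *how* to draw it.
spec = {
    "data": {"values": [{"year": 2015, "count": 4}, {"year": 2016, "count": 7}]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "year", "type": "ordinal"},
        "y": {"field": "count", "type": "quantitative"},
    },
}

print(json.dumps(spec, indent=2))
```

Because every admissible spec is a well-formed expression of the language, the language itself delimits a design-space over which taxonomies and characterisation problems can be stated.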
Work on ""The Logical Analysis of Visualisation-Operations and Data-Transformations"" contributed the following insights:
(4) A unification of the philosophical outlook on visualisation, which is based on information-flow across networks of abstraction, with the technical outlook that approaches the problem of visualisation in terms of encoding and decoding.
(5) The formulation of a qualitative or logical counterpart of recent work done by Min Chen et al. on the use of Shannon’s information-theory in the context of visualisation.
(6) A formal reconstruction of some insights from Bertin’s classic “Semiology of Graphics”.
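The quantitative work referred to in (5) treats steps in the visualisation pipeline as channels measured with Shannon entropy. A minimal numerical illustration of the underlying idea (not the project's logical counterpart itself; the data are invented) is that abstracting a data column, e.g. by binning, strictly reduces the entropy it carries:

```python
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of values."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Binning is a lossy channel: the abstracted column carries less information.
raw = [1, 2, 3, 4, 5, 6, 7, 8]
binned = ["low" if v <= 4 else "high" for v in raw]
print(entropy(raw), entropy(binned))  # entropy drops after binning
```

A logical counterpart replaces these numerical measures with qualitative comparisons between the abstraction levels themselves.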
Unexpected applications of this work concern classification-practices and profiling, and contributed insights on:
(7) Classification, abstraction, and their role in prediction.
(8) The generation of information-asymmetries in profiling.
(9) A new understanding of the epistemic risks in profiling practices.
At a more general level, this project sought to move beyond the state of the art in two ways: on the one hand, by integrating critical perspectives on data-practices and representational practices into a more formal setting and focusing on the inferential processes within such practices; on the other hand, by applying logical tools for philosophical (rather than technical) purposes in a context where they are rarely so used, namely to evaluate epistemic processes in which technology plays a central role.