
Uncertain Knowledge Maintenance and Revision in Geographic Information Systems


This research addresses quantification and visualization of existential uncertainty of spatial objects derived from remotely sensed imagery. A split-and-merge image segmentation technique is applied at various step sizes of splitting and merging parameters. We test the hypothesis that objects occurring at many step sizes have less existential uncertainty than those occurring at only a few step sizes. Segmentation accuracy is quantitatively assessed by comparing segmentation results with a topographic reference map. Seven objects are identified and their correspondence with mapped objects, in terms of existence as expressed by content and extension, is investigated. For that purpose we calculated the area fit index. Large, homogeneous objects in an isolated position like a lake have low existential uncertainty, whereas small or heterogeneous objects have a much higher existential uncertainty. A second point addressed in this study is visualization of segmentation uncertainty. For that purpose we used the boundary stability image. We conclude that the boundary stability index (BSI) allows a quantification of existential uncertainty and is suitable for its visualization. The area fit index (AFI) is a measure for assessing the accuracy of individual objects. The boundary fit index (D(B)) is used to assess segmentation accuracy for the whole image.
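As a hedged illustration of how such an index can be computed (the exact formulation used in the study is not reproduced here), one common area fit index relates the area of a reference object to the area of the segmented object that overlaps it most:

```python
def area_fit_index(ref_area: float, obj_area: float) -> float:
    # One common formulation (an assumption of this sketch): relative
    # difference between the area of the reference object and the area
    # of the segmented object with the largest overlap.
    # 0 = perfect fit, > 0 = object too small, < 0 = object too large.
    if ref_area <= 0:
        raise ValueError("reference area must be positive")
    return (ref_area - obj_area) / ref_area
```

Under this assumed formulation, values near 0 indicate a good areal fit, while large positive or negative values flag under- and over-segmented objects.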
We have developed a system to manage partially ordered constraints for building construction. We have shown how to help an expert find the best place to build a house (condominium) in an urban area, according to several criteria defined both by city legislation and by the property developer. These rules can sometimes be ordered by an expert according to their level of importance. Examples of rules are: 1. the field has a minimal area of 1000 m2; 2. the average maximal slope of the field is 6°; 3. the field is situated at less than 2 kilometres from a commercial area; 4. the field is situated at less than 150 metres from a fire hydrant. Usually, no parcel satisfies all these rules. The expert can express a preference between some constraints but cannot always express a preference between all constraints: this leads to incomparabilities. Despite these incomparabilities, we are able to find the "best" locations according to the constraints they satisfy or, dually, according to the constraints they falsify. For that, we need to build a preference relation between all the locations. Our method was tested on a data set for the city of Sherbrooke (Canada). For that, we have integrated our system into the commercial Geographic Information System Geomedia from Intergraph. We have considered 2782 parcels and 12 partially ordered constraints. The treatment was immediate and two parcels were chosen. An aerial snapshot confirmed the properties of these parcels, which are the best we can find according to the partial order on the constraints.
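A minimal sketch of one way to derive a preference between parcels despite incomparabilities: a parcel is dominated when another parcel violates a strict subset of its violated constraints. This simplification ignores the ordering between the constraints themselves, which the actual system takes into account; parcel and constraint names are hypothetical.

```python
def best_parcels(violations: dict[str, set[str]]) -> list[str]:
    """Return the undominated parcels: those whose violated-constraint
    set is not a strict superset of any other parcel's."""
    def dominated(p: str) -> bool:
        # q dominates p when q violates strictly fewer constraints (set-wise)
        return any(violations[q] < violations[p] for q in violations if q != p)
    return sorted(p for p in violations if not dominated(p))
```

On a toy input such as {"A": {"slope"}, "B": {"slope", "hydrant"}, "C": {"area"}}, parcel B is dominated by A, while A and C remain incomparable and are both kept as candidate "best" locations.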
This study proposes a segmentation procedure based on grey-level and multivariate texture to extract spatial objects from an image scene. Object uncertainty is quantified to identify transition zones of objects with indeterminate boundaries. The Local Binary Pattern (LBP) operator, modelling texture, is integrated into a hierarchical splitting segmentation to identify homogeneous texture regions in an image. We propose a multivariate extension of the standard univariate LBP operator to describe colour texture. The paper is illustrated with two case studies. The first considers an image with a composite of five texture regions. The two LBP operators provide good segmentation results. The second case study involves segmentation of coastal landform and land cover objects using a LiDAR DEM and a multi-spectral CASI image of a coastal area in the UK. The multivariate LBP operator performs better than the univariate LBP operator, segmenting the area into meaningful objects and yielding valuable information on uncertainty at the transition zones. We conclude that the multivariate LBP operator is a meaningful extension to standard texture classifiers.
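The standard univariate LBP operator thresholds a pixel's neighbourhood at the centre grey level and packs the results into a binary code; regions with similar code histograms have similar texture. A minimal 3x3 sketch (the study's multivariate colour extension is not shown):

```python
def lbp_3x3(img: list[list[int]], r: int, c: int) -> int:
    """Local Binary Pattern of the 3x3 neighbourhood around (r, c):
    each of the 8 neighbours contributes one bit, set when the
    neighbour's grey level is >= the centre pixel's."""
    centre = img[r][c]
    # neighbours in clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code
```

On a perfectly flat patch every neighbour ties with the centre, so all eight bits are set (code 255), while an isolated bright pixel yields code 0; texture classification then compares histograms of these codes over regions.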
The revision problem in the context of GIS is represented in propositional calculus and amounts to revising a knowledge base K, represented by a finite set of clauses, by a new item of information A, represented by another finite set of clauses. The revision method is Removed Sets Revision, which removes minimal subsets of clauses, called removed sets, from the initial knowledge base K in order to restore consistency while keeping the new information. We first formalize Removed Sets Revision in terms of answer set programming, translating the revision problem into a satisfiability problem. We first apply a transformation to K, denoted by H(K), introducing for each clause of K a new variable, called a hypothesis variable, which acts as a clause selector, and we build a logic program P, in the spirit of Niemelä, corresponding to the union of H(K) and A. The revision of H(K) by A amounts to looking for the answer sets of P which minimize the number of hypothesis variables assigned false. We formally established the correspondence between removed sets and the answer sets which minimize the number of hypothesis variables assigned false. We then adapted the S-models algorithm proposed by I. Niemelä and P. Simons and proposed an algorithm, called Rsets, in order to compute the answer sets corresponding to removed sets. The main adaptation of the original S-models algorithm consists in stopping the recursive calls to avoid certain answer sets and discarding certain answer sets already found. We conducted an experimental study on the flooding application. This revision problem is represented in propositional calculus and we focused on an area consisting of 120 compartments, which involves 33751 propositional clauses with 2343 propositional variables. The test was conducted on a Pentium III at 1 GHz with 256 MB of RAM.
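A brute-force sketch of the removed-sets idea (not the Rsets algorithm itself, which works on answer sets of a translated logic program): find the smallest subsets of K whose removal makes the remaining clauses consistent with A. The clause encoding here, sets of signed literals, is illustrative.

```python
from itertools import combinations, product

def satisfiable(clauses, variables):
    # tiny brute-force SAT check; a clause is a set of signed literals,
    # e.g. {("x", True), ("y", False)} means (x OR NOT y)
    for bits in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, bits))
        if all(any(model[v] == sign for v, sign in cl) for cl in clauses):
            return True
    return False

def removed_sets(K, A, variables):
    """Smallest subsets R of K (as index tuples) whose removal makes
    (K \\ R) union A consistent. Brute force, for illustration only."""
    for size in range(len(K) + 1):
        found = [R for R in combinations(range(len(K)), size)
                 if satisfiable([K[i] for i in range(len(K)) if i not in R] + A,
                                variables)]
        if found:
            return found
    return []
```

For K = {x, y} revised by A = {not x}, the only removed set is {x}: dropping that single clause restores consistency while the new information is kept.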
Up to 20 compartments, the Rsets algorithm gave results similar to the ones obtained by the previously proposed REM algorithm; however, from 25 compartments on, the Rsets algorithm is significantly more efficient than the REM algorithm. Even though the Rsets algorithm is significantly better than REM, it can only deal with 80 compartments. In order to deal with the whole area we introduced Prioritised Removed Sets Revision. Prioritised Removed Sets Revision (PRSR) generalizes Removed Sets Revision to the case of prioritised belief bases. Let K be a prioritised finite set of clauses, where K is partitioned into n strata, such that clauses in Ki have the same level of priority and a higher priority than the ones in Kj where i is lower than j. K1 contains the clauses which are the highest-priority beliefs in K, and Kn contains the ones which have the lowest priority in K. When K is prioritised, in order to restore consistency the principle of minimal change stems from removing the minimum number of clauses from K1, then the minimum number of clauses from K2, and so on. We introduce the notion of prioritised removed sets, which generalizes the notion of removed set in order to perform Removed Sets Revision with prioritised sets of clauses. This generalization requires the introduction of a preference relation between subsets of K reflecting the principle of minimal change for prioritised sets of clauses. We then formalize Prioritised Removed Sets Revision in terms of answer set programming. We first construct a logic program, in the spirit of Niemelä, but for each clause of K we introduce a new atom and a new rule, such that the preferred answer sets of this program correspond to the prioritised removed sets of the union of K and A. We then define the notion of preferred answer set in order to perform PRSR.
In order to get a one-to-one correspondence between preferred answer sets and prioritised removed sets, instead of computing the set of preferred answer sets of $P_{K \cup A}$ we compute the set of subsets of literals which are interpretations of Rk and that lead to preferred answer sets. The computation of Prioritised Removed Sets Revision is based on an adaptation of the smodels system. This is achieved using two algorithms. The first algorithm, Prio, is an adaptation of the smodels algorithm which computes the set of subsets of literals of Rk which lead to preferred answer sets and which minimize the number of clauses to remove from each stratum. The second algorithm, Rens, computes the prioritised removed sets of the union of K and A, applying the principle of minimal change for PRSR, that is, stratum by stratum. In the flooding application we have to deal with an area consisting of 120 compartments, and the stratification is useful for dealing with the whole area. A stratification of S1 is induced from the geographic position of the compartments. Compartments located in the northern part of the valley are preferred to the compartments located in the south of the valley. Using stratification, the Rens algorithm can deal with the whole area within a reasonable running time.
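The stratum-by-stratum principle of minimal change can be sketched by brute force, assuming a caller-supplied consistency oracle over lists of clauses. This is an illustration of the preference relation, not the Prio/Rens algorithms.

```python
from itertools import chain, combinations, product

def prioritised_removed_sets(strata, A, consistent):
    """All removal choices (one index tuple per stratum) that restore
    the consistency of the kept clauses together with A, keeping only
    those whose per-stratum removal counts are lexicographically
    minimal: remove as little as possible from stratum 1, then from
    stratum 2, and so on. Brute force, for illustration only."""
    best, best_counts = [], None
    # every way of choosing a subset of indices to drop from each stratum
    per_stratum = [list(chain.from_iterable(
        combinations(range(len(s)), k) for k in range(len(s) + 1)))
        for s in strata]
    for drops in product(*per_stratum):
        kept = [c for s, drop in zip(strata, drops)
                for i, c in enumerate(s) if i not in drop]
        if not consistent(kept + A):
            continue
        counts = tuple(len(d) for d in drops)
        if best_counts is None or counts < best_counts:
            best, best_counts = [drops], counts
        elif counts == best_counts:
            best.append(drops)
    return best
```

Comparing the per-stratum removal-count vectors lexicographically encodes exactly the stated preference: a removal touching only K2 is preferred over any removal touching K1, however small.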
Binary Decision Diagrams (BDDs) are a compact and empirically efficient data structure for representing formulae in propositional logic. More precisely, BDDs allow a compact representation of the models of a formula. In the framework of the flooding application, we explored two possible uses of BDDs to achieve revision. In the first approach, considering a K*A revision operation, we tried to encode all the knowledge of K and A into a single BDD, and then select suitable models of the revised knowledge. This approach appears to be intractable because of the huge data size. We recently developed a new approach, which also uses BDDs. This approach relies on a semantic characterization of our revision operation, which leads to the building of three separate BDDs: one for the knowledge in K, one for the knowledge in A, and a third BDD containing the preference ordering on the knowledge which defines the revision strategy. This new approach shows better preliminary complexity results than the first one. The theoretical model has been fully described, and we are now starting an experimental phase. This is a mandatory step when dealing with BDDs: even if the worst-case complexity remains high, the mean case is too difficult to characterise theoretically, thus requiring experimentation.
Quality issues are important in geographic information (ranked #1 by users of GI products, both in requests for information about data quality and in claims about its use when available). The classical definition of quality as "fitness for use" cannot be implemented directly as a computable operator. The proposition is to split the process into two distinct parts: (1) establish what the user needs for a particular problem, in terms of which features (with definitions, categories, relations) and with which quality (different quality elements required for each feature). This double set should be structured as a whole, and translated as a set of formal clauses called the "problem ontology". A very simple version of this has been implemented with a table model (Excel-like) with computable cells; (2) translate the specifications of the different available data sets into as many "product ontologies" (see the other REVIGIS result about their logical checking). Then the fitness for use can be approximated this way: - try to identify a cell of the problem ontology with a cell of one of the product ontologies and, if acceptable, compare the required quality with the actual quality from the dataset; - if correct, fitness is established; - else start a negotiation: try to derive some data from what is available, using some derivation model (e.g. a vegetation index from pixel colours, or interpolating missing values, etc.), estimate a quality for this derived data, then go back to the previous steps. What is important to notice here: the problem ontology has been designed independently from the available data (this is not always easy), in order to avoid any bias that anticipates the required quality from the available quality (which would impede any actual fitness-for-use approximation).
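A toy sketch of the cell-matching step, with hypothetical feature and quality-element names; quality levels are assumed here to be comparable numbers, which is itself a simplification of the formal-clause comparison described above.

```python
def fitness_for_use(problem_ontology: dict, product_ontology: dict) -> dict:
    """For each required feature of the problem ontology, report 'fit'
    when the product ontology supplies the feature with every required
    quality level met, 'insufficient quality' otherwise, and
    'negotiate' when the feature is absent (candidate for derivation).
    All names and the numeric quality scale are hypothetical."""
    report = {}
    for feature, required in problem_ontology.items():
        actual = product_ontology.get(feature)
        if actual is None:
            report[feature] = "negotiate"  # try to derive from other data
        elif all(actual.get(q, 0) >= level for q, level in required.items()):
            report[feature] = "fit"        # every required quality element met
        else:
            report[feature] = "insufficient quality"
    return report
```

The "negotiate" outcome is where the derivation loop of the text would kick in: derive the missing feature, estimate its quality, and re-run the comparison.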
Any map product is an instance of a predefined process of representing a part of the real world for a particular purpose. A geological map does not serve the same purpose as a topographic map, and even a 1/25000 topo map does not serve the same purpose as a 1/100000 topo map. All these map series are based on a particular set of specifications, of which only a part is standardised into metadata. Many other definitions (attribute domains, integrity constraints, cross-domain constraints) may exist and be available. The work done in the project is to consider the whole set of specifications as the basis for a "product ontology". The method is then to build a logical version of it, enabling us: (1) to translate the constraints into a Prolog set of clauses; (2) to check the self-consistency of the clauses; (3) to translate (in classical first-order logic) a particular map data set into 'facts' that instantiate the Prolog program, and to check whether this data set is a logically proven model of its own specification set. This is a full-scale method for a complete, proven model checking of any data set with respect to its specifications (of interest for map agencies); (4) possibly to add new constraints, check their consistency with the previous specifications, and perform the data set model-checking again. This permits a consistent evolution of specifications. It also permits a consistent fusion of external data within a specified map product (or at least tracking the reasons for fusion issues), whether these data have been more recently acquired (map update) or extracted from another map product (with different specifications, but a similar definition). Finally this approach provides a rigorous way of representing the underlying "ontology" of a map product as a set of axioms directly computable in a logical solver. Some tractability issues may arise if the propagation of conflicts is not controlled (additional constraints should rather be added individually).
Identifying inconsistency between ontologically discordant data by combining semantics and metadata Introduction The issue of dataset inconsistency is endemic to resource inventory because: - different surveys at the same point in time may record nominally similar features (such as land cover), but may do so in completely different ways due to their particular institutional or national perspectives; - different surveys at different points in time would not be expected to record objects of interest (such as areas of land cover) in the same way because of scientific developments and new policy objectives (Comber et al., 2002; 2003a). The effect of changing methodologies is that much of the value of the previous land resource inventories is lost with each successive survey; each inventory becomes the new baseline against which future changes are theoretically to be measured, but in reality never are. Ideally each successive methodological evolution would be accommodated in a multi-layered derived dataset, presenting the previous and the new approaches alongside each other. We have developed an approach for integrating ontologically discordant spatial data that combines expert descriptions of how the semantics of different land cover datasets relate with object-level spectral metadata. This approach is applied to two satellite-derived land cover surveys of the UK in order to identify inconsistency between the datasets, a subset of which is locales of actual land cover change. Data description In Britain, the only national land cover datasets are the LCM2000 and its predecessor the LCM1990. Yet because of the problems of semantic and methodological difference, the 2000 dataset is accompanied by a "health warning" against comparing it to its predecessor (Fuller et al., 2002).
In the research reported here we are interested in identifying those locations where the land covers are inconsistent between the two dates of classification, 1990 and 2000, as a first step in identifying change. Initial Work Therefore we define inconsistency in this context as whether the information for a particular land-cover object (in this case an LCM2000 parcel) is inconsistent with the cover types within that parcel in 1990 when viewed through the lens of the Spectral and Semantic LUTs. If the semantic definitions of the land cover types at a location in the 1990 map are inconsistent with those present in 2000, we can identify two possible causes of the change. Either the cover type at one time or the other is in error, or else the cover type on the ground has changed. Previous analyses used a Euclidean distance calculated between two characterisations of the parcel based on the Spectral and Semantic LUTs. Parcels with the greatest distance (by proportion of the parcel area and in absolute terms) were identified. The pattern of vector directions was found to be related to the level of ontological change. This methodology is reported in full in Comber et al. (2003b, 2003c). Field visits showed that 26% of the parcels identified as inconsistent were believed to have actually changed since 1990. We also identified situations with metadata inconsistencies (e.g. empty attribution fields) and small parcels that bore no relation to landscape objects on the ground. Filters were developed to eliminate such artefacts from the analysis (Comber et al., 2003d). A second tranche of analyses considered the filtered data and identified a second set of inconsistent parcels based on absolute vector distance (not proportion). Again a sample of these was visited in the field. The results showed that 41% of these parcels were believed to have actually changed since 1990. The remainder (59%) were due to inconsistencies (errors of classification) in either the 1990 or the 2000 dataset.
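The distance computation underlying these analyses can be sketched as a plain Euclidean distance between two LUT-based characterisations of a parcel; the vector layout (one score per land-cover class) is a hypothetical simplification.

```python
from math import sqrt

def parcel_distance(char_1990: list[float], char_2000: list[float]) -> float:
    """Euclidean distance between two characterisations of a parcel,
    each a vector of per-class scores (e.g. derived from the Spectral
    and Semantic look-up tables); large distances flag parcels whose
    1990 and 2000 descriptions are inconsistent."""
    if len(char_1990) != len(char_2000):
        raise ValueError("characterisations must cover the same classes")
    return sqrt(sum((a - b) ** 2 for a, b in zip(char_1990, char_2000)))
```

Ranking parcels by this distance (absolute, or normalised by parcel area) reproduces the selection step described above: the furthest-apart parcels are the candidates for field visits.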
This work is reported in Comber et al. (submitted). Future Work The information provided by a single expert to describe relations between land-cover class concepts under a scenario of idealised semantics has been used. There are LUTs from two other experts, both familiar with LCM1990 and LCM2000, and LUTs for all three under two other scenarios: "change" (the expected transitions between land cover classes) and "technical" (how different land cover class concepts relate based on heuristic knowledge of where spectral confusion may occur). Evidence from these might be supportive or contradictory, which in turn might allow stronger or weaker inferences about change to be made. Future work will be directed in a number of areas. Firstly, the use of different expressions of expert opinion. Secondly, these multiple statements of Expectedness and Unexpectedness from different experts, under different scenarios, would be suitable for combination using uncertainty formalisms such as Dempster-Shafer or Rough Sets. Thirdly, we hope to develop a "Cook Book" for users of LCM2000.
In the context of GIS we deal with data coming from different sources, with various qualities. Revision applies in the case of two conflicting sources, one preferred (more plausible, more reliable) than the other. Revision means restoring consistency while keeping the preferred observation and removing as few previous observations as possible. The revision process can be seen as a special case of the fusion process: fusion of two weighted sources, one assigned a weight more important than the other. Fusion is a more general process, which deals with merging several sources. Fusion is generally more complex than revision; in any case, with conflicting data, restoring consistency is crucial, removing as little data as possible from the different sources, the strategy depending on the relative reliability, plausibility, and preferences of the sources. Different fusion operators have been proposed according to whether priorities are available or not. However, these operators are not reversible, which is a real problem when dealing with real applications. We add the property of reversibility to some known fusion operators (max, sum, lex, weighted sum) when the priorities of propositional belief bases are explicit, which generalizes the results obtained for the reversibility of revision. At a semantic level, epistemic states are represented by total pre-orders on interpretations, called local pre-orders, and the semantic fusion process constructs a global total pre-order on interpretations. Total pre-orders are represented by means of polynomials, which allow recovering the local pre-orders from the global pre-order. At a syntactic level, epistemic states are represented by weighted belief bases and the syntactic fusion process consists in constructing a global weighted belief base. Since weights are represented by polynomials, they allow recovering the original belief bases from the global belief base.
We show the equivalence between the semantic and syntactic approaches to reversibility. We used this approach in a real application in the framework of submarine archaeology, where reversible fusion is applied to amphora measurements. Reference: J. Seinturier, P. Drap, O. Papini: Fusion réversible : application à l'information archéologique. Proceedings of JNMR'2003, Paris, 2003.
An intelligent agent's beliefs are represented by epistemic states, which encode a set of beliefs about the real world based on available information. They are often represented by total pre-orders. We propose an encoding of total pre-orders based on polynomials, which enables revision rules to be reversible. Epistemic states are semantically represented by total pre-orders on interpretations. Total pre-orders are encoded by polynomials equipped with a lexicographic order, which makes it easy to formalise the change of total pre-orders according to the incoming observation. Each interpretation is assigned a weight, which is a polynomial. Polynomials allow keeping track of the sequence of observations and coming back to previous pre-orders, which is not possible with other representations. An alternative but equivalent syntactic representation of epistemic states is provided by means of weighted (or stratified) belief bases, i.e. sets of weighted formulas. Each formula is assigned a weight which is a polynomial. A function is defined to recover a total pre-order on interpretations from a weighted belief base. These encodings are successfully applied to different revision rules such as Papini's revision based on history, Boutilier's natural revision, and Dubois and Prade's possibilistic revision. These encodings add the property of reversibility to these revision operations at the semantic level as well as at the syntactic level.
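A minimal sketch of the reversibility idea, representing each polynomial weight as a tuple of coefficients compared lexicographically. The convention that the newest observation contributes the leading coefficient (so that it dominates the comparison) is an assumption of this sketch, not the paper's exact construction.

```python
def revise(weights: dict, observation_models: set) -> dict:
    """One revision step: each interpretation's weight is a tuple of
    coefficients compared lexicographically.  The new observation
    prepends one coefficient, 0 when the interpretation satisfies the
    observation and 1 otherwise, so interpretations compatible with
    the latest observation are strictly preferred."""
    return {w: ((0 if w in observation_models else 1),) + coeffs
            for w, coeffs in weights.items()}

def undo(weights: dict) -> dict:
    # reversibility: dropping the leading coefficient recovers the
    # pre-order that held before the last observation
    return {w: coeffs[1:] for w, coeffs in weights.items()}
```

Because every observation only adds a coefficient, the whole history of pre-orders stays recoverable, which is the property a plain numeric weight cannot offer.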
Different observations on a geographical domain result in different sources. The information carried by two different sources is, in general, semantically heterogeneous. It is necessary to integrate the information into a single data set, and to monitor the 'quality' of this 'fusion' process, which is composed of two parts: (1) Resolving semantic heterogeneity: ontology can help to resolve the heterogeneity problem, and ontology integration is an important initial step of the fusion process. The Galois lattice is an efficient tool, which makes it possible to identify connections between the elements of two ontologies (e.g. two classifications), and which helps to build a 'common ontology'. Different 'distances' have been proposed in order to build different possible results, ranging between the cautious intersection (often empty!) and the lax union (often not informative). (2) Integration of information sources: we propose several methods for integrating information sources under a lattice structure. The issues are to identify conflicts and redundancy, then to propose a consensus or an aggregation as a result. The main contributions of this work have been: a solution for identifying the correspondence between the concepts of two ontologies by using the Galois lattice, and a method for information integration that can be graded with respect to the lattice structure.
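The Galois (concept) lattice underlying this alignment can be computed by brute force for a small binary context relating objects (e.g. classes of one classification) to attributes (e.g. classes of the other). This sketch only enumerates the formal concepts; ordering them by extent inclusion yields the lattice.

```python
from itertools import combinations

def formal_concepts(objects: set, attributes: set, incidence: set) -> set:
    """All formal concepts (extent, intent) of a binary context given
    by an incidence relation of (object, attribute) pairs.  Brute
    force, fine only for small contexts."""
    def common_attrs(objs):
        return {a for a in attributes if all((o, a) in incidence for o in objs)}
    def common_objs(attrs):
        return {o for o in objects if all((o, a) in incidence for a in attrs)}
    concepts = set()
    for r in range(len(objects) + 1):
        for objs in combinations(sorted(objects), r):
            intent = common_attrs(set(objs))   # Galois derivation, upward
            extent = common_objs(intent)       # and back down: a closure
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts
```

Each concept pairs a maximal group of objects with exactly the attributes they share, which is what exposes the correspondences between two classifications.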
The quality of geographic information is difficult to assess unless it is confronted with the real world. At least it is sometimes possible to check the validity of several data sets with respect to each other, when explicit constraints are available. Logical consistency can therefore be a very valuable source of quality assessment. Several situations of spatial propagation of constraints have been studied, and a general framework is proposed to handle them with several techniques (described through other individual results). The balance is between either an appropriate and specific representation allowing efficient algorithms, or a more general representation with hardly tractable algorithms. (1) If the spatial propagation is "linear", then it is possible to design some ad-hoc representations with linear-time algorithms. This has been applied to the analysis of a flooding described by incomplete and imprecise information: we have been able to detect conflicts and to propose solutions that restore the overall data consistency. It has also been applied to the monitoring of a fleet (in town), again with incomplete information: we have been able to give alerts about possibly blocking segments in a street graph (cf. result about the logic of linear constraints). (2) If the constraints are more complex, it is still possible to represent them with data, but the tractability of the algorithms rapidly becomes an issue. We have developed two approaches relevant for spatial data: a) the use of local containment restrictions (the opposite of the "butterfly effect") when some semantics is available (maximal extension of some phenomenon, etc.), or b) the use of random stratification (cf. result about S-models). Finally, this general approach can be used in a range of geographical applications and can be integrated in a mediation scenario, where a front-end helps a user to check whether his needs and the available data are in one of the tractable configurations, and to run it.
A demonstrator has been coded and will be released by the end of the project. The code will be available for future implementations.
Broadcasting driving information to car drivers is becoming popular for individuals, and can be highly sensitive in crisis situations. The basic information is a street network available on PDAs and on servers. The basic goal is to reach a destination B from a start point A. Static solutions are easy, but additional information about up-to-date traffic issues, if available, must be provided in a consistent way (without provoking conflicts). We address the following situation: - the static information provides the network and all the possible solutions to get from A to B (all minimal ones, removing loops); - the dynamic information is made of time intervals during which anonymous cars have been observed at various nodes of the network (passive collection of anonymous phone calls, or GPS tracking of special vehicles of a same taxi fleet, for instance). The inconsistency can come from a static model describing a possible A-B path (a "model" in the logical sense) containing locally incompatible sub-paths (negative travel time). The solution is to "disqualify" those "models" and to propose consistent ones (at least temporarily). This solution, computed with the help of algorithms developed in the REVIGIS project (see the result about Linear Constraints), has the advantage of taking into account any local information, even incomplete, and can be used in conjunction with Viterbi-like (or Dijkstra-like) best-path algorithms (regularly updated with the above time information).
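A sketch of the disqualification step, assuming each observation is a time interval attached to a node: a candidate path is dropped when a node that comes earlier on the path can only have been visited after a later one, i.e. when the implied travel time between them is negative. The interval representation is hypothetical.

```python
def consistent_paths(paths: list, observations: dict) -> list:
    """Keep only the candidate A-B paths whose traversal order agrees
    with the observed time intervals: if node u precedes node v on a
    path, u's earliest observed time must not exceed v's latest
    (otherwise the path implies a negative travel time and is
    disqualified).  Unobserved nodes constrain nothing."""
    def ok(path):
        for i, u in enumerate(path):
            for v in path[i + 1:]:
                if u in observations and v in observations:
                    u_earliest, _ = observations[u]
                    _, v_latest = observations[v]
                    if u_earliest > v_latest:
                        return False
        return True
    return [p for p in paths if ok(p)]
```

Because only pairwise order matters, the check tolerates incomplete observations, which matches the passive-collection setting described above; the surviving paths can then be ranked by a Dijkstra-like best-path algorithm.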
This study addresses semantic accuracy in relation to images obtained with remote sensing. Semantic accuracy is defined in terms of map complexity. Map indices are applied as a metric to measure complexity. The idea is that a homogeneous map of low complexity has high semantic accuracy. Complexity indices have been developed to quantify semantic issues such as aggregation, fragmentation and patch size. In this study, these indices are applied to two images with different objectives, one from an agricultural area in the Netherlands, and one from a rural area in Kazakhstan. Images are first segmented using region-merging segmentation. Effects on the indices and on semantic accuracy are discussed. On the basis of well-defined subsets we conclude that the complexity indices are suitable to quantify the semantic accuracy of the map. Segmentation is most useful for an agricultural area containing various agricultural fields. The indices are mutually comparable, being highly correlated, but on the other hand show some different aspects in quantifying map homogeneity and identifying objects of high semantic accuracy.
The objective is to provide formal definitions of vague spatial types on R2, and basic spatial operators on them. We use fuzzy sets and fuzzy topology to model vague objects, and fuzzy set operators to build vague spatial operators. The spatial vague types together with the spatial operators form a spatial algebra. The vague object types we provide are generalizations of 0-, 1-, and 2-dimensional crisp object types. We identify three general types, which we call vague points, vague lines, and vague regions, in combination with some simple object types, usually named by adding the word `simple' to the general type name. The general types are defined such that they are closed under basic spatial operators. The simple types are structural elements of the general ones that are easy to handle. This means their structure can be easily translated into a computer representation. Also, spatial operators like topological predicates or metric operators can be understood and defined on them in a straightforward way. Each vague object we introduce is represented as a fuzzy set in R2 with specific properties, expressed in terms of topological notions. The basic spatial operators we define are complement, union, and intersection of vague objects. Other operators on vague spatial objects that result again in vague spatial objects can be represented as a combination of the basic ones. Basic operators are regularized fuzzy set operators such that the resulting object is of one of the predefined vague types.
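The three basic operators can be sketched directly on membership functions from R^2 to [0, 1]; note that the regularization step mentioned above, which keeps results within the predefined vague types, is omitted in this sketch.

```python
from typing import Callable

# a vague object is modelled here as a membership function mu(x, y) in [0, 1]
Membership = Callable[[float, float], float]

def fuzzy_complement(mu: Membership) -> Membership:
    """Complement of a vague object: membership 1 - mu."""
    return lambda x, y: 1.0 - mu(x, y)

def fuzzy_union(mu1: Membership, mu2: Membership) -> Membership:
    # pointwise maximum: the standard fuzzy-set union
    return lambda x, y: max(mu1(x, y), mu2(x, y))

def fuzzy_intersection(mu1: Membership, mu2: Membership) -> Membership:
    # pointwise minimum: the standard fuzzy-set intersection
    return lambda x, y: min(mu1(x, y), mu2(x, y))
```

Any other operator that returns a vague spatial object can then be written as a composition of these three, as the text states; the regularization would additionally smooth the result so it remains a vague point, line, or region.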
In many applications, the reliability relation associated with available information is only partially defined, while most existing uncertainty frameworks deal with totally ordered pieces of knowledge. Partial pre-orders offer more flexibility than total pre-orders to represent incomplete knowledge. Moreover, they avoid comparing unrelated pieces of information. Possibilistic logic, which is an extension of classical logic, deals with totally ordered information. It offers a natural qualitative framework for handling uncertain information. Priorities are encoded by means of weighted formulas, where weights are lower bounds of necessity measures. We have proposed, in the framework of the REVIGIS project, an extension of possibilistic logic to pieces of information that are only partially ordered. We first proposed a natural definition of the possibilistic logic inference based on the family of totally ordered knowledge bases (resp. possibility distributions) which are compatible with a partial knowledge base. Then we provided a semantic (resp. syntactic) characterization of this inference, which is based on a strict partial order on interpretations (resp. on a strict partial order on consistent sub-bases of the knowledge base). We then showed that the main properties of possibilistic logic (subsumed formulas, clausal form, soundness and completeness results) hold for a partially ordered knowledge base. Finally, we proposed an algorithm for computing the set of plausible conclusions of a partially ordered knowledge base. We also generalized several iterated revision methods, defined for totally ordered information, to the case of partially ordered information, from both a semantic and a syntactic point of view, and we showed the equivalence between the semantic and syntactic approaches.
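The family of total orders compatible with a partial order, the object over which the proposed inference is defined, can be enumerated by brute force for small bases. This sketch only computes the linear extensions; the actual inference would then be characterised over this family.

```python
from itertools import permutations

def linear_extensions(elements: set, strictly_less: set) -> list:
    """All total orders compatible with a partial order, given as a
    set of (a, b) pairs meaning a < b.  Brute force over permutations,
    for illustration on small element sets only."""
    def respects(seq):
        pos = {e: i for i, e in enumerate(seq)}
        # every stated strict preference must be preserved
        return all(pos[a] < pos[b] for a, b in strictly_less)
    return [list(seq) for seq in permutations(sorted(elements)) if respects(seq)]
```

With three formulas and the single comparison a < b, three of the six orderings survive; a conclusion is plausible under the proposed inference only when it is supported across the whole compatible family, never by privileging one arbitrary completion.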
The circulation of numeric geographic information on the market, combined with the difficulty for users of appreciating its quality, can result in wrong uses or interpretations, and this increases the number of cases and contentious matters brought to the courts. Because of the complexity of geographic information (made more complex by the apparent ease of technically merging several digital sources), and because of the many juridical uncertainties present in new information technology law, some strong recommendations are mandatory. We propose the design and transmission of an "instructions manual" as a good instrument in a safe juridical risk management strategy. In fact, the analysis demonstrates the relevance of shifting from the delivery of "internal quality" information in an application-free context toward the delivery of "external quality" information in an application-restricted context.
Broad sandy beaches and extensive dune ridges dominate the Dutch coastal zone. The Wadden region in the northern Netherlands exhibits a highly dynamic character due to tidal waves. It is subject to continuous processes such as beach erosion and sedimentation, which influence its morphology. This in turn has an economic impact on beach management and public security. Beach nourishments are carried out if the safety of the land is at risk. Here the problems are defined as: (1) how to localize and quantify beach areas that require nourishment, and (2) how to assist the decision maker in managing the process of nourishment over time. To tackle these problems, geographic information from different sources is used. An ontology-driven approach is applied to integrate the different data sources and to conceptualise the beach areas, their attributes and relationships. Moreover, ontologies in the beach nourishment process support knowledge sharing among various government organizations. Furthermore, the ontological approach greatly helps to understand the role of the quality of the data sources, as well as the qualities required by the decision maker. Two approaches are central to a spatio-temporal ontology for the beach nourishment problem. First, beach objects suitable for nourishment are described by several attributes, such as altitude, vegetation index and wetness index. Since the definitions relevant to beach nourishment, as well as the attributes, are vaguely described in content and geometry, suitable objects are described by a membership degree of dry, non-vegetated beach. Second, the portrayal of temporal processes ought to incorporate scale issues. Beach volumes derived from altitude can be described as yearly trends; the vegetation index shows monthly fluctuations, while the wetness index is characterized by daily fluctuations from tidal waves.
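A membership degree combining vague attribute descriptions can be sketched as a fuzzy conjunction. The thresholds, attribute ranges, and function names below are illustrative assumptions, not values from the study:

```python
def ramp(x, low, high):
    """Linear membership rising from 0 at `low` to 1 at `high`."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def beach_membership(altitude_m, ndvi, wetness):
    """Degree to which a cell is a dry, non-vegetated beach (fuzzy AND = min).
    All thresholds are hypothetical, for illustration only."""
    dry = ramp(altitude_m, 0.5, 2.0)       # higher altitude -> drier
    bare = 1.0 - ramp(ndvi, 0.1, 0.4)      # low vegetation index -> bare sand
    not_wet = 1.0 - ramp(wetness, 0.2, 0.6)
    return min(dry, bare, not_wet)

m = beach_membership(altitude_m=1.7, ndvi=0.05, wetness=0.1)
```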
The result is a reasoning framework that represents a spatio-temporal application in terms of quality, uses current standards, and is understandable by others. The application is decomposed into ontological features and derived quality elements, showing some inference rules and an order of preference.
Quality information can be described using various parameters (e.g., positional accuracy, semantic accuracy, completeness) and each parameter can describe data at different levels of detail (e.g., the quality of a dataset, of a single object class, or of a single object instance). Such information is needed to integrate quality into GIS operation: communicating data quality to the user, constructing error buttons, or avoiding the misuse of some functions by enabling or disabling them according to data quality. This contribution explores data quality parameters and the possible levels of detail they refer to. A data model was then designed to support the management of heterogeneous data quality information at different levels of analysis. Using a multidimensional database approach, we propose a conceptual framework named the Quality Information Management Model (QIMM), relying on quality dimensions and measures.
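The multidimensional idea can be sketched as quality measures indexed along two dimensions: quality parameter and level of detail. This is a loose illustration in the spirit of the QIMM; the class and key names are assumptions, not the model's actual schema.

```python
class QualityCube:
    """Toy multidimensional store: (parameter, level, target) -> measure."""

    def __init__(self):
        self._cells = {}

    def set(self, parameter, level, target, value):
        self._cells[(parameter, level, target)] = value

    def get(self, parameter, level, target):
        return self._cells.get((parameter, level, target))

q = QualityCube()
# Same parameter can be recorded at different levels of detail:
q.set('positional_accuracy', 'object_instance', 'parcel_42', 0.35)  # metres
q.set('completeness', 'dataset', 'parcels', 0.98)                   # ratio
```

A GIS function could then query the cube at the relevant level before enabling itself, e.g. refusing an overlay when dataset-level completeness is too low.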
Efforts to describe the influence of data quality on final decisions led to a formalization of the interaction between the description of data quality and the description of the user task. It brought together in one framework the different strands of revision of geographic data explored; in particular, the research at TU on the integration of data and the propagation of data quality showed: (1) how the well-known and widely used methods to propagate precision or accuracy of spatial data using Gauss' rule of error propagation can be generalized to apply to data not expressed on a continuous scale; and (2) how methods widely used in decision making - related to Maslow's work in psychology - can be integrated in a formal framework.
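Gauss' rule of error propagation, the continuous-scale starting point of (1), states that for f(x1, ..., xn) with independent uncertainties s_i, the propagated variance is the sum of (df/dx_i)^2 * s_i^2. A generic numerical sketch (not the project's implementation, which generalizes beyond continuous scales):

```python
import math

def propagate(f, xs, sigmas, h=1e-6):
    """First-order (Gauss) error propagation via central finite-difference
    partial derivatives: sigma_f = sqrt(sum_i (df/dx_i)^2 * s_i^2)."""
    var = 0.0
    for i, (x, s) in enumerate(zip(xs, sigmas)):
        left, right = list(xs), list(xs)
        left[i] -= h
        right[i] += h
        dfdx = (f(right) - f(left)) / (2 * h)
        var += (dfdx * s) ** 2
    return math.sqrt(var)

# Area of a 10 m x 5 m parcel with side uncertainties 0.1 m and 0.2 m:
area_sigma = propagate(lambda v: v[0] * v[1], [10.0, 5.0], [0.1, 0.2])
# analytically sqrt((5*0.1)^2 + (10*0.2)^2) = sqrt(4.25) ~ 2.06 m^2
```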
Geographical data rarely, if ever, come truly free of error, since imperfection is an endemic feature of geographical information. Imperfection can be thought of as comprising two distinct, orthogonal concepts: error and imprecision. Error, or inaccuracy, concerns a lack of correlation of an observation with reality; imprecision concerns a lack of specificity in representation. Starting from this ontology of imperfection, formalisms to reason with these different aspects of uncertainty in spatio-temporal data are to be investigated (e.g. degrees of certainty, fuzzy and rough sets, etc.). There is widespread research activity on these topics. In particular, we are working on exploiting computational intelligence techniques for spatial data analysis, by means of logic-based and constraint-based query languages. An approach particularly interesting for uncertainty handling, and one receiving much attention in the literature, is qualitative spatial reasoning, such as reasoning on proximity, topology, and directions of spatial objects. Here we stress the connection with the temporal aspect, for qualitative spatio-temporal reasoning. We propose an approach to qualitative spatial reasoning based on the spatio-temporal language STACLP; in particular, we show how the topological 9-intersection model and direction relations based on projections can be modelled in such a framework. STACLP is a constraint logic programming language where formulae can be annotated with labels (annotations) and where relations between these labels can be expressed by using constraints. Annotations are used to represent both time and space.
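The 9-intersection model classifies the topological relation between two regions by whether each of interior, boundary, and exterior of one intersects each of the other's. A discrete-grid sketch (an illustrative approximation of the point-set definitions, not the STACLP encoding):

```python
def boundary(cells):
    """Cells of a region having at least one 4-neighbour outside the region."""
    def nbrs(c):
        return [(c[0] + dx, c[1] + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return {c for c in cells if any(n not in cells for n in nbrs(c))}

def nine_intersection(a, b, universe):
    """3x3 matrix of emptiness tests between {interior, boundary, exterior}
    of region a and of region b."""
    parts_a = (a - boundary(a), boundary(a), universe - a)
    parts_b = (b - boundary(b), boundary(b), universe - b)
    return [[bool(pa & pb) for pb in parts_b] for pa in parts_a]

U = {(x, y) for x in range(10) for y in range(10)}
A = {(x, y) for x in range(0, 6) for y in range(0, 6)}
B = {(x, y) for x in range(3, 9) for y in range(3, 9)}
M = nine_intersection(A, B, U)   # overlapping squares: all nine cells non-empty
```

Matching the matrix against the eight canonical patterns (disjoint, meet, overlap, ...) yields the topological predicate, which is what a constraint framework like STACLP would express declaratively.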
Geographic information often involves constraints on spatial and/or temporal information, which can be represented in terms of real-valued variables. In many problems these constraints are linear, and there is a developing literature on reasoning with linear constraints in GIS. One of the main advantages of expressing reasoning with linear constraints as a logic is that it makes it relatively easy to generalise many uncertainty formalisms, both numerical and logical, to reasoning with uncertain linear constraints; this work has numerous potential applications in GIS. Even if constraints are not linear, they may sometimes be approximated by linear constraints, e.g., using polygons to approximately represent the boundaries of spatial objects. The main contribution of this work has been the expression of reasoning with linear inequality constraints as a logic, with a very simple proof theory that is sound and complete for finite sets of constraints.
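A sound and complete decision procedure for finite sets of linear inequality constraints can be sketched with Fourier-Motzkin elimination; this is a standard technique offered as an illustration, not necessarily the proof theory of the work above.

```python
from fractions import Fraction

def eliminate(constraints, i):
    """Eliminate variable i from constraints [(coeffs, bound), ...], each
    encoding sum_k coeffs[k]*x[k] <= bound, by pairing positive- and
    negative-coefficient constraints so the x[i] terms cancel."""
    pos = [c for c in constraints if c[0][i] > 0]
    neg = [c for c in constraints if c[0][i] < 0]
    out = [c for c in constraints if c[0][i] == 0]
    for ap, bp in pos:
        for an, bn in neg:
            coeffs = tuple(ap[k] * -an[i] + an[k] * ap[i] for k in range(len(ap)))
            out.append((coeffs, bp * -an[i] + bn * ap[i]))
    return out

def satisfiable(constraints, nvars):
    """Exact rational satisfiability test for a finite set of constraints."""
    cs = [(tuple(map(Fraction, a)), Fraction(b)) for a, b in constraints]
    for i in range(nvars):
        cs = eliminate(cs, i)
    return all(b >= 0 for _, b in cs)   # only trivial facts 0 <= b remain

# x <= 2 together with -x <= -3 (i.e. x >= 3) is inconsistent;
# x <= 2 with -x <= -1 (i.e. 1 <= x <= 2) is consistent.
unsat = satisfiable([((1,), 2), ((-1,), -3)], 1)
sat = satisfiable([((1,), 2), ((-1,), -1)], 1)
```

Entailment of a constraint c from a set S reduces to unsatisfiability of S together with the negation of c, which is how a proof theory over such constraints can be mechanized.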