Invited speakers


Markus Krötzsch
Technical University of Dresden

Ontologies for Knowledge Graphs?

Modern knowledge representation, and DL in particular, promises many advantages for information management, based on an unambiguous, implementation-independent semantics for which a range of reasoning services is available. These strengths align well with the needs of an ever-growing information industry that embraces declarative knowledge modelling as one of the pillars of its business and operations.
Today, giants like Google, Facebook, and Wikimedia consciously deploy ontological models and store information in graph-like data structures that are more similar to DL ABoxes than to traditional relational databases. Many smaller organisations follow suit, and “knowledge graphs” appear in numerous places. One would therefore expect a steep increase in the practical adoption of logic-based KR. In practice, however, this is rarely observed. I will discuss the current state of affairs and ask which technical and pragmatic issues might contribute to the surprisingly limited impact of DLs in this area. Wikidata, the knowledge graph of Wikipedia, will serve as a concrete illustration of the lack of formal KR in an application where other knowledge engineering and (semantic) data management technologies have been adopted with great enthusiasm. Focusing on this particular example, I will motivate some requirements arising in such systems and present initial ideas for addressing them in various logics.
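To make the ABox analogy concrete (a purely illustrative sketch, not part of the abstract): a single Wikidata statement, say that Douglas Adams (Q42) was educated at (P69) St John's College, maps almost directly to DL-style assertions, whereas a relational encoding would presuppose a fixed schema:

```latex
% Illustrative only: a Wikidata-style statement as DL ABox assertions.
% (Real Wikidata statements additionally carry qualifiers and references,
% which plain assertions like these do not capture.)
\mathit{Human}(\mathit{DouglasAdams})
\mathit{educatedAt}(\mathit{DouglasAdams}, \mathit{StJohnsCollege})
```

That gap between annotated graph edges and plain assertions is one simple illustration of why the fit between knowledge graphs and classical DL modelling is less direct than it first appears.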

 

Brief Author Biography:

Markus Krötzsch is a full professor at TU Dresden, where he heads the Chair of Knowledge-Based Systems. He obtained his Ph.D. from the Karlsruhe Institute of Technology (KIT) in 2010 and thereafter worked as a postdoctoral researcher at the Department of Computer Science of the University of Oxford until October 2013. From 2013 to 2016, he was an Emmy Noether Fellow of the German Research Foundation (DFG), and in 2016 he was awarded the DFG’s Heinz Maier-Leibnitz Prize. His research has contributed to the fields of tractable knowledge representation and reasoning, rule-based ontology languages, query answering, reasoning complexity, and collaborative data management platforms. Notable projects he has contributed to include the W3C OWL 2 standard, the OWL EL reasoner ELK, and Wikipedia’s free knowledge graph Wikidata.


 

Andreas Pieris
University of Edinburgh

Query Rewriting under Existential Rules

There is a clear consensus that the required level of scalability in ontology-mediated querying (OMQ) can only be achieved via query rewriting, a prominent tool that allows us to exploit standard database technology for OMQ purposes. The key idea is to reduce the problem in question to the problem of evaluating a first-order (FO) query over a relational database. This technique was originally proposed in 2005 in the context of DL-Lite. Since then it has been extensively applied, not only to more expressive DLs but also to existential rules (a.k.a. tuple-generating dependencies and Datalog+/- rules). This talk is about query rewriting under the main decidable classes of existential rules. The first part of the talk will focus on pure FO-rewritability, where the rewriting process is database-independent. For the classes of existential rules that always admit FO-rewritings, I will present algorithms for constructing such rewritings and discuss their practical relevance. For the classes that do not always admit FO-rewritings, I will discuss the challenging problem of deciding whether a rewriting exists. Since such pure FO-rewritings are unavoidably very large, the second part of the talk will focus on combined FO-rewritability, a technique that allows us to construct small rewritings at the price of touching the database (but in a controlled way). In both parts of the talk, I will try to emphasize how query rewriting under DLs has influenced query rewriting under existential rules (and vice versa).
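To make the key idea concrete, here is a small, standard DL-Lite example (illustrative, not taken from the talk): a TBox stating that every professor teaches something, and a query asking who teaches something. The certain answers over any database coincide with the answers obtained by evaluating the rewritten query directly, so no reasoning is needed at query time:

```latex
% Illustrative DL-Lite example of pure FO-rewriting.
% TBox: every professor teaches something.
\mathcal{T} = \{\, \mathit{Professor} \sqsubseteq \exists \mathit{teaches} \,\}
% Conjunctive query: who teaches something?
q(x) = \exists y \,.\, \mathit{teaches}(x, y)
% FO-rewriting: evaluating q_T over any database D yields exactly
% the certain answers of q over D with respect to T.
q_{\mathcal{T}}(x) = \exists y \,.\, \mathit{teaches}(x, y) \;\lor\; \mathit{Professor}(x)
```

Even in this toy case the rewriting grows with each applicable axiom; for larger ontologies and queries, pure rewritings can become prohibitively large, which motivates the combined approach discussed in the second part of the talk.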

 

Brief Author Biography:

Andreas Pieris is a Lecturer in the School of Informatics at the University of Edinburgh. Prior to this, he held postdoctoral positions at the Institute of Information Systems of the Vienna University of Technology and at the Department of Computer Science of the University of Oxford. His research interests are database theory, with an emphasis on query languages; knowledge representation and reasoning; and computational logic and its applications to computer science. His current work mostly focuses on theoretical and practical aspects of query answering in the presence of ontologies. He has published more than sixty papers, many of them in leading international conferences and journals. He has served on the PCs of numerous international conferences and workshops, including the top-tier database and AI conferences.

 

 

Ulrike Sattler
University of Manchester

(30th anniversary talk)

From reasoning problems to non-standard reasoning problems and one step further

In this talk, I will concentrate on non-standard reasoning problems in DLs, and mainly on even less standard problems: non-standard reasoning problems ask for the computation of some minimal subset, maximally specific concept, etc., with certain properties. In contrast, subjective problems involve other parameters and thus come with additional design choices. I will focus on the problem of “Learning Ontologies from Data”, but will mention other problems, such as “How similar are these concepts?”, along the way. “Learning Ontologies from Data” could also be called “Finding axioms that describe interesting correlations in our data” or “Semantic Data Analysis”, and is an interesting and challenging topic. I will report on our experience with DL Miner, a framework and tool that Slava Sazonau and I developed, which is able to “learn” general DL axioms from a DL knowledge base: given certain parameters, it generates all “potentially interesting” axioms/hypotheses and evaluates each across a range of (independent) quality measures. For an axiom to be “potentially interesting”, it has to be somehow reflected in, or supported by, the ABox and TBox, but not yet covered by, or entailed by, the TBox. As it turns out, an axiom can be “interesting” for any of three reasons: (1) it can reflect known domain knowledge, so it should be added if we are after a comprehensive TBox; (2) it can indicate possibly new domain knowledge and thus provide new insights into the domain, in the classical machine learning sense; (3) it can reveal biased data or modelling errors, and thus support data cleaning. I will describe the computational and conceptual challenges and solutions, evaluation strategies, the insights gained, and the ways in which this framework can be applied.
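For readers unfamiliar with this style of ontology learning, the following sketch shows the general shape of such a generate-and-evaluate loop. It is a deliberately simplified illustration with a hypothetical reasoner interface and a single made-up quality measure; it is not DL Miner's actual API or algorithm.

```python
# Simplified sketch of a generate-and-evaluate hypothesis loop, in the
# spirit of the abstract above. The Reasoner interface and the single
# "support" measure are hypothetical placeholders, not DL Miner's API.

from dataclasses import dataclass
from typing import Iterable


@dataclass(frozen=True)
class Axiom:
    """A candidate general inclusion axiom, read as lhs ⊑ rhs."""
    lhs: str  # class expression, e.g. "Professor"
    rhs: str  # class expression, e.g. "∃teaches.⊤"


class Reasoner:
    """Placeholder for a DL reasoner over a fixed knowledge base (TBox + ABox)."""

    def entails(self, axiom: Axiom) -> bool:
        """True if the TBox already entails the axiom (then it is not news)."""
        raise NotImplementedError

    def instances(self, concept: str) -> set:
        """ABox individuals that are (inferred) instances of the concept."""
        raise NotImplementedError


def support(reasoner: Reasoner, axiom: Axiom) -> float:
    """One toy quality measure: the fraction of lhs-instances that are
    also rhs-instances. DL Miner evaluates a whole range of measures."""
    lhs = reasoner.instances(axiom.lhs)
    if not lhs:
        return 0.0
    return len(lhs & reasoner.instances(axiom.rhs)) / len(lhs)


def interesting_hypotheses(reasoner: Reasoner,
                           candidates: Iterable[Axiom],
                           min_support: float = 0.9) -> list:
    """Keep candidates supported by the data but not yet entailed by
    the TBox, mirroring the 'potentially interesting' criterion above."""
    kept = []
    for axiom in candidates:
        if reasoner.entails(axiom):
            continue  # already covered by the TBox: not interesting
        s = support(reasoner, axiom)
        if s >= min_support:
            kept.append((axiom, s))
    return sorted(kept, key=lambda pair: -pair[1])
```

The non-entailment check is what separates genuinely new hypotheses from axioms the TBox already covers; the quality measures then rank what remains for human inspection.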

 
