VSL 2014: VIENNA SUMMER OF LOGIC 2014
PROGRAM FOR SUNDAY, JULY 20TH, 2014

08:45-10:15 Session 123: FLoC Plenary Talk (joint session of 10 meetings)
Location: FH, Hörsaal 1
08:45
FLoC Plenary Talk: Electronic voting: how logic can help?

ABSTRACT. Electronic voting should offer at least the same guarantees as traditional paper-based voting systems. In order to achieve this, electronic voting protocols make use of cryptographic primitives, as in the more traditional case of authentication or key exchange protocols. All these protocols are notoriously difficult to design, and flaws may be found years after their first release. Formal models, such as process algebra, Horn clauses, or constraint systems, have been successfully applied to automatically analyze traditional protocols and discover flaws. Electronic voting protocols, however, significantly increase the difficulty of the analysis task. Indeed, they involve, for example, new and sophisticated cryptographic primitives, new dedicated security properties, and new execution structures.

After an introduction to electronic voting, we will describe the current techniques for e-voting protocols analysis and review the key challenges towards a fully automated verification.

09:00-10:15 Session 124: Contributed Talks (DL)
Location: EI, EI 7
09:00
Lower and Upper Approximations for Depleting Modules of Description Logic Ontologies
SPEAKER: unknown

ABSTRACT. It is known that no algorithm can extract the minimal depleting Σ-module from ontologies in expressive description logics (DLs). Thus research has focused on algorithms that approximate minimal depleting modules ‘from above’ by computing a depleting module that is not necessarily minimal. The first contribution of this paper is an implementation (AMEX) of such a depleting module extraction algorithm for expressive acyclic DL ontologies that uses a QBF solver for checking conservative extensions relativised to singleton interpretations. To evaluate AMEX and other module extraction algorithms we propose an algorithm approximating minimal depleting modules ‘from below’ (which also uses a QBF solver). We present experiments based on NCI (the National Cancer Institute Thesaurus) that indicate that our lower approximation often coincides with (or is very close to) the upper approximation computed by AMEX, thus proving for the first time that an approximation algorithm for minimal depleting modules can be almost optimal on a large ontology.

09:25
Axiom Dependency Hypergraphs for Fast Modularisation and Atomic Decomposition
SPEAKER: unknown

ABSTRACT. In this paper we use directed hypergraphs to represent the locality-based dependencies between the axioms of an OWL ontology. We define a notion of an axiom dependency hypergraph, where axioms are represented as nodes and dependencies between axioms as hyperedges connecting possibly several nodes with one node. We show that a locality-based module of an ontology corresponds to a connected component in the hypergraph, and an atom of an ontology to a strongly connected component. Collapsing the strongly connected components into single nodes yields a condensed axiom dependency hypergraph, which contains the atomic decomposition of the ontology. To condense the axiom dependency hypergraph we exploit linear time graph algorithms on its graph fragment. This optimization can significantly reduce the time needed to compute the atomic decomposition of an ontology. We provide an experimental evaluation for computing the atomic decomposition of large biomedical ontologies, and for computing syntactic locality-based modules using the condensed axiom dependency hypergraph.
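
The condensation step lends itself to a compact illustration. Below is a minimal Python sketch, purely illustrative and not the authors' implementation (which operates on hypergraphs and is engineered for large ontologies): it condenses the strongly connected components of the simple-edge fragment of a dependency graph with Kosaraju's linear-time algorithm, so that axioms sharing a component end up in the same candidate atom. Axiom names and edges are invented.

    # Illustrative SCC condensation of the graph fragment of an
    # axiom dependency (hyper)graph.
    from collections import defaultdict

    def scc_ids(nodes, edges):
        fwd, rev = defaultdict(list), defaultdict(list)
        for u, v in edges:
            fwd[u].append(v)
            rev[v].append(u)
        # Pass 1: record nodes in order of DFS completion on the forward graph.
        order, seen = [], set()
        for root in nodes:
            if root in seen:
                continue
            seen.add(root)
            stack = [(root, iter(fwd[root]))]
            while stack:
                node, it = stack[-1]
                for nxt in it:
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append((nxt, iter(fwd[nxt])))
                        break
                else:
                    order.append(node)
                    stack.pop()
        # Pass 2: sweep the reversed graph in reverse completion order;
        # each sweep collects exactly one strongly connected component.
        comp, current = {}, 0
        for root in reversed(order):
            if root in comp:
                continue
            comp[root] = current
            stack = [root]
            while stack:
                for nxt in rev[stack.pop()]:
                    if nxt not in comp:
                        comp[nxt] = current
                        stack.append(nxt)
            current += 1
        return comp

    axioms = ["a1", "a2", "a3", "a4"]
    deps = [("a1", "a2"), ("a2", "a1"), ("a2", "a3")]
    print(scc_ids(axioms, deps))  # a1 and a2 share an id; a3, a4 are singletons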

09:50
DeaLing with Ontologies using CODs

ABSTRACT. A major challenge in knowledge representation is to manage access to knowledge: users should not be presented with knowledge that is irrelevant to their topic of interest, or that they have no right to access.

Two general strategies exist for providing access restrictions: (1) the ontology engineers describe the conditions that allow access to specific fragments of the ontology, or (2) fragments are automatically identified through their logical properties. The former is prone to miss logical connections between axioms, while the latter can fail to capture relevant knowledge that has no logical connection with the topic of interest.

We define the Context-Oriented Decomposition (COD) of an ontology as a technique that combines the benefits of both approaches: it allows authors to identify relevant fragments, while guaranteeing the strong semantic properties of the logic-based Atomic Decomposition.

09:15-10:15 Session 125A: Tutorial 1 - Ekaterina Ovchinnikova: Natural Language Understanding with World Knowledge and Inference (KR)
Location: EI, EI 10
09:15
Natural Language Understanding with World Knowledge and Inference (Part I)

ABSTRACT. In order to understand natural language expressions it is usually not enough to know the literal (dictionary) meaning of words used in these expressions and compositional rules of the corresponding language. Much more knowledge is involved here: knowledge that may have nothing to do with linguistic competence but is rather related to our general conception of the world. In this tutorial, I will present an overview of the approaches to modeling knowledge-intensive natural language understanding (NLU) in a computational framework, with the main focus on inference-based NLU. I will discuss the types of knowledge required for NLU and the techniques to obtain this knowledge in a machine-readable form. The tutorial will cover lexical-semantic knowledge bases, ontologies (general and domain-specific), and corpora-based resources. Then I will focus on reasoning procedures applicable to NLU and compare two main forms of logical inference: deduction and abduction. In the last part of the tutorial, I will present applications of some of the recent large-scale inference-based NLU systems to such knowledge-intensive tasks as recognizing textual entailment, metaphor interpretation, and narrativization of videos.

09:15-10:15 Session 125B: Tutorial 2 - Lawrence S. Moss: Dynamic Epistemic Logic and its Interaction with Knowledge Representation (KR)
Location: EI, EI 9
09:15
Dynamic Epistemic Logic and Its Interaction with Knowledge Representation (Part I)

ABSTRACT. Dynamic epistemic logic (DEL) takes the classical topic of epistemic logic and adds features from propositional dynamic logic (PDL). DEL comes to life in the multiagent setting. It allows one to represent epistemic actions in addition to epistemic situations. DEL has been around for about 20 years, and so it has seen a lot of progress. By now, the original topics have been applied, generalized, and adapted in various ways. My primary goal in this tutorial is to present an overview of the area to people at KR 2014. The talk will discuss recent work in KR that uses DEL. After this, I would like to ask why it is that tools from DEL are not more widely used in the KR community. Turning things around, I also ask what influence issues from KR and other fields will have on the future of DEL and related frameworks.

10:15-10:45 Coffee Break
10:15-18:00 Session 126A: FLoC Olympic Games Big Screen: CASC Automated Theorem Proving System Competition (CASC-J7) (joint session of 10 meetings)
Location: FH, 2nd floor
10:15
FLoC Olympic Games Big Screen: 7th IJCAR ATP System Competition (CASC-J7)

ABSTRACT. The CADE and IJCAR conferences are the major forums for the presentation of new research in all aspects of automated deduction. In order to stimulate ATP research and system development, and to expose ATP systems within and beyond the ATP community, the CADE ATP System Competition (CASC) is held at each CADE and IJCAR conference. CASC-J7 will be held on 20th July 2014 during the 7th International Joint Conference on Automated Reasoning (IJCAR, which incorporates CADE). CASC evaluates the performance of sound, fully automatic ATP systems. The evaluation is in terms of:

  • the number of problems solved,
  • the number of problems solved with a solution output, and
  • the average runtime for problems solved;

in the context of:

  • a bounded number of eligible problems,
  • chosen from the TPTP Problem Library, and
  • specified time limits on solution attempts.

The competition organizer is Geoff Sutcliffe. The competition is overseen by a panel of knowledgeable researchers who are not participating in the event. The panel members are Bernhard Beckert, Maria Paola Bonacina, and Aart Middeldorp.
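
As a concrete reading of the criteria above, the following hypothetical Python sketch ranks systems by problems solved, breaking ties by average runtime over solved problems. System names and runtimes are invented, and the sketch ignores the second dimension (whether a solution is output) tracked by the real competition.

    # Hypothetical illustration of the CASC ranking criteria listed above.
    results = {
        # system: list of (solved, runtime in seconds) per eligible problem
        "ProverA": [(True, 1.2), (True, 3.4), (False, 300.0)],
        "ProverB": [(True, 0.9), (False, 300.0), (False, 300.0)],
    }

    def rank_key(runs):
        solved = [t for ok, t in runs if ok]
        avg = sum(solved) / len(solved) if solved else float("inf")
        return (-len(solved), avg)  # more problems solved first, then faster

    for system in sorted(results, key=lambda s: rank_key(results[s])):
        print(system)  # ProverA (2 solved) before ProverB (1 solved)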

10:15-18:00 Session 126B: FLoC Olympic Games Big Screen: Termination Competition (termComp) (joint session of 10 meetings)
Location: FH, 2nd floor
10:15
FLoC Olympic Games Big Screen: Termination Competition (termCOMP 2014)
SPEAKER: Albert Rubio

ABSTRACT. The termination competition focuses on automated termination analysis for all kinds of programming paradigms, including categories for term rewriting, imperative programming, logic programming, and functional programming. Moreover, there are also categories for automated complexity analysis. In all categories, participation of tools providing certified proofs is welcome.

In every category, all participating tools of that category are run on a randomly selected subset of the available problems. The goal of the termination competition is to demonstrate the power of the leading tools in each of these areas.

The 2014 edition of the competition is organized by Johannes Waldmann and Stefan von der Krone. The Steering Committee comprises Jürgen Giesl, Frederic Mesnard, Albert Rubio (Chair), René Thiemann and Johannes Waldmann.

10:45-12:45 Session 127A: Tutorial 1 (Continued) - Ekaterina Ovchinnikova: Natural Language Understanding with World Knowledge and Inference (KR)
Location: EI, EI 10
10:45
Natural Language Understanding with World Knowledge and Inference (Part II)

ABSTRACT. In order to understand natural language expressions it is usually not enough to know the literal (dictionary) meaning of words used in these expressions and compositional rules of the corresponding language. Much more knowledge is involved here: knowledge that may have nothing to do with linguistic competence but is rather related to our general conception of the world. In this tutorial, I will present an overview of the approaches to modeling knowledge-intensive natural language understanding (NLU) in a computational framework, with the main focus on inference-based NLU. I will discuss the types of knowledge required for NLU and the techniques to obtain this knowledge in a machine-readable form. The tutorial will cover lexical-semantic knowledge bases, ontologies (general and domain-specific), and corpora-based resources. Then I will focus on reasoning procedures applicable to NLU and compare two main forms of logical inference: deduction and abduction. In the last part of the tutorial, I will present applications of some of the recent large-scale inference-based NLU systems to such knowledge-intensive tasks as recognizing textual entailment, metaphor interpretation, and narrativization of videos.

10:45-12:45 Session 127B: Tutorial 2 (Continued) - Lawrence S. Moss: Dynamic Epistemic Logic and its Interaction with Knowledge Representation (KR)
Location: EI, EI 9
10:45
Dynamic Epistemic Logic and Its Interaction with Knowledge Representation (Part II)

ABSTRACT. Dynamic epistemic logic (DEL) takes the classical topic of epistemic logic and adds features from propositional dynamic logic (PDL). DEL comes to life in the multiagent setting. It allows one to represent epistemic actions in addition to epistemic situations. DEL has been around for about 20 years, and so it has seen a lot of progress. By now, the original topics have been applied, generalized, and adapted in various ways. My primary goal in this tutorial is to present an overview of the area to people at KR 2014. The talk will discuss recent work in KR that uses DEL. After this, I would like to ask why it is that tools from DEL are not more widely used in the KR community. Turning things around, I also ask what influence issues from KR and other fields will have on the future of DEL and related frameworks.

10:45-13:05 Session 127C: Model Checking and Testing (CAV)
Location: FH, Hörsaal 1
10:45
Engineering a Static Verification Tool for GPU Kernels

ABSTRACT. We report on practical experiences over the last 2.5 years related to the engineering of GPUVerify, a static verification tool for GPU kernels written in OpenCL and CUDA. We have published papers on the top-level research ideas underpinning GPUVerify, but embedding these ideas in a tool that can be applied to real-world examples with a reasonable degree of efficiency and automation has required significant optimisation effort and a number of important engineering decisions. We describe these decisions and optimisations, and demonstrate their effectiveness using experimental data gathered for a set of more than 500 benchmarks, encompassing all the major publicly available benchmark suites and a commercial benchmark suite. Our experiments plot the progress of GPUVerify, starting from a sound concept that did not scale well in practice, through a series of improvements and optimisations, to a fully functional and relatively efficient analysis tool. We also report on our efforts to encourage uptake of the tool by industry through features to reduce the rate of false positives and provide clear error messages when verification fails. Finally, we describe our engagement with industry to promote usage of the tool beyond academia and the fruitful feedback this has brought. Our hope is that this engineering and experience report will serve the verification community by helping to inform future tooling efforts.

11:05
Lazy Annotation Revisited

ABSTRACT. Lazy Annotation is a method of software model checking that performs a backtracking search for a symbolic counterexample. When the search backtracks, the program is annotated with a learned fact that constrains future search. In this sense, the method is closely analogous to conflict-driven clause learning in Boolean satisfiability solvers. In this paper, we develop several improvements to the basic lazy annotation approach. These include a decision heuristic, a form of non-chronological backtracking, and an adaptation to large-block encodings that exploits the power of modern satisfiability modulo theories solvers.

The resulting algorithm is compared both theoretically and experimentally to two approaches that rely on similar principles but use different learning strategies: unfolding-based bounded model checking and property-driven reachability. The lazy annotation approach is seen to be significantly more robust over a collection of benchmarks from the SV-COMP 2013 software model checking competition and the Microsoft Static Driver Verifier regression suite.

11:25
Interpolating Property Directed Reachability
SPEAKER: Yakir Vizel

ABSTRACT. Current SAT-based Model Checking is based on two major approaches: Interpolation-based (IMC) (global, with unrollings) and Property Directed Reachability/IC3 (PDR) (local, without unrollings). IMC generates candidate invariants using interpolation over an unrolling of a system, without putting any restrictions on the SAT-solver's search. PDR generates candidate invariants by a local search over a single instantiation of the transition relation, effectively guiding the SAT solver's search. The two techniques are considered to be orthogonal and have different strengths and limitations. In this paper, we present a new technique, called AVY, that effectively combines the key insights of the two approaches. Like IMC, it uses unrollings and interpolants to construct an initial candidate invariant, and, like PDR, it uses local inductive generalization to keep the invariants in compact clausal form. On the one hand, AVY is an incremental IMC extended with a local search for CNF interpolants. On the other, it is PDR extended with a global search for bounded counterexamples. We implemented the technique using ABC and have evaluated it on the HWMCC'12 and HWMCC'13 benchmark suites. Our results show that the prototype significantly outperforms PDR and McMillan's interpolation algorithm (as implemented in ABC) on a large number of test cases.

11:45
Verifying Relative Error Bounds using Symbolic Simulation
SPEAKER: unknown

ABSTRACT. In this paper we consider the problem of formally verifying hardware that is specified to compute reciprocal, reciprocal square root, and power-of-two functions on floating point numbers to within a given relative error. Such specifications differ from the common case in which any given input is specified to have exactly one correct output. Our approach is based on symbolic simulation with binary decision diagrams, and involves two distinct steps. First, we prove a lemma that reduces the relative error specification to several inequalities that involve reasoning about natural numbers only. The most complex of these inequalities asserts that the product of several naturals is less-than/greater-than another natural. Second, we invoke one of several customized algorithms that decides the inequality, without performing the expensive symbolic multiplications directly. We demonstrate the effectiveness of our approach on a next-generation Intel processor design and report encouraging time and space metrics for these proofs.
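
To give the flavour of the first step, here is an illustrative version of such a reduction (our own simplified derivation, not the authors' exact lemma). Write the computed result and the reference value as dyadic rationals r = a/2^p and v = b/2^q, with a, b, p, q natural numbers and v > 0. Multiplying through by 2^(p+q+k), the relative error bound becomes

    |r - v| <= 2^(-k) * v    <=>    2^k * |a*2^q - b*2^p| <= b*2^p,

and the right-hand side splits into the two natural-number inequalities 2^k * (a*2^q - b*2^p) <= b*2^p and 2^k * (b*2^p - a*2^q) <= b*2^p, i.e. exactly comparisons between products of naturals of the kind the abstract mentions.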

12:05
Regression Test Selection for Distributed Software Histories

ABSTRACT. Regression test selection analyzes incremental changes to a codebase and chooses to run only those tests whose behavior may be affected by the latest changes in the code. By focusing on a small subset of all the tests, the testing process runs faster and can be more tightly integrated into the development process. Existing techniques for regression test selection consider two versions of the code at a time, effectively assuming a development process where changes to the code occur in a linear sequence.

Modern development processes that use distributed version-control systems are more complex. Software version histories are generally modeled as directed graphs; in addition to version changes occurring linearly, multiple versions can be related by other commands, e.g., branch, merge, rebase, cherry-pick, revert, etc. This paper describes a regression test-selection technique for software developed using modern distributed version-control systems. By modeling different branch or merge commands directly in our technique, it computes safe test sets that can be substantially smaller than applying previous techniques to a linearization of the software history.

We evaluate our technique on software histories of several large open-source projects. The results are encouraging: our technique obtained an average of 10.89× reduction in the number of tests over an existing technique while still selecting all tests whose behavior may differ.
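
For readers unfamiliar with regression test selection, the classical two-version baseline that this work generalises to branched histories can be sketched in a few lines. This is a deliberately naive file-level sketch with invented names, not the paper's algorithm:

    # Two-version regression test selection at file granularity: run a test
    # iff it depends on at least one changed file.
    def select_tests(depends_on, changed_files):
        changed = set(changed_files)
        return {t for t, files in depends_on.items() if changed & set(files)}

    deps = {"test_login": ["auth.c", "db.c"], "test_report": ["report.c"]}
    print(select_tests(deps, ["auth.c"]))  # {'test_login'}

The paper's contribution is, roughly, how to perform such selection safely when the history is a graph of branches and merges rather than a chain, without first linearising it.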

12:25
GPU-Based Graph Decomposition into Strongly Connected and Maximal End Components
SPEAKER: Anton Wijs

ABSTRACT. This paper presents parallel algorithms for component decomposition of graph structures on General Purpose Graphics Processing Units (GPUs). In particular, we consider the problem of decomposing sparse graphs into strongly connected components, and decomposing stochastic games (such as MDPs) into maximal end components. These problems are key ingredients of many (probabilistic) model-checking algorithms. We explain the main rationales behind our GPU-algorithms, and show a significant speed-up over the sequential counterparts in several case studies.

12:45
Software Verification in the Google App-Engine Cloud

ABSTRACT. Software verification often requires a large amount of computing resources. In the last years, cloud services emerged as an inexpensive, flexible, and energy-efficient source of computing power. We have investigated if such cloud resources can be used effectively for verification. We chose the platform-as-a-service offer Google App Engine and ported the open-source verification framework CPAchecker to it. We provide our new verification service as a web front-end to users who wish to solve single verification tasks (tutorial usage), and an API for integrating the service into existing verification infrastructures (massively parallel bulk usage). We experimentally evaluate the effectiveness of this service and show that it can be successfully used to offload verification work to the cloud, considerably sparing local verification resources.

12:55
The nuXmv Symbolic Model Checker
SPEAKER: Marco Roveri

ABSTRACT. This paper describes the nuXmv symbolic model checker for finite- and infinite-state synchronous transition systems. nuXmv is the evolution of the NuSMV open source model checker. It builds on and extends NuSMV along two main directions. For finite-state systems it complements the basic verification techniques of NuSMV with state-of-the-art verification algorithms. For infinite-state systems, it extends the NuSMV language with new data types, namely Integers and Reals, and it provides advanced SMT-based model checking techniques.

Besides extended functionalities, nuXmv has been optimized in terms of performance to be competitive with the state of the art. nuXmv has been used in several industrial projects as a verification back-end, and it is the basis for several extensions to cope with requirements analysis, contract-based design, model checking of hybrid systems, safety assessment, and software model checking.

10:45-13:00 Session 127D: Test of Time Award: 10 Years / Technical Session: Constraints and Constraint Handling Rules (ICLP)
Location: FH, Hörsaal 8
10:45
Test of Time Award: 10 Years - The Refined Operational Semantics of Constraint Handling Rules
SPEAKER: Peter Stuckey
11:30
On Termination, Confluence and Consistent CHR-based Type Inference

ABSTRACT. We consider the application of Constraint Handling Rules (CHR) for the specification of type inference systems, such as that used by Haskell. Confluence of CHR guarantees that the answer provided by type inference is correct and consistent. The standard method for establishing confluence relies on an assumption that the CHR program is terminating. However, many examples in practice give rise to non-terminating CHR programs, rendering this method inapplicable. Despite no guarantee of termination or confluence, the Glasgow Haskell Compiler (GHC) supports options that allow the user to proceed with type inference anyway, e.g. via the use of the UndecidableInstances flag. In this paper we formally identify and verify a set of relaxed criteria, namely range-restrictedness, local confluence, and ground termination, that ensure the consistency of CHR-based type inference that maps to potentially non-terminating CHR programs.

12:00
Exchanging Conflict Resolution in an Adaptable Implementation of ACT-R
SPEAKER: unknown

ABSTRACT. In computational cognitive science, the cognitive architecture ACT-R is very popular. It describes a model of cognition that is amenable to computer implementation, paving the way for computational psychology. Its underlying psychological theory has been investigated in many psychological experiments, but ACT-R lacks a formal definition of its underlying concepts from a mathematical-computational point of view. Although the canonical implementation of ACT-R is now modularized, this production rule system is still hard to adapt and extend in central parts like the conflict resolution mechanism (which decides which of the applicable rules to apply next). In this work, we present a concise implementation of ACT-R based on Constraint Handling Rules which has been derived from a formalization in prior work. To show the adaptability of our approach, we implement several different conflict resolution mechanisms discussed in the ACT-R literature. This results in the first implementation of one such approach. For the other mechanisms, we empirically evaluate whether our implementation matches the results of reference implementations of ACT-R.

12:30
The P-Box CDF-Intervals: Reliable Constraint Reasoning with Quantifiable Information
SPEAKER: Aya Saad

ABSTRACT. This paper introduces a new constraint domain for reasoning about data with uncertainty. It extends convex modeling with the notion of p-box to gain additional quantifiable information on the data whereabouts. Unlike existing approaches, the p-box envelops an unknown probability instead of approximating its representation. The p-box bounds are uniform cumulative distribution functions (cdf) in order to employ linear computations in the probabilistic domain. The reasoning by means of p-box cdf-intervals is an interval computation which is performed on the real domain and then projected onto the cdf domain. This operation conveys additional knowledge represented by the obtained probabilistic bounds. The empirical evaluation of our implementation shows that, with minimal overhead, the output solution set realizes a full enclosure of the data along with tighter bounds on its probabilistic distributions.

10:45-13:00 Session 127E: Equational Reasoning (IJCAR)
Location: FH, Hörsaal 5
10:45
A Rewriting Strategy to Generate Prime Implicates in Equational Logic

ABSTRACT. Generating the prime implicates of a formula consists in finding its most general consequences. This has many fields of application in automated reasoning, like planning and diagnosis, and although the subject has been extensively studied (and still is) in propositional logic, very few have approached the problem in more expressive logics because of its intrinsic complexity. This paper presents one such approach for flat ground equational logic. Aiming at efficiency, it intertwines an existing method to generate all prime implicates of a formula with a rewriting technique that uses atomic equations to simplify the problem by removing constants during the search. The soundness, completeness and termination of the algorithm are proven. The algorithm has been implemented and an experimental analysis is provided.

11:15
Finite Quantification in Hierarchic Theorem Proving
SPEAKER: unknown

ABSTRACT. Many applications of automated deduction require reasoning in first-order logic modulo background theories, in particular some form of integer arithmetic. A major unsolved research challenge is to design theorem provers that are "reasonably complete" even in the presence of free function symbols ranging into a background theory sort. In this paper we consider the case when all variables occurring below such function symbols are quantified over a finite subset of their domains. We present a (non-naive) decision procedure for background theories extended this way on top of black-box decision procedures for the EA-fragment of the background theory. In its core, it employs a model-guided instantiation strategy for obtaining pure background formulas that are equi-satisfiable with the original formula. Unlike traditional finite model finders, it avoids exhaustive instantiation and, hence, is expected to scale better with the size of the domains. Our main results in this paper are a correctness proof and first experimental results.

11:45
Computing All Implied Equalities via SMT-based Partition Refinement
SPEAKER: Josh Berdine

ABSTRACT. Consequence finding is used in many applications of deduction. This paper develops and evaluates a suite of optimized SMT-based algorithms for computing equality consequences over arbitrary formulas and theories supported by SMT solvers. It is inspired by an application in the SLAyer analyzer, where our new algorithms are commonly 10-100x faster than simpler algorithms. The main idea is to incrementally refine an initially coarse partition using models extracted from a solver. Our approach requires only O(N) solver calls for N terms, but in the worst case creates O(N^2) fresh subformulas. Simpler algorithms, in contrast, require O(N^2) solver calls. We also describe an asymptotically superior algorithm that requires O(N) solver calls and only O(N log N) fresh subformulas. We evaluate algorithms which reduce the number of fresh formulas required either by using specialized data structures or by relying on subformula sharing.
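
The core loop of model-guided partition refinement can be sketched with the Z3 Python bindings. This is a minimal sketch assuming the z3-solver package; it follows the idea described above, not the authors' optimized implementation, and it builds the O(N) disequalities per call that account for the O(N^2) fresh subformulas.

    # Partition terms into classes whose equality is implied by `formula`.
    from z3 import Ints, Or, Solver, sat

    def implied_equalities(formula, terms):
        s = Solver()
        s.add(formula)
        classes = [list(terms)]  # start from a single coarse class
        while True:
            # Ask for a model separating two terms of some current class.
            splitters = [c[0] != t for c in classes for t in c[1:]]
            if not splitters:
                return classes  # all classes are singletons
            s.push()
            s.add(Or(splitters))
            if s.check() != sat:
                s.pop()
                return classes  # no model splits a class: equalities implied
            m = s.model()
            s.pop()
            # Refine each class by the terms' values in the model found.
            refined = []
            for c in classes:
                buckets = {}
                for t in c:
                    v = str(m.eval(t, model_completion=True))
                    buckets.setdefault(v, []).append(t)
                refined.extend(buckets.values())
            classes = refined  # strictly finer, hence at most N-1 rounds

    x, y, z = Ints("x y z")
    print(implied_equalities(x == y, [x, y, z]))  # [[x, y], [z]] up to order

Every satisfiable check strictly refines the partition, so at most N-1 refinement rounds plus one final unsatisfiable check are needed: O(N) solver calls, as the abstract states.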

12:15
Proving Termination of Programs Automatically with AProVE
SPEAKER: Jürgen Giesl

ABSTRACT. AProVE is a system for automated termination and complexity proofs of Java, Haskell, Prolog, and term rewrite systems (TRSs). To analyze programs in high-level programming languages, AProVE automatically converts them to TRSs. Then, a wide range of techniques is employed to prove termination and to infer complexity bounds for the resulting TRSs. Finally, the generated proofs can be exported to certify their correctness using automatic theorem proving techniques. For use in software construction, we present an AProVE plug-in for the popular Eclipse software development environment.

10:45-12:45 Session 127F: Usable Security (CSF)
Location: FH, Hörsaal 6
10:45
Who’s Afraid of Which Bad Wolf? A Survey of IT Security Risk Awareness
SPEAKER: unknown

ABSTRACT. The perception of risk has been established as an important part of the study of human aspects of security research. Similarly, risk awareness is often considered a central precursor for the adoption of security mechanisms and how people use them and interact with them. However, the state of risk awareness in users during their everyday use of the modern Internet has not been studied in detail. While it is well known that users have a limited "budget" for security behavior and that trying to coerce them into considering additional risks does not work well, it remains unclear which risks are on users' minds and therefore already accounted for in terms of their budget. Hence, assessing which risks and which consequences users currently perceive when using information technology is an important and currently overlooked foundation for shaping usability aspects of IT security mechanisms. In this paper, we present a survey of risk and consequence awareness in users, analyze how this may influence the current lack of adoption of improved security measures, and make recommendations on how this situation can be alleviated.

11:15
How task familiarity and cognitive predispositions impact behavior in a security game of timing
SPEAKER: unknown

ABSTRACT. This paper addresses security and safety choices that involve a decision on the timing of an action, either on a short- or a long-term scale. Examples of such decisions include when to make backups, when to patch, when to update passwords, or when to check the correctness of personal financial account information. We effectively combine survey measures with economic experiments to better understand how performance in timing-related security situations is shaped by individuals' cognitive predispositions. Our work is aimed to explore timing-relevant cognitive biases and to eventually develop strategies to help decision-makers with behavioral intervention strategies. Two behavioral experiments are presented in which the timing of online security actions is the critical decision-making factor. We distinguish between visual feedback and estimation (Experiment 1) and purely temporal estimation (Experiment 2). Using psychometric scales, we investigate the role of individual difference variables, specifically risk propensity and need for cognition. The analysis is based on the data from over 450 participants and a careful statistical methodology. Risk propensity is not a hindrance in timing tasks. Participants of average risk propensity generally benefit from a reflective disposition (high need for cognition), particularly when visual feedback is given. Overall, participants benefit from need for cognition; in the more difficult, temporal-estimation task, this requires task experience.

11:45
Panel: Usability
SPEAKER: Lujo Bauer
10:45-13:00 Session 127G: Contributed Talks (DL)
Location: EI, EI 7
10:45
Controlled Query Evaluation over Lightweight Ontologies
SPEAKER: unknown

ABSTRACT. We study confidentiality enforcement in ontologies under the Controlled Query Evaluation (CQE) framework. In a CQE system, a policy specifies the sensitive information, and a censor ensures that answers to user queries that could violate the policy are not returned. Our goal is the design of optimal CQE algorithms, which ensure confidentiality while maximising access to information. We study two natural classes of censors that can be realised using existing infrastructure for query answering and propose optimal CQE algorithms for the standardised profiles of the ontology language OWL 2.

11:10
Query Rewriting under EL TBoxes: Efficient Algorithms
SPEAKER: unknown

ABSTRACT. We propose a new type of algorithm for computing first-order (FO) rewritings of concept queries under EL-TBoxes that is tailored towards efficient implementation. The algorithm outputs a non-recursive datalog rewriting if the input is FO-rewritable and otherwise reports non-FO-rewritability. We carry out experiments with ontologies from practical applications which show that our algorithm performs very well in practice, and that EL-TBoxes originating from real-world applications admit FO-rewritings (of reasonable size) in almost all cases, even when in theory such rewritings are not guaranteed to exist.

11:35
Parallel OWL 2 RL Materialisation in Centralised, Main-Memory RDF Systems
SPEAKER: unknown

ABSTRACT. We present a novel approach to parallel materialisation (i.e., fixpoint computation) of OWL RL Knowledge Bases in centralised, main-memory, multi-core RDF systems. Our approach comprises a parallel reasoning algorithm that evenly distributes the workload to cores, and an RDF indexing data structure that supports efficient, "mostly" lock-free parallel updates. Our empirical evaluation shows that our approach parallelises computation very well: with 16 physical cores, materialisation can be up to 13.9 times faster than with just one core.
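
For orientation, materialisation means computing a fixpoint of the rule set over the data. The toy Python sketch below does this sequentially for a single transitivity-style rule, as a stand-in for the OWL 2 RL rules; the paper's actual contribution, the parallel and mostly lock-free version of this loop, is not shown.

    # Semi-naive fixpoint for the rule: (x,y) and (y,z) derive (x,z).
    def materialise(triples):
        facts = set(triples)
        frontier = set(triples)  # only join facts that are new
        while frontier:
            derived = {(x, z) for (x, y1) in frontier
                              for (y2, z) in facts if y1 == y2}
            derived |= {(x, z) for (x, y1) in facts
                               for (y2, z) in frontier if y1 == y2}
            frontier = derived - facts
            facts |= frontier
        return facts

    print(materialise({("A", "B"), ("B", "C"), ("C", "D")}))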

12:00
SPARQL Update for Materialized Triple Stores under DL-Lite_RDFS Entailment
SPEAKER: unknown

ABSTRACT. Updates in RDF stores have recently been standardised in the SPARQL 1.1 Update specification. However, computing answers entailed by ontologies in triple stores is usually treated orthogonally to updates. Even W3C's SPARQL 1.1 Update language and SPARQL 1.1 Entailment Regimes specifications explicitly exclude a standard behaviour for entailment regimes other than simple entailment in the context of updates. In this paper, we take a first step to close this gap. We define a fragment of SPARQL basic graph patterns corresponding to (the RDFS fragment of) DL-Lite and the corresponding SPARQL update language, dealing with updates of both ABox and TBox statements. We discuss possible semantics along with potential strategies for implementing them. In particular, we treat materialised RDF stores, which store all entailed triples explicitly, and discuss how materialisation can be preserved upon ABox and TBox updates.

12:25
XPath for DL-Lite Ontologies
SPEAKER: unknown

ABSTRACT. Applications of description logics (DLs) such as OWL 2 and ontology-based data access (OBDA) require understanding of how to pose database queries over DL knowledge bases. While there have been many studies regarding traditional relational query formalisms such as conjunctive queries and their extensions, little attention has been paid to graph database queries, despite the fact that graph databases share the structure of interpretations with DLs; that is, they describe essentially the same objects. In particular, not much is known about the interplay between DLs and XPath. The latter is a powerful formalism for querying semistructured data: it is at the core of most practical query languages for XML trees, and it is also gaining popularity in the theory and practice of graph databases.

In this paper we make a step towards coupling knowledge bases and graph databases by studying how to answer powerful XPath-style queries over DL-Lite. We start by adapting the definition of XPath to the DL context, and then proceed to study the complexity of evaluating XPath queries over knowledge bases. Results show that, while query answering is undecidable for full XPath, by carefully tuning the amount of negation allowed in the queries we can arrive at XPath fragments that have the potential to be used in practical applications.

10:45-13:00 Session 127H (ICLP)
Location: FH, Seminarraum 107
10:45
Test of Time Award: 10 Years (Talk presented at ICLP Main Conference)
SPEAKER: Peter Stuckey
11:45
The Impact of Disjunction on Reasoning under Existential Rules
SPEAKER: Michael Morak

ABSTRACT. Datalog± is a Datalog-based language family enhanced with existential quantification in rule heads, equalities and negative constraints. Query answering over databases with respect to a Datalog± theory is generally undecidable; however, several syntactic restrictions have been proposed to remedy this fact. A useful and natural feature is nevertheless still missing from Datalog±: the ability to express uncertain knowledge, or choices, using disjunction. The precise objective of the doctoral thesis discussed here is to investigate the impact on the complexity of query answering of adding disjunction to well-known decidable Datalog± fragments, namely guarded, sticky and weakly-acyclic Datalog± theories. For guarded theories with disjunction, we obtain a strong 2Exp lower bound in the combined complexity, even for very restricted formalisms like fixed sets of (disjunctive) inclusion dependencies. For sticky theories, the query answering problem becomes undecidable, even in the data complexity, and for weakly-acyclic query answering we see a reasonable and expected increase in complexity.

12:10
Visualization of Constraint Handling Rules
SPEAKER: Nada Sharaf

ABSTRACT. Constraint Handling Rules (CHR) has matured into a general purpose language over the past two decades. Any general purpose language requires its own development tools. Visualization tools, in particular, facilitate many tasks for programmers as well as beginners to the language. The article presents ongoing work towards the visualization of CHR programs. The process is done through source-to-source transformation. It aims at a generic transformer to visualize different algorithms implemented in CHR.

12:35
Reasoning with Probabilistic Logics
SPEAKER: Riccardo Zese

ABSTRACT. The interest in the combination of probability with logics for modeling the world has rapidly increased in the last few years. One of the most effective approaches is the Distribution Semantics, which has been adopted by many logic programming languages and by Description Logics. In this paper, we illustrate the work we have done in this research field by presenting a probabilistic semantics for description logics and reasoning and learning algorithms. In particular, we present in detail the system TRILLP, implemented in Prolog, which computes the probability of queries w.r.t. probabilistic knowledge bases.

10:45-12:45 Session 127I (NCPROOFS)
Location: FH, Seminarraum 104
10:45
Construction and meaning
SPEAKER: Matthias Baaz
11:15
Proof theory for ordered algebra: amalgamation and densification
11:45
The Epsilon Calculus and Nonclassical Logics
SPEAKER: Richard Zach
12:15
Automated and Interactive Theorem Proving for Modal Logics via embedding into Classical Higher-Order Logic
13:00-14:30 Lunch Break
14:30-16:00 Session 129A: Tutorial 3 - Alessio Lomuscio: Verification of Multi-Agent Systems against Epistemic Specifications (KR)
Location: EI, EI 10
14:30
Verification of Multi-Agent Systems against Epistemic Specifications (Part I)

ABSTRACT. Autonomous multiagent systems are typically specified by using expressive properties representing, among others, the knowledge of the agents and what they can bring about collectively or on their own in a system. This tutorial will survey some of the current techniques addressing the model checking of multi-agent systems against agent-based specifications. Specifically, the tutorial will cover syntax and semantics for the temporal-epistemic logics CTLK and ATLK, the epistemic variant of alternating-time temporal logic. It will also cover ordered binary decision diagrams and symbolic labeling, parameter synthesis and strategy synthesis in ATLK specifications, verification of systems with an unbounded number of agents, and MCMAS: a model checker for multiagent systems implementing these techniques, as well as the application of the techniques to a range of scenarios including in service-oriented computing.

14:30-16:00 Session 129B: Tutorial 4 - Riccardo Rosati: Query Answering and Rewriting in Ontology-Based Data Access (KR)
Location: EI, EI 9
14:30
Query Answering and Rewriting in Ontology-Based Data Access (Part I)

ABSTRACT. Ontology-based data access (OBDA) has recently become a hot research topic in knowledge representation and data management. The basic idea underlying OBDA is to superimpose an ontology (conceptual layer) on a set of data sources (data layer), and use the ontology as a virtual schema for querying the data sources. The ontology and the data sources are connected through declarative mappings that provide the semantic relationship between the two layers. This tutorial will introduce ontology-based data access and will deal with the problem of query answering in OBDA, focusing in particular on the query rewriting approach to query answering. After a brief history of ontology-based data access, the tutorial will overview the most important results obtained in this area. Then, the main query rewriting techniques for OBDA will be presented and compared. The relationship between OBDA and other research topics will also be addressed. The main focus of the tutorial is on ontologies expressed using Description Logics; however, other ontology specification languages, like Datalog+/-, will be considered.

14:30-16:00 Session 129C: Biology and Hybrid Systems (CAV)
Location: FH, Hörsaal 1
14:30
Hardware Model Checking Competition 2014 CAV Edition
SPEAKER: Armin Biere

ABSTRACT. Presentation of the results of the 7th International Hardware Model Checking Competition in 2014, CAV Edition.

14:40
Analyzing and Synthesizing Genomic Logic Functions

ABSTRACT. Deciphering the developmental program of an embryo is a fundamental question in biology. Landmark papers by Peter et al. [Peter and Davidson, Nature, 2011; Peter, Faure and Davidson, PNAS, 2012] have recently shown how computational models of gene regulatory networks provide system-level causal understanding of the developmental processes of the sea urchin, and enable powerful predictive capabilities. A crucial aspect of their work is empirically deriving plausible models that explain all the known experimental data, a task that becomes infeasible in practice due to the inherent complexity of the biological systems. We present a generic SMT-based approach to analyse and synthesize real models. We apply our approach to the sea urchin embryo, and successfully improve the state of the art by providing, for the first time, biologists with models that perfectly explain all known data. Furthermore, we show how infeasibility results on certain classes of models can be used to drive new biological insights.

15:00
Finding instability in biological models

ABSTRACT. The stability of biological models is an important test for establishing their soundness and accuracy. Stability in biological systems represents the ability of a robust system to always return to homeostasis. In recent work, modular approaches for proving stability have been found to be swift and scalable. If stability is not proved, however, the currently available techniques apply an exhaustive search through the unstable state space to find loops. This search is frequently prohibitively computationally expensive, limiting its usefulness. Here we present a new modular approach eliminating the need for an exhaustive search for loops. Using models of biological systems we show that the technique finds loops significantly faster than brute-force approaches. Furthermore, for a subset of stable systems which are resistant to modular proofs, we observe a speed-up of up to 3 orders of magnitude as the exhaustive searches for loops which cause instability are avoided. With our new procedure we are able to prove instability and stability in a number of realistic biological models, including adaptation in bacterial chemotaxis, the lambda phage lysogeny/lysis switch, voltage-gated channel opening and cAMP oscillations in the slime mold Dictyostelium discoideum. This new approach will support the development of new clinically relevant tools for industrial biomedicine.

15:20
Invariant verification of nonlinear hybrid automata networks of cardiac cells
SPEAKER: Zhenqi Huang

ABSTRACT. Verification algorithms for networks of nonlinear hybrid automata (HA) can aid our understanding and our ability to control biological processes such as cardiac arrhythmia, memory formation, and genetic regulation. We present an algorithm for over-approximating reach sets of networks of nonlinear HA which can be used for sound and relatively complete invariant checking. It uses automatically computed input-to-state discrepancy functions for the individual automata modules in the network A for constructing a low-dimensional model M. Simulations of both A and M are then used to compute the reach tubes for A. These techniques enable us to handle a challenging verification problem involving a network of cardiac cells where each cell has four continuous variables and 29 locations. Our prototype tool can check bounded-time invariants for networks with 5 cells (20 continuous variables, 29^5 locations), typically in less than 15 minutes for reasonable time horizons. From the computed reach tubes we can infer biologically relevant properties of the network from a set of initial states.

15:40
Diamonds are a Girl's Best Friend: Partial Order Reduction for Timed Automata With Abstractions
SPEAKER: Henri Hansen

ABSTRACT. A major obstacle for using partial order reduction in the context of real time verification is that the presence of clocks and clock constraints breaks the usual diamond structure of otherwise independent transitions. This is especially true when information on the relative values of clocks is preserved in the form of diagonal constraints. However, when diagonal constraints are relaxed by a suitable abstraction, some diamond structure is preserved in the zone graph. In this article, we introduce a variant of the stubborn set method for reducing an abstracted zone graph. Our method works with all abstractions, but is especially suited to situations where one abstract execution can simulate several permutations of the corresponding concrete execution, even though it might not be able to simulate the permutations of the abstract execution. We define independence relations with respect to this "hidden" diamond structure, and define stubborn sets using these relations. We provide a reference implementation for verifying timed language inclusion, to demonstrate the effectiveness of our method.

14:30-16:00 Session 129D: Technical Session: Semantics (ICLP)
Location: FH, Hörsaal 8
14:30
On Cascade Products of Answer Set Programs

ABSTRACT. Describing complex objects by elementary ones is a common strategy in mathematics and science in general. In their seminal 1965 paper, Kenneth Krohn and John Rhodes showed that every finite deterministic automaton can be represented (or “emulated”) by a cascade product of very simple automata. This led to an elegant algebraic theory of automata based on finite semigroups (Krohn-Rhodes Theory). Surprisingly, by relating logic programs and automata, we can show in this paper that the Krohn-Rhodes Theory is applicable in Answer Set Programming (ASP). More precisely, we recast the concept of a cascade product to ASP, and prove that every program can be represented by a product of very simple programs, the reset and standard programs. Roughly, this implies that the reset and standard programs are the basic building blocks of ASP with respect to the cascade product. In a broader sense, this paper is a first step towards an algebraic theory of products and networks of nonmonotonic reasoning systems based on Krohn-Rhodes Theory, aiming at important open issues in ASP and AI in general.

15:00
Vicious Circle Principle and Logic Programs with Aggregates
SPEAKER: unknown

ABSTRACT. The paper presents a knowledge representation language Alog which extends ASP with aggregates. The goal is to have a language based on simple syntax and clear intuitive and mathematical semantics. We give some properties of Alog, an algorithm for computing its answer sets, and a comparison with other approaches.

15:30
Causal Graph Justifications of Logic Programs

ABSTRACT. In this work we propose a multi-valued extension of logic programs under the stable models semantics where each true atom in a model is associated with a set of justifications. These justifications are expressed in terms of causal graphs formed by rule labels and edges that represent their application ordering. For positive programs, we show that the causal justifications obtained for a given atom have a direct correspondence to (relevant) syntactic proofs of that atom using the program rules involved in the graphs. The most interesting contribution is that this causal information is obtained in a purely semantic way, by algebraic operations (product, sum and application) on a lattice of causal values whose ordering relation expresses when a justification is stronger than another. Finally, for programs with negation, we define the concept of causal stable model by introducing an analogous transformation to Gelfond and Lifschitz's program reduct. As a result, default negation behaves as "absence of proof" and no justification is derived from negative literals, which is convenient for elaboration tolerance, as we explain with several examples.

14:30-16:00 Session 129E: Verification (IJCAR)
Location: FH, Hörsaal 5
14:30
Locality Transfer: From Constrained Axiomatizations to Reachability Predicates

ABSTRACT. In this paper, we build upon our previous work on using constrained clauses in order to finitely represent possibly infinite sets of clauses.

We explore theories and theory extensions that are axiomatized by sets of constrained clauses, and we characterize syntactic criteria for their locality. We have previously shown that constrained axiomatizations are local if they are saturated under a version of resolution. The appearing constraints often capture interesting properties like reachability or extensions thereof.

We extend this result by proving that locality of saturated axiomatizations is maintained if we enrich the base theory by introducing new predicates instead of using constraints for these properties. Often, these new predicates are reachability predicates.

Applications of these results include reasoning about data structures; we indicate how the ideas can be adapted for also handling certain theories with greatest/smallest fixpoints.

15:00
Proving Termination and Memory Safety for Programs with Pointer Arithmetic

ABSTRACT. While automated verification of imperative programs has been studied intensively, proving termination of programs with explicit pointer arithmetic is still an open problem. To close this gap, we introduce a novel abstract domain that can track allocated memory in detail. We use it to automatically construct a symbolic execution graph that represents all possible runs of the program. This graph is then transformed into an integer transition system, whose termination can be proved by standard techniques. We implemented this approach in the automated termination prover AProVE and demonstrate its capability of analyzing C programs with pointer arithmetic that existing tools cannot handle.

15:30
QBF Encoding of Temporal Properties and QBF-Based Verification
SPEAKER: Wenhui Zhang

ABSTRACT. SAT and QBF solving techniques have applications in various areas. SAT solving has also been applied in the area of formal verification of temporal properties of transition system models. Because of its limited expressive succinctness, however, complicated verification problems cannot be represented naturally and succinctly as SAT formulas. This paper investigates QBF applications in this area, aiming at the verification of branching-time temporal logic properties of transition system models. Branching-time temporal logic properties involve operators that may require checking whether certain kinds of paths exist starting at different states of a system model. This requires the use of quantifiers in the encoding of such properties. The focus of this paper is on temporal logic properties specified by the extended computation tree logic that allows some sort of fairness, and the main contribution of this paper is a bounded semantics for the extended computation tree logic. A QBF encoding of the temporal logic is then developed from the definition of the bounded semantics, and an implementation of QBF-based verification follows from the QBF encoding. Experimental evaluation of the feasibility and the computational properties of such a QBF-based verification algorithm is reported.
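
To indicate the flavour of such encodings, recall the standard k-bounded lasso encoding of the CTL formula EG p (a textbook bounded-model-checking construction, shown only for orientation; the paper's bounded semantics for extended CTL is more general). Writing T for the transition relation and omitting the initial-state constraint:

    [[EG p]]_k  =  exists s_0 ... s_k .  AND_{i=0..k-1} T(s_i, s_{i+1})
                   AND  AND_{i=0..k} p(s_i)  AND  OR_{l=0..k} T(s_k, s_l)

Nesting such path operators at different states forces a plain SAT encoding to copy the path variables for every state where the subformula is evaluated, whereas QBF can quantify over those states directly; this is the kind of succinctness the QBF encoding exploits.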

14:30-16:00 Session 129F: Cryptography 1 (CSF)
Location: FH, Hörsaal 6
14:30
Attribute-based Encryption for Access Control Using Elementary Operations
SPEAKER: unknown

ABSTRACT. Attribute-based encryption (ABE) has attracted considerable attention in the research community in recent years. It has a number of applications such as broadcast encryption and the cryptographic enforcement of access control policies. Existing instantiations of ABE make use of access structures encoded either as trees of Shamir threshold secret-sharing schemes or monotone span programs. In both cases, the appropriate computations for these schemes need to be interleaved with the standard operations for cryptographic pairings. Moreover, the resulting schemes are not particularly appropriate for access control policies. In this paper, therefore, we start by examining the representation of access control policies and investigate alternative secret-sharing schemes that could be used to enforce them. We develop new ABE schemes based on the Benaloh-Leichter scheme, which employs only elementary arithmetic operations, and then extend this to arbitrary linear secret sharing schemes. We then compare the complexity of existing schemes with our scheme based on Benaloh-Leichter.
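
The "elementary arithmetic operations" at the heart of the Benaloh-Leichter scheme are easy to make concrete. The Python sketch below shows share generation for a monotone AND/OR policy tree only; the nested-tuple encoding of policies and the choice of modulus are our own illustrative assumptions, and the pairing-based encryption built around these shares in the paper is omitted.

    # Benaloh-Leichter secret sharing over a monotone AND/OR policy tree:
    # OR passes the secret to every child; AND splits it additively mod Q.
    import secrets

    Q = 2**61 - 1  # a prime modulus, chosen arbitrarily for illustration

    def share(secret, node, out):
        kind = node[0]
        if kind == "attr":
            out.setdefault(node[1], []).append(secret)
        elif kind == "or":            # any single child suffices
            for child in node[1:]:
                share(secret, child, out)
        elif kind == "and":           # all children needed: additive split
            parts = [secrets.randbelow(Q) for _ in node[2:]]
            first = (secret - sum(parts)) % Q
            for child, piece in zip(node[1:], [first] + parts):
                share(piece, child, out)

    shares = {}
    policy = ("and", ("attr", "A"), ("or", ("attr", "B"), ("attr", "C")))
    share(42, policy, shares)
    # A's share plus either B's or C's share sums to 42 modulo Q.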

15:00
Automated Analysis and Synthesis of Block-Cipher Modes of Operation

ABSTRACT. Block cipher modes of operation provide a mechanism for encrypting arbitrary length messages based on block ciphers having some fixed (small) block size. Existing techniques for proving modes secure under standard security definitions require analyzing the global behavior of the mode. We propose a new approach which requires only local analysis of a single block. As part of our approach, we model modes of operation as graphs, and specify a set of type-inference rules over the edges of the graph. This allows us to reduce the question of security (against chosen-plaintext attacks) for some mode to the question of whether the graph has a consistent type assignment, which in turn can be reduced to a question of boolean satisfiability. We couple this security-analysis tool with a synthesizer that automatically generates viable modes using an SMT solver.

15:30
Certified Synthesis of Efficient Batch Verifiers
SPEAKER: unknown

ABSTRACT. Many algorithms admit very efficient batch versions that compute simultaneously the output of the algorithms on a set of inputs. Batch algorithms are widely used in cryptography, especially in the setting of pairing-based computations, where they deliver significant speed-ups.

AutoBatch is an automated tool that computes highly optimized batch verification algorithms for pairing-based signature schemes. Thanks to finely tuned heuristics, AutoBatch is able to rediscover efficient batch verifiers for several signature schemes of interest, and in some cases to output batch verifiers that outperform the best known verifiers from the literature. However, AutoBatch only provides weak guarantees (in the form of a LaTeX proof) of the correctness of the batch algorithms it outputs. In this paper, we verify the correctness and security of these algorithms using the EasyCrypt framework. To achieve this goal, we define a domain-specific language to describe verification algorithms based on pairings and provide a very efficient algorithm for checking (approximate) observational equivalence between expressions of this language. By translating the output of AutoBatch to this language and applying our verification procedure, we obtain machine-checked correctness proofs of the batch verifiers. Moreover, we formalize notions of security for batch verifiers and we provide a generic proof in EasyCrypt that batch verifiers satisfy a security property called screening, provided they are correct and the original signature scheme is unforgeable against chosen-message attacks. We apply our techniques to several well-known pairing-based signature schemes from the literature, and to Groth-Sahai proofs in the SXDH setting.

14:30-16:00 Session 129G: FLoC Olympic Games: Answer Set Programming Modeling Competition 2014 (joint session of 10 meetings)
Location: FH, Hörsaal 2
14:30
FLoC Olympic Games: Answer Set Programming Modeling Competition 2014
SPEAKER: unknown

ABSTRACT. Answer Set Programming (ASP) is a well-established paradigm of declarative programming that has been developed in the field of logic programming and nonmonotonic reasoning. The ASP Modeling Competition 2014, held on-site and collocated with the 30th International Conference on Logic Programming (ICLP), follows the spirit of the Prolog Programming Contests from previous ICLP editions. That is, complex problems of various kinds are waiting to be modeled, using ASP-Core-2, the standard input language of ASP systems. Dauntless programmers, novices and experts alike, are encouraged to team up, take the challenge and the glory.

14:30-16:00 Session 129H (ICLP)
Location: FH, Seminarraum 107
14:30
Application of Methods for Syntax Analysis of Context-Free Languages to Query Evaluation of Logic Programs
SPEAKER: Heike Stephan

ABSTRACT. My research goal is to apply a parser generation algorithm based on the Earley parsing algorithm to the evaluation and compilation of queries to logic programs, especially to deductive databases. By means of partial deduction, a parameterized automaton modeling the evaluation of a query to a logic program is generated from that query. This automaton can be compiled to executable code; we therefore expect a speedup in the runtime of query evaluation.
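
For reference, the core of the Earley algorithm the work builds on can be sketched in a few lines; this recognizer is our illustration (the toy grammar and all identifiers are ours), not the author's generator:

GRAMMAR = {'S': [['S', '+', 'n'], ['n']]}   # hypothetical toy grammar

def earley(tokens, grammar, start='S'):
    # chart[k] holds items (head, body, dot, origin);
    # nullable rules would need extra care, omitted here
    chart = [set() for _ in range(len(tokens) + 1)]
    for body in grammar[start]:
        chart[0].add((start, tuple(body), 0, 0))
    for k in range(len(tokens) + 1):
        agenda = list(chart[k])
        while agenda:
            head, body, dot, origin = agenda.pop()
            if dot < len(body):
                sym = body[dot]
                if sym in grammar:                          # predict
                    for b in grammar[sym]:
                        new = (sym, tuple(b), 0, k)
                        if new not in chart[k]:
                            chart[k].add(new); agenda.append(new)
                elif k < len(tokens) and tokens[k] == sym:  # scan
                    chart[k + 1].add((head, body, dot + 1, origin))
            else:                                           # complete
                for h2, b2, d2, o2 in list(chart[origin]):
                    if d2 < len(b2) and b2[d2] == head:
                        new = (h2, b2, d2 + 1, o2)
                        if new not in chart[k]:
                            chart[k].add(new); agenda.append(new)
    return any(h == start and d == len(b) and o == 0
               for h, b, d, o in chart[len(tokens)])

print(earley(['n', '+', 'n'], GRAMMAR))   # True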

14:55
Bound Founded Answer Set Programming

ABSTRACT. Answer Set Programming (ASP) is a powerful modelling formalism that is very efficient at solving combinatorial problems. ASP solvers implement the stable model semantics, which eliminates circular derivations between Boolean variables from the solutions of a logic program. Because of this, ASP solvers are better suited than propositional satisfiability (SAT) and Constraint Programming (CP) solvers to a certain class of problems whose specification includes inductive definitions, such as reachability in a graph. On the other hand, ASP solvers suffer from a grounding bottleneck that occurs due to their inability to model finite-domain variables. Furthermore, the existing stable model semantics is not sufficient to disallow circular reasoning on the bounds of numeric variables. An example where this is required is in modelling shortest paths between nodes in a graph. Just as reachability can be encoded as an inductive definition with one or more base cases and recursive rules, shortest paths between nodes can also be modelled with similar base cases and recursive rules for their upper bounds. This deficiency of the stable model semantics introduces another type of grounding bottleneck in ASP systems that cannot be removed by naively merging ASP with CP solvers, but requires a theoretical extension of the semantics from Booleans and normal rules to bounds over numeric variables and more general rules. In this work, we propose Bound Founded Answer Set Programming (BFASP), which resolves this issue and, consequently, removes all types of grounding bottleneck inherent in ASP systems.
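
The shortest-path example can be made concrete: the founded upper bound ub(v) is the least fixpoint of the base and recursive rules, which rules out bounds that justify themselves circularly. A minimal sketch of that reading (our illustration, assuming non-negative weights; all identifiers are ours):

INF = float('inf')

def founded_shortest_paths(nodes, arcs, source):
    # base case: ub(source) <= 0; recursive rule: ub(v) <= ub(u) + w(u,v)
    ub = {v: INF for v in nodes}
    ub[source] = 0
    changed = True
    while changed:                       # iterate to the least fixpoint
        changed = False
        for u, v, w in arcs:
            if ub[u] + w < ub[v]:
                ub[v] = ub[u] + w
                changed = True
    return ub

arcs = [('a', 'b', 1), ('b', 'c', 2), ('c', 'b', 1)]   # b, c form a cycle
print(founded_shortest_paths('abc', arcs, 'a'))
# {'a': 0, 'b': 1, 'c': 3}: the cycle cannot lower b's bound via c's,
# because c's bound is itself derived from b's (circular justification).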

15:20
CDF-Intervals: A Reliable Framework to Reason about Data with Uncertainty
SPEAKER: Aya Saad

ABSTRACT. This research introduces a new constraint domain for reasoning about data with uncertainty. It extends convex modeling with the notion of a p-box to gain additional quantifiable information on the data's whereabouts. Unlike existing approaches, the p-box envelops an unknown probability distribution instead of approximating its representation. The p-box bounds are uniform cumulative distribution functions (cdfs), so that computations in the probabilistic domain remain linear. Reasoning by means of p-box cdf-intervals is an interval computation performed on the real domain and then projected onto the cdf domain; this operation conveys additional knowledge represented by the obtained probabilistic bounds. Empirical evaluation shows that, with minimal overhead, the output solution set realizes a full enclosure of the data along with tighter bounds on its probabilistic distributions.

14:30-16:00 Session 129I (NCPROOFS)
Location: FH, Seminarraum 104
14:30
Introducing Substitution in Proof Theory
15:00
Conditional logics: the quest for internal proof systems
15:30
From Frame Properties to Hypersequent Rules in Modal Logics
SPEAKER: Ori Lahav
16:00-16:30 Coffee Break
16:30-18:00 Session 130A: Tutorial 3 (Continued) - Alessio Lomuscio: Verification of Multi-Agent Systems against Epistemic Specifications (KR)
Location: EI, EI 10
16:30
Verification of Multi-Agent Systems against Epistemic Specifications (Part II)

ABSTRACT. Autonomous multi-agent systems are typically specified by using expressive properties representing, among others, the knowledge of the agents and what they can bring about collectively or on their own in a system. This tutorial will survey some of the current techniques addressing the model checking of multi-agent systems against agent-based specifications. Specifically, the tutorial will cover syntax and semantics for the temporal-epistemic logic CTLK and for ATLK, the epistemic variant of alternating-time temporal logic. It will also cover ordered binary decision diagrams and symbolic labelling, parameter synthesis and strategy synthesis for ATLK specifications, verification of systems with an unbounded number of agents, and MCMAS, a model checker for multi-agent systems implementing these techniques, as well as the application of the techniques to a range of scenarios, including service-oriented computing.
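
As a taste of the semantics covered, the epistemic operator K_i can be computed set-theoretically: agent i knows phi in a state s iff phi holds in every state that i cannot distinguish from s. A toy explicit-state sketch of this (our illustration; MCMAS itself works symbolically with OBDDs):

def K(agent_equiv, phi_states, all_states):
    """agent_equiv: dict state -> equivalence class (frozenset of states)."""
    return {s for s in all_states if agent_equiv[s] <= phi_states}

states = {0, 1, 2, 3}
# agent 1 cannot distinguish 0 from 1, nor 2 from 3:
eq = {0: frozenset({0, 1}), 1: frozenset({0, 1}),
     2: frozenset({2, 3}), 3: frozenset({2, 3})}
phi = {0, 1, 2}            # states where phi holds
print(K(eq, phi, states))  # {0, 1}: in state 2, agent 1 cannot rule out 3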

16:30-18:00 Session 130B: Tutorial 4 (Continued) - Riccardo Rosati: Query Answering and Rewriting in Ontology-Based Data Access (KR)
Location: EI, EI 9
16:30
Query Answering and Rewriting in Ontology-Based Data Access (Part II)

ABSTRACT. Ontology-based data access (OBDA) has recently become a hot research topic in knowledge representation and data management. The basic idea underlying OBDA is to superimpose an ontology (conceptual layer) on a set of data sources (data layer), and to use the ontology as a virtual schema for querying the data sources. The ontology and the data sources are connected through declarative mappings that provide the semantic relationship between the two layers. This tutorial will introduce ontology-based data access and will deal with the problem of query answering in OBDA, focusing in particular on the query rewriting approach to query answering. After a brief history of ontology-based data access, the tutorial will present an overview of the most important results obtained in this area. Then, the main query rewriting techniques for OBDA will be presented and compared. The relationship between OBDA and other research topics will also be addressed. The main focus of the tutorial is on ontologies expressed using Description Logics; however, other ontology specification languages, like Datalog+/-, will be considered.
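
The core of the rewriting approach, in a DL-Lite-like setting, is to compile the ontology's axioms into the query so that the result can be evaluated over the data alone. A minimal sketch for concept atoms and subclass axioms (our illustration; the axioms and identifiers are made up):

AXIOMS = [('Lecturer', 'Teacher'), ('Professor', 'Teacher')]  # A is-a B

def rewrite(atom):
    """Return all atoms that imply `atom` under the subclass axioms."""
    result, frontier = {atom}, [atom]
    while frontier:
        concept = frontier.pop()
        for sub, sup in AXIOMS:
            if sup == concept and sub not in result:
                result.add(sub); frontier.append(sub)
    return result

# q(x) :- Teacher(x) rewrites to the union of queries
#   q(x) :- Teacher(x),  q(x) :- Lecturer(x),  q(x) :- Professor(x)
print(rewrite('Teacher'))   # {'Teacher', 'Lecturer', 'Professor'}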

16:30-17:10 Session 130C: Hybrid Systems (CAV)
Location: FH, Hörsaal 1
16:30
Reachability Analysis of Hybrid Systems Using Symbolic Orthogonal Projections

ABSTRACT. This talk deals with reachability analysis of hybrid systems with continuous dynamics described by linear differential inclusions and arbitrary linear maps for discrete updates. The invariants, guards, and sets of reachable states are given as convex polyhedra. Our reachability algorithm is based on a novel representation class for convex polyhedra, the symbolic orthogonal projections (sops), on which various geometric operations, including convex hulls, Minkowski sums, linear maps, and intersections, can be performed efficiently and exactly. The ability to represent intersections of convex polyhedra exactly is an advantage over support-function-based approaches like the LGG-algorithm of Le Guernic and Girard.

Accompanied by some simple examples, we address the problem of the monotonic growth of the exact representation and propose a combination of our reachability algorithm with the LGG-algorithm. This results in an efficient method that is more accurate than the LGG-algorithm and its implementation in SpaceEx.
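
The appeal of sops is that the projection {x | there is z with Ax + Lz <= b} is never computed explicitly; geometric operations become block-matrix constructions. A simplified sketch of two such operations (our rendering, not the authors' implementation; the class layout and names are ours):

import numpy as np

class Sop:
    def __init__(self, A, L, b):
        self.A, self.L, self.b = A, L, b   # rows: A x + L z <= b

def intersect(p, q):
    # stack constraints; the auxiliary variables of p and q stay disjoint
    A = np.vstack([p.A, q.A])
    L = np.block([[p.L, np.zeros((p.L.shape[0], q.L.shape[1]))],
                  [np.zeros((q.L.shape[0], p.L.shape[1])), q.L]])
    return Sop(A, L, np.concatenate([p.b, q.b]))

def minkowski_sum(p, q):
    # x = u + v with u in p, v in q: make v an auxiliary variable, so
    # p.A (x - v) + p.L z1 <= p.b  and  q.A v + q.L z2 <= q.b
    A = np.vstack([p.A, np.zeros_like(q.A)])
    L = np.block([[-p.A, p.L, np.zeros((p.A.shape[0], q.L.shape[1]))],
                  [q.A, np.zeros((q.A.shape[0], p.L.shape[1])), q.L]])
    return Sop(A, L, np.concatenate([p.b, q.b]))

# unit box; one dummy auxiliary column keeps the block shapes simple
A = np.vstack([np.eye(2), -np.eye(2)])
box = Sop(A, np.zeros((4, 1)), np.ones(4))
print(minkowski_sum(box, box).A.shape)   # (8, 2)

Emptiness and support queries on a sop then reduce to linear programs over (x, z), which is what keeps the operations both exact and efficient.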

16:50
Verifying LTL properties of hybrid systems with K-liveness

ABSTRACT. The verification of liveness properties is an important challenge in the design of real-time and hybrid systems. In contrast to the verification of safety properties, for which several solutions are available, very few tools support liveness properties such as general LTL formulas for real-time systems, even in the case of timed automata. In the context of finite-state model checking, K-Liveness is a recently proposed algorithm that tackles the problem by proving that an accepting condition can be visited at most K times. K-Liveness has been shown to be very efficient, thanks also to its tight integration with IC3, probably the most effective technique for safety verification available today. Unfortunately, the approach is neither complete nor effective (even for simple properties) in the case of infinite-state systems with continuous time.

In this paper, we extend K-Liveness to deal with LTL for hybrid systems. On the theoretical side, we show how to extend the reduction from LTL to the reachability of an accepting condition in order to make the algorithm work with continuous time. In particular, we prove that the new reduction is complete for rectangular hybrid automata, in the sense that the LTL property holds if and only if there exists a K such that the accepting condition is visited at most K times. On the practical side, we present an efficient integration of K-Liveness into an SMT-based version of IC3, and demonstrate its effectiveness on several benchmarks.
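
Schematically, K-Liveness turns one liveness question into a family of safety questions. A sketch of the outer loop (ours; check_safety is a hypothetical callable standing in for the IC3/SMT back end):

def k_liveness(check_safety, max_k=100):
    """check_safety(k) decides the safety property
    'the accepting condition is visited at most k times'."""
    for k in range(max_k + 1):
        if check_safety(k):
            return ('property holds', k)
    # either the property fails or no finite K suffices; for rectangular
    # hybrid automata the paper shows such a K exists iff the LTL
    # property holds, making the loop a complete procedure there
    return ('unknown', max_k)

print(k_liveness(lambda k: k >= 3))   # ('property holds', 3)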

16:30-18:00 Session 130D: ASP Competition Report / Technical Session - Knowledge Representation and Reasoning (ICLP)
Location: FH, Hörsaal 8
16:30
The Design of the Fifth Answer Set Programming Competition
SPEAKER: unknown

ABSTRACT. Answer Set Programming (ASP) is a well-established paradigm of declarative programming that has been developed in the field of logic programming and nonmonotonic reasoning. Advances in ASP solving technology are customarily assessed in competition events, as happens for other closely related problem-solving technologies like SAT/SMT, QBF, Planning and Scheduling. ASP Competitions are (usually) biennial events; however, the Fifth ASP Competition departs from tradition in order to join the FLoC Olympic Games at the Vienna Summer of Logic 2014, which is expected to be the largest event in the history of logic. This edition of the ASP Competition series is jointly organized by the University of Calabria (Italy), Aalto University (Finland), and the University of Genova (Italy), and is affiliated with the 30th International Conference on Logic Programming (ICLP 2014). It features a completely re-designed setup, with novelties involving the design of tracks, the scoring schema, and the adherence to a fixed modeling language, in order to push the adoption of the ASP-Core-2 standard. Benchmark domains are taken from past editions, and the best system packages submitted in 2013 are compared with new versions and solvers.

17:00
Simulating Dynamic Systems Using Linear Time Calculus Theories
SPEAKER: Bart Bogaerts

ABSTRACT. Dynamic systems play a central role in fields such as planning, verification, and databases. Fragmented throughout these fields, we find a multitude of languages to formally specify dynamic systems and a multitude of systems to reason on such specifications. Often, such systems are bound to one specific language and one specific inference task. It is troublesome that performing several inference tasks on the same knowledge requires translating the specification into other languages. In this paper we study whether it is possible to perform a broad set of well-studied inference tasks on one specification. More concretely, we extend IDP with several inferences from fields concerned with dynamic specifications.

17:30
claspfolio 2: Advances in Algorithm Selection for Answer Set Programming

ABSTRACT. Building on the award-winning, portfolio-based ASP solver claspfolio, we present claspfolio 2, a modular and open solver architecture that integrates several different portfolio-based algorithm selection approaches and techniques. Our claspfolio 2 solver framework supports various feature generators, solver selection approaches, solver portfolios, as well as solver-schedule-based pre-solving techniques. The default configuration of claspfolio 2 relies on a lightweight version of the ASP solver clasp, whose statistics are used for generating static and dynamic instance features. The flexible open design of claspfolio 2 is a distinguishing factor even beyond ASP: it provides a unique framework for comparing and combining existing portfolio-based algorithm selection approaches and techniques. Taking advantage of this, we conducted an extensive experimental study comparing the impact of different feature sets, selection approaches and base solver portfolios. In addition to gaining substantial insights into the impact of the various approaches and techniques, we identified a default configuration of claspfolio 2 that achieves substantial performance gains not only over clasp's default configuration but also over manually tuned configurations of clasp.
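
At its simplest, per-instance algorithm selection of the kind claspfolio 2 generalizes can be sketched as nearest-neighbour selection in feature space (our stripped-down illustration; the features, solver names, and numbers are invented):

import math

def select_solver(train, features):
    """train: list of (feature_vector, {solver_name: runtime}) pairs."""
    # find the training instance closest to the new one in feature space
    _, runtimes = min(train, key=lambda fr: math.dist(fr[0], features))
    # pick the solver that was fastest on that neighbour
    return min(runtimes, key=runtimes.get)

train = [([0.1, 3.0], {'clasp-default': 12.0, 'clasp-tuned': 3.5}),
         ([0.9, 1.0], {'clasp-default': 2.0, 'clasp-tuned': 9.0})]
print(select_solver(train, [0.8, 1.2]))   # 'clasp-default'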

16:30-18:00 Session 130E: Proof Theory (IJCAR)
Location: FH, Hörsaal 5
16:30
Introducing quantified cuts in logic with equality
SPEAKER: unknown

ABSTRACT. Cut-introduction is a technique for structuring and compressing formal proofs. In this paper we generalize our cut-introduction method from the introduction of quantified lemmas of the form ∀ x.A (for quantifier-free A) to a method generating lemmas of the form ∀ x_1 ... ∀ x_n.A. Moreover, we extend the original method to predicate logic with equality. The new method was implemented and applied to the TSTP proof database. It is shown that the extension of the method to handle equality and quantifier blocks leads to a substantial improvement over the old algorithm.
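
A standard illustration of the compression such lemmas provide (ours, not taken from the paper): a cut-free proof of P(0), ∀x (P(x) → P(s(x))) ⊢ P(s^n(0)) needs n instances of the step axiom, whereas introducing the quantified cuts

\[
  \forall x\,\bigl(P(x) \to P(s^{2}(x))\bigr),\qquad
  \forall x\,\bigl(P(x) \to P(s^{4}(x))\bigr),\ \ldots
\]

yields a proof of size O(log n), since each lemma follows from the previous one by two instantiations; lemmas with quantifier blocks ∀x_1 ... ∀x_n.A generalize this pattern.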

17:00
Quati: An Automated Tool for Proving Permutation Lemmas
SPEAKER: unknown

ABSTRACT. The proofs of many foundational results in structural proof theory, such as the admissibility of the cut rule and the completeness of the focusing discipline, rely on permutation lemmas. Proving such lemmas is often a tedious and error-prone task, as they involve many cases. This paper describes Quati, an automated tool capable of proving a wide range of inference-rule permutations for a large number of proof systems. Given a proof system specification in the form of a theory in linear logic with subexponentials, Quati outputs in LaTeX the permutation transformations for which it was able to prove correctness, as well as the possible derivations for which it was not able to do so. As illustrated in this paper, Quati's output is very similar to the proof derivation figures one would normally find in a proof theory book.

17:20
A History-Based Theorem Prover for Intuitionistic Propositional Logic using Global Caching: IntHistGC System Description
SPEAKER: unknown

ABSTRACT. We describe an implementation of a new theorem prover for Intuitionistic Propositional Logic based on a sequent calculus with histories due to Corsi and Tassi. The main novelty of the prover lies in its use of dependency-directed backtracking for global caching. We analyse the performance of the prover, and of various optimisations, in comparison to current state-of-the-art theorem provers, and show that it produces competitive results on many classes of formulae.

17:40
MleanCoP: A Connection Prover for First-Order Modal Logic
SPEAKER: Jens Otten

ABSTRACT. MleanCoP is an automated theorem prover for first-order modal logic. The proof search is based on a prefixed connection calculus and an additional prefix unification, which captures the Kripke semantics of different modal logics. MleanCoP is implemented in Prolog and the source code of the core proof search procedure consists of only a few lines. It supports the standard modal logics D, T, S4, and S5 with constant, cumulative, and varying domain conditions. The most recent version supports heterogeneous multimodal logics and outputs a compact prefixed connection proof. An experimental evaluation shows the strong performance of MleanCoP.

16:30-18:00 Session 130F: Cryptography 2 (CSF)
Location: FH, Hörsaal 6
16:30
A Peered Bulletin Board for Robust Use in Verifiable Voting Systems
SPEAKER: unknown

ABSTRACT. The Secure Web Bulletin Board (WBB) is a key component of verifiable election systems. However, there is very little in the literature on its specification, design and implementation, and there are no formally analysed designs. The WBB is used in the context of election verification to publish evidence of voting and tallying that voters and officials can check, and where challenges can be launched in the event of malfeasance. In practice, the election authority has responsibility for implementing the web bulletin board correctly and reliably, and will wish to ensure that it behaves correctly even in the presence of failures and attacks. To ensure robustness, an implementation will typically use a number of peers to be able to provide a correct service even when some peers go down or behave dishonestly. In this paper we propose a new protocol to implement such a Web Bulletin Board, motivated by the needs of the vVote verifiable voting system. Using a distributed algorithm increases the complexity of the protocol and requires careful reasoning in order to establish correctness. Here we use the Event-B modelling and refinement approach to establish correctness of the peered design against an idealised specification of the bulletin board behaviour. In particular, we have shown that for n peers, a threshold of t > 2n/3 peers behaving correctly is sufficient to ensure correct behaviour of the bulletin board distributed design (for example, at least three correct peers out of four). The algorithm also behaves correctly even if honest or dishonest peers temporarily drop out of the protocol and then return. The verification approach also establishes that the protocols used within the bulletin board do not interfere with each other. This is the first time a peered secure web bulletin board suite of protocols has been formally verified.

17:00
From input private to universally composable secure multiparty computation primitives

ABSTRACT. Secure multiparty computation systems are commonly built from a small set of primitive components. Composability of security notions plays a central role in the analysis of such systems, since it allows us to deduce security properties of complex protocols from the properties of their components. We show that the standard notions of universally composable security are overly restrictive in this context and can lead to protocols with sub-optimal performance. As a remedy, we introduce a weaker notion of privacy that is satisfied by simpler protocols and is preserved by composition. We then fix a passive security model and show how to convert a private protocol into a universally composable protocol. As a result, we obtain modular security proofs without performance penalties.

17:30
Malleable Signatures: New Definitions and Delegatable Anonymous Credentials
SPEAKER: unknown

ABSTRACT. A signature scheme is malleable if, on input a message m and a signature s, it is possible to efficiently compute a signature s' on a related message m' = T(m), for a transformation T that is allowed with respect to this signature scheme. In this paper, we first provide new definitions for malleable signatures that allow us to capture a broader range of transformations than was previously possible. We then give a generic construction based on malleable zero-knowledge proofs that allows us to construct malleable signatures for a wide range of transformation classes, with security properties that are stronger than those that have been achieved previously. Finally, we construct delegatable anonymous credentials from signatures that are malleable with respect to an appropriate class of transformations (that we show our malleable signature supports). The resulting instantiation satisfies a stronger security notion than previous schemes while also scaling linearly with the number of delegations.
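
For intuition, malleability in its uncontrolled form is easy to demonstrate: textbook RSA signatures satisfy sign(m1) * sign(m2) = sign(m1 * m2 mod N), so anyone can compute a signature on the related message T(m1, m2) = m1 * m2. A toy, insecure illustration (our example with a deliberately tiny key; the paper's goal is the opposite: to permit only a chosen class of transformations T, with strong security guarantees):

# tiny textbook RSA key (N = 61 * 53); insecure, for illustration only
N, d, e = 3233, 2753, 17

def sign(m):      return pow(m, d, N)
def verify(m, s): return pow(s, e, N) == m % N

s1, s2 = sign(4), sign(9)
s_mult = (s1 * s2) % N             # a signature on T(4, 9) = 36
print(verify(36, s_mult))          # True: derived without the secret key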

16:30-17:20 Session 130G (ICLP)
Location: FH, Seminarraum 107
16:30
Model Revision Inference for Extensions of First Order Logic

ABSTRACT. I am Joachim Jansen and this is my research summary, part of my application to the Doctoral Consortium at ICLP'14. I am a PhD student in the Knowledge Representation and Reasoning (KRR) research group, a subgroup of the Declarative Languages and Artificial Intelligence (DTAI) group in the Department of Computer Science at KU Leuven. I started my PhD in September 2012. My promotor is prof. dr. ir. Gerda Janssens and my co-promotor is prof. dr. Marc Denecker.

16:55
Logic Programming as Scripting Language for Bots in Computer Games

ABSTRACT. This publication presents a summary of research (referred to as κ-Labs) carried out in the author's Ph.D. studies on the application of Logic Programming as a scripting language for virtual character behavior control in First Person Shooter (FPS) games.

16:30-18:00 Session 130H (NCPROOFS)
Location: FH, Seminarraum 104
16:30
Admissibility and Exact Unification
17:00
What can semantics do for proof theory: the case of Paraconsistent Logics
SPEAKER: Anna Zamansky
17:30
Applications of Nested-Sequent Proof Systems for Modal Logics to the Craig Interpolation Property
SPEAKER: Roman Kuznets
17:10-18:00 Session 131: Invited Talk (CAV)
Location: FH, Hörsaal 1
17:10
Automated Testing (preliminary title)