# CAV on Saturday, July 17th

09:00‑10:00 Invited Talk
Chair: Helmut Veith
Location: AT LT4
 09:00 Pasquale Malacaria Quantitative Information Flow: from Theory to Practice? We will introduce Shannon's Information Theory as a valuable tool for measuring the leakage of confidential information in programs. The relation between Information Theory and the Lattice of Information in the context of program leakage will be explored. The basic properties of this approach (called Quantitative Information Flow) and the problems in implementing it on real code will be discussed. We will then demonstrate some recent work (with Jonathan Heusser) where these quantitative ideas are integrated with verification techniques to quantify information-leak vulnerabilities in the Linux kernel and to verify, for the first time, whether the applied software patches indeed plug the information leaks.
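To illustrate the quantitative view described in this talk, a program can be treated as a channel from secrets to observables, and the Shannon entropy of the output distribution (under a uniformly distributed secret) measures how many bits an attacker learns. The following is a minimal hypothetical sketch of this idea, not the speaker's tool; the `leakage_bits` helper and the toy PIN checker are illustrative names.

```python
import math
from collections import Counter

def leakage_bits(secrets, program):
    """Shannon-entropy leakage of program's output, assuming a uniform secret."""
    outputs = Counter(program(s) for s in secrets)
    n = len(secrets)
    return -sum((c / n) * math.log2(c / n) for c in outputs.values())

# A 2-bit PIN checker leaks little per observation...
check = lambda pin: pin == 3
print(leakage_bits(range(4), check))        # ≈ 0.811 bits

# ...while a program that reveals the PIN itself leaks everything:
print(leakage_bits(range(4), lambda pin: pin))  # 2.0 bits
```

The gap between 0.811 and 2 bits is exactly the kind of distinction a purely qualitative (leaks / does not leak) analysis cannot make.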
10:30‑12:30 Counter and Hybrid Systems Verification
Chair: Rajeev Alur
Location: AT LT4
 10:30 Federico Mari, Igor Melatti, Ivano Salvo and Enrico Tronci Synthesis of Quantized Feedback Control Software for Discrete Time Linear Hybrid Systems Many embedded systems are control systems consisting of a Quantized Feedback Controller (QFC) and of a controlled system (the plant). We present an effective algorithm that, given a Discrete Time Linear Hybrid System (DTLHS) model P for the plant, returns a correct-by-construction software implementation K of a robust, near time optimal QFC for P, along with the set of states on which K is guaranteed to work (the controllable region). Furthermore, we show that K has a Worst Case Execution Time (WCET) guaranteed to be linear in the number of bits of the quantization schema. We implemented our algorithm on top of the CUDD OBDD package and the GLPK MILP solver, and present results on using this implementation to synthesize C controllers for the buck DC-DC converter, a widely used mixed-mode analog circuit. Our experimental results show that the proposed approach is viable for moderate-size systems and is indeed able to synthesize control software meeting all safety requirements, notwithstanding nondeterministic variations in the plant parameters (robustness).
 10:55 Lijun Zhang, Zhikun She, Stefan Ratschan, Holger Hermanns and Ernst Moritz Hahn Safety Verification for Probabilistic Hybrid Systems The interplay of random phenomena and continuous real-time control deserves increased attention, for instance in wireless sensing and control applications. Safety verification for such systems thus needs to consider probabilistic variations of systems with hybrid dynamics. In safety verification of classical hybrid systems we are interested in whether a certain set of unsafe system states can be reached from a set of initial states. In the probabilistic setting, we may ask instead whether the probability of reaching unsafe states is below some given threshold. In this paper, we consider probabilistic hybrid systems and develop a general abstraction technique for verifying probabilistic safety problems. This gives rise to the first mechanisable technique that can, in practice, formally verify safety properties of non-trivial continuous-time stochastic hybrid systems without resorting to point-wise discretisation. Moreover, being based on arbitrary abstractions computed by tools for the analysis of non-probabilistic hybrid systems, improvements in the effectiveness of such tools directly carry over to the technique we describe. We demonstrate the applicability of our approach on a number of case studies, tackled using a prototypical implementation.
 11:20 Khalil Ghorbal, Eric Goubault and Sylvie Putot A Logical Product Approach to Zonotope Intersection We define and study a new abstract domain which is a fine-grained combination of zonotopes with polyhedric domains such as the interval, octagon, linear template or polyhedron domains. While the abstract transfer functions remain rather inexpensive and accurate even for interpreting non-linear computations, we are also able to interpret tests (i.e. intersections) efficiently. This fixes a known drawback of zonotopic methods, as used for reachability analysis of hybrid systems as well as for invariant generation in abstract interpretation: intersections of zonotopes are not always zonotopes, and there is not even a best zonotopic over-approximation of the intersection. We describe some examples and an implementation of our method in the APRON library, and discuss some further interesting combinations of zonotopes with non-linear or non-convex domains such as quadratic templates and maxplus polyhedra.
 11:45 Radu Iosif, Marius Bozga and Filip Konecny Fast Acceleration of Ultimately Periodic Relations Computing transitive closures of integer relations is the key to finding exact invariants of integer programs. In this paper, we describe an efficient algorithm for computing the transitive closures of difference-bounds, octagonal and finite-monoid affine relations. On the theoretical side, this framework provides a common solution to the acceleration problem for all three classes of relations. In practice, according to our experiments, the new method performs up to four orders of magnitude better than previous ones, making it a promising approach for the verification of integer programs.
 12:10 Luca Pulina and Armando Tacchella An Abstraction-Refinement Approach to Verification of Artificial Neural Networks A key problem in the adoption of Artificial Neural Networks (ANNs) in safety-related applications is that misbehaviors can hardly be ruled out with traditional analytical or probabilistic techniques. In this paper we propose a solution wherein ANNs are abstracted to Boolean combinations of linear arithmetic constraints and verified for compliance with a given safety target. We show that our abstraction is sound, i.e., whenever the abstract ANN is declared to be safe, the same holds for the concrete one. On the other hand, abstract counterexamples must be checked against the concrete ANN: false negatives call for a refinement of the abstraction, while true negatives can hint at repairing the concrete ANN. We describe an implementation of our approach based on the HySAT solver, detailing the abstraction-refinement process and an automated repair strategy. Finally, we present experimental results confirming the feasibility of our approach on a realistic case study.
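The soundness argument in the abstraction-refinement talk above can be illustrated with a much simpler over-approximation than the paper's linear-arithmetic encoding: interval arithmetic pushed through a tiny, hypothetical one-neuron-per-layer ReLU network. If the abstract (interval) output lies inside the safe range, so does every concrete output. The network weights below are invented for illustration.

```python
def relu(v):
    return max(0.0, v)

def affine_interval(w, b, lo, hi):
    """Exact interval image of x -> w*x + b on [lo, hi]."""
    a, c = w * lo + b, w * hi + b
    return (min(a, c), max(a, c))

def relu_interval(lo, hi):
    # ReLU is monotone, so applying it to the endpoints is exact.
    return (relu(lo), relu(hi))

def abstract_net(x_lo, x_hi):
    # Hypothetical network: hidden = relu(2x - 1), out = relu(-hidden + 3).
    h_lo, h_hi = relu_interval(*affine_interval(2.0, -1.0, x_lo, x_hi))
    return relu_interval(*affine_interval(-1.0, 3.0, h_lo, h_hi))

lo, hi = abstract_net(0.0, 1.0)   # -> (2.0, 3.0)
# If [2.0, 3.0] is inside the safe range, the concrete net is safe on [0, 1];
# if not, a concrete counterexample must still be sought (the abstraction
# only over-approximates, mirroring the soundness direction of the paper).
```

For this particular net the interval result happens to be exact; in general intervals lose precision at each layer, which is what motivates richer abstract domains.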
14:00‑14:50 Memory Consistency
Chair: Ganesh Gopalakrishnan
Location: AT LT4
 14:00 Jade Alglave, Luc Maranget, Susmit Sarkar and Peter Sewell Fences in Weak Memory Models We present an axiomatic framework, implemented in Coq, to define weak memory models with respect to several parameters: local reorderings of reads and writes, and visibility of inter- and intra-processor communications through memory, including full store atomicity relaxation. Thereby, we give a formal hierarchy of weak memory models, in which we provide a formal study of what the action and placement of fences should be to restore a given model, such as SC, from a weaker one. Finally, we provide a tool, diy, that tests a given machine to determine the architecture it exhibits. We detail the results of our experiments on Power and the model we extract from them. This identified a rare implementation error in Power 5 memory barriers (for which IBM is providing a software workaround); our results also suggest that Power 6 does not suffer from this problem.
 14:25 Sela Mador-Haim, Rajeev Alur and Milo M. K. Martin Generating Litmus Tests for Contrasting Memory Consistency Models Well-defined memory consistency models are necessary for writing correct parallel software. Developing and understanding formal specifications of hardware memory models is a challenge due to the subtle differences in allowed reorderings, different specification styles, and frequent revisions. To facilitate exploration and comprehension of memory model specifications, we have developed a technique for systematically comparing hardware memory models formally specified in both operational and axiomatic styles. Given two specifications, our solution generates all possible multi-threaded programs up to a specified bound, and for each such program, checks whether one of the models can lead to an observable behavior not possible in the other. A number of optimizations reduce the number of programs that need to be examined, and when the models differ, the tool finds a minimal "litmus test" program that demonstrates the difference. Our prototype implementation has successfully compared both axiomatic and operational specifications of six different hardware memory models. A case study of specifying a non-store-atomic variant of an existing memory model illustrates the tool's utility in quickly identifying subtle specification mistakes.
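To make the notion of a litmus test concrete: the classic store-buffering test can be checked against sequential consistency (SC) by brute-force interleaving. This toy sketch (not the authors' tool) enumerates all SC executions and shows that the outcome r1 = r2 = 0 is unobservable under SC, whereas weaker models such as x86-TSO permit it; observing it on hardware therefore refutes SC for that machine.

```python
from itertools import permutations

# Two threads of the "store buffering" (SB) litmus test:
T0 = [("st", "x", 1), ("ld", "y", "r1")]
T1 = [("st", "y", 1), ("ld", "x", "r2")]

def sc_outcomes(t0, t1):
    """All (r1, r2) values observable under sequential consistency."""
    events = [(0, e) for e in t0] + [(1, e) for e in t1]
    outcomes = set()
    for order in permutations(events):
        # An SC execution is a total order respecting each thread's program order.
        if [e for t, e in order if t == 0] != t0:
            continue
        if [e for t, e in order if t == 1] != t1:
            continue
        mem, regs = {"x": 0, "y": 0}, {}
        for _, (op, loc, arg) in order:
            if op == "st":
                mem[loc] = arg
            else:
                regs[arg] = mem[loc]
        outcomes.add((regs["r1"], regs["r2"]))
    return outcomes

print(sc_outcomes(T0, T1))  # (0, 0) never appears under SC
```

Comparing two formal models then reduces to computing such outcome sets under each model and diffing them, which is the essence of what the tool above automates and minimizes.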
15:30‑17:10 Verification of Hardware and Low Level Code
Chair: Daniel Kroening
Location: AT LT4
 15:30 Aditya Thakur, Junghee Lim, Akash Lal, Amanda Burton, Evan Driscoll, Matt Elder, Tycho Andersen and Thomas Reps Directed Proof Generation for Machine Code We present the algorithms used in McVeto (Machine-Code VErification TOol), a tool to check whether a machine-code program satisfies a safety property. McVeto does not require the usual preprocessing steps of building control-flow graphs and performing points-to/alias analysis; nor does it require symbol-table, debugging, or type information to be supplied. What distinguishes McVeto from other work on software model checking is that it resolves many issues that have been ignored (unsoundly) in previous model checkers.
 15:55 Christopher Conway and Clark Barrett Verifying Low-Level Implementations of High-Level Datatypes For efficiency and portability, network packet processing code is typically written in low-level languages like C and makes use of bit-level operations to compactly represent data. Although packet data is highly structured, low-level implementation details make it difficult to verify that the behavior of the code is consistent with high-level data invariants. We introduce a notation for defining the high-level types of network packet headers. The types are not used to check the code directly; rather, they introduce predicates that can be used to assert the consistency of code with programmer-defined data invariants. We describe an encoding of the type predicates using the theories of inductive datatypes and bit vectors in the CVC3 SMT solver. Finally, we present verification results for open-source networking code using the Cascade verification engine.
 16:20 Satrajit Chatterjee and Michael Kishinevsky Automatic Generation of Inductive Invariants from High-Level Microarchitectural Models of Communication Fabrics Abstract microarchitectural models of communication fabrics present a challenge for verification. Due to the presence of deep pipelining, a large number of queues and distributed control, the state space of such models is usually too large for enumeration by protocol verification tools such as Murphi. On the other hand, we find that state-of-the-art RTL model checkers such as ABC have poor performance on these models, since there is very little opportunity for localization and most of the recent capacity advances in RTL model checking have come from better ways of discarding the irrelevant parts of the model. In this work we explore a new approach to verifying these models: we capture a model at a high level of abstraction by requiring that it be described using a small set of well-defined microarchitectural primitives. We exploit the high-level structure present in this description to automatically strengthen some classes of properties, in order to make them 1-step inductive, and then use an RTL model checker to prove them. In some cases, even if we cannot make the property inductive, we can dramatically reduce the number and complexity of the lemmas needed to make it inductive.
 16:45 Juncao Li, Fei Xie, Thomas Ball and Vladimir Levin Efficient Reachability Analysis of Büchi Pushdown Systems for Hardware/Software Co-verification We present an efficient approach to reachability analysis of Büchi Pushdown System (BPDS) models for Hardware/Software (HW/SW) co-verification. This approach exploits the asynchronous nature of HW/SW interactions to avoid exploring unnecessary orderings of HW/SW state transitions in co-verification. The reduction is applied when the verification model is constructed. We have realized this approach in our co-verification tool, CoVer, and applied it to the co-verification of two fully functional Windows device drivers with their respective device models. Both drivers are open source and their original C code has been used. CoVer has proven seven safety properties and detected seven previously undiscovered software bugs. Evaluation shows that the reduction significantly improves the scalability of co-verification.
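The 1-step inductiveness targeted by the communication-fabrics talk in this session can be stated on a toy explicit-state model. In this hypothetical sketch, a property P is inductive if it holds in the initial state and every transition out of a P-state lands in a P-state; here P is an occupancy bound on a bounded queue, a stand-in for the queues in a fabric model.

```python
CAP = 3  # illustrative queue capacity

def init():
    return 0  # queue starts empty

def transitions(n):
    """Successor occupancies: enqueue if not full, dequeue if not empty."""
    succ = []
    if n < CAP:
        succ.append(n + 1)
    if n > 0:
        succ.append(n - 1)
    return succ

def P(n):
    return 0 <= n <= CAP  # the safety property: occupancy stays in bounds

def one_step_inductive(states, P, transitions, init_state):
    """P holds initially, and every transition preserves P."""
    if not P(init_state):
        return False
    return all(P(t) for s in states if P(s) for t in transitions(s))

print(one_step_inductive(range(-1, CAP + 2), P, transitions, init()))  # True
```

When a property is not inductive as stated, the check fails on some P-state with a non-P successor; strengthening P (adding lemmas) to exclude such unreachable states is exactly the step the talk's approach automates for its microarchitectural primitives.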
17:15‑17:45 Tools
Chair: Karen Yorav
Location: AT LT4
 17:15 Stefan Blom, Jaco van de Pol and Michael Weber LTSMIN: Distributed and Symbolic Reachability The LTSMIN toolset provides a new level of modular design for high-performance model checkers. Its distinguishing feature is the wide spectrum of supported specification languages and model checking paradigms. On the language side, it supports process algebras (mCRL), state-based languages (Promela, DVE) and even discrete abstractions of ODE models (Maple, GNA). On the algorithmic side, it supports two mainstreams in high-performance model checking: symbolic BDD-based model checking, and distributed model checking on a cluster of workstations. It also incorporates a distributed implementation of bisimulation-preserving minimization of partitioned state spaces. For end users, this means that they can exploit other, scalable verification algorithms than those supported by their native tools, without changing specification language. From an algorithm engineering point of view, it fosters the availability of benchmark suites across multiple verification communities, enabling a wider and more robust variety of benchmarking studies. The technical enabler of the LTSMIN toolset is its PINS interface (Partitioned Next-State function), which provides a general abstraction of specification languages based on a dependency matrix. This abstraction still enables algorithms to exploit the combinatorial structure inherent in many specifications. Several optimizations are implemented as generic PINS2PINS wrappers, thus abstracting from both the specification language and the actual model checking paradigm.
 17:30 Benedikt Bollig, Joost-Pieter Katoen, Carsten Kern, Martin Leucker, Daniel Neider and David R. Piegdon libalf: the Automata Learning Framework This paper presents libalf, a comprehensive, open-source library for learning formal languages. libalf covers various well-known learning techniques for finite automata (e.g. Angluin's L*, Biermann, RPNI) as well as novel learning algorithms (such as for NFAs and visibly one-counter automata). libalf is flexible, making it easy to interchange learning algorithms and to combine domain-specific features in a plug-and-play fashion. Its modular design and C++ implementation make it a suitable platform for adding and engineering further learning algorithms for new target models (e.g., Büchi automata).