Abstracts FLoC 2010

From ERCIM Working Group MLQA

Bernhard Steffen. From the How to the What: Static Analysis via Model Checking. See slides here.

Conceptually comparing the notions of Static Analysis and Model Checking is delicate. Where do the technical differences lie: in the use of logics, abstract interpretation, refinement techniques, fixpoint computation? Or rather in their pragmatics: the one is fast but often returns 'don't know', whereas the other may not terminate at all (cf. Patrick Cousot)? It is hard to agree on a borderline, and, in fact, differences recognized earlier vanish as we go. This is a good sign, as it shows that synergy is already happening here, as is also witnessed by the existence of conferences like VMCAI. The talk will therefore present a personal view, and a success story based on this understanding.

Flemming Nielson. Model Checking is Static Analysis of Modal Logic. See slides here.

Flow Logic is an approach to the static analysis of programs that has been developed for functional, imperative and object-oriented programming languages and for concurrent, distributed, mobile and cryptographic process calculi. It is often implemented using an efficient differential worklist-based solver (the Succinct Solver) working on constraints presented in a stratified version of Alternation-free Least Fixed Point Logic (ALFP).
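
As a minimal sketch of the flavour of computation involved (an illustration over propositional Horn clauses, not the Succinct Solver itself and far simpler than full ALFP), the following worklist-based least-model computation tracks, for each clause, how many premises are still unsatisfied:

```python
# Hedged illustration: a worklist-based least-model solver for propositional
# Horn clauses.  ALFP and the Succinct Solver are far richer; this only shows
# the basic worklist/counting idea behind such solvers.

def least_model(facts, clauses):
    """facts: iterable of atoms; clauses: list of (premises, conclusion)."""
    premises = [set(pre) for pre, _ in clauses]
    missing = [len(p) for p in premises]          # unmet premises per clause
    watching = {}                                 # atom -> indices of clauses using it
    for i, pre in enumerate(premises):
        for a in pre:
            watching.setdefault(a, []).append(i)

    model, worklist = set(), list(facts)
    while worklist:
        a = worklist.pop()
        if a in model:
            continue
        model.add(a)
        for i in watching.get(a, []):
            missing[i] -= 1
            if missing[i] == 0:                   # all premises of clause i now hold
                worklist.append(clauses[i][1])
    return model

# Tiny flow-analysis flavoured example: reachability facts propagated along edges.
facts = ["reach_a"]
clauses = [(["reach_a"], "reach_b"),
           (["reach_b"], "reach_c"),
           (["reach_c", "reach_a"], "reach_d")]
print(sorted(least_model(facts, clauses)))   # ['reach_a', 'reach_b', 'reach_c', 'reach_d']
```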

In this talk we show how to deal with modal logics; to be specific, we show how to deal with modal logics in the families CTL, ACTL and the alternation-free modal mu-calculus. We prove that we obtain an exact characterisation of the semantics of formulae in these modal logics and that we remain within stratified ALFP. The computational complexity of model checking by means of static analysis is the same as that of classical approaches to model checking.
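
The abstract does not spell the encoding out; the bridge rests on the standard fixed-point characterisations of the temporal operators (textbook equations, not taken from the talk itself), which a static analysis can compute as least and greatest solutions of constraint systems:

```latex
% Standard fixed-point characterisations of CTL operators over a Kripke structure.
\begin{align*}
  \mathrm{EF}\,\varphi                    &= \mu Z.\; \varphi \lor \mathrm{EX}\,Z\\
  \mathrm{AG}\,\varphi                    &= \nu Z.\; \varphi \land \mathrm{AX}\,Z\\
  \mathrm{E}[\varphi \,\mathrm{U}\, \psi] &= \mu Z.\; \psi \lor (\varphi \land \mathrm{EX}\,Z)
\end{align*}
```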

Together with the work of Steffen et al., this allows us to conclude that the problems of model checking and static analysis are reducible to each other in many cases. This provides further motivation for transferring methods and techniques between these two approaches to verifying and validating programs and systems.

This is joint work with Hanne Riis Nielson, Fuyuan Zhang and Piotr Filipiuk.

Marta Kwiatkowska. Quantitative Abstraction Refinement. See slides here.

Probabilistic model checking has established itself as a valuable technique for formal modelling and analysis of systems that exhibit stochastic behaviour. It has been used to study quantitative properties of a wide range of systems, from randomised communication protocols to biological signalling pathways. In practice, however, scalability quickly becomes a major issue and, for large or even infinite-state systems, abstraction is an essential tool. What is needed are automated and efficient methods for constructing such abstractions.

In non-probabilistic model checking, this is often done using counterexample-guided abstraction-refinement (CEGAR), which takes a simple, coarse abstraction and then repeatedly refines it until it is amenable to model checking. This talk describes recent and ongoing work on quantitative abstraction-refinement techniques, which can be used to automate the process of building abstractions for probabilistic models. This has already been applied to probabilistic verification of software and of real-time systems, where abstraction is essential.
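
As a rough sketch of the shape of such a loop (our own skeleton, parameterised by placeholder operations, not the interface of PRISM or any other tool): the abstraction yields a lower and an upper bound on the probability of interest, and refinement continues until the two bounds are close enough.

```python
# Hypothetical skeleton of a quantitative abstraction-refinement loop.
# build_abstraction, analyse_bounds and refine are caller-supplied
# placeholders, not the API of any existing probabilistic model checker.

def quantitative_cegar(model, target,
                       build_abstraction, analyse_bounds, refine,
                       epsilon=1e-3, max_iters=50):
    """Estimate the probability of reaching `target` in `model` to within epsilon."""
    abstraction = build_abstraction(model)               # coarse initial abstraction
    for _ in range(max_iters):
        low, high = analyse_bounds(abstraction, target)  # lower/upper probability bounds
        if high - low <= epsilon:                        # abstraction is precise enough
            return (low + high) / 2
        abstraction = refine(abstraction, target, low, high)
    raise RuntimeError("bound gap still above epsilon after max_iters refinements")
```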

Joost-Pieter Katoen. Invariant Generation for Probabilistic Programs. See slides here.

Model checking probabilistic programs, typically represented as Markov decision processes, is en vogue. Abstraction-refinement techniques (à la CEGAR) have been developed and parametric model checking approaches have recently been suggested. Prototypical tools support these techniques. Their usage for programs whose invariants are quantitative, i.e., arithmetic expressions in the program variables, is however limited. An alternative is to resort to static analysis techniques.

We present a constraint-based method for automatically generating quantitative invariants for probabilistic programs. We show how it can be used, in combination with proof-based methods, to verify properties of probabilistic programs that cannot be analysed by any currently existing probabilistic model checker. To our knowledge, this is the first automated method for quantitative-invariant generation.

(Joint work with Larissa Meinicke, Annabelle McIver and Carroll Morgan.)
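
As a small worked illustration of what a quantitative invariant is (our own example in the expectation-transformer style of McIver and Morgan, not taken from the talk), consider the loop count := 0; run := 1; while (run = 1) { (count := count + 1) [1/2] (run := 0) }, which increments a counter with probability 1/2 and otherwise stops. The expression I = count + [run = 1] is a quantitative invariant:

```latex
% One iteration from a state with run = 1 maps I = count + [run = 1]
% back to itself in expectation:
\[
  \tfrac12\bigl((\mathit{count}+1) + 1\bigr) + \tfrac12\bigl(\mathit{count} + 0\bigr)
  \;=\; \mathit{count} + 1 \;=\; I .
\]
% Initially I = 1 and on exit I = count, so the expected final value of count
% is 1.  A constraint-based generator finds such an I automatically by fixing
% a template, e.g. alpha*count + beta*[run = 1], and solving for alpha, beta.
```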

Arie Gurfinkel. Partial Models and Software Model-Checking. See slides here.

Partial models combine necessary and possible behaviours in a single model. Since they encode both an over- and an under-approximation of the behaviour of the underlying system, they allow both verification and falsification of a broad class of properties. Thus, they are a natural choice for abstraction in model-checking.
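
To make the role of the third truth value concrete, here is a small sketch of Kleene's three-valued logic (a generic illustration, not code from the talk or any particular tool): a definite true verifies, a definite false falsifies, and 'maybe' signals that the abstraction must be refined.

```python
# Hedged illustration of Kleene's three-valued logic, the basis for evaluating
# properties over partial models: TRUE verifies, FALSE falsifies, and MAYBE
# means the abstraction is too coarse and needs refinement.

TRUE, FALSE, MAYBE = "true", "false", "maybe"

def t_not(a):
    return {TRUE: FALSE, FALSE: TRUE, MAYBE: MAYBE}[a]

def t_and(a, b):
    if a == FALSE or b == FALSE:
        return FALSE                       # a definite false decides a conjunction
    return TRUE if a == TRUE and b == TRUE else MAYBE

def t_or(a, b):
    if a == TRUE or b == TRUE:
        return TRUE                        # a definite true decides a disjunction
    return FALSE if a == FALSE and b == FALSE else MAYBE

print(t_and(TRUE, MAYBE))   # maybe  -> neither verified nor falsified: refine
print(t_or(TRUE, MAYBE))    # true   -> verified despite the unknown component
print(t_not(FALSE))         # true
```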

Yasm is the first symbolic software model-checker to integrate partial models with the Counterexample-Guided Abstraction Refinement (CEGAR) framework. Yasm can prove and disprove temporal logic properties with equal effectiveness, and its performance is comparable to standard over-approximating model-checkers.

In this talk, we describe the types of partial models used by Yasm and highlight some of our recent developments in software model-checking, including proving non-termination (i.e., finding counterexamples to liveness properties) and reasoning about recursive programs.

Orna Grumberg. The 2-Valued and the 3-Valued Abstraction-Refinement Frameworks in Model Checking.

Model checking is a widely used technique for automatic verification of hardware and software systems. A significant amount of research in model checking is devoted to extending its scope to larger and even infinite-state systems. Abstraction is one of the most successful approaches for doing so. It hides some of the irrelevant details of the system, thus resulting in a small (abstract) model whose correct behaviour can be checked. Sometimes the abstraction is too coarse and a desired property cannot be checked on the abstract model. In this case, the abstract model is refined by adding some of the hidden details back.

In this talk we present two different frameworks for abstraction-refinement in model checking: the 2-valued (CEGAR) and the 3-valued (TVAR) frameworks. We will describe the abstract models and the types of properties that can be verified with these abstractions. We will also show when refinement is needed. Finally, we will mention some applications of TVAR and try to convince the audience that it is most useful.

David Monniaux. Policy Iteration for Static Analysis.

Static analysis by abstract interpretation has traditionally computed over-approximations of least fixed points (arising from loops or recursive functions) using Kleene iteration accelerated by widening operators. In recent years, techniques inspired by game theory have been applied to fixpoint problems in certain abstract domains: min-policy iteration (E. Goubault's group) and max-policy iteration (H. Seidl's group). We shall outline these techniques.
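
For contrast with the policy-iteration approach, here is a minimal sketch (our own illustration, assuming the toy loop x := 0; while x < 100: x := x + 1) of the classical baseline mentioned above: Kleene iteration on the interval domain accelerated by a widening operator.

```python
# Hedged illustration of the classical baseline: Kleene iteration on the
# interval domain with widening, for  x := 0; while x < 100: x := x + 1.
# The loop-head invariant X satisfies  X = [0,0] JOIN ((X MEET (-inf,99]) + [1,1]).

NEG_INF, POS_INF = float("-inf"), float("inf")
BOT = None                               # bottom: nothing reachable yet

def join(a, b):                          # least upper bound of two intervals
    if a is BOT: return b
    if b is BOT: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def meet_leq(a, c):                      # intersect interval a with (-inf, c]
    if a is BOT or a[0] > c: return BOT
    return (a[0], min(a[1], c))

def add_const(a, k):                     # abstract effect of x := x + k
    return BOT if a is BOT else (a[0] + k, a[1] + k)

def widen(a, b):                         # unstable bounds jump to +/- infinity
    if a is BOT: return b
    return (a[0] if a[0] <= b[0] else NEG_INF,
            a[1] if a[1] >= b[1] else POS_INF)

def f(x):                                # abstract transformer at the loop head
    return join((0, 0), add_const(meet_leq(x, 99), 1))

x = BOT                                  # widened Kleene iteration: a few steps
while True:                              # instead of 100 plain Kleene steps
    nxt = widen(x, f(x))
    if nxt == x:
        break
    x = nxt
print("after widening :", x)             # (0, inf)  -- a coarse over-approximation
print("after narrowing:", f(x))          # (0, 100)  -- one decreasing step recovers it
```

Policy iteration, by contrast, replaces the widening/narrowing heuristic by a sequence of policies, each inducing a simpler equation system that can be solved exactly before the policy is improved.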

Finally, we shall discuss recent results, obtained with T. Gawlitza, about the computation of least fixed points from succinct representations of programs, giving the same precision as explicitly distinguishing all program paths but with the same exponential complexity as the coarse analysis that merges program paths (a naive approach would incur double exponential complexity). The succinct representation is exploited through SMT-solving.

We would appreciate insights for possible applications of such techniques to succinct representations of e.g. probabilistic systems.

Michael Huth. From validating quantitative models to generating valid ones. See slides here.

Verification techniques for quantitative systems typically require a system model as the object of their investigation. Probabilistic model checking is a representative example of this successful approach.

However, emerging needs and constraints of quantitative systems no longer allow for, or benefit from, a complete decoupling of the development of a system from its a posteriori verification.

We paint a somewhat personal picture of what this may mean in terms of challenges and opportunities for those who research the construction of valid quantitative systems.

Go back to MLQA meeting at FLoC 2010