ARCH19: Editor's Preface

This volume of proceedings contains the papers presented at the sixth International Workshop on Applied veRification for Continuous and Hybrid systems (ARCH) and the results of the third edition of ARCH-COMP, a competition for the formal verification of continuous and hybrid systems. The workshop was held as part of CPS-IoT Week in Montréal, Canada, on April 15, 2019. Previous editions of the ARCH workshop series were held in Berlin (2014), Seattle (2015), Vienna (2016), Pittsburgh (2017), and Oxford (2018).

The goal of the ARCH workshops is to bring together people from industry with researchers and tool developers interested in applying verification techniques to continuous and hybrid systems. The workshops are accompanied by a collaborative website (cps-vo.org/group/ARCH), which features a curated collection of benchmarks, disseminates results submitted by researchers and tool developers, and provides feedback from practitioners in the form of experience reports. The benchmark repository is intended to serve as a lasting and evolving resource for the research community.

The workshop received 5 submissions, 4 of which were accepted by the program committee. Each submission was reviewed by 3-4 program committee members, including at least one member from academia and one from industry.

In addition to the workshop papers, these proceedings present the results of the third edition of ARCH-COMP, a friendly competition that was carried out online from January to April 2019. ARCH-COMP showcases the participating tools and serves as a testing ground to see which methods are particularly suitable for which types of problems. As a side effect, it aims at establishing a consensus on how to compare different software implementations in the context of verification, as such comparisons are routinely demanded by reviewers of scientific publications.

All participating tools were represented in the competition jury, headed by the organizers. In the problem phase of the competition, participants submitted problem instances, which were then approved by the jury by consensus. In the evaluation phase, experiments were carried out by the tool authors themselves, who then submitted the performance measurements and a repeatability package to the evaluation chair. The submitted results were approved by the jury and verified by an independent repeatability evaluation led by Taylor T. Johnson. To further establish the trustworthiness of the results, the code with which the results were obtained is publicly available (a link is provided on the ARCH website).

In this third edition of ARCH-COMP, 33 tools participated in the competition. Of these, 19 tools took part in the repeatability evaluation and successfully passed it. The competition was divided into the following categories, including a new category on hybrid systems with AI components:
The 2019 prize for the most promising result was sponsored by Bosch and was awarded, based on a vote by the attending audience, to the verification tool Isabelle/HOL-ODE-Numerics, represented by Fabian Immler. The problem descriptions and the results are provided in a report for each category, drafted by the category lead together with representatives of the participating tools. Due to the diversity of problems, ARCH-COMP does not provide any ranking of tools. Nonetheless, the presented results arguably provide the most complete assessment to date of tools for the safety verification of continuous and hybrid systems.

Goran Frehse, Matthias Althoff (Program Chairs)
Sergiy Bogomolov (Publicity Chair)
Taylor T. Johnson (Evaluation Chair)

Montréal, April 15, 2019