Comparing Transparent Static Analyzers with Open Verification Dashboard

Abstract

Given an input program, a sound static analyzer computes a list of potential runtime errors in it. Static analysis practitioners have difficulty measuring the precision of static analyzers and comparing their results. In this work, we formalize a notion of transparent static analyzers, which provide fine-grained output of the proof obligations they verify. This transparent output enables a semantics-directed, fine-grained comparison and combination of static analyzers. We introduce the Open Verification Dashboard (OVD), a unified interface for aggregating the results of multiple static analyzers. By juxtaposing verified properties and outstanding warnings, OVD highlights coverage gaps, variability, and inconsistencies across tools. We experimentally evaluate the benefits of OVD on benchmarks from the Competition on Software Verification (SV-COMP). This work paves the way for a standard for reporting C runtime errors in static analysis.
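To make the aggregation idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes a hypothetical per-obligation report format in which each analyzer assigns every proof obligation a status (verified, warning, or not covered), and it merges reports to flag coverage gaps and cross-tool inconsistencies.

```python
from enum import Enum

class Status(Enum):
    VERIFIED = "verified"        # analyzer proved the obligation safe
    WARNING = "warning"          # analyzer emitted a potential-error warning
    NOT_COVERED = "not_covered"  # analyzer did not report on this obligation

def aggregate(reports):
    """Merge per-obligation results from several analyzers.

    `reports` maps an analyzer name to a dict from obligation id to
    Status (a hypothetical format, not the paper's). For each obligation,
    the dashboard records every tool's status, whether some tool left it
    uncovered (a coverage gap), and whether one tool verified it while
    another warned about it (an inconsistency).
    """
    obligations = set()
    for results in reports.values():
        obligations |= results.keys()
    dashboard = {}
    for ob in sorted(obligations):
        statuses = {name: results.get(ob, Status.NOT_COVERED)
                    for name, results in reports.items()}
        dashboard[ob] = {
            "statuses": statuses,
            "gap": any(s is Status.NOT_COVERED for s in statuses.values()),
            "inconsistent": (Status.VERIFIED in statuses.values()
                             and Status.WARNING in statuses.values()),
        }
    return dashboard

# Hypothetical usage: two analyzers disagree on a division-by-zero
# obligation, and only one of them covers an out-of-bounds obligation.
reports = {
    "analyzerA": {"div0@l3": Status.VERIFIED, "oob@l7": Status.WARNING},
    "analyzerB": {"div0@l3": Status.WARNING},
}
dash = aggregate(reports)
```

In this sketch, `dash["div0@l3"]["inconsistent"]` is true (the tools disagree) and `dash["oob@l7"]["gap"]` is true (one tool did not cover it), which is the kind of signal OVD surfaces.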

Publication
ECOOP 2026