Tools and Datasets

Our lab has released the following tools and datasets:

DNNF: Deep Neural Network Falsification

Tool

DNNF is a framework for falsification of deep neural networks (DNNs). While many DNN verification techniques have been introduced in recent years to enable checking of DNN safety properties, these techniques are often limited in their applicability, due to simplifying assumptions about DNN structure or high computational cost. Falsification is a complementary approach to verification that seeks only to find violations of a safety property. In the context of DNNs, adversarial attacks can be viewed as falsifiers for DNN local robustness properties. While these techniques often scale to large real-world DNNs, they are currently limited in the range of properties they can falsify. In Reducing DNN Properties to Enable Falsification with Adversarial Attacks, we introduce an approach for reducing a DNN and an associated safety property (a correctness problem) into an equivalid set of correctness problems formulated with robustness properties, which can then be processed by existing adversarial attack techniques. DNNF is the implementation of that approach.
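
For intuition, the sketch below shows the general shape of attack-based falsification (it is not the DNNF API): a safety property reduced to a robustness-style query over an input region is handed to a projected gradient descent (PGD) attack that searches for a violating input. The network, input region, and violation function are hypothetical placeholders.

import torch

def pgd_falsify(model, x0, violation, epsilon=0.1, steps=100, lr=0.01):
    """Search the L-inf ball of radius epsilon around x0 for an input whose
    output makes `violation` positive (i.e., a counterexample)."""
    x = x0.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = violation(model(x)).sum()  # how strongly the property is violated
        loss.backward()
        with torch.no_grad():
            x += lr * x.grad.sign()                                       # ascend
            x.copy_(torch.max(torch.min(x, x0 + epsilon), x0 - epsilon))  # project
            x.grad.zero_()
    x = x.detach()
    return x if violation(model(x)).item() > 0 else None

# Reduced property: "output 0 never exceeds output 1 on this input region";
# a positive value of the expression below witnesses a violation.
def violation(outputs):
    return outputs[..., 0] - outputs[..., 1]

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
                            torch.nn.Linear(8, 2))  # stand-in network
counterexample = pgd_falsify(model, torch.zeros(1, 4), violation)
print("violation found" if counterexample is not None else "no violation found")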

Visit Website
Team Members: David Shriver, Sebastian Elbaum, Matthew Dwyer

DNNV: Deep Neural Network Verification

Tool

DNNV is a framework for verifying deep neural networks (DNNs). DNN verification takes in a neural network and a property over that network, and checks whether the property holds or is violated. DNNV standardizes the network and property input formats so that multiple verification tools can be run on a single network and property, which facilitates both verifier comparison and artifact reuse. A video demonstration of our tool is available here.
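
For illustration, a local robustness property in DNNV's Python-embedded property language (DNNP) looks roughly like the sketch below; treat the syntax as approximate and see the documentation for the exact constructs and the command-line options that select networks and verifiers.

# Approximate DNNP sketch of a local robustness property; the input path and
# epsilon value are placeholders.
from dnnv.properties import *

N = Network("N")                          # the network under analysis
x = Image("path/to/input.npy")            # a concrete input to perturb
epsilon = Parameter("epsilon", float, default=0.01)

# The predicted class must be stable for all inputs within an L-inf ball of
# radius epsilon around x, intersected with the valid input range.
Forall(
    x_,
    Implies(
        ((x - epsilon) < x_ < (x + epsilon)) & (0 < x_ < 1),
        argmax(N(x_)) == argmax(N(x)),
    ),
)

A property file like this is then passed to the dnnv command line together with an ONNX network and a chosen verifier back end.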

Visit Website
Team Members: David Shriver, Dong Xu, Sebastian Elbaum, Matthew Dwyer

GDVB: Systematic Generation of Diverse Benchmarks for DNN Verification

Tool

GDVB is a framework for generating benchmarks for DNN verification. GDVB applies combinatorial interaction testing, a technique from software engineering, to nine identified controlling factors that contribute to DNN verification performance. Given a seed DNN verification problem, it generates a systematic and diverse benchmark that challenges DNN verification tools along multiple dimensions of their capabilities. Refer to the paper and the tool's website for more details.
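
To convey the underlying idea, the toy sketch below builds a pairwise covering array over a few illustrative factors using a simple greedy construction; the actual nine factors and the benchmark generation procedure are defined in the GDVB paper and tool documentation.

from itertools import combinations, product

# Hypothetical factors and levels, expressed as fractions of the seed
# verification problem's scale.
factors = {
    "neurons":    [0.25, 0.5, 1.0],
    "fc_layers":  [0.5, 1.0],
    "input_dim":  [0.25, 1.0],
    "prop_scale": [0.1, 0.5, 1.0],
}
names = list(factors)

def pairs(config):
    """All (factor, level) pairs covered by one configuration."""
    return set(combinations(list(zip(names, config)), 2))

# Greedy covering-array construction: repeatedly pick the configuration that
# covers the most still-uncovered pairs of factor-level assignments.
candidates = list(product(*factors.values()))
uncovered = set().union(*(pairs(c) for c in candidates))
benchmark = []
while uncovered:
    best = max(candidates, key=lambda c: len(pairs(c) & uncovered))
    benchmark.append(dict(zip(names, best)))
    uncovered -= pairs(best)

print(f"{len(benchmark)} configurations cover every pairwise interaction, "
      f"versus {len(candidates)} in the full factor product")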

Visit Website
Team Members: Dong Xu, David Shriver, Matthew Dwyer, Sebastian Elbaum

Feasible and Stressful Trajectory Generation for Mobile Robots

Tool Artifact

While executing nominal tests on mobile robots is required for their validation, such tests may overlook faults that arise under trajectories that accentuate certain aspects of the robot's behavior. Uncovering such stressful trajectories is challenging: the input space of these systems, as they move, is extremely large, and the relation between a planned trajectory and its potential to induce stress can be subtle. This tool efficiently generates physically valid yet stressful trajectories for a range of quadrotors.
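
As a rough illustration of the search problem (not the tool's actual algorithm), the sketch below samples candidate waypoint trajectories, filters them with a placeholder feasibility check, and ranks the survivors by a placeholder stress score; a real feasibility check would consult the vehicle's dynamics, and the real stress metrics are defined in the accompanying paper.

import random

def random_waypoints(n=5, bound=5.0):
    """Sample a candidate trajectory as a sequence of 3D waypoints."""
    return [(random.uniform(-bound, bound),
             random.uniform(-bound, bound),
             random.uniform(0.0, bound)) for _ in range(n)]

def is_feasible(traj, max_step=6.0):
    """Placeholder feasibility check: bound the distance between consecutive
    waypoints (a stand-in for a dynamics-based check)."""
    return all(sum((b - a) ** 2 for a, b in zip(p, q)) ** 0.5 <= max_step
               for p, q in zip(traj, traj[1:]))

def stress_score(traj):
    """Placeholder stress metric: reward sharp direction changes that
    exercise the controller."""
    score = 0.0
    for p, q, r in zip(traj, traj[1:], traj[2:]):
        v1 = [b - a for a, b in zip(p, q)]
        v2 = [b - a for a, b in zip(q, r)]
        dot = sum(a * b for a, b in zip(v1, v2))
        n1 = sum(a * a for a in v1) ** 0.5
        n2 = sum(a * a for a in v2) ** 0.5
        if n1 > 0 and n2 > 0:
            score += 1.0 - dot / (n1 * n2)  # larger for sharper turns
    return score

candidates = (random_waypoints() for _ in range(10_000))
feasible = [t for t in candidates if is_feasible(t)]
stressful = sorted(feasible, key=stress_score, reverse=True)[:10]
print(f"kept {len(stressful)} most stressful of {len(feasible)} feasible candidates")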

Visit Website
Team Members: Carl Hildebrandt, Sebastian Elbaum, Matthew Dwyer, Nicola Bezzo