Description
Given the abundance of static code analysis tools, it is important
to be able to distinguish between them and to understand their
strengths and weaknesses.
Test suites are frequently used for this purpose. They are often based
on a selection of specific vulnerabilities, and this selection can
vary from suite to suite.
This thesis takes a broader approach and creates a test suite for static
code analysis tools that focuses on how they deal with more general
static analysis problems and so-called sensitivities in static code
analysis. The suite contains a variation for each sensitivity in every
chosen category, allowing for a systematic tool evaluation.
It also provides a method for automatically evaluating tool results based
on the Static Analysis Results Interchange Format (SARIF).
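To illustrate, SARIF is a JSON-based format whose reports contain runs, each with a list of results carrying a rule ID and physical locations. An automatic evaluation can therefore extract a tool's findings with a few lines of code and compare them against a test case's expected findings. The following Python sketch shows only this extraction step; the function name and the (file, line, rule) granularity are illustrative assumptions, not the thesis's actual evaluation logic.

```python
import json

def findings_from_sarif(path):
    """Extract (file URI, start line, rule ID) triples from a SARIF 2.1.0 report.

    Minimal sketch of the extraction step; matching these findings against
    a test suite's expected results would follow afterwards.
    """
    with open(path, encoding="utf-8") as f:
        sarif = json.load(f)
    findings = set()
    for run in sarif.get("runs", []):          # one run per tool invocation
        for result in run.get("results", []):  # one result per reported finding
            for loc in result.get("locations", []):
                phys = loc.get("physicalLocation", {})
                uri = phys.get("artifactLocation", {}).get("uri")
                line = phys.get("region", {}).get("startLine")
                findings.add((uri, line, result.get("ruleId")))
    return findings
```

Comparing such a set against the expected findings of each test-case variation then shows, per sensitivity, whether a tool reported the finding, missed it, or produced a spurious one.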
The test suite is used to benchmark six open-source static analysis
tools, demonstrating how conclusions can be drawn about how the tools
perform both overall and with respect to each sensitivity.