Description
In the rapidly evolving field of vulnerability discovery, fuzzing remains a central technique, often relying on random or heuristic-based mutations to generate test cases. However, the inherent randomness of these processes can lead to inefficiencies in exploring the input space. In this paper, we propose an approach that uses reinforcement learning (RL) to guide and refine the mutation process in fuzzing. While our preliminary results do not yet surpass those of state-of-the-art fuzzers, our primary contribution lies in the introduction of a modular framework. The inherent modularity of this framework facilitates easy integration and adaptation, setting the stage for future research efforts. By providing a platform that seamlessly combines different RL models and fuzzers, we offer a way for researchers and practitioners to iterate, innovate, and potentially raise the performance bar in the field of fuzzing.
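To make the idea of RL-guided mutation concrete, the sketch below frames mutation-operator selection as a multi-armed bandit: the agent picks an operator, and the reward is whether the mutated input produced new coverage. This is a minimal illustration of the kind of policy the framework could host, not the paper's actual implementation; the operator names, the epsilon-greedy policy, and the reward definition are all illustrative assumptions.

```python
import random

# Hypothetical set of mutation operators a fuzzer might expose.
OPERATORS = ["bitflip", "byte_swap", "insert_byte", "delete_byte"]


class EpsilonGreedyMutator:
    """Epsilon-greedy bandit over mutation operators.

    Reward convention (an assumption for this sketch): 1.0 if the
    mutated input triggered new coverage, 0.0 otherwise.
    """

    def __init__(self, operators, epsilon=0.1):
        self.operators = list(operators)
        self.epsilon = epsilon
        self.counts = {op: 0 for op in self.operators}
        self.values = {op: 0.0 for op in self.operators}  # running mean reward

    def select(self):
        # Explore with probability epsilon, otherwise exploit the
        # operator with the highest estimated reward so far.
        if random.random() < self.epsilon:
            return random.choice(self.operators)
        return max(self.operators, key=lambda op: self.values[op])

    def update(self, op, reward):
        # Incremental running-mean update of the operator's value.
        self.counts[op] += 1
        self.values[op] += (reward - self.values[op]) / self.counts[op]


if __name__ == "__main__":
    agent = EpsilonGreedyMutator(OPERATORS, epsilon=0.1)
    # Simulated fuzzing loop: pretend "bitflip" finds new coverage often.
    for _ in range(200):
        op = agent.select()
        reward = 1.0 if op == "bitflip" and random.random() < 0.5 else 0.0
        agent.update(op, reward)
    print(agent.values)
```

In a real integration, `select()` would be called before each mutation and `update()` after the target's coverage feedback arrives; swapping the bandit for a richer RL model (e.g., one conditioned on input features) is exactly the kind of substitution a modular framework is meant to make easy.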