Silent batch fuzzing failures #149

@V0ldek

Description

Hello everyone

We've been using CFL in rsonpath with some success (e.g. rsonquery/rsonpath#281 caught by fuzzing).

Recently, I started getting seemingly spurious failures, tracked under rsonquery/rsonpath#749, and discovered that the batch fuzzing had in fact been failing for quite some time with timeouts and out-of-memory errors. However, none of those were reported as pipeline failures. For example, here three out of four of our fuzzers failed, yet the pipeline is green. We have a separate action step that automatically creates an issue on failure, and that wasn't triggered either. I have three separate questions:

  1. How do I increase the memory limit? Locally I'd pass `rss_limit_mb` to libFuzzer, but I don't know how to pass it to the fuzzer through the `.clusterfuzzlite` configuration.
  2. How do I ensure that my pipeline fails when the fuzzers fail for any reason, including timeouts and OOMs?
  3. Do you know what could cause a failure like the one here?
```
2026-01-16 04:08:04,449 - root - INFO - Done downloading corpus. Contains 2479 elements.
2026-01-16 04:08:04,449 - root - INFO - Starting fuzzing
time="2026-01-16T04:10:25Z" level=error msg="error waiting for container: unexpected EOF"
```
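For context on point 1, ClusterFuzzLite inherits OSS-Fuzz's per-target options files: a `<fuzz_target>.options` file shipped alongside the compiled fuzz target (typically copied into `$OUT` by `build.sh`) can set libFuzzer flags such as `rss_limit_mb`. A minimal sketch, with a hypothetical target name `my_fuzzer`:

```ini
# my_fuzzer.options — placed next to the my_fuzzer binary in $OUT
# (target name is hypothetical; use your actual fuzz target's name)
[libfuzzer]
# Raise the RSS limit above libFuzzer's 2048 MB default; 0 disables the limit
rss_limit_mb = 4096
```

Whether this is the intended way to configure it through `.clusterfuzzlite` is exactly what I'd like confirmed.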

I'd appreciate your help with any of these points.
