Our submission for the Google Graph Scheduling Competition at MLSys 2026.
```
git clone https://github.com/ami2802/MLSys
cd MLSys
```
```
# Initialize third-party dependencies
git submodule update --init --recursive
```

To run the solver on a single input file (output goes to stdout):

```
cargo run -- -i "./tests/testcases/example-1/input.json" -s naive
```

To run the example tests:

```
cargo test --test run_testcases
```

Tests are defined in `./tests/run_testcases.rs` and expect their input files to be placed in `./tests/testcases/test-name`. After adding a testcase directory, register the new test by appending a line inside the `generate_solver_tests!` macro in `run_testcases.rs`:
```rust
generate_solver_tests! {
    example_1, "example-1", naive, "always-spill.json";
    example_2, "example-2", naive, "always-spill.json";
    example_3, "example-3", naive, "spill.json";
}
```

New solvers can be defined as separate files under the `src/solvers` directory, as structs with a `solve` impl:
```rust
pub fn solve(&self, problem: &Problem) -> Result<Solution>
```

Once done, register the solver in `src/solvers/mod.rs`:
```rust
define_solvers! {
    Naive, "naive", naive::NaiveSolver;
}
```

The competition benchmarks are located in `tests/benchmarks/`. To run them:

```
cargo test --release --test run_benchmarks -- --nocapture
```

To change the solver or the set of benchmarks, modify the `benchmarks` variable in `run_benchmarks.rs`:
```rust
let benchmarks = vec![
    ("benchmark-1", "dp"),
    ("benchmark-5", "dp"),
    ("benchmark-9", "dp"),
    ("benchmark-13", "dp"),
    ("benchmark-17", "dp"),
];
```

The output appears in `tests/output/` as `[benchmark_name].json`. Benchmarks run with a standard timeout, which is currently increased by 2x for debugging.
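To illustrate the solver interface described above, here is a minimal hypothetical solver sketch. The `GreedySolver` name is made up, and the `Problem`, `Solution`, and `Result` stubs below exist only so the example compiles standalone; the real types are defined by this crate and carry the actual scheduling data:

```rust
// NOTE: hypothetical sketch. The real `Problem`, `Solution`, and `Result`
// types live in the crate; these stubs only make the example self-contained.
struct Problem; // stand-in for the competition's problem description

struct Solution {
    order: Vec<usize>, // stand-in for whatever the real Solution carries
}

type Result<T> = std::result::Result<T, String>;

pub struct GreedySolver; // hypothetical solver

impl GreedySolver {
    // Matches the required `solve` signature shown above.
    pub fn solve(&self, _problem: &Problem) -> Result<Solution> {
        // Placeholder logic: return an empty schedule.
        Ok(Solution { order: Vec::new() })
    }
}

fn main() {
    let solution = GreedySolver.solve(&Problem).expect("solver failed");
    println!("scheduled {} nodes", solution.order.len());
}
```

Following the registration pattern above, a solver like this (placed in, say, `src/solvers/greedy.rs`) would then get its own line inside the `define_solvers!` block.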