
bench: lazily initialize script benchmarks #5190

Open
iammdzaidalam wants to merge 1 commit into boa-dev:main from iammdzaidalam:fix/5169-lazy-bench-init

Conversation

@iammdzaidalam (Contributor)

Closes #5169

Summary

Defer script benchmark setup until the selected benchmark actually runs.

benches/benches/scripts.rs was eagerly reading, parsing, compiling, and evaluating every script during registration, so a filtered run could still fail on an unrelated entry before ever reaching the requested benchmark.

This PR moves that setup behind the benchmark closure and caches the prepared state per benchmark, so scripts that don't match the filter are never initialized.

Changes

  • add a small PreparedScriptBench helper for cached per-benchmark state
  • move script file reading into lazy setup
  • move Context creation, runtime registration, parse/compile/evaluate, and main lookup into lazy setup
  • keep benchmark discovery and existing v8-benches group config unchanged
  • cache the prepared script once per matched benchmark so setup is not repeated during measurement

Verification

Ran locally:

  • cargo fmt --check
  • cargo check -p boa_benches
  • cargo bench -p boa_benches -- --list
  • cargo bench -p boa_benches -- call-loop

Also temporarily added logging in the lazy init path to verify behavior:

  • call-loop only initialized basic/call-loop.js
  • a nonexistent filter initialized nothing
  • deltablue only initialized v8-benches/deltablue.js

So filtered runs no longer initialize unrelated scripts first.

@iammdzaidalam requested a review from a team as a code owner on March 20, 2026 at 22:33
@github-actions bot added the "Waiting On Review" label on Mar 20, 2026
@github-actions bot added this to the v1.0.0 milestone on Mar 20, 2026
@github-actions bot added the "C-Benchmark" and "C-Builtins" labels and removed the "Waiting On Review" label on Mar 20, 2026
@github-actions

Test262 conformance changes

Test result    main count    PR count    difference
Total          52,963        52,963      0
Passed         50,126        50,126      0
Ignored        2,025         2,025       0
Failed         812           812         0
Panics         0             0           0
Conformance    94.64%        94.64%      0.00%

Tested main commit: b8c684580787968c613045c2588834b4442af518
Tested PR commit: eb961ba832f27bd8fec14127a20a2f81499d7d2d
Compare commits: b8c6845...eb961ba

@jedel1043 removed the "C-Builtins" label on Mar 20, 2026
@codecov

codecov bot commented Mar 20, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 59.80%. Comparing base (6ddc2b4) to head (eb961ba).
⚠️ Report is 906 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff             @@
##             main    #5190       +/-   ##
===========================================
+ Coverage   47.24%   59.80%   +12.55%     
===========================================
  Files         476      582      +106     
  Lines       46892    63414    +16522     
===========================================
+ Hits        22154    37923    +15769     
- Misses      24738    25491      +753     


Review comment on this hunk in benches/benches/scripts.rs:

    .unwrap_or_else(|| panic!("'main' is not a function in script: {}", path.display()))
    .clone();
    group.bench_function("Execution", move |b| {
        let prepared = prepared.get_or_insert_with(|| prepare_script_bench(&path));
Member
I don't think we should put the initialization code inside the benchmark; it'll just pollute the results.

Contributor
Was about to say. The reason we use the main function is to benchmark specific bits of the VM, not the initialization and parsing (and optimization, etc.).

Contributor Author
Oh, I was mainly trying to avoid the eager-init issue, but putting the setup inside the benchmark isn't the right tradeoff here. I'm thinking of instead filtering before registration, so only matching scripts get initialized, and keeping the setup outside bench_function like before.

Does that sound like the right direction?


Labels

C-Benchmark: Issues and PRs related to the benchmark subsystem.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

boa_benches: filtered script benchmark runs still eagerly initialize unrelated scripts

3 participants