Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings. Use the following commands to manage reviews:
📝 Walkthrough

Updates Move-related git pins across many Cargo.toml files and adds a new workspace crate.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI
    participant Launcher as BenchLauncher
    participant Node as FullNode
    participant TxPool
    participant VM as StarcoinVM
    participant Observer
    CLI->>Launcher: parse args & prepare genesis
    Launcher->>Node: start node with config (temp or path)
    Node->>TxPool: open txpool / ready
    Launcher->>Observer: initialize observer & BenchmarkState
    loop for each batch
        Launcher->>VM: build & sign batch transactions
        alt async import via VM2
            Launcher->>Node: import signed txns into txpool / node
        else sync fallback
            Launcher->>Node: submit via sync import path
        end
        Node->>Observer: emit tx submit events / mined events
        Observer->>BenchmarkState: advance state, collect latencies
    end
    Observer->>Launcher: final stats
    Launcher->>Results: dump_stats / export SVG
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

🧪 Generate unit tests (beta)
📝 Coding Plan
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Tip: You can customize the high-level summary generated by CodeRabbit.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6b10d708f1
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 1
🧹 Nitpick comments (5)
vm2/compiler/Cargo.toml (1)
3-5: Nice alignment; consider centralizing the Move rev pin.

Lines 3–5 are correct, but repeated git-rev pins across crates are easy to drift. Consider moving these to shared workspace dependency declarations to reduce future bump churn.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vm2/compiler/Cargo.toml` around lines 3 - 5, Centralize the Move git rev by removing the repeated { git = "...", rev = "e767a526..." } pins from the individual crate entries (move-binary-format, move-command-line-common, move-compiler) and instead declare a single shared dependency entry in the workspace manifest using workspace.dependencies (or [patch.crates-io] if appropriate) that sets the git URL and rev once; then update the individual Cargo.toml entries to reference that workspace dependency (or just the crate name/version) so future rev bumps are made in one place and run cargo metadata/cargo update to verify the workspace resolves correctly.

Cargo.toml (1)
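The centralization described above can be sketched as follows. This is a hedged illustration: the crate names and the rev come from the review comment, but the git URL is a placeholder, not necessarily the repository this workspace actually pins.

```toml
# Root Cargo.toml — declare the Move git pin once for the whole workspace.
# The URL below is an illustrative placeholder.
[workspace.dependencies]
move-binary-format = { git = "https://github.com/move-language/move", rev = "e767a5266b19862d5695cf8eefe1ba1f35d1362a" }
move-command-line-common = { git = "https://github.com/move-language/move", rev = "e767a5266b19862d5695cf8eefe1ba1f35d1362a" }
move-compiler = { git = "https://github.com/move-language/move", rev = "e767a5266b19862d5695cf8eefe1ba1f35d1362a" }

# vm2/compiler/Cargo.toml — member crates inherit the pin.
[dependencies]
move-binary-format = { workspace = true }
move-command-line-common = { workspace = true }
move-compiler = { workspace = true }
```

With this layout a future rev bump touches only the root manifest, and `cargo metadata` can confirm every member resolves to the same commit.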
52-55: Remove the extra `node` entries added at Line 55 and Line 191.

`node` is already listed earlier in `members` at Line 49 and in `default-members` at Line 186. Keeping the second copy does not add anything functionally, but it does make the workspace manifest harder to audit and easier to skew later.

🧹 Suggested cleanup

```diff
 members = [
 @@
     "sync/api",
     "sync/starcoin-execute-bench",
     "block-relayer",
     "miner",
-    "node",
     "network-p2p",
 @@
 default-members = [
 @@
     "block-relayer",
     "miner",
-    "node",
     "network-p2p",
```

Also applies to: 191-191
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Cargo.toml` around lines 52 - 55, Remove the duplicate "node" entries from the workspace manifest: locate the "members" array where "node" is listed twice and delete the redundant occurrence (the extra "node" string), and also remove the duplicate "node" from the "default-members" list so each workspace member appears only once; ensure you only delete the duplicate tokens and preserve surrounding commas/formatting in the Cargo.toml manifest.

test-helper/src/node.rs (1)
24-27: Make the pacemaker side effect explicit.

Unlike `run_node_by_config`, Line 24 leaves the pacemaker running. `node/src/node.rs:130-137` and `node/src/node.rs:418-423` show that `GenerateBlockEventPacemaker` starts during launch and keeps emitting block-generation events until `stop_pacemaker()` is called, so this helper is easy to misuse in tests/benchmarks unless the name or docs call that out clearly.

📝 Suggested clarification

```diff
+/// Launches a node with all services active, including the pacemaker.
+/// Unlike `run_node_by_config`, this helper does not call `stop_pacemaker()`,
+/// so the caller will keep receiving background block-generation events
+/// until the pacemaker is stopped explicitly.
 pub fn run_node_with_all_service(config: Arc<NodeConfig>) -> Result<NodeHandle> {
     let logger_handle = starcoin_logger::init_for_test();
     let node_handle = NodeService::launch(config, logger_handle)?;
     Ok(node_handle)
 }
```
Verify each finding against the current code and only fix it if needed. In `@test-helper/src/node.rs` around lines 24 - 27, run_node_with_all_service currently launches the node (via NodeService::launch) but leaves the GenerateBlockEventPacemaker running, which is a hidden side effect; update the helper to make this explicit by either (A) stopping the pacemaker before returning (call stop_pacemaker() on the returned NodeHandle) so callers don’t get unexpected block-generation events, or (B) rename/document the function to indicate the pacemaker remains active (e.g., run_node_with_pacemaker_running) and add doc comment mentioning GenerateBlockEventPacemaker and the need to call stop_pacemaker() when finished; modify run_node_with_all_service accordingly and keep NodeService::launch and run_node_by_config references intact.

sync/starcoin-execute-bench/src/results.rs (1)
287-313: Edge case: TPS calculation returns count as TPS when fewer than 2 samples.

At lines 299-301, when there are fewer than 2 execution times, the function returns the count as TPS, which isn't meaningful (e.g., returns `1.0` for a single transaction). Consider returning `0.0` or `f64::NAN` to indicate insufficient data.

♻️ Optional: Return 0.0 for insufficient samples

```diff
 if all_exec_times.len() < 2 {
-    return all_exec_times.len() as f64;
+    return 0.0; // Insufficient samples for meaningful TPS
 }
```
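The guarded calculation can be shown as a small self-contained sketch. This is not the actual `calculate_tps_from_executed` body — the real code derives timestamps from `TransactionExecutionResult::Executed` entries — just the shape of the fix, with plain `f64` second timestamps standing in:

```rust
// Hypothetical helper: compute TPS from a list of execution timestamps (seconds).
// Returns 0.0 when fewer than 2 samples exist, since a rate needs an interval.
fn tps_from_exec_times(mut exec_times: Vec<f64>) -> f64 {
    if exec_times.len() < 2 {
        return 0.0; // insufficient samples for a meaningful rate
    }
    exec_times.sort_by(|a, b| a.partial_cmp(b).unwrap_or(std::cmp::Ordering::Equal));
    let duration = exec_times[exec_times.len() - 1] - exec_times[0];
    if duration <= 0.0 {
        return 0.0; // all samples landed at the same instant
    }
    exec_times.len() as f64 / duration
}

fn main() {
    assert_eq!(tps_from_exec_times(vec![1.0]), 0.0); // single sample: no rate
    assert_eq!(tps_from_exec_times(vec![0.0, 0.5, 1.5, 2.0]), 2.0); // 4 txns / 2 s
    println!("ok");
}
```

Returning `0.0` rather than the count keeps downstream averaging sane; `f64::NAN` would instead poison aggregates, which is sometimes the desired signal.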
Verify each finding against the current code and only fix it if needed. In `@sync/starcoin-execute-bench/src/results.rs` around lines 287 - 313, The TPS calculation in calculate_tps_from_executed currently returns the sample count when there are fewer than two execution timestamps; update the early-return branch to return 0.0 (or f64::NAN if you prefer to signal insufficient data) instead of all_exec_times.len() as f64, leaving the rest of the logic (sorting, duration calculation, and division) intact; locate the method calculate_tps_from_executed and its early-return that checks all_exec_times.len() and change that return value, keeping handling of TransactionExecutionResult::Executed and transaction_data iteration unchanged.

sync/starcoin-execute-bench/src/main.rs (1)
456-503: Consider adding a timeout to prevent indefinite blocking.

The `wait_for_sufficient_balance` function loops indefinitely until the association account has sufficient balance. If the balance never reaches the required threshold (e.g., due to misconfiguration or insufficient initial supply), this will hang forever.

♻️ Proposed fix: Add timeout parameter

```diff
 async fn wait_for_sufficient_balance(
     account_count: u32,
     initial_balance: u128,
     initial_gas_fee: u128,
     gas_price: u64,
     max_gas: u64,
     chain_reader_service: ServiceRef<ChainReaderService>,
     storage1: Arc<Storage>,
     storage2: Arc<Storage2>,
+    timeout: Duration,
 ) -> Result<()> {
+    let deadline = Instant::now() + timeout;
     loop {
+        if Instant::now() >= deadline {
+            bail!("Timed out waiting for sufficient association balance");
+        }
         let current_header = get_current_header(chain_reader_service.clone()).await?;
```
Verify each finding against the current code and only fix it if needed. In `@sync/starcoin-execute-bench/src/main.rs` around lines 456 - 503, The wait_for_sufficient_balance function currently loops forever; change its signature (wait_for_sufficient_balance) to accept a timeout (e.g., max_wait: std::time::Duration or max_wait_ms: u64) and implement a deadline check inside the loop (use tokio::time::Instant::now()/deadline or compare elapsed against max_wait) so that if the deadline is exceeded you break and return an Err (create a clear error message like "timeout waiting for sufficient balance"). Keep existing logic (calls to get_current_header, get_balance, association_address) but after each sleep/iteration check the deadline and log the timeout before returning the error to avoid indefinite blocking. ensure callers are updated to pass a timeout and handle the error.
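The deadline pattern the comment asks for can be isolated in a std-only sketch (no tokio, and `poll` is a hypothetical stand-in for the real association-balance check):

```rust
use std::time::{Duration, Instant};

// Hypothetical helper: poll a condition until it holds or a deadline passes.
// In the benchmark this would wrap the balance check; here `poll` is any
// closure returning true once the awaited condition is satisfied.
fn wait_until(mut poll: impl FnMut() -> bool, timeout: Duration) -> Result<(), String> {
    let deadline = Instant::now() + timeout;
    loop {
        if poll() {
            return Ok(());
        }
        if Instant::now() >= deadline {
            return Err("timeout waiting for sufficient balance".to_string());
        }
        std::thread::sleep(Duration::from_millis(10));
    }
}

fn main() {
    let mut calls = 0;
    // Condition becomes true on the third poll, well before the deadline.
    let ok = wait_until(
        || {
            calls += 1;
            calls >= 3
        },
        Duration::from_secs(1),
    );
    assert!(ok.is_ok());
    // A condition that never holds trips the timeout instead of hanging.
    assert!(wait_until(|| false, Duration::from_millis(50)).is_err());
    println!("ok");
}
```

Checking the deadline *after* each failed poll (rather than only before sleeping) guarantees at least one balance check even with a zero timeout.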
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@vm2/resource-viewer/Cargo.toml`:
- Around line 4-5: The Move dependencies in this Cargo.toml (move-binary-format
and move-core-types) use revision e767a5266b..., but other Move crates (e.g.,
move-vm-test-utils, move-prover, move-stdlib, move-package) still point at
babf994a3...; update these crate entries so all Move-related git dependencies
across the workspace reference the same commit SHA (pick and replace with the
predominant rev e767a5266b19862d5695cf8eefe1ba1f35d1362a) to ensure alignment
and rebuild to confirm no compatibility issues.
---
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 148876c4-2bd7-4fd8-ba9e-adfb7e581a4e
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (38)
- `Cargo.toml`
- `commons/mvhashmap/Cargo.toml`
- `commons/parallel-executor/Cargo.toml`
- `rpc/client/tests/chain_get_block_txn_infos_in_seq_test.rs`
- `sync/starcoin-execute-bench/Cargo.toml`
- `sync/starcoin-execute-bench/src/main.rs`
- `sync/starcoin-execute-bench/src/results.rs`
- `test-helper/src/lib.rs`
- `test-helper/src/node.rs`
- `vm2/abi/decoder/Cargo.toml`
- `vm2/abi/resolver/Cargo.toml`
- `vm2/abi/types/Cargo.toml`
- `vm2/aggregator/Cargo.toml`
- `vm2/compiler/Cargo.toml`
- `vm2/framework/Cargo.toml`
- `vm2/framework/cached-packages/Cargo.toml`
- `vm2/framework/gas-algebra/Cargo.toml`
- `vm2/framework/gas-meter/Cargo.toml`
- `vm2/framework/gas-schedule/Cargo.toml`
- `vm2/framework/move-stdlib/Cargo.toml`
- `vm2/framework/native-interface/Cargo.toml`
- `vm2/framework/table-natives/Cargo.toml`
- `vm2/move-package-manager/Cargo.toml`
- `vm2/package-builder/Cargo.toml`
- `vm2/resource-viewer/Cargo.toml`
- `vm2/sdk-builder/Cargo.toml`
- `vm2/starcoin-transactional-test-harness/Cargo.toml`
- `vm2/types/Cargo.toml`
- `vm2/vm-runtime-types/Cargo.toml`
- `vm2/vm-runtime/Cargo.toml`
- `vm2/vm-runtime/src/data_cache.rs`
- `vm2/vm-runtime/src/parallel_executor/mod.rs`
- `vm2/vm-runtime/src/parallel_executor/storage_wrapper.rs`
- `vm2/vm-runtime/src/parallel_executor/vm_wrapper.rs`
- `vm2/vm-runtime/tests/data_cache_manual_exchange.rs`
- `vm2/vm-runtime/tests/delayed_field_exchange_baseline.rs`
- `vm2/vm-status-translator/Cargo.toml`
- `vm2/vm-types/Cargo.toml`
💤 Files with no reviewable changes (1)
- vm2/vm-runtime/tests/data_cache_manual_exchange.rs
🧹 Nitpick comments (4)
config/src/txpool_config.rs (1)
119-130: Remove the duplicated `max_mem_usage` merge block.

`max_mem_usage` is assigned twice (Lines 119-121 and 128-130). It’s harmless now but easy to desync later; keep only one branch.

Proposed cleanup

```diff
 if let Some(m) = txpool_opt.max_mem_usage.as_ref() {
     self.max_mem_usage = Some(*m);
 }
 if let Some(m) = txpool_opt.max_count.as_ref() {
     self.max_count = Some(*m);
 }
 if let Some(m) = txpool_opt.max_per_sender.as_ref() {
     self.max_per_sender = Some(*m);
 }
-if let Some(m) = txpool_opt.max_mem_usage.as_ref() {
-    self.max_mem_usage = Some(*m);
-}
```
Verify each finding against the current code and only fix it if needed. In `@config/src/txpool_config.rs` around lines 119 - 130, The merge logic in txpool_config contains a duplicated branch setting self.max_mem_usage from txpool_opt.max_mem_usage (the second occurrence after handling max_per_sender); remove the redundant block so max_mem_usage is only assigned once during the merge. Locate the lines that read and set self.max_mem_usage using txpool_opt.max_mem_usage in the merge/merge_from method (or impl block) and delete the duplicate branch, keeping the other branches that set max_count and max_per_sender intact.sync/starcoin-execute-bench/src/main.rs (3)
1351-1378: Consider removing or documenting commented-out code.

These commented-out event handlers for `NewDagBlock` and `NewDagBlockFromPeer` add noise. If they're preserved for future reference or debugging, consider adding a brief comment explaining why they're kept, or remove them if no longer needed.
Verify each finding against the current code and only fix it if needed. In `@sync/starcoin-execute-bench/src/main.rs` around lines 1351 - 1378, The commented-out EventHandler implementations for ObserverService (handling NewDagBlock and NewDagBlockFromPeer) should be either removed or annotated with a short justification: either delete the commented blocks to reduce noise, or add a one-line comment above each explaining why they are preserved (e.g., kept for future re-enablement, debugging, or referencing update_transaction_status/try_submit_next_batch/benchmark_state behavior) and reference any tracking ticket/PR; ensure the comment mentions the specific symbols (ObserverService, EventHandler, NewDagBlock, NewDagBlockFromPeer, update_transaction_status, try_submit_next_batch, benchmark_state) so reviewers know the intent.
880-881: Magic delay before benchmark start.

The 10-second sleep is a hardcoded delay to "ensure all is settled." Consider making this configurable or using a more deterministic readiness check if the settling time varies across environments.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sync/starcoin-execute-bench/src/main.rs` around lines 880 - 881, The hardcoded 10s delay using tokio::time::sleep(Duration::from_millis(10000)).await should be replaced with a deterministic or configurable mechanism: either add a configurable parameter (e.g., CLI flag or env var) that controls the delay and use that value instead of the literal, or implement a readiness check that waits for the actual condition (e.g., network/peer readiness or a health-check future) before starting the benchmark; update the call site (tokio::time::sleep / Duration::from_millis) to use the new config value or to await the readiness future and remove the magic constant.
475-510: Consider adding a timeout to prevent indefinite waiting.

This loop will run forever if the association account never accumulates sufficient balance (e.g., if minting/faucet is not configured). While users can interrupt with Ctrl+C, adding a configurable timeout or max retry count would improve robustness.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sync/starcoin-execute-bench/src/main.rs` around lines 475 - 510, The waiting loop that repeatedly calls get_current_header/get_balance and compares association_balance to needed_balance can spin forever; add a configurable timeout or max retry logic (e.g., max_retries or timeout_duration read from config/args) around this loop and bail out with a clear error if exceeded. Modify the loop surrounding the get_balance/needed_balance check (the loop using get_current_header, get_balance, association_balance, needed_balance) to track elapsed time or attempt count, sleep as now between retries, and when the limit is reached return an Err (or exit with a descriptive log and non-zero status) so the caller of main (or the surrounding async context) can handle the failure. Ensure the new config variable is passed into this code path and include its name in logs for observability.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f8bea81f-fc35-47e9-a000-ea98440d0bb1
📒 Files selected for processing (2)
- `config/src/txpool_config.rs`
- `sync/starcoin-execute-bench/src/main.rs`
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Cargo.toml`:
- Line 55: Remove the duplicate workspace entries by editing the Cargo.toml
workspace sections: in workspace.members remove the extra "node" and the
duplicate "commons/parallel-executor" entries, and in workspace.default-members
remove the duplicate "node" entry so each member (e.g., "node" and
"commons/parallel-executor") appears only once; ensure the final
workspace.members and workspace.default-members arrays contain unique strings
only.
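The duplicate-member problem described above can also be caught mechanically. A hedged sketch — it assumes members are quoted strings in the manifest and simply reports any quoted token that appears more than once, so an entry that legitimately occurs in both `members` and `default-members` would be flagged as a false positive:

```shell
# Sample manifest fragment with a duplicated member (illustrative only).
cat > /tmp/ws_members.toml <<'EOF'
members = [
    "node",
    "miner",
    "node",
]
EOF

# Print any quoted entry that appears more than once.
grep -oE '"[^"]+"' /tmp/ws_members.toml | sort | uniq -d
# -> "node"
```

Run against the real root `Cargo.toml`, this narrows a manual audit of the workspace arrays to a handful of candidates.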
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 97d2dbd2-53b6-4ac0-86e5-b2a1d15dac0c
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (26)
- `Cargo.toml`
- `commons/mvhashmap/Cargo.toml`
- `commons/parallel-executor/Cargo.toml`
- `vm2/abi/decoder/Cargo.toml`
- `vm2/abi/resolver/Cargo.toml`
- `vm2/abi/types/Cargo.toml`
- `vm2/aggregator/Cargo.toml`
- `vm2/compiler/Cargo.toml`
- `vm2/framework/Cargo.toml`
- `vm2/framework/cached-packages/Cargo.toml`
- `vm2/framework/gas-algebra/Cargo.toml`
- `vm2/framework/gas-meter/Cargo.toml`
- `vm2/framework/gas-schedule/Cargo.toml`
- `vm2/framework/move-stdlib/Cargo.toml`
- `vm2/framework/native-interface/Cargo.toml`
- `vm2/framework/table-natives/Cargo.toml`
- `vm2/move-package-manager/Cargo.toml`
- `vm2/package-builder/Cargo.toml`
- `vm2/resource-viewer/Cargo.toml`
- `vm2/sdk-builder/Cargo.toml`
- `vm2/starcoin-transactional-test-harness/Cargo.toml`
- `vm2/types/Cargo.toml`
- `vm2/vm-runtime-types/Cargo.toml`
- `vm2/vm-runtime/Cargo.toml`
- `vm2/vm-status-translator/Cargo.toml`
- `vm2/vm-types/Cargo.toml`
🚧 Files skipped from review as they are similar to previous changes (14)
- vm2/framework/native-interface/Cargo.toml
- vm2/framework/move-stdlib/Cargo.toml
- vm2/compiler/Cargo.toml
- vm2/resource-viewer/Cargo.toml
- commons/mvhashmap/Cargo.toml
- vm2/package-builder/Cargo.toml
- vm2/framework/cached-packages/Cargo.toml
- vm2/starcoin-transactional-test-harness/Cargo.toml
- vm2/abi/decoder/Cargo.toml
- vm2/types/Cargo.toml
- vm2/abi/resolver/Cargo.toml
- vm2/framework/table-natives/Cargo.toml
- vm2/vm-status-translator/Cargo.toml
- vm2/aggregator/Cargo.toml
Force-pushed from 41ba125 to def7fd4.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
vm2/vm-runtime/src/data_cache.rs (1)
453-465: ⚠️ Potential issue | 🟡 Minor

Same inconsistency: `None` passed instead of `self.max_value_nest_depth`.

Both `ValueSerDeContext::new(None)` calls in this method should use `self.max_value_nest_depth` for consistency with the pattern in `storage_wrapper.rs`.

🔧 Proposed fix

```diff
-let value = ValueSerDeContext::<DelayedFieldID>::new(None)
+let value = ValueSerDeContext::<DelayedFieldID>::new(self.max_value_nest_depth)
     .with_delayed_fields_replacement(&mapping)
     .deserialize(state_value.bytes(), layout)
     .ok_or_else(|| {
         StateviewError::Other("Failed to replace delayed values with ids".to_string())
     })?;

-let serialized = ValueSerDeContext::<DelayedFieldID>::new(None)
+let serialized = ValueSerDeContext::<DelayedFieldID>::new(self.max_value_nest_depth)
     .with_delayed_fields_serde()
     .serialize(&value, layout)
```
Verify each finding against the current code and only fix it if needed. In `@vm2/vm-runtime/src/data_cache.rs` around lines 453 - 465, The two ValueSerDeContext::new(None) calls in this block should pass self.max_value_nest_depth instead of None to match the pattern used elsewhere (e.g., storage_wrapper.rs); update both the deserialize call (ValueSerDeContext::new(...).with_delayed_fields_replacement(&mapping).deserialize(...)) and the serialize call (ValueSerDeContext::new(...).with_delayed_fields_serde().serialize(...)) to use self.max_value_nest_depth so the context uses the configured nesting limit.
🧹 Nitpick comments (3)
test-helper/src/node.rs (1)
24-28: Document the lifecycle contract for `run_node_with_all_service`.

Line 24 introduces behavior that differs from `run_node_by_config` (the pacemaker remains running). Please add a doc comment that callers must explicitly manage pacemaker state and call `NodeHandle::stop()` to avoid flaky tests and lingering background services.

Suggested doc update

```diff
+/// Launches a test node with all services enabled.
+///
+/// Unlike `run_node_by_config`, this function does not stop pacemaker.
+/// Callers should explicitly stop pacemaker when deterministic block
+/// production is required and must call `NodeHandle::stop()` for cleanup.
 pub fn run_node_with_all_service(config: Arc<NodeConfig>) -> Result<NodeHandle> {
     let logger_handle = starcoin_logger::init_for_test();
     let node_handle = NodeService::launch(config, logger_handle)?;
     Ok(node_handle)
 }
```
Verify each finding against the current code and only fix it if needed. In `@test-helper/src/node.rs` around lines 24 - 28, Add a doc comment to run_node_with_all_service explaining its lifecycle contract: unlike run_node_by_config, the pacemaker remains running after this function returns, so callers are responsible for explicitly managing pacemaker state and must call NodeHandle::stop() (or equivalent) to shut down background services and avoid flaky tests or lingering processes; document this behavior, the required caller responsibilities, and mention NodeHandle as the mechanism to stop the node.sync/starcoin-execute-bench/src/results.rs (2)
676-693: Note: This TPS calculation differs from `calculate_tps_from_executed`.

This chart-embedded TPS is calculated from Added (submission) times, while `calculate_tps_from_executed` uses execution times. Both are valid metrics:
- Submission TPS: Measures how fast transactions enter the system
- Execution TPS: Measures how fast transactions are confirmed
This is fine if intentional, but consider adding a brief comment clarifying the distinction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sync/starcoin-execute-bench/src/results.rs` around lines 676 - 693, Add a short clarifying comment above the TPS computation in results.rs (the block that computes `tps` from `valid_executions`, `first_time`/`last_time`) that explicitly states this TPS is computed from transaction "Added" (submission) timestamps and is distinct from the execution-based metric produced by `calculate_tps_from_executed`; mention the intended meaning (submission TPS vs execution TPS) so future readers understand why both metrics exist and differ.
168-185: Consider extracting statistics calculation into a helper function.

The min/max/avg/median calculation pattern is repeated in multiple places:

- `calculate_stats` (here)
- `calculate_per_block_tps_stats` (lines 364-376)
- `calculate_per_block_mined_tps_stats` (lines 439-451)
- `draw_latency_chart` (lines 654-674)

Extracting this into a small helper would reduce duplication and make future changes easier.
♻️ Optional helper function
```rust
fn calculate_statistics(values: &[f64]) -> (f64, f64, f64, f64) {
    if values.is_empty() {
        return (0.0, 0.0, 0.0, 0.0);
    }
    let min = values.iter().fold(f64::INFINITY, |a, &b| a.min(b));
    let max = values.iter().fold(0.0f64, |a, &b| a.max(b));
    let avg = values.iter().sum::<f64>() / values.len() as f64;
    let mut sorted = values.to_vec();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap_or(std::cmp::Ordering::Equal));
    let median = if sorted.len() % 2 == 0 {
        (sorted[sorted.len() / 2 - 1] + sorted[sorted.len() / 2]) / 2.0
    } else {
        sorted[sorted.len() / 2]
    };
    (min, max, avg, median)
}
```
Verify each finding against the current code and only fix it if needed. In `@sync/starcoin-execute-bench/src/results.rs` around lines 168 - 185, The min/max/avg/median logic duplicated in calculate_stats, calculate_per_block_tps_stats, calculate_per_block_mined_tps_stats, and draw_latency_chart should be extracted into a single helper (e.g., fn calculate_statistics(values: &[f64]) -> (f64,f64,f64,f64)); implement that helper to return (min, max, avg, median) handling the empty case, then replace the local folding/sorting blocks in calculate_stats, calculate_per_block_tps_stats, calculate_per_block_mined_tps_stats, and draw_latency_chart to call calculate_statistics(&all_delays) (or the appropriate slice) and use the returned tuple values.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@commons/mvhashmap/Cargo.toml`:
- Line 23: The Cargo.toml entry for the dependency move-vm-types is pointing to
a direct git rev (2dfc8d3...) which conflicts with the workspace's move-vm-types
rev (babf994...), so either change this crate's dependency to use { workspace =
true } (so it inherits the workspace revision) or update the git rev here to
match the workspace rev; locate the move-vm-types entry in
commons/mvhashmap/Cargo.toml and apply one of those two fixes consistently
across other vm2/* crates.
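One way to apply the first option, sketched under the assumption that the root Cargo.toml already declares (or will declare) move-vm-types under [workspace.dependencies]:

```toml
# commons/mvhashmap/Cargo.toml — inherit the workspace pin instead of
# re-declaring a divergent git rev (illustrative fragment, not the full file).
[dependencies]
move-vm-types = { workspace = true }
```

This keeps every crate on the single rev declared at the root, so bumping the pin later is a one-line change.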
In `@rpc/client/tests/chain_get_block_txn_infos_in_seq_test.rs`:
- Around line 205-223: The success check in wait_for_consistent_txn_infos
currently only verifies continuity and lengths, allowing early return when
VM1/VM2 entries don't match the in-sequence data; update the success condition
(the if that returns Ok((seq_infos, vm1_infos, vm2_infos))) to also verify that
corresponding hashes/ids in seq_infos match those in vm1_infos and vm2_infos
(e.g., compare seq_infos[i].txn_hash == vm1_infos[i'].txn_hash and
vm2_infos[i''].txn_hash as appropriate), and when the three RPCs are all Ok but
inconsistent, set last_err to a descriptive anyhow::Error capturing the mismatch
details (include which indexes/hashes differ and the three result snapshots) so
timeout diagnostics are meaningful; make the same change in the other similar
block referenced (the second match arm at lines ~227-232).
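A minimal sketch of the stricter success condition described above: beyond length/continuity, require that every in-sequence entry's hash matches the corresponding per-VM entry. The `TxnInfo` struct, the `txn_hash` field, and the assumption that VM1 entries precede VM2 entries in the in-sequence view are all illustrative stand-ins, not the real RPC types:

```rust
// Hypothetical simplified shape of a transaction info record.
#[derive(Debug, Clone, PartialEq)]
struct TxnInfo {
    txn_hash: String,
}

// Returns Ok(()) only when lengths line up and every in-sequence hash matches
// the per-VM entry at the same position; otherwise returns a descriptive error
// suitable for storing in last_err for timeout diagnostics.
fn hashes_consistent(seq: &[TxnInfo], vm1: &[TxnInfo], vm2: &[TxnInfo]) -> Result<(), String> {
    if seq.len() != vm1.len() + vm2.len() {
        return Err(format!(
            "length mismatch: seq={} vm1={} vm2={}",
            seq.len(),
            vm1.len(),
            vm2.len()
        ));
    }
    // Assumed layout: VM1 entries precede VM2 entries in the in-sequence view.
    for (i, (s, v)) in seq.iter().zip(vm1.iter().chain(vm2.iter())).enumerate() {
        if s.txn_hash != v.txn_hash {
            return Err(format!(
                "hash mismatch at index {}: seq={} vm={}",
                i, s.txn_hash, v.txn_hash
            ));
        }
    }
    Ok(())
}

fn main() {
    let mk = |h: &str| TxnInfo { txn_hash: h.to_string() };
    let seq = vec![mk("a"), mk("b"), mk("c")];
    let vm1 = vec![mk("a"), mk("b")];
    let vm2 = vec![mk("c")];
    assert!(hashes_consistent(&seq, &vm1, &vm2).is_ok());

    // An inconsistent VM2 result now produces a descriptive error instead of
    // an early "success" return.
    let bad_vm2 = vec![mk("x")];
    let err = hashes_consistent(&seq, &vm1, &bad_vm2).unwrap_err();
    println!("{}", err);
}
```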
In `@sync/starcoin-execute-bench/src/results.rs`:
- Around line 456-482: The public method dump_results is currently a no-op;
either remove it or restore/implement its intended file and SVG export behavior:
if you choose to implement, re-enable the file write using OpenOptions to
create/truncate "./transaction_results.txt", iterate over self.transaction_data
(use references like &self.transaction_data and write each transaction and
results with writeln!), then call
self.export_combined_svg("./benchmark_results.svg") and propagate any errors
(use anyhow::Context or map_err to wrap errors with a helpful message) instead
of silently returning Ok(()); alternatively, if this feature isn't ready,
replace the body with a clear TODO comment stating the planned behavior or
remove the public dump_results signature to avoid a misleading no-op.
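If the implement-it route is chosen, a hedged sketch of the restored body might look like the following. The `transaction_data` field name follows the review comment, but its element type and the `export_combined_svg` call are assumptions about the real `BenchmarkResults` type:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

struct BenchmarkResults {
    // Assumed shape for illustration; the real element type is not shown in the diff.
    transaction_data: Vec<(String, String)>,
}

impl BenchmarkResults {
    // Writes one tab-separated line per transaction, truncating any previous file.
    fn dump_results(&self, path: &Path) -> std::io::Result<()> {
        let mut file = OpenOptions::new()
            .create(true)
            .write(true)
            .truncate(true)
            .open(path)?;
        for (txn, result) in &self.transaction_data {
            writeln!(file, "{}\t{}", txn, result)?;
        }
        // The restored method would also call self.export_combined_svg(...)
        // here and wrap errors with context instead of discarding them.
        Ok(())
    }
}

fn main() -> std::io::Result<()> {
    let results = BenchmarkResults {
        transaction_data: vec![("txn-1".to_string(), "ok".to_string())],
    };
    let path = std::env::temp_dir().join("transaction_results.txt");
    results.dump_results(&path)?;
    assert_eq!(std::fs::read_to_string(&path)?, "txn-1\tok\n");
    println!("ok");
    Ok(())
}
```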
In `@vm2/framework/Cargo.toml`:
- Around line 51-64: The Move dependency pins in vm2/framework/Cargo.toml must
match the workspace root; update each move-* entry (e.g., move-binary-format,
move-cli, move-command-line-common, move-compiler, move-core-types, move-docgen,
move-model, move-package, move-prover, move-prover-boogie-backend,
move-prover-bytecode-pipeline, move-stackless-bytecode, move-vm-runtime,
move-vm-types) so their git rev value equals the root Cargo.toml revision
(babf994a38cda17b84186c7992f92fb3554347f0), ensuring all move-* dependencies
across the workspace use the identical git rev to avoid duplicate graphs and
type/version skew.
In `@vm2/vm-runtime/src/data_cache.rs`:
- Around line 337-344: The code creates
ValueSerDeContext::<DelayedFieldID>::new(None) but should use the adapter's
configured nesting limit; replace the None with self.max_value_nest_depth so the
call becomes ValueSerDeContext::<DelayedFieldID>::new(self.max_value_nest_depth)
(ensuring the option/type matches the constructor), i.e. update the
instantiation used before .with_delayed_fields_serde().deserialize(...) to pass
the stored self.max_value_nest_depth consistent with storage_wrapper.rs and the
StorageAdapter field.
---
Outside diff comments:
In `@vm2/vm-runtime/src/data_cache.rs`:
- Around line 453-465: The two ValueSerDeContext::new(None) calls in this block
should pass self.max_value_nest_depth instead of None to match the pattern used
elsewhere (e.g., storage_wrapper.rs); update both the deserialize call
(ValueSerDeContext::new(...).with_delayed_fields_replacement(&mapping).deserialize(...))
and the serialize call
(ValueSerDeContext::new(...).with_delayed_fields_serde().serialize(...)) to use
self.max_value_nest_depth so the context uses the configured nesting limit.
---
Nitpick comments:
In `@sync/starcoin-execute-bench/src/results.rs`:
- Around line 676-693: Add a short clarifying comment above the TPS computation
in results.rs (the block that computes `tps` from `valid_executions`,
`first_time`/`last_time`) that explicitly states this TPS is computed from
transaction "Added" (submission) timestamps and is distinct from the
execution-based metric produced by `calculate_tps_from_executed`; mention the
intended meaning (submission TPS vs execution TPS) so future readers understand
why both metrics exist and differ.
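The distinction the comment should capture can be illustrated with a toy computation: the same transaction count divided over two different time windows yields two different TPS figures. The function and values here are illustrative, not the benchmark's actual code:

```rust
// Transactions per second over a window defined by two millisecond timestamps.
fn tps_from_timestamps(count: usize, first_ms: u64, last_ms: u64) -> f64 {
    let span_secs = last_ms.saturating_sub(first_ms) as f64 / 1000.0;
    if span_secs == 0.0 {
        // Degenerate single-instant window: report the raw count.
        return count as f64;
    }
    count as f64 / span_secs
}

fn main() {
    // 100 txns submitted ("Added" timestamps) over 2s, but executed over 4s:
    let submission_tps = tps_from_timestamps(100, 0, 2_000);
    let execution_tps = tps_from_timestamps(100, 500, 4_500);
    assert_eq!(submission_tps, 50.0);
    assert_eq!(execution_tps, 25.0);
    println!("submission={submission_tps} execution={execution_tps}");
}
```

Both numbers describe the same transactions, which is exactly why a comment distinguishing submission TPS from execution TPS is worth adding.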
- Around line 168-185: The min/max/avg/median logic duplicated in
calculate_stats, calculate_per_block_tps_stats,
calculate_per_block_mined_tps_stats, and draw_latency_chart should be extracted
into a single helper (e.g., fn calculate_statistics(values: &[f64]) ->
(f64,f64,f64,f64)); implement that helper to return (min, max, avg, median)
handling the empty case, then replace the local folding/sorting blocks in
calculate_stats, calculate_per_block_tps_stats,
calculate_per_block_mined_tps_stats, and draw_latency_chart to call
calculate_statistics(&all_delays) (or the appropriate slice) and use the
returned tuple values.
In `@test-helper/src/node.rs`:
- Around line 24-28: Add a doc comment to run_node_with_all_service explaining
its lifecycle contract: unlike run_node_by_config, the pacemaker remains running
after this function returns, so callers are responsible for explicitly managing
pacemaker state and must call NodeHandle::stop() (or equivalent) to shut down
background services and avoid flaky tests or lingering processes; document this
behavior, the required caller responsibilities, and mention NodeHandle as the
mechanism to stop the node.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1a25664e-abb7-434d-8863-aaf262c8fb9a
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (37)
- Cargo.toml
- commons/mvhashmap/Cargo.toml
- commons/parallel-executor/Cargo.toml
- config/src/txpool_config.rs
- rpc/client/tests/chain_get_block_txn_infos_in_seq_test.rs
- sync/starcoin-execute-bench/Cargo.toml
- sync/starcoin-execute-bench/src/main.rs
- sync/starcoin-execute-bench/src/results.rs
- test-helper/src/lib.rs
- test-helper/src/node.rs
- vm2/abi/decoder/Cargo.toml
- vm2/abi/resolver/Cargo.toml
- vm2/abi/types/Cargo.toml
- vm2/aggregator/Cargo.toml
- vm2/compiler/Cargo.toml
- vm2/framework/Cargo.toml
- vm2/framework/cached-packages/Cargo.toml
- vm2/framework/gas-algebra/Cargo.toml
- vm2/framework/gas-meter/Cargo.toml
- vm2/framework/gas-schedule/Cargo.toml
- vm2/framework/move-stdlib/Cargo.toml
- vm2/framework/native-interface/Cargo.toml
- vm2/framework/table-natives/Cargo.toml
- vm2/move-package-manager/Cargo.toml
- vm2/package-builder/Cargo.toml
- vm2/resource-viewer/Cargo.toml
- vm2/sdk-builder/Cargo.toml
- vm2/starcoin-transactional-test-harness/Cargo.toml
- vm2/types/Cargo.toml
- vm2/vm-runtime-types/Cargo.toml
- vm2/vm-runtime/Cargo.toml
- vm2/vm-runtime/src/data_cache.rs
- vm2/vm-runtime/src/parallel_executor/mod.rs
- vm2/vm-runtime/src/parallel_executor/storage_wrapper.rs
- vm2/vm-runtime/src/parallel_executor/vm_wrapper.rs
- vm2/vm-status-translator/Cargo.toml
- vm2/vm-types/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (24)
- vm2/abi/resolver/Cargo.toml
- vm2/package-builder/Cargo.toml
- config/src/txpool_config.rs
- vm2/aggregator/Cargo.toml
- vm2/move-package-manager/Cargo.toml
- vm2/framework/cached-packages/Cargo.toml
- vm2/framework/native-interface/Cargo.toml
- vm2/framework/table-natives/Cargo.toml
- vm2/framework/move-stdlib/Cargo.toml
- vm2/sdk-builder/Cargo.toml
- vm2/vm-status-translator/Cargo.toml
- vm2/framework/gas-schedule/Cargo.toml
- vm2/vm-runtime-types/Cargo.toml
- Cargo.toml
- commons/parallel-executor/Cargo.toml
- vm2/starcoin-transactional-test-harness/Cargo.toml
- vm2/types/Cargo.toml
- vm2/abi/types/Cargo.toml
- sync/starcoin-execute-bench/src/main.rs
- vm2/abi/decoder/Cargo.toml
- vm2/framework/gas-algebra/Cargo.toml
- vm2/compiler/Cargo.toml
- vm2/framework/gas-meter/Cargo.toml
- sync/starcoin-execute-bench/Cargo.toml
♻️ Duplicate comments (1)
vm2/vm-runtime/src/data_cache.rs (1)

337-344: ⚠️ Potential issue | 🟠 Major

max_value_nest_depth is currently ignored in StorageAdapter serde paths. Line 337, Line 453, and Line 459 use ValueSerDeContext::new(None), which bypasses the adapter's configured depth (Line 164). This weakens the nested-depth enforcement introduced by this PR.

🔧 Proposed fix

```diff
-    let value = ValueSerDeContext::new(None)
+    let value = ValueSerDeContext::new(self.max_value_nest_depth)
         .with_delayed_fields_serde()
         .deserialize(bytes, layout)
         .ok_or_else(|| {

-    let value = ValueSerDeContext::new(None)
+    let value = ValueSerDeContext::new(self.max_value_nest_depth)
         .with_delayed_fields_replacement(&mapping)
         .deserialize(state_value.bytes(), layout)
         .ok_or_else(|| {
             StateviewError::Other("Failed to replace delayed values with ids".to_string())
         })?;

-    let serialized = ValueSerDeContext::new(None)
+    let serialized = ValueSerDeContext::new(self.max_value_nest_depth)
         .with_delayed_fields_serde()
         .serialize(&value, layout)
```

Verification script:

```bash
#!/bin/bash
set -euo pipefail

echo "Check ValueSerDeContext constructor usage in both adapters:"
rg -n -C2 'ValueSerDeContext::new\((None|self\.max_value_nest_depth)\)' \
  vm2/vm-runtime/src/data_cache.rs \
  vm2/vm-runtime/src/parallel_executor/storage_wrapper.rs
```

Expected result: no ValueSerDeContext::new(None) remains in vm2/vm-runtime/src/data_cache.rs for delayed-field serde/replacement paths.

Also applies to: 453-461
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vm2/vm-runtime/src/data_cache.rs` around lines 337 - 344, The deser path is bypassing the adapter's nested-depth limit by calling ValueSerDeContext::new(None) for delayed-field serde; update these calls (e.g., the one that creates value via ValueSerDeContext::new(None).with_delayed_fields_serde().deserialize(bytes, layout)) to pass the adapter's configured depth (use self.max_value_nest_depth or the corresponding max_value_nest_depth field) instead of None, and make the same replacement for the other occurrences in the delayed-field serde/replacement paths so the max_value_nest_depth is enforced consistently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@vm2/vm-runtime/src/data_cache.rs`:
- Around line 337-344: The deser path is bypassing the adapter's nested-depth
limit by calling ValueSerDeContext::new(None) for delayed-field serde; update
these calls (e.g., the one that creates value via
ValueSerDeContext::new(None).with_delayed_fields_serde().deserialize(bytes,
layout)) to pass the adapter's configured depth (use self.max_value_nest_depth
or the corresponding max_value_nest_depth field) instead of None, and make the
same replacement for the other occurrences in the delayed-field
serde/replacement paths so the max_value_nest_depth is enforced consistently.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 354f5af9-25af-44d0-b38f-b0ae56756bd8
📒 Files selected for processing (2)
- vm2/vm-runtime/src/data_cache.rs
- vm2/vm-runtime/src/parallel_executor/storage_wrapper.rs
6388e7d to 43cf024 (Compare)
2, remove the type delayed field id
fa562c2 to 3a5a6b2 (Compare)
Pull request type
Please check the type of change your PR introduces:
What is the current behavior?
Issue Number: N/A
What is the new behavior?
Other information
Summary by CodeRabbit
New Features
Tests
Improvements