History archive hardening#5185

Open
SirTyson wants to merge 4 commits into stellar:master from SirTyson:history-archive-hardening

Conversation

@SirTyson
Contributor

Description

Hardens the processing of History Archive files during catchup by adding substantially more error checking, so malformed history archive files are handled gracefully instead of crashing the node. Addresses the following issues:

https://github.com/stellar/stellar-core-internal/issues/527
https://github.com/stellar/stellar-core-internal/issues/520
https://github.com/stellar/stellar-core-internal/issues/464
https://github.com/stellar/stellar-core-internal/issues/461

None of these are particularly exploitable, as they require a malicious tier 1 and can only momentarily stall nodes catching up to the network for the first time. But they're good to fix so we can stop getting bug bounty/AI reports on them.

Checklist

  • Reviewed the contributing document
  • Rebased on top of master (no merge commits)
  • Ran clang-format v8.0.0 (via make format or the Visual Studio extension)
  • Compiles
  • Ran all tests
  • If change impacts performance, include supporting evidence per the performance document


Copilot AI left a comment


Pull request overview

Hardens History Archive State (HAS) and related archive-file processing during catchup/publish flows by adding tighter validation and replacing aborting asserts with exceptions, so malformed or crafted archive inputs fail gracefully instead of crashing Stellar Core.

Changes:

  • Add HAS JSON size limits and post-deserialization validation (version, bucket vector sizes, ledger bounds, hex-hash format).
  • Harden deserialization of FutureBucket (required fields, shadow-hash cap) and fs::hexDir (throw instead of abort).
  • Add concurrency annotations/locking for publish enqueue timing, plus comprehensive HAS format-validation tests.
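
The post-deserialization validation described above can be sketched roughly as follows. Note this is a hedged illustration, not the PR's actual code: `HASState`, `validateHAS`, and the specific limits are hypothetical stand-ins for whatever `HistoryArchiveState` actually defines.

```cpp
#include <cstdint>
#include <limits>
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative constants; the PR defines its own limits.
constexpr uint32_t HAS_VERSION = 1;
constexpr size_t MAX_BUCKET_LEVELS = 11;
constexpr uint32_t MAX_CURRENT_LEDGER =
    std::numeric_limits<uint32_t>::max() - 256;

// A bucket hash must be a 64-character lowercase hex string (SHA-256).
bool isHexHash(std::string const& s)
{
    if (s.size() != 64)
        return false;
    for (char c : s)
        if (!((c >= '0' && c <= '9') || (c >= 'a' && c <= 'f')))
            return false;
    return true;
}

// Hypothetical stand-in for the deserialized HistoryArchiveState.
struct HASState
{
    uint32_t version;
    uint32_t currentLedger;
    std::vector<std::string> bucketHashes;
};

// Validate after JSON deserialization: throw (recoverable) rather than
// assert (process abort) on malformed input.
void validateHAS(HASState const& has)
{
    if (has.version != HAS_VERSION)
        throw std::runtime_error("unexpected HAS version");
    if (has.currentLedger > MAX_CURRENT_LEDGER)
        throw std::runtime_error("currentLedger out of bounds");
    if (has.bucketHashes.size() > MAX_BUCKET_LEVELS)
        throw std::runtime_error("too many bucket levels");
    for (auto const& h : has.bucketHashes)
        if (!isHexHash(h))
            throw std::runtime_error("malformed bucket hash");
}
```

The key design point is replacing aborting asserts with exceptions, so catchup can fail a single download attempt instead of taking the node down.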

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.

Summary per file:

  • src/util/Fs.cpp: hexDir now throws on invalid hex input rather than aborting.
  • src/historywork/VerifyBucketWork.cpp: Handle synchronous verifier completion to return success/failure immediately.
  • src/history/test/HistoryArchiveFormatTests.cpp: New test suite covering malformed/crafted HAS JSON inputs and hexDir behavior.
  • src/history/HistoryManagerImpl.h: Add annotated mutex guarding the enqueue-time map.
  • src/history/HistoryManagerImpl.cpp: Lock around enqueue-time map updates/reads.
  • src/history/HistoryArchive.h: Introduce HAS size and ledger upper-bound constants.
  • src/history/HistoryArchive.cpp: Enforce HAS size limits and validate HAS contents after JSON deserialization.
  • src/bucket/FutureBucket.h: Add shadow-hash count cap and required-field checks during deserialization.
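
As a rough illustration of the fs::hexDir change (throw on invalid input instead of aborting), the pattern looks like this. The function body is a hedged sketch under assumed behavior, not the actual stellar-core implementation:

```cpp
#include <cctype>
#include <stdexcept>
#include <string>

// Sketch: derive a nested directory prefix from a hex hash string,
// rejecting bad input with an exception the caller can handle.
// Previously an assert would abort the whole process on malformed input.
std::string hexDirSketch(std::string const& hexStr)
{
    if (hexStr.size() < 6)
        throw std::runtime_error("hex string too short: " + hexStr);
    for (char c : hexStr)
        if (!std::isxdigit(static_cast<unsigned char>(c)))
            throw std::runtime_error("invalid hex character in: " + hexStr);
    // Assumed layout: first three 2-char components joined with '/'.
    return hexStr.substr(0, 2) + "/" + hexStr.substr(2, 2) + "/" +
           hexStr.substr(4, 2);
}
```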

Comment on lines +245 to +252:

    // Check file size before parsing to prevent OOM from crafted JSON
    auto fileSize = std::filesystem::file_size(inFile);
    if (fileSize > MAX_HAS_FILE_SIZE)
    {
        throw std::runtime_error(
            fmt::format(FMT_STRING("HAS file size {} exceeds maximum {}"),
                        fileSize, MAX_HAS_FILE_SIZE));
    }
Comment on lines +91 to +95:

    // Upper bound on currentLedger to prevent uint32_t overflow in
    // downstream arithmetic.
    static constexpr uint32_t MAX_CURRENT_LEDGER =
        std::numeric_limits<uint32_t>::max() - 256;
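
To see why the 256 ledgers of headroom matter: downstream code adds small offsets (e.g. a checkpoint-frequency step) to currentLedger, and unsigned 32-bit addition wraps silently on overflow. A minimal sketch, assuming offsets are bounded by 256 (the helper name and bound here are illustrative, not from the PR):

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

constexpr uint32_t MAX_CURRENT_LEDGER =
    std::numeric_limits<uint32_t>::max() - 256;

// Returns ledger + offset. Assuming ledger was validated against
// MAX_CURRENT_LEDGER and offset <= 256, the sum cannot wrap around,
// so comparisons like (ledger + offset > target) stay correct.
uint32_t advanceLedger(uint32_t ledger, uint32_t offset)
{
    assert(ledger <= MAX_CURRENT_LEDGER && offset <= 256);
    return ledger + offset;
}
```

Without the cap, a crafted HAS with currentLedger near UINT32_MAX would make such sums wrap to small values and confuse range checks during catchup.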

@SirTyson force-pushed the history-archive-hardening branch from 9c24cbe to 15e5d28 on March 19, 2026 20:20
