
BF: Seek file data to 0 before every retry in request(), not just for retryable status codes #1824

Merged
yarikoptic merged 2 commits into master from bf-zarr-upload2 on Mar 30, 2026

Conversation

@yarikoptic
Member

@yarikoptic yarikoptic commented Mar 25, 2026

@dstansby please try

claude's slopsplaining:

Previously, data.seek(0) was only called inside the retry_statuses/retry_if branch, which handles specific HTTP response codes (429, 500-504, and zarr-specific conditions). When a ConnectionError occurred mid-upload (e.g., connection dropped during a multi-minute large zarr chunk transfer), tenacity caught the exception and retried, but the file pointer was left at whatever position the read was interrupted.

On retry, requests computed Content-Length as (file_size - current_position) and sent only the tail of the file. S3 received partial data whose MD5 didn't match the Content-MD5 header (computed from the full file), resulting in a BadDigest 400 error. This error was also not retried (not in RETRY_STATUSES, not matched by retry_if), so the upload failed permanently.
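The mechanics of the partial-body retry can be sketched with an in-memory buffer standing in for the zarr chunk file (the 10-byte payload here is purely illustrative, not dandi code): after an interrupted read, the remaining bytes from the current position are what a length computation like requests' would see, and their MD5 no longer matches the digest of the full file.

```python
import hashlib
import io

# Hypothetical 10-byte "file" standing in for a large zarr chunk.
data = io.BytesIO(b"0123456789")
full_md5 = hashlib.md5(data.getvalue()).hexdigest()  # what Content-MD5 advertises

# Simulate an upload interrupted after 6 bytes were read:
# the file pointer is left mid-file.
data.read(6)

# A retry without rewinding sees only the bytes after the current
# position, so Content-Length covers just the tail of the file.
tail = data.getvalue()[data.tell():]
print(len(tail))                                    # 4, not 10
print(hashlib.md5(tail).hexdigest() == full_md5)    # False -> S3 rejects with BadDigest

# Rewinding restores the full payload for the next attempt.
data.seek(0)
print(len(data.getvalue()) - data.tell())           # 10
```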

The fix uses tenacity's before_sleep callback to seek the file data back to position 0 before every retry, regardless of whether the retry was triggered by ConnectionError, HTTPError, or a retryable status code.

Evidence from two independent logs (issue #1821):

  • Linux: 188 level-0 zarr chunks failed (large, ~6 min upload each), 67 level-2 chunks succeeded (small, fast upload) -- all with "succeeded after 1 retry" + BadDigest
  • Windows local disk: 187 level-0 failures after 2-8 retries, 68 level-1 successes -- ruling out NFS/filesystem as the cause
  • Both logs show "Resetting dropped connection: dandiarchive.s3.amazonaws.com" confirming ConnectionErrors preceded the BadDigest errors


Closes #1821

Co-Authored-By: Claude Code 2.1.81 / Claude Opus 4.6 <noreply@anthropic.com>
@yarikoptic yarikoptic added the patch (Increment the patch version when merged) and zarr labels on Mar 25, 2026
@codecov

codecov bot commented Mar 25, 2026

Codecov Report

❌ Patch coverage is 66.66667% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 75.12%. Comparing base (34af06f) to head (d7d5dd4).
⚠️ Report is 3 commits behind head on master.

Files with missing lines | Patch % | Lines
dandi/dandiapi.py | 66.66% | 1 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##           master    #1824   +/-   ##
=======================================
  Coverage   75.12%   75.12%           
=======================================
  Files          84       84           
  Lines       11930    11931    +1     
=======================================
+ Hits         8962     8963    +1     
  Misses       2968     2968           
Flag Coverage Δ
unittests 75.12% <66.66%> (+<0.01%) ⬆️


@CodyCBakerPhD CodyCBakerPhD marked this pull request as ready for review March 30, 2026 14:27
@CodyCBakerPhD
Contributor

User confirmed the resolution of the digest mismatch

What a silly source of such an error

Great to confirm no files were harmed in the process of dandi upload

@yarikoptic
Member Author

Thanks @CodyCBakerPhD for the review.
Frankly, I would not call it "silly", since the machinery there is quite complex and now sandwiched quite deep to provide generally robust and scalable operation. Just an oversight ;)

I will wait for CI to pass and then merge -- there will be follow-ups though.

@yarikoptic yarikoptic merged commit fb5e2f0 into master Mar 30, 2026
36 of 37 checks passed
@yarikoptic yarikoptic deleted the bf-zarr-upload2 branch March 30, 2026 15:10