libbeat/diskqueue: reuse decoder buffer when capacity matches size#49530
github-actions[bot] wants to merge 3 commits into `main` from
Conversation
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
This pull request does not have a backport label. To fix up this pull request, add the backport labels for the needed branches.
/ai can you run and share results from impacted benchmarks please, feel free to also run and report on benchmarks that might be impacted
Our encoder only writes EventFlags (uint8) values, so `to.Flags` is always 0-255. Use `//nolint:gosec` instead of a runtime bounds check to avoid hot-path overhead. Made-with: Cursor
Force-pushed from `9ab67b9` to `27f5282`.
VihasMakwana
left a comment
LGTM. I'd appreciate another review from @faec as she worked on the diskqueue implementation.
Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)
Summary
Changed `eventDecoder.Buffer` in `libbeat/publisher/queue/diskqueue/serialize.go` to reuse the existing slice when `cap(d.buf) == n`, relaxing the reuse check from `cap(d.buf) > n` to `cap(d.buf) >= n`.

Why

The reader loop calls `decoder.Buffer(int(header.eventSize))` on a hot path. With the previous strict `>` check, equal-capacity buffers were unnecessarily reallocated.

Additional audit requested in issue
I looked for the same pattern (`cap(...) > n` followed by `make([]byte, n)` in reuse/allocation branches) in hot-path areas.

Findings:

- `libbeat/publisher/queue/diskqueue/serialize.go` (this PR): real issue, fixed.
- `libbeat/publisher/**`: no other matches. A `cap(...) > ...` match exists in `libbeat/common/streambuf/streambuf.go`, but it is a different condition (`retainable && cap(data) > newCap`) and not the equal-capacity reuse bug pattern.
- `go test ./libbeat/publisher/queue/diskqueue`
- `go test ./libbeat/publisher/queue/diskqueue -run '^$' -bench 'BenchmarkAsync1k$' -benchmem -count=1`

Both commands pass in this workspace.
From workflow: Mention in Issue