Drop fewer packets on UDP packet flood #5565

Open

AndrewMeadows wants to merge 3 commits into develop from leviathan/ghi-5562

Conversation

@AndrewMeadows
Collaborator

This PR tries to prevent packet loss (#5562) when connecting to a new region under a flood of data. To achieve this:

(1) The LLPacketRing max buffer size was increased.

(2) LLMessageSystem now has two LLPacketRings: one for "high-priority" packets and one for "low-priority".

(3) Reliable packets are ACKed when they are queued rather than waiting until they are processed.
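The dual-ring idea in items (1)-(3) can be sketched as follows. This is a minimal illustration under assumed names (DualPriorityInbound, queuePacket, popNextPacket are all hypothetical), not the viewer's actual LLPacketRing API:

```cpp
// Minimal sketch of a two-ring inbound queue; all names here are
// illustrative, not the viewer's actual API.
#include <cstdint>
#include <deque>
#include <utility>
#include <vector>

using Packet = std::vector<uint8_t>;

class DualPriorityInbound
{
public:
    // High-priority packets (e.g. acks and control traffic) go to one ring,
    // bulk data to the other.
    void queuePacket(Packet pkt, bool high_priority)
    {
        (high_priority ? mHigh : mLow).push_back(std::move(pkt));
    }

    // Drain the high-priority ring first so control packets are never
    // starved by a flood of low-priority data.
    bool popNextPacket(Packet& out)
    {
        std::deque<Packet>& ring = !mHigh.empty() ? mHigh : mLow;
        if (ring.empty())
        {
            return false;
        }
        out = std::move(ring.front());
        ring.pop_front();
        return true;
    }

private:
    std::deque<Packet> mHigh;
    std::deque<Packet> mLow;
};
```

In this scheme a reliable packet's ACK would be sent at queue time (per item 3), so the sender stops retransmitting even when processing lags behind the queue.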

Prioritize packets between high- and low-priority

Increase max size of LLPacketRing buffer

-gMessageSystem->mPacketRing.sendPacket(packetp->mSocket,
-    (char *)packetp->mBuffer, packetp->mBufferLength,
+gMessageSystem->sendPacketToSocket((char *)packetp->mBuffer, packetp->mBufferLength,
Collaborator Author

The socket communication logic has been pulled out of LLPacketRing and moved into LLMessageSystem.

 // let's call this one a loss!
-mPacketsLost++;
-gMessageSystem->mDroppedPackets++;
+gMessageSystem->mLostPackets++;
Collaborator Author

@AndrewMeadows AndrewMeadows Mar 23, 2026

Moving the socket communication logic into LLMessageSystem caused a naming confusion: there was already a data member called mDroppedPackets being used to track outgoing packets that were lost, while the new data members tracking incoming packets that are intentionally dropped (for testing purposes) had very similar names. To address this, the variable is renamed.

#include "u64.h"

-constexpr S16 MAX_BUFFER_RING_SIZE = 1024;
+constexpr S16 MAX_BUFFER_RING_SIZE = 8192;
Collaborator Author

We're greatly increasing the maximum size of the ring. We hope no viewer ever reaches this limit, because when that happens packets will be dropped. The proper remedy for that risk is to apply bandwidth backpressure in a timely fashion, which is outside the scope of this pull request.
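The overflow behavior described here can be sketched as a fixed-capacity ring that counts a drop when full. This is a simplified stand-in, not the viewer's LLPacketRing (BoundedRing and its members are hypothetical; only the 1024-to-8192 constant comes from the PR):

```cpp
// Illustrative fixed-capacity ring that drops on overflow; not the
// viewer's actual implementation.
#include <cstddef>
#include <deque>
#include <string>
#include <utility>

constexpr size_t MAX_BUFFER_RING_SIZE = 8192; // raised from 1024 in this PR

class BoundedRing
{
public:
    // Returns false (and counts a drop) when the ring is already full.
    bool push(std::string packet)
    {
        if (mRing.size() >= MAX_BUFFER_RING_SIZE)
        {
            ++mDroppedPackets;
            return false;
        }
        mRing.push_back(std::move(packet));
        return true;
    }

    // Load rate in [0, 1]: how close the ring is to overflowing.
    double loadRate() const
    {
        return double(mRing.size()) / double(MAX_BUFFER_RING_SIZE);
    }

    size_t dropped() const { return mDroppedPackets; }

private:
    std::deque<std::string> mRing;
    size_t mDroppedPackets = 0;
};
```

The larger capacity only delays the drop threshold; without backpressure a sustained flood will still eventually fill the ring.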

data_size + SOCKS_HEADER_SIZE,
LLProxy::getInstance()->getUDPProxy().getAddress(),
LLProxy::getInstance()->getUDPProxy().getPort());
}
Collaborator Author

As already mentioned: all of this logic moved into LLMessageSystem. LLPacketRing is now just a simple ring.

 mTotalBytesOut = 0;

-mDroppedPackets = 0; // total dropped packets in
+mLostPackets = 0; // total lost packets out
Collaborator Author

Based on how the mDroppedPackets data member was actually used, its comment was incorrect, so this name change is justified.

F32 LLMessageSystem::getBufferLoadRate() const
{
return llmax(mHighPriorityInbound.getBufferLoadRate(), mLowPriorityInbound.getBufferLoadRate());
}
Collaborator Author

This measure of "buffer load rate" is an approximation, but it is "good enough" because we expect the "low-priority" buffer to dominate.
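The approximation is simply the larger of the two rings' load rates, so when the low-priority ring dominates, it alone determines the result. A trivial standalone illustration (the free function name is hypothetical; it mirrors the member function shown above):

```cpp
#include <algorithm>

// Hypothetical helper mirroring getBufferLoadRate() above: takes the two
// rings' load rates in [0, 1] and returns the larger, i.e. how close the
// fuller ring is to overflowing.
double bufferLoadRate(double high_load, double low_load)
{
    return std::max(high_load, low_load);
}
```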

 S32 packets_in = gMessageSystem->mPacketsIn - mLastPacketsIn;
 S32 packets_out = gMessageSystem->mPacketsOut - mLastPacketsOut;
-S32 packets_lost = gMessageSystem->mDroppedPackets - mLastPacketsLost;
+S32 packets_lost = gMessageSystem->mLostPackets - mLastPacketsLost;
Collaborator Author

@AndrewMeadows AndrewMeadows Mar 23, 2026

As mentioned earlier: LLMessageSystem::mDroppedPackets was actually being used to count "lost outgoing packets" rather than "dropped inbound packets".

Development

Successfully merging this pull request may close these issues.

Packets still lost due to buffer overflow at connection to new Region
