Implement LRU cache for storing hashes used to filter out flood-repeated packets #1380
Base: dev
Conversation
Currently, some busy nodes are seeing more than 128 packets before the same packet is received again. I'm not sure whether this would help, since I'm not running one of those nodes, but I think it might be beneficial. I opted for uint16_t instead of uint32_t in order to save memory.
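For context, here's a minimal sketch of the idea, assuming a fixed 128-slot table of uint16_t hashes with move-to-front LRU eviction; the class and method names are hypothetical, not the actual MeshCore API:

```cpp
#include <stdint.h>
#include <string.h>

#define TABLE_SIZE 128

// Hypothetical sketch: fixed-size table of uint16_t packet hashes
// with LRU eviction. Slot 0 is always the most recently seen hash.
class SeenHashCache {
  uint16_t hashes[TABLE_SIZE];
  uint8_t  count = 0;

public:
  // Returns true if 'h' was already in the table (a repeat),
  // false if it is new. Either way, 'h' ends up most recent.
  bool hasSeen(uint16_t h) {
    for (uint8_t i = 0; i < count; i++) {
      if (hashes[i] == h) {
        // Found: shift earlier entries down, move 'h' to the front.
        memmove(&hashes[1], &hashes[0], i * sizeof(uint16_t));
        hashes[0] = h;
        return true;
      }
    }
    // Not found: insert at the front, dropping the least recently
    // used entry (the last slot) if the table is full.
    uint8_t n = (count < TABLE_SIZE) ? count : TABLE_SIZE - 1;
    memmove(&hashes[1], &hashes[0], n * sizeof(uint16_t));
    hashes[0] = h;
    if (count < TABLE_SIZE) count++;
    return false;
  }
};
```

The point of move-to-front is that a hash which keeps being matched stays in the table indefinitely, whereas a FIFO table evicts it after 128 inserts no matter how recently it was matched.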
There's no reason to assume they are the same repeater; they are just two repeaters that share the same first byte.
I would like to think so too, but how likely is the path 2b,d0,cc,2b,d0,cc,cc, with more d0 showing up later? I've even observed this packet:
d0 occurs a staggering 11 times, and the sequence d0,cc occurs 5 times!
Sorry, I missed that part and only saw the highlighted pair of e5. I agree that does look odd; I would even say it looks deliberate. Is it possible that the entire beginning of the path is faked, and the packet was custom-built and injected somewhere closer to the receiving node?
Hmm, I thought loops occurred in a much shorter timeframe too, with only 1-2 hops before they happen; my earlier conclusion was that it's not possible to see 128 packets in that time (due to airtime, etc.). [Edit]
@weebl2000 see also #1386; we're seeing such loops daily. There's definitely something fishy here, because I don't think we have that much traffic to fill the hashtable.
It would be interesting to see whether some of the more central nodes in the network could run with LRU eviction and see if anything changes. I'm running this PR on two of my repeaters - they are behaving normally - but they are not really central nodes.
@awhite2000 Also, a question: do you know whether (and how many) nodes have rxdelay set to anything other than 0?


Currently, some busy nodes are seeing more than 128 packets before the same packet is received again. It's only some hashes that make another full cycle, so I feel an LRU cache might fix the problem.
I opted for uint16_t instead of uint32_t in order to save memory. Is that useful, or should we opt for uint32_t and remove the 60-second expiry? I think we will need full timestamps.
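A minimal sketch of that alternative, assuming each slot stores a full uint32_t hash plus a full timestamp, with eviction picking the stalest entry (names again hypothetical; the time source could be millis() or an RTC epoch):

```cpp
#include <stdint.h>

#define TABLE_SIZE 128

struct SeenEntry {
  uint32_t hash;
  uint32_t last_seen;  // full timestamp instead of a 60-second expiry
};

// Hypothetical sketch: recency is tracked via the stored timestamp,
// so evicting the entry with the smallest timestamp gives LRU
// behaviour without reordering the array.
class TimestampedSeenCache {
  SeenEntry entries[TABLE_SIZE];
  uint8_t count = 0;

public:
  bool hasSeen(uint32_t h, uint32_t now) {
    for (uint8_t i = 0; i < count; i++) {
      if (entries[i].hash == h) {
        entries[i].last_seen = now;  // refresh recency on a match
        return true;
      }
    }
    uint8_t slot = count;
    if (count < TABLE_SIZE) {
      count++;
    } else {
      // Table full: overwrite the least recently seen entry.
      slot = 0;
      for (uint8_t i = 1; i < TABLE_SIZE; i++)
        if (entries[i].last_seen < entries[slot].last_seen) slot = i;
    }
    entries[slot].hash = h;
    entries[slot].last_seen = now;
    return false;
  }
};
```

Compared with the bare uint16_t sketch, this trades memory (8 bytes per slot vs. 2) for fewer false matches from hash collisions and no arbitrary expiry window.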
It would be very interesting to see a very central node in the network try this out.
For example, the Utrecht repeater is seeing ~658 packets per 20 minutes even at low-activity times; with 72 neighbors, it's not hard to imagine a message arriving again after looping through the network, once the node has seen 128 different hashes.
658 packets / 1200 seconds = 0.548 packets/second
Time to see 128 packets = 128 / 0.548 = 233.6 seconds ≈ 234 seconds
So with FIFO (dev branch):
The question becomes: what percentage of flooded packets take longer than 234 seconds (~4 minutes) to potentially return to the Utrecht node (cc)?
Given the mesh topology with 72 neighbors and long multi-hop paths, a packet taking 4+ minutes to traverse a long route back isn't unusual - especially with randomized retransmit delays adding up over 30-50+ hops.
Even if only 5-10% of packets return "late," that's:
658 × 5% = 33 loop events per 20 minutes
658 × 10% = 66 loop events per 20 minutes
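For anyone who wants to rerun the numbers, a self-contained check of the arithmetic above (constants copied from the Utrecht example; purely illustrative):

```cpp
#include <stdio.h>

int main() {
  const double packets  = 658.0;   // packets per 20-minute window
  const double window_s = 1200.0;  // 20 minutes in seconds
  const int    table    = 128;     // FIFO hash-table size

  double rate = packets / window_s;  // ~0.548 packets/second
  double fill = table / rate;        // time for 128 new hashes to cycle the table
  printf("rate = %.3f pkt/s, table cycles in %.0f s\n", rate, fill);
  printf("5%% late  -> %.0f loop events per 20 min\n", packets * 0.05);
  printf("10%% late -> %.0f loop events per 20 min\n", packets * 0.10);
  return 0;
}
```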