Elements are incorrectly evicted from eBPF LRU hashmap
I observed that elements in the eBPF LRU hash map (`BPF_MAP_TYPE_LRU_HASH`) were being evicted earlier than I expected. In the code below, I create an LRU hash map with 8 entries, insert a new element every second, and print the map's contents after each insert:
```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/cilium/ebpf"
)

func main() {
	spec := ebpf.MapSpec{
		Name:       "test_map",
		Type:       ebpf.LRUHash,
		KeySize:    4,
		ValueSize:  8,
		MaxEntries: 8,
	}
	hashMap, err := ebpf.NewMap(&spec)
	if err != nil {
		log.Fatalln("Could not create map:", err)
	}

	var insertKey uint32
	for range time.Tick(time.Second) {
		err = hashMap.Update(insertKey, uint64(insertKey), ebpf.UpdateAny)
		if err != nil {
			log.Printf("Update failed. insertKey=%d|value=%d|err=%s", insertKey, insertKey, err)
		}

		var key uint32
		var value uint64
		count := 0
		elementsStr := ""
		iter := hashMap.Iterate()
		for iter.Next(&key, &value) {
			elementsStr += fmt.Sprintf("(%d, %d) ", key, value)
			count++
		}
		log.Printf("Total elements: %d, elements: %s", count, elementsStr)

		insertKey++
	}
}
```
When I run the above program, I see this:
```
2023/03/29 17:32:29 Total elements: 1, elements: (0, 0)
2023/03/29 17:32:30 Total elements: 2, elements: (1, 1) (0, 0)
2023/03/29 17:32:31 Total elements: 3, elements: (1, 1) (0, 0) (2, 2)
2023/03/29 17:32:32 Total elements: 3, elements: (3, 3) (0, 0) (2, 2)
...
```
Since the map has eight entries, I expected the fourth line to show four elements, but it shows only three because entry (1, 1) was evicted.
If I change `max_entries` to 1024, the problem usually appears after inserting the 200th element, though sometimes it happens later; it is not consistent.
This issue is not limited to maps created and updated from user space: I originally observed it in an XDP program that creates a map and inserts entries into it, and the program above reproduces what I saw there. In my real program, which also uses 1024 entries, the problem appeared after inserting only 16 elements.
I tested this on a production server running Linux kernel 5.16.7.
I also tested on a Linux VM after upgrading the kernel to 6.2.8, and I noticed a difference in the eviction behavior. For example, when `max_entries` is 8, I observe:
```
2023/03/29 20:38:02 Total elements: 1, elements: (0, 0)
2023/03/29 20:38:03 Total elements: 2, elements: (0, 0) (1, 1)
2023/03/29 20:38:04 Total elements: 3, elements: (0, 0) (2, 2) (1, 1)
2023/03/29 20:38:05 Total elements: 4, elements: (0, 0) (2, 2) (1, 1) (3, 3)
2023/03/29 20:38:06 Total elements: 5, elements: (4, 4) (0, 0) (2, 2) (1, 1) (3, 3)
2023/03/29 20:38:07 Total elements: 6, elements: (4, 4) (0, 0) (2, 2) (1, 1) (5, 5) (3, 3)
2023/03/29 20:38:08 Total elements: 7, elements: (4, 4) (0, 0) (2, 2) (1, 1) (6, 6) (5, 5) (3, 3)
2023/03/29 20:38:09 Total elements: 8, elements: (7, 7) (4, 4) (0, 0) (2, 2) (1, 1) (6, 6) (5, 5) (3, 3)
2023/03/29 20:38:10 Total elements: 1, elements: (8, 8)
...
```
When `max_entries` is 1024, I noticed that after adding the 1025th element the total drops to 897 elements. I was unable to test kernel 6.2.8 on our production server.
Correct answer
The LRU hash map does not guarantee exactly the maximum number of items, and the implementation is clearly geared toward providing good performance at entry counts much larger than 8. From a quick look at the kernel code:
- The LRU is split into two parts, an "active list" and an "inactive list", with a task that periodically moves elements between them based on whether they have been accessed recently. It is not a true LRU: items are not moved to the head of the list on every access.
- When the map is full and something must be evicted before a new item can be inserted, the code evicts up to 128 items from the inactive list in one pass. Only if the inactive list is empty does it evict a single item from the active list.
- There is also a per-CPU "local free list" of allocated items waiting to be filled with data. When it runs empty, it tries to pull from the global free list, and if that is also empty, it enters the eviction path. The target size of the local free list is 4 entries.
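The batch-eviction behavior above can be sketched with a tiny simulation. This is a simplified model of my own, not the real `bpf_lru_list.c` logic: it assumes a pure scan workload never promotes anything to the active list, so once the map fills up, one eviction pass removes up to 128 inactive entries at once, which matches the 6.2.8 log where the count drops from 8 back to 1.

```go
package main

import "fmt"

// Simplified two-list LRU model (assumption: illustrative sketch only,
// not the actual kernel implementation).
const (
	maxEntries = 8
	evictBatch = 128 // up to 128 inactive items evicted per pass
)

// simulate inserts keys 0..inserts-1 and returns the keys still resident.
func simulate(inserts int) []int {
	var inactive []int // oldest first; a scan workload never promotes entries
	for key := 0; key < inserts; key++ {
		if len(inactive) >= maxEntries {
			n := evictBatch
			if n > len(inactive) {
				n = len(inactive) // batch is larger than the list: evict everything
			}
			inactive = inactive[n:]
		}
		inactive = append(inactive, key)
	}
	return inactive
}

func main() {
	for i := 7; i <= 10; i++ {
		fmt.Printf("after %d inserts: %d resident %v\n", i, len(simulate(i)), simulate(i))
	}
}
```

With these assumptions, the 9th insert wipes the whole inactive list and leaves a single resident key, just like the 20:38:10 line in the 6.2.8 output.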
So the behavior on 6.2.8 actually looks simple and consistent: presumably all of your keys were on the inactive list (not too surprising for a scan-type access pattern, or perhaps none of them had a chance to be promoted), and they were all evicted in one batch. I am not sure what is happening on 5.16, but it likely has to do with the local free list and all of your updates running on the same CPU.
Basically, I think this data type is not meant to be used the way you are using it, and the bug is in your expectations rather than in the kernel. If you disagree, you will have to take it up with the kernel developers.
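If the approximate capacity is acceptable but the sudden drops are not, one pragmatic mitigation (my own suggestion, not documented kernel guidance) is to over-provision `max_entries` by the eviction batch size, so that even a full eviction pass leaves the intended working set resident:

```go
package main

import "fmt"

// Hypothetical sizing heuristic (an assumption derived from the 128-item
// eviction batch observed above, not an official recommendation).
const evictBatch = 128

// lruMapSize returns a max_entries value for a desired working-set size,
// leaving headroom for one full eviction batch.
func lruMapSize(workingSet uint32) uint32 {
	return workingSet + evictBatch
}

func main() {
	for _, w := range []uint32{8, 1024} {
		fmt.Printf("working set %d -> max_entries %d\n", w, lruMapSize(w))
	}
}
```

This does not change the eviction policy, of course; it only makes it less likely that entries you still care about are swept out in a batch.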