
Why does Go use less memory for a slice of length 100k than for an array of length 100k?

王林
Release: 2024-02-09 10:12:09

In Go, a map whose values are slices of length 100k really does use far less memory than a map whose values are arrays of length 100k, but not because slices are inherently cheaper to store. An array value of type [100_000]int is roughly 800 KB of data, and assigning it into a map copies all of that data, which forces the operating system to actually commit the memory being written. A slice value is only a small header (pointer, length, capacity); its backing array is allocated on the heap, but as long as nothing writes to it, the operating system can defer committing the pages behind it. That is why the slice version of the program shows a much smaller resident memory footprint.

Question content

Consider the following code, where I allocate 4000 arrays, each of length 100k:

parentMap := make(map[int][100_000]int)
for i := 0; i < 4000; i++ {
    parentMap[i] = [100_000]int{}
    time.Sleep(3 * time.Millisecond)
}

If I run the program locally and profile its memory usage, it uses more than 2 GB of memory.

Now, if we change the code slightly to use slices (also of length 100k), like this:

parentMap := make(map[int][]int)
for i := 0; i < 4000; i++ {
    parentMap[i] = make([]int, 100_000)
    time.Sleep(3 * time.Millisecond)
}

On my machine, memory peaked at about 73 MB. Why is that?

I thought both fragments would use roughly the same amount of memory, for the following reasons:

  • In both cases, the Go runtime allocates the values of parentMap on the heap. Go does this because, if it allocated these values on the stack, they would all be gone once the current function went out of scope.
  • So the first snippet allocates 4000 arrays of length 100k directly on the heap.
  • The second snippet allocates 4000 slice headers on the heap. Each slice header holds a pointer to its own backing array of length 100k (also on the heap); see the sketch after this list for how small a slice header is compared with the array it points to.
  • In both cases, there are 4000 arrays of length 100k on the heap, so roughly equal amounts of memory should be used either way.
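
As a side illustration (not part of the original question), a minimal sketch like the following shows how small a slice value is compared with an array value of the same length; the printed sizes assume a 64-bit platform where an int is 8 bytes:

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    var arr [100_000]int
    s := make([]int, 100_000)

    // The array value itself is 100_000 ints of data.
    fmt.Println(unsafe.Sizeof(arr)) // 800000 on a 64-bit platform
    // The slice value is only its header: pointer, length, capacity.
    fmt.Println(unsafe.Sizeof(s)) // 24 on a 64-bit platform
}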

I have read https://go.dev/blog/slices-intro, but I can't find implementation details that explain this.

Answer

The version with slices likely benefits from lazy allocation. Nothing ever attempts to write to the data buffers backing these slices, so the operating system is free not to actually allocate memory for those buffers until a write is attempted. (Zero-initialization of the buffers can also be deferred to the operating system, so it does not force the allocation either.)

Meanwhile, the version with arrays requires each array to actually be copied into the map, which means actually performing the writes. Even though the values being written are all zeros, they are still writes, so the operating system must actually allocate memory for the data being written to.

If you attempt to write data into those slices, the slice version should also take up gigabytes of memory. (Writing one value per page of memory should be enough, although it might be simpler to just fill the slices with 1s.)
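
A rough sketch of that experiment (not part of the original answer): the program below touches one int per OS page in every slice, using os.Getpagesize() for the page size and assuming an 8-byte int (64-bit platform). Resident memory would then be observed with an external tool such as top while the program sleeps.

package main

import (
    "os"
    "runtime"
    "time"
)

func main() {
    pageSize := os.Getpagesize() // typically 4096 bytes
    intsPerPage := pageSize / 8  // assumes an 8-byte int (64-bit platform)

    parentMap := make(map[int][]int)
    for i := 0; i < 4000; i++ {
        s := make([]int, 100_000)
        // Touch one element per OS page so that every page backing the
        // slice is actually written to, and therefore committed.
        for j := 0; j < len(s); j += intsPerPage {
            s[j] = 1
        }
        parentMap[i] = s
        time.Sleep(3 * time.Millisecond)
    }

    // Keep the process (and the map) alive so resident memory can be
    // inspected with an external tool such as top or ps.
    time.Sleep(time.Minute)
    runtime.KeepAlive(parentMap)
}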


Source: stackoverflow.com