path: root/src/runtime/mpagealloc_test.go
2020-10-22  runtime: define the AddrRange used for testing in terms of addrRange  (Michael Anthony Knyszek)

Currently the AddrRange used for testing is defined separately from addrRange in the runtime, making it difficult to test it as well as addrRanges. Redefine AddrRange in terms of addrRange instead.

For #40191.

Change-Id: I3aa5b8df3e4c9a3c494b46ab802dd574b2488141
Reviewed-on: https://go-review.googlesource.com/c/go/+/242677
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Austin Clements <austin@google.com>
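
For illustration, a minimal Go sketch of the wrapper pattern this commit describes. The field layout of addrRange here is an assumption; the runtime's real type differs in detail (it stores offset addresses).

    // addrRange stands in for the runtime's internal type; the exact
    // fields are an assumption made for this sketch.
    type addrRange struct {
        base, limit uintptr // [base, limit)
    }

    // AddrRange is the exported test wrapper, defined in terms of
    // addrRange rather than duplicating its fields.
    type AddrRange struct {
        addrRange
    }

    func MakeAddrRange(base, limit uintptr) AddrRange {
        return AddrRange{addrRange{base: base, limit: limit}}
    }

    func (a AddrRange) Base() uintptr  { return a.addrRange.base }
    func (a AddrRange) Limit() uintptr { return a.addrRange.limit }
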
2020-07-31  runtime: validate candidate searchAddr in pageAlloc.find  (Michael Anthony Knyszek)

Currently pageAlloc.find attempts to find a better estimate for the first free page in the heap, even if the space it's looking for isn't necessarily going to be the first free page in the heap (e.g. if npages >= 2). However, in doing so it has the potential to return a searchAddr candidate that doesn't actually correspond to mapped memory, but this candidate might still be adopted. As a result, pageAlloc.alloc's fast path may look at unmapped summary memory and segfault.

This case is rare on most operating systems since the heap is kept fairly contiguous, so the chance that the candidate searchAddr discovered is unmapped is fairly low. Even so, this is totally possible and outside the user's control when it happens (in fact, it's likely to happen consistently for a given user on a given system).

Fix this problem by ensuring that our candidate always points to mapped memory. We do this by looking at mheap's arenas structure first. If it turns out our candidate doesn't correspond to mapped memory, then we look at inUse to round up the searchAddr to the next mapped address.

While we're here, clean up some documentation related to searchAddr.

Fixes #40191.

Change-Id: I759efec78987e4a8fde466ae45aabbaa3d9d4214
Reviewed-on: https://go-review.googlesource.com/c/go/+/242680
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
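
A hedged sketch of the rounding step the fix describes, reusing the illustrative addrRange type from the first sketch. The function name and the flat []addrRange are assumptions; the runtime's inUse structure is more involved.

    // roundUpToInUse snaps candidate to the next address covered by the
    // sorted in-use ranges, so a candidate searchAddr never points at
    // unmapped summary memory. Illustrative only, not the runtime's API.
    func roundUpToInUse(candidate uintptr, inUse []addrRange) (uintptr, bool) {
        for _, r := range inUse {
            if candidate < r.base {
                return r.base, true // fell in a gap; snap forward
            }
            if candidate < r.limit {
                return candidate, true // already points at mapped memory
            }
        }
        return 0, false // past the end of all mapped memory
    }
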
2020-01-28  runtime: ensure that searchAddr always refers to inUse memory  (Michael Anthony Knyszek)

This change formalizes an assumption made by the page allocator, which is that (*pageAlloc).searchAddr should never refer to memory that is not represented by (*pageAlloc).inUse. The portion of address space covered by (*pageAlloc).inUse reflects the parts of the summary arrays which are guaranteed to be mapped, and so looking at any summary which is not reflected there may cause a segfault. In fact, this can happen today.

This change thus also removes a micro-optimization which is the only case which may cause (*pageAlloc).searchAddr to point outside of any region covered by (*pageAlloc).inUse, and adds a test verifying that the current segfault can no longer occur.

Change-Id: I98b534f0ffba8656d3bd6d782f6fc22549ddf1c2
Reviewed-on: https://go-review.googlesource.com/c/go/+/216697
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
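
The invariant being formalized can be written as a small predicate. A sketch with illustrative names, reusing the addrRange type from the first sketch:

    // searchAddrValid reports whether searchAddr lies within some range
    // tracked by inUse, i.e. the invariant this change formalizes.
    // Illustrative only; the runtime checks this differently.
    func searchAddrValid(searchAddr uintptr, inUse []addrRange) bool {
        for _, r := range inUse {
            if r.base <= searchAddr && searchAddr < r.limit {
                return true
            }
        }
        return false
    }
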
2019-12-26  runtime: disable pageAlloc tests on OpenBSD in short mode  (Michael Anthony Knyszek)

This change disables pageAlloc tests on OpenBSD in short mode because pageAlloc holds relatively large virtual memory reservations and we make two during the pageAlloc tests. The runtime may also be carrying one such reservation, making the virtual memory requirement for testing the Go runtime three times as much as just running a Go binary.

This causes problems for folks who just want to build and test Go (all.bash) on OpenBSD but either don't have machines with at least 4ish GiB of RAM (per-process virtual memory limits are capped at some constant factor times the amount of physical memory) or their per-process virtual memory limits are low for other reasons.

Fixes #36210.

Change-Id: I8d89cfde448d4cd2fefff4ad6ffed90de63dd527
Reviewed-on: https://go-review.googlesource.com/c/go/+/212177
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Bryan C. Mills <bcmills@google.com>
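
A minimal sketch of the guard this implies, using the standard testing idiom; the helper name is an assumption and the real test file may express it differently.

    import (
        "runtime"
        "testing"
    )

    // skipIfVirtualMemoryLimited is an illustrative helper, not the
    // test file's actual code.
    func skipIfVirtualMemoryLimited(t *testing.T) {
        // pageAlloc tests make large address space reservations, which
        // can exceed OpenBSD's per-process virtual memory limits.
        if runtime.GOOS == "openbsd" && testing.Short() {
            t.Skip("skipping because virtual memory is limited; see #36210")
        }
    }
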
2019-12-13  runtime: fix off-by-1 error on address ranges  (Cherry Zhang)

When growing the address ranges, the new length is the old length + 1.

Fixes #36113.

Change-Id: I1b425f78e473cfa3cbdfe6113e166663f41fc9f3
Reviewed-on: https://go-review.googlesource.com/c/go/+/211157
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
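
In slice terms the bug class reads like this; an illustrative reconstruction (not the exact runtime code), reusing the addrRange type from the first sketch.

    // grow makes room for one more range: the new length is the old
    // length + 1. The off-by-one was sizing the new backing store with
    // the old length, dropping the slot for the new element.
    func grow(ranges []addrRange) []addrRange {
        grown := make([]addrRange, len(ranges)+1) // not len(ranges)
        copy(grown, ranges)
        return grown
    }
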
2019-12-11  runtime: use inUse ranges to map in summary memory only as needed  (Michael Anthony Knyszek)

Prior to this change, if the heap was very discontiguous (such as in TestArenaCollision) it's possible we could map a large amount of memory as R/W and commit it. We would use only the start and end to track what should be mapped, and we would extend that mapping as needed to accommodate a potentially fragmented address space.

After this change, we only map exactly the part of the summary arrays that we need by using the inUse ranges from the previous change. This reduces the GCSys footprint of TestArenaCollision from 300 MiB to 18 MiB.

Because summaries are no longer mapped contiguously, this means the scavenger can no longer iterate directly. This change also updates the scavenger to borrow ranges out of inUse and iterate over only the parts of the heap which are actually currently in use. This is both an optimization and necessary for correctness.

Fixes #35514.

Change-Id: I96bf0c73ed0d2d89a00202ece7b9d089a53bac90
Reviewed-on: https://go-review.googlesource.com/c/go/+/207758
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
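
A hedged sketch of the core translation involved: turning an in-use address range into the slice of the summary array that must be mapped, rounded out to whole summaries. Both parameters and the function name are assumptions; the runtime's actual summary layout is multi-level.

    // summarySpan returns byte offsets [lo, hi) within a summary array
    // that cover the address range r. Illustrative only.
    func summarySpan(r addrRange, addrsPerSummary, bytesPerSummary uintptr) (lo, hi uintptr) {
        lo = r.base / addrsPerSummary * bytesPerSummary                          // round base down
        hi = (r.limit + addrsPerSummary - 1) / addrsPerSummary * bytesPerSummary // round limit up
        return lo, hi
    }

Mapping just [lo, hi) R/W for each newly in-use range, instead of extending one contiguous mapping from the lowest to the highest heap address, is what shrinks the footprint on a fragmented heap.
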
2019-12-11  runtime: track ranges of address space which are owned by the heap  (Michael Anthony Knyszek)

This change adds a new inUse field to the allocator which tracks ranges of addresses that are owned by the heap. It is updated on each heap growth.

These ranges are tracked in an array which is kept sorted. In practice this array shouldn't exceed its initial allocation except in rare cases and thus should be small (ideally exactly 1 element in size).

In a hypothetical worst-case scenario wherein we have a 1 TiB heap and 4 MiB arenas (note that the address ranges will never be at a smaller granularity than an arena, since arenas are always allocated contiguously), inUse would use at most 4 MiB of memory if the heap mappings were completely discontiguous (highly unlikely) with an additional 2 MiB leaked from previous allocations. Furthermore, the copies that are done to keep the inUse array sorted will copy at most 4 MiB of memory in such a scenario, which, assuming a conservative copying rate of 5 GiB/s, amounts to about 800µs.

However, note that in practice:
1) Most 64-bit platforms have 64 MiB arenas.
2) The copies should incur little-to-no page faults, meaning a copy rate closer to 25-50 GiB/s is expected.
3) Go heaps are almost always mostly contiguous.

Updates #35514.

Change-Id: I3ad07f1c2b5b9340acf59ecc3b9ae09e884814fe
Reviewed-on: https://go-review.googlesource.com/c/go/+/207757
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Austin Clements <austin@google.com>
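
The worst-case arithmetic above works out as: 1 TiB / 4 MiB = 262144 possible arena-granularity ranges, at 16 bytes per range (two 8-byte addresses), giving 4 MiB. A sketch of the sorted insertion the commit describes (illustrative names, reusing the addrRange type from the first sketch; the runtime additionally coalesces adjacent ranges, which this omits):

    import "sort"

    // insertRange inserts r into ranges while keeping the array sorted
    // by base address, shifting later elements up by one.
    func insertRange(ranges []addrRange, r addrRange) []addrRange {
        i := sort.Search(len(ranges), func(i int) bool { return ranges[i].base > r.base })
        ranges = append(ranges, addrRange{}) // grow by one slot
        copy(ranges[i+1:], ranges[i:])       // shift the tail up
        ranges[i] = r
        return ranges
    }
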
2019-12-03  runtime: convert page allocator bitmap to sparse array  (Michael Anthony Knyszek)

Currently the page allocator bitmap is implemented as a single giant memory mapping which is reserved at init time and committed as needed. This causes problems on systems that don't handle large uncommitted mappings well, or institute low virtual address space defaults as a memory limiting mechanism.

This change modifies the implementation of the page allocator bitmap away from a directly-mapped set of bytes to a sparse array in the same vein as mheap.arenas. This will hurt performance a little but the biggest gains are from the lockless allocation possible with the page allocator, so the impact of this extra layer of indirection should be minimal. In fact, this is exactly what we see: https://perf.golang.org/search?q=upload:20191125.5

This reduces the amount of mapped (PROT_NONE) memory needed on systems with 48-bit address spaces to ~600 MiB down from almost 9 GiB. The bulk of this remaining memory is used by the summaries.

Go processes with 32-bit address spaces now always commit to 128 KiB of memory for the bitmap. Previously it would only commit the pages in the bitmap which represented the range of addresses (lowest address to highest address, even if there are unused regions in that range) used by the heap.

Updates #35568.
Updates #35451.

Change-Id: I0ff10380156568642b80c366001eefd0a4e6c762
Reviewed-on: https://go-review.googlesource.com/c/go/+/207497
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
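
A map-based toy illustrating the sparse idea. The runtime actually uses a two-level array like mheap.arenas, and the constants below are assumptions (4 MiB chunks of 8 KiB pages, one bit per page):

    const (
        chunkShift       = 22 // each chunk covers 1<<22 B = 4 MiB of address space (illustrative)
        chunkBitmapBytes = 64 // 512 pages of 8 KiB per chunk, one bit per page (illustrative)
    )

    type sparseBitmap struct {
        chunks map[uintptr]*[chunkBitmapBytes]byte
    }

    // chunkFor commits a bitmap chunk only when its slice of the address
    // space is first touched, instead of reserving one giant mapping up
    // front at init time.
    func (s *sparseBitmap) chunkFor(addr uintptr) *[chunkBitmapBytes]byte {
        i := addr >> chunkShift
        if c, ok := s.chunks[i]; ok {
            return c
        }
        if s.chunks == nil {
            s.chunks = make(map[uintptr]*[chunkBitmapBytes]byte)
        }
        c := new([chunkBitmapBytes]byte)
        s.chunks[i] = c
        return c
    }
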
2019-11-10  runtime: make the test addresses for pageAlloc smaller on 32-bit  (Michael Anthony Knyszek)

This change makes the test addresses start at 1 GiB instead of 2 GiB to support mips and mipsle, which only have 31-bit address spaces.

It also changes some tests to use smaller offsets for the chunk index to avoid jumping too far ahead in the address space to support 31-bit address spaces. The tests don't require such large jumps for what they're testing anyway.

Updates #35112.
Fixes #35440.

Change-Id: Ic68ff2b0a1f10ef37ac00d4bb5b910ddcdc76f2e
Reviewed-on: https://go-review.googlesource.com/c/go/+/205938
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
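
A sketch of the kind of base constant involved; the values below follow the commit's description but are not the test file's exact constants.

    const (
        pageShift        = 13                           // 8 KiB pages (illustrative)
        pallocChunkPages = 512                          // pages per chunk (illustrative)
        pallocChunkBytes = pallocChunkPages << pageShift // 4 MiB per chunk
        testBaseAddr     = 1 << 30                      // 1 GiB: fits 31-bit mips/mipsle
        baseChunkIdx     = testBaseAddr / pallocChunkBytes
    )
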
2019-11-07  runtime: count scavenged bits for new allocation for new page allocator  (Michael Anthony Knyszek)

This change makes it so that the new page allocator returns the number of pages that are scavenged in a new allocation so that mheap can update memstats appropriately.

The accounting could be embedded into pageAlloc, but that would make the new allocator more difficult to test.

Updates #35112.

Change-Id: I0f94f563d7af2458e6d534f589d2e7dd6af26d12
Reviewed-on: https://go-review.googlesource.com/c/go/+/195698
Reviewed-by: Austin Clements <austin@google.com>
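
A sketch of the API shape this implies: the allocator reports how much of the returned region was scavenged, and the caller owns the stats update. Every name here is an assumption.

    type allocResult struct {
        base           uintptr // start of the allocated region
        scavengedPages uintptr // pages in the region that were scavenged
    }

    const pageSize = 8192 // illustrative

    // accountAlloc applies the stats update outside the allocator,
    // keeping the allocator itself free of memstats and easier to test.
    func accountAlloc(r allocResult, heapReleased *uint64) {
        // Scavenged pages are being handed out again, so they no longer
        // count as released to the OS.
        *heapReleased -= uint64(r.scavengedPages * pageSize)
    }
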
2019-11-07  runtime: add scavenging code for new page allocator  (Michael Anthony Knyszek)

This change adds a scavenger for the new page allocator along with tests. The scavenger walks over the heap backwards once per GC, looking for memory to scavenge. It walks across the heap without any lock held, searching optimistically. If it finds what appears to be a scavenging candidate it acquires the heap lock and attempts to verify it. Upon verification it then scavenges.

Notably, unlike the old scavenger, it doesn't show any preference for huge pages and instead follows a more strict last-page-first policy.

Updates #35112.

Change-Id: I0621ef73c999a471843eab2d1307ae5679dd18d6
Reviewed-on: https://go-review.googlesource.com/c/go/+/195697
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
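
A hedged sketch of the optimistic-then-verify control flow described above, reusing the addrRange type from the first sketch; the interface and every method on it are assumptions made for illustration.

    // heapLike stands in for the runtime's real heap type.
    type heapLike interface {
        findCandidate() (addrRange, bool)       // lock-free, back-to-front search
        lock()
        unlock()
        stillFreeAndUnscavenged(addrRange) bool // re-check under the lock
        scavenge(addrRange)
    }

    // scavengeOne searches optimistically without the lock, then
    // re-verifies its candidate under the lock before scavenging.
    func scavengeOne(p heapLike) bool {
        cand, ok := p.findCandidate()
        if !ok {
            return false // nothing left to scavenge this cycle
        }
        p.lock()
        defer p.unlock()
        if !p.stillFreeAndUnscavenged(cand) {
            return true // lost a race with an allocation; keep walking
        }
        p.scavenge(cand) // strict last-page-first, no huge-page preference
        return true
    }
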
2019-11-07  runtime: add new page allocator core  (Michael Anthony Knyszek)

This change adds a new bitmap-based allocator to the runtime with tests. It does not yet integrate the page allocator into the runtime and thus this change is almost purely additive.

Updates #35112.

Change-Id: Ic3d024c28abee8be8797d3918116a80f901cc2bf
Reviewed-on: https://go-review.googlesource.com/c/go/+/190622
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
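
At its core, a bitmap page allocator answers one question: where is a run of npages free pages? A toy version, assuming one bit per page (1 = allocated); the real allocator layers radix-tree summaries on top to avoid linear scans.

    // findFree returns the index of the first run of npages consecutive
    // free pages in the bitmap, scanning front to back.
    func findFree(bitmap []uint64, npages int) (start int, ok bool) {
        run := 0
        for i := 0; i < len(bitmap)*64; i++ {
            if bitmap[i/64]&(1<<(i%64)) != 0 {
                run = 0 // page in use; restart the run
                continue
            }
            run++
            if run == npages {
                return i - npages + 1, true
            }
        }
        return 0, false
    }
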