path: root/src/runtime/mpagealloc.go
author    Michael Anthony Knyszek <mknyszek@google.com>  2020-02-10 23:11:30 +0000
committer Michael Knyszek <mknyszek@google.com>  2020-05-08 16:25:31 +0000
commit    dba1205b2fc458829e783bd0a4d1eff7231ae16c (patch)
tree      5b067d2a5efb0fe5689f0985322746f8858bf624 /src/runtime/mpagealloc.go
parent    55ec5182d7b84eb2461c495a55984162b23f3df8 (diff)
download  go-dba1205b2fc458829e783bd0a4d1eff7231ae16c.tar.gz
          go-dba1205b2fc458829e783bd0a4d1eff7231ae16c.zip
runtime: avoid re-scanning scavenged and untouched memory
Currently the scavenger will reset to the top of the heap every GC. This means
that if it scavenges a bunch of memory which doesn't get used again, it keeps
re-scanning that memory on subsequent cycles. This problem is especially bad
when it comes to heap spikes: suppose an application's heap spikes to 2x its
steady-state size. The scavenger will run over the top half of that heap even
if the heap shrinks, for the rest of the application's lifetime.

To fix this, we maintain two numbers: a "free" high watermark, which represents
the highest address freed to the page allocator in that cycle, and a
"scavenged" low watermark, which represents how low of an address the scavenger
got to when scavenging. If the "free" watermark exceeds the "scavenged"
watermark, then we pick the "free" watermark as the new "top of the heap" for
the scavenger when starting the next scavenger cycle. Otherwise, we have the
scavenger pick up where it left off.

With this mechanism, we only ever re-scan scavenged memory if a random page
gets freed very high up in the heap address space while most of the action is
happening in the lower parts. This case should be exceedingly unlikely because
the page reclaimer walks over the heap from low addresses to high addresses,
and we use a first-fit address-ordered allocation policy.

Updates #35788.

Change-Id: Id335603b526ce3a0eb79ef286d1a4e876abc9cab
Reviewed-on: https://go-review.googlesource.com/c/go/+/218997
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: David Chase <drchase@google.com>
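The watermark bookkeeping described above amounts to a small amount of state and two operations: raise the free high watermark on every free, and pick the starting point for the next scavenge cycle. The following is a minimal, standalone sketch of that logic. The type and function names (scavState, markFreed, startCycle) and the pageSize constant are illustrative, not the runtime's actual API; only the scavLWM and freeHWM fields and the maxSearchAddr sentinel correspond to names in the patch.

// Sketch of the scavenger watermark bookkeeping, assuming hypothetical
// names; only scavLWM, freeHWM, and maxSearchAddr come from the patch.
package scavsketch

const pageSize = 8192 // illustrative page size

type scavState struct {
	scavLWM uintptr // lowest address the scavenger reached this cycle
	freeHWM uintptr // highest address freed to the page allocator this cycle
}

// markFreed records that npages pages starting at base were freed,
// raising the free high watermark if the freed range ends above it.
func (s *scavState) markFreed(base, npages uintptr) {
	limit := base + npages*pageSize - 1
	if s.freeHWM < limit {
		s.freeHWM = limit
	}
}

// startCycle picks where the next scavenge cycle should begin and resets
// the watermarks. If memory was freed above where the scavenger stopped,
// restart from that freed memory; otherwise resume where it left off.
func (s *scavState) startCycle(maxSearchAddr uintptr) uintptr {
	start := s.scavLWM
	if s.freeHWM > s.scavLWM {
		start = s.freeHWM
	}
	s.scavLWM = maxSearchAddr // sentinel: nothing scavenged yet this cycle
	s.freeHWM = 0
	return start
}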
Diffstat (limited to 'src/runtime/mpagealloc.go')
-rw-r--r--  src/runtime/mpagealloc.go | 17
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/src/runtime/mpagealloc.go b/src/runtime/mpagealloc.go
index 771cb3a3ba..905d49d751 100644
--- a/src/runtime/mpagealloc.go
+++ b/src/runtime/mpagealloc.go
@@ -270,6 +270,14 @@ type pageAlloc struct {
// released is the amount of memory released this generation.
released uintptr
+
+ // scavLWM is the lowest address that the scavenger reached this
+ // scavenge generation.
+ scavLWM uintptr
+
+ // freeHWM is the highest address of a page that was freed to
+ // the page allocator this scavenge generation.
+ freeHWM uintptr
}
// mheap_.lock. This level of indirection makes it possible
@@ -306,6 +314,9 @@ func (s *pageAlloc) init(mheapLock *mutex, sysStat *uint64) {
// Set the mheapLock.
s.mheapLock = mheapLock
+
+ // Initialize scavenge tracking state.
+ s.scav.scavLWM = maxSearchAddr
}
// compareSearchAddrTo compares an address against s.searchAddr in a linearized
@@ -813,6 +824,11 @@ func (s *pageAlloc) free(base, npages uintptr) {
if s.compareSearchAddrTo(base) < 0 {
s.searchAddr = base
}
+ // Update the free high watermark for the scavenger.
+ limit := base + npages*pageSize - 1
+ if s.scav.freeHWM < limit {
+ s.scav.freeHWM = limit
+ }
if npages == 1 {
// Fast path: we're clearing a single bit, and we know exactly
// where it is, so mark it directly.
@@ -820,7 +836,6 @@ func (s *pageAlloc) free(base, npages uintptr) {
s.chunkOf(i).free1(chunkPageIndex(base))
} else {
// Slow path: we're clearing more bits so we may need to iterate.
- limit := base + npages*pageSize - 1
sc, ec := chunkIndex(base), chunkIndex(limit)
si, ei := chunkPageIndex(base), chunkPageIndex(limit)