author     Michael Anthony Knyszek <mknyszek@google.com>  2019-01-25 17:40:40 +0000
committer  Michael Knyszek <mknyszek@google.com>          2019-01-31 16:54:41 +0000
commit     8e093e7a1cd8a092f23717cb8f34bca489a3eee5 (patch)
tree       6a27b71104b8a3c2b179f579da4c234feaa27fc2
parent     f2a416b90ac68596ea05b97cefa8c72e7416e98f (diff)
download   go-8e093e7a1cd8a092f23717cb8f34bca489a3eee5.tar.gz
           go-8e093e7a1cd8a092f23717cb8f34bca489a3eee5.zip
runtime: scavenge memory upon allocating from scavenged memory
Because scavenged and unscavenged spans no longer coalesce, memory that
is freed no longer has a high likelihood of being re-scavenged. As a
result, if an application is allocating at a fast rate, it may work fast
enough to undo all the scavenging work performed by the runtime's
current scavenging mechanisms. This behavior is exacerbated by the
global best-fit allocation policy the runtime uses, since scavenged
spans are just as likely to be chosen as unscavenged spans on average.

To remedy that, we treat each allocation of scavenged space as a heap
growth, and scavenge other memory to make up for the allocation.

This change makes performance of the runtime slightly worse, as now
we're scavenging more often during allocation. The regression is
particularly obvious with the garbage benchmark (3%) but most of the
Go1 benchmarks are within the margin of noise. A follow-up change
should help.

Garbage: https://perf.golang.org/search?q=upload:20190131.3
Go1:     https://perf.golang.org/search?q=upload:20190131.2

Updates #14045.

Change-Id: I44a7e6586eca33b5f97b6d40418db53a8a7ae715
Reviewed-on: https://go-review.googlesource.com/c/159500
Reviewed-by: Austin Clements <austin@google.com>
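To make the compensation policy concrete, below is a minimal,
self-contained Go sketch of the idea, not the runtime's actual code:
span, heap, allocate, pageSize, and releasedBytes are simplified
stand-ins invented for illustration, and only scavengeLargest
corresponds to a function the patch actually calls. The invariant being
modeled: allocating n bytes out of scavenged memory grows RSS by n, so
the allocator immediately scavenges n bytes from the largest free spans
to make up for it.

// Toy model: every byte allocated from scavenged memory is "paid for"
// by returning an equal number of bytes to the OS elsewhere.
package main

import "fmt"

const pageSize = 8192 // bytes per page, matching the runtime's _PageShift of 13

type span struct {
	npages    uintptr
	scavenged bool // pages have been returned to the OS
}

type heap struct {
	free          []*span // free spans, assumed sorted largest first
	releasedBytes uintptr // total bytes currently returned to the OS
}

// scavengeLargest returns up to nbytes of free memory to the OS,
// walking free spans from largest to smallest (a simplification of
// the heuristic the real scavengeLargest uses).
func (h *heap) scavengeLargest(nbytes uintptr) {
	var released uintptr
	for _, t := range h.free {
		if released >= nbytes {
			break
		}
		if t.scavenged {
			continue
		}
		t.scavenged = true
		released += t.npages * pageSize
	}
	h.releasedBytes += released
}

// allocate models the HaveSpan path in the patch: if the span being
// handed out was scavenged, its pages are about to be faulted back in,
// so scavenge an equivalent amount of other free memory.
func (h *heap) allocate(s *span) {
	if s.scavenged {
		s.scavenged = false
		h.releasedBytes -= s.npages * pageSize
		h.scavengeLargest(s.npages * pageSize) // compensate for the RSS growth
	}
}

func main() {
	h := &heap{free: []*span{{npages: 32}, {npages: 8}}}
	s := &span{npages: 16, scavenged: true}
	h.releasedBytes = s.npages * pageSize

	h.allocate(s)
	fmt.Printf("released bytes after compensation: %d\n", h.releasedBytes)
	// The 16 pages faulted back in by the allocation were offset by
	// scavenging the 32-page free span, so releasedBytes did not shrink.
}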
-rw-r--r--  src/runtime/mheap.go  10
1 file changed, 10 insertions(+), 0 deletions(-)
diff --git a/src/runtime/mheap.go b/src/runtime/mheap.go
index 1bf7bbecc0..055dfeed99 100644
--- a/src/runtime/mheap.go
+++ b/src/runtime/mheap.go
@@ -1190,6 +1190,16 @@ HaveSpan:
 		// heap_released since we already did so earlier.
 		sysUsed(unsafe.Pointer(s.base()), s.npages<<_PageShift)
 		s.scavenged = false
+
+		// Since we allocated out of a scavenged span, we just
+		// grew the RSS. Mitigate this by scavenging enough free
+		// space to make up for it.
+		//
+		// Also, scavengeLargest may cause coalescing, so prevent
+		// coalescing with s by temporarily changing its state.
+		s.state = mSpanManual
+		h.scavengeLargest(s.npages * pageSize)
+		s.state = mSpanFree
 	}
 	s.unusedsince = 0
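A note on the temporary state flip, which is a reading of the comment in
the hunk rather than something the patch states outright: the runtime's
coalescing logic only merges neighboring spans whose state is mSpanFree,
so marking s as mSpanManual for the duration of the scavengeLargest call
makes it invisible to any coalescing that scavenging triggers; the state
is restored immediately afterwards. Without this guard, a free neighbor
could be merged into s while s is in the middle of being handed out to
the caller.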