path: root/src/runtime/mheap.go
author    Michael Anthony Knyszek <mknyszek@google.com>  2020-02-20 20:58:45 +0000
committer Michael Knyszek <mknyszek@google.com>  2020-04-27 18:19:26 +0000
commit    a13691966ad571ed9e434d591a2d612c51349fd1 (patch)
tree      f2a001fedba55f37871c0eeabf169d85c7003057 /src/runtime/mheap.go
parent    9582b6e8fd1b278e670987c7689920888191b14f (diff)
runtime: add new mcentral implementation
Currently mcentral is implemented as a couple of linked lists of spans protected by a lock. Unfortunately this design leads to significant lock contention.

The span ownership model is also confusing and complicated. In-use spans jump between being owned by multiple sources, generally some combination of a gcSweepBuf, a concurrent sweeper, an mcentral or an mcache.

So first, to address contention, this change replaces those linked lists with gcSweepBufs, which have an atomic fast path. Then, we change up the ownership model: a span may be simultaneously owned only by an mcentral and the page reclaimer. Otherwise, an mcentral (which now consists of sweep bufs), a sweeper, or an mcache is the sole owner of a span at any given time. This dramatically simplifies reasoning about span ownership in the runtime.

As a result of this new ownership model, sweeping is now driven by walking over the mcentrals rather than by a separate global list of spans. Because we no longer have a global list, and we traditionally haven't used the mcentrals for large object spans, we no longer have anywhere to put large objects. So, this change also makes it so that we keep large object spans in the appropriate mcentral lists.

In terms of the static lock ranking, we add the spanSet spine locks in pretty much the same place as the mcentral locks, since they have the potential to be manipulated both on the allocation and sweep paths, like the mcentral locks.

This new implementation is turned on by default via a feature flag called go115NewMCentralImpl.
Benchmark results for 1 KiB allocation throughput (5 runs each):

name \ MiB/s   go113       go114       gotip       gotip+this-patch
AllocKiB-1     1.71k ± 1%  1.68k ± 1%  1.59k ± 2%   1.71k ± 1%
AllocKiB-2     2.46k ± 1%  2.51k ± 1%  2.54k ± 1%   2.93k ± 1%
AllocKiB-4     4.27k ± 1%  4.41k ± 2%  4.33k ± 1%   5.01k ± 2%
AllocKiB-8     4.38k ± 3%  5.24k ± 1%  5.46k ± 1%   8.23k ± 1%
AllocKiB-12    4.38k ± 3%  4.49k ± 1%  5.10k ± 1%  10.04k ± 0%
AllocKiB-16    4.31k ± 1%  4.14k ± 3%  4.22k ± 0%  10.42k ± 0%
AllocKiB-20    4.26k ± 1%  3.98k ± 1%  4.09k ± 1%  10.46k ± 3%
AllocKiB-24    4.20k ± 1%  3.97k ± 1%  4.06k ± 1%  10.74k ± 1%
AllocKiB-28    4.15k ± 0%  4.00k ± 0%  4.20k ± 0%  10.76k ± 1%

Fixes #37487.

Change-Id: I92d47355acacf9af2c41bf080c08a8c1638ba210
Reviewed-on: https://go-review.googlesource.com/c/go/+/221182
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Diffstat (limited to 'src/runtime/mheap.go')
-rw-r--r--  src/runtime/mheap.go  27
1 file changed, 20 insertions(+), 7 deletions(-)
diff --git a/src/runtime/mheap.go b/src/runtime/mheap.go
index 9448748603..b7c5add40c 100644
--- a/src/runtime/mheap.go
+++ b/src/runtime/mheap.go
@@ -44,6 +44,15 @@ const (
// Must be a multiple of the pageInUse bitmap element size and
// must also evenly divide pagesPerArena.
pagesPerReclaimerChunk = 512
+
+ // go115NewMCentralImpl is a feature flag for the new mcentral implementation.
+ //
+ // This flag depends on go115NewMarkrootSpans because the new mcentral
+ // implementation requires that markroot spans no longer rely on mgcsweepbufs.
+ // The definition of this flag helps ensure that if there's a problem with
+ // the new markroot spans implementation and it gets turned off, that the new
+ // mcentral implementation also gets turned off so the runtime isn't broken.
+ go115NewMCentralImpl = true && go115NewMarkrootSpans
)
// Main malloc heap.
@@ -85,6 +94,8 @@ type mheap struct {
// unswept stack and pushes spans that are still in-use on the
// swept stack. Likewise, allocating an in-use span pushes it
// on the swept stack.
+ //
+ // For !go115NewMCentralImpl.
sweepSpans [2]gcSweepBuf
// _ uint32 // align uint64 fields on 32-bit for atomics
@@ -1278,13 +1289,15 @@ HaveSpan:
h.setSpans(s.base(), npages, s)
if !manual {
- // Add to swept in-use list.
- //
- // This publishes the span to root marking.
- //
- // h.sweepgen is guaranteed to only change during STW,
- // and preemption is disabled in the page allocator.
- h.sweepSpans[h.sweepgen/2%2].push(s)
+ if !go115NewMCentralImpl {
+ // Add to swept in-use list.
+ //
+ // This publishes the span to root marking.
+ //
+ // h.sweepgen is guaranteed to only change during STW,
+ // and preemption is disabled in the page allocator.
+ h.sweepSpans[h.sweepgen/2%2].push(s)
+ }
// Mark in-use span in arena page bitmap.
//