path: root/src/runtime/mgc.go
2022-04-29[dev.boringcrypto] crypto/ecdsa, crypto/rsa: use boring.CacheRuss Cox
In the original BoringCrypto port, ecdsa and rsa's public and private keys added a 'boring unsafe.Pointer' field to cache the BoringCrypto form of the key. This led to problems with code that “knew” the layout of those structs and in particular that they had no unexported fields. In response, as an awful kludge, I changed the compiler to pretend that field did not exist when laying out reflect data. Because we want to merge BoringCrypto in the main tree, we need a different solution. Using boring.Cache is that solution. For #51940. Change-Id: Ideb2b40b599a1dc223082eda35a5ea9abcc01e30 Reviewed-on: https://go-review.googlesource.com/c/go/+/395883 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Roland Shoemaker <roland@golang.org>
2022-04-29[dev.boringcrypto] crypto/internal/boring: add GC-aware cacheRuss Cox
In the original BoringCrypto port, ecdsa and rsa's public and private keys added a 'boring unsafe.Pointer' field to cache the BoringCrypto form of the key. This led to problems with code that “knew” the layout of those structs and in particular that they had no unexported fields. In response, as an awful kludge, I changed the compiler to pretend that field did not exist when laying out reflect data. Because we want to merge BoringCrypto in the main tree, we need a different solution. The different solution is this CL's boring.Cache, which is a concurrent, GC-aware map from unsafe.Pointer to unsafe.Pointer (if generics were farther along we could use them nicely here, but I am afraid of breaking tools that aren't ready to see generics in the standard library yet). More complex approaches are possible, but a simple, fixed-size hash table is easy to make concurrent and should be fine. For #51940. Change-Id: I44062a8defbd87b705a787cffc64c6a9d0132785 Reviewed-on: https://go-review.googlesource.com/c/go/+/395882 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Cherry Mui <cherryyz@google.com>
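For illustration, a minimal sketch of such a fixed-size, concurrent pointer-to-pointer map might look like the following. The bucket count, the hash, and the use of the later generic atomic.Pointer type are assumptions of this sketch, not the actual boring.Cache; the real cache also registers itself with the GC so entries can be dropped at each collection.

package cache

import (
    "sync/atomic"
    "unsafe"
)

// entry is one key/value pair on a bucket's list.
type entry struct {
    k, v unsafe.Pointer
    next *entry
}

// Cache is a fixed-size, concurrent map from unsafe.Pointer to
// unsafe.Pointer. Readers and writers never block each other.
type Cache struct {
    buckets [64]atomic.Pointer[entry] // hypothetical bucket count
}

func (c *Cache) bucket(k unsafe.Pointer) *atomic.Pointer[entry] {
    h := uintptr(k) >> 4 // drop alignment bits as a crude hash
    return &c.buckets[h%uintptr(len(c.buckets))]
}

// Get returns the value cached for k, or nil if there is none.
func (c *Cache) Get(k unsafe.Pointer) unsafe.Pointer {
    for e := c.bucket(k).Load(); e != nil; e = e.next {
        if e.k == k {
            return e.v
        }
    }
    return nil
}

// Put adds a mapping from k to v by prepending to the bucket's list.
func (c *Cache) Put(k, v unsafe.Pointer) {
    b := c.bucket(k)
    for {
        head := b.Load()
        e := &entry{k: k, v: v, next: head}
        if b.CompareAndSwap(head, e) {
            return
        }
    }
}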
2022-04-26runtime: reduce max idle mark workers during periodic GC cyclesMichael Anthony Knyszek
This change reduces the maximum number of idle mark workers during periodic (currently every 2 minutes) GC cycles to 1.

Idle mark workers soak up all available and unused Ps, up to GOMAXPROCS. While this provides some throughput and latency benefit in general, it can cause what appear to be massive CPU utilization spikes in otherwise idle applications. This is mostly an issue for *very* idle applications, ones idle enough to trigger periodic GC cycles. This spike also tends to interact poorly with auto-scaling systems, as the system might assume the load average is very low and suddenly see a massive burst in activity.

The result of this change is not to bring down this 100% (of GOMAXPROCS) CPU utilization spike to 0%, but rather

    min(25% + 1/GOMAXPROCS*100%, 100%)

Idle mark workers also do incur a small latency penalty as they must be descheduled for other work that might pop up. Luckily the runtime is pretty good about getting idle mark workers off of Ps, so in general the latency benefit from shorter GC cycles outweighs this cost. But, the cost is still non-zero and may be more significant in idle applications that aren't invoking assists and write barriers quite as often.

We can't completely eliminate idle mark workers because they're currently necessary for GC progress in some circumstances. Namely, they're critical for progress when all we have is fractional workers. If a fractional worker meets its quota, and all user goroutines are blocked directly or indirectly on a GC cycle (via runtime.GOMAXPROCS, or runtime.GC), the program may deadlock without GC workers, since the fractional worker will go to sleep with nothing to wake it.

Fixes #37116.
For #44163.

Change-Id: Ib74793bb6b88d1765c52d445831310b0d11ef423
Reviewed-on: https://go-review.googlesource.com/c/go/+/393394
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
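To make the bound above concrete (illustrative arithmetic only): with GOMAXPROCS=16, an otherwise idle program previously saw a periodic GC cycle briefly occupy all 16 Ps (100%). With this change the spike is capped at min(25% + 1/16*100%, 100%) = 31.25% of GOMAXPROCS, i.e. the dedicated mark workers plus at most one idle worker.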
2022-04-11all: gofmt main repoRuss Cox
[This CL is part of a sequence implementing the proposal #51082. The design doc is at https://go.dev/s/godocfmt-design.] Run the updated gofmt, which reformats doc comments, on the main repository. Vendored files are excluded. For #51082. Change-Id: I7332f099b60f716295fb34719c98c04eb1a85407 Reviewed-on: https://go-review.googlesource.com/c/go/+/384268 Reviewed-by: Jonathan Amsterdam <jba@google.com> Reviewed-by: Ian Lance Taylor <iant@golang.org>
2022-03-31runtime: remove old pacer and the PacerRedesign goexperimentMichael Anthony Knyszek
Now that Go 1.18 has been released, remove the old pacer. Change-Id: Ie7a7596d67f3fc25d3f375a08fc75eafac2eb834 Reviewed-on: https://go-review.googlesource.com/c/go/+/393396 Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Austin Clements <austin@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
2021-12-19runtime: mgc.go typo fix: becuse -> becauseclamyang
Change-Id: I5019d5b9520e47a99a6136f615b6c9468073cc3c GitHub-Last-Rev: 1a5392925a0c4e9b2915620fee3efa79ae14af20 GitHub-Pull-Request: golang/go#50239 Reviewed-on: https://go-review.googlesource.com/c/go/+/373055 Reviewed-by: Ian Lance Taylor <iant@golang.org> Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gopher Robot <gobot@golang.org> Trust: Brad Fitzpatrick <bradfitz@golang.org>
2021-12-01runtime: fix racy allgs access on weak memory architecturesAustin Clements
Currently, markroot is very clever about accessing the allgs slice to find stack roots. Unfortunately, on weak memory architectures, it's a little too clever and can sometimes read a nil g, causing a fatal panic. Specifically, gcMarkRootPrepare snapshots the length of allgs during STW and then markroot accesses allgs up to this length during concurrent marking. During concurrent marking, allgadd can append to allgs *without synchronizing with markroot*, but the argument is that the markroot access should be safe because allgs only grows monotonically and existing entries in allgs never change. This reasoning is insufficient on weak memory architectures. Suppose thread 1 calls allgadd during concurrent marking and that allgs is already at capacity. On thread 1, append will allocate a new slice that initially consists of all nils, then copy the old backing store to the new slice (write A), then allgadd will publish the new slice to the allgs global (write B). Meanwhile, on thread 2, markroot reads the allgs slice base pointer (read A), computes an offset from that base pointer, and reads the value at that offset (read B). On a weak memory machine, thread 2 can observe write B *before* write A. If the order of events from thread 2's perspective is write B, read A, read B, write A, then markroot on thread 2 will read a nil g and then panic. Fix this by taking a snapshot of the allgs slice header in gcMarkRootPrepare while the world is stopped and using that snapshot as the list of stack roots in markroot. This eliminates all read/write concurrency around the access in markroot. Alternatively, we could make markroot use the atomicAllGs API to atomically access the allgs list, but in my opinion it's much less subtle to just eliminate all of the interesting concurrency around the allgs access. Fixes #49686. Fixes #48845. Fixes #43824. (These are all just different paths to the same ultimate issue.) Change-Id: I472b4934a637bbe88c8a080a280aa30212acf984 Reviewed-on: https://go-review.googlesource.com/c/go/+/368134 Trust: Austin Clements <austin@google.com> Trust: Bryan C. Mills <bcmills@google.com> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Knyszek <mknyszek@google.com> Reviewed-by: Cherry Mui <cherryyz@google.com>
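The before/after access pattern can be sketched as follows (hypothetical names, with *int standing in for the runtime's *g; an illustration of the idea rather than the actual runtime code):

package sketch

import "sync"

var (
    allglock sync.Mutex
    allgs    []*int // stand-in for the global []*g
)

// Before: gcMarkRootPrepare snapshots only the length; markroot later
// re-reads the global slice base pointer concurrently with allgadd.
// On a weak-memory machine that read can observe the new backing array
// before its contents, i.e. a nil element.
var allglen int

func prepareOld()            { allglen = len(allgs) }
func markrootOld(i int) *int { return allgs[i] }

// After: snapshot the entire slice header (pointer, len, cap) while the
// world is stopped; markroot only ever indexes the snapshot, so there is
// no read/write concurrency on the slice header at all.
var stackRoots []*int

func prepareNew()            { stackRoots = allgs }
func markrootNew(i int) *int { return stackRoots[i] }

// allgadd may grow allgs concurrently with marking.
func allgadd(gp *int) {
    allglock.Lock()
    allgs = append(allgs, gp) // may reallocate the backing array
    allglock.Unlock()
}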
2021-11-05runtime: retype gcControllerState.gcPercent as atomic.Int32Michael Anthony Knyszek
[git-generate]
cd src/runtime
mv export_test.go export.go
GOROOT=$(dirname $(dirname $PWD)) rf '
  add gcControllerState.gcPercent \
    // Initialized from GOGC. GOGC=off means no GC. \
    gcPercent_ atomic.Int32
  ex {
    import "runtime/internal/atomic"

    var t gcControllerState
    var v, w int32
    var d int32

    t.gcPercent -> t.gcPercent_.Load()
    t.gcPercent = v -> t.gcPercent_.Store(v)
    atomic.Loadint32(&t.gcPercent) -> t.gcPercent_.Load()
    atomic.Storeint32(&t.gcPercent, v) -> t.gcPercent_.Store(v)
    atomic.Xaddint32(&t.gcPercent, d) -> t.gcPercent_.Add(d)
    atomic.Casint32(&t.gcPercent, v, w) -> t.gcPercent_.CompareAndSwap(v, w)
    atomic.Xchgint32(&t.gcPercent, v) -> t.gcPercent_.Swap(v)
  }
  rm gcControllerState.gcPercent
  mv gcControllerState.gcPercent_ gcControllerState.gcPercent
'
mv export.go export_test.go

Change-Id: I1aae34a3f782d096c6b6233bbf7986e67ce9c5f9
Reviewed-on: https://go-review.googlesource.com/c/go/+/357794
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
2021-11-04runtime: implement GC pacer redesignMichael Anthony Knyszek
This change implements the GC pacer redesign outlined in #44167 and the accompanying design document, behind a GOEXPERIMENT flag that is on by default. In addition to adding the new pacer, this CL also includes code to track and account for stack and globals scan work in the pacer and in the assist credit system. The new pacer also deviates slightly from the document in that it increases the bound on the minimum trigger ratio from 0.6 (scaled by GOGC) to 0.7. The logic behind this change is that the new pacer much more consistently hits the goal (good!) leading to slightly less frequent GC cycles, but _longer_ ones (in this case, bad!). It turns out that the cost of having the GC on hurts throughput significantly (per byte of memory used), though tail latencies can improve by up to 10%! To be conservative, this change moves the value to 0.7 where there is a small improvement to both throughput and latency, given the memory use. Because the new pacer accounts for the two most significant sources of scan work after heap objects, it is now also safer to reduce the minimum heap size without leading to very poor amortization. This change thus decreases the minimum heap size to 512 KiB, which corresponds to the fact that the runtime has around 200 KiB of scannable globals always there, up-front, providing a baseline. Benchmark results: https://perf.golang.org/search?q=upload:20211001.6 tile38's KNearest benchmark shows a memory increase, but throughput (and latency) per byte of memory used is better. gopher-lua showed an increase in both CPU time and memory usage, but subsequent attempts to reproduce this behavior are inconsistent. Sometimes the overall performance is better, sometimes it's worse. This suggests that the benchmark is fairly noisy in a way not captured by the benchmarking framework itself. biogo-igor is the only benchmark to show a significant performance loss. This benchmark exhibits a very high GC rate, with relatively little work to do in each cycle. The idle mark workers are quite active. In the new pacer, mark phases are longer, mark assists are fewer, and some of that time in mark assists has shifted to idle workers. Linux perf indicates that the difference in CPU time can be mostly attributed to write-barrier slow path related calls, which in turn indicates that the write barrier being on for longer is the primary culprit. This also explains the memory increase, as a longer mark phase leads to more memory allocated black, surviving an extra cycle and contributing to the heap goal. For #44167. Change-Id: I8ac7cfef7d593e4a642c9b2be43fb3591a8ec9c4 Reviewed-on: https://go-review.googlesource.com/c/go/+/309869 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
2021-10-29runtime: pass nanotime and gomaxprocs into startCycle and endCycle explicitlyMichael Knyszek
This is to facilitate testing of the pacer, since otherwise this is accessing global state, which is impossible to stub out properly. For #44167. Change-Id: I52c3b51fc0ffff38e3bbe534bd66e5761c0003a8 Reviewed-on: https://go-review.googlesource.com/c/go/+/353353 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2021-10-29runtime: move pacer time updates and state resets into methodsMichael Anthony Knyszek
Currently GC pacer updates are applied somewhat haphazardly via direct field access. To facilitate ease of testing, move these field updates into methods. Further CLs will move more of these updates into methods. For #44167. Change-Id: I25b10d2219ae27b356b5f236d44827546c86578d Reviewed-on: https://go-review.googlesource.com/c/go/+/309274 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
2021-10-29runtime: detangle sweeper pacing from GC pacingMichael Anthony Knyszek
The sweeper's pacing state is global, so detangle it from the GC pacer's state updates so that the GC pacer can be tested. For #44167. Change-Id: Ibcea989cd435b73c5891f777d9f95f9604e03bd1 Reviewed-on: https://go-review.googlesource.com/c/go/+/309273 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2021-10-29runtime: fix sweep termination conditionMichael Anthony Knyszek
Currently, there is a chance that the sweep termination condition could flap, causing e.g. runtime.GC to return before all sweep work has not only been drained, but also completed. CL 307915 and CL 307916 attempted to fix this problem, but it is still possible that mheap_.sweepDrained is marked before any outstanding sweepers are accounted for in mheap_.sweepers, leaving a window in which a thread could observe isSweepDone as true before it actually was (and after some time it would revert to false, then true again, depending on the number of outstanding sweepers at that point).

This change fixes the sweep termination condition by merging mheap_.sweepers and mheap_.sweepDrained into a single atomic value. This value is updated such that a new potential sweeper will increment the outstanding sweeper count iff there are still outstanding spans to be swept without an outstanding sweeper to pick them up. This design simplifies the sweep termination condition into a single atomic load and comparison and ensures the condition never flaps.

Updates #46500.
Fixes #45315.

Change-Id: I6d69aff156b8d48428c4cc8cfdbf28be346dbf04
Reviewed-on: https://go-review.googlesource.com/c/go/+/333389
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
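The packing idea can be sketched roughly as follows, assuming a single uint32 with the "drained" flag in the top bit and the active-sweeper count in the low bits; the names and layout here are illustrative, not the runtime's:

package sketch

import "sync/atomic"

// The top bit records that every span has been handed out to some
// sweeper; the remaining bits count sweepers that are still working.
const drainedBit = 1 << 31

var state atomic.Uint32

// begin registers a new sweeper. It fails if there is nothing left to
// hand out, so the count can only grow while spans remain.
func begin() bool {
    for {
        s := state.Load()
        if s&drainedBit != 0 {
            return false
        }
        if state.CompareAndSwap(s, s+1) {
            return true
        }
    }
}

// end unregisters a sweeper once its span is fully swept.
func end() { state.Add(^uint32(0)) } // atomic decrement

// markDrained records that all spans have been handed out.
func markDrained() {
    for {
        s := state.Load()
        if state.CompareAndSwap(s, s|drainedBit) {
            return
        }
    }
}

// isSweepDone is a single load and comparison: all spans handed out and
// no sweeper still running, so the answer cannot flap back to false.
func isSweepDone() bool { return state.Load() == drainedBit }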
2021-10-21runtime: detangle gcPaceScavenger from the pacerMichael Anthony Knyszek
Currently gcPaceScavenger is called by gcControllerState.commit, but it manipulates global state which precludes testing. This change detangles the two. Change-Id: I10d8ebdf426d99ba49d2f2cb4fb64891e9fd6091 Reviewed-on: https://go-review.googlesource.com/c/go/+/309272 Reviewed-by: Michael Pratt <mpratt@google.com> Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org>
2021-10-21runtime: formalize and fix gcPercent synchronizationMichael Anthony Knyszek
Currently gcController.gcPercent is read non-atomically by gcControllerState.revise and gcTrigger.test, but these users may execute concurrently with an update to gcPercent. Although revise's results are best-effort, reading it directly in this way is, generally speaking, unsafe.

This change makes gcPercent atomically updated for concurrent readers and documents the complete synchronization semantics. Because gcPercent is otherwise only updated with the heap lock held or the world stopped, all other reads can remain unsynchronized.

For #44167.

Change-Id: If09af103aae84a1e133e2d4fed8ab888d4b8f457
Reviewed-on: https://go-review.googlesource.com/c/go/+/308690
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2021-10-20runtime: retype mheap.reclaimCredit as atomic.UintptrMichael Anthony Knyszek
[git-generate]
cd src/runtime
mv export_test.go export.go
GOROOT=$(dirname $(dirname $PWD)) rf '
  add mheap.reclaimCredit \
    // reclaimCredit is spare credit for extra pages swept. Since \
    // the page reclaimer works in large chunks, it may reclaim \
    // more than requested. Any spare pages released go to this \
    // credit pool. \
    reclaimCredit_ atomic.Uintptr
  ex {
    import "runtime/internal/atomic"

    var t mheap
    var v, w uintptr
    var d uintptr

    t.reclaimCredit -> t.reclaimCredit_.Load()
    t.reclaimCredit = v -> t.reclaimCredit_.Store(v)
    atomic.Loaduintptr(&t.reclaimCredit) -> t.reclaimCredit_.Load()
    atomic.LoadAcquintptr(&t.reclaimCredit) -> t.reclaimCredit_.LoadAcquire()
    atomic.Storeuintptr(&t.reclaimCredit, v) -> t.reclaimCredit_.Store(v)
    atomic.StoreReluintptr(&t.reclaimCredit, v) -> t.reclaimCredit_.StoreRelease(v)
    atomic.Casuintptr(&t.reclaimCredit, v, w) -> t.reclaimCredit_.CompareAndSwap(v, w)
    atomic.Xchguintptr(&t.reclaimCredit, v) -> t.reclaimCredit_.Swap(v)
    atomic.Xadduintptr(&t.reclaimCredit, d) -> t.reclaimCredit_.Add(d)
  }
  rm mheap.reclaimCredit
  mv mheap.reclaimCredit_ mheap.reclaimCredit
'
mv export.go export_test.go

Change-Id: I2c567781a28f5d8c2275ff18f2cf605b82f22d09
Reviewed-on: https://go-review.googlesource.com/c/go/+/356712
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2021-10-20runtime: retype mheap.reclaimIndex as atomic.Uint64Michael Anthony Knyszek
[git-generate]
cd src/runtime
mv export_test.go export.go
GOROOT=$(dirname $(dirname $PWD)) rf '
  add mheap.reclaimIndex \
    // reclaimIndex is the page index in allArenas of next page to \
    // reclaim. Specifically, it refers to page (i % \
    // pagesPerArena) of arena allArenas[i / pagesPerArena]. \
    // \
    // If this is >= 1<<63, the page reclaimer is done scanning \
    // the page marks. \
    reclaimIndex_ atomic.Uint64
  ex {
    import "runtime/internal/atomic"

    var t mheap
    var v, w uint64
    var d int64

    t.reclaimIndex -> t.reclaimIndex_.Load()
    t.reclaimIndex = v -> t.reclaimIndex_.Store(v)
    atomic.Load64(&t.reclaimIndex) -> t.reclaimIndex_.Load()
    atomic.LoadAcq64(&t.reclaimIndex) -> t.reclaimIndex_.LoadAcquire()
    atomic.Store64(&t.reclaimIndex, v) -> t.reclaimIndex_.Store(v)
    atomic.StoreRel64(&t.reclaimIndex, v) -> t.reclaimIndex_.StoreRelease(v)
    atomic.Cas64(&t.reclaimIndex, v, w) -> t.reclaimIndex_.CompareAndSwap(v, w)
    atomic.Xchg64(&t.reclaimIndex, v) -> t.reclaimIndex_.Swap(v)
    atomic.Xadd64(&t.reclaimIndex, d) -> t.reclaimIndex_.Add(d)
  }
  rm mheap.reclaimIndex
  mv mheap.reclaimIndex_ mheap.reclaimIndex
'
mv export.go export_test.go

Change-Id: I1d619e3ac032285b5f7eb6c563a5188c8e36d089
Reviewed-on: https://go-review.googlesource.com/c/go/+/356711
Reviewed-by: Austin Clements <austin@google.com>
Trust: Michael Knyszek <mknyszek@google.com>
2021-10-20runtime: retype mheap.pagesSwept as atomic.Uint64Michael Anthony Knyszek
[git-generate]
cd src/runtime
mv export_test.go export.go
GOROOT=$(dirname $(dirname $PWD)) rf '
  add mheap.pagesSwept \
    pagesSwept_ atomic.Uint64 // pages swept this cycle
  ex {
    import "runtime/internal/atomic"

    var t mheap
    var v, w uint64
    var d int64

    t.pagesSwept -> t.pagesSwept_.Load()
    t.pagesSwept = v -> t.pagesSwept_.Store(v)
    atomic.Load64(&t.pagesSwept) -> t.pagesSwept_.Load()
    atomic.LoadAcq64(&t.pagesSwept) -> t.pagesSwept_.LoadAcquire()
    atomic.Store64(&t.pagesSwept, v) -> t.pagesSwept_.Store(v)
    atomic.StoreRel64(&t.pagesSwept, v) -> t.pagesSwept_.StoreRelease(v)
    atomic.Cas64(&t.pagesSwept, v, w) -> t.pagesSwept_.CompareAndSwap(v, w)
    atomic.Xchg64(&t.pagesSwept, v) -> t.pagesSwept_.Swap(v)
    atomic.Xadd64(&t.pagesSwept, d) -> t.pagesSwept_.Add(d)
  }
  rm mheap.pagesSwept
  mv mheap.pagesSwept_ mheap.pagesSwept
'
mv export.go export_test.go

Change-Id: Ife99893d90a339655f604bc3a64ee3decec645ea
Reviewed-on: https://go-review.googlesource.com/c/go/+/356709
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
2021-06-11[dev.typeparams] runtime: simplify defer record allocationCherry Mui
Now that deferred functions are always argumentless and defer records no longer carry arguments, a defer record can be fixed size (just the _defer struct). This allows us to simplify the allocation of defer records: specifically, remove the defer classes and the pools of different-sized defers.

Change-Id: Icc4b16afc23b38262ca9dd1f7369ad40874cf701
Reviewed-on: https://go-review.googlesource.com/c/go/+/326062
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
2021-06-04[dev.typeparams] runtime: undo go'd closure argument workaroundCherry Mui
In CL 298669 we added defer/go wrapping, and, since closures are not allowed to escape when compiling the runtime, we worked around it by rewriting go'd closures to argumentless, non-capturing closures, so they are not real closures and do not need to escape. The previous CL removes that restriction, so now we can undo the workaround.

Updates #40724.

Change-Id: Ic7bf129da4aee7b7fdb7157414eca943a6a27264
Reviewed-on: https://go-review.googlesource.com/c/go/+/325110
Trust: Cherry Mui <cherryyz@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
2021-05-05runtime: implement runqdrain() for GC mark worker goroutinesAndy Pan
Revive CL 310149 Change-Id: Ib4714ea5b2ade32c0f66edff841a79d8212bd79a Reviewed-on: https://go-review.googlesource.com/c/go/+/313009 Run-TryBot: Ian Lance Taylor <iant@golang.org> Run-TryBot: Michael Pratt <mpratt@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com> Trust: Michael Pratt <mpratt@google.com> Trust: Michael Knyszek <mknyszek@google.com>
2021-04-20Revert "runtime: implement runqdrain() for GC mark worker goroutines"Ian Lance Taylor
This reverts CL 310149. Reason for revert: Breaks longtest builders: https://build.golang.org/log/6af9fb147fa3101154db10e7ce055e8267cd4c93 https://build.golang.org/log/172ed6e1ec3bb503370333ee421c590fd2a72d0a Change-Id: Iaf5a8b9eec51d0517311e050d0b0f7569759d292 Reviewed-on: https://go-review.googlesource.com/c/go/+/312129 Trust: Ian Lance Taylor <iant@golang.org> Run-TryBot: Ian Lance Taylor <iant@golang.org> Reviewed-by: Tobias Klauser <tobias.klauser@gmail.com> TryBot-Result: Go Bot <gobot@golang.org>
2021-04-20runtime: implement runqdrain() for GC mark worker goroutinesAndy Pan
Change-Id: Ida44a2e07f277bee8806538ecee4beee3474cf3d Reviewed-on: https://go-review.googlesource.com/c/go/+/310149 Reviewed-by: Michael Pratt <mpratt@google.com> Reviewed-by: Ian Lance Taylor <iant@golang.org> Trust: Michael Pratt <mpratt@google.com> Trust: Ian Lance Taylor <iant@golang.org> Run-TryBot: Michael Pratt <mpratt@google.com> TryBot-Result: Go Bot <gobot@golang.org>
2021-04-16runtime: remove useless nFlushCacheRootsLeonard Wang
Change-Id: I70cb8f8e9a0eec68ea11f22ca8699aa7e0c91ede Reviewed-on: https://go-review.googlesource.com/c/go/+/310710 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Knyszek <mknyszek@google.com> Trust: Emmanuel Odeke <emmanuel@orijtech.com>
2021-04-14runtime: pass work.userForced to gcController.endCycle explicitlyMichael Anthony Knyszek
For #44167. Change-Id: I15817006f1870d6237cd06dabad988da3f23a6d6 Reviewed-on: https://go-review.googlesource.com/c/go/+/306604 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2021-04-14runtime: move next_gc and last_next_gc into gcControllerStateMichael Anthony Knyszek
This change moves next_gc and last_next_gc into gcControllerState under the names heapGoal and lastHeapGoal respectively. These are fundamentally GC pacer related values, and so it makes sense for them to live here.

Partially generated by

rf '
  ex . {
    memstats.next_gc -> gcController.heapGoal
    memstats.last_next_gc -> gcController.lastHeapGoal
  }
'

except for updates to comments and gcControllerState methods, where they're accessed through the receiver, and trace-related renames of NextGC -> HeapGoal, while we're here.

For #44167.

Change-Id: I1e871ad78a57b01be8d9f71bd662530c84853bed
Reviewed-on: https://go-review.googlesource.com/c/go/+/306603
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2021-04-14runtime: fix formatting of gcMarkLeonard Wang
Change-Id: I08aed75f3aab0da705544665e532f332adfb075e Reviewed-on: https://go-review.googlesource.com/c/go/+/308949 Reviewed-by: Austin Clements <austin@google.com> Run-TryBot: Austin Clements <austin@google.com> Trust: David Chase <drchase@google.com>
2021-04-14runtime: move roots' bases calculation to gcMarkRootPrepareRuslan Andreev
This patch makes the change described in Austin's TODO: it moves the calculation of each root type's base index from the markroot function to gcMarkRootPrepare.

Change-Id: Ib231de34e7f81e922762fc3ee2b1830921c0c7cf
Reviewed-on: https://go-review.googlesource.com/c/go/+/279461
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
2021-04-14runtime: create initializer for gcControllerStateMichael Anthony Knyszek
Now that gcControllerState contains almost all of the pacer state, create an initializer for it instead of haphazardly setting some fields. For #44167. Change-Id: I4ce1d5dd82003cb7c263fa46697851bb22a32544 Reviewed-on: https://go-review.googlesource.com/c/go/+/306601 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2021-04-14runtime: move gcPercent and heapMinimum into gcControllerStateMichael Anthony Knyszek
These variables are core to the pacer, and will need to be non-global for testing later.

Partially generated via

rf '
  ex . {
    gcPercent -> gcController.gcPercent
    heapMinimum -> gcController.heapMinimum
  }
'

The only exception to this generation is usage of these variables in gcControllerState methods.

For #44167.

Change-Id: I8b620b3061114f3a3c4b65006f715fd977b180a8
Reviewed-on: https://go-review.googlesource.com/c/go/+/306600
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2021-04-14runtime: make gcSetTriggerRatio a method of gcControllerStateMichael Anthony Knyszek
gcSetTriggerRatio's purpose is to set a bunch of downstream values when we choose to commit to a new trigger ratio computed by the gcController. Now that almost all the inputs it uses to compute the downstream values are in gcControllerState anyway, make it a method of gcControllerState. For #44167. Change-Id: I1b7ea709e8378566f812ae3450ab169d7fb66aea Reviewed-on: https://go-review.googlesource.com/c/go/+/306599 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2021-04-13runtime: move internal GC statistics from memstats to gcControllerMichael Anthony Knyszek
This change moves certain important but internal-only GC statistics from memstats into gcController. These statistics are mainly used in pacing the GC, so it makes sense to keep them in the pacer's state.

This CL was mostly generated via

rf '
  ex . {
    memstats.gc_trigger -> gcController.trigger
    memstats.triggerRatio -> gcController.triggerRatio
    memstats.heap_marked -> gcController.heapMarked
    memstats.heap_live -> gcController.heapLive
    memstats.heap_scan -> gcController.heapScan
  }
'

except for a few special cases, like updating names in comments and when these fields are used within gcControllerState methods (at which point they're accessed through the receiver).

For #44167.

Change-Id: I6bd1602585aeeb80818ded24c07d8e6fec992b93
Reviewed-on: https://go-review.googlesource.com/c/go/+/306598
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2021-04-13runtime: rename gcpercent, readgogc, and heapminimum to match Go styleMichael Anthony Knyszek
Generated with:

rf 'mv gcpercent gcPercent'
rf 'mv readgogc readGOGC'
rf 'mv heapminimum heapMinimum'

After this, comments referencing these symbols were updated via a simple sed command.

For #44167.

Change-Id: I6bb01597c2130686c01f967d0f106b06860ad2db
Reviewed-on: https://go-review.googlesource.com/c/go/+/306597
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2021-04-13runtime: break out GC pacer into its own fileMichael Anthony Knyszek
This change breaks out the GC pacer into its own file so that it'll be easier to see the full implementation and change it. It also suggests an obvious place to put tests (mgcpacer_test.go). This includes all of gcControllerState, gcSetTriggerRatio, anything related to GOGC, and all related globals and constants. This is almost a clean move, except that globals and constants are formatted into blocks instead of having a separate "var" declaration for each one. For #44167. Change-Id: I85aa84ce85c6cfbe0b33e8a3c91cbe9dc41de8cb Reviewed-on: https://go-review.googlesource.com/c/go/+/306596 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2021-04-12runtime: consolidate "is sweep done" conditionsAustin Clements
The runtime currently has two different notions of sweep completion:

1. All spans are either swept or have begun sweeping.
2. The sweeper has *finished* sweeping all spans.

Having both is confusing (it doesn't help that the documentation is often unclear or wrong). Condition 2 is stronger and the theoretical slight optimization that condition 1 could impact is never actually useful. Hence, this CL consolidates both conditions down to condition 2.

Updates #45315.

Change-Id: I55c84d767d74eb31a004a5619eaba2e351162332
Reviewed-on: https://go-review.googlesource.com/c/go/+/307916
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
2021-04-12runtime: block sweep completion on all sweep pathsAustin Clements
The runtime currently has two different notions of sweep completion:

1. All spans are either swept or have begun sweeping.
2. The sweeper has *finished* sweeping all spans.

Most things depend on condition 1. Notably, GC correctness depends on condition 1, but since all sweep operations are non-preemptible, the STW at the beginning of GC forces condition 1 to become condition 2.

runtime.GC(), however, depends on condition 2, since the intent is to complete a full GC cycle, and also update the heap profile (which can only be done after sweeping is complete).

However, the way we compute condition 2 is racy right now and may in fact only indicate condition 1. Specifically, sweepone blocks condition 2 until all sweepone calls are done, but there are many other ways to enter the sweeper that don't block this. Hence, sweepone may see that there are no more spans in the sweep list and see that it's the last sweepone and declare sweeping done, while there's some other sweeper still working on a span.

Fix this by making sure every entry to the sweeper participates in the protocol that blocks condition 2. To make sure we get this right, this CL introduces a type to track sweep blocking and (lightly) enforces span sweep ownership via the type system. This has the nice side-effect of abstracting the pattern of acquiring sweep ownership that's currently repeated in many different places.

Fixes #45315.

Change-Id: I7fab30170c5ae14c8b2f10998628735b8be6d901
Reviewed-on: https://go-review.googlesource.com/c/go/+/307915
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
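The "ownership via the type system" idea can be sketched roughly as follows; the type and method names are illustrative stand-ins, not the runtime's actual API:

package sketch

// mspan stands in for the runtime's span type.
type mspan struct{ needsSweep bool }

// sweepLocker tracks one active entry into the sweeper. While any
// sweepLocker is live, sweeping cannot be declared complete.
type sweepLocker struct {
    active bool
}

// sweepLocked wraps a span whose sweep ownership has been acquired.
// Only this wrapper exposes sweep(), so every code path that sweeps a
// span must have gone through tryAcquire and therefore participates in
// the completion protocol.
type sweepLocked struct {
    s *mspan
}

func (l *sweepLocker) tryAcquire(s *mspan) (sweepLocked, bool) {
    if !l.active || !s.needsSweep {
        return sweepLocked{}, false
    }
    return sweepLocked{s: s}, true
}

func (sl sweepLocked) sweep() {
    sl.s.needsSweep = false
}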
2021-04-06runtime: deflake TestGCTestIsReachableAustin Clements
This is a simple workaround for a bug where runtime.GC() can return before finishing a full sweep, causing gcTestIsReachable to throw. The right thing is to fix runtime.GC(), but this should get this test passing reliably in the meantime. Updates #45315. Change-Id: Iae141e6dbb26a9c2649497c1feedd4aaeaf540c7 Reviewed-on: https://go-review.googlesource.com/c/go/+/307809 Trust: Austin Clements <austin@google.com> Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org>
2021-04-05runtime: extend internal atomics to comply with sync/atomicRuslan Andreev
This CL makes changes called for by a TODO in the Go source code. The internal atomic package does not match the sync/atomic library and lacks operations for signed integers. This patch extends the internal atomic set with Int32 and Int64 operations, implemented as new aliases and assembly versions of the operations. As a result, Cas64 was replaced by Casint64 in findRunnableGCWorker without type casting.

Another goal is a unified layout for the internal atomics' source code. Previously, assembly implementations for different architectures lived in inconsistently named files. For example, the filename for AMD64 was asm_amd64.s, but the filename for RISC-V was atomic_riscv64.s, and some architectures had both files for no good reason. The assembly files were therefore merged and renamed to atomic_{$ARCH}.s filenames.

Change-Id: I29a05a7cbf5f4a9cc146e8315536c038af545677
Reviewed-on: https://go-review.googlesource.com/c/go/+/289152
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
2021-04-02runtime: make gcTestMoveStackOnNextCall not double the stackAustin Clements
Currently, gcTestMoveStackOnNextCall doubles the stack allocation on each call because stack movement always doubles the stack. That's rather unfortunate if you're doing a bunch of stack movement tests in a row that don't actually have to grow the stack because you'll quickly hit the stack size limit even though you're hardly using any of the stack. Fix this by adding a special stack poison value for gcTestMoveStackOnNextCall that newstack recognizes and inhibits the allocation doubling. Change-Id: Iace7055a0f33cb48dc97b8f4b46e45304bee832c Reviewed-on: https://go-review.googlesource.com/c/go/+/306672 Trust: Austin Clements <austin@google.com> Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org>
2021-04-02runtime: fix TestGCTestMoveStackOnNextCall flakesAustin Clements
gcTestMoveStackOnNextCall can fail to move the stack in very rare cases if there's an unfortunately timed preemption that clobbers the stack guard. This won't happen multiple times in quick succession, so make the test just retry a few times. Change-Id: I247dc0551514e269e7132cee7945291429b0e865 Reviewed-on: https://go-review.googlesource.com/c/go/+/306671 Trust: Austin Clements <austin@google.com> Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Than McIntosh <thanm@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org>
2021-03-29runtime: add GC testing helpers for regabi signature fuzzerAustin Clements
This CL adds a set of helper functions for testing GC interactions. These are intended for use in the regabi signature fuzzer, but are generally useful for GC tests, so we make them generally available to runtime tests.

These provide:

1. An easy way to force stack movement, for testing stack copying.
2. A simple and robust way to check the reachability of a set of pointers.
3. A way to check what general category of memory a pointer points to, mostly so tests can make sure they're testing what they mean to.

For #40724, but generally useful.

Change-Id: I15d33ccb3f5a792c0472a19c2cc9a8b4a9356a66
Reviewed-on: https://go-review.googlesource.com/c/go/+/305330
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
2021-03-23cmd/compile: wrap/desugar defer calls for register abiThan McIntosh
Adds code to the compiler's "order" phase to rewrite go and defer statements to always be argument-less. E.g.

    defer f(x,y)    =>    x1, y1 := x, y
                          defer func() { f(x1, y1) }

This transformation is not beneficial on its own, but it helps simplify runtime defer handling for the new register ABI (when invoking deferred functions on the panic path, the runtime doesn't need to manage the complexity of determining which args to pass in register vs memory).

This feature is currently enabled by default if GOEXPERIMENT=regabi or GOEXPERIMENT=regabidefer is in effect.

Included in this CL are some workarounds in the runtime to ensure that "go" statement targets in the runtime are argument-less already (since wrapping them can potentially introduce heap-allocated closures, which are currently not allowed). The expectation is that these workarounds will be temporary, and can go away once we either A) change the rules about heap-allocated closures, or B) implement some other scheme for handling go statements.

Change-Id: I01060d79a6b140c6f0838d6e6813f807ccdca319
Reviewed-on: https://go-review.googlesource.com/c/go/+/298669
Trust: Than McIntosh <thanm@google.com>
Run-TryBot: Than McIntosh <thanm@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: David Chase <drchase@google.com>
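To see why copying the arguments first preserves the original defer semantics, here is a small, self-contained illustration (hand-written, not compiler output): both deferred calls print 1 2, because the arguments are captured at the defer statement, even though x changes afterwards.

package main

import "fmt"

func f(x, y int) { fmt.Println(x, y) }

func main() {
    x, y := 1, 2

    // Original form: arguments are evaluated when the defer statement runs.
    defer f(x, y)

    // Desugared form: copy the arguments, then defer an argument-less
    // closure over the copies.
    x1, y1 := x, y
    defer func() { f(x1, y1) }()

    // Mutating x later does not affect either deferred call.
    x = 100
}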
2021-03-05runtime: encapsulate access to allgsMichael Pratt
Correctly accessing allgs is a bit hairy. Some paths need to lock allglock, some don't. Those that don't are safest using atomicAllG, but usage is not consistent. Rather than doing this ad-hoc, move all access* through forEachG / forEachGRace, the locking and atomic versions, respectively. This will make it easier to ensure safe access. * markroot is the only exception, as it has a far-removed guarantee of safe access via an atomic load of allglen far before actual use. Change-Id: Ie1c7a8243e155ae2b4bc3143577380c695680e89 Reviewed-on: https://go-review.googlesource.com/c/go/+/279994 Trust: Michael Pratt <mpratt@google.com> Run-TryBot: Michael Pratt <mpratt@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Knyszek <mknyszek@google.com>
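The shape of the two helpers can be sketched like this (a toy illustration: the real forEachGRace reads an atomically published snapshot rather than the raw slice, and *g is the runtime's goroutine type):

package sketch

import "sync"

type g struct{ id int }

var (
    allglock sync.Mutex
    allgs    []*g
)

// forEachG calls fn on every G, holding allglock for a consistent view.
func forEachG(fn func(gp *g)) {
    allglock.Lock()
    defer allglock.Unlock()
    for _, gp := range allgs {
        fn(gp)
    }
}

// forEachGRace iterates without the lock. It may miss Gs added
// concurrently, which callers must be able to tolerate.
func forEachGRace(fn func(gp *g)) {
    for _, gp := range allgs {
        fn(gp)
    }
}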
2021-02-23runtime: clarify GC fractional mode descriptionzhengjianxun
Nowadays, in runtime/mgc.go, we can see the comment description: the fractional worker is necessary when GOMAXPROCS*gcBackgroundUtilization is not an integer. But that is not true, for example when GOMAXPROCS=5. In the implementation of startCycle(), fractional mode happens only when GOMAXPROCS<=3 or GOMAXPROCS=6, so that utilization can be closest to 25%.

Fixes #44380.

Change-Id: Id0dd6d9f37759c2c9231f164a013a014216dd442
GitHub-Last-Rev: 5910e76324b2fa908235c325c8b1edafca496256
GitHub-Pull-Request: golang/go#44381
Reviewed-on: https://go-review.googlesource.com/c/go/+/293630
Reviewed-by: Austin Clements <austin@google.com>
Trust: Michael Pratt <mpratt@google.com>
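Illustrative arithmetic for the 25% background utilization target (reasoning inferred from the commit message, not quoted from the code): with GOMAXPROCS=5, 5*0.25 = 1.25 is not an integer, yet startCycle settles for one dedicated worker (20% utilization) and no fractional worker, because that is close enough to the target. With GOMAXPROCS=6, 6*0.25 = 1.5, and rounding to either 1 dedicated worker (16.7%) or 2 (33.3%) misses the target by too much, so one dedicated worker plus a fractional worker is used instead.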
2020-10-30runtime: add heap lock assertionsMichael Pratt
Some functions that required holding the heap lock _or_ world stop have been simplified to simply requiring the heap lock. This is conceptually simpler and taking the heap lock during world stop is guaranteed to not contend. This was only done on functions already called on the systemstack to avoid too many extra systemstack calls in GC. Updates #40677 Change-Id: I15aa1dadcdd1a81aac3d2a9ecad6e7d0377befdc Reviewed-on: https://go-review.googlesource.com/c/go/+/250262 Run-TryBot: Michael Pratt <mpratt@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com> Trust: Michael Pratt <mpratt@google.com>
2020-10-30runtime: add world-stopped assertionsMichael Pratt
Stopping the world is an implicit lock for many operations, so we should assert the world is stopped in functions that require it. This is enabled along with the rest of lock ranking, though it is a bit orthogonal and likely cheap enough to enable all the time should we choose. Requiring a lock _or_ world stop is common, so that can be expressed as well. Updates #40677 Change-Id: If0a58544f4251d367f73c4120c9d39974c6cd091 Reviewed-on: https://go-review.googlesource.com/c/go/+/248577 Run-TryBot: Michael Pratt <mpratt@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com> Trust: Michael Pratt <mpratt@google.com>
2020-10-30runtime: simplify gcBgMarkWorker preemptionMichael Pratt
gcBgMarkWorker G's are primarily scheduled by findRunnableGCWorker, but that no longer needs to be strictly enforced. Temporary preemption to a runq is fine when the P is not in use. We still releasem in gopark in the normal case for efficiency: if gcDrain stops because gp.preempt is set, then gopark would always preempt. That is fine, but inefficient, since it will reschedule simply to park again. Thus, we keep releasem in unlockf to skip this extra cycle. Change-Id: I6d1a42e3ca41b76227142a6b5bfb376c9213e3c9 Reviewed-on: https://go-review.googlesource.com/c/go/+/262349 Run-TryBot: Michael Pratt <mpratt@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Knyszek <mknyszek@google.com> Trust: Michael Pratt <mpratt@google.com>
2020-10-30runtime: manage gcBgMarkWorkers with a global poolMichael Pratt
Background mark workers perform per-P marking work. Currently each worker is assigned a P at creation time. The worker "attaches" to the P via p.gcBgMarkWorker, making itself (usually) available to findRunnableGCWorker for scheduling GC work. While running gcMarkDone, the worker "detaches" from the P (by clearing p.gcBgMarkWorker), since it may park for other reasons and should not be scheduled by findRunnableGCWorker. Unfortunately, this design is complex and difficult to reason about. We simplify things by changing the design to eliminate the hard P attachment. Rather than workers always performing work from the same P, workers perform work for whichever P they find themselves on. On park, the workers are placed in a pool of free workers, which each P's findRunnableGCWorker can use to run a worker for its P. Now if a worker parks in gcMarkDone, a P may simply use another worker from the pool to complete its own work. The P's GC worker mode is used to communicate the mode to run to the selected worker. It is also used to emit the appropriate worker EvGoStart tracepoint. This is a slight change, as this G may be preempted (e.g., in gcMarkDone). When it is rescheduled, the trace viewer will show it as a normal goroutine again. It is currently a bit difficult to connect to the original worker tracepoint, as the viewer does not display the goid for the original worker (though the data is in the trace file). Change-Id: Id7bd3a364dc18a4d2b1c99c4dc4810fae1293c1b Reviewed-on: https://go-review.googlesource.com/c/go/+/262348 Run-TryBot: Michael Pratt <mpratt@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Knyszek <mknyszek@google.com> Trust: Michael Pratt <mpratt@google.com>
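A loose sketch of the pool idea (illustrative types and names, not the runtime's gcBgMarkWorkerPool, which is a lock-free list):

package sketch

import "sync"

// worker stands in for a parked background mark worker goroutine.
type worker struct{ id int }

// pool is a simple free list of parked workers; a mutex keeps the
// sketch short.
type pool struct {
    mu   sync.Mutex
    free []*worker
}

// park makes w available to any P that needs a mark worker.
func (p *pool) park(w *worker) {
    p.mu.Lock()
    p.free = append(p.free, w)
    p.mu.Unlock()
}

// tryGet hands a parked worker to the calling P, if one is available;
// the P records which mode the worker should run in before waking it.
func (p *pool) tryGet() *worker {
    p.mu.Lock()
    defer p.mu.Unlock()
    if len(p.free) == 0 {
        return nil
    }
    w := p.free[len(p.free)-1]
    p.free = p.free[:len(p.free)-1]
    return w
}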
2020-10-26runtime,runtime/metrics: add metric for distribution of GC pausesMichael Anthony Knyszek
For #37112. Change-Id: Ibb0425c9c582ae3da3b2662d5bbe830d7df9079c Reviewed-on: https://go-review.googlesource.com/c/go/+/247047 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26runtime: rename mcache fields to match Go styleMichael Anthony Knyszek
This change renames a bunch of malloc statistics stored in the mcache that are all named with the "local_" prefix. It also renames largeAlloc to allocLarge to prevent a naming conflict, and next_sample because it would be the last mcache field with the old C naming style. Change-Id: I29695cb83b397a435ede7e9ad5c3c9be72767ea3 Reviewed-on: https://go-review.googlesource.com/c/go/+/246969 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>