path: root/src/runtime/mgcmark.go
2021-04-30  cmd/compile, runtime: emit only GC data for stack objects  (Cherry Zhang)

Currently, for stack objects, the compiler emits metadata that includes the offset and type descriptor for each object. The type descriptor symbol has many fields and references many other symbols, e.g. field/element types, equality functions, and names. What we actually need at runtime is only the GC metadata needed to scan the object, and the GC metadata are "leaf" symbols (which don't reference other symbols). Emit only the GC data instead. This avoids bringing live the type descriptor, as well as things referenced by it (if it is not otherwise live).

This reduces binary sizes:

                       old       new
    hello (println)   1187776   1133856  (-4.5%)
    hello (fmt)       1902448   1844416  (-3.1%)
    cmd/compile      22670432  22438576  (-1.0%)
    cmd/link          6346272   6225408  (-1.9%)

No significant change in compiler speed.

    name      old time/op  new time/op  delta
    Template   184ms ± 2%   186ms ± 5%    ~     (p=0.905 n=9+10)
    Unicode   78.4ms ± 5%  76.3ms ± 3%  -2.60%  (p=0.009 n=10+10)
    GoTypes    1.09s ± 1%   1.08s ± 1%  -0.73%  (p=0.027 n=10+8)
    Compiler  85.6ms ± 3%  84.6ms ± 4%    ~     (p=0.143 n=10+10)
    SSA        7.23s ± 1%   7.25s ± 1%    ~     (p=0.780 n=10+9)
    Flate      116ms ± 5%   115ms ± 6%    ~     (p=0.912 n=10+10)
    GoParser   201ms ± 4%   195ms ± 1%    ~     (p=0.089 n=10+10)
    Reflect    455ms ± 1%   458ms ± 2%    ~     (p=0.050 n=9+9)
    Tar        155ms ± 2%   155ms ± 3%    ~     (p=0.436 n=10+10)
    XML        202ms ± 2%   200ms ± 2%    ~     (p=0.053 n=10+9)

Change-Id: I33a7f383d79afba1a482cac6da0cf5b7de9c0ec4
Reviewed-on: https://go-review.googlesource.com/c/go/+/313514
Trust: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>

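As an illustration, a hedged sketch of what a runtime-side stack object record can look like once it carries only GC metadata; the field names and layout here are assumptions for illustration, not the committed code:

    // Sketch: a stack object record that references only "leaf" GC
    // metadata, so no *_type (and nothing it references) is kept live.
    type stackObjectRecord struct {
        off       int32  // offset of the object within the frame
        size      int32  // size of the object in bytes
        ptrBytes  int32  // length of the prefix that can contain pointers
        gcdataoff uint32 // offset of the gcdata bitmap in the module's rodata
    }
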
2021-04-16  runtime: remove useless nFlushCacheRoots  (Leonard Wang)

Change-Id: I70cb8f8e9a0eec68ea11f22ca8699aa7e0c91ede
Reviewed-on: https://go-review.googlesource.com/c/go/+/310710
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Trust: Emmanuel Odeke <emmanuel@orijtech.com>

2021-04-14  runtime: update debug call protocol for register ABI  (Michael Anthony Knyszek)

The debug call tests currently assume that the target Go function is ABI0; this is clearly no longer true when we switch to the new ABI, so make the tests set up argument register state in the debug call handler and copy back results returned in registers.

A small snag in calling a Go function that follows the new ABI is that the debug call protocol depends on the AX register being set to a specific value as it bounces in and out of the handler, but this register is part of the new register ABI, so results end up being clobbered. Use R12 instead.

Next, the new desugaring behavior for "go" statements means that newosproc1 must always call a function with no frame; if it takes any arguments, it closes over them and they're passed in the context register. Currently when debugCallWrap creates a new goroutine, it uses newosproc1 directly and passes a non-zero-sized frame, so that needs to be updated. To fix this, briefly use the g's param field, which is otherwise only used for channels, to pass an explicitly allocated object containing the "closed over" variables. While we could manually do the desugaring ourselves (we cannot do so automatically because the Go compiler prevents heap-allocated closures in the runtime), that bakes in more ABI details in a place that really doesn't need to care about them.

Finally, there's an old bug here where the context register was set up in CX, so technically closure calls never worked. Oops. It was otherwise harmless for other types of calls before, but now CX is an argument register, so it interferes with regular calls, too.

For #40724.

Change-Id: I652c25ed56a25741bb04c24cfb603063c099edde
Reviewed-on: https://go-review.googlesource.com/c/go/+/309169
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Alessandro Arzilli <alessandro.arzilli@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>

2021-04-14  runtime: incorporate hbits advancement in scanobject into loop  (Josh Bleecher Snyder)

This makes it clearer that i and hbits advance together. As a bonus, it generates slightly better code.

Change-Id: I24d51102535c39f962a59c1a4a7c5c894339aa18
Reviewed-on: https://go-review.googlesource.com/c/go/+/309569
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>

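A minimal sketch of the shape of the change, assuming the heapBits iterator of that era: moving the hbits advancement into the loop's post statement pairs it with i.

    // Sketch: i and hbits visibly advance together in the loop header,
    // rather than hbits being advanced separately inside the body.
    var i uintptr
    for i = 0; i < n; i, hbits = i+sys.PtrSize, hbits.next() {
        // ... consult the pointer/scalar bit for the word at b+i ...
    }
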
2021-04-14  runtime: move roots' bases calculation to gcMarkRootPrepare  (Ruslan Andreev)

This patch provides changes according to Austin's TODO: it just moves the calculation of the base index of each root type from the markroot function to gcMarkRootPrepare.

Change-Id: Ib231de34e7f81e922762fc3ee2b1830921c0c7cf
Reviewed-on: https://go-review.googlesource.com/c/go/+/279461
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>

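A hedged sketch of the precomputation this moves into gcMarkRootPrepare; the base field names are assumptions patterned after the work.n*Roots counters this file already uses:

    // Compute each root section's base index once, so markroot can
    // dispatch a shard index with simple range comparisons.
    work.baseData = uint32(fixedRootCount)
    work.baseBSS = work.baseData + uint32(work.nDataRoots)
    work.baseSpans = work.baseBSS + uint32(work.nBSSRoots)
    work.baseStacks = work.baseSpans + uint32(work.nSpanRoots)
    work.baseEnd = work.baseStacks + uint32(work.nStackRoots)
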
2021-03-05  runtime: encapsulate access to allgs  (Michael Pratt)

Correctly accessing allgs is a bit hairy. Some paths need to lock allglock, some don't. Those that don't are safest using atomicAllG, but usage is not consistent.

Rather than doing this ad-hoc, move all access (*) through forEachG / forEachGRace, the locking and atomic versions, respectively. This will make it easier to ensure safe access.

(*) markroot is the only exception, as it has a far-removed guarantee of safe access via an atomic load of allglen far before actual use.

Change-Id: Ie1c7a8243e155ae2b4bc3143577380c695680e89
Reviewed-on: https://go-review.googlesource.com/c/go/+/279994
Trust: Michael Pratt <mpratt@google.com>
Run-TryBot: Michael Pratt <mpratt@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>

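The two accessors, sketched under the assumption that atomicAllG returns a snapshot pointer and length:

    // forEachG calls fn on every G, holding allglock.
    func forEachG(fn func(gp *g)) {
        lock(&allglock)
        for _, gp := range allgs {
            fn(gp)
        }
        unlock(&allglock)
    }

    // forEachGRace calls fn on every G without a lock. It may miss
    // Gs created concurrently with the iteration.
    func forEachGRace(fn func(gp *g)) {
        ptr, length := atomicAllG()
        for i := uintptr(0); i < length; i++ {
            fn(atomicAllGIndex(ptr, i))
        }
    }
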
2021-01-05  runtime: don't take allglock in tracebackothers  (Michael Pratt)

tracebackothers is called from fatal throw/panic.

A fatal throw may be taken with allglock held (notably in the allocator), which would cause a deadlock in tracebackothers when we try to take allglock again. Locking allglock here is also often a lock order violation w.r.t. the locks held when throw was called.

Avoid the deadlock and ordering issues by skipping locking altogether. It is OK to miss concurrently created Gs (which are generally avoided by freezetheworld(), and which were possible previously anyways if created after the loop).

Fatal throw/panic call freezetheworld(), which should freeze other threads that may be racing to modify allgs. However, freezetheworld() does _not_ guarantee that it stops all other threads, so we can't simply drop the lock.

Fixes #42669
Updates #43175

Change-Id: I657aec46ed35fd5d1b3f1ba25b500128ab26b088
Reviewed-on: https://go-review.googlesource.com/c/go/+/270861
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Trust: Michael Pratt <mpratt@google.com>

2020-12-23  runtime: fix nStackRoots comment about stack roots  (Michael Anthony Knyszek)

A comment in mgcmark.go indicates that we scan stacks a second time, but we don't, at least not since changing to the hybrid write barrier.

Change-Id: I9376adbb6d8b6dd9dc3cee62e077b5dfb8a3fdde
Reviewed-on: https://go-review.googlesource.com/c/go/+/279797
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Austin Clements <austin@google.com>

2020-10-30  runtime: add world-stopped assertions  (Michael Pratt)

Stopping the world is an implicit lock for many operations, so we should assert the world is stopped in functions that require it.

This is enabled along with the rest of lock ranking, though it is a bit orthogonal and likely cheap enough to enable all the time should we choose.

Requiring a lock _or_ world stop is common, so that can be expressed as well.

Updates #40677

Change-Id: If0a58544f4251d367f73c4120c9d39974c6cd091
Reviewed-on: https://go-review.googlesource.com/c/go/+/248577
Run-TryBot: Michael Pratt <mpratt@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Trust: Michael Pratt <mpratt@google.com>

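A minimal sketch of such an assertion, assuming a flag maintained by the stop-the-world path:

    var worldIsStopped uint32 // set when the world stops, cleared on restart

    // assertWorldStopped throws if the world is not stopped. Sketch:
    // only meaningful in builds with assertions (lock ranking) enabled.
    func assertWorldStopped() {
        if atomic.Load(&worldIsStopped) != 0 {
            return
        }
        throw("world not stopped")
    }
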
2020-10-26  runtime: access the assist ratio atomically  (Michael Anthony Knyszek)

This change makes it so that the GC assist ratio (the pair of gcControllerState fields assistBytesPerWork and assistWorkPerByte) is updated atomically. Note that the pair of fields is not updated together atomically, but that's OK: the code here was already racy for some time, and in practice the assist ratio moves very slowly.

The purpose of this change is so that we can document gcController.revise to be safe for concurrent use, which will be useful in further changes.

Change-Id: Ie25d630207c88e4f85f2b8953f6a0051ebf1b4ea
Reviewed-on: https://go-review.googlesource.com/c/go/+/246963
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>

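Outside the runtime, the same idea can be expressed with sync/atomic; a self-contained sketch of publishing a float64 atomically via its bit pattern:

    import (
        "math"
        "sync/atomic"
    )

    // atomicFloat64 publishes a float64 by storing its IEEE-754 bits.
    type atomicFloat64 struct{ bits uint64 }

    func (f *atomicFloat64) Store(v float64) {
        atomic.StoreUint64(&f.bits, math.Float64bits(v))
    }

    func (f *atomicFloat64) Load() float64 {
        return math.Float64frombits(atomic.LoadUint64(&f.bits))
    }
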
2020-08-25  cmd/compile: don't allow go:notinheap on the heap or stack  (Keith Randall)

Right now we just prevent such types from being on the heap. This CL makes it so they cannot appear on the stack either. The distinction between heap and stack is pretty vague at the language level (e.g. it is affected by -N), and we don't need the flexibility anyway.

Once go:notinheap types cannot be in either place, we don't need to consider pointers to such types to be pointers, at least according to the garbage collector and stack copying. (This is the big win of this CL, in my opinion.)

The distinction between HasPointers and HasHeapPointer no longer exists. There is only HasPointers.

This CL is cleanup before possible use of go:notinheap to fix #40954.

Update #13386

Change-Id: Ibd895aadf001c0385078a6d4809c3f374991231a
Reviewed-on: https://go-review.googlesource.com/c/go/+/249917
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>

2020-08-17  runtime: clean up old markrootSpans  (Michael Anthony Knyszek)

This change removes the old markrootSpans implementation and deletes the feature flag.

Updates #37487.

Change-Id: Idb5a2559abcc3be5a7da6f2ccce1a86e1d7634e3
Reviewed-on: https://go-review.googlesource.com/c/go/+/221183
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Austin Clements <austin@google.com>

2020-08-17  runtime: move checkmarks to a separate bitmap  (Austin Clements)

Currently, the GC stores the object marks for checkmarks mode in the heap bitmap using a rather complex encoding: for one-word objects, the checkmark is stored in the pointer/scalar bit, since one-word objects must be pointers; for larger objects, the checkmark is stored in what would be the scan/dead bit for the second word of the object. This encoding made more sense when the runtime used the first scan/dead bit as the regular mark bit, but we moved away from that long ago.

This encoding and overloading of the heap bitmap bits causes a great deal of complexity in many parts of the allocator and garbage collector and leads to some subtle bugs like #15903.

This CL moves the checkmarks mark bits into their own per-arena bitmap and reclaims the second scan/dead bit as a regular scan/dead bit.

I tested this by enabling doubleCheck mode in heapBitsSetType and running in both regular and GODEBUG=gccheckmark=1 mode.

Fixes #15903.

No performance degradation. (Very slight improvement on a few benchmarks, but it's probably just noise.)

    name                                old time/op  new time/op  delta
    BiogoIgor                           16.6s ± 1%   16.4s ± 1%   -0.94%  (p=0.000 n=25+24)
    BiogoKrishna                        19.2s ± 3%   19.2s ± 3%   ~       (p=0.638 n=23+25)
    BleveIndexBatch100                  6.12s ± 5%   6.17s ± 4%   ~       (p=0.170 n=25+25)
    CompileTemplate                     206ms ± 1%   205ms ± 1%   -0.43%  (p=0.005 n=24+24)
    CompileUnicode                      82.2ms ± 2%  81.5ms ± 2%  -0.95%  (p=0.001 n=22+22)
    CompileGoTypes                      755ms ± 3%   754ms ± 4%   ~       (p=0.715 n=25+25)
    CompileCompiler                     3.73s ± 1%   3.73s ± 1%   ~       (p=0.445 n=25+24)
    CompileSSA                          8.67s ± 1%   8.66s ± 1%   ~       (p=0.836 n=24+22)
    CompileFlate                        134ms ± 2%   133ms ± 1%   -0.66%  (p=0.001 n=24+23)
    CompileGoParser                     164ms ± 1%   163ms ± 1%   -0.85%  (p=0.000 n=24+24)
    CompileReflect                      466ms ± 5%   466ms ± 3%   ~       (p=0.863 n=25+25)
    CompileTar                          182ms ± 1%   182ms ± 1%   -0.31%  (p=0.048 n=24+24)
    CompileXML                          249ms ± 1%   248ms ± 1%   -0.32%  (p=0.031 n=21+25)
    CompileStdCmd                       10.3s ± 1%   10.3s ± 1%   ~       (p=0.459 n=23+23)
    FoglemanFauxGLRenderRotateBoat      8.66s ± 1%   8.62s ± 1%   -0.47%  (p=0.000 n=23+24)
    FoglemanPathTraceRenderGopherIter1  20.3s ± 3%   20.2s ± 2%   ~       (p=0.893 n=25+25)
    GopherLuaKNucleotide                29.7s ± 1%   29.8s ± 2%   ~       (p=0.421 n=24+25)
    MarkdownRenderXHTML                 246ms ± 1%   247ms ± 1%   ~       (p=0.558 n=25+24)
    Tile38WithinCircle100kmRequest      779µs ± 4%   779µs ± 3%   ~       (p=0.954 n=25+25)
    Tile38IntersectsCircle100kmRequest  1.02ms ± 3%  1.01ms ± 4%  ~       (p=0.658 n=25+25)
    Tile38KNearestLimit100Request       984µs ± 4%   986µs ± 4%   ~       (p=0.627 n=24+25)
    [Geo mean]                          552ms        551ms        -0.19%

https://perf.golang.org/search?q=upload:20200723.6

Change-Id: Ic703f26a83fb034941dc6f4788fc997d56890dec
Reviewed-on: https://go-review.googlesource.com/c/go/+/244539
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>

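A hedged sketch of setting a checkmark in a dedicated per-arena bitmap, one bit per heap word; the checkmarks field and indexing are assumptions for illustration:

    // checkmarksMap holds one mark bit per word of the arena
    // (assumed to hang off heapArena as a checkmarks field).
    type checkmarksMap [heapArenaBytes / sys.PtrSize / 8]uint8

    func setCheckmark(obj uintptr) {
        ai := arenaIndex(obj)
        arena := mheap_.arenas[ai.l1()][ai.l2()]
        word := (obj % heapArenaBytes) / sys.PtrSize
        atomic.Or8(&arena.checkmarks[word/8], 1<<(word%8))
    }
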
2020-06-08  runtime: always mark span when marking an object  (Austin Clements)

The page sweeper depends on spans being marked if any object in the span is marked, but currently only greyobject does this. gcmarknewobject and wbBufFlush1 also mark objects, but neither sets span marks. As a result, if there are live objects on a span, but they're all marked via allocation or write barriers, then the span itself won't be marked and the page reclaimer will free the span, ultimately leading to memory corruption when the memory for those live allocations gets reused.

Fix this by making gcmarknewobject and wbBufFlush1 also mark pages.

No test because I have no idea how to reliably (or even unreliably) trigger this.

Fixes #39432.

Performance is a wash or very slightly worse. I benchmarked the gcmarknewobject and wbBufFlush1 changes independently and both showed a slight performance improvement, so I'm going to call this noise.

    name                                old time/op  new time/op  delta
    BiogoIgor                           15.9s ± 2%   15.9s ± 2%   ~       (p=0.758 n=25+25)
    BiogoKrishna                        15.7s ± 3%   15.7s ± 3%   ~       (p=0.382 n=21+21)
    BleveIndexBatch100                  4.94s ± 3%   5.07s ± 4%   +2.63%  (p=0.000 n=25+25)
    CompileTemplate                     204ms ± 1%   205ms ± 1%   +0.43%  (p=0.000 n=21+23)
    CompileUnicode                      77.8ms ± 1%  78.1ms ± 1%  ~       (p=0.130 n=23+23)
    CompileGoTypes                      731ms ± 1%   733ms ± 1%   +0.30%  (p=0.006 n=22+22)
    CompileCompiler                     3.64s ± 2%   3.65s ± 3%   ~       (p=0.179 n=24+25)
    CompileSSA                          8.44s ± 1%   8.46s ± 1%   +0.30%  (p=0.003 n=22+23)
    CompileFlate                        132ms ± 1%   133ms ± 1%   ~       (p=0.098 n=22+22)
    CompileGoParser                     164ms ± 1%   164ms ± 1%   +0.37%  (p=0.000 n=21+23)
    CompileReflect                      455ms ± 1%   457ms ± 2%   +0.50%  (p=0.002 n=20+22)
    CompileTar                          182ms ± 2%   182ms ± 1%   ~       (p=0.382 n=22+22)
    CompileXML                          245ms ± 3%   245ms ± 1%   ~       (p=0.070 n=21+23)
    CompileStdCmd                       16.5s ± 2%   16.5s ± 3%   ~       (p=0.486 n=23+23)
    FoglemanFauxGLRenderRotateBoat      12.9s ± 1%   13.0s ± 1%   +0.97%  (p=0.000 n=21+24)
    FoglemanPathTraceRenderGopherIter1  18.6s ± 1%   18.7s ± 0%   ~       (p=0.083 n=23+24)
    GopherLuaKNucleotide                28.4s ± 1%   29.3s ± 1%   +2.84%  (p=0.000 n=25+25)
    MarkdownRenderXHTML                 252ms ± 0%   251ms ± 1%   -0.50%  (p=0.000 n=23+24)
    Tile38WithinCircle100kmRequest      516µs ± 2%   516µs ± 2%   ~       (p=0.763 n=24+25)
    Tile38IntersectsCircle100kmRequest  689µs ± 2%   689µs ± 2%   ~       (p=0.617 n=24+24)
    Tile38KNearestLimit100Request       608µs ± 1%   606µs ± 2%   -0.35%  (p=0.030 n=19+22)
    [Geo mean]                          522ms        524ms        +0.41%

https://perf.golang.org/search?q=upload:20200606.4

Change-Id: I8b331f310dbfaba0468035f207467c8403005bf5
Reviewed-on: https://go-review.googlesource.com/c/go/+/236817
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>

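A sketch of the fix as described, assuming the pageIndexOf helper the reclaimer's bitmap is accessed through:

    // gcmarknewobject (sketch): mark the object and its span's page.
    objIndex := span.objIndex(obj)
    span.markBitsForIndex(objIndex).setMarked()

    // Also mark the containing page, so the page reclaimer never
    // frees a span whose only live objects were marked at allocation.
    arena, pageIdx, pageMask := pageIndexOf(span.base())
    if arena.pageMarks[pageIdx]&pageMask == 0 {
        atomic.Or8(&arena.pageMarks[pageIdx], pageMask)
    }
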
2020-04-29  runtime: use conservative scanning for debug calls  (Austin Clements)

A debugger can inject a call at almost any PC, which causes significant complications with stack scanning and growth. Currently, the runtime solves this using precise stack maps and register maps at nearly all PCs, but these extra maps require roughly 5% of the binary. These extra maps were originally considered worth this space because they were intended to be used for non-cooperative preemption, but are now used only for debug call injection.

This CL switches from using precise maps to instead using conservative frame scanning, much like how non-cooperative preemption works. When a call is injected, the runtime flushes all potential pointer registers to the stack, and then treats that frame as well as the interrupted frame conservatively.

The limitation of conservative frame scanning is that we cannot grow the goroutine stack. That's doable because the previous CL switched to performing debug calls on a new goroutine, where they are free to grow the stack.

With this CL, there are no remaining uses of precise register maps (though we still use the unsafe-point information that's encoded in the register map PCDATA stream), and stack maps are only used at call sites.

For #36365.

Change-Id: Ie217b6711f3741ccc437552d8ff88f961a73cee0
Reviewed-on: https://go-review.googlesource.com/c/go/+/229300
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>

2020-04-21  runtime: add bitmap-based markrootSpans implementation  (Michael Anthony Knyszek)

Currently markrootSpans, the scanning routine which scans span specials (particularly finalizers) as roots, uses sweepSpans to shard work and find spans to mark. However, as part of a future CL to change span ownership and how mcentral works, we want to avoid having markrootSpans use the sweep bufs to find specials, so in this change we introduce a new mechanism.

Much like for the page reclaimer, we set up a per-page bitmap where the first page for a span is marked if the span contains any specials, and unmarked if it has no specials. This bitmap is updated by addspecial, removespecial, and during sweeping. markrootSpans then shards this bitmap into mark work and markers iterate over the bitmap looking for spans with specials to mark. Unlike the page reclaimer, we don't need to use the pageInUse bits because having a special implies that a span is in-use.

While in terms of computational complexity this design is technically worse, because it needs to iterate over the mapped heap, in practice this iteration is very fast (we can skip over large swathes of the heap very quickly) and we only look at spans that have any specials at all, rather than having to touch each span.

This new implementation of markrootSpans is behind a feature flag called go115NewMarkrootSpans.

Updates #37487.

Change-Id: I8ea07b6c11059f6d412fe419e0ab512d989377b8
Reviewed-on: https://go-review.googlesource.com/c/go/+/221178
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>

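A sketch of maintaining the per-page specials bitmap when the first special is attached to a span; names follow the description above, details assumed:

    // spanHasSpecials marks a span as having specials in the arena bitmap.
    func spanHasSpecials(s *mspan) {
        arenaPage := (s.base() / pageSize) % pagesPerArena
        ai := arenaIndex(s.base())
        ha := mheap_.arenas[ai.l1()][ai.l2()]
        atomic.Or8(&ha.pageSpecials[arenaPage/8], uint8(1)<<(arenaPage%8))
    }
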
2020-03-20  runtime: use divRoundUp  (Austin Clements)

There are a handful of places where the runtime wants to round up the result of a division. We just introduced a helper to do this. This CL replaces all of the hand-coded round-ups (that I could find) with this helper.

Change-Id: I465d152157ff0f3cad40c0aa57491e4f2de510ad
Reviewed-on: https://go-review.googlesource.com/c/go/+/224385
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>

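The helper is one line; this is its standard shape:

    // divRoundUp returns ceil(n / a).
    func divRoundUp(n, a uintptr) uintptr {
        // a is generally a power of two. This computes the correct
        // result as long as n+a does not overflow.
        return (n + a - 1) / a
    }
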
2020-03-18  runtime: preempt dedicated background mark workers for STW  (Michael Anthony Knyszek)

Currently, dedicated background mark workers are essentially always non-preemptible. This change makes it so that dedicated background mark workers park if their preemption flag is set and someone is trying to STW, allowing them to do so.

This change prepares us for allowing a STW to happen (and happen promptly) during GC marking in a follow-up change.

Updates #19812.

Change-Id: I67fb6085bf0f0aebd18ca500172767818a1f15e3
Reviewed-on: https://go-review.googlesource.com/c/go/+/215157
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Austin Clements <austin@google.com>

2019-12-05  runtime: remove comment about gcCopySpans()  (Johan Jansson)

Remove documentation reference to gcCopySpans(), as that function was removed in https://golang.org/cl/30537.

Fixes #35683

Change-Id: I7fb7c6cc60bfb3a133a019a20eb3f9d4c7627b31
Reviewed-on: https://go-review.googlesource.com/c/go/+/209917
Reviewed-by: Austin Clements <austin@google.com>

2019-11-08  runtime: fix (*gcSweepBuf).block guarantees  (Michael Anthony Knyszek)

Currently gcSweepBuf guarantees that push operations may be performed concurrently with each other and that block operations may be performed concurrently with push operations as well.

Unfortunately, this isn't quite true. The existing code allows push operations to happen concurrently with each other, but block operations may return blocks with nil entries. The way this can happen is if two concurrent pushers grab a slot to push to, and the first one (the one with the earlier slot in the buffer) hasn't quite written a span value when block is called. The existing code in block only checks if the very last value in the block is nil, when really an arbitrary number of the last few values in the block may or may not be nil.

Today, this case can't actually happen because when push operations happen concurrently during a GC (which is the only time block is called), they only ever happen during an allocation with the heap lock held, effectively serializing them. A block operation may happen concurrently with one of these pushes, but its callers will never see a nil mspan. Outside of a GC, this isn't a problem because although push operations from allocations can run concurrently with push operations from sweeping, block operations will never run.

In essence, the real concurrency guarantees provided by gcSweepBuf are that block operations may happen concurrently with push operations, but that push operations may not be concurrent with each other if there are any block operations.

To fix this, and to prepare for push operations happening without the heap lock held in a future CL, we update the documentation for block to correctly state that there may be nil entries in the returned slice. While we're here, make the mspan writes into the buffer atomic to avoid a block user racing on a nil check, and document that the user should load mspan values from the returned slice atomically. Finally, we make all callers of block adhere to the new rules.

We choose to allow nil values rather than filter them out because the only caller of block is markrootSpans, and if it catches a nil entry, then there wasn't anything to mark in there anyway since the span is just being created.

Updates #35112.

Change-Id: I6450aab15f51690d7a000ba5b3d529cf2ca5da1e
Reviewed-on: https://go-review.googlesource.com/c/go/+/203318
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>

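A sketch of the atomic publication protocol this documents, with the consumer side simplified:

    // Pusher: publish the span pointer atomically into its reserved slot.
    atomic.StorepNoWB(unsafe.Pointer(&block.spans[bottom]), unsafe.Pointer(s))

    // Consumer (markrootSpans): load atomically and tolerate nil slots,
    // which belong to pushes that reserved a slot but haven't filled it.
    for i := range spans {
        s := (*mspan)(atomic.Loadp(unsafe.Pointer(&spans[i])))
        if s == nil {
            continue // nothing to mark; the span is still being created
        }
        // ... scan s's specials ...
    }
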
2019-11-02  runtime: use signals to preempt Gs for suspendG  (Austin Clements)

This adds support for pausing a running G by sending a signal to its M.

The main complication is that we want to target a G, but can only send a signal to an M. Hence, the protocol we use is to simply mark the G for preemption (which we already do) and send the M a "wake up and look around" signal. The signal checks if it's running a G with a preemption request and stops it if so in the same way that stack check preemptions stop Gs. Since the preemption may fail (the G could be moved or the signal could arrive at an unsafe point), we keep a count of the number of received preemption signals. This lets stopG detect if its request failed and should be retried without an explicit channel back to suspendG.

For #10958, #24543.

Change-Id: I3e1538d5ea5200aeb434374abb5d5fdc56107e53
Reviewed-on: https://go-review.googlesource.com/c/go/+/201760
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>

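A sketch of the acknowledgment counter that lets the suspender detect a failed attempt; the shape follows the description above and the field name is an assumption:

    // Signal handler on the target M, after attempting (and possibly
    // failing) to stop gp at an async safe point:
    atomic.Xadd(&gp.m.preemptGen, 1) // acknowledge receipt of the signal

    // Suspender, around sending the signal:
    gen := atomic.Load(&gp.m.preemptGen)
    preemptM(gp.m)
    // ... later: if gp is still running but preemptGen has advanced
    // past gen, the request landed at an unsafe point; retry.
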
2019-11-02  runtime: scan stacks conservatively at async safe points  (Austin Clements)

This adds support for scanning the stack when a goroutine is stopped at an async safe point. This is not yet lit up because asyncPreempt is not yet injected, but prepares us for that.

This works by conservatively scanning the registers dumped in the frame of asyncPreempt and its parent frame, which was stopped at an asynchronous safe point.

Conservative scanning works by only marking words that are pointers to valid, allocated heap objects. One complication is pointers to stack objects. In this case, we can't determine if the stack object is still "allocated" or if it was freed by an earlier GC. Hence, we need to propagate the conservative-ness of scanning stack objects: if all pointers found to a stack object were found via conservative scanning, then the stack object itself needs to be scanned conservatively, since its pointers may point to dead objects.

For #10958, #24543.

Change-Id: I7ff84b058c37cde3de8a982da07002eaba126fd6
Reviewed-on: https://go-review.googlesource.com/c/go/+/201761
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>

2019-10-31  runtime: atomically set span state and use as publication barrier  (Austin Clements)

When everything is working correctly, any pointer the garbage collector encounters can only point into a fully initialized heap span, since the span must have been initialized before that pointer could escape the heap allocator and become visible to the GC.

However, in various cases, we try to be defensive against bad pointers. In findObject, this is just a sanity check: we never expect to find a bad pointer, but programming errors can lead to them. In spanOfHeap, we don't necessarily trust the pointer and we're trying to check if it really does point to the heap, though it should always point to something. Conservative scanning takes this to a new level, since it can only guess that a word may be a pointer and verify this.

In all of these cases, we have a problem that the span lookup and check can race with span initialization, since the span becomes visible to lookups before it's fully initialized.

Furthermore, we're about to start initializing the span without the heap lock held, which is going to introduce races where accesses were previously protected by the heap lock.

To address this, this CL makes accesses to mspan.state atomic, and ensures that the span is fully initialized before setting the state to mSpanInUse. All loads are now atomic, and in any case where we don't trust the pointer, it first atomically loads the span state and checks that it's mSpanInUse, after which it will have synchronized with span initialization and can safely check the other span fields.

For #10958, #24543, but a good fix in general.

Change-Id: I518b7c63555b02064b98aa5f802c92b758fef853
Reviewed-on: https://go-review.googlesource.com/c/go/+/203286
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>

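The pattern, sketched as a defensive lookup close to what spanOfHeap must do:

    // Sketch: p is untrusted, so check the atomically loaded state
    // before believing any other span field.
    s := spanOf(p)
    if s == nil || s.state.get() != mSpanInUse || p < s.base() || p >= s.limit {
        return nil // not a pointer into a fully initialized heap span
    }
    return s
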
2019-10-25  runtime: only shrink stacks at synchronous safe points  (Austin Clements)

We're about to introduce asynchronous safe points, where we won't have precise pointer maps for all stack frames. That's okay for scanning the stack (conservatively), but not for shrinking the stack.

Hence, this CL prepares for this by only shrinking the stack as part of the stack scan if the goroutine is stopped at a synchronous safe point. Otherwise, it queues up the stack shrink for the next synchronous safe point.

We already have one condition under which we can't shrink the stack for very similar reasons: syscalls. Currently, we just give up on shrinking the stack if it's in a syscall. But with this mechanism, we defer that stack shrink until the next synchronous safe point.

For #10958, #24543.

Change-Id: Ifa1dec6f33fdf30f9067be2ce3f7ab8a7f62ce38
Reviewed-on: https://go-review.googlesource.com/c/go/+/201438
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>

2019-10-25  runtime: remove g.gcscanvalid  (Austin Clements)

Currently, gcscanvalid is used to resolve a race between attempts to scan a stack. Now that there's a clear owner of the stack scan operation, there's no longer any danger of racing or attempting to scan a stack more than once, so this CL eliminates gcscanvalid.

I double-checked my reasoning by first adding a throw if gcscanvalid was set in scanstack and verifying that all.bash still passed.

For #10958, #24543.
Fixes #24363.

Change-Id: I76794a5fcda325ed7cfc2b545e2a839b8b3bc713
Reviewed-on: https://go-review.googlesource.com/c/go/+/201139
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>

2019-10-25  runtime: add general suspendG/resumeG  (Austin Clements)

Currently, the process of suspending a goroutine is tied to stack scanning. In preparation for non-cooperative preemption, this CL abstracts this into general-purpose suspendG/resumeG functions.

suspendG and resumeG closely follow the existing scang and restartg functions with one exception: the addition of a _Gpreempted status. Currently, preemption tasks (stack scanning) are carried out by the target goroutine if it's in _Grunning. In this new approach, the task is always carried out by the goroutine that called suspendG. Thus, we need a reliable way to drive the target goroutine out of _Grunning until the requesting goroutine is ready to resume it. The new _Gpreempted state provides the handshake: when a runnable goroutine responds to a preemption request, it now parks itself and enters _Gpreempted. The requesting goroutine races to put it in _Gwaiting, which gives it ownership, but also the responsibility to start it again.

This CL adds several TODOs about improving the synchronization on the G status. The existing code already has these problems; we're just taking note of them.

The next CL will remove the now-dead scang and preemptscan.

For #10958, #24543.

Change-Id: I16dbf87bea9d50399cc86719c156f48e67198f16
Reviewed-on: https://go-review.googlesource.com/c/go/+/201137
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>

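A sketch of the handshake's two sides, using the status names from the description above; the CAS helper name is an assumption:

    // Target goroutine, responding to the preemption request:
    casgstatus(gp, _Grunning, _Gpreempted)
    schedule() // park; the requester now owns restarting this G

    // Requesting goroutine, in suspendG: race to take ownership.
    if casGFromPreempted(gp, _Gpreempted, _Gwaiting) {
        // Success: gp stays parked until resumeG readies it.
    }
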
2019-09-20  runtime: avoid overflow in markrootBlock  (Ian Lance Taylor)

In a position independent executable the data or BSS may be located close to the end of memory. If it is placed closer than rootBlockBytes, then the calculations in markrootBlock would overflow, and the test that ensures that n is not larger than n0 would fail. This would then cause scanblock to scan data that it shouldn't, using an effectively random ptrmask, leading to program crashes.

No test because the only way to test it is to build a PIE and convince the kernel to put the data section near the end of memory, and I don't know how to do that. Or perhaps we could use a linker script, but that is painful.

The new code is algebraically identical to the original code, but avoids the potential overflow of b+rootBlockBytes.

Change-Id: Ieb4e5465174bb762b063d2491caeaa745017345e
Reviewed-on: https://go-review.googlesource.com/c/go/+/195717
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>

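The overflow-free form, sketched: bound the offset within the region first, so b+rootBlockBytes is never computed (ptrmask selection omitted).

    // markrootBlock (sketch): scan shard 'shard' of the root region
    // [b0, b0+n0) without computing an address past the region.
    off := uintptr(shard) * rootBlockBytes
    if off >= n0 {
        return
    }
    b := b0 + off
    n := uintptr(rootBlockBytes)
    if off+n > n0 {
        n = n0 - off // last, partial block
    }
    scanblock(b, n, ptrmask, gcw, nil)
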
2019-06-10Revert "Revert "cmd/compile,runtime: allocate defer records on the stack""Keith Randall
This reverts CL 180761 Reason for revert: Reinstate the stack-allocated defer CL. There was nothing wrong with the CL proper, but stack allocation of defers exposed two other issues. Issue #32477: Fix has been submitted as CL 181258. Issue #32498: Possible fix is CL 181377 (not submitted yet). Change-Id: I32b3365d5026600069291b068bbba6cb15295eb3 Reviewed-on: https://go-review.googlesource.com/c/go/+/181378 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-06-05Revert "cmd/compile,runtime: allocate defer records on the stack"Keith Randall
This reverts commit fff4f599fe1c21e411a99de5c9b3777d06ce0ce6. Reason for revert: Seems to still have issues around GC. Fixes #32452 Change-Id: Ibe7af629f9ad6a3d5312acd7b066123f484da7f0 Reviewed-on: https://go-review.googlesource.com/c/go/+/180761 Run-TryBot: Keith Randall <khr@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2019-06-04  cmd/compile,runtime: allocate defer records on the stack  (Keith Randall)

When a defer is executed at most once in a function body, we can allocate the defer record for it on the stack instead of on the heap. This should make defers like this (which are very common) faster.

This optimization applies to 363 out of the 370 static defer sites in the cmd/go binary.

    name     old time/op  new time/op  delta
    Defer-4  52.2ns ± 5%  36.2ns ± 3%  -30.70%  (p=0.000 n=10+10)

Fixes #6980
Update #14939

Change-Id: I697109dd7aeef9e97a9eeba2ef65ff53d3ee1004
Reviewed-on: https://go-review.googlesource.com/c/go/+/171758
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>

2019-05-13  runtime: resolve latent TODOs  (Tamir Duberstein)

These were added in https://go-review.googlesource.com/1224; according to austin@google.com these annotations are not valuable - resolving by removing the TODOs.

Change-Id: Icf3f21bc385cac9673ba29f0154680e970cf91f2
Reviewed-on: https://go-review.googlesource.com/c/go/+/176899
Reviewed-by: Austin Clements <austin@google.com>

2019-04-05  runtime: separate stack freeing from stack shrinking  (Austin Clements)

Currently, shrinkstack will free the stack if the goroutine is dead. There are only two places that call shrinkstack: scanstack, which will never call it if the goroutine is dead; and markrootFreeGStacks, which only calls it on dead goroutines.

Clean this up by separating stack freeing out of shrinkstack.

Change-Id: I7d7891e620550c32a2220833923a025704986681
Reviewed-on: https://go-review.googlesource.com/c/go/+/170890
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>

2019-03-01  runtime: scan defer closure in stack scan  (Cherry Zhang)

With stack objects, stack scanning scans defers with tracebackdefers, but it seems to me that tracebackdefers doesn't include the func value itself, which could be a stack-allocated closure. Scan it explicitly.

Alternatively, we can change tracebackdefers to include the func value, which in turn needs to change the type of stkframe.

Fixes #30453.

Change-Id: I55a6e43264d6952ab2fa5c638bebb89fdc410e2b
Reviewed-on: https://go-review.googlesource.com/c/164118
Reviewed-by: Keith Randall <khr@golang.org>

2019-02-13  runtime: scan gp._panic in stack scan  (Cherry Zhang)

In runtime.gopanic, the _panic object p is stack allocated and referenced from gp._panic. With stack objects, p on the stack is dead at the point preprintpanics runs. gp._panic points to p, but stack scan doesn't look at gp. Heap scan of gp does look at gp._panic, but it stops and ignores the pointer as it points to the stack. So whatever p points to may be collected and clobbered. We need to scan gp._panic explicitly during stack scan.

To test it reliably, we introduce a GODEBUG mode "clobberfree", which clobbers the memory content when the GC frees an object.

Fixes #30150.

Change-Id: I11128298f03a89f817faa221421a9d332b41dced
Reviewed-on: https://go-review.googlesource.com/c/161778
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>

2018-12-02all: use "reports whether" consistently instead of "returns whether"Tobias Klauser
Follow-up for CL 147037 and after Brad noticed the "returns whether" pattern during the review of CL 150621. Go documentation style for boolean funcs is to say: // Foo reports whether ... func Foo() bool (rather than "returns whether") Created with: $ perl -i -npe 's/returns whether/reports whether/' $(git grep -l "returns whether" | grep -v vendor) Change-Id: I15fe9ff99180ad97750cd05a10eceafdb12dc0b4 Reviewed-on: https://go-review.googlesource.com/c/150918 Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-11-15  runtime: mark span when marking any object on the span  (Austin Clements)

This adds a mark bit for each span that is set if any objects on the span are marked. This will be used for sweeping.

For #18155.

The impact of this is negligible for most benchmarks, and < 1% for GC-heavy benchmarks.

    name                       old time/op  new time/op  delta
    Garbage/benchmem-MB=64-12  2.18ms ± 0%  2.20ms ± 1%  +0.88%  (p=0.000 n=16+18)

(https://perf.golang.org/search?q=upload:20180928.1)

    name                      old time/op  new time/op  delta
    BinaryTree17-12           2.68s ± 1%   2.68s ± 1%   ~       (p=0.707 n=17+19)
    Fannkuch11-12             2.28s ± 0%   2.39s ± 0%   +4.95%  (p=0.000 n=19+18)
    FmtFprintfEmpty-12        40.3ns ± 4%  39.4ns ± 2%  -2.27%  (p=0.000 n=17+18)
    FmtFprintfString-12       67.9ns ± 1%  68.3ns ± 1%  +0.55%  (p=0.000 n=18+19)
    FmtFprintfInt-12          75.7ns ± 1%  76.1ns ± 1%  +0.44%  (p=0.005 n=18+19)
    FmtFprintfIntInt-12       123ns ± 1%   121ns ± 1%   -1.00%  (p=0.000 n=18+18)
    FmtFprintfPrefixedInt-12  150ns ± 0%   148ns ± 0%   -1.33%  (p=0.000 n=16+13)
    FmtFprintfFloat-12        208ns ± 0%   204ns ± 0%   -1.92%  (p=0.000 n=13+17)
    FmtManyArgs-12            501ns ± 1%   498ns ± 0%   -0.55%  (p=0.000 n=19+17)
    GobDecode-12              6.24ms ± 0%  6.25ms ± 1%  ~       (p=0.113 n=20+19)
    GobEncode-12              5.33ms ± 0%  5.29ms ± 1%  -0.72%  (p=0.000 n=20+18)
    Gzip-12                   220ms ± 1%   218ms ± 1%   -1.02%  (p=0.000 n=19+19)
    Gunzip-12                 35.5ms ± 0%  35.7ms ± 0%  +0.45%  (p=0.000 n=16+18)
    HTTPClientServer-12       77.9µs ± 1%  77.7µs ± 1%  -0.30%  (p=0.047 n=20+19)
    JSONEncode-12             8.82ms ± 0%  8.93ms ± 0%  +1.20%  (p=0.000 n=18+17)
    JSONDecode-12             47.3ms ± 0%  47.0ms ± 0%  -0.49%  (p=0.000 n=17+18)
    Mandelbrot200-12          3.69ms ± 0%  3.68ms ± 0%  -0.25%  (p=0.000 n=19+18)
    GoParse-12                3.13ms ± 1%  3.13ms ± 1%  ~       (p=0.640 n=20+20)
    RegexpMatchEasy0_32-12    76.2ns ± 1%  76.2ns ± 1%  ~       (p=0.818 n=20+19)
    RegexpMatchEasy0_1K-12    226ns ± 0%   226ns ± 0%   -0.22%  (p=0.001 n=17+18)
    RegexpMatchEasy1_32-12    71.9ns ± 1%  72.0ns ± 1%  ~       (p=0.653 n=18+18)
    RegexpMatchEasy1_1K-12    355ns ± 1%   356ns ± 1%   ~       (p=0.160 n=18+19)
    RegexpMatchMedium_32-12   106ns ± 1%   106ns ± 1%   ~       (p=0.325 n=17+20)
    RegexpMatchMedium_1K-12   31.1µs ± 2%  31.2µs ± 0%  +0.59%  (p=0.007 n=19+15)
    RegexpMatchHard_32-12     1.54µs ± 2%  1.53µs ± 2%  -0.78%  (p=0.021 n=17+18)
    RegexpMatchHard_1K-12     46.0µs ± 1%  45.9µs ± 1%  -0.31%  (p=0.025 n=17+19)
    Revcomp-12                391ms ± 1%   394ms ± 2%   +0.80%  (p=0.000 n=17+19)
    Template-12               59.9ms ± 1%  59.9ms ± 1%  ~       (p=0.428 n=20+19)
    TimeParse-12              304ns ± 1%   312ns ± 0%   +2.88%  (p=0.000 n=20+17)
    TimeFormat-12             318ns ± 0%   326ns ± 0%   +2.64%  (p=0.000 n=20+17)

(https://perf.golang.org/search?q=upload:20180928.2)

Change-Id: I336b9bf054113580a24103192904c8c76593e90e
Reviewed-on: https://go-review.googlesource.com/c/138958
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>

2018-11-05  runtime: clean up MSpan* MCache* MCentral* in docs  (Michael Anthony Knyszek)

This change cleans up references to MSpan, MCache, and MCentral in the docs via a bunch of sed invocations to better reflect the Go names for the equivalent structures (i.e. mspan, mcache, mcentral) and their methods (i.e. MSpan_Sweep -> mspan.sweep).

Change-Id: Ie911ac975a24bd25200a273086dd835ab78b1711
Reviewed-on: https://go-review.googlesource.com/c/147557
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>

2018-10-09  runtime: undo manual inlining of mbits.setMarked  (Austin Clements)

Since atomic.Or8 is now an intrinsic (and has been for some time), markBits.setMarked is inlinable. Undo the manual inlining of it.

Change-Id: I8e37ccf0851ad1d3088d9c8ae0f6f0c439d7eb2d
Reviewed-on: https://go-review.googlesource.com/c/138659
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>

2018-10-03  cmd/compile,runtime: implement stack objects  (Keith Randall)

Rework how the compiler+runtime handles stack-allocated variables whose address is taken.

Direct references to such variables work as before. References through pointers, however, use a new mechanism. The new mechanism is more precise than the old "ambiguously live" mechanism. It computes liveness at runtime based on the actual references among objects on the stack.

Each function records all of its address-taken objects in a FUNCDATA. These are called "stack objects". The runtime then uses that information while scanning a stack to find all of the stack objects on a stack. It then does a mark phase on the stack objects, using all the pointers found on the stack (and ancillary structures, like defer records) as the root set. Only stack objects which are found to be live during this mark phase will be scanned and thus retain any heap objects they point to.

A subsequent CL will remove all the "ambiguously live" logic from the compiler, so that the stack object tracing will be required. For this CL, the stack tracing is all redundant with the current ambiguously live logic.

Update #22350

Change-Id: Ide19f1f71a5b6ec8c4d54f8f66f0e9a98344772f
Reviewed-on: https://go-review.googlesource.com/c/134155
Reviewed-by: Austin Clements <austin@google.com>

2018-10-02  runtime: eliminate gchelper mechanism  (Austin Clements)

Now that we do no mark work during mark termination, we no longer need the gchelper mechanism.

Updates #26903.
Updates #17503.

Change-Id: Ie94e5c0f918cfa047e88cae1028fece106955c1b
Reviewed-on: https://go-review.googlesource.com/c/134785
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>

2018-10-02  runtime: eliminate work.markrootdone and second root marking pass  (Austin Clements)

Before STW and concurrent GC were unified, there could be either one or two root marking passes per GC cycle. There were several tasks we had to make sure happened once and only once (whether that was at the beginning of concurrent mark for concurrent GC or during mark termination for STW GC). We kept track of this in work.markrootdone.

Now that STW and concurrent GC both use the concurrent marking code and we've eliminated all work done by the second root marking pass, we only ever need a single root marking pass. Hence, we can eliminate work.markrootdone and all of the code that's conditional on it.

Updates #26903.

Change-Id: I654a0f5e21b9322279525560a31e64b8d33b790f
Reviewed-on: https://go-review.googlesource.com/c/134784
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>

2018-10-02  runtime: flush mcaches lazily  (Austin Clements)

Currently, all mcaches are flushed during STW mark termination as a root marking job. This is currently necessary because all spans must be out of these caches before sweeping begins to avoid races with allocation and to ensure the spans are in the state expected by sweeping. We do it as a root marking job because mcache flushing is somewhat expensive and O(GOMAXPROCS) and this parallelizes the work across the Ps. However, it's also the last remaining root marking job performed during mark termination.

This CL moves mcache flushing out of mark termination and performs it lazily. We keep track of the last sweepgen at which each mcache was flushed and, as each P is woken from STW, it observes that its mcache is out-of-date and flushes it.

This introduces a complication for spans cached in stale mcaches. These may now be observed by background or proportional sweeping or when attempting to add a finalizer, but aren't in a stable state. For example, they are likely to be on the wrong mcentral list. To fix this, this CL extends the sweepgen protocol to also capture whether a span is cached and, if so, whether or not its cache is stale. This protocol blocks asynchronous sweeping from touching cached spans and makes it the responsibility of mcache flushing to sweep the flushed spans.

This eliminates the last mark termination root marking job, which means we can now eliminate that entire infrastructure.

Updates #26903. This implements lazy mcache flushing.

Change-Id: Iadda7aabe540b2026cffc5195da7be37d5b4125e
Reviewed-on: https://go-review.googlesource.com/c/134783
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>

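A sketch of the lazy flush, keyed on a per-mcache generation; close to the mechanism described, with details assumed:

    // prepareForSweep flushes c if it wasn't flushed for this sweepgen.
    // Called as each P is taken out of the stopped world.
    func (c *mcache) prepareForSweep() {
        flushGen := atomic.Load(&c.flushGen)
        if flushGen == mheap_.sweepgen {
            return // already flushed for this cycle
        }
        c.releaseAll() // return cached spans, sweeping them as needed
        atomic.Store(&c.flushGen, mheap_.sweepgen)
    }
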
2018-10-02  runtime: eliminate blocking GC work drains  (Austin Clements)

Now work.helperDrainBlock is always false, so we can remove it and code paths that only ran when it was true. That means we no longer use the gcDrainBlock mode of gcDrain, so we can eliminate that. That means we no longer use gcWork.get, so we can eliminate that. That means we no longer use getfull, so we can eliminate that.

Updates #26903. This is a follow-up to unifying STW GC and concurrent GC.

Change-Id: I8dbcf8ce24861df0a6149e0b7c5cd0eadb5c13f6
Reviewed-on: https://go-review.googlesource.com/c/134782
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>

2018-10-02  runtime: remove GODEBUG=gcrescanstacks=1 mode  (Austin Clements)

Currently, setting GODEBUG=gcrescanstacks=1 enables a debugging mode where the garbage collector re-scans goroutine stacks during mark termination. This was introduced in Go 1.8 to debug the hybrid write barrier, but I don't think we ever used it.

Now it's one of the last sources of mark work during mark termination. This CL removes it.

Updates #26903. This is preparation for unifying STW GC and concurrent GC.
Updates #17503.

Change-Id: I6ae04d3738aa9c448e6e206e21857a33ecd12acf
Reviewed-on: https://go-review.googlesource.com/c/134777
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>

2018-10-02  runtime: don't disable GC work caching during mark termination  (Austin Clements)

Currently, we disable GC work caching during mark termination. This is no longer necessary with the new mark completion detection because:

1. There's no way for any of the GC mark termination helpers to have any real work queued and,

2. Mark termination has to explicitly flush every P's buffers anyway in order to flush Ps that didn't run a GC mark termination helper.

Hence, remove the code that disposes gcWork buffers during mark termination.

Updates #26903. This is a follow-up to eliminating mark 2.

Change-Id: I81f002ee25d5c10f42afd39767774636519007f9
Reviewed-on: https://go-review.googlesource.com/c/134320
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>

2018-10-02  runtime: eliminate gcBlackenPromptly mode  (Austin Clements)

Now that there is no mark 2 phase, gcBlackenPromptly is no longer used.

Updates #26903. This is a follow-up to eliminating mark 2.

Change-Id: Ib9c534f21b36b8416fcf3cab667f186167b827f8
Reviewed-on: https://go-review.googlesource.com/c/134319
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>

2018-10-02  runtime: flush write barrier buffer to create work  (Austin Clements)

Currently, if the gcWork runs out of work, we'll fall out of the GC worker, even though flushing the write barrier buffer could produce more work. While this is not a correctness issue, it can lead to premature mark 2 or mark termination.

Fix this by flushing the write barrier buffer if the local gcWork runs out of work and then checking the local gcWork again.

This reduces the number of premature mark terminations during all.bash by about a factor of 10.

Updates #26903. This is preparation for eliminating mark 2.

Change-Id: I48577220b90c86bfd28d498e8603bc379a8cd617
Reviewed-on: https://go-review.googlesource.com/c/134315
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>

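The retry, as a sketch inside the gcDrain work loop:

    b := gcw.tryGetFast()
    if b == 0 {
        b = gcw.tryGet()
        if b == 0 {
            // The local gcWork looks empty, but flushing the
            // write barrier buffer may refill it.
            wbBufFlush(nil, 0)
            b = gcw.tryGet()
        }
    }
    if b == 0 {
        break // genuinely out of work
    }
    scanobject(b, gcw)
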
2018-09-26  runtime: rename _MSpan* constants to mSpan*  (Austin Clements)

We already aliased mSpanInUse to _MSpanInUse. The dual constants are getting annoying, so fix all of these to use the mSpan* naming convention.

This was done automatically with:

    sed -i -re 's/_?MSpan(Dead|InUse|Manual|Free)/mSpan\1/g' *.go

plus deleting the existing definition of mSpanInUse.

Change-Id: I09979d9d491d06c10689cea625dc57faa9cc6767
Reviewed-on: https://go-review.googlesource.com/137875
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>

2018-08-20  runtime: use gList for injectglist  (Austin Clements)

Change-Id: Id5af75eaaf41f43bc6baa6d3fe2b852a2f93bb6f
Reviewed-on: https://go-review.googlesource.com/129400
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>

2018-08-20  runtime: use gList for gfree lists  (Austin Clements)

Change-Id: I3d21587e02264fe5da1cc38d98779facfa09b927
Reviewed-on: https://go-review.googlesource.com/129398
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>