| field | value | date |
|---|---|---|
| author | Michael Pratt <mpratt@google.com> | 2020-12-23 15:05:37 -0500 |
| committer | Michael Pratt <mpratt@google.com> | 2021-03-05 22:09:52 +0000 |
| commit | d85083911d6ea742901933a544467dad55bb381f (patch) | |
| tree | f8f826d01c3ff45af5908c8a1d71befe4439f97c /src/runtime/traceback.go | |
| parent | 39bdd41d03725878f1fd6f8b500ba6700f03bdad (diff) | |
| download | go-d85083911d6ea742901933a544467dad55bb381f.tar.gz go-d85083911d6ea742901933a544467dad55bb381f.zip | |
runtime: encapsulate access to allgs
Correctly accessing allgs is a bit hairy. Some paths need to lock
allglock, some don't. Those that don't are safest using atomicAllG, but
usage is not consistent.
Rather than doing this ad-hoc, move all access* through forEachG /
forEachGRace, the locking and atomic versions, respectively. This will
make it easier to ensure safe access.
* markroot is the only exception, as it has a far-removed guarantee of
safe access via an atomic load of allglen far before actual use.
Change-Id: Ie1c7a8243e155ae2b4bc3143577380c695680e89
Reviewed-on: https://go-review.googlesource.com/c/go/+/279994
Trust: Michael Pratt <mpratt@google.com>
Run-TryBot: Michael Pratt <mpratt@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Diffstat (limited to 'src/runtime/traceback.go')
| -rw-r--r-- | src/runtime/traceback.go | 21 |
1 file changed, 9 insertions(+), 12 deletions(-)
```diff
diff --git a/src/runtime/traceback.go b/src/runtime/traceback.go
index 53eb6898482..f8cda83098b 100644
--- a/src/runtime/traceback.go
+++ b/src/runtime/traceback.go
@@ -945,19 +945,16 @@ func tracebackothers(me *g) {
 		traceback(^uintptr(0), ^uintptr(0), 0, curgp)
 	}
 
-	// We can't take allglock here because this may be during fatal
-	// throw/panic, where locking allglock could be out-of-order or a
-	// direct deadlock.
+	// We can't call locking forEachG here because this may be during fatal
+	// throw/panic, where locking could be out-of-order or a direct
+	// deadlock.
 	//
-	// Instead, use atomic access to allgs which requires no locking. We
-	// don't lock against concurrent creation of new Gs, but even with
-	// allglock we may miss Gs created after this loop.
-	ptr, length := atomicAllG()
-	for i := uintptr(0); i < length; i++ {
-		gp := atomicAllGIndex(ptr, i)
-
+	// Instead, use forEachGRace, which requires no locking. We don't lock
+	// against concurrent creation of new Gs, but even with allglock we may
+	// miss Gs created after this loop.
+	forEachGRace(func(gp *g) {
 		if gp == me || gp == curgp || readgstatus(gp) == _Gdead || isSystemGoroutine(gp, false) && level < 2 {
-			continue
+			return
 		}
 		print("\n")
 		goroutineheader(gp)
@@ -971,7 +968,7 @@ func tracebackothers(me *g) {
 		} else {
 			traceback(^uintptr(0), ^uintptr(0), 0, gp)
 		}
-	}
+	})
 }
 
 // tracebackHexdump hexdumps part of stk around frame.sp and frame.fp
```