path: root/src/runtime/export_test.go
author    Austin Clements <austin@google.com>    2019-10-23 11:25:38 -0400
committer Austin Clements <austin@google.com>    2019-10-31 17:09:50 +0000
commit    7de15e362b0bc4ba83c8ca4d7cadc319c99db65a (patch)
tree      ced17246a0d745f32f1dbee98338197b5f8c7f10 /src/runtime/export_test.go
parent    a9b37ae02604e03d2356b6143679d2a71bdd32a7 (diff)
runtime: atomically set span state and use as publication barrier
When everything is working correctly, any pointer the garbage collector encounters can only point into a fully initialized heap span, since the span must have been initialized before that pointer could escape the heap allocator and become visible to the GC.

However, in various cases, we try to be defensive against bad pointers. In findObject, this is just a sanity check: we never expect to find a bad pointer, but programming errors can lead to them. In spanOfHeap, we don't necessarily trust the pointer and we're trying to check if it really does point to the heap, though it should always point to something. Conservative scanning takes this to a new level, since it can only guess that a word may be a pointer and verify this.

In all of these cases, we have a problem that the span lookup and check can race with span initialization, since the span becomes visible to lookups before it's fully initialized.

Furthermore, we're about to start initializing the span without the heap lock held, which is going to introduce races where accesses were previously protected by the heap lock.

To address this, this CL makes accesses to mspan.state atomic, and ensures that the span is fully initialized before setting the state to mSpanInUse. All loads are now atomic, and in any case where we don't trust the pointer, it first atomically loads the span state and checks that it's mSpanInUse, after which it will have synchronized with span initialization and can safely check the other span fields.

For #10958, #24543, but a good fix in general.

Change-Id: I518b7c63555b02064b98aa5f802c92b758fef853
Reviewed-on: https://go-review.googlesource.com/c/go/+/203286
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
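The mechanism the message describes is a standard publication pattern: finish initializing every field, then atomically store the state, and have any defensive reader atomically load the state before trusting the other fields. Below is a minimal sketch of that pattern in ordinary Go, not the runtime's actual code; the names (span, publish, contains, pageSize) are made up for illustration.

// A minimal sketch of the publication pattern described above, not the
// runtime's actual code: the writer fully initializes the structure and
// only then atomically publishes its state; defensive readers atomically
// load the state and refuse to touch the other fields until they observe
// the "in use" value.
package main

import (
	"fmt"
	"sync/atomic"
)

const (
	stateDead  uint32 = iota // analogue of mSpanDead: not yet published
	stateInUse               // analogue of mSpanInUse: fully initialized
)

const pageSize = 8192 // assumed page size for this sketch

type span struct {
	startAddr uintptr
	npages    uintptr
	state     atomic.Uint32 // written last, read first
}

// publish initializes every field and then atomically stores the state.
// The atomic store is the publication barrier: any reader that observes
// stateInUse also observes the earlier field writes.
func (s *span) publish(start, npages uintptr) {
	s.startAddr = start
	s.npages = npages
	s.state.Store(stateInUse)
}

// contains is the defensive lookup: it checks the state atomically and
// only trusts startAddr/npages after seeing stateInUse, mirroring how
// the CL has lookups check s.state.get() before other span fields.
func (s *span) contains(p uintptr) bool {
	if s.state.Load() != stateInUse {
		return false // span may still be initializing; don't read other fields
	}
	return p >= s.startAddr && p < s.startAddr+s.npages*pageSize
}

func main() {
	s := &span{}
	go s.publish(0x10000, 1)         // concurrent initializer
	fmt.Println(s.contains(0x10100)) // false if not yet published, true otherwise
}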
Diffstat (limited to 'src/runtime/export_test.go')
-rw-r--r--  src/runtime/export_test.go | 6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/runtime/export_test.go b/src/runtime/export_test.go
index 0bd5c902e8..831f3f13d4 100644
--- a/src/runtime/export_test.go
+++ b/src/runtime/export_test.go
@@ -256,7 +256,7 @@ func CountPagesInUse() (pagesInUse, counted uintptr) {
 	pagesInUse = uintptr(mheap_.pagesInUse)
 
 	for _, s := range mheap_.allspans {
-		if s.state == mSpanInUse {
+		if s.state.get() == mSpanInUse {
 			counted += s.npages
 		}
 	}
@@ -318,7 +318,7 @@ func ReadMemStatsSlow() (base, slow MemStats) {
 
 		// Add up current allocations in spans.
 		for _, s := range mheap_.allspans {
-			if s.state != mSpanInUse {
+			if s.state.get() != mSpanInUse {
 				continue
 			}
 			if sizeclass := s.spanclass.sizeclass(); sizeclass == 0 {
@@ -542,7 +542,7 @@ func UnscavHugePagesSlow() (uintptr, uintptr) {
 		lock(&mheap_.lock)
 		base = mheap_.free.unscavHugePages
 		for _, s := range mheap_.allspans {
-			if s.state == mSpanFree && !s.scavenged {
+			if s.state.get() == mSpanFree && !s.scavenged {
 				slow += s.hugePages()
 			}
 		}
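The s.state.get() calls in the diff go through a small accessor type rather than a bare field, so every read and write of the state is forced through an atomic operation. A guessed sketch of what such a wrapper looks like, using sync/atomic for self-containment; the real runtime uses its internal atomic package, and the shapes below are assumed rather than copied from it.

package main

import "sync/atomic"

// mSpanState values; only the ones this sketch needs.
type mSpanState uint32

const (
	mSpanDead mSpanState = iota
	mSpanInUse
	mSpanFree
)

// mSpanStateBox wraps the state so every access goes through atomic
// get/set, which is what lets the set at the end of span initialization
// act as a publication barrier for the rest of the span's fields.
type mSpanStateBox struct {
	s atomic.Uint32
}

func (b *mSpanStateBox) get() mSpanState   { return mSpanState(b.s.Load()) }
func (b *mSpanStateBox) set(st mSpanState) { b.s.Store(uint32(st)) }

func main() {
	var box mSpanStateBox
	box.set(mSpanInUse)
	println(box.get() == mSpanInUse) // true
}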