author     Austin Clements <austin@google.com>    2018-01-15 12:27:17 -0500
committer  Austin Clements <austin@google.com>    2018-02-13 16:34:46 +0000
commit     20101894078199a3a9014ca99ec4e2a0a16a0869 (patch)
tree       8e7ff5c0fcc37243fcd39ba23ea2976503712f63 /src/runtime/mwbbuf.go
parent     245310883dcae717bb662b22d5b1fd07fdd59b76 (diff)
runtime: remove legacy eager write barrier
Now that the buffered write barrier is implemented for all
architectures, we can remove the old eager write barrier
implementation. This CL removes the implementation from the runtime,
support in the compiler for calling it, and updates some compiler
tests that relied on the old eager barrier support. It also makes
sure that all of the useful comments from the old write barrier
implementation still have a place to live.

Fixes #22460.

Updates #21640 since this fixes the layering concerns of the write
barrier (but not the other things in that issue).

Change-Id: I580f93c152e89607e0a72fe43370237ba97bae74
Reviewed-on: https://go-review.googlesource.com/92705
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Diffstat (limited to 'src/runtime/mwbbuf.go')
-rw-r--r--  src/runtime/mwbbuf.go  22
1 file changed, 19 insertions(+), 3 deletions(-)
diff --git a/src/runtime/mwbbuf.go b/src/runtime/mwbbuf.go
index 4a2d1ad988..c5619ed3fb 100644
--- a/src/runtime/mwbbuf.go
+++ b/src/runtime/mwbbuf.go
@@ -5,6 +5,9 @@
// This implements the write barrier buffer. The write barrier itself
// is gcWriteBarrier and is implemented in assembly.
//
+// See mbarrier.go for algorithmic details on the write barrier. This
+// file deals only with the buffer.
+//
// The write barrier has a fast path and a slow path. The fast path
// simply enqueues to a per-P write barrier buffer. It's written in
// assembly and doesn't clobber any general purpose registers, so it
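
To make the buffer structure behind this fast path concrete, here is a
self-contained toy model in Go. The field names follow the runtime's
wbBuf, but the standalone types, the 512-slot capacity, and main are
illustrative assumptions rather than the actual declarations; the real
fast path is the assembly gcWriteBarrier, which performs the same
bump-and-compare.

    package main

    import "unsafe"

    const ptrSize = unsafe.Sizeof(uintptr(0))

    // wbBuf is a toy model of the per-P write barrier buffer: a fixed
    // array of pointer slots plus next/end cursors, so the fast path
    // is just a bump-allocate and a compare.
    type wbBuf struct {
            next uintptr // address of the next free slot in buf
            end  uintptr // address just past the last slot in buf
            buf  [512]uintptr
    }

    // reset empties the buffer by pointing next back at its start.
    func (b *wbBuf) reset() {
            start := uintptr(unsafe.Pointer(&b.buf[0]))
            b.next = start
            b.end = start + uintptr(len(b.buf))*ptrSize
    }

    // putFast records an old/new pointer pair and reports whether the
    // buffer still has room. On false the caller must flush before
    // the next put, mirroring the protocol documented for the real
    // putFast.
    func (b *wbBuf) putFast(old, new uintptr) bool {
            p := (*[2]uintptr)(unsafe.Pointer(b.next))
            p[0] = old
            p[1] = new
            b.next += 2 * ptrSize
            return b.next != b.end
    }

    func main() {
            var b wbBuf
            b.reset()
            var x, y int
            for b.putFast(uintptr(unsafe.Pointer(&x)), uintptr(unsafe.Pointer(&y))) {
                    // Keep filling until the buffer reports full.
            }
            println("buffer full; a real barrier would call wbBufFlush here")
    }
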
@@ -111,16 +114,21 @@ func (b *wbBuf) discard() {
// if !buf.putFast(old, new) {
// wbBufFlush(...)
// }
+// ... actual memory write ...
//
// The arguments to wbBufFlush depend on whether the caller is doing
// its own cgo pointer checks. If it is, then this can be
// wbBufFlush(nil, 0). Otherwise, it must pass the slot address and
// new.
//
-// Since buf is a per-P resource, the caller must ensure there are no
-// preemption points while buf is in use.
+// The caller must ensure there are no preemption points during the
+// above sequence. There must be no preemption points while buf is in
+// use because it is a per-P resource. There must be no preemption
+// points between the buffer put and the write to memory because this
+// could allow a GC phase change, which could result in missed write
+// barriers.
//
-// It must be nowritebarrierrec because write barriers here would
+// putFast must be nowritebarrierrec because write barriers here would
// corrupt the write barrier buffer. It (and everything it calls, if
// it called anything) has to be nosplit to avoid scheduling on to a
// different P and a different buffer.
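
The comment above prescribes a strict ordering: buffer the pair, flush
on overflow, then perform the real store, with no preemption point
anywhere in between. Below is a hedged sketch of that ordering,
continuing the toy model above; writePointer and flush are illustrative
stand-ins for the compiler-generated barrier and wbBufFlush, not
runtime APIs.

    // flush stands in for wbBufFlush/wbBufFlush1: the real version
    // shades every pointer in the buffer before resetting it; this
    // toy only resets.
    func (b *wbBuf) flush() {
            // ... shade the buffered entries into the GC work queue ...
            b.reset()
    }

    // writePointer models the documented sequence. The real sequence
    // is emitted by the compiler and runs with no preemption point
    // between putFast and the store, so a GC phase change cannot slip
    // in and cause a missed write barrier.
    func writePointer(b *wbBuf, slot *uintptr, new uintptr) {
            if !b.putFast(*slot, new) {
                    b.flush()
            }
            *slot = new // the actual memory write comes last
    }
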
@@ -214,6 +222,14 @@ func wbBufFlush1(_p_ *p) {
//
// TODO: Should scanobject/scanblock just stuff pointers into
// the wbBuf? Then this would become the sole greying path.
+ //
+ // TODO: We could avoid shading any of the "new" pointers in
+ // the buffer if the stack has been shaded, or even avoid
+ // putting them in the buffer at all (which would double its
+ // capacity). This is slightly complicated with the buffer; we
+ // could track whether any un-shaded goroutine has used the
+ // buffer, or just track globally whether there are any
+ // un-shaded stacks and flush after each stack scan.
gcw := &_p_.gcw
pos := 0
arenaStart := mheap_.arena_start
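
The loop this hunk leads into filters the buffered pointers and greys
the survivors. Here is a hedged, self-contained sketch of that pattern
in the same toy model; marked and enqueue are stand-ins for the
runtime's mark bits and gcWork.putBatch.

    // greyBatch models the shape of wbBufFlush1's main loop: drop
    // values outside the heap arena (nil is very common, especially
    // for "old" values), skip objects that are already marked, and
    // compact the survivors in place so they can be handed to the GC
    // work buffer as a single batch.
    func greyBatch(ptrs []uintptr, arenaStart, arenaEnd uintptr,
            marked map[uintptr]bool, enqueue func([]uintptr)) {
            pos := 0
            for _, p := range ptrs {
                    if p < arenaStart || p >= arenaEnd {
                            continue // obvious non-heap pointer; filter ASAP
                    }
                    if marked[p] {
                            continue // already greyed or blackened
                    }
                    marked[p] = true
                    ptrs[pos] = p // compact in place
                    pos++
            }
            enqueue(ptrs[:pos]) // grey the whole batch at once
    }
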