path: root/src/runtime/chan.go
author    Austin Clements <austin@google.com>  2016-08-22 16:02:54 -0400
committer Austin Clements <austin@google.com>  2016-10-28 20:47:52 +0000
commit    8f81dfe8b47e975b90bb4a2f8dd314d32c633176 (patch)
tree      68a6ba2f5e1e212ff4a6bcdb5d68848ee299cbab /src/runtime/chan.go
parent    0f06d0a051714d14b923b0a9164ab1b3f463aa74 (diff)
runtime: perform write barrier before pointer write
Currently, we perform write barriers after performing pointer writes. At the moment, it simply doesn't matter what order this happens in, as long as they appear atomic to GC. But both the hybrid barrier and ROC are going to require a pre-write write barrier.

For the hybrid barrier, this is important because the barrier needs to observe both the current value of the slot and the value that will be written to it. (Alternatively, the caller could do the write and pass in the old value, but it seems easier and more useful to just swap the order of the barrier and the write.)

For ROC, this is necessary because, if the pointer write is going to make the pointer reachable to some goroutine that it currently is not visible to, the garbage collector must take some special action before that pointer becomes more broadly visible.

This commit swaps pointer writes around so the write barrier occurs before the pointer write.

The main subtlety here is bulk memory writes. Currently, these copy to the destination first and then use the pointer bitmap of the destination to find the copied pointers and invoke the write barrier. This is necessary because the source may not have a pointer bitmap. To handle these, we pass both the source and the destination to the bulk memory barrier, which uses the pointer bitmap of the destination, but reads the pointer values from the source.

Updates #17503.

Change-Id: I78ecc0c5c94ee81c29019c305b3d232069294a55
Reviewed-on: https://go-review.googlesource.com/31763
Reviewed-by: Rick Hudson <rlh@golang.org>
Diffstat (limited to 'src/runtime/chan.go')
-rw-r--r--  src/runtime/chan.go | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/runtime/chan.go b/src/runtime/chan.go
index ac81cc74dc..3cddfe372e 100644
--- a/src/runtime/chan.go
+++ b/src/runtime/chan.go
@@ -294,7 +294,7 @@ func sendDirect(t *_type, sg *sudog, src unsafe.Pointer) {
// stack writes only happen when the goroutine is running and are
// only done by that goroutine. Using a write barrier is sufficient to
// make up for violating that assumption, but the write barrier has to work.
- // typedmemmove will call heapBitsBulkBarrier, but the target bytes
+ // typedmemmove will call bulkBarrierPreWrite, but the target bytes
// are not in the heap, so that will not help. We arrange to call
// memmove and typeBitsBulkBarrier instead.
@@ -302,8 +302,8 @@ func sendDirect(t *_type, sg *sudog, src unsafe.Pointer) {
// be updated if the destination's stack gets copied (shrunk).
// So make sure that no preemption points can happen between read & use.
dst := sg.elem
+ typeBitsBulkBarrier(t, uintptr(dst), uintptr(src), t.size)
memmove(dst, src, t.size)
- typeBitsBulkBarrier(t, uintptr(dst), t.size)
}
func closechan(c *hchan) {