author	Austin Clements <austin@google.com>	2017-05-22 15:53:49 -0400
committer	Chris Broadfoot <cbro@golang.org>	2017-05-23 19:42:57 +0000
commit	a43c0d2dc83bc4f5baa0a91be28078fa892e5111 (patch)
tree	27da72f24352bfb3b5bf324c0a18a420e460b076
parent	1054085dcf88ce802e6aa45078f9d7f3abf5b85d (diff)
[release-branch.go1.8] runtime: don't corrupt arena bounds on low mmap
Cherry-pick of CL 43870.

If mheap.sysAlloc doesn't have room in the heap arena for an
allocation, it will attempt to map more address space with sysReserve.
sysReserve is given a hint, but can return any unused address range.
Currently, mheap.sysAlloc incorrectly assumes the returned region will
never fall between arena_start and arena_used. If it does,
mheap.sysAlloc will blindly accept the new region as the new
arena_used and arena_end, causing these to decrease and making any Go
heap above the new arena_used no longer considered part of the Go
heap.

This assumption *used to be* safe because we had all memory between
arena_start and arena_used mapped, but when we switched to an
arena_start of 0 on 32-bit, it became no longer safe.

Most likely, we've only recently seen this bug occur because we
usually start arena_used just above the binary, which is low in the
address space. Hence, the kernel is very unlikely to give us a region
before arena_used.

Since mheap.sysAlloc is a linear allocator, there's not much we can do
to handle this well. Hence, we fix this problem by simply rejecting
the new region if it isn't after arena_end. In this case, we'll take
the fall-back path and mmap a small region at any address just for
the requested memory.

Fixes #20259.

Change-Id: Ib72e8cd621545002d595c7cade1e817cfe3e5b1e
Reviewed-on: https://go-review.googlesource.com/43954
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Chris Broadfoot <cbro@golang.org>
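The heart of the fix is the changed else-if condition: a region returned by
sysReserve is only folded into the arena if it lies strictly beyond
arena_end. The standalone Go sketch below is an illustration of that
decision, not the runtime code itself; the classify helper, the maxArena32
constant, and the example addresses are invented here for demonstration.

package main

import "fmt"

// maxArena32 stands in for the runtime's _MaxArena32 limit on the
// 32-bit arena size; the exact value here is only illustrative.
const maxArena32 = 2<<30 - 1

// classify reports how a region [p, p+size) returned by the OS would
// be handled relative to the current arena [arenaStart, arenaEnd).
func classify(p, size, arenaStart, arenaEnd uintptr) string {
	switch {
	case p == arenaEnd:
		// The kernel honored the hint: the arena grows contiguously.
		return "extend arena contiguously"
	case arenaEnd < p && p+size-arenaStart-1 <= maxArena32:
		// The fixed check: only regions strictly past arena_end are
		// accepted, so arena_used and arena_end can never move backwards.
		return "grow arena forward to include the region"
	default:
		// Anything else, including a region between arena_start and
		// arena_used that the old check wrongly accepted, is rejected.
		return "reject region, fall back to a standalone mmap"
	}
}

func main() {
	const (
		arenaStart uintptr = 0x10000000
		arenaEnd   uintptr = 0x20000000
	)
	// A region the kernel placed below arena_end: the old code could have
	// shrunk arena_used/arena_end here; the fixed code rejects it.
	fmt.Println(classify(0x12000000, 1<<20, arenaStart, arenaEnd))
	// A region past arena_end is still accepted, as before.
	fmt.Println(classify(0x28000000, 1<<20, arenaStart, arenaEnd))
}

Because mheap.sysAlloc is a linear allocator, only growth past arena_end
keeps the region from arena_start to arena_used contiguous; any other
placement is rejected and the caller falls back to a small standalone mmap
for just the requested memory.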
-rw-r--r--	src/runtime/malloc.go	14
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/src/runtime/malloc.go b/src/runtime/malloc.go
index da39dac510..6f07731a49 100644
--- a/src/runtime/malloc.go
+++ b/src/runtime/malloc.go
@@ -400,10 +400,12 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 			if p == 0 {
 				return nil
 			}
+			// p can be just about anywhere in the address
+			// space, including before arena_end.
 			if p == h.arena_end {
 				h.arena_end = new_end
 				h.arena_reserved = reserved
-			} else if h.arena_start <= p && p+p_size-h.arena_start-1 <= _MaxArena32 {
+			} else if h.arena_end < p && p+p_size-h.arena_start-1 <= _MaxArena32 {
 				// Keep everything page-aligned.
 				// Our pages are bigger than hardware pages.
 				h.arena_end = p + p_size
@@ -413,6 +415,16 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 				h.arena_used = used
 				h.arena_reserved = reserved
 			} else {
+				// We got a mapping, but it's not
+				// linear with our current arena, so
+				// we can't use it.
+				//
+				// TODO: Make it possible to allocate
+				// from this. We can't decrease
+				// arena_used, but we could introduce
+				// a new variable for the current
+				// allocation position.
+
 				// We haven't added this allocation to
 				// the stats, so subtract it from a
 				// fake stat (but avoid underflow).