From a43c0d2dc83bc4f5baa0a91be28078fa892e5111 Mon Sep 17 00:00:00 2001
From: Austin Clements
Date: Mon, 22 May 2017 15:53:49 -0400
Subject: [release-branch.go1.8] runtime: don't corrupt arena bounds on low
 mmap

Cherry-pick of CL 43870.

If mheap.sysAlloc doesn't have room in the heap arena for an
allocation, it will attempt to map more address space with sysReserve.
sysReserve is given a hint, but can return any unused address range.
Currently, mheap.sysAlloc incorrectly assumes the returned region will
never fall between arena_start and arena_used. If it does,
mheap.sysAlloc will blindly accept the new region as the new
arena_used and arena_end, causing these to decrease and make it so any
Go heap above the new arena_used is no longer considered part of the
Go heap. This assumption *used to be* safe because we had all memory
between arena_start and arena_used mapped, but when we switched to an
arena_start of 0 on 32-bit, it became no longer safe.

Most likely, we've only recently seen this bug occur because we
usually start arena_used just above the binary, which is low in the
address space. Hence, the kernel is very unlikely to give us a region
before arena_used.

Since mheap.sysAlloc is a linear allocator, there's not much we can do
to handle this well. Hence, we fix this problem by simply rejecting
the new region if it isn't after arena_end. In this case, we'll take
the fall-back path and mmap a small region at any address just for the
requested memory.

Fixes #20259.

Change-Id: Ib72e8cd621545002d595c7cade1e817cfe3e5b1e
Reviewed-on: https://go-review.googlesource.com/43954
Run-TryBot: Austin Clements
TryBot-Result: Gobot Gobot
Reviewed-by: Chris Broadfoot
---
 src/runtime/malloc.go | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/src/runtime/malloc.go b/src/runtime/malloc.go
index da39dac510..6f07731a49 100644
--- a/src/runtime/malloc.go
+++ b/src/runtime/malloc.go
@@ -400,10 +400,12 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 			if p == 0 {
 				return nil
 			}
+			// p can be just about anywhere in the address
+			// space, including before arena_end.
 			if p == h.arena_end {
 				h.arena_end = new_end
 				h.arena_reserved = reserved
-			} else if h.arena_start <= p && p+p_size-h.arena_start-1 <= _MaxArena32 {
+			} else if h.arena_end < p && p+p_size-h.arena_start-1 <= _MaxArena32 {
 				// Keep everything page-aligned.
 				// Our pages are bigger than hardware pages.
 				h.arena_end = p + p_size
@@ -413,6 +415,16 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 				h.arena_used = used
 				h.arena_reserved = reserved
 			} else {
+				// We got a mapping, but it's not
+				// linear with our current arena, so
+				// we can't use it.
+				//
+				// TODO: Make it possible to allocate
+				// from this. We can't decrease
+				// arena_used, but we could introduce
+				// a new variable for the current
+				// allocation position.
+
 				// We haven't added this allocation to
 				// the stats, so subtract it from a
 				// fake stat (but avoid underflow).
--
cgit v1.2.3-54-g00ecf
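
For context, a minimal standalone sketch of the bounds check this patch
changes. The type, constant, and function names below are hypothetical
illustrations, not the runtime's actual code; only the two boolean
conditions mirror the before/after state of the `else if` in the diff.

	package main

	import "fmt"

	// arena models the three mheap fields the patch reasons about.
	// All concrete values below are made up for illustration.
	type arena struct {
		start, used, end uintptr
	}

	const maxArena32 = 1<<32 - 1 // stand-in for _MaxArena32

	// acceptOld models the pre-patch condition: any mapping at or
	// above arena_start passes, even one below arena_used, which
	// lets arena_used/arena_end decrease and orphan existing heap.
	func acceptOld(a arena, p, size uintptr) bool {
		return a.start <= p && p+size-a.start-1 <= maxArena32
	}

	// acceptNew models the patched condition: the mapping must lie
	// strictly above arena_end, so the arena bounds can only grow.
	func acceptNew(a arena, p, size uintptr) bool {
		return a.end < p && p+size-a.start-1 <= maxArena32
	}

	func main() {
		// arena_start of 0, as on 32-bit per the commit message.
		a := arena{start: 0, used: 0x10000000, end: 0x20000000}

		// sysReserve ignores the hint and returns a low region,
		// below the current arena_used.
		p, size := uintptr(0x00400000), uintptr(0x4000000)

		fmt.Println(acceptOld(a, p, size)) // true: bounds would shrink (the bug)
		fmt.Println(acceptNew(a, p, size)) // false: rejected; fall-back mmap path
	}

Running this prints true then false: the old check would adopt the low
region and corrupt the arena bounds, while the patched check rejects it
so sysAlloc falls back to mapping a small region just for the request.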