author	Austin Clements <austin@google.com>	2019-07-18 12:30:37 -0400
committer	Austin Clements <austin@google.com>	2019-07-19 15:04:08 +0000
commit	5b15510d96b00662327fbd3eb860d767834dfadc (patch)
tree	70b68e7a3112a6ea381f2fbfb406bf3b32c06949 /src/runtime/malloc.go
parent	ba3149612f62c011765876d7a437095fa50e0771 (diff)
runtime: align allocations harder in GODEBUG=sbrk=1 mode
Currently, GODEBUG=sbrk=1 mode aligns allocations by their type's
alignment. You would think this would be the right thing to do, but
because 64-bit fields are only 4-byte aligned right now (see #599),
this can cause a 64-bit field of an allocated object to be 4-byte
aligned, but not 8-byte aligned. If there is an atomic access to that
unaligned 64-bit field, it will crash.
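To illustrate the failure mode, here is a minimal sketch (not part of the patch): 64-bit atomic operations require the address to be 8-byte aligned, so an address that is only 4-byte aligned passes the 32-bit check but would fault. The helper name `aligned8` is hypothetical, for illustration only.

```go
package main

import "fmt"

// aligned8 reports whether addr satisfies the 8-byte alignment that
// 64-bit atomic accesses require. (Hypothetical helper; the runtime
// performs this kind of check in its alignment fault paths.)
func aligned8(addr uintptr) bool {
	return addr&7 == 0
}

func main() {
	// A 64-bit field at a 4-byte-aligned (but not 8-byte-aligned)
	// address is exactly the case that crashes under sbrk=1 on 386.
	fmt.Println(aligned8(0x1004)) // 4-byte aligned only
	fmt.Println(aligned8(0x1008)) // 8-byte aligned
}
```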
This doesn't happen in normal allocation mode because the
size-segregated allocation and the current size classes will cause any
types larger than 8 bytes to be 8-byte aligned.
We fix this by making sbrk=1 mode use alignment based on the type's
size rather than its declared alignment. This matches how the tiny
allocator aligns allocations.
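The size-based rule can be sketched as a standalone function (in the actual patch the logic is written inline in mallocgc, so the function name here is illustrative): pick the largest power-of-two alignment up to 8 that evenly divides the allocation size.

```go
package main

import "fmt"

// alignForSize mirrors the sbrk=1 fix: derive alignment from the
// allocation size rather than the type's declared alignment, the same
// rule the runtime's tiny allocator follows. (Illustrative name; the
// real change is inline in mallocgc.)
func alignForSize(size uintptr) uintptr {
	switch {
	case size&7 == 0:
		return 8 // size is a multiple of 8
	case size&3 == 0:
		return 4 // multiple of 4 but not 8
	case size&1 == 0:
		return 2 // multiple of 2 but not 4
	default:
		return 1 // odd size
	}
}

func main() {
	for _, size := range []uintptr{24, 12, 6, 7} {
		fmt.Printf("size %2d -> align %d\n", size, alignForSize(size))
	}
}
```

Note that any type containing a uint64 has a size that is a multiple of 8, so it gets 8-byte alignment under this rule even on 32-bit platforms where typ.align would say 4.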
This was tested with

    GOARCH=386 GODEBUG=sbrk=1 go test sync/atomic
This crashes with an unaligned access before this change, and passes
with this change.
This should be reverted when/if we fix #599.
Fixes #33159.
Change-Id: Ifc52c72c6b99c5d370476685271baa43ad907565
Reviewed-on: https://go-review.googlesource.com/c/go/+/186919
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Diffstat (limited to 'src/runtime/malloc.go')
 src/runtime/malloc.go | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/src/runtime/malloc.go b/src/runtime/malloc.go
index 98c028944f..8ad7035d94 100644
--- a/src/runtime/malloc.go
+++ b/src/runtime/malloc.go
@@ -866,7 +866,22 @@ func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer {
 	if debug.sbrk != 0 {
 		align := uintptr(16)
 		if typ != nil {
-			align = uintptr(typ.align)
+			// TODO(austin): This should be just
+			//   align = uintptr(typ.align)
+			// but that's only 4 on 32-bit platforms,
+			// even if there's a uint64 field in typ (see #599).
+			// This causes 64-bit atomic accesses to panic.
+			// Hence, we use stricter alignment that matches
+			// the normal allocator better.
+			if size&7 == 0 {
+				align = 8
+			} else if size&3 == 0 {
+				align = 4
+			} else if size&1 == 0 {
+				align = 2
+			} else {
+				align = 1
+			}
 		}
 		return persistentalloc(size, align, &memstats.other_sys)
 	}