commit    31c4e099158b0e4999c05ee4daf08531f6640ad4
author    Michael Anthony Knyszek <mknyszek@google.com>  2019-04-18 15:42:58 +0000
committer Michael Knyszek <mknyszek@google.com>  2019-05-06 21:15:01 +0000
tree      1b7653c83e615dd9bfccd812e970a67bad7c2a5e  /src/runtime/mem_linux.go
parent    5c15ed64deaf71dd3b84470f3de8aae0b667d6ef
runtime: ensure free and unscavenged spans may be backed by huge pages
This change adds a new sysHugePage function to provide the equivalent of
Linux's madvise(MADV_HUGEPAGE) support to the runtime. It then uses
sysHugePage to mark a newly-coalesced free span as backable by huge
pages to make the freeHugePages approximation a bit more accurate.
The problem being solved here is that a large free span may be composed
of many small spans that were coalesced together, and there's a chance
that some of them had madvise(MADV_NOHUGEPAGE) called on them at some
point, which makes freeHugePages less accurate.
For #30333.
Change-Id: Idd4b02567619fc8d45647d9abd18da42f96f0522
Reviewed-on: https://go-review.googlesource.com/c/go/+/173338
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
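
For readers outside the runtime: the MADV_NOHUGEPAGE/MADV_HUGEPAGE pairing
described above can be exercised from ordinary Go code via
golang.org/x/sys/unix. The following is a minimal illustrative sketch, not
the runtime's own code (the runtime issues the syscall through its internal
madvise wrapper and cannot depend on x/sys); the 4 MiB mapping size is an
assumed value.

// Sketch: mark an anonymous mapping NOHUGEPAGE (as sysUnused does for
// released spans), then undo it with HUGEPAGE (as sysHugePage does for
// a coalesced free span). Linux-only; illustrative, not runtime code.
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	const size = 4 << 20 // assumed 4 MiB mapping, spanning several 2 MiB huge pages

	// Map anonymous memory, loosely standing in for a heap span.
	mem, err := unix.Mmap(-1, 0, size,
		unix.PROT_READ|unix.PROT_WRITE,
		unix.MAP_ANON|unix.MAP_PRIVATE)
	if err != nil {
		log.Fatalf("mmap: %v", err)
	}
	defer unix.Munmap(mem)

	// sysUnused applies MADV_NOHUGEPAGE when pages are released...
	if err := unix.Madvise(mem, unix.MADV_NOHUGEPAGE); err != nil {
		log.Fatalf("madvise(MADV_NOHUGEPAGE): %v", err)
	}
	// ...and the new sysHugePage applies MADV_HUGEPAGE to make the
	// range eligible for transparent huge pages again.
	if err := unix.Madvise(mem, unix.MADV_HUGEPAGE); err != nil {
		log.Fatalf("madvise(MADV_HUGEPAGE): %v", err)
	}
	fmt.Println("range is huge-page-backable again")
}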
Diffstat (limited to 'src/runtime/mem_linux.go')
-rw-r--r--  src/runtime/mem_linux.go  |  21 ++++++++++++---------
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/src/runtime/mem_linux.go b/src/runtime/mem_linux.go
index bf399227a1..cda2c78eaf 100644
--- a/src/runtime/mem_linux.go
+++ b/src/runtime/mem_linux.go
@@ -117,16 +117,19 @@ func sysUnused(v unsafe.Pointer, n uintptr) {
 }
 
 func sysUsed(v unsafe.Pointer, n uintptr) {
-	if physHugePageSize != 0 {
-		// Partially undo the NOHUGEPAGE marks from sysUnused
-		// for whole huge pages between v and v+n. This may
-		// leave huge pages off at the end points v and v+n
-		// even though allocations may cover these entire huge
-		// pages. We could detect this and undo NOHUGEPAGE on
-		// the end points as well, but it's probably not worth
-		// the cost because when neighboring allocations are
-		// freed sysUnused will just set NOHUGEPAGE again.
+	// Partially undo the NOHUGEPAGE marks from sysUnused
+	// for whole huge pages between v and v+n. This may
+	// leave huge pages off at the end points v and v+n
+	// even though allocations may cover these entire huge
+	// pages. We could detect this and undo NOHUGEPAGE on
+	// the end points as well, but it's probably not worth
+	// the cost because when neighboring allocations are
+	// freed sysUnused will just set NOHUGEPAGE again.
+	sysHugePage(v, n)
+}
 
+func sysHugePage(v unsafe.Pointer, n uintptr) {
+	if physHugePageSize != 0 {
 		// Round v up to a huge page boundary.
 		beg := (uintptr(v) + (physHugePageSize - 1)) &^ (physHugePageSize - 1)
 		// Round v+n down to a huge page boundary.
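
The hunk's trailing context shows the start of sysHugePage's rounding
arithmetic: v is rounded up and v+n rounded down to huge page boundaries, so
only huge pages lying entirely within [v, v+n) get re-marked. A standalone
sketch of that arithmetic, with an assumed 2 MiB huge page size and made-up
addresses:

package main

import "fmt"

func main() {
	const physHugePageSize uintptr = 2 << 20 // assumed 2 MiB huge pages

	// Hypothetical free span: starts 1 MiB into a huge page, 7 MiB long.
	v := uintptr(0x100000)
	n := uintptr(7 << 20)

	// Round v up to a huge page boundary (as in the hunk above).
	beg := (v + (physHugePageSize - 1)) &^ (physHugePageSize - 1)
	// Round v+n down to a huge page boundary.
	end := (v + n) &^ (physHugePageSize - 1)

	// Only [beg, end), the whole huge pages inside the span, would be
	// passed to madvise(MADV_HUGEPAGE); the partial page at the start
	// is skipped.
	fmt.Printf("span:   [%#x, %#x)\n", v, v+n)
	fmt.Printf("marked: [%#x, %#x), %d huge pages\n",
		beg, end, (end-beg)/physHugePageSize)
}

Leaving the partial end pages alone matches the comment carried over in the
diff: it isn't worth re-enabling them, because freeing a neighboring
allocation would have sysUnused set NOHUGEPAGE on them again.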