path: root/src/runtime/mstats.go
Age | Commit message | Author
2021-04-27 | runtime/metrics: add tiny allocs metric | Michael Anthony Knyszek
Currently tiny allocations are not directly represented in either MemStats or runtime/metrics; they are only counted indirectly in MemStats via Mallocs. Add them to runtime/metrics by first merging memstats.tinyallocs into consistentHeapStats (just for simplicity; it's monotonic so metrics would still be self-consistent if we just read it atomically) and then adding /gc/heap/tiny/allocs:objects to the list of supported metrics. Change-Id: Ie478006ab942a3e877b4a79065ffa43569722f3d Reviewed-on: https://go-review.googlesource.com/c/go/+/312909 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
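On a toolchain that includes this change, the new metric can be read through the public runtime/metrics API; a minimal sketch (the metric name is the one added above):

    package main

    import (
        "fmt"
        "runtime/metrics"
    )

    func main() {
        // Sample the metric added by this change; on an older
        // toolchain the sample's Kind would be KindBad instead.
        s := []metrics.Sample{{Name: "/gc/heap/tiny/allocs:objects"}}
        metrics.Read(s)
        if s[0].Value.Kind() == metrics.KindUint64 {
            fmt.Println("tiny allocs:", s[0].Value.Uint64())
        }
    }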
2021-04-14 | runtime: move next_gc and last_next_gc into gcControllerState | Michael Anthony Knyszek
This change moves next_gc and last_next_gc into gcControllerState under the names heapGoal and lastHeapGoal respectively. These are fundamentally GC pacer related values, and so it makes sense for them to live here. Partially generated by rf ' ex . { memstats.next_gc -> gcController.heapGoal memstats.last_next_gc -> gcController.lastHeapGoal } ' except for updates to comments and gcControllerState methods, where they're accessed through the receiver, and trace-related renames of NextGC -> HeapGoal, while we're here. For #44167. Change-Id: I1e871ad78a57b01be8d9f71bd662530c84853bed Reviewed-on: https://go-review.googlesource.com/c/go/+/306603 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2021-04-13 | runtime: move internal GC statistics from memstats to gcController | Michael Anthony Knyszek
This change moves certain important but internal-only GC statistics from memstats into gcController. These statistics are mainly used in pacing the GC, so it makes sense to keep them in the pacer's state. This CL was mostly generated via rf ' ex . { memstats.gc_trigger -> gcController.trigger memstats.triggerRatio -> gcController.triggerRatio memstats.heap_marked -> gcController.heapMarked memstats.heap_live -> gcController.heapLive memstats.heap_scan -> gcController.heapScan } ' except for a few special cases, like updating names in comments and when these fields are used within gcControllerState methods (at which point they're accessed through the receiver). For #44167. Change-Id: I6bd1602585aeeb80818ded24c07d8e6fec992b93 Reviewed-on: https://go-review.googlesource.com/c/go/+/306598 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-11-02 | runtime: decouple consistent stats from mcache and allow P-less update | Michael Anthony Knyszek
This change modifies the consistent stats implementation to keep the per-P sequence counter on each P instead of each mcache. A valid mcache is not available everywhere that we want to call e.g. allocSpan, as per issue #42339. By decoupling these two, we can add a mechanism to allow contexts without a P to update stats consistently. In this CL, we achieve that with a mutex. In practice, it will be very rare for an M to update these stats without a P. Furthermore, the stats reader also only needs to hold the mutex across the update to "gen" since once that changes, writers are free to continue updating the new stats generation. Contention could thus only arise between writers without a P, and as mentioned earlier, those should be rare. A nice side-effect of this change is that the consistent stats acquire and release API becomes simpler. Fixes #42339. Change-Id: Ied74ab256f69abd54b550394c8ad7c4c40a5fe34 Reviewed-on: https://go-review.googlesource.com/c/go/+/267158 Run-TryBot: Michael Knyszek <mknyszek@google.com> Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-30 | runtime: add world-stopped assertions | Michael Pratt
Stopping the world is an implicit lock for many operations, so we should assert the world is stopped in functions that require it. This is enabled along with the rest of lock ranking, though it is a bit orthogonal and likely cheap enough to enable all the time should we choose. Requiring a lock _or_ world stop is common, so that can be expressed as well. Updates #40677 Change-Id: If0a58544f4251d367f73c4120c9d39974c6cd091 Reviewed-on: https://go-review.googlesource.com/c/go/+/248577 Run-TryBot: Michael Pratt <mpratt@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com> Trust: Michael Pratt <mpratt@google.com>
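A rough user-level sketch of the assertion pattern this change introduces (the real assertWorldStopped is internal to the runtime and is wired into lock ranking; all names here are illustrative):

    package main

    import "sync/atomic"

    // Treat "the world is stopped" as an implicit lock and assert it in
    // functions that rely on it. Illustrative stand-in for the runtime's
    // internal machinery.
    var worldStopped atomic.Bool

    func assertWorldStopped() {
        if !worldStopped.Load() {
            panic("assertion failed: world not stopped")
        }
    }

    // updateStatsSTW mutates state that is only safe to touch during STW.
    func updateStatsSTW() {
        assertWorldStopped()
        // ... mutate world-stopped-only state ...
    }

    func main() {
        worldStopped.Store(true) // stand-in for stopTheWorld
        updateStatsSTW()
        worldStopped.Store(false) // stand-in for startTheWorld
    }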
2020-10-26 | runtime,runtime/metrics: add metric for distribution of GC pauses | Michael Anthony Knyszek
For #37112. Change-Id: Ibb0425c9c582ae3da3b2662d5bbe830d7df9079c Reviewed-on: https://go-review.googlesource.com/c/go/+/247047 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 | runtime: make sysMemStats' methods nosplit | Michael Anthony Knyszek
sysMemStats are updated early on in runtime initialization, so triggering a stack growth would be bad. Mark them nosplit. Thank you so much to cherryyz@google.com for finding this fix! Fixes #42218. Change-Id: Ic62db76e6a4f829355d7eaabed1727c51adfbd0f Reviewed-on: https://go-review.googlesource.com/c/go/+/265157 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com> Reviewed-by: Cherry Zhang <cherryyz@google.com> Reviewed-by: Austin Clements <austin@google.com> TryBot-Result: Go Bot <gobot@golang.org>
2020-10-26 | runtime,runtime/metrics: add memory metrics | Michael Anthony Knyszek
This change adds support for a variety of runtime memory metrics and contains the base implementation of Read for the runtime/metrics package, which lives in the runtime. It also adds testing infrastructure for the metrics package, and a bunch of format and documentation tests. For #37112. Change-Id: I16a2c4781eeeb2de0abcb045c15105f1210e2d8a Reviewed-on: https://go-review.googlesource.com/c/go/+/247041 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com> Trust: Michael Knyszek <mknyszek@google.com>
2020-10-26 | runtime: move malloc stats into consistentHeapStats | Michael Anthony Knyszek
This change moves the mcache-local malloc stats into the consistentHeapStats structure so the malloc stats can be managed consistently with the memory stats. The one exception here is tinyAllocs, for which moving into the global stats would incur several atomic writes on the fast path. Microbenchmarks for just one CPU core have shown a 50% loss in throughput. Since the tiny allocation count isn't exposed anyway and is always blindly added to both allocs and frees, let that stay inconsistent and flush the tiny allocation count every so often. Change-Id: I2a4b75f209c0e659b9c0db081a3287bf227c10ca Reviewed-on: https://go-review.googlesource.com/c/go/+/247039 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 | runtime: replace some memstats with consistent stats | Michael Anthony Knyszek
This change replaces stacks_inuse, gcWorkBufInUse and gcProgPtrScalarBitsInUse with their corresponding consistent stats. It also adds checks to make sure the rest of the sharded stats line up with existing stats in updatememstats. Change-Id: I17d0bd181aedb5c55e09c8dff18cef5b2a3a14e3 Reviewed-on: https://go-review.googlesource.com/c/go/+/247038 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 | runtime: add consistent heap statistics | Michael Anthony Knyszek
This change adds a global set of heap statistics which are similar to existing memory statistics. The purpose of these new statistics is to be able to read them and get a consistent result without stopping the world. The goal is to eventually replace as many of the existing memstats statistics with the sharded ones as possible. The consistent memory statistics use a tailor-made synchronization mechanism to allow writers (allocators) to proceed with minimal synchronization by using a sequence counter and a global generation counter to determine which set of statistics to update. Readers increment the global generation counter to effectively grab a snapshot of the statistics, and then iterate over all Ps using the sequence counter to ensure that they may safely read the snapshotted statistics. To keep statistics fresh, the reader also has a responsibility to merge sets of statistics. These consistent statistics are computed, but otherwise unused for now. Upcoming changes will integrate them with the rest of the codebase and will begin to phase out existing statistics. Change-Id: I637a11f2439e2049d7dccb8650c5d82500733ca5 Reviewed-on: https://go-review.googlesource.com/c/go/+/247037 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
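The protocol described above, sketched at user level (a greatly simplified model; the runtime's consistentHeapStats has additional memory-ordering and P-handling details that are omitted here, and all names are illustrative):

    package main

    import "sync/atomic"

    // Three stat sets indexed by a global generation counter, a per-P
    // sequence counter that is odd while a writer is active, and a
    // reader that rotates the generation and then waits for in-flight
    // writers to drain.
    type heapStatsDelta struct{ allocs, frees uint64 }

    type perP struct {
        seq atomic.Uint32 // odd while this P is writing stats
    }

    type consistentStats struct {
        gen   atomic.Uint32 // which of the three stat sets writers update
        stats [3]heapStatsDelta
    }

    // acquire marks the P as writing and picks the current generation's set.
    func (c *consistentStats) acquire(p *perP) *heapStatsDelta {
        p.seq.Add(1)
        return &c.stats[c.gen.Load()%3]
    }

    // release marks the write as complete.
    func (c *consistentStats) release(p *perP) {
        p.seq.Add(1)
    }

    // read snapshots the previous generation once no writer can still
    // be updating it.
    func (c *consistentStats) read(ps []*perP, out *heapStatsDelta) {
        old := c.gen.Load()
        c.gen.Store((old + 1) % 3) // writers now target the next set
        for _, p := range ps {
            for p.seq.Load()%2 != 0 {
                // spin: a writer that started before the rotation may
                // still be updating the old set
            }
        }
        *out = c.stats[old%3]
    }

    func main() {
        var c consistentStats
        p := &perP{}
        d := c.acquire(p)
        d.allocs++
        c.release(p)
        var snap heapStatsDelta
        c.read([]*perP{p}, &snap)
        _ = snap
    }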
2020-10-26 | runtime: remove memstats.heap_alloc | Michael Anthony Knyszek
memstats.heap_alloc is 100% a duplicate and unnecessary copy of memstats.alloc which exists because MemStats used to be populated from memstats via a memmove. Change-Id: I995489f61be39786e573b8494a8ab6d4ea8bed9c Reviewed-on: https://go-review.googlesource.com/c/go/+/246975 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 | runtime: remove memstats.heap_idle | Michael Anthony Knyszek
This statistic is updated in many places but for MemStats may be computed from existing statistics. Specifically by definition heap_idle = heap_sys - heap_inuse since heap_sys is all memory allocated from the OS for use in the heap minus memory used for non-heap purposes. heap_idle is almost the same (since it explicitly includes memory that *could* be used for non-heap purposes) but also doesn't include memory that's actually used to hold heap objects. Although it has some utility as a sanity check, it complicates accounting and we want fewer, orthogonal statistics for upcoming metrics changes, so just drop it. Change-Id: I40af54a38e335f43249f6e218f35088bfd4380d1 Reviewed-on: https://go-review.googlesource.com/c/go/+/246974 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
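The identity is observable through the exported API; assuming a toolchain with this change, the following prints true:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        // Per the definition above, idle heap is derived rather than
        // tracked: heap_idle = heap_sys - heap_inuse.
        fmt.Println(m.HeapIdle == m.HeapSys-m.HeapInuse) // expect true
    }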
2020-10-26 | runtime: break down memstats.gc_sys | Michael Anthony Knyszek
This change breaks apart gc_sys into three distinct pieces. Two of those pieces are pieces which come from heap_sys since they're allocated from the page heap. The rest comes from memory mapped from e.g. persistentalloc which better fits the purpose of a sysMemStat. Also, rename gc_sys to gcMiscSys. Change-Id: I098789170052511e7b31edbcdc9a53e5c24573f7 Reviewed-on: https://go-review.googlesource.com/c/go/+/246973 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 | runtime: copy in MemStats fields explicitly | Michael Anthony Knyszek
Currently MemStats is populated via an unsafe memmove from memstats, but this places unnecessary structural restrictions on memstats, is annoying to reason about, and tightly couples the two. Instead, just populate the fields of MemStats explicitly. Change-Id: I96f6a64326b1a91d4084e7b30169a4bbe6a331f9 Reviewed-on: https://go-review.googlesource.com/c/go/+/246972 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 | runtime: delineate which memstats are system stats with a type | Michael Anthony Knyszek
This change modifies the type of several mstats fields to be a new type: sysMemStat. This type has the same structure as the fields used to have. The purpose of this change is to make it very clear which stats may be used in various functions for accounting (usually the platform-specific sys* functions, but there are others). Currently there's an implicit understanding that the *uint64 value passed to these functions is some kind of statistic whose value is atomically managed. This understanding isn't inherently problematic, but we're about to change how some stats (which currently use mSysStatInc and mSysStatDec) work, so we want to make it very clear what the various requirements are around "sysStat". This change also removes mSysStatInc and mSysStatDec in favor of a method on sysMemStat. Note that those two functions were originally written the way they were because atomic 64-bit adds required a valid G on ARM, but this hasn't been the case for a very long time (since golang.org/cl/14204, but even before then it wasn't clear if mutexes required a valid G anymore). Today we implement 64-bit adds on ARM with a spinlock table. Change-Id: I4e9b37cf14afc2ae20cf736e874eb0064af086d7 Reviewed-on: https://go-review.googlesource.com/c/go/+/246971 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Trust: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com>
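A sketch of the idea using the public sync/atomic package (the runtime's sysMemStat is nosplit and uses runtime-internal atomics; this user-level model is illustrative):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // sysMemStat is a statistic whose value is always manipulated
    // atomically, replacing bare *uint64 fields plus the old
    // mSysStatInc/mSysStatDec helpers.
    type sysMemStat uint64

    func (s *sysMemStat) load() uint64 {
        return atomic.LoadUint64((*uint64)(s))
    }

    func (s *sysMemStat) add(n int64) {
        // Two's-complement wraparound makes this work for negative n.
        atomic.AddUint64((*uint64)(s), uint64(n))
    }

    func main() {
        var stacksSys sysMemStat
        stacksSys.add(4096)
        stacksSys.add(-1024)
        fmt.Println(stacksSys.load()) // 3072
    }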
2020-10-26 | runtime: rename mcache fields to match Go style | Michael Anthony Knyszek
This change renames a bunch of malloc statistics stored in the mcache that are all named with the "local_" prefix. It also renames largeAlloc to allocLarge to prevent a naming conflict, and next_sample because it would be the last mcache field with the old C naming style. Change-Id: I29695cb83b397a435ede7e9ad5c3c9be72767ea3 Reviewed-on: https://go-review.googlesource.com/c/go/+/246969 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 | runtime: flush local_scan directly and more often | Michael Anthony Knyszek
Now that local_scan is the last mcache-based statistic that is flushed by purgecachedstats, and heap_scan and gcController.revise may be interacted with concurrently, we don't need to flush heap_scan at arbitrary locations where the heap is locked, and we don't need purgecachedstats and cachestats anymore. Instead, we can flush local_scan at the same time we update heap_live in refill, so the two updates may share the same revise call. Clean up unused functions, remove code that would cause the heap to get locked in the allocSpan when it didn't need to (other than to flush local_scan), and flush local_scan explicitly in a few important places. Notably we need to flush local_scan whenever we flush the other stats, but it doesn't need to be donated anywhere, so have releaseAll do the flushing. Also, we need to flush local_scan before we set heap_scan at the end of a GC, which was previously handled by cachestats. Just do so explicitly -- it's not much code and it becomes a lot more clear why we need to do so. Change-Id: I35ac081784df7744d515479896a41d530653692d Reviewed-on: https://go-review.googlesource.com/c/go/+/246968 Run-TryBot: Michael Knyszek <mknyszek@google.com> Trust: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 | runtime: don't flush local_tinyallocs | Michael Anthony Knyszek
This change makes local_tinyallocs work like the rest of the malloc stats and doesn't flush local_tinyallocs, instead making that the source-of-truth. Change-Id: I3e6cb5f1b3d086e432ce7d456895511a48e3617a Reviewed-on: https://go-review.googlesource.com/c/go/+/246967 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 | runtime: remove mcentral.nmalloc and add mcache.local_nsmallalloc | Michael Anthony Knyszek
This change removes mcentral.nmalloc and adds mcache.local_nsmallalloc which fulfills the same role but may be accessed non-atomically. It also moves responsibility for updating heap_live and local_nsmallalloc into mcache functions. As a result of this change, mcache is now the sole source-of-truth for malloc stats. It is also solely responsible for updating heap_live and performing the various operations required as a result of updating heap_live. The overall improvement here is in code organization: previously malloc stats were fairly scattered, and now they have one single home, and nearly all the required manipulations exist in a single file. Change-Id: I7e93fa297c1debf17e3f2a0d68aeed28a9c6af00 Reviewed-on: https://go-review.googlesource.com/c/go/+/246966 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 | runtime: make nlargealloc and largealloc mcache fields | Michael Anthony Knyszek
This change makes nlargealloc and largealloc into mcache fields just like nlargefree and largefree. These local fields become the new source-of-truth. This change also moves the accounting for these fields out of allocSpan (which is an inappropriate place for it -- this accounting generally happens much closer to the point of allocation) and into largeAlloc. This move is partially possible now that we can call gcController.revise at that point. Furthermore, this change moves largeAlloc into mcache.go and makes it a method of mcache. While there's a little bit of a mismatch here because largeAlloc barely interacts with the mcache, it helps solidify the mcache as the first allocation layer and provides a clear place to aggregate and manage statistics. Change-Id: I37b5e648710733bb4c04430b71e96700e438587a Reviewed-on: https://go-review.googlesource.com/c/go/+/246965 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 | runtime: make distributed/local malloc stats the source-of-truth | Michael Anthony Knyszek
This change makes it so that various local malloc stats (excluding heap_scan and local_tinyallocs) are no longer written first to mheap fields but are instead accessed directly from each mcache. This change is part of a move toward having stats be distributed, and cleaning up some old code related to the stats. Note that because there's no central source-of-truth, when an mcache dies, it must donate its stats to another mcache. It's always safe to donate to the mcache for the 0th P, so do that. Change-Id: I2556093dbc27357cb9621c9b97671f3c00aa1173 Reviewed-on: https://go-review.googlesource.com/c/go/+/246964 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 | runtime: make next_gc atomically accessed | Michael Anthony Knyszek
next_gc is mostly updated only during a STW, but may occasionally be updated by calls to e.g. debug.SetGCPercent. In this case the update is supposed to be protected by the heap lock, but in reality it's accessed by gcController.revise which may be called without the heap lock held (despite its documentation, which will be updated in a later change). Change the synchronization policy on next_gc so that it's atomically accessed when the world is not stopped to aid in making revise safe for concurrent use. Change-Id: I79657a72f91563f3241aaeda66e8a7757d399529 Reviewed-on: https://go-review.googlesource.com/c/go/+/246962 Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
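Sketched with illustrative names, the resulting policy looks like this: atomics for any access that can happen while the world is running (none of these identifiers are runtime API):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // A pacing goal written under STW or the heap lock but read
    // concurrently (e.g. by revise), so non-STW accesses go through
    // atomics.
    var gcGoal atomic.Uint64

    func setGCGoal(v uint64) { // e.g. in response to debug.SetGCPercent
        gcGoal.Store(v)
    }

    func reviseGoal() uint64 { // may run without the heap lock held
        return gcGoal.Load()
    }

    func main() {
        setGCGoal(4 << 20)
        fmt.Println(reviseGoal())
    }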
2020-10-26 | runtime: define and enforce synchronization on heap_scan | Michael Anthony Knyszek
Currently heap_scan is mostly protected by the heap lock, but gcControllerState.revise sometimes accesses it without a lock. In an effort to make gcControllerState.revise callable from more contexts (and have its synchronization guarantees actually respected), make heap_scan atomically read from and written to, unless the world is stopped. Note that we don't update gcControllerState.revise's erroneous doc comment here because this change isn't about revise's guarantees, just about heap_scan. The comment is updated in a later change. Change-Id: Iddbbeb954767c704c2bd1d221f36e6c4fc9948a6 Reviewed-on: https://go-review.googlesource.com/c/go/+/246960 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Go Bot <gobot@golang.org> Trust: Emmanuel Odeke <emmanuel@orijtech.com> Reviewed-by: Michael Pratt <mpratt@google.com>
2020-09-14 | runtime: add 24 byte allocation size class | Martin Möhrmann
This CL introduces a 24 byte allocation size class which fits 3 pointers on 64 bit and 6 pointers on 32 bit architectures. Notably this new size class fits a slice header on 64 bit architectures exactly, while previously a 32 byte size class would have been used for allocating a slice header on the heap. The main complexity added with this CL is that heapBitsSetType needs to handle objects that aren't 16-byte aligned but contain more than a single pointer on 64-bit architectures. Due to having a non 16 byte aligned size class on 32 bit, an h.shift of 2 is now possible, which means a heap bitmap byte might only be partially written. Due to this already having been possible on 64 bit before, the heap bitmap code only needed minor adjustments for 32 bit doublecheck code paths. Note that this CL changes the slice capacity allocated by append for slice growth to a target capacity of 17 to 24 bytes. On 64 bit architectures the capacity of the slice returned by append([]byte{}, make([]byte, 24)...) is 32 bytes before and 24 bytes after this CL. Depending on allocation patterns of the specific Go program this can increase the number of total allocations, as subsequent appends to the slice can trigger slice growth earlier than before. On the other side, if the slice is never appended to again above its capacity, this will lower heap usage by 8 bytes. This CL changes the set of size classes reported in the runtime.MemStats.BySize array due to it being limited to a total of 61 size classes. The new 24 byte size class is now included and the 20480 byte size class is not included anymore. Fixes #8885

name old time/op new time/op delta
Template 196ms ± 3% 194ms ± 2% ~ (p=0.247 n=10+10)
Unicode 85.6ms ±16% 88.1ms ± 1% ~ (p=0.165 n=10+10)
GoTypes 673ms ± 2% 668ms ± 2% ~ (p=0.258 n=9+9)
Compiler 3.14s ± 6% 3.08s ± 1% ~ (p=0.243 n=10+9)
SSA 6.82s ± 1% 6.76s ± 1% -0.87% (p=0.006 n=9+10)
Flate 128ms ± 7% 127ms ± 3% ~ (p=0.739 n=10+10)
GoParser 154ms ± 3% 153ms ± 4% ~ (p=0.730 n=9+9)
Reflect 404ms ± 1% 412ms ± 4% +1.99% (p=0.022 n=9+10)
Tar 172ms ± 4% 170ms ± 4% ~ (p=0.065 n=10+9)
XML 231ms ± 4% 230ms ± 3% ~ (p=0.912 n=10+10)
LinkCompiler 341ms ± 1% 339ms ± 1% ~ (p=0.243 n=9+10)
ExternalLinkCompiler 1.72s ± 1% 1.72s ± 1% ~ (p=0.661 n=9+10)
LinkWithoutDebugCompiler 221ms ± 2% 221ms ± 2% ~ (p=0.529 n=10+10)
StdCmd 18.4s ± 3% 18.2s ± 1% ~ (p=0.515 n=10+8)

name old user-time/op new user-time/op delta
Template 238ms ± 4% 243ms ± 6% ~ (p=0.661 n=9+10)
Unicode 116ms ± 6% 113ms ± 3% -3.37% (p=0.035 n=9+10)
GoTypes 854ms ± 2% 848ms ± 2% ~ (p=0.604 n=9+10)
Compiler 4.10s ± 1% 4.11s ± 1% ~ (p=0.481 n=8+9)
SSA 9.49s ± 1% 9.41s ± 1% -0.92% (p=0.001 n=9+10)
Flate 149ms ± 6% 151ms ± 7% ~ (p=0.481 n=10+10)
GoParser 189ms ± 2% 190ms ± 2% ~ (p=0.497 n=9+10)
Reflect 511ms ± 2% 508ms ± 2% ~ (p=0.211 n=9+10)
Tar 215ms ± 4% 212ms ± 3% ~ (p=0.105 n=10+10)
XML 288ms ± 2% 288ms ± 2% ~ (p=0.971 n=10+10)
LinkCompiler 559ms ± 4% 557ms ± 1% ~ (p=0.968 n=9+10)
ExternalLinkCompiler 1.78s ± 1% 1.77s ± 1% ~ (p=0.055 n=8+10)
LinkWithoutDebugCompiler 245ms ± 3% 245ms ± 2% ~ (p=0.684 n=10+10)

name old alloc/op new alloc/op delta
Template 34.8MB ± 0% 34.4MB ± 0% -0.95% (p=0.000 n=9+10)
Unicode 28.6MB ± 0% 28.3MB ± 0% -0.95% (p=0.000 n=10+10)
GoTypes 115MB ± 0% 114MB ± 0% -1.02% (p=0.000 n=10+9)
Compiler 554MB ± 0% 549MB ± 0% -0.86% (p=0.000 n=9+10)
SSA 1.28GB ± 0% 1.27GB ± 0% -0.83% (p=0.000 n=10+10)
Flate 21.8MB ± 0% 21.6MB ± 0% -0.87% (p=0.000 n=8+10)
GoParser 26.7MB ± 0% 26.4MB ± 0% -0.97% (p=0.000 n=10+9)
Reflect 75.0MB ± 0% 74.1MB ± 0% -1.18% (p=0.000 n=10+10)
Tar 32.6MB ± 0% 32.3MB ± 0% -0.94% (p=0.000 n=10+7)
XML 41.5MB ± 0% 41.2MB ± 0% -0.90% (p=0.000 n=10+8)
LinkCompiler 105MB ± 0% 104MB ± 0% -0.94% (p=0.000 n=10+10)
ExternalLinkCompiler 153MB ± 0% 152MB ± 0% -0.69% (p=0.000 n=10+10)
LinkWithoutDebugCompiler 63.7MB ± 0% 63.6MB ± 0% -0.13% (p=0.000 n=10+10)

name old allocs/op new allocs/op delta
Template 336k ± 0% 336k ± 0% +0.02% (p=0.002 n=10+10)
Unicode 332k ± 0% 332k ± 0% ~ (p=0.447 n=10+10)
GoTypes 1.16M ± 0% 1.16M ± 0% +0.01% (p=0.001 n=10+10)
Compiler 4.92M ± 0% 4.92M ± 0% +0.01% (p=0.000 n=10+10)
SSA 11.9M ± 0% 11.9M ± 0% +0.02% (p=0.000 n=9+10)
Flate 214k ± 0% 214k ± 0% +0.02% (p=0.032 n=10+8)
GoParser 270k ± 0% 270k ± 0% +0.02% (p=0.004 n=10+9)
Reflect 877k ± 0% 877k ± 0% +0.01% (p=0.000 n=10+10)
Tar 313k ± 0% 313k ± 0% ~ (p=0.075 n=9+10)
XML 387k ± 0% 387k ± 0% +0.02% (p=0.007 n=10+10)
LinkCompiler 455k ± 0% 456k ± 0% +0.08% (p=0.000 n=10+9)
ExternalLinkCompiler 670k ± 0% 671k ± 0% +0.06% (p=0.000 n=10+10)
LinkWithoutDebugCompiler 113k ± 0% 113k ± 0% ~ (p=0.149 n=10+10)

name old maxRSS/op new maxRSS/op delta
Template 34.1M ± 1% 34.1M ± 1% ~ (p=0.853 n=10+10)
Unicode 35.1M ± 1% 34.6M ± 1% -1.43% (p=0.000 n=10+10)
GoTypes 72.8M ± 3% 73.3M ± 2% ~ (p=0.724 n=10+10)
Compiler 288M ± 3% 295M ± 4% ~ (p=0.393 n=10+10)
SSA 630M ± 1% 622M ± 1% -1.18% (p=0.001 n=10+10)
Flate 26.0M ± 1% 26.2M ± 2% ~ (p=0.493 n=10+10)
GoParser 28.6M ± 1% 28.5M ± 2% ~ (p=0.256 n=10+10)
Reflect 55.5M ± 2% 55.4M ± 1% ~ (p=0.436 n=10+10)
Tar 33.0M ± 1% 32.8M ± 2% ~ (p=0.075 n=10+10)
XML 38.7M ± 1% 39.0M ± 1% ~ (p=0.053 n=9+10)
LinkCompiler 164M ± 1% 164M ± 1% -0.27% (p=0.029 n=10+10)
ExternalLinkCompiler 174M ± 0% 173M ± 0% -0.33% (p=0.002 n=9+10)
LinkWithoutDebugCompiler 137M ± 0% 136M ± 2% ~ (p=0.825 n=9+10)

Change-Id: I9ecf2a10024513abef8fbfbe519e44e0b29b6167 Reviewed-on: https://go-review.googlesource.com/c/go/+/242258 Trust: Martin Möhrmann <moehrmann@google.com> Trust: Michael Knyszek <mknyszek@google.com> Run-TryBot: Martin Möhrmann <martisch@uos.de> TryBot-Result: Go Bot <gobot@golang.org> Reviewed-by: Michael Knyszek <mknyszek@google.com> Reviewed-by: Keith Randall <khr@golang.org>
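The append behavior described above can be checked directly with a tiny program:

    package main

    import "fmt"

    func main() {
        // With the 24 byte size class, this slice lands exactly in the
        // new class on 64-bit architectures.
        s := append([]byte{}, make([]byte, 24)...)
        fmt.Println(cap(s)) // 24 after this change on 64-bit; 32 before
    }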
2020-06-09 | runtime: fix typo in comment (object -> objects) | Brad Fitzpatrick
Change-Id: I2af1f9dcd1a9609681e58ab07e73e6d7a5f8a12b Reviewed-on: https://go-review.googlesource.com/c/go/+/237160 Reviewed-by: Ian Lance Taylor <iant@golang.org>
2020-04-28 | runtime: flush mcaches to mcentral before reading memstats | Michael Anthony Knyszek
Currently mcaches are flushed to mcentral after a bunch of memstats have already been read. This is not safe (in the sense that it doesn't ensure consistent memstats) since memstats may in general change when mcentral data structures are manipulated. Note that prior to the new mcentral implementation this was not a problem because mcentral operations happened to never modify certain memstats. As of the new mcentral implementation, we might for example persistentalloc when uncaching a span, which would change memstats. This can cause a skew between the value of sys (which currently is calculated before mcaches are flushed) and the value of gc_sys and other_sys. Fix this by moving mcache flushing to the very top of updatememstats. Also leave a comment explaining that this must be done first, in general, because mcentrals make no guarantee that they will not influence memstats (and doing so would be unnecessarily restrictive). Fixes #38712. Change-Id: I15bacb313c54a46e380a945a71bb75db67169c1b Reviewed-on: https://go-review.googlesource.com/c/go/+/230498 Run-TryBot: Michael Knyszek <mknyszek@google.com> Reviewed-by: Russ Cox <rsc@golang.org>
2019-11-08 | runtime: ensure heap memstats are updated atomically | Michael Anthony Knyszek
For the most part, heap memstats are already updated atomically when passed down to OS-level memory functions (e.g. sysMap). Elsewhere, however, they're updated with the heap lock. In order to facilitate holding the heap lock for less time during allocation paths, this change more consistently makes the update of these statistics atomic by calling mSysStat{Inc,Dec} appropriately instead of simply adding or subtracting. It also ensures these values are loaded atomically. Furthermore, an undocumented but safe update condition for these memstats is during STW, at which point using atomics is unnecessary. This change also documents this condition in mstats.go. Updates #35112. Change-Id: I87d0b6c27b98c88099acd2563ea23f8da1239b66 Reviewed-on: https://go-review.googlesource.com/c/go/+/196638 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
2019-11-08 | runtime: remove useless heap_objects accounting | Michael Anthony Knyszek
This change removes useless additional heap_objects accounting for large objects. heap_objects is computed from scratch at ReadMemStats time (which stops the world) by using nlargealloc and nlargefree, so mutating heap_objects turns out to be pointless. As a result, the "large" parameter on "mheap_.freeSpan" is no longer necessary and so this change cleans that up too. Change-Id: I7d6b486d9b57c018e3db46221d81b55fe4c1b021 Reviewed-on: https://go-review.googlesource.com/c/go/+/196637 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
2019-09-25 | runtime: redefine scavenge goal in terms of heap_inuse | Michael Anthony Knyszek
This change makes it so that the scavenge goal is defined primarily in terms of heap_inuse at the end of the last GC rather than next_gc. The reason behind this change is that next_gc doesn't take into account fragmentation, and we can fall into situation where the scavenger thinks it should have work to do but there's no free and unscavenged memory available. In order to ensure the scavenge goal still tracks next_gc, we multiply heap_inuse by the ratio between the current heap goal and the last heap goal, which describes whether the heap is growing or shrinking, and by how much. Finally, this change updates the documentation for scavenging and elaborates on why the scavenge goal is defined the way it is. Fixes #34048. Updates #32828. Change-Id: I8deaf87620b5dc12a40ab8a90bf27932868610da Reviewed-on: https://go-review.googlesource.com/c/go/+/193040 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com> Reviewed-by: Keith Randall <khr@golang.org>
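A worked version of the goal computation described above (illustrative names; the runtime also applies slack that is omitted here):

    package main

    import "fmt"

    // scavengeGoal scales the heap in use at the end of the last GC by
    // the ratio of the current heap goal to the last heap goal, i.e. by
    // how much the heap is growing or shrinking.
    func scavengeGoal(lastHeapInuse, heapGoal, lastHeapGoal uint64) uint64 {
        ratio := float64(heapGoal) / float64(lastHeapGoal)
        return uint64(float64(lastHeapInuse) * ratio)
    }

    func main() {
        // Goal shrank from 100 MiB to 80 MiB; 90 MiB was retained.
        fmt.Println(scavengeGoal(90<<20, 80<<20, 100<<20)) // 75497472
    }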
2019-05-24 | runtime: ensure mheap lock stack growth invariant is maintained | Michael Anthony Knyszek
Currently there's an invariant in the runtime wherein the heap lock can only be acquired on the system stack, otherwise a self-deadlock could occur if the stack grows while the lock is held. This invariant is upheld and documented in a number of situations (e.g. allocManual, freeManual) but there are other places where the invariant is either not maintained at all which risks self-deadlock (e.g. setGCPercent, gcResetMarkState, allocmcache) or is maintained but undocumented (e.g. gcSweep, readGCStats_m). This change adds go:systemstack to any function that acquires the heap lock or adds a systemstack(func() { ... }) around the critical section, where appropriate. It also documents the invariant on (*mheap).lock directly and updates repetitive documentation to refer to that comment. Fixes #32105. Change-Id: I702b1290709c118b837389c78efde25c51a2cafb Reviewed-on: https://go-review.googlesource.com/c/go/+/177857 Run-TryBot: Michael Knyszek <mknyszek@google.com> Reviewed-by: Austin Clements <austin@google.com>
2018-11-05 | runtime: clean up MSpan* MCache* MCentral* in docs | Michael Anthony Knyszek
This change cleans up references to MSpan, MCache, and MCentral in the docs via a bunch of sed invocations to better reflect the Go names for the equivalent structures (i.e. mspan, mcache, mcentral) and their methods (i.e. MSpan_Sweep -> mspan.sweep). Change-Id: Ie911ac975a24bd25200a273086dd835ab78b1711 Reviewed-on: https://go-review.googlesource.com/c/147557 Reviewed-by: Austin Clements <austin@google.com> Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
2018-10-30 | runtime: separate scavenged spans | Michael Anthony Knyszek
This change adds a new treap to mheap which contains scavenged (i.e. its physical pages were returned to the OS) spans. As of this change, spans may no longer be partially scavenged. For #14045. Change-Id: I0d428a255c6d3f710b9214b378f841b997df0993 Reviewed-on: https://go-review.googlesource.com/c/139298 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
2018-09-26 | runtime: rename _MSpan* constants to mSpan* | Austin Clements
We already aliased mSpanInUse to _MSpanInUse. The dual constants are getting annoying, so fix all of these to use the mSpan* naming convention. This was done automatically with: sed -i -re 's/_?MSpan(Dead|InUse|Manual|Free)/mSpan\1/g' *.go plus deleting the existing definition of mSpanInUse. Change-Id: I09979d9d491d06c10689cea625dc57faa9cc6767 Reviewed-on: https://go-review.googlesource.com/137875 Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
2018-02-15 | runtime: use sparse mappings for the heap | Austin Clements
This replaces the contiguous heap arena mapping with a potentially sparse mapping that can support heap mappings anywhere in the address space. This has several advantages over the current approach:

* There is no longer any limit on the size of the Go heap. (Currently it's limited to 512GB.) Hence, this fixes #10460.

* It eliminates many failure modes of heap initialization and growing. In particular it eliminates any possibility of panicking with an address space conflict. This can happen for many reasons and even causes a low but steady rate of TSAN test failures because of conflicts with the TSAN runtime. See #16936 and #11993.

* It eliminates the notion of "non-reserved" heap, which was added because creating huge address space reservations (particularly on 64-bit) led to huge process VSIZE. This was at best confusing and at worst conflicted badly with ulimit -v. However, the non-reserved heap logic is complicated, can race with other mappings in non-pure Go binaries (e.g., #18976), and requires that the entire heap be either reserved or non-reserved. We currently maintain the latter property, but it's quite difficult to convince yourself of that, and hence difficult to keep correct. This logic is still present, but will be removed in the next CL.

* It fixes problems on 32-bit where skipping over parts of the address space leads to mapping huge (and never-to-be-used) metadata structures. See #19831.

This also completely rewrites and significantly simplifies mheap.sysAlloc, which has been a source of many bugs. E.g., #21044, #20259, #18651, and #13143 (and maybe #23222). This change also makes it possible to allocate individual objects larger than 512GB. As a result, a few tests that expected huge allocations to fail needed to be changed to make even larger allocations. However, at the moment attempting to allocate a humongous object may cause the program to freeze for several minutes on Linux as we fall back to probing every page with addrspace_free. That logic (and this failure mode) will be removed in the next CL. Fixes #10460. Fixes #22204 (since it rewrites the code involved). This slightly slows down compilebench and the x/benchmarks garbage benchmark.

name old time/op new time/op delta
Template 184ms ± 1% 185ms ± 1% ~ (p=0.065 n=10+9)
Unicode 86.9ms ± 3% 86.3ms ± 1% ~ (p=0.631 n=10+10)
GoTypes 599ms ± 0% 602ms ± 0% +0.56% (p=0.000 n=10+9)
Compiler 2.87s ± 1% 2.89s ± 1% +0.51% (p=0.002 n=9+10)
SSA 7.29s ± 1% 7.25s ± 1% ~ (p=0.182 n=10+9)
Flate 118ms ± 2% 118ms ± 1% ~ (p=0.113 n=9+9)
GoParser 147ms ± 1% 148ms ± 1% +1.07% (p=0.003 n=9+10)
Reflect 401ms ± 1% 404ms ± 1% +0.71% (p=0.003 n=10+9)
Tar 175ms ± 1% 175ms ± 1% ~ (p=0.604 n=9+10)
XML 209ms ± 1% 210ms ± 1% ~ (p=0.052 n=10+10)
(https://perf.golang.org/search?q=upload:20171231.4)

name old time/op new time/op delta
Garbage/benchmem-MB=64-12 2.23ms ± 1% 2.25ms ± 1% +0.84% (p=0.000 n=19+19)
(https://perf.golang.org/search?q=upload:20171231.3)

Relative to the start of the sparse heap changes (starting at and including "runtime: fix various contiguous bitmap assumptions"), overall slowdown is roughly 1% on GC-intensive benchmarks:

name old time/op new time/op delta
Template 183ms ± 1% 185ms ± 1% +1.32% (p=0.000 n=9+9)
Unicode 84.9ms ± 2% 86.3ms ± 1% +1.65% (p=0.000 n=9+10)
GoTypes 595ms ± 1% 602ms ± 0% +1.19% (p=0.000 n=9+9)
Compiler 2.86s ± 0% 2.89s ± 1% +0.91% (p=0.000 n=9+10)
SSA 7.19s ± 0% 7.25s ± 1% +0.75% (p=0.000 n=8+9)
Flate 117ms ± 1% 118ms ± 1% +1.10% (p=0.000 n=10+9)
GoParser 146ms ± 2% 148ms ± 1% +1.48% (p=0.002 n=10+10)
Reflect 398ms ± 1% 404ms ± 1% +1.51% (p=0.000 n=10+9)
Tar 173ms ± 1% 175ms ± 1% +1.17% (p=0.000 n=10+10)
XML 208ms ± 1% 210ms ± 1% +0.62% (p=0.011 n=10+10)
[Geo mean] 369ms 373ms +1.17%
(https://perf.golang.org/search?q=upload:20180101.2)

name old time/op new time/op delta
Garbage/benchmem-MB=64-12 2.22ms ± 1% 2.25ms ± 1% +1.51% (p=0.000 n=20+19)
(https://perf.golang.org/search?q=upload:20180101.3)

Change-Id: I5daf4cfec24b252e5a57001f0a6c03f22479d0f0 Reviewed-on: https://go-review.googlesource.com/85887 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
2018-02-15 | runtime: replace mlookup and findObject with heapBitsForObject | Austin Clements
These functions all serve essentially the same purpose. mlookup is used in only one place and findObject in only three. Use heapBitsForObject instead, which is the most optimized implementation. (This may seem slightly silly because none of these uses care about the heap bits, but we're about to split up the functionality of heapBitsForObject anyway. At that point, findObject will rise from the ashes.) Change-Id: I906468c972be095dd23cf2404a7d4434e802f250 Reviewed-on: https://go-review.googlesource.com/85877 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
2017-12-08 | fix a typo in the runtime.MemStats documentation | Paul Boyd
Change-Id: If553950446158cee486006ba85c3663b986008a6 Reviewed-on: https://go-review.googlesource.com/82936 Reviewed-by: Ian Lance Taylor <iant@golang.org>
2017-10-25 | runtime/internal/sys: use boolean constants for sys.BigEndian | Tobias Klauser
The BigEndian constant is only used in boolean context so assign it boolean constants. Change-Id: If19d61dd71cdfbffede1d98b401f11e6535fba59 Reviewed-on: https://go-review.googlesource.com/73270 Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
2017-09-27 | runtime: clean up loops over allp | Austin Clements
allp now has length gomaxprocs, which means none of allp[i] are nil or in state _Pdead. This lets us replace several different styles of loops over allp with normal range loops. for i := 0; i < gomaxprocs; i++ { ... } loops can simply range over allp. Likewise, range loops over allp[:gomaxprocs] can just range over allp. Loops that check for p == nil || p.state == _Pdead don't need to check this any more. Loops that check for p == nil don't have to check this *if* dead Ps don't affect them. I checked that all such loops are, in fact, unaffected by dead Ps. One loop was potentially affected, which this fixes by zeroing p.gcAssistTime in procresize. Updates #15131. Change-Id: Ifa1c2a86ed59892eca0610360a75bb613bc6dcee Reviewed-on: https://go-review.googlesource.com/45575 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2017-09-27 | runtime: dynamically allocate allp | Austin Clements
This makes it possible to eliminate the hard cap on GOMAXPROCS. Updates #15131. Change-Id: I4c422b340791621584c118a6be1b38e8a44f8b70 Reviewed-on: https://go-review.googlesource.com/45573 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
2017-06-13 | runtime: clean up some silly allp loops | Austin Clements
Back in the day, allp was just a pointer to an array. As a result, the runtime has a few loops of the form: for i := 0; ; i++ { p := allp[i] if p == nil { break } ... } This is silly now because it requires that allp be one longer than the maximum possible number of Ps, but now that allp is in Go it has a length. Replace these with range loops. Change-Id: I91ef4bc7bd3c9d4fda2264f4aa1b1d0271d7f578 Reviewed-on: https://go-review.googlesource.com/45571 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
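Formatted, the before/after shapes described in the last two entries look like this (stand-in types; the real allp holds the runtime's Ps):

    package main

    import "fmt"

    type p struct{ id int32 } // stand-in for the runtime's P

    func main() {
        // Old shape: allp had one extra nil slot so loops knew where
        // to stop.
        allpOld := []*p{{0}, {1}, nil}
        for i := 0; ; i++ {
            pp := allpOld[i]
            if pp == nil {
                break
            }
            fmt.Println("old:", pp.id)
        }

        // New shape: allp has a length, so a range loop suffices.
        allp := []*p{{0}, {1}}
        for _, pp := range allp {
            fmt.Println("new:", pp.id)
        }
    }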
2017-04-28 | runtime: separate spans of noscan objects | Austin Clements
Currently, we mix objects with pointers and objects without pointers ("noscan" objects) together in memory. As a result, for every object we grey, we have to check that object's heap bits to find out if it's noscan, which adds to the per-object cost of GC. This also hurts the TLB footprint of the garbage collector because it decreases the density of scannable objects at the page level. This commit improves the situation by using separate spans for noscan objects. This will allow a much simpler noscan check (in a follow up CL), eliminate the need to clear the bitmap of noscan objects (in a follow up CL), and improve TLB footprint by increasing the density of scannable objects. This is also a step toward eliminating dead bits, since the current noscan check depends on checking the dead bit of the first word. This has no effect on the heap size of the garbage benchmark. We'll measure the performance change of this after the follow-up optimizations. This is a cherry-pick from dev.garbage commit d491e550c3. The only non-trivial merge conflict was in updatememstats in mstats.go, where we now have to separate the per-spanclass stats from the per-sizeclass stats. Change-Id: I13bdc4869538ece5649a8d2a41c6605371618e40 Reviewed-on: https://go-review.googlesource.com/41251 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
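The per-spanclass split mentioned above rests on packing the size class and a noscan bit into one value; a sketch mirroring the runtime's spanClass encoding (sizeclass<<1 | noscan), with illustrative scaffolding around it:

    package main

    import "fmt"

    // spanClass packs a size class and a "contains no pointers" bit,
    // which is what lets the heap keep scan and noscan spans (and
    // their stats) separate. The real type lives inside the runtime.
    type spanClass uint8

    func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
        sc := spanClass(sizeclass) << 1
        if noscan {
            sc |= 1
        }
        return sc
    }

    func (sc spanClass) sizeclass() uint8 { return uint8(sc >> 1) }
    func (sc spanClass) noscan() bool     { return sc&1 != 0 }

    func main() {
        sc := makeSpanClass(5, true)
        fmt.Println(sc.sizeclass(), sc.noscan()) // 5 true
    }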
2017-04-21 | runtime: rationalize triggerRatio | Austin Clements
gcController.triggerRatio is the only field in gcController that persists across cycles. As global mutable state, the places where it written and read are spread out, making it difficult to see that updates and downstream calculations are done correctly. Improve this situation by doing two things: 1) Move triggerRatio to memstats so it lives with the other trigger-related fields and makes gcController entirely transient state. 2) Commit the new trigger ratio during mark termination when we compute other next-cycle controls, including the absolute trigger. This forces us to explicitly thread the new trigger ratio from gcController.endCycle to mark termination, so we're not just pulling it out of global state. Change-Id: I6669932f8039a8c0ef46a3f2a8c537db72e578aa Reviewed-on: https://go-review.googlesource.com/39830 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
2017-04-21 | runtime: consistently use atomic loads for heap_live | Austin Clements
heap_live is updated atomically without locking, so we should also use atomic loads to read it. Fix the reads of heap_live that happen outside of STW to be atomic. Change-Id: Idca9451c348168c2a792a9499af349833a3c333f Reviewed-on: https://go-review.googlesource.com/41371 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
2017-04-13 | runtime: don't count manually-managed spans from heap_{inuse,sys} | Austin Clements
Currently, manually-managed spans are included in memstats.heap_inuse and memstats.heap_sys, but when we export these stats to the user, we subtract out how much has been allocated for stack spans from both. This works for now because stacks are the only manually-managed spans we have. However, we're about to use manually-managed spans for more things that don't necessarily have obvious stats we can use to adjust the user-presented numbers. Prepare for this by changing the accounting so manually-managed spans don't count toward heap_inuse or heap_sys. This makes these fields align with the fields presented to the user and means we don't have to track more statistics just so we can adjust these statistics. For #19325. Change-Id: I5cb35527fd65587ff23339276ba2c3969e2ad98f Reviewed-on: https://go-review.googlesource.com/38577 Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Rick Hudson <rlh@golang.org>
2017-03-04 | runtime: make ReadMemStats STW for < 25µs | Austin Clements
Currently ReadMemStats stops the world for ~1.7 ms/GB of heap because it collects statistics from every single span. For large heaps, this can be quite costly. This is particularly unfortunate because many production infrastructures call this function regularly to collect and report statistics. Fix this by tracking the necessary cumulative statistics in the mcaches. ReadMemStats still has to stop the world to stabilize these statistics, but there are only O(GOMAXPROCS) mcaches to collect statistics from, so this pause is only 25µs even at GOMAXPROCS=100. Fixes #13613. Change-Id: I3c0a4e14833f4760dab675efc1916e73b4c0032a Reviewed-on: https://go-review.googlesource.com/34937 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Rick Hudson <rlh@golang.org>
2017-03-04 | runtime: remove unused gcstats | Austin Clements
The gcstats structure is no longer consumed by anything and no longer tracks statistics that are particularly relevant to the concurrent garbage collector. Remove it. (Having statistics is probably a good idea, but these aren't the stats we need these days and we don't have a way to get them out of the runtime.) In preparation for #13613. Change-Id: Ib63e2f9067850668f9dcbfd4ed89aab4a6622c3f Reviewed-on: https://go-review.googlesource.com/34936 Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Rick Hudson <rlh@golang.org>
2017-02-09 | time: optimize Now on darwin, windows | Russ Cox
Fetch both monotonic and wall time together when possible. Avoids skew and is cheaper. Also shave a few ns off in conversion in package time. Compared to current implementation (after monotonic changes):

name old time/op new time/op delta
Now 19.6ns ± 1% 9.7ns ± 1% -50.63% (p=0.000 n=41+49) darwin/amd64
Now 23.5ns ± 4% 10.6ns ± 5% -54.61% (p=0.000 n=30+28) windows/amd64
Now 54.5ns ± 5% 29.8ns ± 9% -45.40% (p=0.000 n=27+29) windows/386

More importantly, compared to Go 1.8:

name old time/op new time/op delta
Now 9.5ns ± 1% 9.7ns ± 1% +1.94% (p=0.000 n=41+49) darwin/amd64
Now 12.9ns ± 5% 10.6ns ± 5% -17.73% (p=0.000 n=30+28) windows/amd64
Now 15.3ns ± 5% 29.8ns ± 9% +94.36% (p=0.000 n=30+29) windows/386

This brings time.Now back in line with Go 1.8 on darwin/amd64 and windows/amd64. It's not obvious why windows/386 is still noticeably worse than Go 1.8, but it's better than before this CL. The windows/386 speed is not too important; the changes just keep the two architectures similar. Change-Id: If69b94970c8a1a57910a371ee91e0d4e82e46c5d Reviewed-on: https://go-review.googlesource.com/36428 Run-TryBot: Russ Cox <rsc@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-12-23 | runtime: document MemStats.BySize fields | Austin Clements
Change-Id: Iae8cdcd84e9b5f5d7c698abc6da3fc2af0ef839a Reviewed-on: https://go-review.googlesource.com/34710 Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
2016-12-07 | runtime: add number of forced GCs to MemStats | Austin Clements
This adds a counter for the number of times the application forced a GC by, e.g., calling runtime.GC(). This is useful for detecting applications that are overusing/abusing runtime.GC() or debug.FreeOSMemory(). Fixes #18217. Change-Id: I990ab7a313c1b3b7a50a3d44535c460d7c54f47d Reviewed-on: https://go-review.googlesource.com/34067 Reviewed-by: Russ Cox <rsc@golang.org>
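The counter is exported as MemStats.NumForcedGC and reads like any other memstat:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        runtime.GC() // a forced GC, counted separately from automatic cycles
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        fmt.Println("forced GCs:", m.NumForcedGC, "of", m.NumGC, "total")
    }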