path: root/src/runtime/mprof.go
author    Austin Clements <austin@google.com> 2017-02-27 11:36:37 -0500
committer Austin Clements <austin@google.com> 2017-03-31 00:46:18 +0000
commit    673a8fdfe60929f61657dbfbdf5534eabe8cd6f5 (patch)
tree      f200f0738ca961caad42b2193431426e05f23ca5 /src/runtime/mprof.go
parent    ef1829d1debc0bcd32052d9686adb75704e75984 (diff)
download  go-673a8fdfe60929f61657dbfbdf5534eabe8cd6f5.tar.gz
          go-673a8fdfe60929f61657dbfbdf5534eabe8cd6f5.zip
runtime: diagram flow of stats through heap profile
Every time I modify heap profiling, I find myself redrawing this diagram, so add it to the comments. This shows how allocations and frees are accounted, how we arrive at consistent profile snapshots, and when those snapshots are published to the user.

Change-Id: I106aba1200af3c773b46e24e5f50205e808e2c69
Reviewed-on: https://go-review.googlesource.com/37514
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Diffstat (limited to 'src/runtime/mprof.go')
-rw-r--r--  src/runtime/mprof.go  28
1 file changed, 28 insertions(+), 0 deletions(-)
diff --git a/src/runtime/mprof.go b/src/runtime/mprof.go
index 555a3ac2a6..6b29b6847d 100644
--- a/src/runtime/mprof.go
+++ b/src/runtime/mprof.go
@@ -64,6 +64,34 @@ type memRecord struct {
// come only after a GC during concurrent sweeping. So if we would
// naively count them, we would get a skew toward mallocs.
//
+ // Hence, we delay information to get consistent snapshots as
+ // of mark termination. Allocations count toward the next mark
+ // termination's snapshot, while sweep frees count toward the
+ // previous mark termination's snapshot:
+ //
+ //              MT          MT          MT          MT
+ //             .·|         .·|         .·|         .·|
+ //          .·˙  |      .·˙  |      .·˙  |      .·˙  |
+ //       .·˙     |   .·˙     |   .·˙     |   .·˙     |
+ //    .·˙        |.·˙        |.·˙        |.·˙        |
+ //
+ //       alloc → ▲ ← free
+ //               ┠┅┅┅┅┅┅┅┅┅┅┅P
+ //       r_a → p_a → allocs
+ //                   p_f → frees
+ //
+ //                   alloc → ▲ ← free
+ //                           ┠┅┅┅┅┅┅┅┅┅┅┅P
+ //                   r_a → p_a → allocs
+ //                               p_f → frees
+ //
+ // Since we can't publish a consistent snapshot until all of
+ // the sweep frees are accounted for, we wait until the next
+ // mark termination ("MT" above) to publish the previous mark
+ // termination's snapshot ("P" above). To do this, information
+ // is delayed through "recent" and "prev" stages ("r_*" and
+ // "p_*" above). Specifically:
+ //
// Mallocs are accounted in recent stats.
// Explicit frees are accounted in recent stats.
// GC frees are accounted in prev stats.
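
To make the delayed accounting above concrete, here is a minimal, self-contained Go sketch of the same three-stage scheme. It is not the runtime's code; the profRecord type and its method names are hypothetical. The flow, however, follows the comment: mallocs and explicit frees land in the "recent" stats, sweep frees land in the "prev" stats, and at each mark termination the prev stats are published while the recent stats move into prev.

    // Hypothetical illustration of the recent → prev → published flow;
    // names and structure are simplified relative to runtime.memRecord.
    package main

    import "fmt"

    type profRecord struct {
        // Published: the consistent snapshot that profile readers see.
        allocs, frees uintptr
        // Prev: changes between the next-to-last and the last mark termination.
        prevAllocs, prevFrees uintptr
        // Recent: changes since the last mark termination.
        recentAllocs, recentFrees uintptr
    }

    // Mallocs and explicit frees count toward the *next* snapshot.
    func (r *profRecord) noteMalloc()       { r.recentAllocs++ }
    func (r *profRecord) noteExplicitFree() { r.recentFrees++ }

    // Sweep frees count toward the *previous* snapshot.
    func (r *profRecord) noteSweepFree() { r.prevFrees++ }

    // At mark termination the previous snapshot is complete (all of its
    // sweep frees have been recorded), so publish it, then age the recent
    // stats into prev for the next cycle.
    func (r *profRecord) markTermination() {
        r.allocs += r.prevAllocs
        r.frees += r.prevFrees
        r.prevAllocs, r.prevFrees = r.recentAllocs, r.recentFrees
        r.recentAllocs, r.recentFrees = 0, 0
    }

    func main() {
        var r profRecord
        r.noteMalloc()
        r.noteMalloc()
        r.markTermination() // the two mallocs move to prev; nothing published yet
        r.noteSweepFree()   // one of those objects is freed during sweeping
        r.markTermination() // publish: 2 allocs, 1 free
        fmt.Printf("allocs=%d frees=%d\n", r.allocs, r.frees)
    }

In this sketch the two allocations and the matching sweep free become visible together only at the second mark termination, which corresponds to the "P" publication point in the diagram above.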