Now that we do some filtering on these so we don't show too many items per
row, this can be enabled. Previously it could *really* slow things down due
to all the app<->compositor traffic it caused.
We don't want to cost-account the same summary multiple times while walking
up to the root. Otherwise, you can get items that come out to a percentage
> 100%, which is not what you expect to see from a normalized value.
This represented a large stall when loading the window, and also resulted
in doing a bunch of work twice as we set the model (and then again the time
range).
So instead, just do it incrementally and let the functions list backfill in
a bit.
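A minimal sketch of the incremental approach, assuming a GListStore-backed
model (the batch size and the Backfill struct are illustrative, not the real
code): append a chunk of items per main-loop iteration from an idle handler
instead of populating everything up front.

  #include <gio/gio.h>

  typedef struct
  {
    GListStore *store;    /* model behind the functions list */
    GPtrArray  *pending;  /* GObjects still waiting to be inserted */
    guint       pos;
  } Backfill;

  static gboolean
  backfill_idle_cb (gpointer user_data)
  {
    Backfill *state = user_data;

    /* Insert a small batch, then yield back to the main loop. */
    for (guint i = 0; i < 128 && state->pos < state->pending->len; i++)
      g_list_store_append (state->store,
                           g_ptr_array_index (state->pending, state->pos++));

    return state->pos < state->pending->len ? G_SOURCE_CONTINUE
                                            : G_SOURCE_REMOVE;
  }

  /* Kicked off after the model (or time range) changes, e.g.:
   *   g_idle_add_full (G_PRIORITY_LOW, backfill_idle_cb, state, free_func);
   * where free_func is whatever releases the Backfill state.
   */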
This is something the original flamegraphs do to aid in seeing adjacent
towers. We want that too, but we need it to be stable across redraws. Use
the hash of the symbol rather than g_random_double_range() for that.
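Roughly, the jitter can be derived from the symbol name instead of a random
number; this sketch (not the actual drawing code) uses g_str_hash() so the
same symbol gets the same shade on every redraw.

  #include <glib.h>

  static double
  symbol_jitter (const char *symbol_name)
  {
    guint hash = symbol_name ? g_str_hash (symbol_name) : 0;

    /* Map the hash into [0.0, 1.0), the same range a
     * g_random_double_range (0, 1) call would produce, but stable. */
    return (hash % 1000u) / 1000.0;
  }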
This makes things look a bit more like flamegraphs.pl in the sense that we
have some labels and separation between rows. Also, use a ScrolledWindow so
that we can have much taller graphs to accommodate deep stack traces.
We might want to jump to the bottom at some point, but this gets things in
place for now. Icicle graphs (the inverted layout) are another option.
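Something along these lines for the scrolling part (widget names are
illustrative, and this assumes GTK 4): vertical scrolling only, so deep
traces can extend past the visible area.

  #include <gtk/gtk.h>

  static GtkWidget *
  wrap_in_scroller (GtkWidget *flamegraph)
  {
    GtkWidget *scroller = gtk_scrolled_window_new ();

    gtk_scrolled_window_set_policy (GTK_SCROLLED_WINDOW (scroller),
                                    GTK_POLICY_NEVER,       /* no horizontal scrolling */
                                    GTK_POLICY_AUTOMATIC);  /* grow/scroll vertically */
    gtk_scrolled_window_set_child (GTK_SCROLLED_WINDOW (scroller), flamegraph);

    return scroller;
  }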
This shouldn't affect categorizing because that only uses the value if
is_toplevel is set. But with this added, we can use the count for weights
in other tooling without needing augmentation.
These are largely pre-sorted, but not fully when you have merged data. This
uses timsort to speed that up a bit.
In particular, for a ~32,000,000 record capture the various sorts break
down as:

  g_array_sort_with_data() => 3.9 seconds
  qsort_r()                => 3.7 seconds
  gtk_tim_sort()           => 0.79 seconds
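For reference, the pattern looks roughly like this; the Record layout is
hypothetical and the gtk_tim_sort() entry point comes from GTK's private
gtktimsortprivate.h (vendored rather than public API), so its exact
signature here is an assumption.

  #include <glib.h>

  typedef struct
  {
    gint64 time;  /* hypothetical record field used for ordering */
  } Record;

  static int
  compare_record (gconstpointer a,
                  gconstpointer b,
                  gpointer      user_data)
  {
    const Record *ra = a;
    const Record *rb = b;

    if (ra->time < rb->time)
      return -1;
    if (ra->time > rb->time)
      return 1;
    return 0;
  }

  /* Timsort wins here because merged captures are mostly runs of
   * already-sorted records:
   *
   *   gtk_tim_sort (records, n_records, sizeof (Record),
   *                 compare_record, NULL);
   *
   * versus the previous g_array_sort_with_data (array, compare_record, NULL).
   */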