This is overactive because we could be zoomed in so far that we
don't pick up the adjacent frame. We need something more clever
that can pick up frames adjacent to the visible area of the
capture.
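Roughly the idea, as a hedged sketch: given the visible time range, also keep the nearest sample on either side of it. DataPoint, get_draw_range and the field names below are illustrative, not from the actual code.

#include <glib.h>

typedef struct { gdouble time; gdouble value; } DataPoint;  /* hypothetical */

/* Find the index range to draw so that the sample just before and the
 * sample just after the visible time range are included, even when we
 * are zoomed in far enough that nothing falls strictly inside it.
 */
static void
get_draw_range (const DataPoint *points,
                guint            n_points,
                gdouble          visible_begin,
                gdouble          visible_end,
                guint           *out_first,
                guint           *out_last)
{
  guint first = 0;
  guint last = n_points ? n_points - 1 : 0;

  for (guint i = 0; i < n_points; i++)
    {
      if (points[i].time < visible_begin)
        first = i;                  /* last sample before the visible area */
      if (points[i].time > visible_end)
        {
          last = i;                 /* first sample after the visible area */
          break;
        }
    }

  *out_first = first;
  *out_last = last;
}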
When we find ourselves on a HiDPI display, we need to make sure
we set up the device scale factor properly and adjust our render
checks for valid surface sizes.
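Something along these lines, using gtk_widget_get_scale_factor() and
cairo_surface_set_device_scale(); the function name and the GTK3-style
allocation calls are just for illustration.

#include <gtk/gtk.h>

/* Create the offscreen surface at device-pixel size on HiDPI displays,
 * rejecting degenerate sizes before we try to render into it.
 */
static cairo_surface_t *
create_render_surface (GtkWidget *widget)
{
  int scale = gtk_widget_get_scale_factor (widget);
  int width = gtk_widget_get_allocated_width (widget) * scale;
  int height = gtk_widget_get_allocated_height (widget) * scale;
  cairo_surface_t *surface;

  /* A zero-sized surface is not a valid render target. */
  if (width <= 0 || height <= 0)
    return NULL;

  surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, width, height);

  /* Let cairo map user-space coordinates to device pixels for us. */
  cairo_surface_set_device_scale (surface, scale, scale);

  return surface;
}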
We don't want to cache all the data points from the underlying
capture, just the data points for the visible region (and some
at the edges so we get proper cairo_curve_to() x,y coordinates).
This isn't a major optimization yet; it will matter more once we
start supporting much larger capture sizes. But that will mostly
be improved with capture indexes anyway.
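A sketch of that caching, reusing the hypothetical DataPoint and
get_draw_range names from the earlier sketch; CachedPoint and
rebuild_point_cache are likewise illustrative, and the mapping from
data space to widget coordinates is omitted here.

typedef struct { gdouble x; gdouble y; } CachedPoint;   /* hypothetical */

/* Rebuild the point cache for the visible time range only, keeping one
 * extra point past each edge so curves through the boundary get proper
 * control coordinates.
 */
static void
rebuild_point_cache (GArray          *cache,            /* of CachedPoint */
                     const DataPoint *points,
                     guint            n_points,
                     gdouble          visible_begin,
                     gdouble          visible_end)
{
  guint first, last;

  g_array_set_size (cache, 0);

  get_draw_range (points, n_points, visible_begin, visible_end, &first, &last);

  for (guint i = first; i <= last && i < n_points; i++)
    {
      CachedPoint cp = { points[i].time, points[i].value };
      g_array_append_val (cache, cp);
    }
}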
If we have not yet received our proper draw for the new size
allocation (likely right after the size allocate), then we can
just reuse the old surface, scaled to the new size. This is handy
so that we don't block the main loop trying to draw lots of
data points. Instead we just scale the image and wait for the
high-quality version to complete.
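Roughly what the draw handler could look like under that scheme: a
GTK3-style draw callback, with illustrative names and the device scale
ignored for brevity.

/* If the last completed surface does not yet match the current
 * allocation, paint it scaled rather than re-rendering every data
 * point on the main loop.
 */
static gboolean
on_draw (GtkWidget *widget,
         cairo_t   *cr,
         gpointer   user_data)
{
  cairo_surface_t *surface = user_data;   /* the last completed render */
  int alloc_w = gtk_widget_get_allocated_width (widget);
  int alloc_h = gtk_widget_get_allocated_height (widget);
  int surf_w, surf_h;

  if (surface == NULL)
    return GDK_EVENT_PROPAGATE;

  surf_w = cairo_image_surface_get_width (surface);
  surf_h = cairo_image_surface_get_height (surface);

  if (surf_w > 0 && surf_h > 0 &&
      (surf_w != alloc_w || surf_h != alloc_h))
    {
      /* Stale size: scale the old image to the new allocation and let
       * the high-quality re-render arrive later.
       */
      cairo_scale (cr, (double) alloc_w / surf_w, (double) alloc_h / surf_h);
    }

  cairo_set_source_surface (cr, surface, 0, 0);
  cairo_paint (cr);

  return GDK_EVENT_PROPAGATE;
}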
This starts getting the mechanics in place for offscreen
rendering using a cairo image surface. We create our own
point cache for storing x,y pairs and then simplify our
drawing based on that.
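A minimal sketch of rendering the cached points onto the offscreen
image surface; it draws straight line segments rather than curves, and
render_cache_to_surface plus the CachedPoint type from the earlier
sketch are illustrative names only.

#include <cairo.h>

/* Render the cached x,y pairs onto the offscreen surface, which the
 * draw handler can then simply blit to the widget.
 */
static void
render_cache_to_surface (cairo_surface_t *surface,
                         GArray          *cache)        /* of CachedPoint */
{
  cairo_t *cr = cairo_create (surface);

  cairo_set_line_width (cr, 1.0);

  for (guint i = 0; i < cache->len; i++)
    {
      const CachedPoint *p = &g_array_index (cache, CachedPoint, i);

      if (i == 0)
        cairo_move_to (cr, p->x, p->y);
      else
        cairo_line_to (cr, p->x, p->y);
    }

  cairo_stroke (cr);
  cairo_destroy (cr);
}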
This provides the plumbing to do the threaded drawing; we just
need to write the capture cursor and draw operations from the
pixman/cairo worker thread (and do so safely).
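One way to wire that up is GTask's run-in-thread helper, keeping the
image surface private to the worker until the callback hands it back
on the main loop. This is a hedged sketch, not the actual
implementation; the fixed surface size and the function names are
placeholders, and render_cache_to_surface comes from the sketch above.

static void
render_worker (GTask        *task,
               gpointer      source_object,
               gpointer      task_data,
               GCancellable *cancellable)
{
  GArray *cache = task_data;
  cairo_surface_t *surface;

  /* Real code would use the widget allocation captured before the
   * task was dispatched; fixed here for brevity.
   */
  surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, 800, 600);
  render_cache_to_surface (surface, cache);

  g_task_return_pointer (task, surface,
                         (GDestroyNotify) cairo_surface_destroy);
}

static void
render_finished (GObject      *object,
                 GAsyncResult *result,
                 gpointer      user_data)
{
  GtkWidget *widget = GTK_WIDGET (object);
  cairo_surface_t *surface;

  surface = g_task_propagate_pointer (G_TASK (result), NULL);

  /* Real code would swap this surface into the widget state for the
   * draw handler; destroyed here only to keep the sketch leak-free.
   */
  if (surface != NULL)
    cairo_surface_destroy (surface);

  gtk_widget_queue_draw (widget);
}

static void
begin_threaded_render (GtkWidget *widget,
                       GArray    *cache)
{
  GTask *task = g_task_new (widget, NULL, render_finished, NULL);

  g_task_set_task_data (task, g_array_ref (cache),
                        (GDestroyNotify) g_array_unref);
  g_task_run_in_thread (task, render_worker);
  g_object_unref (task);
}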