Scaling and hinting [was Re: composite manager thought]

Owen Taylor otaylor@redhat.com
Mon, 05 Jan 2004 11:20:28 -0500


On Mon, 2003-12-29 at 14:25, Allen Akin wrote:

> Even if we ignore the hinting question, it might still be worthwhile for
> apps to know something about the nature of the transformations being
> applied to their windows.  They might want to "greek" small text, or
> avoid a time-consuming layout algorithm when the results won't be used,
> for example.

Because hinting is a distortion that depends on scale, I'd think you
generally don't want to rehint when doing animations, warping windows,
etc., because you'll see sudden discontinuities as you pass boundaries.

The draft paper at:

 http://people.redhat.com/otaylor/grid-fitting/

discusses ways to avoid major layout changes as you change scale,
but it's about discontinuous changes ... I'm pretty sure you'd see
obnoxious amounts of jitter if you did it continuously.

The only place you might want to rehint is if you were statically 
displaying a window at 50% scale, say. 

I guess what really needs to be addressed here is what we mean
by "window size" if we have a scalable user interface. You can't
ignore the pixel grid until you get to at least 200dpi. But

 - scaling
 - non-uniform scaling
 - multiple scaled views

all put considerable strain on the concept that a window is just a
fixed-size bitmap.

I see two general approaches:

 - Simple: You just tell the application "provide a 400x300
   bitmap at 96dpi and hint text for that". This would be a hint
   so you could later tell the app "provide a 200x150 bitmap
   at 48dpi and hint for that", and it could repaint or not,
   as it chose.

   So, you'd probably have the effect that if you, say, continuously
   scaled down a window to 50%, it would get fuzzy as it scaled,
   then at the final size the text would sharpen back up.

 - Complex: You define all scaling/warping/etc to happen at the
   application level. Which sounds like what you are describing
   below. 

   (You could try to avoid passing bitmaps around at all
   between the layers and just paint the scene every time,
   but that would reintroduce all the scheduling problems with
   slow applications, etc.)

> One approach to solving this sort of problem is to accept the "universal
> programmability" model that is rumored to be under consideration for
> Longhorn.  Then the way components pass arbitrary transformation
> information to other components is by suppling a GPU program that
> performs the transformation or answers queries about it (e.g. provides
> derivatives of parametric coords with respect to window coords).

That sounds like a real challenge to explain to application 
programmers :-) 

Doing it at the GPU level sounds like an inconvenient place for what
you describe above - hinting, greeking, avoiding layout. Isn't it a
problem to get that information back to the CPU/application?

Regards,
						Owen