Some considerations
Marcelo E. Magallon
mmagallo@debian.org
Sat, 3 Jan 2004 15:28:08 -0600
On Wed, Nov 26, 2003 at 01:19:14AM -0800, Keith Packard wrote:
> Window buffers which aren't actively updating will be migrated back
> to main memory if other applications have more pressing need for
> video memory. But, if everything is updating all the time, you're
> gonna need piles of memory to keep everything resident. Do realize
> that even a full-screen image is "small" these days -- with 256meg
> video cards, even several 4meg full-screen images are manageable.
Well, yes, but you are ignoring windowed OpenGL applications that _use_
those 256 MB of RAM (or something close to it), e.g. Maya, which the OP
mentioned. The question is probably how texture memory is going to be
used. Does every application get its own texture? A pretty smart
texture manager is in order, and since a window resize means destroying
the old texture and allocating a new one, resizes could wreak havoc
with the driver's texture manager.
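To make the resize point concrete, here is a minimal sketch (the struct
and function names are made up for illustration) of what a resize means
for a texture-backed window under plain OpenGL 1.x: the old storage is
simply thrown away and the driver has to find room for a brand-new
allocation.

#include <stddef.h>
#include <GL/gl.h>

/* Hypothetical per-window state; the names are invented. */
struct win_tex {
    GLuint tex;      /* texture object backing the window contents */
    int    w, h;     /* current allocation */
};

/* On a resize the old allocation is useless: calling glTexImage2D with
 * new dimensions effectively frees and reallocates the storage, and
 * the driver's texture manager has to find room for it all over again.
 * (On GL 1.x hardware the dimensions would also have to be rounded up
 * to powers of two, or an extension like NV_texture_rectangle used.) */
static void handle_resize(struct win_tex *wt, int new_w, int new_h)
{
    glBindTexture(GL_TEXTURE_2D, wt->tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, new_w, new_h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);  /* no data yet */
    wt->w = new_w;
    wt->h = new_h;
}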
I have been reading the last few weeks' worth of posts to this mailing
list, looking in particular for anything related to the OpenGL-based
renderer, specifically:
* How to leverage existing proprietary OpenGL drivers
* How will OpenGL applications work
* What's the impact on OpenGL applications
For the first question, as far as I understand it, there's a regular
Xserver using the proprietary driver, and a fullscreen application runs
on top of it. This application receives all the requests from clients
and translates them into an OpenGL command stream to do the actual
rendering. The actual implementation might look a bit different, but
somewhere there's got to be a client that uses the vendor's OpenGL
driver. Since all the proprietary OpenGL drivers are GLX-based, you
need an Xserver.
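To illustrate what I mean by translating requests into an OpenGL
command stream, here is a deliberately over-simplified sketch. The
function name and the idea of handling a fill request this directly
are my own invention; the real renderer presumably works at a higher
level (Render-style operations, not core requests one by one).

#include <GL/gl.h>

/* Hypothetical: a solid-fill request for some window area becomes a
 * flat-shaded quad in the compositing client's GL context. */
static void comp_fill_rect(int x, int y, int w, int h,
                           float r, float g, float b)
{
    glDisable(GL_TEXTURE_2D);
    glColor3f(r, g, b);
    glBegin(GL_QUADS);
    glVertex2i(x,     y);
    glVertex2i(x + w, y);
    glVertex2i(x + w, y + h);
    glVertex2i(x,     y + h);
    glEnd();
}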
That has an impact on the second question. A great deal of effort has
been put into implementing direct rendering drivers (and I'm not
referring only to those provided by the DRI project). In plain English:
in all situations of interest, OpenGL applications running locally are
direct rendering clients. The application sets up its window and event
loop through the Xserver, but all of the rendering in that window goes
straight to the card, bypassing the Xserver.
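For the curious, this is roughly how a client can check which case it
is in; glXIsDirect() is standard GLX, so it works with the proprietary
drivers as well.

#include <GL/glx.h>
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) { fprintf(stderr, "no suitable GLX visual\n"); return 1; }

    /* The last argument asks for a direct context; the driver may
     * still fall back to indirect (e.g. on a remote display). */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    if (!ctx) { fprintf(stderr, "context creation failed\n"); return 1; }

    printf("direct rendering: %s\n",
           glXIsDirect(dpy, ctx) ? "yes (bypasses the Xserver)"
                                 : "no (goes through the Xserver)");

    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}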
Which brings me to the third question: on SGIs, where sync-to-vblank is
standard, I can have continuously updating OpenGL applications running
side by side with little trouble. On Linux using the NVIDIA drivers and
a 2.6 kernel, this gets very jerky (_even_ with sync-to-vblank).
Framerates aren't bad, but they are inconsistent. In the context of
transparent windows, double-buffering gets interesting :-) Much larger
areas of the screen get damaged. I can imagine the best solution is to
render each window to a texture and then render a bunch of polygons one
on top of the other. The blending code I posted before is pretty good,
but it only beats blending on the card _if_ the data isn't already
available in the card's memory:
Operation      | Texture | DrawPixels
======================================
Clear          |      1  |       1
--------------------------------------
Texture upload |     36  |
Texture upload |     36  |
Render quad    |      1  |
Render quad    |      1  |
--------------------------------------
DrawPixels     |         |      16
DrawPixels     |         |      18
--------------------------------------
ReadPixels     |     42  |      41
--------------------------------------
Total          |    117  |      76

(times are in ms; images are 1k x 1k RGBA; hardware is a GeForce3)
Just ignore the ReadPixels call.
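A measurement like that can be taken roughly along these lines (just a
sketch: it assumes a current GL context and a texture object already
bound, and simply brackets the operation with glFinish()).

#include <GL/gl.h>
#include <sys/time.h>

/* Return the wall-clock cost in ms of uploading a 1k x 1k RGBA image
 * into the currently bound texture.  glFinish() before and after makes
 * sure we measure the upload and nothing else. */
static double upload_ms(const void *pixels)
{
    struct timeval t0, t1;

    glFinish();                      /* drain previously queued work  */
    gettimeofday(&t0, NULL);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    glFinish();                      /* wait for the upload to finish */
    gettimeofday(&t1, NULL);

    return (t1.tv_sec - t0.tv_sec) * 1000.0
         + (t1.tv_usec - t0.tv_usec) / 1000.0;
}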
Blending a handful of large quads is not _that_ expensive, but
uploading large amounts of texture data is.
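Putting the pieces together, the compositing step itself could look
roughly like this (again only a sketch: the struct and function names
are invented, and getting the window contents *into* those textures in
the first place is the expensive part, as the numbers above show).

#include <GL/gl.h>

/* Hypothetical per-window record for compositing. */
struct comp_win {
    GLuint tex;        /* texture holding the window contents */
    int    x, y, w, h; /* screen position and size             */
    float  opacity;    /* 1.0 = fully opaque                   */
};

/* Blend the windows back to front as screen-aligned textured quads.
 * The default GL_MODULATE texture environment multiplies the texture
 * by the current color, so glColor4f carries the per-window opacity. */
static void composite(const struct comp_win *wins, int n)
{
    int i;

    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (i = 0; i < n; i++) {        /* bottom-most window first */
        const struct comp_win *w = &wins[i];
        glBindTexture(GL_TEXTURE_2D, w->tex);
        glColor4f(1.0f, 1.0f, 1.0f, w->opacity);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2i(w->x,        w->y);
        glTexCoord2f(1, 0); glVertex2i(w->x + w->w, w->y);
        glTexCoord2f(1, 1); glVertex2i(w->x + w->w, w->y + w->h);
        glTexCoord2f(0, 1); glVertex2i(w->x,        w->y + w->h);
        glEnd();
    }
}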
And where does that leave my OpenGL application? As long as it is a
top-level window everything is OK, but when I lower it I start getting
inconsistent results. Or did I miss something?
Marcelo