Some considerations

Jon Smirl jonsmirl@yahoo.com
Sun, 4 Jan 2004 19:29:32 -0800 (PST)


--- "Marcelo E. Magallon" <mmagallo@debian.org> wrote:
> On Sun, Jan 04, 2004 at 09:36:44PM +0000, Keith Whitwell wrote:
>  > I'm not really sure what you're getting at - an opengl app would be
>  > rendering to an offscreen buffer and composited along with everything
>  > else.
> 
>  Hmmm... tell that to the OpenGL application :-)
> 
>  Sure, you could modify the driver in such a way that it allocates an
>  off-screen buffer instead of rendering to the framebuffer (which they
>  probably do anyway -- modulo single buffered applications).  This is
>  probably implementation dependent (it certainly doesn't work on SGIs
>  -- not that SGIs are interesting per se, I'm just saying not every
>  implementation behaves like this), but if you have a fullscreen OpenGL
>  application and you place another OpenGL window on top of it, and you
>  read the framebuffer (the backbuffer actually), you get the contents of
>  the window that's on top.  With some drivers and some cards at least.
>  At any rate, you have to change the driver because calling SwapBuffers
>  needs to do something different, not what it usually does.
> 

People more familiar with this problem should be around on Monday, but I'll take
a stab at it now...

xserver is being developed in conjunction with ongoing work on Mesa/DRI. The
proposals for running it as an OpenGL app under Xwindows/DRI are for
testing/development purposes. Some things that work on the Mesa version won't
work quite right on a standard OpenGL implementation.

In the Mesa version apps will write to offscreen buffers using render to
texture, but the apps won't know they are doing that. SwapBuffers will be
modified to either actually swap the buffers or hang the app until the
compositor has had a chance to composite the buffer onto the framebuffer. The
plan is to do the compositing step in hardware, which is very fast. If you go
full screen, OpenGL will behave as normal, since the compositor won't be
running. Single buffered apps will simply get their buffer snapshotted (via
hardware) on each compositor cycle.
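A minimal sketch of the SwapBuffers change described above. All names here are hypothetical (this is not the actual Mesa/DRI code): instead of flipping pages on the screen, a swap publishes the app's back buffer and hangs the app until the compositor consumes it.

```python
class RedirectedWindow:
    """Model of a double-buffered app rendering offscreen (hypothetical)."""
    def __init__(self):
        self.back = None      # buffer the app is currently drawing into
        self.pending = None   # buffer published by swap_buffers
        self.blocked = False  # app is hung waiting for the compositor

    def draw(self, pixels):
        self.back = pixels

    def swap_buffers(self):
        # Modified semantics: publish the back buffer and hang the app
        # until the compositor has composited it onto the framebuffer.
        self.pending = self.back
        self.blocked = True

def compositor_cycle(windows, framebuffer):
    # One compositor pass, modeled here as a simple copy (the real plan
    # is to do this step in hardware): pick up each pending buffer, put
    # it on screen, and let the blocked app continue.
    for w in windows:
        if w.pending is not None:
            framebuffer[:] = w.pending
            w.pending = None
            w.blocked = False

w = RedirectedWindow()
w.draw([1, 2, 3])
w.swap_buffers()           # app now hangs until the next composite
fb = [0, 0, 0]
compositor_cycle([w], fb)  # fb picks up the frame, app resumes
```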

When everything is done the xserver is going to be an OpenGL application which
will cooperate with other OpenGL or xprotocol apps. This is based on a model
where standalone OpenGL is brought up first and the xserver is then run on top
of it. Standalone OpenGL needs the minor mods to SwapBuffers to make it work
right. Once we get the open source version going, we hope ATI/Nvidia will
release their own compatible versions.

>  I don't know, but I have the hunch that that's slow.  I mean, you have
>  to render, copy to a texture and then render a polygon.  OpenGL

The final system will use render to texture to eliminate the copy. The render to
polygon will be done in hardware.
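A back-of-the-envelope illustration of what eliminating that copy buys; the window dimensions and frame rate below are assumptions, not figures from the project:

```python
# Bytes moved per frame just to get an app's output somewhere the
# compositor can texture from.  Dimensions are assumptions.
WIDTH, HEIGHT, BYTES_PER_PIXEL = 1024, 768, 4
frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL

# Copy path: render to a buffer, then copy the result into a texture.
copy_path_bytes = frame_bytes        # one extra full-frame copy

# Render-to-texture path: the offscreen buffer *is* the texture, so
# the compositor samples it directly and the copy disappears.
rtt_path_bytes = 0

saved_per_second = (copy_path_bytes - rtt_path_bytes) * 60  # at 60 fps
print(saved_per_second)   # 188743680 bytes, roughly 180 MB/s saved
```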

>  programmers get pissed off when their applications get slower for no
>  good reason.  What I'm getting at is a simple question: how do OpenGL
>  applications fit here without seeing their performance punished?  If we
>  are talking about glxgears (which reports ridiculous things like 3000
>  fps) it's fine, but what about that visualization thing which is having
>  a hard time getting past 20 fps?  At 20 fps one frame is 50 ms.  If you
>  add something that's going to take additional 10 ms, you are down to 17
>  fps.  Not good (incidentally, gears drops from 3000 fps to 97 :-).
>  Sure, you don't _have_ to use the Xserver, but then I see an adoption
>  problem (the same way gamers hate Windows XP -- or whatever it is they
>  hate nowadays).
> 
>  If you are compositing images, you need something to composite with.
>  If the OpenGL application is bypassing the Xserver because it's working
>  in direct rendering mode, what are you going to do?  glReadPixels?  How

Direct rendering is going to mean using the video hardware to write to an
offscreen buffer located in video RAM. Drawing to this kind of buffer has the
same drawing speed as direct rendering into the framebuffer. The only
difference is the compositing step. Under Xwindows with a double buffered app,
I believe the compositing step (a simple copy) was done in software. Under the
new system the video hardware will do the compositing.

Single buffered apps will behave differently. They are going to draw into an
offscreen buffer as fast as they want to. Then on each composition cycle the
compositing engine will pick up a copy of the buffer in whatever state it is in
and composite it onto the screen. I don't see any way around this, since the
window may be translucent. You can't direct render into the framebuffer if you
are translucent.
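The reason translucency forces this is the per-pixel blend itself: the compositing step is the standard "over" operator, which needs the pixels underneath the window. A sketch for a single channel, with the window's opacity as a 0..1 alpha:

```python
def over(src, dst, alpha):
    """Blend a source channel over a destination channel; alpha is
    the window's opacity in [0, 1].  This is the per-pixel operation
    the compositing hardware performs for a translucent window."""
    return src * alpha + dst * (1.0 - alpha)

# A 50% translucent white window over a black desktop reads as grey,
# which is why the pixels underneath must still be available:
print(over(255, 0, 0.5))    # 127.5
```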

The compositor will run each retrace cycle. What's the point in drawing a frame
that will never be displayed? In this model single buffered apps can draw at
frame rates exceeding the retrace rate, and double buffered apps may be able to
do so too if there is enough video RAM to give them two buffers.
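To make that concrete (the rates below are purely illustrative assumptions): an app drawing at 300 fps against a 75 Hz retrace still only gets 75 snapshots on screen per second, each one simply the most recent frame.

```python
draw_hz, retrace_hz = 300, 75     # assumed app speed vs. refresh rate

frames_drawn = 0
frames_shown = []
for retrace in range(retrace_hz):             # one second of retraces
    frames_drawn += draw_hz // retrace_hz     # app keeps on drawing
    frames_shown.append(frames_drawn)         # snapshot the latest

print(frames_drawn)       # 300 frames drawn by the app
print(len(frames_shown))  # only 75 of them ever reach the screen
```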

>  do you know when?  It's not the end of the world.  On SGIs you have to
>  play tricks to take screenshots of OpenGL apps.  But there it's
>  actually a hardware thing.  Along the same lines, you can't assign
>  transparency to an OpenGL window.  You probably don't want to either,
>  but _someone_ is going to ask why (the same way people ask why you
>  can't take a screenshot of say, Xine).
> 
>  (along the same line, what about XVideo?)

Xvideo will merrily draw into an offscreen buffer. Once each composition cycle
it will get copied to the main framebuffer. Modern 3D video hardware has
bandwidth in the gigabytes per second range. I don't think the composition
engine is going to consume more than about 5% of the card's bandwidth.
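A rough sanity check of that figure, using assumed 2004-era numbers (a 1280x1024 desktop at 85 Hz on a card with roughly 20 GB/s of memory bandwidth; both are assumptions, not measurements):

```python
width, height, bytes_per_pixel, refresh_hz = 1280, 1024, 4, 85
card_bandwidth = 20e9          # bytes/sec, assumed high-end 2004 card

# Each composite reads every source pixel and writes it back to the
# framebuffer, so the screen is touched twice per cycle.
bytes_per_cycle = 2 * width * height * bytes_per_pixel
composite_bandwidth = bytes_per_cycle * refresh_hz

print(composite_bandwidth / card_bandwidth)   # ~0.045 -> about 5%
```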

=====
Jon Smirl
jonsmirl@yahoo.com
