2 - The Developer's View
After playing around with a lot of games, I noticed that Vsync
anomalies appeared to be more common in Direct3D games than in OpenGL games.
But why? What would make OpenGL different from Direct3D in
that respect? I fired off an email to Jake Simpson, Lead
Programmer - PSX2 project for Raven Software (makers of Soldier
of Fortune) asking him just that. He was kind enough to take
a few minutes and respond.
Is there something different about how OpenGL handles Vsync?
JS: "There is no Vsync in OpenGL as a command. Most apps
use the glFlush command, sometimes followed by a glFinish
command. The glFlush command basically says "Ok, whatever
commands you have in your buffer, send 'em to the rendering
device now, and get it working." It doesn't care where the
raster is in the drawing sync, it just goes out and does
it. The glFinish command will then make the app wait until
the rendering device has completed all the commands it has
been sent up until then. This gives you the fastest feedback,
fairly obviously. Now, depending on whether you are double
buffering your video displays (i.e. rendering to the back
buffer while the front one is being displayed) you might want
to use a swapbuffers command. This means that you can afford
to slap out commands to the rendering device whenever you
feel like it, since it's always going to be rendering to
an unseen buffer. The SwapBuffers command does what it says,
it swaps the buffers between the front and the back. When
it actually does this, i.e. at Vsync or just randomly whenever
it can depends on the card you are using. Sometimes you
can set the 'wait for Vsync' in the properties dialog for
your card, sometimes it has to be set via registry options.
It's messy and highly card dependent. Obviously working
in a window you don't get any kind of Vsyncing going on."
It almost seems like the Quake 2 and Quake 3 engines beg for
Vsync to be disabled. Does this go back to question #1 or
has John Carmack done something to sidestep the issue?
JS: "As for Quake II & III - John C. makes the
game run the fastest he can. Obviously waiting for Vsync
before window swapping can cause a slow down. If you take
1.1 frames to draw a scene, then wait for Vsync before swapping
frame buffers, that means that 0.9 of that frame is spent
doing nothing on the card. The OpenGL context can accept
commands and buffer them up, but it's not going to be doing
any rendering until the buffers are swapped and the back
buffer is unlocked for rendering again. You can see why
this would slow the game down."
I spoke to Tim Sweeney of Epic Games to get a perspective
on the Direct3D side. However, his response, while not technical
in nature, really got to the heart of the matter.
TS: "I don't have any clue why someone would disable Vsync for
gameplay. The only legit reason for this is to benchmark
3D card performance without the monitor's refresh rate skewing
the results. Regarding a 'philosophical VSync difference
between Direct3D and OpenGL', that's nutty. There is no
visual benefit to having a game render more frames per second
than your monitor is displaying."
Paul Bonnette from MadOnion (the 3D Mark 2000 people) added
to Tim's comments.
"While I would agree that there are no actual 'visual' benefits
to disabling vsync (in fact tearing can make things look
pretty god-awful), the ability to squeeze a couple more
frames per second is a tweak I used quite often when playing
graphically intensive games on lower performance systems.
Was there tearing...damned right! Did it speed up the games?
Sure did, but when you're getting 15-20fps in your favorite
game, anything is an improvement, and the odd graphic glitch
is a worthwhile tradeoff!"
Bottom line - you can benchmark with Vsync disabled to test the peak
performance of the video card. However, to get the most immersive
gaming experience, leave Vsync enabled.
In the future I hope to get a technical response from a developer
on the differences between OpenGL and Direct3D.