Linux progress
-
HotWax
- Posts: 10002
- Joined: Fri Jul 18, 2003 6:18 pm
- Location: Idaho Falls, ID
-
Jim
- Posts: 535
- Joined: Mon Aug 11, 2003 10:56 am
Randy, I didn't bother checking out the specifics for why you get such a slowdown. However, here is a comment from a Doomworld article about this Linux version:
Pate wrote:
> Randy, in case you are reading this, the reason for the slowdown is simple: in XFree, you can't get a hardware surface unless you are root. So in every frame SDL has to do 8bit -> screen depth conversion and push the whole picture from main memory to video card.

That sounds about right from what I know about XFree86. You are simply using the wrong tool for the job.
If you want more information, just let me know.
-
Chris
- Posts: 2998
- Joined: Thu Jul 17, 2003 12:07 am
- Graphics Processor: ATI/AMD with Vulkan/Metal Support
I don't know if it's X or SDL doing the conversion, but I think it's X (since Allegro can do any color depth in an X window, yet it doesn't have any option for disallowing that, like it does for a DirectX window). Besides, remapping an 8-bit image isn't that hard: at worst you run it through a 1-kilobyte table (a 4-byte output for each of the 256 possible inputs; granted, it's by no means efficient). As has been shown though, there are alternatives to using X, for both windows (OpenGL, DGA, etc.) and fullscreen (SVGALib, VBE/AF, etc.). It's just a matter of getting SDL to use them.
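The remap table Chris describes can be sketched in a few lines of C. This is only an illustration (the function name and layout are made up, not from SDL or ZDoom): a 256-entry table of 32-bit pixels is exactly the 1 KB he mentions (256 entries x 4 bytes), and the conversion is one table lookup per pixel.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical 8-bit -> 32-bit remap: 'table' maps each of the 256
 * palette indices to a packed 32-bit pixel (256 * 4 bytes = 1 KB).
 * One lookup per source pixel -- simple, but it touches every pixel
 * every frame, which is the cost being discussed in the thread. */
static void convert_8to32(const uint8_t *src, uint32_t *dst,
                          size_t npixels, const uint32_t table[256])
{
    for (size_t i = 0; i < npixels; i++)
        dst[i] = table[src[i]];
}
```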
-
Jim
- Posts: 535
- Joined: Mon Aug 11, 2003 10:56 am
Randy, ClanLib (http://www.clanlib.org) is a good cross-platform game library which supports 2D OpenGL directly. It's also true that using OpenGL could greatly speed up alpha blending, scaling and rotating, in addition to circumventing X.
Just to get some idea of the speedup that you could obtain, you could try glSDL (http://olofson.net/mixed.html). Here's what the author says about it:
> glSDL is an implementation of the SDL API on top of OpenGL. This version is a proof-of-concept hack, and is implemented as a "wrapper" around SDL. Version 0.6 includes project files for Visual C++, Visual C++ .NET and Borland C++. glSDL 0.6 is best used with SDL 1.2.5 or later.
>
> NOTE: This branch of glSDL is no longer actively developed, but there is a glSDL backend for SDL in the works. The glSDL backend makes it possible for many SDL applications to use full hardware acceleration, even without recompiling. This backend will hopefully become part of SDL in the near future.

As you can see, this requires recompiling SDL, so it's not really a good solution. Like I said though, it would give you an idea of how much OpenGL could speed things up with minimal effort on your part.
-
arioch
- Posts: 129
- Joined: Tue Jul 15, 2003 3:27 pm
-
randi
- Site Admin
- Posts: 7749
- Joined: Wed Jul 09, 2003 10:30 pm
Well, there have been a lot of comments here:
ryan wrote:
> iirc, using svgalib through sdl is as simple as:
> $ export SDL_VIDEODRIVER=svgalib

I tried that. I also tried running from a Linux console without starting X, and it still didn't work.
ryan also wrote:
> out of curiosity, what kind of video card do you have?

A GeForce FX 5900.
Jim wrote:
> Hmm... Are you sure? I assume you tried the recently released KDevelop 3.0

Yes, I know from first-hand experience using the latest CVS version of KDevelop. If you try to use files that aren't all in the same directory, it complains and asks if you want to create symlinks to them or copy them all into one directory. Doing a little digging, this seems to be a not uncommon feature request, too.
Hirogen2 wrote:
> take a look at mplayer

It uses X extensions like Xv, which SDL doesn't support as far as I know.
Hirogen2 also wrote:
> Last time you said you like RHIDE, it's available for Linux too of course.

I've never used RHIDE, so I don't know where you got the idea that I like it.
ducon wrote:
> Where is the archive? Maybe it's not ready?

It's not done yet.
Jim quoted Pate, who wrote:
> in XFree, you can't get a hardware surface unless you are root. So in every frame SDL has to do 8bit -> screen depth conversion and push the whole picture from main memory to video card.

That's basically what I do in Windows when you don't play fullscreen, but it doesn't suffer such massive slowdown. I did think about it, though, and I should be able to get a minor speedup by doing the depth conversion myself instead of letting SDL handle it. The code is already there and used by the Windows version, but with SDL, I draw the entire frame, copy it to SDL's buffer, SDL converts it to another buffer for X, and then it gets drawn to the screen. So there's one layer I can cut out of that. It won't make a huge difference, but it should help a little.
Edit: I just made this change and now 32-bit mode is 5 FPS faster.
Jim wrote:
> It's also true that using OpenGL could greatly speed alpha blending, scaling and rotating, in addition to circumventing X.

All I'm interested in is using it as a dumb framebuffer: draw the screen to a texture, and then draw the texture to the real screen. Anything more would require changing code beyond ZDoom's video driver.
Last edited by randi on Sat Feb 14, 2004 6:01 pm, edited 1 time in total.
-
sth
randy wrote:
> Yes, I know from first-hand experience using the latest CVS version of KDevelop. If you try to use files that aren't all in the same directory, it complains and asks if you want to create symlinks to them or copy them all into one directory. Doing a little digging, this seems to be a not uncommon feature request, too.

In the dialog where it wants to copy the file, enter the source path of the original file into the destination path field.
I'm using kdevelop for a project with many subdirs and have no problems so far. I also use my own Makefiles with it.
For your FX 5900: IIRC I've heard people complaining that nVidia doesn't support common 2D-acceleration features (XAA?) in their Linux drivers. It's the same with ATI's binary drivers.
-
randi
- Site Admin
- Posts: 7749
- Joined: Wed Jul 09, 2003 10:30 pm
-
akimmet
- Posts: 30
- Joined: Thu Dec 04, 2003 10:47 am
- Graphics Processor: ATI/AMD with Vulkan/Metal Support
randy wrote:
> Well, there have been a lot of comments here:
>
> ryan wrote:
> > iirc, using svgalib through sdl is as simple as:
> > $ export SDL_VIDEODRIVER=svgalib
>
> I tried that. I also tried running from a Linux console without starting X, and it still didn't work.

SDL's configure script ignores svgalib by default. Try recompiling SDL to be able to use svgalib and any other graphics libraries you happen to have available that SDL ignores by default (including the all-disturbing aalib). I don't remember the command-line argument for that offhand, but running ./configure --help should list the correct options.
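As a rough sketch of what that rebuild might look like (the --enable flag names here are from memory of SDL 1.2's build system and should be checked against the output of ./configure --help on your version before use):

```shell
# In the SDL source tree; flag names are a guess -- verify them first:
#   ./configure --help | grep video
./configure --enable-video-svga \
            --enable-video-fbcon \
            --enable-video-aalib
make
make install   # as root, or into a local prefix
```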
Last edited by akimmet on Sun Feb 15, 2004 4:06 pm, edited 1 time in total.
-
dennisj1
- Posts: 399
- Joined: Sun Jan 11, 2004 1:46 pm
- Location: Superior, WI
-
HotWax
- Posts: 10002
- Joined: Fri Jul 18, 2003 6:18 pm
- Location: Idaho Falls, ID
No, he's talking about running with a DESKTOP color depth of 16, 24, or 32 bits. Because ZDoom only renders in 8-bit, if you run in a window the software has to take the 8-bit image ZDoom is rendering and convert it to whatever your desktop depth is. This is normal for running in a window, but he's saying it also happens when running fullscreen, which is a waste.
If you're waiting for highcolor support in ZDoom, I'd highly suggest not holding your breath.
-
ducon
- Posts: 186
- Joined: Sun Dec 21, 2003 1:11 am
debianize zdoom
randy wrote:
> It's not done yet.

OK, when it's ready, I'll try to debianize it.
-
Pate
Randy wrote:
> It uses X extensions like Xv, which SDL doesn't support as far as I know.

SDL might not support Xv directly, but you can use it on SDL-opened windows. MPlayer does this by default. Not sure on the specifics, though.
Randy wrote:
> That's basically what I do in Windows when you don't play fullscreen, but it doesn't suffer such massive slowdown. ... So there's one layer I can cut out of that. It won't make a huge difference, but it should help a little.

The reason for the slowdown is the same: you can't get a hardware surface on X, because giving users direct access to video memory is considered a security risk (or something like that). The main problem then is that you have to copy large amounts of data around every frame. An OpenGL backend is faster because it mostly cuts X out of the middle.
-
IntergalacticWalrus
- Posts: 31
- Joined: Mon Sep 01, 2003 2:02 pm
- Location: E2M2, among the many UAC boxes
You obviously have color-depth conversion issues. You should check out what is being done in icculus-quake2:
http://www.icculus.org/quake2/
Last time I played, I used the software renderer (didn't have 3D accel) and the performance hit from running in 16/32-bit depth was much smaller than what you are experiencing with zdoom.
Of course, one has to admit that ZDoom under XFree86 will most likely never be as fast as under Windows. XFree86 simply sucks for high-performance rendering (unless you are using hardware-accelerated OpenGL, of course).
Using OpenGL as a dumb framebuffer could prove to be useful. You could even use an indexed texture so that the conversion would be done in hardware.
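The dumb-framebuffer idea looks roughly like this in C. Everything below is illustrative, not from ZDoom (names like upload_frame are made up); the actual GL calls are left in comments, since a full context setup is beyond a snippet. next_pow2 reflects the power-of-two texture-size restriction of graphics hardware of that era, so a 320x200 frame sits in the corner of a 512x256 texture.

```c
#include <stdint.h>

/* Round up to the next power of two: pre-NPOT hardware requires
 * power-of-two texture dimensions, so allocate the texture at
 * next_pow2(w) x next_pow2(h) and update only the w x h corner. */
static unsigned next_pow2(unsigned v)
{
    unsigned p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

/* Per-frame path of the "dumb framebuffer" approach:
 *   1. expand the paletted frame to 32-bit RGBA in main memory;
 *   2. glTexSubImage2D(...) the w x h rectangle into the texture
 *      (created once at startup with glTexImage2D);
 *   3. draw one textured quad scaled to the window.
 * Step 1 is shown; steps 2-3 are standard GL boilerplate. */
static void upload_frame(const uint8_t *frame8, uint32_t *staging,
                         unsigned w, unsigned h, const uint32_t pal[256])
{
    for (unsigned i = 0; i < w * h; i++)
        staging[i] = pal[frame8[i]];
    /* glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
     *                 GL_RGBA, GL_UNSIGNED_BYTE, staging); */
}
```

With a paletted texture (the GL_EXT_paletted_texture extension, where supported) step 1 disappears entirely, which is the "conversion done in hardware" point above.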
Oh yeah, have you tried running ZDoom in DGA mode? SDL has DGA support that can be toggled on by setting the environment variable SDL_VIDEODRIVER to 'dga'. You will need to be root to run this.
Also, if you have a graphical framebuffer console, it's worth trying to run zdoom on it (yes, SDL has framebuffer console support).
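The two suggestions above amount to something like the following (the driver names 'dga' and 'fbcon' are SDL 1.2's; the zdoom binary name and path are assumed):

```shell
# DGA: run from inside X, as root
SDL_VIDEODRIVER=dga ./zdoom

# Framebuffer console: run from a fbdev-enabled text console, no X
SDL_VIDEODRIVER=fbcon ./zdoom
```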
BTW Randy could I see your current code? I'd like to investigate this myself.
-
randi
- Site Admin
- Posts: 7749
- Joined: Wed Jul 09, 2003 10:30 pm
Would you like to see the source code before or after I autoconfiscate it? I'm working on making it use autoconf right now. But if you don't want to wait any longer, I still have the source with my custom Makefiles. (Although there is a crashing bug that kills it real quick which I introduced while trying to make it faster before starting the autoconf work.)