
Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 1:25 am
by MartinHowe
Posting here to avoid derailing the GLES thread - and since I started this thread, I guess I can join the GPU discussion derailment myself if I want to :p

I know basically how the original software renderer works; one would think that on a modern system in software-renderer mode, the CPU composes the scene into a framebuffer and then simply hands it to the GPU to put on the screen. One would also think that on an old graphics card with no OpenGL or whatever, it would simply do whatever the old ZDoom did to push the framebuffer to the graphics hardware. However, even LZDoom requires at least OpenGL 2.1, IIRC, and even the GLES backend requires some form of OpenGL, even for maps that do not use any special features. That seems odd: if it requires hardware support, can it even truly be called a software renderer anymore? Generally speaking, there have been so many changes to rendering since ZDoom that it would be nice to know how the GZDoom/LZDoom renderer works in any case, perhaps via a flowchart or similar.

So can anyone tell me on a technical level at what point the software rendering process in GZDoom or LZDoom diverges from that in ZDoom?

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 2:00 am
by Graf Zahl
The software rendering process itself has not changed.
What has changed is how stuff gets transferred to the graphics hardware.

Originally Windows used DirectDraw for that, but it became a problem around 2005 when graphics hardware had changed too much. Drivers had increasingly worse support for this method, so ZDoom got a D3D9 backend to handle the transfer. The main problem with this backend was that it was poorly encapsulated and did a lot of high-level work deep in the backend, like preparing 2D rendering. This was mainly owed to still having to support DirectDraw for older hardware.

But ultimately this means that since 2005, even ZDoom used basically the same method as now to present its software-rendered output. The main drawback was that this was Windows-only - on the other OSs it still relied on archaic access methods.
The main issue with OpenGL here is that in order to do anything useful you need GL 2.x, as 1.x is just too limited. Even when running the first survey in 2018, the number of systems we were unable to classify was in the single digits, i.e. for all intents and purposes they were already non-existent. That is why the requirement is mainly GL 2.1, which means standalone graphics hardware from 2004 and up and integrated hardware from 2009 and up.

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 6:29 am
by MartinHowe
Thanks, I think I got it now; so to be sure, in my own words:

Supporting non-OpenGL systems would need DirectDraw on Windows and arcane platform-specific stuff on Linux etc.
OpenGL 1.x doesn't have a simple "transfer this framebuffer to the hardware's framebuffer" or an easy way of doing that.

Is this right?

Presumably backporting ZScript to ye old ZDoom 2.8.1 would be impossible because the overall internals of the system have changed too much?
(I'm not asking for this, more thinking if it's worth trying to do myself if by some miracle I ever have time.)

Digression as to why anyone would want that spoilered below so it doesn't get in the way:
Spoiler:

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 6:38 am
by dpJudas
A simpler way of saying it is that in the DOS age Doom's software renderer could just write to the memory address of the video buffer and then, on the next vsync, change which address was being displayed by the VGA card. In the modern age this is simply not possible, and all Doom ports, including Chocolate Doom, use a hardware driver to present the output of the software renderer in one way or another (Chocolate Doom uses SDL to do it, but it's still done that way).

The only really special thing about ZDoom and GZDoom is that Randi took it one step further and decided that since it was already using the 3D hardware anyway, it might as well render the 2D HUD stuff using a shader. But the 3D scene itself is still 100% software rendered - it's just how it gets presented that has been upgraded to modern graphics cards (where modern here means anything after the year 2000).

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 6:48 am
by MartinHowe
Thanks @dpJudas, so if I get this right, even ZDoom uses SDL on Linux, which in turn means some form of OpenGL is required even for 2.8.1?

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 6:53 am
by Rachael
OpenGL is not required on Linux for ZDoom 2.8.1, but if you have it, it will be used to accelerate the presentation. However, unlike with Windows and the Direct3D components, ZDoom's 2D drawing will happen exclusively with the existing software drawers, which means it will be unable to present things like truecolor weapon sprites or a transparent console such as is available on Windows.

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 7:04 am
by drfrag
LZDoom AFAIR works the same as ZDoom and old versions of GZDoom, but you can compile it without SSE2 on Windows and it runs. The problem could be that you need to compile the sound DLLs without SSE2 too. So in theory it should run on a Pentium II and Windows XP. I mean the g3.3mgw branch.

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 7:05 am
by dpJudas
For Linux it is a little bit complicated because Linux can be many things. If we assume a typical Linux distribution with X11 active, then SDL has to hand off the pixels from the software renderer to X11 one way or another.

When Graf was discussing DirectDraw vs. Direct3D 9, it was really about which API offered the most efficient way of doing this. He described how driver support for DirectDraw diminished; that happened because the API no longer mapped well to what was really going on and had become emulated. Using a shader had become more efficient.

In X11 land this usually means SDL uses something like an X11 pixmap, OpenGL, or maybe even Vulkan. Which strategy SDL picks doesn't really matter that much, because on the hardware front there is little difference: it is all effectively a memory transfer of an image (or a memory map to an image on the GPU). In all cases the final image is drawn using shaders by the GPU when X11's compositor sends it to the video framebuffer. No application has direct framebuffer access anymore.

@Rachael: I actually added truecolor support to the ZDoom 2D drawer fallback code a long, long time ago. GZDoom doesn't have this fallback code anymore - I think Graf removed it when he simplified stuff in order to support multiple backends.

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 7:23 am
by Graf Zahl
The entire software 2D drawer was removed during the backend unification in GZDoom 3.4. It was simply too much of an obstacle to maintain, so when I refactored the 2D code into a device-independent layer that forwards a list of polygons to the actual renderer, it all went away - at that point there was no need to keep the fallback any longer.

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 8:01 am
by drfrag
But what was that code for? The Softpoly backend is much slower than the old code on SDL and Linux, in both 8- and 32-bit Carmack modes - for relatively old CPUs, that is.

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 8:12 am
by dpJudas
The old ZDoom backend code supported rendering the 2D parts into the software renderer's output image and then handed that off to SDL. It might even have used SDL to do the palette conversion (which SDL most likely did with a shader). With softpoly, the palette conversion is done by softpoly, then 2D is drawn by softpoly, and only then is it sent to SDL as a 32 bpp image.

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 11:29 am
by MartinHowe
For the avoidance of doubt, I do know there is no direct access to graphics hardware on modern systems at the app level; I was assuming some sort of system call to reserve something and grant managed access to it, à la DirectX.

That said:
Graf Zahl wrote:refactored the 2D code to be a device independent layer that forwards a list of polygons to the actual renderer.

I had kinda guessed that this is how the modern GZDoom works, aside from the software renderer itself being unmodified; however, I couldn't figure out how one converts a bitmap (well, pixel map) into polygons; surely that's the graphical equivalent of converting an MP3 back into the original MIDI tracks? So when the software renderer has done its job and created the framebuffer image for that frame, what happens next?

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 12:12 pm
by phantombeta
It's drawn as a single rectangle (i.e., two tris) that covers the whole screen, using the software-rendered view as the texture.

Re: Multithreading Doom's playsim

PostPosted: Thu Apr 01, 2021 12:14 pm
by Graf Zahl
Yeah, nothing really special here - the content of the 3D view gets uploaded as a normal texture, rendered to the screen and deleted afterward.