Internal resolution scaling/quantization option
Moderator: GZDoom Developers
Internal resolution scaling/quantization option
Given that GZDoom now supports tonemaps that produce more or less software-perfect visuals in OpenGL mode, I was wondering whether it would be possible to implement an option to toggle/adjust the game's internal resolution. Since hardware rendering on modern cards can't display at anything less than 640x480, this could be a means of "faking" the old DOS resolution à la Chocolate Doom, giving a true "old school" look.
Thoughts?
- Caligari87
- Admin
- Posts: 6174
- Joined: Thu Feb 26, 2004 3:02 pm
- Preferred Pronouns: He/Him
- Contact:
Re: Internal resolution scaling/quantization option
I feel like this was very recently addressed and abandoned for some reason...
EDIT: Here's the thread I was thinking of. Looks like it was gonna break legacy support?
Re: Internal resolution scaling/quantization option
Ah, that's why I couldn't find it under a "resolution" search.
Shame.
Re: Internal resolution scaling/quantization option
I'll see if I can dig up dpJudas's older attempt at it and try to fix it.
But you can definitely expect Graf not to be invested in this. My main interest in it, other than being retro-looking like the tonemap shader, is that it would improve performance on lower-end processing units.
My personal interest in it is being able to play software mode without the fans kicking up to 1500 RPM. I like silent computers.
Re: Internal resolution scaling/quantization option
For me, because my monitor is high-dpi (or whatever it's called), to get the window to a decent size to play, my resolution ends up being like 1920x1280 or something like that.
There are maps that work great for me at lower resolutions that drag at these high resolutions. I'd love to be able to have something like the old pixel doubling so that my performance wouldn't tank if I decide to not want to squint at the teeny tiny game screen...
I'm pretty sure I can force Windows to cooperate by tweaking the registry and putting in my own manifest file to override the scaling, but an engine-side capability would be less hackish.
Re: Internal resolution scaling/quantization option
If you run the software renderer with the OpenGL canvas then you can use vid_max_width and vid_max_height to set a resolution lower than the actual window size.
Note however that those two cvars are not official and may be removed again in a future version without warning. They exist solely because I too have a 4K monitor and needed a way to lock the resolution to 1080 to get a better frame rate.
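For illustration, on a 4K display the cap could be set from the console like this (the values are just an example; as noted above, these cvars are unofficial and may disappear):

```
vid_max_width 960
vid_max_height 600
```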
- Hellser
- Global Moderator
- Posts: 2706
- Joined: Sun Jun 25, 2006 4:43 pm
- Preferred Pronouns: He/Him
- Operating System Version (Optional): Windows 11
- Graphics Processor: ATI/AMD with Vulkan/Metal Support
- Location: Citadel Station
Re: Internal resolution scaling/quantization option
I would actually be totally behind this. I personally like the look of "Chunky Doom", and it'd be interesting to see things looking chunky and yet pretty.
- Graf Zahl
- Lead GZDoom+Raze Developer
- Posts: 49073
- Joined: Sat Jul 19, 2003 10:19 am
- Location: Germany
Re: Internal resolution scaling/quantization option
dpJudas wrote:If you run the software renderer with the OpenGL canvas then you can use vid_max_width and vid_max_height to set a resolution lower than the actual window size.
Note however that those two cvars are not official and may be removed again in a future version without warning. They exist solely because I too have a 4K monitor and needed a way to lock the resolution to 1080 to get a better frame rate.
This really should be made part of all buffered render paths. Of course the hardware renderer with gl_renderbuffers off needs to be an exception.
Re: Internal resolution scaling/quantization option
Graf Zahl wrote:This really should be made part of all buffered render paths. Of course the hardware renderer with gl_renderbuffers off needs to be an exception.
My attempt at creating a generalized solution did try to do that, but I ran into trouble with the D3D 9 backend. Maybe if/when we fix the 2D drawer stuff it will be easier to achieve. For the GL renderer the support is almost already there - in fact, the macOS full screen version is doing it as you describe.
- Graf Zahl
- Lead GZDoom+Raze Developer
- Posts: 49073
- Joined: Sat Jul 19, 2003 10:19 am
- Location: Germany
Re: Internal resolution scaling/quantization option
Oh yes, the D3D9 backend. I really don't get why this code has become this insanely complex.

But to unify the different render backends, first a lot of cleanup is needed on the texture classes. For too long this has suffered from the base class not really being suitable for hardware rendering, with the D3D9 stuff haphazardly tacked on and GL having to do its own subclasses to steer clear of these problems.

And to top it off, the atlassing in the software rendering parts is a mystery of its own. I wonder how much advantage it really brought, but considering that few HUDs draw more than 100 elements it's mostly irrelevant. Instead I'd very much prefer to explicitly create atlas textures for fonts only and do this at a higher level, but forget about this stuff in the backends. Was that old ATI hardware with SM 1.x really so weak that it couldn't deal with 100 or so draw calls that the entire code had to be mucked up with this kind of complexity?
Another thing I'd prefer to eliminate is using two-stage textures with the second texture serving as a palette. It doesn't even fully work! Some years ago I had a bug report about an incorrectly colored translated sprite but was unable to do anything about it because this code's assumptions were wrong.
This WILL need a work branch, though, because temporarily breaking stuff is close to inevitable. And I think some very old hardware may have to suffer a bit and be forcibly reverted to software 2D drawing in order to keep the code manageable.
Re: Internal resolution scaling/quantization option
Graf Zahl wrote:Oh yes, the D3D9 backend. I really don't get why this code has become this insanely complex. But to unify the different render backends first a lot of cleanup is needed on the texture classes. For too long this suffered from the base class not really being suitable for hardware rendering with the D3D9 stuff haphazardly being tacked on and GL having to do its own subclasses to steer clear of these problems.
I assume you mean FTexture and the derived classes. What changes are you planning there? To move the GetColumn and span stuff to the software renderer?
Graf Zahl wrote:And to top it off, the Atlassing in the software rendering parts is a mystery on its own. I wonder how much advantage this really brought, but considering that few HUDs draw more than 100 elements it's mostly irrelevant. Instead I'd very much prefer to explicitly create atlas textures for fonts only and do this at a higher level, but forget about this stuff in the backends. Was that old ATI hardware with SM 1.x really that weak that it couldn't deal with 100 draw calls or so that the entire code had to be mucked up with this kind of complexity?
The atlas serves both the purpose of reducing state changes as well as making images work with hardware having power-of-two texture limitations. Back in the day batching things could make a really big difference, but I don't know if it is as bad today. One thing is certain tho - poor batching can end up being slower than not doing it at all.
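For context, the kind of atlas allocation being discussed can be sketched as a simple "shelf" packer: small images are placed left to right into horizontal rows of one large texture, so consecutive 2D draws can share a single texture bind. This is an illustrative sketch under that assumption, not the actual GZDoom packer:

```cpp
// Hypothetical shelf-style atlas allocator: images are packed into
// rows ("shelves"); a new shelf starts when the current row is full.
struct AtlasRect { int x, y, w, h; };

class ShelfAtlas
{
public:
    ShelfAtlas(int width, int height) : mWidth(width), mHeight(height) {}

    // Returns false when the image cannot fit anywhere in the atlas.
    bool Allocate(int w, int h, AtlasRect& out)
    {
        if (w > mWidth || h > mHeight)
            return false;
        if (mCursorX + w > mWidth)      // current shelf full: start a new one
        {
            mShelfY += mShelfHeight;
            mCursorX = 0;
            mShelfHeight = 0;
        }
        if (mShelfY + h > mHeight)
            return false;
        out = { mCursorX, mShelfY, w, h };
        mCursorX += w;
        if (h > mShelfHeight)
            mShelfHeight = h;
        return true;
    }

private:
    int mWidth, mHeight;
    int mCursorX = 0, mShelfY = 0, mShelfHeight = 0;
};
```

The trade-off mentioned above shows up here directly: the packer wastes the space above short images on a tall shelf, and rebuilding or evicting a full atlas can cost more than the texture binds it saves.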
Graf Zahl wrote:Another thing I'd prefer to eliminate is using two-stage textures with the second texture serving as a palette. It doesn't even fully work! Some years ago I had a bug report about an incorrectly colored translated sprite but was unable to do anything about it because this code's assumptions were wrong.
You mean the translation stuff? Or is there another two-stage texture thing?
Graf Zahl wrote:This WILL need a work branch, though, because temporarily breaking stuff is close to inevitable. And I think some very old hardware may have to suffer a bit and be forcibly reverted to software 2D drawing in order to keep the code manageable.
Hehe, yes, no doubt about that. Just let me know when you think we should start working on this.
- Graf Zahl
- Lead GZDoom+Raze Developer
- Posts: 49073
- Joined: Sat Jul 19, 2003 10:19 am
- Location: Germany
Re: Internal resolution scaling/quantization option
dpJudas wrote:I assume you mean FTexture and the derived classes. What changes are you planning there? To move the GetColumn and span stuff to the software renderer?
No plans yet. I first need to get an overview over the different backends.
dpJudas wrote:The atlas serves both the purpose of reducing state changes as well as making images work with hardware having power-of-two texture limitations. Back in the day batching things could make a really big difference, but I don't know if it is as bad today. One thing is certain tho - poor batching can end up being slower than not doing it at all.
Sure. The thing is just that the OpenGL renderer never batched at all, and with the low number of polygons drawn, even on AMD with their notoriously bad draw call performance it never actually mattered. I cannot say if it was an issue in the beginning. Randi was known for this kind of tinkering, not because it brought an advantage but just for the sake of doing it.
Instead of random unorganized atlassing in the backend I'd very much prefer to atlas the fonts on a higher level and leave it at that. I believe that will allow the same amount of batching but do it in a backend-independent manner.
dpJudas wrote:You mean the translation stuff? Or is there another two-stage texture thing?
I mean putting much of the system-independent logic into a higher-level class so that all backends can benefit from it. The translations are just one part.
dpJudas wrote:Hehe, yes, no doubt about that. Just let me know when you think we should start working on this.
No idea yet; first we need a workable roadmap.
And one thing that absolutely needs fixing is PALVERS handling. That stuff is so totally broken I do not know why it was even added...