Mon May 07, 2018 7:13 am
Graf Zahl wrote:That's precisely the issue here. Some people never realize that an engine with loads of modern-ish hardware rendering effects cannot run well on old hardware and that eventually the time comes where a decision has to be made between backwards compatibility and better features.
Let's be serious here: if it wasn't for Intel completely missing hardware trends 10 years ago, we wouldn't face this problem. But thanks to their crappy 3D hardware, a lot of cheap-ass laptops were dumped onto the market with graphical capabilities that don't even match a real vintage GL2 system from 13 years ago. I once had the fun of running GZDoom on an older Mac with an Intel HD 3000, and calling the performance "bad" would be a major understatement - but the hardware we are talking about here is even worse and will already start to choke on medium-sized Boom maps without any advanced features enabled.
What about the AMD APUs that contain integrated GPUs? The whole appeal of APUs was nullified by the CPU's crappy cheap-ass performance. AMD APUs were the total opposite of Intel CPUs with integrated GPUs: AMD got the graphics part right but the CPU part wrong, whereas Intel got the CPU part right but the GPU part wrong.
I will be honest here: if it weren't for Intel's shitty GPU hardware, the whole situation could be much better. If Intel had followed the hardware trends in time, the end experience would be much better, and GZDoom could move forward without having to keep up its support for such GPUs. Plus, GZDoom could move forward more quickly.
But no. Now we are stuck in a situation where we are forced to support the old technologies, with almost no way to move forward and support newer technologies like Vulkan. I would be more than glad to pull out the shitty GL2 support if I were developing a hardware-accelerated Doom port, because that shit makes it impossible for GZDoom to move forward. People need to realize that they have to move quickly if they don't want to be thrown under the bus. People also need to realize that backward compatibility is never going to last forever.
Mon May 07, 2018 7:28 am
I mostly agree. But have a look at the other OpenGL ports. AFAIK none of them has moved beyond GL2 yet. Some even still support GL 1.4 because "compatibility is important".
Mon May 07, 2018 9:34 am
Before we go throwing OpenGL 2 to the wind, I would like to interject that Broadcom's VideoCore4 chipset requires OpenGL 2 support. This is the GPU in use on the current Raspberry Pi boards, and they will not work with a modern renderer.
That is a computer system that is still maintained today, and GZDoom's OpenGL ES path does not support GLES 2, either.
So throwing that out will ditch support for Raspberry Pi in its current form, completely. This is why I have been trying to push for modularizing the system in some way, removing it from the main branch but allowing it to exist as an addon - so that support can be continued for such systems until they get upgraded.
Mon May 07, 2018 2:11 pm
I think we're only talking about GL2.0 support being a fallback, and not the norm. Even then, keep in mind that the Software Renderer doesn't require any OpenGL support, and is slated to include model support soon, so it's not like those people don't have an option...
Mon May 07, 2018 2:19 pm
As of the 2D refactor, the software renderer does require OpenGL support, and will not work without it. If you have software OpenGL it runs very poorly on all except the most modern processors. Also - don't even count on the ARM chip in a Raspberry Pi to run the software renderer well - it simply does not, except at very low resolutions.
Tue May 08, 2018 10:12 am
Rachael wrote:As of the 2D refactor, the software renderer does require OpenGL support, and will not work without it. If you have software OpenGL it runs very poorly on all except the most modern processors. Also - don't even count on the ARM chip in a Raspberry Pi to run the software renderer well - it simply does not, except at very low resolutions.
I've done some testing on a Pentium 3 @ 1 GHz, and I get 60 fps at 640 (8-bit) on Win XP; according to some benchmarks, the single-core performance of that ARM Cortex-A7 is much lower at the same clock speed. On Win98 the game no longer runs well (I get half the framerate, though full speed with cl_capfps on), and there's no sound there.
Tue May 08, 2018 10:23 am
Of course it's slower. ARM chips were not designed for intensive CPU processing - they're the same chips that are used in most cell phones and pocket devices today, including but not limited to watches, handheld price scanners, pocket organizers, etc.
The key here is that an ARM chip consumes less power - and therefore has longer battery life - than any IA32/64 chip could hope to achieve, whether of Intel's or AMD's design.
So yes, running anything "software mode" on any ARM CPU is undesirable. This goes for D-Touch, as well.
Tue May 08, 2018 2:15 pm
Somewhat related: some weeks ago at work, a colleague asked me how the web version of our app could get a large list of data from the server faster than the iPad version took to calculate it locally. Well, that's mobile ARM vs. server Intel! The performance difference was so large that even transferring the data over the internet did not cause it to fall behind.
Tue May 08, 2018 3:07 pm
ARM makes an excellent server architecture for anything that does not require a large amount of processing, due to its low power consumption. This includes things like multiplayer FPS servers (i.e. Quake-likes, provided the player count stays within a reasonable limit), file sharing, routers, downloaders, simple HTTP and low-weight PHP processing, python scripts, and other things of that nature.
The moment any sort of real processing comes into play, such as statistics processing, compression/decompression, MPEG processing, software-rendered graphics, etc., you will start to see a clear advantage in using Intel processors, since they can do a ton of math quicker and a lot more gracefully. There is no real competition - ARM processors are good at what they're designed for, especially when you're trying to "Go Green" as a corporation, but they're not designed for Doom or Quake rendering. That is why the Raspberry Pi version of GZDoom needs the GPU in order to run at a decent frame rate.
A good multi-server web setup is a series of ARM processors to do the front-end processing for serving web pages and content and front-line security, while an Intel processor handles the actual database and back-end processing and does the heavy lifting.
Wed May 09, 2018 6:30 am
The Cortex-A7 and A53s in Raspberry Pi 2 and 3 are the very low end of ARM cores. Some ARM cores, particularly Apple's more recent ones, are pretty damn fast.
Wed May 09, 2018 5:38 pm
So, is there a rough timeline for this stuff? Is eventual OpenGL 4 support planned? Should we expect a Vulkan renderer in GZDoom in the next few years? Or is it just "OpenGL 3 is widely supported, so that's the thing now."?
Edit: Of course, thinking back on it, I don't see why I'm thinking that GL4 support is important, though I am wondering about the Vulkan thing...
Wed May 09, 2018 6:35 pm
OpenGL 4 is not going to be supported any more than it already is, and even that much, I think, just means using a few of its extensions to make things faster and nothing more.
OpenGL support is in no way a prerequisite for Vulkan support, since Vulkan is an entirely different API. Vulkan is a more pedal-to-the-metal API, whereas OpenGL does a lot more hand-holding and hardware abstraction (which makes a programmer's life a lot easier than with other APIs like Vulkan, Direct3D, and Apple's Metal).
But even if it hasn't happened in GZDoom itself yet, OpenGL has in general been deprecated in favor of Vulkan, so when GZDoom does make the eventual move, OpenGL will literally become "the legacy path".
Thu May 10, 2018 12:50 am
The renderer is already mainly GL 4.4; the GL 3 support mostly goes through less performant fallback paths.
There are a few things that do not work at all without OpenGL 4.4+ because they require the generic shader storage buffers. I haven't made a full list of those yet, but AFAIK the shadowmaps require this. They are too slow on non-Vulkan-compatible cards anyway, and even on low-end Vulkan-compatible ones.
However, since non-GL4 hardware was discontinued by all manufacturers 7 or 8 years ago, don't expect us to focus on that old stuff when considering new features. It is far less of a limiter than GL 2.x, but it's still old legacy hardware that imposes some rather stupid restrictions on how to do things, which actually cost performance and require code duplication.
Yes, it's still roughly 40% of our user base, but let's not forget that for many things that could be done in the renderer, these cards simply don't have the power. They will most likely lose out on new developments.
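To illustrate the kind of version-and-extension gating described above, here is a hypothetical sketch in plain Python - not GZDoom's actual code. The extension name GL_ARB_shader_storage_buffer_object is real (it is the GL 4.4 shader storage buffer feature), but the function and path names are made up for illustration:

```python
def pick_render_path(major, minor, extensions):
    """Choose a renderer path from a reported GL version and extension set.

    Hypothetical sketch of fallback-path selection; path names are invented.
    """
    version = (major, minor)
    if version >= (4, 4) or "GL_ARB_shader_storage_buffer_object" in extensions:
        # Shader storage buffers available: full feature set (e.g. shadowmaps).
        return "gl4_ssbo"
    if version >= (3, 3):
        # Modern core profile without SSBOs: less performant fallback paths.
        return "gl3_fallback"
    if version >= (2, 0):
        # Legacy path for VideoCore4-class hardware discussed earlier.
        return "gl2_legacy"
    return "unsupported"
```

In a real engine the version and extension list would come from the driver at startup (e.g. glGetString/glGetIntegerv); the point is only that each extra path costs code duplication.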
The main problem here is Apple, because Apple doesn't support any modern-generation features in OpenGL (GL 4.1 would better be named 3.5 in terms of feature support anyway). Fortunately, Macs are mostly irrelevant, being the system with the smallest user share around. And I don't expect this to improve anymore, with Apple basically abandoning the enthusiasts and solely focusing on the money (i.e. selling underpowered and overpriced hardware to corporate customers for iOS development). I wonder what third-party Vulkan support on macOS will bring...
Vulkan will definitely be a thing. The main reason it hasn't gone forward is simply lack of time - and the need to split off the API-independent renderer parts, which is now mostly complete. But to go forward from here, a lot of research is needed first, and with a full-time programming job and summer ahead, don't expect any quick developments. I'd rather spend my free time elsewhere than in front of a computer if the weather allows it.
Mon May 14, 2018 8:11 pm
Is it possible to use Lanczos resampling for the resolution scale? That would be great.
Mon May 14, 2018 9:13 pm
First, suggestions like this belong in the Feature Suggestions forum.
Secondly, that's not possible unless an additional layer of processing is applied to the scaler. So either this will require dpJudas becoming extremely bored (which he hasn't been, lately) or an external code submission.
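For the curious, the Lanczos resampling asked about above is just windowed-sinc filtering. Here is a minimal, hypothetical 1-D sketch in plain Python - GZDoom itself is C++, and none of these names come from its code; a real scaler would apply this per axis on the GPU:

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos window: a*sin(pi*x)*sin(pi*x/a)/(pi*x)^2 for |x| < a, else 0."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample_1d(samples, new_len, a=3):
    """Resample a 1-D signal with Lanczos interpolation (illustrative helper)."""
    scale = len(samples) / new_len
    out = []
    for i in range(new_len):
        center = (i + 0.5) * scale - 0.5   # source position of output sample i
        left = math.floor(center) - a + 1  # first source tap in the window
        total = weight_sum = 0.0
        for j in range(left, left + 2 * a):
            w = lanczos_kernel(center - j, a)
            s = samples[min(max(j, 0), len(samples) - 1)]  # clamp at the edges
            total += w * s
            weight_sum += w
        out.append(total / weight_sum)  # normalize so flat signals stay flat
    return out
```

The "additional layer of processing" mentioned in the reply corresponds to running such a filter over the rendered frame before presentation, rather than the plain scaling the engine already does.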