Wed Mar 24, 2021 4:55 am
Wed Mar 24, 2021 10:31 am
Graf Zahl wrote:It really depends on what you need. 9 years ago I had the choice of going mid-range or spending €200 more on a more powerful CPU. So, knowing my gaming habits, I merely bought a low mid-range GPU but invested a bit more into the CPU. I still run that system; admittedly it has problems decoding 4K videos, but since I cannot display them, who cares?
But I am slowly reaching the stage where an upgrade may make sense, because for many tasks the 4-core CPU and 8 GB of RAM won't cut it anymore. Still, getting 9 years of life out of a computer surely isn't bad at all! With a weaker CPU I might have had to upgrade 4 years ago already.
Wed Mar 24, 2021 11:22 am
Wed Mar 24, 2021 12:25 pm
Graf Zahl wrote: such an approach is very, very hostile toward code readability and refactorability...The loss of productivity and maintainability will inevitably take its toll here.
Wed Mar 24, 2021 12:33 pm
Blzut3 wrote:By the way, you do know "server-grade memory" means slower with higher latency, right? Granted, the modules usually have a lot of overclocking headroom, but most server boards don't allow taking advantage of that.
Wed Mar 24, 2021 12:41 pm
Wed Mar 24, 2021 12:44 pm
MartinHowe wrote:Blzut3 wrote:By the way, you do know "server-grade memory" means slower with higher latency, right? Granted, the modules usually have a lot of overclocking headroom, but most server boards don't allow taking advantage of that.
Pity, I naturally assume server things are better than consumer things.
MartinHowe wrote:I often see server CPUs clocked lower than even the slowest desktop chips in the same family, and have wondered why for a long time. I guess they are hoping for more parallel use and thus better use of more cores.
Wed Mar 24, 2021 12:54 pm
Graf Zahl wrote:the only bloat we have to contend with is AMD's lousy OpenGL performance.
Wed Mar 24, 2021 1:20 pm
Wed Mar 24, 2021 1:27 pm
Redneckerz wrote:Graf Zahl wrote:the only bloat we have to contend with is AMD's lousy OpenGL performance.
Okay, I am going to bite, because this claim keeps getting reiterated: let's test GZDoom against a reference rasterizer on both Nvidia and AMD hardware then. That there is a regression is one thing, but the orders of magnitude that keep being mentioned in various phrasings are quite another.
This was asked years ago as well, but to this day no definitive verdict on this claim, for better or worse, has been reached, which is also the reason I am bringing it up. Back then a consensus never came because the requester pursued it in increasingly hostile ways.
The worst part of recalling all this? I can't find that forsaken thread anymore, nor the user's name. But it listed the results of a reference OpenGL implementation and challenged Graf to run that test on ATI/AMD hardware. Graf's words were, as far as I can remember, along the lines of "Test it for me instead and we will talk." The requester didn't take that kindly, however, and called dibs.
It's on the tip of my tongue, but I cannot for the life of me come up with the user's name or the thread. It's annoying, because it severely weakens the argument I am trying to make here.
Edit: Hold on to your hineys. I just remembered who it was. Thank you, old me. (Contains some interesting commentary regarding the AMD regression.)
Edit 2: This 2019 post by Graf explains things. But this 2010 post by Rachael agreed with VC. And this 2014 post by Leilei suggested a similar thing: testing GZ against the Mesa3D software rasterizer (in the sense of setting a reference; see the sketch after this post).
Edit 3: Found it.
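For context on that Mesa3D suggestion: on Linux, Mesa can be told to ignore the hardware driver and render with its llvmpipe software rasterizer via environment variables, which is one way to obtain a vendor-neutral reference run. A minimal launcher sketch, assuming a Mesa-based Linux system; the gzdoom binary name is just an example, not a prescribed test setup:

```cpp
// Minimal launcher sketch (Linux/Mesa assumed): force Mesa's software
// rasterizer (llvmpipe) so a run can serve as a vendor-neutral reference.
// LIBGL_ALWAYS_SOFTWARE and GALLIUM_DRIVER are standard Mesa variables;
// the target binary is only an example.
#include <cstdlib>      // setenv
#include <cstdio>       // perror
#include <unistd.h>     // execlp

int main()
{
    setenv("LIBGL_ALWAYS_SOFTWARE", "1", 1);  // bypass the hardware OpenGL driver
    setenv("GALLIUM_DRIVER", "llvmpipe", 1);  // pick the llvmpipe rasterizer

    execlp("gzdoom", "gzdoom", (char*)nullptr);
    perror("execlp");                         // only reached if exec failed
    return 1;
}
```

The same effect can be had by exporting those variables in the shell before launching; the point is simply that a software-rasterized run takes the vendor driver out of the equation.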
Wed Mar 24, 2021 2:09 pm
Graf Zahl wrote:AMD's lousy performance is tied to one property of their driver that has been an issue for at least 12 years and that has never changed: Issuing a draw call (i.e. calling glEnd in immediate mode, or glDrawArrays, or glDrawElements) blocks the entire thread it runs on for a significant amount of time. There's no way around it, vertex buffers change nothing about it *AT ALL*.
The only way to speed up AMD with OpenGL is to reduce the number of draw calls - without incurring even more overhead by doing so.
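To make the point concrete, here is a minimal sketch of the kind of draw call reduction Graf is describing: instead of submitting one draw call per quad, the quads are merged into one vertex/index buffer and submitted with a single glDrawElements call. The Quad/Vertex types and function names are illustrative only, not GZDoom code, and a current OpenGL context (via GLEW or a similar loader) is assumed:

```cpp
#include <GL/glew.h>
#include <vector>
#include <cstddef>

struct Vertex { float x, y, z, u, v; };
struct Quad   { Vertex corners[4]; };

// Naive path: one draw call per quad. On a driver where every submission
// blocks the calling thread, this is where the time goes.
void drawNaive(const std::vector<Quad>& quads, GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    for (const Quad& q : quads)
    {
        glBufferData(GL_ARRAY_BUFFER, sizeof(q.corners), q.corners, GL_STREAM_DRAW);
        glDrawArrays(GL_TRIANGLE_FAN, 0, 4);   // one call per quad
    }
}

// Batched path: upload everything once, then issue a single indexed draw call.
void drawBatched(const std::vector<Quad>& quads, GLuint vbo, GLuint ibo)
{
    std::vector<Vertex> verts;
    std::vector<GLuint> indices;
    verts.reserve(quads.size() * 4);
    indices.reserve(quads.size() * 6);

    for (size_t i = 0; i < quads.size(); ++i)
    {
        const GLuint base = GLuint(i * 4);
        for (int c = 0; c < 4; ++c)
            verts.push_back(quads[i].corners[c]);
        const GLuint idx[6] = { base, base + 1, base + 2,      // two triangles
                                base, base + 2, base + 3 };    // per quad
        indices.insert(indices.end(), idx, idx + 6);
    }

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                 verts.data(), GL_STREAM_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint),
                 indices.data(), GL_STREAM_DRAW);

    glDrawElements(GL_TRIANGLES, GLsizei(indices.size()), GL_UNSIGNED_INT, nullptr);  // one call total
}
```

The trade-off Graf hints at is the second clause: the batching work itself (CPU-side copying, sorting by state) has to stay cheaper than the draw calls it saves, otherwise nothing is gained.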
Rachael wrote:First of all - most of what I see you doing here is bringing up old drama just for the sake of it.
Rachael wrote:Third of all - AMD is provably worse with OpenGL on Windows. It's not just GZDoom, it's every OpenGL game in existence. Does AMD make good cards? Sure. But the OpenGL drivers are trouble. Always have been. Plus, literally every other major update, AMD introduces new bugs in their OpenGL implementation. It's simply non-stop fuckery on that front. The only reason it works better on Linux is because AMD is a lot friendlier to open source on Linux than NVidia ever was, and therefore the AMD drivers there are no longer a colossal fuckfest; they can be improved by the community.
Rachael wrote:I have no doubt that, when it comes down to the bare metal, AMD cards are cheaper and can outperform equivalent-generation NVidia cards in at least half or more of the bare-metal benchmarks. But when you add the cruft on top of it that is the AMD drivers, that all goes right down the drain.
Rachael wrote:I really would appreciate you not bringing that up again, much less name dropping me for something I said a decade ago.
Wed Mar 24, 2021 2:26 pm
I don't agree with VC's conduct back then, but the general idea - testing against a reference to determine the cause of the significant regression - is not an unreasonable one. As the edited post mentions, I don't dispute that there is a performance deficit due to AMD's driver implementation - I am more interested in why said deficit is so large as to seem unreasonable.
ZDoomGL was tested against a reference implementation, and that backed up what users were already experiencing at the time (that ZDoomGL's performance fell well short of the GZDoom builds of the day). The reference implementation test showed that ZDoomGL had half the performance of GZDoom, and that unoptimized code was the cause.
Be that as it may, and I don't disagree there. However, GCN cards are known for their longevity in newer games (which is undoubtedly traceable, for one, to the simple fact that the last-gen consoles used GCN-based GPUs).
That is the only reason I am arguing the reference implementation case - to determine whether the performance regression of GZDoom under OpenGL on AMD hardware really is this significant when exposed to a reference test case instead of a user test case.
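Purely as an illustration of what such a reference test case could measure: Graf's claim above is that each draw call blocks the calling thread, so the quantity of interest is CPU-side submission time, not frame rate alone. A rough microbenchmark sketch, assuming a current OpenGL context with shader, VAO and vertex buffer already bound; none of this is GZDoom's actual benchmarking code:

```cpp
#include <GL/glew.h>
#include <chrono>
#include <cstdio>

// Time N identical draw calls: "submit" is the CPU-side cost of issuing them
// (the part said to block the thread on AMD), "total" includes GPU completion.
void benchmarkDrawCalls(int numCalls, GLsizei vertsPerCall)
{
    using clock = std::chrono::steady_clock;

    glFinish();                                // drain any pending GPU work first
    const auto start = clock::now();

    for (int i = 0; i < numCalls; ++i)
        glDrawArrays(GL_TRIANGLES, 0, vertsPerCall);

    const auto submitted = clock::now();       // all calls issued
    glFinish();                                // wait for the GPU to finish them
    const auto finished = clock::now();

    const std::chrono::duration<double, std::milli> submitMs = submitted - start;
    const std::chrono::duration<double, std::milli> totalMs  = finished - start;

    std::printf("submit: %.2f ms, total: %.2f ms for %d draw calls\n",
                submitMs.count(), totalMs.count(), numCalls);
}
```

Running the same harness on Nvidia, AMD and a software rasterizer would at least put a number on how much of the gap is submission overhead rather than raw GPU throughput.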
Wed Mar 24, 2021 3:35 pm
MartinHowe wrote:Pity, I naturally assume server things are better than consumer things. I often see server CPUs clocked lower than even the slowest desktop chips in the same family, and have wondered why for a long time. I guess they are hoping for more parallel use and thus better use of more cores.
Thu Mar 25, 2021 11:18 am
Redneckerz wrote:I don't agree with VC's conduct back then, but the general idea - testing against a reference to determine the cause of the significant regression - is not an unreasonable one. As the edited post mentions, I don't dispute that there is a performance deficit due to AMD's driver implementation - I am more interested in why said deficit is so large as to seem unreasonable.
Redneckerz wrote:Based on the implementation, I would expect a regression, but not of the orders of magnitude on display here.
Redneckerz wrote:I was collecting evidence to support the general idea of a reference implementation, not to show support for the requester's method of going after Graf. If I gave that impression, I apologize.
Thu Mar 25, 2021 11:27 am
Rachael wrote:And the thing I remember most was just how much he went through the code trying to change things, this and that and the other, and just simply nothing seemed to work. In the end he was able to figure out what the actual problem was (the stencil buffer was effectively broken) but it was an issue that he could not fix at the time. I remember that I felt bad putting him through that.
Rachael wrote:I think this is relevant because you say you don't expect things to be as shitty as they are for AMD - but you fail to realize that AMD is just focusing on what butters their bread. Just like any corporation. Even on NVidia, OpenGL is pretty shitty, it's just less so. Literally anything that uses any other driver interface on Windows simply works better. Be it Vulkan, classic Direct3D, or even D3D12.