Shader Cache

Re: Shader Cache

by Graf Zahl » Wed Nov 21, 2018 5:06 pm

The thing with NVidia is: it doesn't really cache a compiled binary! The entire feature is faked on their hardware so that third parties cannot analyze their hardware implementation.

Re: Shader Cache

by Rachael » Wed Nov 21, 2018 4:20 pm

I really would prefer to do a CVAR check instead of a card check, and make the whole thing optional. And before you think I am insane, hear me out:

There is no guarantee every NVidia and AMD driver will have a cache. There is no guarantee every Intel driver won't. For those whose drivers don't have one, this feature is a godsend, and it WILL reduce the number of reports appearing in Bugs and/or Technical Issues.

Even so - I've noticed NVidia's caches are like, 2-3 megabytes, maybe? Peanuts. - Intel's, on the other hand, is 37 megabytes. So overall, I think you're dramatically overstating the "harm" that this feature is doing.
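
The hook itself would be trivial; roughly something like this, as a sketch assuming GZDoom's usual CVAR macro (the name gl_shadercache is made up here for illustration):

    // Hypothetical CVAR; name and default are for illustration only.
    CVAR(Bool, gl_shadercache, false, CVAR_ARCHIVE | CVAR_GLOBALCONFIG)

    // ...and wherever the shaders get built:
    if (gl_shadercache)
    {
        // try to load a previously saved program binary before compiling
    }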

Re: Shader Cache

by Graf Zahl » Wed Nov 21, 2018 3:09 pm

I'd still want to disable this on cards that have a native shader cache. There it will only create some needless overhead.

Re: Shader Cache

by drfrag » Wed Nov 21, 2018 2:42 pm

I see dpJudas has added this. Well done! :wub: It was way more complicated than I expected; I probably couldn't have done it myself. :oops: :)
I've only tested it with the vintage build, on NVidia.

Re: Shader Cache

by Rachael » Mon Nov 19, 2018 2:10 pm

To be quite frank, you shouldn't even need to do that.

There are two things that can change the compiled shaders: the driver doing the compiling, and the shader program itself. As long as those two stay consistent, the cache stays valid - and that is exactly what the shader cache is meant to help you with. If you have a dual-GPU system, fine, all is dandy there: you compile two sets of shaders instead of one. It shouldn't be that complicated. Either way, Intel benefits, and NVidia sees no real drawbacks.

If a shader changes, obviously it has to be recompiled. If you properly hash the original program source, that's not an issue - it simply means the new shader source hashes to a key that isn't in the cache. So? Recompile! Simple!

If you change your GPU, that again changes the hash - simply recompile, problem solved.
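
Roughly speaking, something along these lines would do it (a sketch only, not actual GZDoom code; the helper name is made up, a GL context is assumed, and a real implementation might want a stronger hash than std::hash):

    #include <functional>
    #include <string>
    // GL headers assumed to be included already.

    // Build a cache key from the GPU/driver identity plus the shader source,
    // so a driver update, a GPU swap or a shader edit all just miss the cache.
    std::string BuildShaderCacheKey(const std::string &shaderSource)
    {
        std::string id;
        id += reinterpret_cast<const char *>(glGetString(GL_VENDOR));
        id += reinterpret_cast<const char *>(glGetString(GL_RENDERER));
        id += reinterpret_cast<const char *>(glGetString(GL_VERSION));
        id += shaderSource;
        return std::to_string(std::hash<std::string>{}(id));
    }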

As for old caches hanging around, that's what the prune is meant to solve - anything that is ~2 months old - gone.
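
That part is a few lines with C++17's std::filesystem (again just a sketch; the function name and the roughly-two-month cutoff are only the idea from above):

    #include <chrono>
    #include <filesystem>

    // Delete any cache file that hasn't been written to in roughly two months.
    void PruneShaderCache(const std::filesystem::path &cacheDir)
    {
        using namespace std::chrono;
        const auto cutoff = std::filesystem::file_time_type::clock::now() - hours(24 * 60);
        for (const auto &entry : std::filesystem::directory_iterator(cacheDir))
        {
            if (entry.is_regular_file() && entry.last_write_time() < cutoff)
                std::filesystem::remove(entry.path());
        }
    }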

The issues brought up in this thread overcomplicate the original issue unnecessarily.

Re: Shader Cache

by drfrag » Mon Nov 19, 2018 1:44 pm

I might look into it myself, but I'd only check the graphics vendor.
Edit: Testing could be a problem, since I only have access to my sister-in-law's laptop once every few days. I chose AMD over Intel for my new "crappy" laptop, but it's great compared to my three Pentium M ones (Radeon 7000, Radeon 9200, Intel GMA 900). Unfortunately it's a GMA 900 and not a 950 like yours, so it's even slower. BTW, you should replace the thermal paste (in case you haven't). And let's not forget my P4 2.8 with a Trident Blade XP, the crappiest of them all. :)

Re: Shader Cache

by Rachael » Mon Nov 19, 2018 12:29 pm

I haven't started yet. Needless to say, the posts in this thread instituting arbitrary rules on this have killed my motivation. The reasons behind them are understandable, but they are easily mitigated by hashing the GPU name together with the pre-compiled code, which would make such restrictions wholly unnecessary.

Re: Shader Cache

by drfrag » Mon Nov 19, 2018 8:46 am

What happened with this? Any technical problems?

Re: Shader Cache

by Marisa the Magician » Sun Oct 28, 2018 4:34 am

If you're doing a shader cache for Intel, it should be only for Windows. On Linux (and pretty much any other platform that uses Mesa) there already is a system-wide shader cache by default on most distros.

Re: Shader Cache

by Graf Zahl » Sat Oct 27, 2018 2:48 pm

If you keep your code inside gl_shader.cpp, no.

Re: Shader Cache

by Rachael » Sat Oct 27, 2018 1:56 pm

Alright. I might be able to manage that.

If I do this, is it going to interfere with the backend abstraction refactor?

Re: Shader Cache

by Graf Zahl » Sat Oct 27, 2018 1:47 pm

Furthermore, it should only be active for Intel; it makes absolutely no sense to cache the shader binaries ourselves on NVidia and AMD.
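
Such a check would amount to little more than looking at the vendor string, roughly like this (a sketch; the exact strings drivers report vary, and a GL context is assumed):

    #include <cstring>
    // GL headers assumed.

    // Only use our own binary cache when the vendor string looks like Intel.
    bool ShouldUseOwnShaderCache()
    {
        const char *vendor = reinterpret_cast<const char *>(glGetString(GL_VENDOR));
        return vendor != nullptr && strstr(vendor, "Intel") != nullptr;
    }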

Re: Shader Cache

by dpJudas » Sat Oct 27, 2018 12:27 pm

@Rachael: if you want to code support for this, what you need to do is call glGetProgramBinary right after each glLinkProgram call. Then, on the next launch, instead of calling glCompileShader and glLinkProgram, it should call glProgramBinary with the output from the previous call to glGetProgramBinary.
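
In rough outline, that call sequence looks like this (a sketch, not the actual patch; error handling and the file I/O are omitted, and the variable names are illustrative):

    #include <vector>
    // GL headers assumed; requires GL 4.1 or ARB_get_program_binary.

    // Right after a successful glLinkProgram: fetch the binary for the cache.
    GLint length = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);
    std::vector<char> binary(length);
    GLenum format = 0;
    glGetProgramBinary(program, length, nullptr, &format, binary.data());
    // ...write 'format' and the contents of 'binary' to the cache file...

    // On the next launch, instead of glCompileShader + glLinkProgram:
    GLuint cachedProgram = glCreateProgram();
    glProgramBinary(cachedProgram, format, binary.data(), (GLsizei)binary.size());
    GLint status = 0;
    glGetProgramiv(cachedProgram, GL_LINK_STATUS, &status);
    if (status == 0)
    {
        // The driver rejected the binary (driver update, different GPU, ...),
        // so fall back to compiling and linking from source.
    }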

Re: Shader Cache

by Graf Zahl » Sat Oct 27, 2018 7:47 am

1-2 minutes? For me a complete shader recompile takes 10 seconds. Intel must really be doing something wrong... :?

Shader Cache

by Rachael » Sat Oct 27, 2018 7:33 am

As evidenced here, people who use Intel GPUs are in desperate need of a shader cache. In my tests, unless I run my processor at full turbo (which, for reasons you'll figure out later in this post, I don't like to do), it can take upwards of 1-2 minutes to start GZDoom.

The reason I am using my Intel GPU more lately is that GZDoom has been getting more and more demanding on the NVidia GPU, which makes my fans go crazy, and I don't like that sound. I still get a smooth frame rate on Intel anyway, so I figure, why not just use that instead? The computer runs silently this way, and that's a better experience for me.

So is it possible to do such a thing? One idea would be to cache the compiled binaries keyed on the Git version and write them to GZDoom's cache folder; it would have to use the "ARB_get_program_binary" extension. To save on disk space it could delete anything that hasn't been used in over a month.
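
For what it's worth, whether the extension can actually deliver anything boils down to a query like this (sketch; a driver that reports zero binary formats can't hand back a usable binary even if it advertises the extension):

    // GL headers assumed; requires GL 4.1 or ARB_get_program_binary.
    GLint numFormats = 0;
    glGetIntegerv(GL_NUM_PROGRAM_BINARY_FORMATS, &numFormats);
    const bool canCacheBinaries = numFormats > 0;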
