Found bugs in version 4.11.3

Need help running G/Q/ZDoom/ECWolf/Zandronum/3DGE/EDuke32/Raze? Did your computer break? Ask here.

Moderator: GZDoom Developers

Forum rules
Contrary to popular belief, we are not all-knowing-all-seeing magical beings!

If you want help you're going to have to provide lots of info: what your hardware is, what your operating system is, what version of GZDoom/LZDoom/whatever you're using, what mods you're loading, how you're loading them, what you've already tried for fixing the problem, and anything else that is even remotely relevant to the problem.

We can't magically figure out what it is if you're going to be vague, and if we feel like you're just wasting our time with guessing games, we will act like that's what you're really doing and won't help you.
Alphrixus
Posts: 2
Joined: Mon Dec 11, 2023 6:21 am

Found bugs in version 4.11.3

Post by Alphrixus »

I tested GZDoom with our analyzer. All errors are described in the article. https://pvs-studio.com/en/blog/posts/cpp/1087/
axredneck
Posts: 372
Joined: Mon Dec 11, 2017 2:09 pm
Preferred Pronouns: He/Him
Operating System Version (Optional): Arch
Graphics Processor: nVidia with Vulkan support
Location: Russia

Re: Found bugs in version 4.11.3

Post by axredneck »

Better report it here or here.
dpJudas
 
 
Posts: 3070
Joined: Sat May 28, 2016 1:01 pm

Re: Found bugs in version 4.11.3

Post by dpJudas »

I wasn't sure whether I should even approve the post. The motivation behind it is clearly to advertise a product, perhaps with a bonus SEO effect from having sites in the Doom community link to it. However, the article itself did list a number of things they managed to find with the linter tool, and I figured the filtered list might be useful for devs to look at.

Personally I don't like using tools like this because, while they did find something, what the article doesn't mention is how many false positives they had to wade through to compile that list.

If we take the fuzz variable bug, that code is actually never really active anymore, as it's the old fuzz that only really looked correct at 320x200 anyway. Or if it's from the newer code then just delete it, but who cares; nobody has looked at that particular function for years. So it effectively found dead code.

Then there's the part where it complained about the code not using certain variables. In this case the variables are there to help the next human reading the code; in other words, the tool is recommending making the code worse.

The most interesting thing it found was probably the usage of null pointers. But even here it must have found a pretty rare case, since a null pointer usage results in an immediate crash, and we'd have had a bug report about it if it were a common occurrence.

In short: while linters do detect bugs, I think they are like turning the warning level to pedantic: annoying as hell to use, since they pester you with messages way too often for me to consider them useful. They are like my "favorite" MSVC warning: converting from float to int results in a possible precision loss! Oh no!
Rachael
Posts: 13635
Joined: Tue Jan 13, 2004 1:31 pm
Preferred Pronouns: She/Her

Re: Found bugs in version 4.11.3

Post by Rachael »

I agree with dpJudas. On the surface this analysis looked marginally more useful than the other pedants who did similar things, and the author actually seems to somewhat know what they are talking about, but I still question whether any of the info provided is truly helpful. That it appears to be an advert for a commercial product makes it even more iffy, to me.
Chris
Posts: 2945
Joined: Thu Jul 17, 2003 12:07 am
Graphics Processor: ATI/AMD with Vulkan/Metal Support

Re: Found bugs in version 4.11.3

Post by Chris »

dpJudas wrote: Wed Dec 13, 2023 3:19 am In short: while linters do detect bugs, I think they are like turning warning level to pedantic: annoying as hell to use since they pester you with annoying messages way too often for me to consider them useful. They are like my "favorite" MSVC warning: converting from float to int results in a possible precision loss! Oh no!
I guess it depends on your outlook on code. Personally, while warnings like "converting from float to int" can be annoying, they can also hide serious bugs, so I find it better to actually deal with them by explicitly casting, as a way to say, not just to the compiler but to anyone who comes to look at the code later, that yes, I did mean to convert there and chop off the decimal. Similarly with finding dead code that doesn't look like it should be dead code (setting a variable, then later checking if the value's different without changing it): someone coming to look at the code isn't going to know whether it's intended to be dead code, or whether there was a mistake and a variable didn't get set that should have been, and will waste time trying to figure it out (and/or waste time "fixing" the dead code if it breaks compilation). To me, it's better either to remove it if it's truly unneeded and ease the maintenance burden, or to properly mark it (with clang-tidy you can use // NOLINT ... comments to disable code checks on particular lines or blocks of code) and note that it's intended to be dead code but is being kept for a reason; and maybe also put it in an #if 0 or if constexpr(false) block to ensure it's properly optimized out (which also helps signify to the reader that it's not meant to be called).

To me, the only annoying warning is one I can't get rid of after determining that I intended to do what it's warning about in that particular instance. Because what's more annoying than any warning is getting a report that the code is misbehaving, taking hours or days to track it down, only to find it was an obvious mistake the compiler or something could have warned me about. Then realizing that bug was in a release and is probably affecting many more people who haven't said anything. In some cases I wish I could get more warnings. A recent issue I ran into: I moved some code around, then soon realized resampling stopped working when downsampling with the bsinc (non-default) resamplers. It turns out the table generation code was relying on an implicit conversion from int to double, and after some changes that conversion got moved to a later point, so results that should be between 0 and 1 became only 0 or 1. Who knows how long it would have been before that got spotted, or how long it would have taken me to find it, if I hadn't happened to run into that case soon after the change and had some idea of where to start.

That said, I can completely understand that with a large multi-decade codebase like GZDoom, you can't reasonably be expected to go through everything whenever a new class of warning comes along for something that's been the norm for decades, and leaving known warnings on can make it harder to spot unexpected ones. But I do think it's at least good to have a record of such issues, in case someone wants to do, or there's a need for, a related cleanup or refactoring, to help harden the code against future issues.
dpJudas
 
 
Posts: 3070
Joined: Sat May 28, 2016 1:01 pm

Re: Found bugs in version 4.11.3

Post by dpJudas »

From my point of view it is all about choosing your battles.

You can spend all your time suppressing linters and overaggressive warnings, or you can spend your time on more important things. The size_t and float conversion errors didn't really make people analyze the situation; they just monkey-trained developers into silencing the warnings by adding a cast. Or they now ignore the warnings entirely, which is actually all too common. More important warnings then drown in the noise.

This is before we even get into the things they list that basically boil down to coding taste. For example, if you use a classic C-style enum, the Visual Studio linter will complain that you didn't type enum class. The PVS-Studio one complained about assigning a variable a value after first using it:

Code: Select all

int FPCXTexture::CopyPixels(FBitmap *bmp, int conversion, int frame)
{
  ....
  uint8_t c = lump.ReadUInt8();
  c = 0x0c;  // Apparently there's many non-compliant PCXs out there...
  if (c != 0x0c) 
  {
    for(int i=0;i<256;i++) pe[i]=PalEntry(255,i,i,i);// default to a gray map
  }
  ....
}
You have to walk through tons of crap like this when you use linters, which is why I don't. What the linter didn't understand is that the first line is the real value of c, and the commented line just below it is telling the next developer reading the code that we can't rely on the answer. If you remove the assignment, the logic isn't nearly as clear.

Next, PVS-Studio whines that there are 25 arguments to DrawText. That's also not a bug. Fragment N19 listed in the article is not a bug either (there is potentially a bug there if the text is empty, but it wasn't what the linter said and it wasn't what the guy writing the article wrote). N20 complains about not using a predicate in OpenAL, which is probably not a bug (the code most likely already supports spurious wakeups). It is a very inefficient way of working: fixing theoretical bugs instead of the bugs your users actually encounter.
Chris
Posts: 2945
Joined: Thu Jul 17, 2003 12:07 am
Graphics Processor: ATI/AMD with Vulkan/Metal Support

Re: Found bugs in version 4.11.3

Post by Chris »

dpJudas wrote: Wed Dec 13, 2023 5:44 pm For example, if you use a classic C style enum the Visual Studio linter will complain that you didn't type enum class.
I can understand wanting to disable warnings like that. While there are reasons to prefer an enum class (and it's probably a good idea absent a reason to the contrary), there are occasionally reasons not to. Ironically, the C++ Core Guidelines say to prefer enum class over plain enum, but also say to prefer enums over macros that express an integer value; yet enum classes are annoying to use for integer values, since they require a cast to get the value out (though these days the guidelines also suggest inline and constexpr variables as an alternative).
dpJudas wrote: Wed Dec 13, 2023 5:44 pm The PVS Studio one complained about assigning a variable a value after first using it:

Code: Select all

int FPCXTexture::CopyPixels(FBitmap *bmp, int conversion, int frame)
{
  ....
  uint8_t c = lump.ReadUInt8();
  c = 0x0c;  // Apparently there's many non-compliant PCXs out there...
  if (c != 0x0c) 
  {
    for(int i=0;i<256;i++) pe[i]=PalEntry(255,i,i,i);// default to a gray map
  }
  ....
}
Honestly, that code gave me a bit of pause too. It's more understandable with the comment, but it does look sketchy, like it might be a mistake. If I were to do it, I'd have done something like

Code: Select all

int FPCXTexture::CopyPixels(FBitmap *bmp, int conversion, int frame)
{
  ....
  // Apparently there's many non-compliant PCXs out there... assume the expected value
  std::ignore = lump.ReadUInt8();
  uint8_t c = 0x0c;
  ...
}
That makes it clear you're intending to call the read function and ignore the returned value, and are assuming 0x0c in its place.
dpJudas wrote: Wed Dec 13, 2023 5:44 pm What the linter didn't understand is that the first line is the real value of c and the commented line just below is telling the next developer reading the code that we can't rely on the answer. If you remove the assignment the logic of it isn't nearly as clear.
Part of the linter's job isn't to understand your code, but to recognize various coding patterns that have been the cause of bugs for a lot of other people, so there's a not-insignificant chance it's a bug for you too (or even if it's not a bug now, it can become one from a coder not fully realizing what the intention was).
dpJudas wrote: Wed Dec 13, 2023 5:44 pm N20 is complaining about not using a predicate in OpenAL, which is probably not a bug (the code most likely already supports spurious wakeups).
It does handle spurious wakeups, and yeah, that report is questionable to me too. There may be a reason to prefer predicates in wait statements, but (admittedly coming from a background of C before C++, and being comfortable with external loops and branches) the way their suggested method splits the logic makes it look more confusing to me. I also tend to dislike having lambdas capture variables by reference, since it means local variables may be changed by calling a separate lambda. They also erroneously proclaim, "We added a predicate that checks if the shared data is ready and excludes spurious wakeups." But predicates do not prevent spurious wakeups; indeed, the fact that the predicate is executed while the lock is held means the thread must be awake to hold the lock and run the code. What a predicate prevents is spurious returns; the thread still wakes up from spurious events to do its checks all the same. There may be a reason to prefer predicates, and there may be a way to make the code cleaner with them, but that suggested replacement doesn't work for me.
dpJudas wrote: Wed Dec 13, 2023 5:44 pm You can spend all your time suppressing linters and overaggressive warnings, or you can spend your time on more important things.
That's ultimately the crux of the issue: how can one know these warnings aren't pointing to a real, important issue without looking at what they're warning about (and the reasons why)? People like to complain that C++ is unsafe and makes it easy to shoot yourself in the foot, but there has been a concerted effort to come up with comprehensive guidelines to make it safer, and code checkers to help enforce those guidelines and more. These warnings were added because they've proven to be issues for people in the past. Again, it's understandable not to go through all of this on a large preexisting codebase, but I think it's good to at least keep it in mind, especially when it comes time for cleanups, refactoring, or writing new code. At least with the IDEs I've used, you can also selectively run clang-tidy and other checkers on a per-file basis, so you don't have to check everything to make sure more recent sources are okay.
dpJudas
 
 
Posts: 3070
Joined: Sat May 28, 2016 1:01 pm

Re: Found bugs in version 4.11.3

Post by dpJudas »

Chris wrote: Wed Dec 13, 2023 8:33 pm How can one know these warnings aren't pointing to a real important issue without looking to see what they're warning about (and the reasons why)?
All software on the planet contains bugs, and no amount of linters is ever going to change that. In fact, I deliberately code my software with the assumption that the platform it runs on has enough memory for all typical memory allocations to go through. The code assumes that you don't feed it gigabytes of input data in situations where it only expects a few kilobytes. The actor code is not battle-tested against being fed 1 billion actors. The file system code doesn't gracefully handle me putting in 3 billion .png files. Does the OpenAL code gracefully handle me starting 100,000 samples all at once? I can kill your Windows machine at any time with a loop creating and destroying HWND windows. No linter helped against that either.

The point I'm making is that there are theoretical bugs, and then there are the bugs that users actually encounter. The linter helps you with the theoretical ones, and it's only happenstance when it finds any in the second category. It will whine that there's an integer overflow if there are 1 billion png files, because when you multiply that by a struct size it might overflow. Or it points out that you multiply a uint32 by a uint64 and then add a uint32. All kinds of annoying, useless crap like that. If you feel it is worthwhile to spend your time reading and fixing these kinds of warnings, hey, don't let me stop you. All I'm saying is that the signal-to-noise ratio isn't worth it for me. That's why I listed the false reports present even in the filtered list from that article; the full list of things that linter produced is far greater, I guarantee you. And that was only one linter. Let's run all we can find - they could reveal something!
Chris wrote: Wed Dec 13, 2023 8:33 pm People like to complain that C++ is unsafe and makes it easy to shoot yourself in the foot, but there has been a concerted effort to come up with comprehensive guidelines to make it safer, and code checkers to help enforce these guidelines and more.
The joke is on them. These so-called experts who think you can secure C++ using guidelines and best practices are completely delusional. If your software requires this level of security, then don't use C++ for it. You have to block all the ways you can do out-of-bounds stuff in the language to even begin to claim you are secure. 40 years of C++ has consistently proven that no human being on the planet is actually good enough, even when trying to be as disciplined as possible, to prevent this kind of thing. Mathematically making something impossible is always infinitely better than anything else. That's why we use unique/shared pointers and move semantics to prevent memory leaks. No linter was ever able to stop people from fucking up their new and delete statements. No guides did the trick either.
Professor Hastig
Posts: 240
Joined: Mon Jan 09, 2023 2:02 am
Graphics Processor: nVidia (Modern GZDoom)

Re: Found bugs in version 4.11.3

Post by Professor Hastig »

Chris wrote: Wed Dec 13, 2023 8:33 pm That's ultimately the crux of the issue. How can one know these warnings aren't pointing to a real important issue without looking to see what they're warning about (and the reasons why)? People like to complain that C++ is unsafe and makes it easy to shoot yourself in the foot, but there has been a concerted effort to come up with comprehensive guidelines to make it safer, and code checkers to help enforce these guidelines and more. These warnings were added because they've proven to be issues for people in the past. Again, it's understandable to not go through all of this on a large preexisting codebase, but I think it's good to at least keep it in mind, especially when it comes time for cleanups or refactoring or writing new code. At least with the IDEs I've used, you can also selectively run clang-tidy and other checkers on a per-file basis, so you don't have to check everything to make sure more recent sources are okay.
My experience here is that, at least in Visual C++, those linter warnings tend to be completely useless. I occasionally see those green markers in the code, hover over the text, and 99% of the time it's a non-issue.
Sometimes it's not even about bad code per se but merely about coding conventions some people may consider outdated. Ultimately it is not helpful if the noise cannot be dialled down to focus on the real issues.
dpJudas wrote: Thu Dec 14, 2023 3:23 am The joke is on them. These so-called experts that think you can secure C++ using guidelines and best practices are completely delusional. If your software require this level of security then don't use C++ for it. You have to block all the ways you can do out of bounds stuff in the language to even begin to claim you are secure like this. 40 years of C++ has consistently proven that no human being on the planet is actually good enough, even when trying to be as disciplined as possible, to prevent this kind of thing. Mathematically making something impossible is always infinitely better than anything else. That's why we use unique/shared pointer and move semantics to prevent memory leaks. No linter ever was able to stop people from fucking up their new and delete statements. No guides did the trick either.
While trying to get a better understanding of C++, I came across the places where the so-called 'experts' hang out, and in general it is the painful experience of a delusional echo chamber. These people are so deeply invested in the wrong aspects of programming that they are more a hindrance to improvement than an asset.
Rachael
Posts: 13635
Joined: Tue Jan 13, 2004 1:31 pm
Preferred Pronouns: She/Her

Re: Found bugs in version 4.11.3

Post by Rachael »

Professor Hastig wrote: Thu Dec 14, 2023 6:48 am While trying to get a better understanding of C++ I came across the places where the so-called 'experts' hang out and in general it is a painful experience of a delusional echo chamber. These people are so deeply invested into the wrong aspects of programming that they are more a hindrance to improvement than an asset.
It's not much different from the people who do the same thing with the English language, getting hung up over the smallest grammatical or spelling errors, or even losing their shit over regional variations. They keep focusing on the wrong parts of the language without really understanding what the actual point of language is: as long as the target audience understands what is said, it has served its purpose, errors or not. Standards are good, but they are only good for increasing the likelihood that people will understand each other by using similar spellings and sentence structures (and even that does not always succeed). Worshipping standards like a bible is massively counterproductive, because language standards are and always were created around usage and how it changes, not the other way around.

So when a person comes in with clearly broken English, literally the worst thing you can do is make fun of them for it. A better solution is to either ask for clarification (if you can't understand them) or try to reach out to them in a language they might understand.

Grammar checks are not foolproof in document programs - and the same could be said for linters.
Graf Zahl
Lead GZDoom+Raze Developer
Posts: 49098
Joined: Sat Jul 19, 2003 10:19 am
Location: Germany

Re: Found bugs in version 4.11.3

Post by Graf Zahl »

Oh well...
this is a topic I could rant about without end.

I think dpJudas hit the nail on the head by saying that, with all its problems, there is little use in linters for C++.
Unless the guardians of the language do a radical course change, I think it will inevitably fade into obscurity over time. The problems with the language are endless, and my personal experience with warnings is that they often do more harm than good - especially since there is no language-approved way to annotate code not to warn without changing its syntax.

The habit of type-casting integers to fit the target type, for example, is one of the most dangerous things around. If I add the cast, I permanently alter the code. If I later extend the target type, the cast will still be there and can potentially do some damage - without any realistic chance of ever finding all such casts later.
dpJudas wrote: Thu Dec 14, 2023 3:23 am The joke is on them. These so-called experts that think you can secure C++ using guidelines and best practices are completely delusional. If your software require this level of security then don't use C++ for it. You have to block all the ways you can do out of bounds stuff in the language to even begin to claim you are secure like this. 40 years of C++ has consistently proven that no human being on the planet is actually good enough, even when trying to be as disciplined as possible, to prevent this kind of thing. Mathematically making something impossible is always infinitely better than anything else. That's why we use unique/shared pointer and move semantics to prevent memory leaks. No linter ever was able to stop people from fucking up their new and delete statements. No guides did the trick either.
It's not the ideas that are delusional but the people propagating them. If you ever visit a discussion place for C++, it is full of people who a) often seem to have no practical programming experience and b) are strongly opposed to any language change that may affect performance. And it is this obsession with maximizing performance that really damages the language.
As a very interesting example, GZDoom's object allocator zeroes the memory it allocates to guarantee a clean slate for every allocated object.
Sounds good? Apparently not to GCC's developers. Because the initial state of the memory allocated for a class object is undefined, these smartass idiots optimize the memory-clearing code out!
It caused some spurious misbehavior in Raze, but if you think about the further implications of such nonsense, it's a huge security vulnerability: you can be sure the same "optimization" will also take place if you zero a block of memory right before freeing it, maybe to wipe some sensitive data. But apparently that isn't important to performance-obsessed ignoramuses.

So with a language like that, what good is a linter? It can barely scratch the surface of potential issues, and from what I have seen the signal-to-noise ratio is quite poor, making the entire thing an exercise in futility. I also find it interesting that the security implications of C and C++ are starting to appear on government agencies' radars as serious risks.
Chris
Posts: 2945
Joined: Thu Jul 17, 2003 12:07 am
Graphics Processor: ATI/AMD with Vulkan/Metal Support

Re: Found bugs in version 4.11.3

Post by Chris »

dpJudas wrote: Thu Dec 14, 2023 3:23 am All software on the planet contains bugs and no amount of linters is ever going to change that.
Of course. No one's arguing that this makes the code bug-free, but it does demonstrably reduce the number and severity of bugs.
dpJudas wrote: Thu Dec 14, 2023 3:23 am The point I'm making with this is that there are theoretical bugs and then there's the bugs that users actually encounter. The linter helped you against the theoretical ones and its only happenstance whenever it finds any in the second category.
All bugs are theoretical until they happen. It's a question of the likelihood that the code will trigger a specific one (and unless we solve the halting problem, we can never be sure it will or won't, so you either assume it won't and don't warn, or assume it can and warn). At least with clang-tidy, you can control what it warns about, with the default being relatively conservative. I think the only issue it brought up when I first ran it on all of OpenAL Soft was that memmove is unsafe (4 uses in examples written in C) and that I should use memmove_s instead. It wasn't until I enabled the modernize-* set that it really started complaining, the primary one being about using trailing return types. I said to myself, "as much as I'd like/prefer trailing return types, there are just too many functions to change for no direct benefit", then disabled modernize-use-trailing-return-type and those warnings went away. Maybe one day if I'm bored with extra time I may chip away at it, but as it is, out of sight, out of mind.
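The kind of selective enabling described above can be expressed in a `.clang-tidy` file at the project root. The check names below are real clang-tidy check groups, though which ones to enable is of course a judgment call; this is only a sketch of the mechanism:

```yaml
# .clang-tidy: conservative defaults, plus modernize-* with the noisy
# trailing-return-type check disabled again.
Checks: >
  clang-diagnostic-*,
  clang-analyzer-*,
  modernize-*,
  -modernize-use-trailing-return-type
WarningsAsErrors: ''
```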
dpJudas wrote: Thu Dec 14, 2023 3:23 am All I'm saying is that the signal to noise ratio isn't worth it for me.
I'm not suggesting that all warnings should always be enabled. With OpenAL Soft, which is a much smaller and newer codebase, I'm still starting with clang-tidy's very conservative defaults and incrementally enabling certain warnings, because there'd be too much at once otherwise. And I don't foresee myself going beyond modernize-* and cppcoreguidelines-*, which are only a fraction of the total warnings that can be enabled, and even of those I'll probably leave some off.
dpJudas wrote: Thu Dec 14, 2023 3:23 am These so-called experts that think you can secure C++ using guidelines and best practices are completely delusional.
I don't think anyone is under the delusion that simply having guidelines and best practices will "secure" C++. What's needed is enforcement of those guidelines (as you say below; it's all well and good to have a guideline say "don't use unions, use a variant", but unless a union causes an error, people will still use it), and that's where the tricky part is, since you run into issues of compatibility.
dpJudas wrote: Thu Dec 14, 2023 3:23 am If your software require this level of security then don't use C++ for it.
It's not just security, but also stability and safety. GZDoom writing past the end of a buffer isn't going to cause a sensitive data leak for millions of people, no, but it would still be nice to prevent it from writing to random memory and causing hard-to-track crashes or misbehaviors. Or worse: some enterprising modder realizes that with specially-crafted data they can create a buffer overflow that causes funky effects, and makes a popular mod out of it that will inevitably stop working and create unhappy users who don't want to update; or a mod causes arbitrary code execution from a buffer overflow and infects the system with a virus.
dpJudas wrote: Thu Dec 14, 2023 3:23 am You have to block all the ways you can do out of bounds stuff in the language to even begin to claim you are secure like this. 40 years of C++ has consistently proven that no human being on the planet is actually good enough, even when trying to be as disciplined as possible, to prevent this kind of thing. Mathematically making something impossible is always infinitely better than anything else. That's why we use unique/shared pointer and move semantics to prevent memory leaks. No linter ever was able to stop people from fucking up their new and delete statements. No guides did the trick either.
It is possible to hook linters (at least clang-tidy) up to your build system and have them cause a build failure if their checks fail, with either a select set of checks for the whole project (easiest for a new project) or specific per-file checks (if you want to incrementally add checks to an existing codebase). There are also interesting projects like Cppfront/Cpp2, which allows writing in a new, safer syntax that gets appropriate checks and restrictions, and optionally allows interleaving with standard C++ code, with which it remains fully interoperable (you can switch any C++ project to use cppfront and it will work as-is with no changes, but you can then write a function in the new "Cpp2" syntax, and that function gets improved defaults, checks, and restrictions). I think it's kind of interesting [1] [2].
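For what it's worth, one real mechanism for the build-system hookup mentioned above is CMake's CMAKE_CXX_CLANG_TIDY variable, which runs clang-tidy alongside every compile; a minimal sketch:

```cmake
# Every C++ compile also runs clang-tidy; with --warnings-as-errors,
# any failed check fails the build.
set(CMAKE_CXX_CLANG_TIDY "clang-tidy;--warnings-as-errors=*")
```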
dpJudas
 
 
Posts: 3070
Joined: Sat May 28, 2016 1:01 pm

Re: Found bugs in version 4.11.3

Post by dpJudas »

I actually find Herb Sutter's suggestion for a C++ 2.0 to be a terrible idea. I can't speak for other C++ developers, but personally I actually like C/C++ syntax. What he is suggesting changes the language visually so much that I'd rather just migrate to a completely different language.

As for linters improving stability, I very much doubt that a user of GZDoom could tell the difference between before and after you ran the linter on it and fixed all the issues it reported. Your buffer overrun "safety" argument is also not that good, because as usual the linter assumes that you're very good at addressing corner cases, like working with int32_t, uint32_t, and size_t at their boundaries, and other silly stuff like that. If you really wanted to improve safety, you would instead go to the loaders, file system, and parsers and add some basic, sensible checks: stuff like rejecting an image if it's larger than 16K in width or height. The linter will never suggest that. Instead it will try to make you secure the for loops at boundaries where overflows happen. You think you know how to code that correctly, but as I said, 40 years of C++ has consistently proven that no, people fuck that up all the time. Since the linter won't actually fix the issue itself, you'll end up adding a cast or something else where you think it's now safe; the linter is happy, you are happy, but the overflow still exists. And remember, for a virus to get through it just needs to find one such instance in the codebase.
Chris
Posts: 2945
Joined: Thu Jul 17, 2003 12:07 am
Graphics Processor: ATI/AMD with Vulkan/Metal Support

Re: Found bugs in version 4.11.3

Post by Chris »

dpJudas wrote: Thu Dec 14, 2023 6:56 pm I actually find Herb Sutter's suggestion for a C++ 2.0 to be a terrible idea. I can't speak for other C++ developers, but personally I actually like C/C++ syntax. What he is suggesting changes the language visually so much that I'd rather just migrate to a completely different language.
I'm not the most fond of his syntax either, though I do have to wonder how much of what I'm interpreting as "ugly" is really just "different". I've worked with D code and had similar feelings that things felt "ugly" or "wrong", but that was because they were different from what I was used to, and I never had a chance to acclimate. I look at Rust code the same way: it's different and ugly, but maybe if I used it for 5 or 10 years it would stop feeling that way. I can't deny the cleanup and simplification his changes accomplish without losing anything C++ is capable of, and it turns out to be surprisingly seamless: write a thing in Cpp2 syntax and it's unambiguously recognized as the new syntax (by both the coder and the compiler), so the tool can automatically shed the compatibility baggage and enforce stricter code standards for that section of code. In any case, he is quite clear that this is an experiment that can fail (with a way out for people who use it), and that the goal is to get people talking about and finding ways to fix the issues, while showing that it's possible.
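For flavor, a small function in the Cpp2 syntax, written from memory of the cppfront documentation, so details may be off; everything else in the same file can stay ordinary C++:

```cpp2
// Cpp2 declarations are uniformly "name: type = value"; parameters are
// immutable by default and locals cannot be left uninitialized.
add_all: (v: std::vector<int>) -> int = {
    sum := 0;
    for v do (x) sum += x;
    return sum;
}
```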
dpJudas wrote: Thu Dec 14, 2023 6:56 pm As for linters improving stability, I very much doubt that a user of GZDoom could tell the difference between before and after you ran the linter on it and fixed all the issues it reported.
I concur. GZDoom is a battle-tested piece of code that is quite resilient. But this same argument could have been made when it was converted from C to C++, something I'm sure took a fair amount of effort even though a user couldn't tell the difference once it was done. Even today, there isn't a single thing the engine does that it couldn't still do, just as stably, if it were written in C. But just as converting to C++ improved the workflow by not having to be so explicit about everything (classes, more generic code, RAII, improved type and resource safety, etc.), using code checkers improves the workflow by not having to go back and fix as many bugs after the fact. I forget where I read it, but it's been said that most of a programmer's time is spent debugging. We know where a lot of bugs stem from and have guidance available for how to do better; it just can't be enforced by default because too much existing code relies on outdated and unsafe practices, but in areas where it can be enforced, it would be beneficial to do so.
dpJudas wrote: Thu Dec 14, 2023 6:56 pm Your buffer overrun "safety" argument is also not that good, because as usual the linter assumes that you're very good at handling corner cases like mixing int32_t, uint32_t and size_t, and other silly stuff like that.
Buffer safety relies on a combination of build-time and run-time checks. A linter will be able to catch you using a plain C array (which is easy to lose size information and decay to a raw pointer) as well as using arithmetic on a raw pointer (which is easy to make go out of bounds), and say to use std::array or std::span instead, which maintain size information and can implement run-time checks to catch out of bounds accesses when they happen. The sooner a problem can be detected, the easier it will be to fix (ideally at build time, e.g. being told right away to use a span instead of a raw pointer for non-owning array access, or at least right when it happens at run-time, e.g. when dereferencing outside of an array, rather than hoping it crashes around the area of code that does the dereference).
dpJudas wrote: Thu Dec 14, 2023 6:56 pm Since the linter won't actually fix the issue itself, you'll end up adding a cast or something else where you think it's now safe, but you did it wrong; now the linter is happy and you are happy, but the overflow still exists. And remember, for a virus to get through, it only needs to find one such instance in the codebase.
This is a more general statement about how certain people will try to take the easy way out instead of doing it the right way. It happens in Rust too, where people will just litter the code with unsafe { ... } sections to shut the compiler up about stuff it doesn't deem safe, rather than figuring out the "proper" way to do it. There's no way around that without making the language prohibitively restrictive. A virus only needs to find one point of entry to infect your system, but that doesn't mean one should be easy to find.
User avatar
Graf Zahl
Lead GZDoom+Raze Developer
Posts: 49098
Joined: Sat Jul 19, 2003 10:19 am
Location: Germany

Re: Found bugs in version 4.11.3

Post by Graf Zahl »

dpJudas wrote: Thu Dec 14, 2023 6:56 pm I actually find Herb Sutter's suggestion for a C++ 2.0 to be a terrible idea. I can't speak for other C++ developers, but personally I actually like C/C++ syntax. What he is suggesting changes the language visually so much that I'd rather just migrate to a completely different language.
Agreed. What he's trying is not going to work.
In the end it's just like Rust: To make your code safe you have to rewrite it from scratch. I don't get how delusional one must be to think that this is going to work.
What I want for safety is a C++ compile mode that can be set to error out on unsafe constructs (like casting away const) so I can gradually update the code.
If the syntax changes the failure is guaranteed.
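Piecemeal versions of such a mode already exist as individual warnings that can be promoted to hard errors (the GCC/Clang flags are real; the demo file is made up):

```shell
# A file that casts away const with a C-style cast:
cat > /tmp/unsafe_demo.cpp <<'EOF'
int read_it(const int* p) {
    int* q = (int*)p;   // casts away const
    return *q;
}
EOF

# -Wcast-qual flags casting away qualifiers, -Wold-style-cast flags C-style
# casts; -Werror=<warning> turns either into a build failure. Not a real
# "safe mode", but it allows exactly this kind of gradual tightening.
result=$(g++ -std=c++17 -c /tmp/unsafe_demo.cpp -o /tmp/unsafe_demo.o \
             -Werror=cast-qual -Werror=old-style-cast 2>/dev/null \
         && echo accepted || echo rejected)
echo "$result"
```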
dpJudas wrote: Thu Dec 14, 2023 6:56 pm As for linters improving the stability, I very much doubt that a user of GZDoom could tell from before and after if you ran the linter on it and fixed all the issues it reported. Your buffer overrun "safety" argument is also not that good, because as usual the linter assumes that you're very good at addressing corner cases like working with int32_t, uint32_t, size_t at corner cases and other silly stuff like that. If you'd really want to improve the safety, you should instead go to the loaders, file system and parsers and add some basic sensible checks. Stuff like saying if an image is larger than 16K in width and height then reject loading the image. The linter will never suggest that. Instead it will be trying to make you secure the for loops at boundaries where overflows happen. You think you know how to code that correctly, but as I said, 40 years of C++ has consistently proven that no, people fuck that up all the time. Since the linter won't actually fix the issue itself, you'll end up adding a cast or something else where you think now its safe, but you did it wrong and now the linter is happy, you are happy, but the overflow still exists. And remember, for the virus to get through it just needs to find one such instance for the codebase.
It's just like too many warnings: The demand to create 'warning free code' does not lead to more safety but to more bad constructs that try to hide the problem from the compiler, and in the end the work to make it safe is increased tenfold. I don't even want to know how many potentially unsafe type casts there are. What I know is that it's impossible to find them with how the language works.
Ultimately it does not matter whether you assign a 64-bit integer to a 32-bit destination with or without a cast: the value gets truncated either way. But with an explicit cast, the truncation will still happen even if the destination is later extended to 64 bits, because the now-wrong cast is virtually impossible to find and gets overlooked. So 'my code compiles without warnings' most of the time just means that considerable effort was invested in hiding the problems from the compiler, not in fixing them.
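The truncation point in miniature (function names are made up for illustration):

```cpp
#include <cstdint>

// Both return the low 32 bits of 'big'. The cast in the first version only
// silences the narrowing warning; if the return type is later widened to
// int64_t, the cast keeps truncating and nothing flags it.
int32_t NarrowWithCast(int64_t big) {
    return static_cast<int32_t>(big);  // warning-free, still lossy
}

int32_t NarrowWithoutCast(int64_t big) {
    return big;  // implicit; at least -Wconversion can still warn here
}
```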

Return to “Technical Issues”