New savegame format committed to Git

User avatar
Graf Zahl
Lead GZDoom+Raze Developer
Posts: 49230
Joined: Sat Jul 19, 2003 10:19 am
Location: Germany

New savegame format committed to Git

Post by Graf Zahl »

Prompted by bug reports like this one http://forum.drdteam.org/viewtopic.php?f=24&t=7107
I decided to finally invest some time here and replace the old savegame format with something that's not only easier to maintain but also far more fault tolerant and easier to analyze in case something goes wrong.

This had been on my list for many, many years but it always failed for the same reason: I was unable to find an efficient enough utility library to handle formats like JSON or similar things. They all were orders of magnitude too slow to handle the amount of data that can accumulate in a Doom savegame. That is, until I discovered RapidJSON.
This library is capable of generating and reading back a full dump of ZDCMP (that's approx. 100 MB of data) in less than a second each on my system. The best I could find before took 6-7 seconds, which even with the best tricks was impossible to bring down to an acceptable time.
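For anyone curious what that looks like in practice, here is a minimal, self-contained illustration of RapidJSON's SAX-style writer (plain RapidJSON usage, not ZDoom's actual serializer wrapper; the field names are made up):

#include "rapidjson/stringbuffer.h"
#include "rapidjson/writer.h"

void WriteExample()
{
    rapidjson::StringBuffer buffer;
    rapidjson::Writer<rapidjson::StringBuffer> writer(buffer);

    writer.StartObject();
    writer.Key("health");        // named fields instead of raw binary offsets
    writer.Int(100);
    writer.Key("pos");
    writer.StartArray();
    writer.Double(512.0);
    writer.Double(-256.0);
    writer.Double(32.0);
    writer.EndArray();
    writer.EndObject();

    // buffer.GetString() now holds the JSON text for this object.
}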

Considering that a huge portion of Doom's map data never changes, it can easily be eliminated from the output by comparing the current state with the initial one and only writing out the values that differ. Using this method I was able to bring performance up to almost that of the old binary format, which suddenly made this a viable approach.
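As a rough sketch of that idea (the type and field names below are invented for illustration, not GZDoom's real structures): compare the current state of a sector against the state it had right after map load and only emit the keys that differ.

#include "rapidjson/stringbuffer.h"
#include "rapidjson/writer.h"

struct SectorState { double floorheight; int lightlevel; };

template <typename W>
void SerializeSectorDelta(W &writer, const SectorState &now, const SectorState &atMapLoad)
{
    writer.StartObject();
    if (now.floorheight != atMapLoad.floorheight)
    {
        writer.Key("floorheight");
        writer.Double(now.floorheight);
    }
    if (now.lightlevel != atMapLoad.lightlevel)
    {
        writer.Key("lightlevel");
        writer.Int(now.lightlevel);
    }
    writer.EndObject();   // a completely untouched sector serializes as just {}
}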

And since this is incompatible with the old data anyway, I decided to go one step further and ditch the PNG container as well. The new format is stored as a Zip file containing the savepic as a plain PNG image without all the added data, a JSON dump of each level in the current hub, another one for global, non-level-specific data, and a last one for the header data (which is shown in the menu).
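To illustrate the layout, such a savegame might unpack to something like this (the file names here only illustrate the structure described above, not necessarily the exact names the engine writes):

save0001.zip
    savepic.png    - the menu thumbnail, a plain PNG without extra chunks
    info.json      - the header data shown in the save/load menu
    globals.json   - global, non-level-specific state
    MAP01.json     - one JSON dump per level in the current hub
    MAP02.json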

So what does this mean?
Saving in a textual format with named items obviously required changing all code related to savegames. And while I have done some extensive testing with several maps using different features, I obviously cannot rule out that some problem was overlooked. This means that in the upcoming revisions the save feature may be a bit unstable until the final kinks have been worked out. Of course, for that I need bug reports. I am also interested in how it scales on slower computers. As said, on my system it's ca. 10-20% slower than the old format, with the largest map causing a savegame load time of ca. 180 ms as opposed to 150 ms for the binary format. Considering that loading the UDMF map alone takes 700 ms and texture precaching another 300 ms, the increase is nearly irrelevant.

On the plus side, it means that the clumsy savegame versioning that was necessary in the past can mostly be ignored now. Adding new variables should work without any intervention. Since sector, linedef and sidedef data is taken from the map for properties that are not specified in the savegame, and actor data from the DECORATE defaults, this should mean that 90% of all changes will work out of the box once the new handlers have been added. The same should be true for removing things that are no longer needed. Explicit handling is only needed if some semantic change requires adjustment of the stored values.
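A tiny sketch of why that works (a hypothetical helper, not the actual engine function): with named keys, a reader can simply keep the caller's default when a key is missing, so a variable added in a newer engine build falls back to its map or DECORATE default, and an obsolete key in an old save is silently ignored.

#include "rapidjson/document.h"

void ReadIntOrKeepDefault(const rapidjson::Value &obj, const char *key, int &value)
{
    auto it = obj.FindMember(key);
    if (it != obj.MemberEnd() && it->value.IsInt())
    {
        value = it->value.GetInt();   // key present: take the saved value
    }
    // key absent or wrong type: 'value' keeps the default the caller set up
}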

Last but not least, I implemented something that was completely absent from the old format: a map checksum. Due to the dependence on the original map data to fill in the omitted values, I decided to let loading abort if anything about the map has changed. The old format only checked the ACS script size but didn't mind if the linedef, sidedef or sector count had changed, resulting in meaningless errors deep in the loading code.
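Conceptually the check amounts to something like the following (a simplified illustration with made-up names; the real check covers the map data itself):

#include <cstring>

struct MapFingerprint
{
    unsigned numLines, numSides, numSectors;
    unsigned char checksum[16];   // e.g. a digest of the relevant map lumps
};

// Refuse to load the savegame if the currently loaded map does not match the
// fingerprint that was stored when the game was saved.
bool MapMatchesSave(const MapFingerprint &fromSave, const MapFingerprint &current)
{
    return fromSave.numLines   == current.numLines
        && fromSave.numSides   == current.numSides
        && fromSave.numSectors == current.numSectors
        && std::memcmp(fromSave.checksum, current.checksum, sizeof fromSave.checksum) == 0;
}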
User avatar
Tapwave
Posts: 2096
Joined: Sat Aug 20, 2011 8:54 am
Preferred Pronouns: No Preference
Graphics Processor: nVidia with Vulkan support

Re: New savegame format committed to Git

Post by Tapwave »

Holy crap that sounds absolutely amazing! Are you going to release a test build so that we can bugcheck the heck out of this?
User avatar
Graf Zahl
Lead GZDoom+Raze Developer
Posts: 49230
Joined: Sat Jul 19, 2003 10:19 am
Location: Germany

Re: New savegame format committed to Git

Post by Graf Zahl »

It's already in Git, so the next devbuild should have it.
Edward-san
Posts: 1774
Joined: Sat Oct 17, 2009 9:40 am

Re: New savegame format committed to Git

Post by Edward-san »

There are some gcc/clang error regressions. I hope I can send a PR with some fixes by tomorrow.
User avatar
Graf Zahl
Lead GZDoom+Raze Developer
Posts: 49230
Joined: Sat Jul 19, 2003 10:19 am
Location: Germany

Re: New savegame format committed to Git

Post by Graf Zahl »

On Visual C++ it compiles without warnings, so it definitely needs someone else to do it.
User avatar
Nash
 
 
Posts: 17491
Joined: Mon Oct 27, 2003 12:07 am
Location: Kuala Lumpur, Malaysia

Re: New savegame format committed to Git

Post by Nash »

What does this mean for situations where a mod gets updated frequently? Do players still need to start a new game when the mod they are playing gets updated?

Does the savegame being in text mean that, if a fork author is inclined, it's easier to add support for savegames that are friendly to level and ACS updates? (I.e. not abort when the level and scripts have changed post-save, but add handling so that the game will still load.)
User avatar
Graf Zahl
Lead GZDoom+Raze Developer
Posts: 49230
Joined: Sat Jul 19, 2003 10:19 am
Location: Germany

Re: New savegame format committed to Git

Post by Graf Zahl »

No, that's beyond the scope of any new savegame format, and I did not even try to do anything here because it won't work.

Being friendly to ACS updates is a total impossibility. Since there are indices into the binary script data, any change to an ACS lump means a savegame has become useless. The same applies if a map changes: if you add even a single linedef, sector, sidedef or thing, the code would no longer know how to deal with it. Save data is by necessity relative to the original level data that was loaded, so if that underlying data changes you need a lot more bookkeeping in order to assign the saved info properly. That would really blow up the file size and processing time.

The reasons for this rewrite are mainly that broken savegames from the old format are extremely hard to debug and that parts of the code appeared to have come out of a coding nightmare. The binary code was truly anal about shaving single bytes off stored data here and there, but on the other side made no attempt at all to cut down the amount of saved data as a whole. This meant that even looking at the binary data yielded no clues at all. It also used a terrible recursive algorithm to store objects (i.e. when it encountered a new object it recursively descended into that object and stored it wherever it was seen first). That made tracking down problems even harder, because storage of the containing object was interrupted by the inner object, and that could go down several recursions.

The new code never steps into another object recursively. When it finds an object reference it merely stores an index into an array of objects and adds the object to the list of objects still to be saved. This list gets processed after everything else, and if an entry in the list references yet another object, that one is simply appended to the end, so the process ends once no more new objects have been added. The reverse is done when reloading: the first step creates all objects without initializing them; the second step, once all pointers are known, initializes all objects; only then is the rest loaded. As a result the save file has a clear structure. Another thing this addresses is that if some handler gets broken, it will only affect the one specific object type and never pollute what comes afterward, so if a problem surfaces it won't result in a useless 'unknown code in archive' error that's impossible to find.
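To make the scheme a bit more concrete, here is a small sketch of the idea in isolation (type and function names are made up for illustration; this is not the engine's actual serializer code):

#include <cstddef>
#include <unordered_map>
#include <vector>

// Stand-in for the engine's base object type.
struct DObject;

// Object references are written as indices into this table. An object seen for
// the first time is appended to a worklist instead of being descended into, so
// no object's output is ever interrupted by another object's data.
struct ObjectTable
{
    std::vector<DObject *> objects;               // in order of first appearance
    std::unordered_map<DObject *, int> indices;   // object -> index

    int IndexOf(DObject *obj)
    {
        auto it = indices.find(obj);
        if (it != indices.end()) return it->second;
        int index = (int)objects.size();
        objects.push_back(obj);                   // queue it for serialization later
        indices[obj] = index;
        return index;                             // only this index is written inline
    }
};

// After the rest of the data has been written, the worklist is flushed front to
// back. Entries added while serializing earlier entries just extend the loop, so
// it terminates once no new objects turn up.
template <typename F>
void FlushObjects(ObjectTable &table, F serializeOne)
{
    for (std::size_t i = 0; i < table.objects.size(); ++i)
    {
        serializeOne(table.objects[i]);
    }
}

// Loading mirrors this: first construct every object uninitialized, then resolve
// all pointers (they are just indices into the same table), and only then read
// each object's fields.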

And being stored inside a PNG with a homegrown compression header meant that without writing a dedicated tool to decompress it, looking at it was useless.
The original farchive.cpp file said this:
** The structure of the archive file generated is influenced heavily by the
** description of the MFC archive format published somewhere in the MSDN
** library.
Which makes sense: this stuff looks like 90's design philosophy, driven solely by raw efficiency and nothing else.
User avatar
Graf Zahl
Lead GZDoom+Raze Developer
Posts: 49230
Joined: Sat Jul 19, 2003 10:19 am
Location: Germany

Re: New savegame format committed to Git

Post by Graf Zahl »

Oh, another thing with the old code that was just weird is how it treated players during hub transitions. There was some bad voodoo trying to replace the players from the savegame with ones from the map load, this being a remnant of the original hub transition code that was superseded many years ago by G_FinishTravel but never cleaned out.

And for some reason this code only worked as long as the order of things was left untouched. The first time this caused a problem was when I tried to reorder things to get the portal data before the actors, but back then I didn't find what was causing it.
Well, this time I had to check, and it turned out that just removing all of it cured the problem. That was one batch of awful code finally being committed to the dumpster. :)
Edward-san
Posts: 1774
Joined: Sat Oct 17, 2009 9:40 am

Re: New savegame format committed to Git

Post by Edward-san »

Done. I haven't run it yet, but how fast is the new serialization compared to the old one?
User avatar
Graf Zahl
Lead GZDoom+Raze Developer
Posts: 49230
Joined: Sat Jul 19, 2003 10:19 am
Location: Germany

Re: New savegame format committed to Git

Post by Graf Zahl »

On my system it's a bit slower, but I had to load huge maps to get meaningful numbers. Savegame processing for ZDCMP2, after killing multiple monsters and performing actions that add more data (e.g. decals), is about 180 ms on my system, compared to 150 ms for the binary format. But you have to consider that the mere act of loading the map already takes more than one entire second.

Obviously, on a slower system these numbers might add up to something more significant, but so will the map load time itself.
_mental_
 
 
Posts: 3820
Joined: Sun Aug 07, 2011 4:32 am

Re: New savegame format committed to Git

Post by _mental_ »

There are a few more changes required to make it work on macOS.
I hope it's OK to change ptrdiff_t to int64_t in SerializePointer().
Spoiler: Error log for Serialize() with ptrdiff_t
In the x64 ABI, long and long long have the same size, although they are distinct types for template parameter matching. I found no clean way to write 'give me the intX_t type that matches the size of ptrdiff_t'.
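For reference, one way to spell 'the fixed-width type with the same size as ptrdiff_t' without caring whether the platform defines it in terms of long or long long would be something like this (just a sketch, not necessarily what the actual fix does):

#include <cstddef>
#include <cstdint>
#include <type_traits>

// Pick int64_t if ptrdiff_t is 8 bytes, int32_t otherwise.
using serialized_ptrdiff =
    std::conditional<sizeof(std::ptrdiff_t) == sizeof(int64_t), int64_t, int32_t>::type;

static_assert(sizeof(serialized_ptrdiff) == sizeof(std::ptrdiff_t),
              "ptrdiff_t does not match a supported fixed-width type");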
User avatar
Graf Zahl
Lead GZDoom+Raze Developer
Posts: 49230
Joined: Sat Jul 19, 2003 10:19 am
Location: Germany

Re: New savegame format committed to Git

Post by Graf Zahl »

Figures that some idiots do not use the sized types as base for the more generic types...
Oh did I ever say that 'long' with its shitty semantics should vanish from the face of the earth...? :twisted:


Yes, under such circumstances the change is OK.
User avatar
Accensus
Banned User
Posts: 2383
Joined: Thu Feb 11, 2016 9:59 am

Re: New savegame format committed to Git

Post by Accensus »

Offtopic: I also remember Randi saying ZDoom will never support long doubles. Why is 'long' bad?
User avatar
Nash
 
 
Posts: 17491
Joined: Mon Oct 27, 2003 12:07 am
Location: Kuala Lumpur, Malaysia

Re: New savegame format committed to Git

Post by Nash »

Something about looping ACS scripts isn't working as before with regard to returning to a level in a hub. I know this is so freaking vague, but to be honest, I don't even know WHAT is wrong yet. I'm just noticing that my scripts are acting weird upon returning to a map in the hub.

I'll keep trying to see if I can isolate this into a runnable example but I'm just leaving this here for now, in case you might have an idea what's going on. It's hard for me to even know where to start looking for now...
Gez
 
 
Posts: 17942
Joined: Fri Jul 06, 2007 3:22 pm

Re: New savegame format committed to Git

Post by Gez »

Lud wrote: Offtopic: I also remember Randi saying ZDoom will never support long doubles. Why is 'long' bad?
The C language is old. As time passes and hardware becomes more powerful, a need appears for new data types -- going from 16-bit to 32-bit to 64-bit ints, for example. What do you do to adapt the language? If you redefine existing terms (say a "word" is now 32 bits instead of 16, and a "long" is now 64 instead of 32), you're going to break all existing programs. If you introduce a new keyword, you also risk breaking existing programs. So here comes the miracle solution: just use an existing keyword more times. Now you have long and long long and long long long, and later you'll get long long long long and long long long long long, and obviously that's good design.

But wait, you know what could make this design even better? Have the standard specify only the minimal range of a data type, allowing implementations to make it larger. So a long is at least 32 bits, but it could be 64. (Or, you know, 48 bits on some exotic platform, why not. 37 bits? 239 bits? That would be okay too, you're the boss.)


Long (heh) story short: there's a reason why most C/C++ programs written nowadays use custom defines to get types with sane names like int8_t, uint32_t, etc. And that reason is largely that the part of the program where they are defined is a long list of #ifdef'd platform- and compiler-dependent stuff.
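The practical upshot is easy to see on today's systems: on 64-bit Linux and macOS (LP64) a long is 8 bytes, while on 64-bit Windows (LLP64) it is 4, which is exactly why serialization code reaches for the fixed-width types:

#include <cstdint>
#include <cstdio>

int main()
{
    std::printf("long:      %zu bytes\n", sizeof(long));        // 8 on LP64, 4 on LLP64
    std::printf("long long: %zu bytes\n", sizeof(long long));   // 8 everywhere in practice
    std::printf("int64_t:   %zu bytes\n", sizeof(int64_t));     // 8 by definition
    return 0;
}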

As for long double vs. double vs. float: the bigger the type, the more precise the values. (Double, after all, is short for "double precision". Yes, "long double" is just a way to say something like "quadruple precision", but obviously a "quadruple" type couldn't be defined as it'd add a new keyword. Better to stack longs and get long long long long long long long long long double for your 2048-bit floating point values.) From what I remember, single precision floats (or just "float") aren't precise enough to replace the fixed point values. I don't see any reason why long double would be bad, technically -- perhaps some concern about performance because, well, each variable would take twice as much space.
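As a quick illustration of the precision point (not from the thread, just standard floating point behaviour): Doom's fixed point format is 16.16, and a float's 24-bit significand can't hold both a large coordinate and a 1/65536 fraction at the same time, whereas a double can.

#include <cstdio>

int main()
{
    // 20000 + 1/65536 is a perfectly ordinary 16.16 fixed point value.
    double exact  = 20000.0 + 1.0 / 65536.0;
    float  single = (float)exact;

    std::printf("double: %.10f\n", exact);            // 20000.0000152588
    std::printf("float : %.10f\n", (double)single);   // the fraction is rounded away
    return 0;
}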