Patch here: http://pastebin.com/sm8H6vRk
I_GetTimeFrac(), which is used for frame interpolation, uses the interval between the last gametic (at TicStart) and the upcoming one (at TicNext) to scale its output. This interval should always be 1000/TICRATE milliseconds, but currently, for some reason, it is recalculated separately for every tic.
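For reference, here is a minimal sketch of what that logic looks like, reconstructed from this description; fixed_t, FRACUNIT, DWORD and clamp<> are ZDoom's own types/helpers, and the exact source may differ:

// Sketch only: returns how far we are between the last gametic (TicStart)
// and the next one (TicNext), scaled to [0, FRACUNIT].
fixed_t I_GetTimeFrac (uint32 *ms)
{
	DWORD now = SDL_GetTicks ();
	if (ms) *ms = TicNext;
	DWORD step = TicNext - TicStart;	// should always be 1000/TICRATE
	if (step == 0)
		return FRACUNIT;
	// fraction of the current tic that has elapsed, clamped to one full tic
	return clamp<fixed_t> ((now - TicStart) * FRACUNIT / step, 0, FRACUNIT);
}

Note that the divisor step is whatever TicNext - TicStart happens to be, which is exactly where the bug described below comes in.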
The following expression is used:
sig_next = Scale((Scale (sig_start, TICRATE, 1000) + 1), 1000, TICRATE);
basically meaning
sig_next = (sig_start * TICRATE / 1000 + 1) * 1000 / TICRATE;
which _would_ rearrange _mathematically_ to
sig_next = sig_start + 1000/TICRATE;
which is correct (it's also used in the win32 equivalent).
Because of truncated integer division, the result is instead sig_start rounded up to the next multiple of 1000/TICRATE. Thus sig_next - sig_start can be anything between 0 and 1000/TICRATE (or about 28.57 ms), while it should always be the latter. Since sig_start is updated (using SDL_GetTicks()) at intervals of 1000/TICRATE, the rounding error persists across all following tics as well, with its magnitude determined by the value of SDL_GetTicks() when sig_start was set for the first time.
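A standalone snippet makes the truncation concrete. Scale() here is a simple stand-in for ZDoom's a*b/c helper, and the sig_start values are made up for illustration:

#include <cstdio>
#include <cstdint>

// Minimal stand-in for ZDoom's Scale(): a*b/c with truncated integer division.
static int64_t Scale (int64_t a, int64_t b, int64_t c)
{
	return a * b / c;
}

int main ()
{
	const int64_t TICRATE = 35;	// Doom's tic rate: 1000/35 is about 28.57 ms
	const int64_t starts[] = { 1000, 1010, 1020, 1028 };	// made-up first readings
	for (int64_t sig_start : starts)
	{
		int64_t sig_next = Scale (Scale (sig_start, TICRATE, 1000) + 1, 1000, TICRATE);
		std::printf ("sig_start=%lld  sig_next=%lld  interval=%lld ms\n",
		             (long long)sig_start, (long long)sig_next,
		             (long long)(sig_next - sig_start));
	}
	// Output: intervals of 28, 18, 8 and 0 ms; never the full ~28.57 ms.
	return 0;
}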
If the error is small, I_GetTimeFrac() returns values close to what they should be, but the larger the error gets, the larger the factor the elapsed time is multiplied by, and eventually the result simply gets clamped to the value that represents the next gametic, meaning interpolation gets screwed up badly.
This causes the following behavior: on every restart there is roughly a 50% chance of seeing ugly stuttering that resembles a low framerate (which vid_fps does not indicate). Sometimes everything seems to work almost perfectly. Switching between ZDoom/software renderer and GZDoom/OpenGL makes no difference. The stuttering may also appear or disappear after playing for some time. (I couldn't figure out why that's even possible. Perhaps one of the timers can somehow get delayed so the error changes? But it no longer matters, since the bug is fixed.)
In the patch it would have been enough to modify the relevant expressions along the lines shown above, but it turns out the variables sig_next and TicNext could be removed altogether if I_GetTimeFrac() were modified a little, so I decided to just do that. (It's also very, VERY slightly (<2%) more precise, because the division by the integer "step" was replaced.)
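For the record, the reshaped function has roughly the following form. This is a hypothetical reconstruction from the description above, not the literal patch (see the pastebin link for that); in particular the *ms handling and the startup guard are assumptions:

// Hypothetical reconstruction: TicNext/sig_next are gone, and the fraction
// is computed from the fixed tic length instead of the truncated "step".
fixed_t I_GetTimeFrac (uint32 *ms)
{
	DWORD now = SDL_GetTicks ();
	if (ms) *ms = TicStart + 1000 / TICRATE;	// assumed: when the next tic is due
	if (TicStart == 0)	// assumed guard: no tic has been timed yet
		return FRACUNIT;
	return clamp<fixed_t> (Scale (now - TicStart, FRACUNIT * TICRATE, 1000), 0, FRACUNIT);
}

Multiplying by FRACUNIT * TICRATE and dividing by 1000 uses the exact tic length instead of the truncated integer step, which is where the small precision gain mentioned above comes from.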