I use floats because if they are good enough for GPUs, they are good enough for me.

Unfortunately, unlike GPUs, I have to convert my floats to integers once in a while for performance reasons.
The playsim may need the higher precision, but for 3D rendering the trick is just to keep all numbers near zero. I don't think this is a precision issue, although ideally it shouldn't use texture coordinates that large when it could just use a 0-1 range for that wall instead. It is more likely because I cast the floats to integers using this patent-pending equation:
Code:
// Convert to 8.24 fixed point (8 integer bits, 24 fraction bits); going through
// int32_t first lets negative coordinates wrap around correctly in the uint32.
uint32_t texU = (uint32_t)(int32_t)(u * 0x01000000);
uint32_t texV = (uint32_t)(int32_t)(v * 0x01000000);
// Shift off the 8 integer bits, keep the top 16 fraction bits, then scale by the texture size.
uint32_t texelX = (((texU << 8) >> 16) * textureWidth) >> 16;
uint32_t texelY = (((texV << 8) >> 16) * textureHeight) >> 16;
// Column-major lookup: texelX picks the column, texelY the row within it.
uint32_t texel = texture[texelX * textureHeight + texelY];
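(Worked through by hand, just as a sanity check on my own numbers: with u = 2.25 on a 64-wide texture, u * 0x01000000 is 0x02400000; the << 8 throws away the 0x02 integer part and the >> 16 leaves 0x4000, which is 0.25 in 16-bit fixed point; multiplied by 64 and shifted down 16 that lands on column 16, a quarter of the way across. The integer part of u never survives the shifts, which is what makes the texture wrap.)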
It overflows every 256 units, but more importantly, LLVM's intrinsic function for casting float to integer says the behavior in such a situation is 'undefined'. They want me to clamp or fmod the float first, but I don't have time for that! Need this triangle rendered ASAP! So I ignored that - undefined be damned. Maybe at 8192 that undefined behavior started to matter.
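For reference, this is roughly the well-behaved version the LLVM docs are nudging me towards - wrap the coordinate into [0, 1) with floor() before scaling, so the cast never leaves range. A sketch only, with placeholder names rather than the renderer's actual code:
Code:
#include <cmath>
#include <cstdint>

// Sketch: wrap u into [0, 1) before converting, so the float-to-int cast
// always stays in range and never hits the undefined case.
static inline uint32_t wrapToFixed24(float u)
{
    float frac = u - std::floor(u);                   // fractional part, in [0, 1)
    uint32_t fixed = (uint32_t)(frac * 16777216.0f);  // 0.24 fixed point
    return fixed & 0x00FFFFFF;                        // mask the rare round-up to exactly 1.0
}

// Usage with the snippet above (texU now holds only fraction bits,
// so the << 8 is no longer needed):
// uint32_t texU = wrapToFixed24(u);
// uint32_t texelX = ((texU >> 8) * textureWidth) >> 16;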
