Enjay wrote: They actually looked pretty good. I'm not sure what method they were using but they said it was "largely automated" with some post enlarging touch-up.
Theoretically speaking, a program could be written to take most of the work out of upscaling the sprites. A program could never have the aesthetic eye of a person, so no program will get it 100% right every time, but I think there is enough detail in most Doom sprites that you could automate the upscaling well enough for a person to do a final cleanup pass, taking out a large portion of the grunt work.
Let's do a simple one: scaling to twice the size (200%). In this resize, where you once had a single pixel, you now have four arranged in a two-by-two block: upper left (UL), upper right (UR), lower left (LL) and lower right (LR). Each of these represents a quarter of the original pixel. The color of each new pixel can be chosen by taking weighted samples from the original low-res image. For example, for the UL pixel, you'd sample the original pixel's color (highest weight), the colors of the pixels to the left and above it (less weight than the original pixel, but equal weight between them), and the pixel to the upper left (lowest weight). From those four weighted samples the appropriate color for the UL pixel is chosen. The same is done for the UR, LL, and LR pixels, sampling in the appropriate directions.
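To make that concrete, here's a rough pure-Python sketch of that 2x pass. The image is just rows of (r, g, b) tuples, and the 9/3/3/1 weights are only one example of "highest / equal middle / lowest" (they happen to be what plain bilinear 2x scaling uses), not anything definitive:

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def sample(img, x, y):
    """Fetch a pixel, clamping at the borders so edge pixels reuse themselves."""
    h, w = len(img), len(img[0])
    return img[clamp(y, 0, h - 1)][clamp(x, 0, w - 1)]

def upscale_2x(img):
    h, w = len(img), len(img[0])
    out = [[None] * (w * 2) for _ in range(h * 2)]
    for y in range(h):
        for x in range(w):
            # (dx, dy) picks which quadrant of the 2x2 block we're filling:
            # (-1,-1)=UL, (1,-1)=UR, (-1,1)=LL, (1,1)=LR.
            for dx, dy in ((-1, -1), (1, -1), (-1, 1), (1, 1)):
                taps = [
                    (sample(img, x,      y),      9),  # original pixel: highest weight
                    (sample(img, x + dx, y),      3),  # horizontal neighbor
                    (sample(img, x,      y + dy), 3),  # vertical neighbor, equal weight
                    (sample(img, x + dx, y + dy), 1),  # diagonal neighbor: lowest weight
                ]
                total = sum(wt for _, wt in taps)
                mixed = tuple(sum(c[i] * wt for c, wt in taps) // total
                              for i in range(3))
                out[y * 2 + (dy + 1) // 2][x * 2 + (dx + 1) // 2] = mixed
    return out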
Now, of course, this in itself isn't going to work well enough. An algorithm like that alone is basically the same as doubling the image and applying a smoothing filter, so it would probably look like crap. This is where things get sketchier. It's a good starting place, but the algorithm would need to be fleshed out to do a few smarter things. The most important one in my mind is edge detection, specifically along transparent-color edges. If you are using cyan as your transparent color, you do not want it sampled into the new colors being chosen, because the transparent color isn't supposed to be there; you'd get a demon with a cyan hue around its edges. The algorithm needs to recognize when transparency comes into play, factor that into the color picking, and try to determine where the overall edge is (for example, when choosing the UL color, if the original pixel's three neighbors are all transparent, the algorithm should strongly consider choosing transparent as the new UL color). Even more so, it should try to be aware of the transparent edge as a whole so it comes out as smooth as possible, eliminating jagged edges generated by poor sampling.
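Here's roughly how that transparency rule could bolt onto the sampler above. Cyan taps get thrown out of the blend instead of leaking into it, and if the transparent taps dominate, the new pixel just becomes transparent. The 50% cutoff is my own guess at "strongly consider", nothing more:

KEY = (0, 255, 255)  # cyan = the transparent color

def mix_with_key(taps):
    """taps: list of ((r, g, b), weight) pairs. Returns KEY or a blended color."""
    opaque = [(c, wt) for c, wt in taps if c != KEY]
    total = sum(wt for _, wt in taps)
    opaque_total = sum(wt for _, wt in opaque)
    if opaque_total * 2 <= total:
        # Transparent taps carry at least half the weight: call it transparent.
        return KEY
    # Renormalize over the opaque taps only, so no cyan leaks into the blend.
    return tuple(sum(c[i] * wt for c, wt in opaque) // opaque_total
                 for i in range(3))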
The edge detection should also extend to edges between non-transparent colors. For example, the demon's eyes are a nice, stark golden yellow which really stands out from its pink skin. If you just straight-up filter, you're going to blur the pink and yellow together around the edges and kill the contrast. Again, the algorithm needs to know how to rework the weights when it encounters colors of different "classes" while sampling.
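Something like this could handle that class check: before blending, any neighbor tap that sits too far from the center pixel's color gets its weight slashed so it barely counts, keeping a yellow eye from being averaged into the surrounding pink. The distance metric, threshold, and penalty here are all made-up illustrative numbers:

def color_dist(a, b):
    """Squared RGB distance; crude, but enough to separate pink from yellow."""
    return sum((a[i] - b[i]) ** 2 for i in range(3))

def reweight_by_class(center, taps, threshold=3000, penalty=8):
    """Cut the weight of any tap whose color is in a different 'class' than center."""
    out = []
    for c, wt in taps:
        if colour_dist(c, center) > threshold:
            wt = max(1, wt // penalty)  # different class: contributes almost nothing
        out.append((c, wt))
    return out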
Hopefully, with such things intelligently built into the algorithm, what it would wind up producing are images that may not be the greatest hi-res versions of the sprites, but would require as little manual work as possible to polish into good-looking sprites.