@@ -20,9 +20,9 @@ However, if you simply cache a single version of the bitmap and
 draw it at different subpixel positions with a GPU, you will get
 either the exact same result (if you use point-sampling on the
 texture) or linear filtering. Linear filtering will cause a sub-pixel
-positioned bitmap to blur further, causing a visible desharpening
+positioned bitmap to blur further, causing a visible de-sharpening
 of the character. (And, since the character wasn't hinted, it was
-already blurrier than a hinted one would be, and not it gets even
+already blurrier than a hinted one would be, and now it gets even
 more blurry.)
 
 You can avoid this by caching multiple variants of a character which
@@ -32,7 +32,7 @@ horizontal offset, and always require characters to fall on integer
 positions vertically.
 
 When creating a texture atlas for use on GPUs, which support bilinear
-filtering, there is a better approach than caching several indepdent
+filtering, there is a better approach than caching several independent
 positions, which is to allow lerping between the versions to allow
 finer subpixel positioning. You can achieve these by interleaving
 each of the cached bitmaps, but this turns out to be mathematically
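The paragraph touched by the second hunk describes interleaving several cached sub-pixel variants of a glyph so that bilinear filtering lerps between them. As a rough illustration of that idea only, here is a small hypothetical C sketch; the names `interleave`, `fetch`, `N`, and `W` are invented for this example and are not taken from the patched file. It emulates on the CPU what a GPU bilinear fetch would do along u.

```c
/* Sketch only: assume variant[k][i] holds the glyph's coverage for
 * output pixel i when the glyph is sampled at horizontal sub-pixel
 * phase k/N.                                                         */
#include <stdio.h>

enum { N = 4, W = 8 };              /* 4 variants, 8-pixel-wide glyph */

/* Column-interleave the variants: variant k, pixel i -> index i*N + k.
 * This places the N phases of each output pixel next to each other,
 * the same layout an N-times horizontally oversampled rasterization
 * would produce.                                                      */
static void interleave(const float v[N][W], float out[W * N])
{
    for (int i = 0; i < W; ++i)
        for (int k = 0; k < N; ++k)
            out[i * N + k] = v[k][i];
}

/* CPU stand-in for a bilinear texture fetch along u: sampling output
 * pixel i at fractional phase f (0 <= f < 1) lands between the two
 * interleaved columns whose phases bracket f, so the lerp blends the
 * two cached variants nearest the requested sub-pixel position.       */
static float fetch(const float *tex, int i, float f)
{
    float u  = i * N + f * N;
    int   u0 = (int)u;
    int   u1 = (u0 + 1 < W * N) ? u0 + 1 : u0;   /* clamp at the edge */
    float t  = u - (float)u0;
    return tex[u0] * (1.0f - t) + tex[u1] * t;
}

int main(void)
{
    float v[N][W] = {{0}}, tex[W * N];
    v[1][3] = 1.0f;                  /* bright pixel in phase 0.25     */
    v[2][3] = 0.5f;                  /* dimmer pixel in phase 0.5      */
    interleave(v, tex);
    /* f = 0.375 sits halfway between phase 0.25 and phase 0.5        */
    printf("%f\n", fetch(tex, 3, 0.375f));   /* prints 0.750000        */
    return 0;
}
```

Once the variants are interleaved this way, an ordinary bilinear sample at the fractional texel coordinate performs the same lerp in hardware, which appears to be the point the patched paragraph is making.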