Sunday, February 7, 2021

BC7 RDO rate distortion curves

I've been tuning the fixed Deflate model in bc7enc_rdo. In this test I varied the # of literal bits from 8 to 14. Higher values push the system to prefer matches vs. literals.

The orange line is yesterday's encoder; all the other lines are today's encoder. Today's encoder has several improvements, such as lazy parsing and mode 6 endpoint match trials.


(I know this graph is going to be difficult to read on blogger - Google updated it and now images suck. You used to be able to click on images and get a full-res view.)

Lagrangian multiplier based RDO encoding early outs

Some minor observations about Lagrangian multiplier based RDO (with BC7 RDO+Deflate or LZ4):

We're optimizing to find the lowest t (sometimes called j), given the many hundreds or thousands of ways of encoding a BC7 block:

float t = trial_mse * smooth_block_error_scale + trial_lz_bits * lambda;

For each trial block, we compute its MSE and estimate its LZ bits using a simple Deflate/LZ4-like model.

If we already have a potential solution for a block (the best found so far), then given the trial block's MSE and the current best_t we can compute the maximum number of bits a new trial encoding could take and still be an improvement. If the computed threshold bits are ridiculous (negative, or just impossible to achieve with Deflate on a 128-bit block input), we can immediately throw out that trial block:

threshold_trial_lz_bits = (best_t - trial_mse * smooth_block_error_scale) / lambda

Same for MSE: if we already have a solution, we can compute the MSE threshold where it's impossible for a trial to be an improvement:

threshold_trial_mse = (best_t - trial_lz_bits * lambda) / smooth_block_error_scale

This seems less valuable, because running the LZ simulator to compute trial_lz_bits is likely more expensive than computing a trial block's MSE. We could plug in the lowest possible estimate for trial_lz_bits and use that as a threshold MSE. Another interesting thing: trials are very likely to have an MSE >= the best encoding found so far for a block.

Using simple formulas like this results in large perf. improvements (~2x).
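
Here's a minimal sketch of how these early outs could fit into a trial evaluation loop. compute_mse(), estimate_lz_bits(), and the 16-bit minimum are all assumptions for illustration, not bc7enc_rdo's actual code:

#include <cfloat>
#include <cstdint>
#include <vector>

// Hypothetical helpers: per-trial MSE vs. the source pixels, and the LZ
// bitrate estimator described above. Names are illustrative.
struct bc7_block { uint8_t bytes[16]; };
float compute_mse(const bc7_block &blk, const uint32_t *orig_pixels);
float estimate_lz_bits(const bc7_block &blk);

bc7_block pick_best_trial(const std::vector<bc7_block> &trials,
                          const uint32_t *orig_pixels,
                          float lambda, float smooth_block_error_scale)
{
    float best_t = FLT_MAX;
    bc7_block best = trials[0];

    // Assumed floor on how few bits Deflate could ever use on a 128-bit block.
    const float MIN_POSSIBLE_LZ_BITS = 16.0f;

    for (const bc7_block &trial : trials)
    {
        const float d = compute_mse(trial, orig_pixels) * smooth_block_error_scale;

        // Early out 1: even a free (0-bit) encoding of this trial can't beat best_t.
        if (d >= best_t)
            continue;

        // Early out 2: the most bits this trial could spend and still improve.
        const float threshold_trial_lz_bits = (best_t - d) / lambda;
        if (threshold_trial_lz_bits < MIN_POSSIBLE_LZ_BITS)
            continue; // impossible to hit with Deflate - skip the LZ simulation

        const float trial_lz_bits = estimate_lz_bits(trial); // the expensive part
        const float t = d + trial_lz_bits * lambda;
        if (t < best_t)
        {
            best_t = t;
            best = trial;
        }
    }
    return best;
}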

Saturday, February 6, 2021

BC7 DDS file entropy visualization

RDO GPU texture encoders increase the number/density of LZ matches in the encoded output texture. Here's a file entropy visualization of kodim18.dds. The left image was non-RDO encoded, the right image was encoded with lambda=4.0, max backwards scan=2048 bytes.

Non-RDO:



RDO:

Non-RDO, one byte matches removed:



RDO, one byte matches removed:



fv docs:
"The output is fv.bmp with the given size in pixels, which visually
displays where matching substrings of various lengths and offsets are
found.  A pixel at x, y is (black, red, green, blue) if the last matching
substring of length (1, 2, 4, 8) at x occurred y bytes ago.  x and y
are scaled so that the image dimensions match the file length.
The y axis is scaled log base 10."
Tool source:

The two types of RDO BC7 encoders

There are two main high-level categories of RDO BC7 encoders:
1. The first type is optimized for highest PSNR per LZ compressed bit, but is significantly slower than ispc_texcomp/bc7e.

2. The second type is optimized for highest PSNR per LZ compressed bit per unit of encoding time. These encoders are the same speed as (or almost as fast as) ispc_texcomp/bc7e. Some may even be faster than non-RDO encoders, because they entirely ignore less useful modes (like mode 0).

To optimize for PSNR per LZ compressed bit, you can create the usual rate distortion graph (bitrate on X, quality on Y), then choose the encoder with the highest PSNR at specific bitrates (the highest/leftmost curve) that meets your encoder performance needs.

Other thoughts:
- When comparing category 2 encoders, encoding time is nearly everything.

- Category 2 encoders don't need to win against category 1 encoders. They compete against non-RDO encoders. Given two encoders, one category 2 RDO and the other non-RDO, if all other things are equal the RDO encoder will win.

- Modifying bc7e to place it into category #2 will be easy.

- Category 1 is PSNR/bitrate (where bitrate is in LZ bits/texel). Or SSIM/bitrate, but I've found SSIM to be nearly useless for texture encoding.

- Category 2 is (PSNR/bitrate)/encode_time (where encode_time is in seconds).

Simple and fast ways to reduce BC7's output entropy (and increase LZ matches)

It's relatively easy to reduce the output entropy of BC7 by around 5-10% without slowing down encoding (some of these tricks even speed it up). I'll be adding this stuff to the bc7e ispc encoder soon. I've been testing these tricks in bc7enc_rdo:

- Weight the mode errors: For example, weight mode 1 and 6's errors way lower than the other modes'. This shifts the encoder to use modes 1 and 6 more often, which reduces the output data's entropy. The other modes then have to make a truly significant difference in reducing distortion before the encoder switches to using them (see the sketch after this list).

- Biased p-bits: When deciding which p-bits to use (0 vs. 1), weight the error from using p-bit 1 slightly lower (or the opposite). This will cause the encoder to favor one of the p-bits more than the other, reducing the block output data's entropy.

- Partition pattern weighting: Weight the error from using the lower frequency partitions [0,15] or [0,33] slightly lower vs. the other patterns. This reduces the output entropy of the first or second byte of BC7 modes with partitions.

- Quantize mode 6's endpoints and force its p-bits to [0,1]: Mode 6 uses 7-bit endpoint components. Use 6-bits instead, with fixed [0,1] p-bits. You'll need to do this in combination with reducing mode 6's error weight, or a multi-mode encoder won't use mode 6 as much. 

- Don't use mode 4/5 component rotations, or the index flag: In practice these options aren't particularly useful and just increase the output entropy. The component rotation feature can also cause odd looking color artifacts.

- Don't use modes 0, 2, or 3, and possibly not 4: These modes are less useful, at least on albedo/specular/etc. maps, sRGB content, and photos/images. Almost all BC7 encoders, including ispc_texcomp's, can't even handle mode 0 correctly anyway.

Mode 4 is useful on decorrelated alpha. If your content doesn't have much of that, just always use mode 5.
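
Here's a rough sketch of what the mode weighting, p-bit biasing, and mode 6 quantization tricks could look like in code. All the weights, the bias constant, and the function names are hypothetical, not bc7enc_rdo's actual values:

#include <cstdint>

// Hypothetical per-mode error weights: modes 1 and 6 are weighted way lower,
// so the other modes must reduce distortion significantly to be chosen.
static const float g_mode_error_weights[8] = { 1.0f, 0.6f, 1.0f, 1.0f,
                                               1.0f, 1.0f, 0.6f, 1.0f };

inline float weighted_mode_error(uint32_t mode, float raw_err)
{
    return raw_err * g_mode_error_weights[mode];
}

// Biased p-bits: slightly discount the error of one p-bit choice so the
// encoder favors it, skewing the p-bit distribution and lowering entropy.
inline uint32_t choose_pbit(float err_pbit0, float err_pbit1)
{
    const float kPBitBias = 0.97f; // hypothetical bias toward p-bit 1
    return (err_pbit1 * kPBitBias <= err_pbit0) ? 1 : 0;
}

// Mode 6 endpoint quantization: round the 7-bit component to 6 bits, then
// expand back to 7 bits by bit replication (used with fixed [0,1] p-bits).
inline uint8_t quantize_mode6_endpoint(uint8_t v7) // v7 in [0,127]
{
    const uint32_t v6 = (v7 * 63u + 63u) / 127u; // 7 bits -> 6 bits, rounded
    return (uint8_t)((v6 << 1) | (v6 >> 5));     // 6 bits -> 7 bits
}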


More RDO BC7 encoder progress

I finally sat down and added a simple LZ4-like simulator to my in-progress, soon to be open source RDO BC7 encoder. You can add blocks in, query it to find the longest/nearest match, and give it some bytes and ask how many bits they would take to code (up to 128 for BC7, less if it finds matches). It's definitely the right path forward for RDO encoders. For BC7 modes 1 and 6, it looks like it's accurate vs. Deflate to within around 1.5-7%. It predicts on the high side vs. Deflate because it doesn't have a Huffman model. Mode 1's predictions tend to be more accurate, I think because that mode's encoded endpoints are nicely aligned on byte boundaries.
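
The simulator's interface can stay very small. Here's a hypothetical sketch of its shape (this is not the actual bc7enc_rdo API):

#include <cstddef>
#include <cstdint>

// Hypothetical interface for the LZ4-like simulator: a sliding window of
// previously emitted block bytes that can price a candidate block in bits.
class lz_simulator
{
public:
    explicit lz_simulator(size_t max_search_dist);

    // Append an encoded block's bytes to the sliding window/dictionary.
    void add_block(const uint8_t *bytes, size_t len);

    // Find the longest (nearest on ties) match for the given bytes,
    // returning the match length in bytes (0 = none) and its distance.
    size_t find_longest_match(const uint8_t *bytes, size_t len,
                              size_t &match_dist) const;

    // Estimate the coded size in bits: up to 128 for a 16-byte BC7 block,
    // less when matches are found. With no Huffman model it estimates a
    // few percent high vs. real Deflate.
    float estimate_bits(const uint8_t *bytes, size_t len) const;
};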

With BC7 RDO encoding, you really need an LZ simulator of some sort, or at least decent approximations. Once you can simulate how many bits a block compresses to, you can have the encoder try replacing byte aligned sequences within each block with sequences that appear in previous blocks. This is the key magic that makes this method work so well. You need to "talk" to the LZ compressor in the primary language it understands: matches of 2+ or 3+ bytes in length.

For example, with mode 6, the selectors are 4-bits per texel, and are aligned at the end of the block. So each byte has 2 texels. If your p-bits are always [0,1] (mine are in RDO mode), then it's easy to substitute various regions of bytes from previously encoded mode 6 blocks, and see what LZ does.

This is pretty awesome because it allows the encoder to escape from being forced to always use an entire previous block's selectors, greatly reducing block artifacts.
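
A sketch of the mode 6 selector splicing idea, reusing the hypothetical types from the sketches above. The offsets, lengths, and loop structure are illustrative, not my actual implementation:

#include <cstring>

// Try splicing selector byte runs from a previously encoded mode 6 block
// into the current block, pricing each candidate with the Lagrangian.
// Assumes bc7_block, compute_mse() and lz_simulator from the sketches above.
void try_mode6_selector_splices(const bc7_block &cur_block,
                                const bc7_block &prev_block,
                                const uint32_t *orig_pixels,
                                const lz_simulator &lz_sim,
                                float lambda, float smooth_block_error_scale,
                                float &best_t, bc7_block &best_block)
{
    // Mode 6's 4-bit selectors live in the last 8 bytes of the 16-byte block
    // (2 texels per byte), so with the p-bits pinned to [0,1] these bytes
    // can be freely substituted.
    const uint32_t MODE6_SELECTOR_OFS = 8;

    for (uint32_t len = 3; len <= 8; len++) // byte match lengths to try
    {
        for (uint32_t ofs = MODE6_SELECTOR_OFS; ofs + len <= 16; ofs++)
        {
            bc7_block trial = cur_block;
            memcpy(trial.bytes + ofs, prev_block.bytes + ofs, len);

            // The spliced selectors index the trial's own endpoints, so the
            // block still decodes - recompute MSE and let the simulator
            // price the modified bytes.
            const float t =
                compute_mse(trial, orig_pixels) * smooth_block_error_scale +
                lz_sim.estimate_bits(trial.bytes, 16) * lambda;

            if (t < best_t) { best_t = t; best_block = trial; }
        }
    }
}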

In one experiment, around 40% of the blocks that got selector byte substitutions from previous blocks got them by plugging in 3 or 4 byte matches and evaluating the Lagrangian.

40% is ridiculously high - which means this technique works well. It'll work with BC1 too. The downside (as usual) is encoding performance.

I've implemented byte replacement trials for 3-8 byte matches. All are heavily used, especially 7 and 8 byte matches. I may try other combinations, like trying two 3 byte matches with 2 literals, etc. You can also do byte replacement in two passes, by trying 3 or 4 byte sequences from 2 previously encoded blocks.

Making this go fast will be a perf. optimization challenge. I'm convinced that you need to do something like this; otherwise you're always stuck replacing an entire block's worth of selectors, which can look way uglier.

Example encodings (non-RDO modes 1+6 is 42.253 dB, 6.84 bits/texel):

- RDO BC7 mode 1+6, lambda .1, 8KB max search distance, match replacements taken from up to 2 previous blocks
41.765 RGB dB, 6.13 bits/texel (Deflate - miniz library max compression)



- RDO BC7 mode 1+6, lambda .25, 8KB max search distance, match replacements taken from up to 2 previous blocks
41.496 RGB dB, 5.78 bits/texel (Deflate - miniz library max compression)



- RDO BC7 mode 1+6, lambda .5, 8KB max search distance, match replacements taken from up to 2 previous blocks
40.830 RGB dB, 5.36 bits/texel (Deflate - miniz library max compression)



- RDO BC7 mode 1+6, lambda 1.0, 4KB max search distance
39.507 RGB dB, 4.97 bits/texel (Deflate - miniz library max compression)



Mode 6 byte replacement histogram (match length in bytes: # of replacements):
3 bytes: 5000
4 bytes: 3688
5 bytes: 3833
6 bytes: 3975
7 bytes: 4632
8 bytes (the entire mode 6 selector field): 14752

- RDO BC7 mode 1+6, lambda 3.0, 2KB max search distance
36.161 dB, 4.59 bits/texel




- RDO BC7 mode 1+6, lambda 4.0, 2KB max search distance
35.035 dB, 4.47 bits/texel



- RDO BC7 mode 1+6, lambda 5.0, 4KB max search distance, match replacements taken from up to 2 previous blocks
33.760 dB, 3.96 bits/texel



- RDO BC7 mode 1+6, lambda 8.0, 4KB max search distance, match replacements taken from up to 2 previous blocks
32.072 dB, 3.47 bits/texel



- RDO BC7 mode 1+6, lambda 10.0, 4KB max search distance, match replacements taken from up to 2 previous blocks
31.318 dB, 3.32 bits/texel


- RDO BC7 mode 1+6, lambda 10.0, 8KB max search distance, match replacements taken from up to 2 previous blocks
31.279 dB, 3.21 bits/texel

- RDO BC7 mode 1+6, lambda 12.0, 8KB max search distance, match replacements taken from up to 2 previous blocks
30.675 dB, 3.07 bits/texel


- RDO BC7 mode 1+6, lambda 20.0, 8KB max search distance, match replacements taken from up to 2 previous blocks
29.179 dB, 2.68 bits/texel



- Non-RDO mode 1+6 (bc7enc level 4)
42.253 dB, 6.84 bits/texel:



- Original 1024x1024 image:



Wednesday, February 3, 2021

RDO BC1-BC7 progress

I've been making progress on my first RDO BC7 encoder. I started working on RDO BC1-7 years ago, but I put this work on hold to open source Basis Universal. BasisU was way more important from a business perspective. (Games are fun and all, but the game business doesn't pay and web and mapping are where the eyeballs are at.)

RDO BC1-5 are done and already checked into the bc7enc_rdo repo. The test app in this repo currently only supports RDO BC1 and BC4, but I'll add BC3/5 very soon (they're just trivial variations of BC1/4). I'm hoping the KTX2 guys will add this encoder to their repo, so I don't have to create yet another command line tool that supports mipmaps, reading/writing DDS/KTX, etc. RDO BC1-5 are implemented as post-processors, so they're compatible with any non-RDO BC1-5 encoder.

For my first RDO BC7 encoder, I've modified bc7enc's BC7 encoder (which purposely only supports 4 modes: 1/5/6/7) to support optional per-mode error weights, and 6-bit endpoint components with fixed 0/1 p-bits in mode 6. These two simple changes immediately reduce LZ compressed file sizes by around 5-10% with Deflate, with no perf. impact. I may support doing something like this for the other modes. I also implemented Castano's optimal endpoint rounding method, because why not.

The next step is creating a post-processor that accepts an array of encoded BC7 blocks, and modifies them for higher quality per compressed bit by increasing LZ matches. The post-processor function will support all the modes, although I'm testing primarily with bc7enc at the moment. Merging selector bits with previously encoded blocks is the simplest thing to do, which I just got working for any mode. 
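
The planned entry point could look something like this. The signature, parameter names, and defaults here are hypothetical, not the final API:

#include <cstddef>
#include <cstdint>

// Hypothetical knobs for the BC7 RDO post-processor.
struct bc7_rdo_params
{
    float lambda = 1.0f;                // rate/distortion tradeoff
    uint32_t max_search_dist = 8192;    // LZ dictionary size, in bytes
    float smooth_max_std_dev = 18.0f;   // blocks below this are "smooth"
    float smooth_max_mse_scale = 19.2f; // max error amplification for smooth blocks
};

// Takes an array of already-encoded 16-byte BC7 blocks (from any encoder)
// plus the source pixels, and rewrites the blocks in place to increase LZ
// matches, trading quality for rate as controlled by lambda.
bool bc7_rdo_postprocess(uint8_t (*blocks)[16], size_t num_blocks,
                         const uint32_t *pixels,
                         uint32_t width, uint32_t height,
                         const bc7_rdo_params &params);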

I'm using the usual Lagrangian multiplier method (j=D+l*R, where D=MSE, R=predicted bits, l=lambda). Here's a good but dense article on rate distortion methods and theory: Rate-distortion methods for image and video compression, by Ortega and Ramchandran (1998). I first read this years ago while working on Halo Wars 1's texture compression system, which was like crunch's .CRN mode but simpler. None of this stuff is new, and the image and video folks have been doing it for decades.

I first implemented the Lagrangian multiplier method in 2017, as a postprocess on top of crunch's BC1 RDO mode, which we sent to a few companies. The Lagrangian multiplier method itself is easy, but estimating the LZ bitrate and especially handling smooth blocks is tricky. The current smooth block method I'm using computes the maximum standard deviation of any component in each block, and from that scalar it computes a per-block MSE error scale. This artificially amplifies computed errors on smooth blocks, which is a hack, but it does seem to work. It hurts R-D performance, but something must be done or smooth blocks turn to shit.
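
Here's a sketch of that smooth block hack. The 18.0/19.2 constants come from the examples below, but the linear ramp shape is my assumption for illustration - the actual curve may differ:

#include <algorithm>
#include <cmath>
#include <cstdint>

// Compute a per-block MSE error scale from the maximum standard deviation of
// any component in a 4x4 RGBA block. Blocks at or above max_std_dev aren't
// considered smooth (scale 1.0); flatter blocks ramp up to max_mse_scale.
float compute_smooth_block_mse_scale(const uint8_t pixels[16 * 4],
                                     float max_std_dev,   // e.g. 18.0
                                     float max_mse_scale) // e.g. 19.2
{
    float max_comp_std_dev = 0.0f;
    for (int c = 0; c < 4; c++) // R, G, B, A
    {
        float sum = 0.0f, sum_sq = 0.0f;
        for (int i = 0; i < 16; i++)
        {
            const float v = pixels[i * 4 + c];
            sum += v;
            sum_sq += v * v;
        }
        const float var = sum_sq / 16.0f - (sum / 16.0f) * (sum / 16.0f);
        max_comp_std_dev = std::max(max_comp_std_dev,
                                    std::sqrt(std::max(var, 0.0f)));
    }

    if (max_comp_std_dev >= max_std_dev)
        return 1.0f; // not a smooth block

    // The smoother the block, the more its MSE gets amplified (hypothetical
    // linear ramp between 1.0 and max_mse_scale).
    const float smoothness = 1.0f - max_comp_std_dev / max_std_dev; // 0..1
    return 1.0f + smoothness * (max_mse_scale - 1.0f);
}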

Some RDO BC1 and BC7 examples on kodim23, which has lots of smooth blocks:

RDO BC1: 8KB dictionary, lambda=1.0, max smooth block MSE scale=10.2, max std dev=18.0, linear metrics
38.763 RGB dB, 2.72 bits/texel (Deflate, miniz max compression)


RDO BC7 modes 1+6: 8KB dictionary, lambda=1.0, max smooth block MSE scale=19.2, max std dev=18.0, -u4, linear metrics
42.659 RGB dB, 5.05 bits/texel (Deflate, miniz max compression)
Mode 1: 1920 blocks
Mode 6: 22656 blocks



RDO BC7 modes 1+6: 8KB dictionary, lambda=2.0, max smooth block MSE scale=20.4, max std dev=18.0, -u4, linear metrics
40.876 RGB dB, 4.41 bits/texel (Deflate, miniz max compression)
Mode 1: 1920 blocks
Mode 6: 22656 blocks


To get an idea of how bad things can get if you don't do anything to handle smooth blocks, here's BC7 modes 1+6 with lambda=1.0 and no smooth block error scaling (max MSE scale=1.0):
38.469 RGB dB, 3.49 bits/texel


I'm showing kodim23 because how you handle smooth blocks in this method is paramount. 92% of kodim23's blocks are treated as smooth blocks (because their max component standard deviation is <= 18). This means that most of the MSE errors being computed and plugged into the Lagrangian calculation are being artificially scaled up. There must be a better way, but at least it's simple. (By comparison, crunch's clusterization-based method didn't do anything special for smooth blocks - it just worked.)

I'm still tuning how smooth blocks are handled. Being too conservative with smooth blocks can cause very noticeable block artifacts at higher lambdas. Being too liberal with smooth blocks causes the R-D efficiency (quality per LZ bit) to go down:


Here are some R-D curves from my early BC1 RDO results. rgbcx.h's RDO BC1 is clearly beating crunch's 13 year old RDO BC1 implementation, achieving higher quality per LZ compressed bit. crunch is based on endpoint/selector clusterization+refinement with no direct awareness of what LZ is going to do with the output data, so this isn't surprising.


(These images are way too big, but I'm too tired to resize them.)

For some historical background, the crunch library (which also supports RDO BC1-5) has always computed and displayed LZMA statistics on the compressed texture's output data. (As a side note, there's no point using Unity's crunch repo for RDO BC1-5 - they only optimized .CRN, not RDO.) The entire goal of crunch, from the beginning, was RDO BC1-5: it achieved this indirectly by causing many blocks, especially nearby ones, to use the same endpoint/selector bits, increasing the ratio of LZ matches vs. literals. I remember being very excited by RDO texture encoders in 2009, because I realized how useful they would be to video game developers. For a fun but practical Windows demo of crunch's RDO encoder that I wrote years ago (using managed C++ of all things), check out ddsexport.

Anyhow, the next step is to further enhance RDO BC7 opaque, then dive into alpha. I'll be open sourcing the RDO BC7 postprocessor within a week. After this I'm going to write a second stronger version.

I suspect everybody will switch to RDO texture encoders at some point. Selector RDO with optional endpoint refinement is very easy to do on all the GPU texture formats, even PVRTC1.

I recently went back and updated the UASTC RDO encoder to use the same basic options (lambda and smooth block settings) as my RDO BC1-7 encoders. The original UASTC RDO encoder controlled quality vs. bitrate in a different way. These changes will be in basisu v1.13, which should be released on github hopefully by next week (once we get approval from the company we're working with).