Saturday, December 5, 2015

How to deeply integrate a data compressor into a game engine

Intro


We're still in a "path finding into the future" mode here at Unity. We are now thinking about breaking down the "Berlin Wall" between game engines like Unity and the backend data compressor. We have a lot of ideas, and some are obvious things I've already blogged about like CompressQuery(). These next ideas are deeper, and less concrete, but I think they are interesting.

Below is a technical note from our internal Compression Team Confluence page (posted internally first to get early feedback on the possibility of deep Unity engine integration). It describes a couple of interesting proposals triggered by several great discussions with Alexander Suvorov, also on the Compression Team. There are no guarantees this stuff will work out, but we need to think and talk about it because it could lead to even better ideas.


Some ideas on how to deeply integrate a data compressor into a game engine


Alexander Suvorov and I have been discussing this topic:
The key question on the Compression Team that we should be answering is: How do we build a data compression engine that Unity can talk to better, so we get higher ratios?

Right now everybody else spends endless dev time optimizing the backend compressor itself (entropy coders, LZ virtual machine/instruction set changes, etc.). That's a great thing to do, but there are other ways of improving ratio too. Compressor authors focus there because they usually don't control the caller. That's not true for us: Alexander and I can change anything in the entire Unity stack (we can change the caller and we can change the data compressor).

The golden rule in mainstream lossless compression has been: "you cannot change the submission API", for compatibility/simplicity reasons. Basically, "API and Data Format Compatibility=God" for a codec. The coding literature I've seen doesn't go much (if at all) into the API side either, because the agreed assumption is that the caller doesn't typically know much or care about the details inside lossless data compression libraries. It's just a blob of bytes, here you go.

Since artificial "rules are meant to be broken" (Joachim's observation), there seems to be at least two interesting approaches to breaking down the barrier, that can also be mixed together. Both involve a superset compressor/decompressor API:

1. Direct Context Control (DCC)

Let's try allowing the caller to have some control over the compressor's internal coding contexts (or select between statistical models).

In this API, the caller specifies the ID of the context to use for the upcoming data while streaming it to/from the codec. The context can be as simple as a programmer-chosen ID or a structure member ID - anything the caller wants, as long as it's consistent (and as long as they specify the same context ID during decompression). They can say stuff like "context0=DWORDs" and "context1=floats", or they can describe individual bytes within these elements, etc.

We don't care what the context really means; all we care about is that the caller doesn't lie and stays consistent. We don't serialize the context IDs anywhere - this is just info supplied by the caller that they already have. The caller may need to experiment to find the proper way to break down their data into specific context IDs.

The LZHAM alpha used many more contexts internally, and adding them back isn't that bad. Let's let the caller control them; as long as they understand the constraints (compressor and decompressor must always agree on contexts, don't lie, be consistent) we're okay.
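Here's a rough C sketch of what a DCC-style API could look like. Everything here is hypothetical - nothing like CompressSetContext() exists in any shipping codec yet - and the output-buffer arguments are omitted for brevity:

#include <stddef.h>

typedef struct comp_ctx comp_ctx;
typedef struct decomp_ctx decomp_ctx;

/* Announce the context of the next bytes submitted to the codec. The ID is
   never serialized; compressor and decompressor must make identical calls. */
void CompressSetContext(comp_ctx *pCtx, unsigned int context_id);
void DecompressSetContext(decomp_ctx *pCtx, unsigned int context_id);
void CompressContinue(comp_ctx *pCtx, const void *pBuf, size_t size);

/* Example: streaming an array of floats, one context per byte lane. */
enum { CTX_FLOAT_BYTE0 = 16, CTX_FLOAT_BYTE1, CTX_FLOAT_BYTE2, CTX_FLOAT_BYTE3 };

void compress_float_array(comp_ctx *pCtx, const float *pVals, size_t count)
{
    const unsigned char *pBytes = (const unsigned char *)pVals;
    for (size_t i = 0; i < count; i++)
        for (size_t b = 0; b < 4; b++)
        {
            CompressSetContext(pCtx, CTX_FLOAT_BYTE0 + (unsigned int)b);
            CompressContinue(pCtx, &pBytes[i * 4 + b], 1);
        }
}

The decompressor would make the exact same sequence of SetContext() calls while pulling bytes out, which is what keeps the two sides in sync.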

2. Universal Preprocessing
This benefits from, but doesn't need, DCC. The compressor accepts Granny-style fixup metadata, and we preprocess the data before compression and postprocess it after decompression. We use DCC internally, but it's not strictly required (depends on the codec).

The extra API allows the caller to describe how their data is laid out, by providing exactly the same structure, data member, and array size/pointer markup metadata that Granny's powerful binary fixup metadata system uses. (Rad puts little tidbits of info about this technology in its changenotes here - look for "fixups".)

So the plan here is:

  • We modify Unity to provide universal binary markup metadata for the important binary data it serializes into files
    • It doesn't need to be exact or complete; it just needs to describe most of the data well
    • Markup describes structures with offsets to other arrays of structures, basically a Granny-style "data tree". (Granny uses this for byteswapping and offset to pointer remapping.)
    • Note Granny's markup metadata is "universal": serialized data is described as structs pointing to other arrays of structs, and structs can have arbitrary members (bytes, words, DWORDs, offsets/pointers, etc.) - like run-time type info metadata.
    • Worst case, you describe any arbitrary data as an array of bytes, but you get no gains there; you need higher-level info.
    • See Runtime/Serialize/IterateTypeTree.h in Unity code
  • Next, the compressor walks the data tree and reserializes it into a compressor-friendly order
    • This is a specialized tree sampling algorithm that uses the metadata to walk over each byte of every marked-up structure member in the tree.
    • Both compressor and decompressor must traverse the tree data in the exact same way.
    • So if the entire file is an array of floats, we first emit byte 0 of every float, then byte 1, etc. (an AOS->SOA style byte swizzle of each unique data type in the file - see the sketch after this list). Or other options.
    • We can also provide the compressor with per-member or per-byte context IDs
    • This could help on images, DXT textures, sound data, arbitrary serialized binary data
    • Should be especially easy to integrate into existing auto-serialization systems
    • Like data preprocessors used by RAR and other archivers
    • Sampling algorithm ideas are in another doc
  • Compression/decompression as usual, except we can have DCC calls too (if we want)
  • Decompressor deswizzles data during postprocess (just like Granny does when it loads a Granny file and has to byteswap and convert from offsets to pointers)
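To make the swizzle step concrete, here's a minimal sketch of the byte transpose for the all-floats case from the list above. The real version would be driven by the datatree metadata rather than a fixed element size:

#include <stddef.h>

/* AOS->SOA byte swizzle: emit byte 0 of every element, then byte 1, etc.
   pDst and pSrc both point to count * elem_size bytes. */
void swizzle_bytes(unsigned char *pDst, const unsigned char *pSrc,
                   size_t count, size_t elem_size)
{
    for (size_t b = 0; b < elem_size; b++)      /* byte lane */
        for (size_t i = 0; i < count; i++)      /* element */
            *pDst++ = pSrc[i * elem_size + b];
}

/* Exact inverse, run by the decompressor during postprocessing. */
void deswizzle_bytes(unsigned char *pDst, const unsigned char *pSrc,
                     size_t count, size_t elem_size)
{
    for (size_t b = 0; b < elem_size; b++)
        for (size_t i = 0; i < count; i++)
            pDst[i * elem_size + b] = *pSrc++;
}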

Notes And Conclusion


Now I understand one important reason why tech companies like Apple, Facebook, Google, Microsoft, and Unity hire or internally find data compression specialists. Once you have an internal set of compression specialists, those engineers can freely move up and down the company's stack. Once they do, they can achieve superior results by developing a full understanding of the API stack, usage patterns, and company-specific datasets. Writing completely custom codecs becomes a doable and profitable thing.

Idea #1, "direct context control" is very interesting but will take some R&D to figure out the technical details and true feasibility. I've learned that adding more contexts must be done very carefully. Too many contexts and they get too sparse, possibly cause performance problems, memory usage goes up, etc.

Idea #2 is the "universal preprocessing" idea I've been mulling over in my head for several years. It describes how to more closely couple or blend a binary data serialization system, like the one used by Unity (or Granny, or the Halo Wars engine), with a lossless data compression system like LZHAM.

We wrote this around the same time as Colt McAnlis's interesting blog post, in which he mentions his ideas on several unsolved problems in data compression. (I don't quite get point #1 though - need more detail.) This post is closely related to problem #3, and maybe a little of #2 as well. It's also related to "compression boosters", which Colt says "are preprocessing algorithms that allow other, existing, compressors to produce better results".

Basically, if we add a binary metadata description API to the compressor (like the Granny-style "fixup" data used for offset->pointer conversion and byteswapping) this opens up a bunch of interesting possibilities. At least on the types of binary data we deal with all the time: images, GPU textures, meshes, animation, etc.

Many engines have serialization systems like this, that handle things like byteswapping, offset->pointer conversion, and auto serialization/deserialization to binary or text formats. (Any open source ones? MessagePack solves some similar problems.)

Colt's point on Kolmogorov complexity is a key related concept to pull inspiration from. We already have the algorithm plus its input data (the metadata) needed to serialize or deserialize binary files to/from raw byte arrays. It's just the datatree graph traversal algorithm I implemented to byteswap/pointerize "Raw" Granny data files on Halo Wars (for reasons unrelated to this article).

We still don't have a program that creates the data in a pure shortest-program Kolmogorov sense, of course. Our program has two data inputs (metadata and raw "value" data), so all we've done is expand the total amount of data to compress (by a small amount, due to the metadata, but it's still expansion). But we do have at least one little program that can create and manipulate a new set of compressor input, or give us key type information about the compressor's input. We can use this information to better context model and/or reorganize ("sort" or permute) the input data so it compresses better with a backend coder like LZHAM/LZMA.

The metadata itself can be transmitted in the compressed data stream (just as Granny now does, according to its changenotes), or it can live in the game engine's executable itself. This data is needed for deserialization anyway, so it makes no sense to duplicate it and hurt ratio.

Unfortunately, one detail not mentioned above is that a datatree can have arrays of objects, which requires "size" fields to be present in the parent objects that point to each array. So it may be necessary for the datatree serializer to compose a separate list of array size fields and supply that information to the compressor (it will also need to be transmitted to the decompressor). Due to object serialization order, the array size fields should always appear before the array data itself, so this may not be a big deal.

Another possibility is to use a sort to rearrange the input data fed into the compressor, like the BWT does. The close coupling between the serializer and the compressor gives the compressor a sideband of extra type/context information describing the input data. The per-byte sort key can be a context ID computed by traversing the datatree, on both the compression and decompression sides.
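Here's a minimal sketch of that transform, assuming a prior datatree walk has already assigned a context ID to every input byte. A stable counting sort keeps same-context bytes in file order, so both sides can derive the identical permutation without ever transmitting it:

#include <stddef.h>
#include <string.h>

#define NUM_CONTEXTS 256

/* Stable counting sort of input bytes by per-byte context ID. */
void sort_by_context(unsigned char *pDst, const unsigned char *pSrc,
                     const unsigned char *pCtxIDs, size_t n)
{
    size_t counts[NUM_CONTEXTS], offsets[NUM_CONTEXTS], ofs = 0;
    memset(counts, 0, sizeof(counts));
    for (size_t i = 0; i < n; i++)
        counts[pCtxIDs[i]]++;
    for (size_t c = 0; c < NUM_CONTEXTS; c++)
    {
        offsets[c] = ofs;
        ofs += counts[c];
    }
    for (size_t i = 0; i < n; i++)
        pDst[offsets[pCtxIDs[i]]++] = pSrc[i];
}

The decompressor recomputes pCtxIDs from the datatree, rebuilds the same offsets table, and runs the final loop with reads and writes swapped (pDst[i] = pSrc[offsets[pCtxIDs[i]]++]) to put the bytes back in file order.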

All of these ideas make my head hurt. It's very possible we're missing something key here, but I believe that our major point (deep compressor integration with a game engine's data serializers) has value. The next steps will be to conduct some quick experiments with an existing set of compressors, and see what problems and interesting new opportunities we encounter.


Thursday, December 3, 2015

One test showing the performance of miniz vs. zlib

miniz (was here, now migrating to github here) is my single source file zlib alternative. It's a complete from-scratch reimplementation, and my 5th Deflate/Inflate implementation so far. It has an extremely fast, real-time Deflate-compatible compressor, and, for fun, the entire decompressor lives in a single C function. From this post by Tom Alexander:

miniz vs zlib

For this final test, we will use the code from the above test which is using read and only a single thread. This should be enough to compare the raw performance of miniz vs zlib by comparing our binary vs zcat.
Type                         Time (seconds)
fzcat (modified for test)    64.25435829162598
zcat                         109.0133900642395

Conclusions

So it seems that the benefit of mmap vs read isn't as significant as I expected. The benefit theoretically could be more significant on a machine with multiple processes reading the same file, but I'll leave that as an exercise for the reader.
miniz turned out to be significantly faster than zlib even when both are used in the same fashion (single threaded and read). Additionally, using the copious amounts of RAM available to machines today allowed us to speed everything up even more with threading.

Wednesday, December 2, 2015

A graph submission API for lossless data compression

Earlier today I was talking with John Brooks (CEO of Blue Shift Inc.) about my previous blog post (adding a new CompressQuery() API to lossless compressors). It's an easy API to understand and add to existing compressors, and I know it's useful, but it seems like only the first, most basic step.

For fun, let's try path finding into the future and see if we can add some more API's that expose more possibilities. How about these API's, which enable the caller to explore the solution space more deeply:

- CompressPush(): Push compressor's internal state
- CompressPop(): Pop compressor's internal state
- CompressQuery(): Determine how many bits it would take to compress a blob
- CompressContinue(): Commit some data generating some compressed output

Once we have these new API's (push/pop/query - we already have commit), we can use the compressor to explore data graphs in order to compose the smallest compressed output.
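In C, the superset streaming API might look something like this (a sketch - the signatures are my assumptions, not a shipping interface):

#include <stddef.h>

typedef struct comp_ctx comp_ctx;

/* Snapshot/restore the complete compressor state: entropy coder state,
   statistical models, dictionary position, etc. */
void CompressPush(comp_ctx *pCtx);
void CompressPop(comp_ctx *pCtx);

/* How many fractional bits would this blob cost right now? The internal
   state is not modified. */
double CompressQuery(comp_ctx *pCtx, const void *pBuf, size_t size);

/* Commit a blob for real: advances the state and emits compressed output. */
size_t CompressContinue(comp_ctx *pCtx, const void *pIn, size_t in_size,
                        void *pOut, size_t out_capacity);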

The Current Situation


Here's what we do with compressors today:



There are two classes of nodes here that represent different concepts.

The blue nodes (A, B, C, etc.) represent internal compressor states, and the black nodes represent calls to CompressContinue() with some data to compress. You put in some data by calling CompressContinue(), and the compressor moves from internal state A all the way to F; at the end you have some compressed data that will recover the data blobs input in nodes G, H, I, etc. at decompression time. Whenever the compressor moves from one blue node to the next, it hands you some compressed bits.

Now let's introduce CompressQuery()


Let's see what possibilities CompressQuery() opens up:



Now the black nodes represent "trials". In this graph, the compressor starts in state A, and we conduct three trials labeled B, C, and D using the CompressQuery() API to determine the cheapest path to take (i.e. the path with the highest compression ratio). After figuring out the cheapest way to get into compressor state E (i.e. the fewest compressed bits), we "commit" the data we want to really compress by calling CompressContinue(), which takes us into state E (and also gives us a blob of compressed data). We repeat the process for trials F, G, H, which gets the compressor into state I, etc. At Y we have fully compressed the input data stream and we're done.

CompressQuery() is a good, logical first step but it's too shallow. It's just a purely local optimization tool.

Let's go further: push/pop the compressor state


Sometimes you're going to want to explore more deeply, into a forest of trials, to find the optimal solution. You're going to need to push the current compressor state, do some experiments, then pop it to conduct more experiments. This approach could result in higher compression than purely local optimization.

Imagine something like this:



At compressor state A we first push the compressor's internal state. Now we conduct three trials (C, D, E), giving us compressor state G, etc. At L we're done, so we pop back to node A and explore the bottom forest. Once we've found the best solution, we pop back to A and commit the black nodes with the best results to get the final compressed data.
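Sketched in C against the hypothetical API above, the exploration loop looks roughly like this (a trial here is just a list of candidate data blobs):

/* Explore each trial from a common starting state; keep the cheapest. */
unsigned char scratch[65536];   /* throwaway output buffer for trials */
int    best_trial = -1;
double best_bits = 1e100;

for (int t = 0; t < num_trials; t++)
{
    CompressPush(pCtx);                     /* snapshot state A */
    double bits = 0;
    for (int s = 0; s < trials[t].num_blobs; s++)
    {
        bits += CompressQuery(pCtx, trials[t].blob[s], trials[t].size[s]);
        CompressContinue(pCtx, trials[t].blob[s], trials[t].size[s],
                         scratch, sizeof(scratch));   /* advance state */
    }
    if (bits < best_bits) { best_bits = bits; best_trial = t; }
    CompressPop(pCtx);                      /* rewind to state A */
}

/* Finally, recompress best_trial's blobs without the push/pop to emit
   the real output. */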

Conclusion


The main point of this post: Lossless data compressors don't need to be opaque black boxes fed fixed, purely linear, data streams. Tightly coupling our data generation code with the backend compressor can enable potentially much higher ratios.


Sunday, November 29, 2015

The Key Missing API in Lossless Data Compressors

There's a key streaming API missing from every lossless codec I've seen. This is the next API going into lzham_codec_devel (what will be LZHAM v1.1). This API bridges the gap between the lossless and lossy worlds, enables some other interesting use cases, and it should be easy to add to most designs.

For some background, the (previously) complete set of lossless compression library API's is:

Blocked:
CompressMemoryToMemory() - comp buffer in memory to another buffer
DecompressMemoryToMemory() - decomp buffer in memory to another buffer
GetCompressBound() - returns max possible comp size given size of data to compress

Streaming:
CreateCompressContext() - create new comp context
DestroyCompressContext() - destroy comp context
ResetCompressContext() - reset comp context, reusing allocated memory
CompressContinue() - compress some bytes from input to output buf
CompressFlush(bool end) - forcibly flush comp, generating output

CreateDecompressContext() - create new decomp context
DestroyDecompressContext() - destroy decomp context
ResetDecompressContext() - reset decomp context, reusing allocated memory
DecompressContinue() - decompress some bytes from input to output buf

The missing streaming API is:

double CompressQuery(comp_ctx *pCtx, const void *pBuf, size_t size)

This function efficiently computes the compressed size, in fractional bits (and/or integer bytes) of the specified buffer using the current compression context. Importantly, the current compression context (entropy coding state, sliding dictionary, statistical models, etc.) is not modified. 

This API basically gives you an upper bound on how many compressed bits would be added to the output given a particular input. (It's an upper bound, not exact, because the flush imposes a hard artificial LZ phrase boundary on the output.)

This API can be inefficiently emulated to some degree on streaming compressors that support flushing, except you'll have to settle for integer byte results and put up with a full recompress before each query. CompressQuery() is superior because it can give you fractional bit results, and it doesn't need to fully update its statistical models or even fully entropy code the output (it just has to compute how many bits it would output, which codecs like LZMA/LZHAM can do today because they must compute accurate "bit prices" during near-optimal parsing).
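For completeness, the flush-based emulation looks roughly like this. CloneCompressContext() and total_output_bytes() are assumptions about codec internals, not real entry points:

/* Emulate CompressQuery() via flushing: integer bytes only, and the
   candidate is fully recompressed on every call. */
size_t emulated_query(const comp_ctx *pCtx, const void *pBuf, size_t size)
{
    comp_ctx *pTmp = CloneCompressContext(pCtx); /* hypothetical deep copy */
    size_t before = total_output_bytes(pTmp);
    CompressContinue(pTmp, pBuf, size);          /* output args omitted */
    CompressFlush(pTmp, 0);                      /* force buffered output to emit */
    size_t cost = total_output_bytes(pTmp) - before;
    DestroyCompressContext(pTmp);
    return cost;                                 /* integer-byte upper bound */
}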

CompressQuery() should be implementable in any lossless compressor, not just LZ based ones. Typically, lossless compression is viewed as some black box that occurs after you've generated some data. With this API you can now intimately interact with the compression engine in order to choose the set of data that leads to higher compression.

Example ideas of what you can now do with this API:

1. Rate distortion optimized (RDO) DXTc/PVRTC/etc. compression (i.e. like crunch)

A typical DXTc block compressor evaluates hundreds to thousands of possible packed block candidates, many of them with very similar or virtually identical PSNRs. A simple RDO DXTc compressor would compute a list of candidate DXTc blocks for each input block, query the backend lossless compressor on each candidate block to determine how many bits it would add to the compressed output, then choose the encoding that strikes the best balance between coded bits and quality. The block compressor then codes (or "commits") this specific block to the compressed output stream by calling CompressContinue(), then continues to the next block and starts the process over again.
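A sketch of that per-block selection loop, using the classic rate-distortion cost J = D + lambda * R (candidate_t and block_mse() are illustrative helpers, not from crunch):

typedef struct { unsigned char bytes[8]; } candidate_t;  /* packed DXT1 block */

double block_mse(const unsigned char *pSrcPixels, const candidate_t *pCand); /* hypothetical */

/* Pick the candidate block minimizing J = D + lambda * R, where D is
   distortion (e.g. MSE vs. the source pixels) and R is the bit cost
   reported by CompressQuery(). */
int pick_rdo_block(comp_ctx *pCtx, const candidate_t *pCands, int num_cands,
                   const unsigned char *pSrcPixels, double lambda)
{
    int best = 0;
    double best_J = 1e100;
    for (int i = 0; i < num_cands; i++)
    {
        double D = block_mse(pSrcPixels, &pCands[i]);
        double R = CompressQuery(pCtx, pCands[i].bytes, sizeof(pCands[i].bytes));
        double J = D + lambda * R;
        if (J < best_J) { best_J = J; best = i; }
    }
    return best;   /* the caller then commits the winner via CompressContinue() */
}

lambda is the usual rate-distortion knob: raise it and the compressor favors cheaper blocks, lower it and quality wins.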

This is just local optimization. A more advanced version would use a dynamic programming approach to look ahead multiple blocks (like LZMA or LZHAM's parsers do) to build a graph so the best combination of blocks can be chosen that best balances compressed bits vs. PSNR.

I have already done several promising experiments in this area on DXTc textures while writing crunch. Interestingly, this approach is compatible with virtually any block based format.

2. Universal prediction engine
Honestly this usage is pretty far out and speculative. Here's one possibility, in the context of a real-time or turn based game:

Each frame, encode the position of a player character into a C-style POD fixed size data structure (let's call them records). Compress the raw record bytes by calling CompressContinue(). Simple enough.

Now here's where things get interesting. Let's say we want to determine the probability that the character will be at position X on the next frame. Evaluate the next X possible legal gamespace positions the character could be in, and encode these positions into records as usual. Now iterate through each possible legal position's record and call CompressQuery() on each record's serialized struct.

The return value is how many fractional bits are needed to encode each record given the compressor's current context. The more bits needed to encode a record, the higher the record's entropy, and the less likely (or more "surprising") the position is to the compressor. More probable (less surprising) records require fewer bits.
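Since an ideal coder assigns a symbol of probability p a code of -log2(p) bits, the queried bit counts convert directly into a normalized distribution over the candidate positions. A tiny sketch, assuming the per-candidate bit counts have been collected into an array:

#include <math.h>

/* Convert per-candidate code lengths (in bits) into probabilities:
   p_i = 2^-bits_i, normalized so they sum to 1. */
void bits_to_probs(const double *pBits, double *pProbs, int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
    {
        pProbs[i] = pow(2.0, -pBits[i]);
        total += pProbs[i];
    }
    for (int i = 0; i < n; i++)
        pProbs[i] /= total;
}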

Once the codec forms a decent model of the input records it should be able to predict the next position with (hopefully) reasonable certainty. This approach could be quite interesting given a sophisticated enough statistical modeling system and entropy coding backend. (Or not, I haven't tried it yet.)

3. bsdiff-style preprocessing for delta compression
bsdiff is an LZ-like approach for creating patch files. It consists of a command stream, a delta byte stream, and a literal byte stream, which can all be separately compressed (as bsdiff.exe does using bzip2). Importantly, there is no single "right" way of encoding a patch stream, i.e. there are many possible ways of generating fully valid patch command streams.

CompressQuery() can be called while composing the various streams in order to determine the most optimal set of commands/delta bytes/literal bytes to generate, in order to minimize the resulting compressed patch file size.

4. Optimized PNG-like lossless image compression
The typical PNG compressor adaptively chooses, for each scanline, the filter that minimizes the sum of absolute errors. A better metric: for each filter, call CompressQuery() to determine how many compressed bits would be output if that filter were selected, and choose the filter that results in the fewest bits.
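A sketch of the query-driven filter chooser; apply_png_filter() stands in for a hypothetical helper that runs one of PNG's five standard filters over a scanline:

#include <stddef.h>

void apply_png_filter(int f, const unsigned char *pLine, const unsigned char *pPrev,
                      size_t n, unsigned char *pOut);   /* hypothetical helper */

/* Choose the scanline filter that CompressQuery() says costs the fewest
   bits, instead of the usual sum-of-absolute-differences heuristic. */
int pick_scanline_filter(comp_ctx *pCtx, const unsigned char *pLine,
                         const unsigned char *pPrevLine, size_t line_bytes,
                         unsigned char *pFiltered)
{
    int best_filter = 0;
    double best_bits = 1e100;
    for (int f = 0; f < 5; f++)    /* None, Sub, Up, Average, Paeth */
    {
        apply_png_filter(f, pLine, pPrevLine, line_bytes, pFiltered);
        double bits = CompressQuery(pCtx, pFiltered, line_bytes);
        if (bits < best_bits) { best_bits = bits; best_filter = f; }
    }
    return best_filter;   /* then filter for real and commit via CompressContinue() */
}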

5. Basically any data that will be losslessly compressed that has multiple valid or usable encodings (mesh vertex data, curve fitted animation data, VQ data, etc.) could benefit from tightly coupling the data generation process with the backend lossless compressor.

Important aspects of LZHAM's design

I'm going to go through many of the major lossless codecs (LZMA, Zstd, LZ4, Deflate, bzip2, PAQ, etc.) and list the features and properties that made them unique or interesting, especially when first released. Let's start with LZHAM (yes I'm shamelessly beating my own drum here, but hey it's my tech blog). I think it's very important and interesting to understand the past.

LZHAM alpha was first released on Aug. 15, 2010 (according to Google Code), but the fast entropy decoder experiments and classes were written in early 2009 (before I joined Valve). At the time, the practical lossless data compression community didn't seem to have much focus or direction. They were kinda all over the map, and Charles Bloom's excellent reverse engineering of LZMA did not occur until after LZHAM's public release.

This codec was designed for next-generation video games, basically titles I thought would eventually be made with Source 2. Valve was awesome at allowing developers to work on open source and even commercial projects at home. The team didn't think data compression was an important thing to work on, so I decided to work on it in my spare time.

For some background, I was not able to use LZMA on Halo Wars because it was incredibly slow on X360, and Microsoft Game Studios stopped using my internal highly X360-optimized Deflate codec ("eslib") and switched to LZX. I used 7-zip on the Halo Wars build server and was very impressed with its ratio, especially in Deflate mode. I always wondered how it was able to achieve such high ratios when compressing to the old Deflate format, and I wanted to understand why.

Some of the major features it demonstrated:

- Micro-threaded compressor
Dictionary updating, match finding, and parsing all in parallel.
A lock-free approach is used to communicate between parser threads and match finder threads.
The usual approach to threading a compressor blocks up the input and sacrifices ratio, which is not necessary with the correct design.
Inspired by my experience writing the multithreaded Halo Wars engine; the lock-free stuff was inspired by experiments I saw being done on Source 2's graphics engine.

- Interleaved coding
Huffman and binary arithmetic coding interleaved into the same bitstream. The compressor batches all symbols and simulates the entropy decoding steps the decompressor will use in order to figure out how to interleave the output bitstream.

I came up with this design because I wanted a simple symbol_codec class that supported totally free-form usage of arithmetic, Huffman, and raw bits. This class was inspired by Amir Said's excellent papers and sample code. I tested it on a laptop and just kept optimizing it for higher decoding performance over a few weeks' time.

LZHAM also showed that Huffman coding still had legs in high-ratio codecs. Very low or very high probability symbols (what I called high-"skew" symbols), where Huffman's prefix coding limitations are most obvious, can use fast and simple binary arithmetic coding, while everything else can be done with static Huffman coding, with bulk table updating for adaptation. Also around this time, Andrew Polar showed it was possible to quickly update prefix codes.

- Best of X arrivals parsing (called "extreme" parsing in the code)
This was obvious after figuring out how to construct a parse graph.
Inspired by the path finding algorithms used in games.

- Other things it did that I think are important:
zlib compatible API - It's the standard "universal" lossless compression API; it makes no sense not to support it. To my knowledge, LZHAM and miniz were the first to try to copy zlib's API.
Streaming support - I question how useful this is to many developers, but you need it; otherwise you're limited to available RAM or have to use blocking, which hurts ratio.
Seed dictionaries - Occasionally valuable.
Every update was thoroughly tested before pushing the code. Random failures or crashes = the kiss of death for a new codec trying to be accepted.

For LZHAM I decided that the best way to get noticed as adding value in a very competitive space was to match LZMA's ratio as closely as possible and just move "right" (faster) on the decompression speed/ratio Pareto frontier. I purposely de-emphasized the compression speed/ratio frontier, favoring offline compression.

One critical mistake I made in the alphas was optimizing too much for the Large Text Compression Benchmark, which is 100MB of Wikipedia text. This led me down a blind alley with higher order coding experiments, which used way too many Huffman tables.


Friday, November 27, 2015

Future Directions in Lossless Compression

My current guesses on where this field could go. This is biased towards asymmetric codecs (offline compression for data distribution, not real-time compression/decompression).


Short Term


LZ4: Higher ratio using near-optimal parsing but same basic decompressor/instruction set (note I doubt the LZ4 arrow can move up as much as I've illustrated).

LZHAM: Faster decompression by breaking out literals/delta literals/matches into separate entropy coded blocks, and switching to a new entropy decoder. Other ideas: multithreaded entropy decoding, combining multiple binary symbols into single non-binary symbols.

ZSTD: Refine current implementation: stronger compressor, profile and optimize all loops.

BROTLI: Brotli's place on the decompression frontier is currently too fuzzy on the ratio axis, so an easy prediction is the compressor will get tightened up. Its current entropy coding design may have trouble expanding to the right much further. (The same situation as LZHAM. The fast entropy coding space is moving rapidly right now.)

Long Term


New Territory: A theoretical future "holy grail" codec that will obsolete most other codecs. Once this codec is on the scene, most others will be as relevant as compress. If you are working in the compression space commercially, this is where you should be heading.

Note the circle is rough. I tried to roughly match Brotli's ratio in region 4, but it could go higher to be closer to LZMA/LZHAM.

Some ideas: blocked interleaved entropy coding with SIMD optimizations, entropy decoding in parallel with decompression, near-optimal parsing with best of X arrivals, cloud compression to search through hundreds of compression options, universal preprocessing, LZMA-like instruction set/state machine, rep matches with relative distances, partial matches with compressed fixup sideband.

Until very recently I thought codecs in this region were impossible. (Now, just incredibly challenging!)

Another interesting place to target is directly to the right of region 3 (or directly above region 2). Target this spot too and you've redefined the entire frontier.

Interesting Recent Developments


- Key paper showing several very promising paths forward in the entropy decoding space

- Squash benchmark - Easily explore the various frontiers with a variety of test data and CPU's.

- Squash library - Universal compression library wrapping over 30 codecs behind a single API with streaming support.

- Zstd - Promising new codec showing interesting ways of breaking up the usual monolithic decoder loop into separate blocks (i.e. it decouples entropy decoding out from the main decoder loop)

Thursday, November 26, 2015

Quick Survey of the Lossless Decompression Pareto Frontier

I first learned about the compression "Pareto Frontier" concept on Matt Mahoney's Large Text Compression Benchmark page. Those charts are for compression throughput vs. ratio, not decompression throughput vs. ratio, which I personally find far more interesting. Simple charts like this allow engineers to judge at a glance which codec(s) they should consider for specific use cases.

This chart was generated by the Squash Benchmark (options selected: Core i5-2400, 20.61MB of tarred Samba source code). Using exclusively text data is sub-optimal for a comparison like this, but this was one of the larger files in the Squash corpus. The ugly circles are my loose categorizations (or clusterizations):




1. Speed Kings, compression and decompression throughput >= disk read rate
Examples: LZ4, Snappy
Typical properties: low memory, low ratio, block-based API, symmetrical (super fast compression/decompression)

2. High ratio, decompression throughput >= avg network read rate
Examples: LZMA, LZHAM, LZNA, Brotli
Properties: Asymmetrical (compression is kinda slow but tolerable), decompression is fast enough to occur in parallel with the network download (so it's "free"); the ideal codec here has the best ratio while staying just fast enough to overlap decompression with the download

3. Godlike ratio, decompression throughput < avg network rate
Examples: PAQ series
Properties: Needs godly amounts of RAM to match its godly ratio, extremely slow (seconds per MB), symmetrical

Possible use is for data that must be transmitted to a remote destination with lots of compute but an extremely limited network or radio link (deep space?)

Note: I made 3's circle on the ratio axis very wide because in practice I've seen PAQ's ratio tank on many binary files.

4. Intermediate ratio, decompression throughput > avg network rate, but < disk read rate
Examples: zlib, zstd
Major property: Symmetrical (fast compression and decompression), very low to reasonable amount of RAM

Outliers/wildcards that defy simple categorization:

Examples: Heatshrink for embedded CPU's, or RAM compression
Properties: Low ratio; work memory: fixed, extremely low, or none; code size: usually tiny

Observations/notes:

- brotli appears to have pushed the decompression frontier forward and endangered (obsoleted?) several codecs. It's even endangering several region 4 codecs (but its compressor isn't as fast as the other region 4 codecs)

Right now Brotli's compressor is still getting tuned, and will undoubtedly improve. It's currently weak on large binary files, and its max dictionary size is not big enough (so it's not as strong for large files/archives, and it'll never fare very well on huge file benchmarks until this is fixed). So its true position on the frontier is fuzzy, i.e. somewhat dependent on your source data.

- zstd is smack dab in the middle of region 4. If it moves right just a bit more (faster decompression) it's going to obsolete a bunch of codecs in its category.

If zstd's decompressor is sped up and it gets a stronger parser, it could be a formidable competitor.

Currently brotli is putting zstd in danger until zstd's decompressor is further optimized.

- brotli support should be added to 7-zip as a plugin. Actually, probably all the major Decompression Frontier leaders should be added to 7-zip because they all have value in different usage scenarios.

- LZHAM must move to the right on this graph or it's in trouble. Switching the literals, delta literals, and the match/len symbols over to Zstd's blocked coding scheme seems like the right path forward.

- Perhaps the "Holy Grail" in practical lossless compression is a region 1 codec with  region 2-like ratio. (Is this even possible?) Maybe a highly asymmetrical codec with a hyper-fast SIMD entropy decoder could do it.