I think this is (vaguely? roughly? I honestly don't know) similar to the new supercompressed texture algorithm that John Brooks, CEO of Blue Shift, is working on:
Instead of crunch-style top-down clusterization (VQ) on the color endpoints and selector vectors, with GPU-format-specific optimization and refinement stages, you could convert a pixel-wise codec (something like JPEG, but hopefully better!) to directly output GPU block data. In ETC1, each subblock's colorspace line lies along the grayscale axis, so the selector optimization/decision process is driven very strongly by the relative importance of luma vs. chroma error. (And it's possible to trivially convert ETC1 data into high quality DXT1, so none of this is specific to ETC1 alone.)
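To make the grayscale-axis point concrete, here's a minimal sketch of an ETC1 selector decision (my own illustration, not code from crunch or Basis; it uses modifier table 0 from the ETC1 spec, ignores clamping to [0,255], and ignores the bitstream's actual selector bit mapping):

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// ETC1 intensity modifier table 0 (from the spec); each candidate color is
// base + modifier applied equally to R, G and B, so all candidates lie on a
// line parallel to the grayscale axis.
static const int kModifiers[4] = { -8, -2, 2, 8 };

// Returns the index of the squared-error-optimal modifier for one pixel.
// Because candidates differ only along (1,1,1), minimizing full RGB squared
// error reduces to matching the average per-channel delta (a luma-like value).
int pick_selector(const uint8_t pixel[3], const uint8_t base[3])
{
    float avg_delta = ((pixel[0] - base[0]) +
                       (pixel[1] - base[1]) +
                       (pixel[2] - base[2])) / 3.0f;

    int best_sel = 0;
    float best_dist = 1e30f;
    for (int s = 0; s < 4; s++)
    {
        float d = std::fabs(avg_delta - kModifiers[s]);
        if (d < best_dist) { best_dist = d; best_sel = s; }
    }
    return best_sel;
}

int main()
{
    const uint8_t base[3]  = { 100, 110, 120 };
    const uint8_t pixel[3] = { 108, 115, 130 };
    printf("best modifier index: %d\n", pick_selector(pixel, base));
    return 0;
}
```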
The compressor can send "mode" bits to the decompression/transcoder system, chosen according to how important each piece of compressed data actually is. That importance comes down to a balance between bitrate and the desired decompression/encoding time. If something is just too expensive to do in the transcoder, do the computation offline in the encoder and send the results as arithmetically coded bits with the right contexts.
And the encoder can make all key RDO decisions keeping in mind the destination GPU format (ETC1, DXT1, ASTC, etc.) and the transcoder's strengths/weaknesses.
It's also possible to combine crunch's clusterization+refinement algorithm with this approach to handle the chroma problem in such a format: use VQ on the luma/chroma endpoint information, and a pixel-wise approach on the luma to drive the selector decision process. I can imagine so many possibilities here!
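As a rough sketch of the clusterization half of that idea, here's a plain k-means pass over per-subblock endpoint colors. It's only a stand-in for crunch's actual top-down splitting plus refinement stages; the names and structure are mine, and it assumes there are at least as many endpoints as codebook entries.

```cpp
#include <array>
#include <cstddef>
#include <vector>

using Color = std::array<float, 3>;

static float dist2(const Color& a, const Color& b)
{
    float d = 0.0f;
    for (int c = 0; c < 3; c++) { float e = a[c] - b[c]; d += e * e; }
    return d;
}

// Quantizes each endpoint color to one of `k` shared codebook entries.
// Returns the per-endpoint codebook indices; `codebook` receives the entries.
std::vector<int> cluster_endpoints(const std::vector<Color>& endpoints, int k, int iters,
                                   std::vector<Color>& codebook)
{
    codebook.resize(k);
    for (int i = 0; i < k; i++)
        codebook[i] = endpoints[(i * endpoints.size()) / k];   // crude spread init

    std::vector<int> assign(endpoints.size(), 0);
    for (int it = 0; it < iters; it++)
    {
        // Assignment step: nearest codebook entry for each endpoint.
        for (std::size_t i = 0; i < endpoints.size(); i++)
        {
            int best = 0;
            float bestd = dist2(endpoints[i], codebook[0]);
            for (int j = 1; j < k; j++)
            {
                float d = dist2(endpoints[i], codebook[j]);
                if (d < bestd) { bestd = d; best = j; }
            }
            assign[i] = best;
        }

        // Update step: move each entry to the centroid of its cluster.
        std::vector<Color> sum(k, Color{0.0f, 0.0f, 0.0f});
        std::vector<int> count(k, 0);
        for (std::size_t i = 0; i < endpoints.size(); i++)
        {
            for (int c = 0; c < 3; c++) sum[assign[i]][c] += endpoints[i][c];
            count[assign[i]]++;
        }
        for (int j = 0; j < k; j++)
            if (count[j])
                for (int c = 0; c < 3; c++) codebook[j][c] = sum[j][c] / count[j];
    }
    return assign;
}
```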
Co-owner of Binomial LLC, working on GPU texture interchange. Open source developer, graphics programmer, former video game developer. Worked previously at SpaceX (Starlink), Valve, Ensemble Studios (Microsoft), DICE Canada.
Tuesday, January 24, 2017
Basis ETC1 support progress
The front-end is written, so now all of my effort has gone into designing an efficient intermediate file format for ETC1 and a decent (but improvable) encoder to go along with it. The idea is to prioritize the development of a powerful intermediate file format first, along with a "hopefully good enough" encoder. So I've been quickly trying a bunch of different RDO-driven (mostly back-end) optimizations to find interesting ones to exploit later down the line, because once the transcoder ships I can't change it for a long time, if ever.
The next step is to add one more major feature, port it to Linux, and ship it to our first customer.
Thursday, January 19, 2017
Basis ETC1 intermediate format support is now round-tripping
We can now encode to our new compressed intermediate format for ETC1, and quickly transcode this compressed data to raw ETC1 bits. (Just like crunch's "CRN" format, but for ETC1.) This required many hundreds of compression and coding experiments, and a bunch of work, to get here. The next major step is to port it to Linux and continue optimizing/tuning the encoder. We are freezing our file format for ETC1 within the next 2-3 weeks, and beyond that we'll just be tuning the encoder for higher quality output per bit.
For the first time, the backend of any encoder I've written supports explicit selector RDO. It basically tries to balance luma noise vs. bitrate. I've discovered a large number of improvements to crunch's basic approach. Each time I get tempted to try a completely different algorithm, I find another optimization.
Here's a random example of what selector RDO looks like on kodim18. At these quality settings, Basis ETC1 intermediate format gets 1.52 bits/texel, RDO compression (with the exact same quality level) on the ETC1 .KTX file+LZMA gets 1.83 bits/texel, and plain ETC1+LZMA is 3.33 bits/texel.
Delta selector RDO Enabled: 1.52 bits/texel Luma SSIM: .867811 (plain non-RDO ETC1 Luma SSIM is .981066)
Delta selector RDO Disabled: 1.61 bits/texel Luma SSIM: .900380
RGB delta image (biased to 128):
The two textures above share the exact same endpoint/selector codebooks, the same per-macroblock mode values (flip and differential bits), and the same block color "endpoints". The delta selectors in the 2nd image were chosen to balance introduced noise vs. bitrate. I'm still tuning this algorithm but it's amazing to find improvements like this with so little code. This improvement and others like it will also be available in Basis's support for the DXT formats.
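The trade-off driving these delta selector choices can be written as a simple Lagrangian cost. Here's a hedged sketch of that general technique (my own illustration, not Basis's actual cost function or rate model): a cheaper-to-code selector replaces the distortion-optimal one whenever the bits saved, weighted by lambda, outweigh the extra luma error it introduces.

```cpp
#include <cmath>

// Estimated bits to code a symbol with probability p (ideal entropy coder).
static float est_bits(float p) { return -std::log2(p); }

// d_best/d_cand: squared luma errors of the optimal and candidate selectors.
// p_best/p_cand: estimated probabilities of their symbols in the codebook.
// lambda: rate/distortion balance (larger = favor lower bitrate).
bool prefer_candidate(float d_best, float p_best,
                      float d_cand, float p_cand, float lambda)
{
    float j_best = d_best + lambda * est_bits(p_best);
    float j_cand = d_cand + lambda * est_bits(p_cand);
    return j_cand < j_best;
}
```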
We also support an "ETC1S" mode, that doesn't support per-block tiling or 4:4:4 "individual" mode colors. It transcodes to standard ETC1 blocks. It gets 1.1 bits/texel, Luma SSIM .8616 using the same selector/endpoint codebook sizes. This mode has sometimes noticeable 4x4 block artifacts, but otherwise can perform extremely well in a rate-distortion sense vs. full ETC1 mode.
Right now ETC1S mode actually behaves better in a rate-distortion sense than full ETC1 mode, which tells me there's still a lot of work to do on the full mode's encoding.
The encoder's backend has hardly been tuned at all yet, there are some missing optimization steps (that crunch used), and the selector codebook compression stage needs to be redone. The ETC1 per-block "mode" bits are also pretty expensive right now, so I need to implement a form of explicit "mode bit" RDO that purposely biases the mode bits in the front end to achieve better compression.
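One plausible shape for that mode bit RDO (a sketch of the general idea with hypothetical names, not the actual Basis implementation) is to charge each candidate flip/differential combination an estimated bit cost derived from how often its symbol has been used so far, so near-ties fall toward the cheaper, more common modes:

```cpp
#include <cmath>

// Running counts of the four (flip, differential) mode combinations,
// Laplace-smoothed so no mode ever has zero probability.
struct ModeStats { int counts[4] = { 1, 1, 1, 1 }; int total = 4; };

static float mode_bits(const ModeStats& s, int mode)
{
    return -std::log2(float(s.counts[mode]) / float(s.total));
}

// distortion[m]: best achievable block error when coded with mode m.
// lambda: how strongly to bias toward frequently used (cheaper) modes.
int pick_mode(const float distortion[4], const ModeStats& stats, float lambda)
{
    int best = 0;
    float best_j = 1e30f;
    for (int m = 0; m < 4; m++)
    {
        float j = distortion[m] + lambda * mode_bits(stats, m);
        if (j < best_j) { best_j = j; best = m; }
    }
    return best;
}
```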
Monday, January 16, 2017
ETC1 intermediate format progress
I've been working on support for the ETC1 format in Basis. The RDO front end class is done, and I'm writing the first ETC1 encoder+transcoder tonight. ETC1 is different enough from DXT1 that a redesign/rewrite is required.
Check out these histograms of the symbols the encoder sends to the transcoder (for kodim18).
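One reason those histograms are worth staring at: the zeroth-order entropy of each symbol stream gives a quick lower bound on what an ideal entropy coder would spend per symbol. Here's a minimal sketch (my own illustration, not Basis code):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Builds a histogram of the symbol stream and returns its zeroth-order
// entropy in bits per symbol (a lower bound for an order-0 entropy coder).
double estimate_bits_per_symbol(const std::vector<uint32_t>& symbols, uint32_t alphabet_size)
{
    std::vector<uint64_t> hist(alphabet_size, 0);
    for (uint32_t s : symbols) hist[s]++;

    double entropy = 0.0;
    for (uint64_t count : hist)
    {
        if (!count) continue;
        double p = double(count) / double(symbols.size());
        entropy -= p * std::log2(p);
    }
    return entropy;
}
```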
Tuesday, January 3, 2017
The "Faster Zombies!" blog post
I'll never forget this post:
http://blogs.valvesoftware.com/linux/faster-zombies/
Gabe Newell himself wrote a lot of this post in front of me. From what I could tell, he seemed flabbergasted and annoyed that the team didn't immediately blog this info once we were solidly running faster in OpenGL vs. D3D. (Of course we should have blogged it ourselves! One of our missions as a team inside of Valve was to build a supportive community around our efforts.) From his perspective, it was big news that we were running faster on Linux vs. Windows. I personally suspect his social network didn't believe it was possible, and/or there was some deeper strategic business reason for blogging this info ASAP.
I stood behind Gabe and gave him all the data concerning GL vs. D3D performance while he typed the blog post in. I was (and still remain) extremely confident that our results were real. We conducted these tests as scientifically as we could, using two machines with the same hardware, configured in precisely the same way in the BIOSes, etc. NVidia and AMD were able to reproduce our results independently. Also, I could have easily made L4D2 on Linux GL run even faster vs. Windows, but we had other priorities like getting more Source1 games working, and helping Intel with their open source GL driver. From what I understand, Linux has some inherent advantages at the kernel level vs. Windows that impact driver performance.
Curiously and amusingly, another key developer on the team (who will remain unnamed as a professional courtesy) seemed to be terrified of being involved in this post when Gabe came over. When I stepped up to tell Gabe what to say, this developer looked totally relieved. I should have realized right there that I had a lot more influence than I suspected. Every time Gabe came over, it seemed like I was in the hot seat. Those were some challenging and amazing experiences!
A few weeks after this post went out, some very senior developers from Microsoft came by for a discreet visit. They loved our post, because it lit a fire underneath Microsoft's executives to get their act together and keep supporting Direct3D development. (Remember, at this point it was years since the last DirectX SDK release. The DirectX team was on life support.) Linux is obviously extremely influential.
It's perhaps hard to believe, but the Steam Linux effort made a significant impact inside of multiple corporations. It was a surprisingly influential project. Valve being deeply involved with Linux also gives the company a "worst case scenario" hedge vs. Microsoft. It's like a club held over MS's head. They just need to keep spending the resources to keep their in-house Linux expertise in a healthy state.
I look back now and see that I was totally not ready to handle the stress involved in this project. My ability and experience managing stress were very limited back then. I took on the lion's share of the graphics problems in the Source1 engine releases, had to interact with the external driver teams and their managers/execs, helped and presented to external game developers about how to get good performance out of OpenGL, dodged patent attacks on my open source software, survived the rage of one corporation as I (naively!) helped their competitors implement support for a public GL extension, and also tried to survive the mass layoffs of 2013.
Unfortunately, these layoffs did impact the Linux team's morale and development efficiency. Even so, I did have a great time working at Valve overall. It was the most challenging and educational experience I've ever had. I highly doubt I could have gotten this kind of life changing experience at any other company.
I love ebikes
Here's me ebiking to work at Boss Fight Entertainment in McKinney, TX, on a custom ebike built from parts acquired from ebikes.ca. In this video, I was only running at around 40V/20A, and the rest of the power was coming from pedaling (me!). I eventually pushed the controller to its limit (100V), and switched from A123 LiFe cells to the much more "touchy" LiPo chemistry.
Dallas turned out to be a wonderfully ebike friendly area. There are endless miles and miles of golf cart trails, bike/walking trails, and virtually unused streets all over the Dallas metroplex. Also, I was almost invisible in Dallas as an ebiker. (Which is both good, and bad.)
Pic of Matt Pritchard's shipped title award
Pic taken during a late night debugging session. (I have one just like it.) This thing is so well done and high quality.
Ensemble Studios (Microsoft) was such a classy outfit. No other company I've worked at treated its employees as well as Ensemble's did (including Valve).