Developers aren't getting the alpha quality they could be getting if they had better BC7 codecs. I noticed while working on our new non-RDO BC7 codec that existing BC7 codecs don't handle textures with decorrelated alpha signals well. They wind up trashing the alpha channel when the A signal doesn't resemble the signal in RGB. I didn't have time to investigate the issue until now. I'm guessing most developers either don't care, or they use simple (correlated) alpha channels, or multiple textures.
Some codecs allow the user to specify individual RGBA channel weightings (ispc_texcomp isn't one of them). This doesn't work well in practice (you have to weight A so highly that RGB gets trashed), and users rarely fiddle with the weightings anyway.
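To make the trade-off concrete, here's the kind of weighted per-block error metric these codecs typically minimize. This is a generic sketch of the idea, not ispc_texcomp's (or any other specific codec's) actual code:

#include <cstdint>

// Weighted squared error over a 4x4 block, 4 channels per pixel.
// Because one set of endpoints/selectors has to serve all four channels at once,
// boosting w[3] (alpha) enough to protect a decorrelated A signal inevitably
// steals precision from RGB.
static uint64_t weighted_block_error(const uint8_t orig[16][4], const uint8_t packed[16][4], const uint32_t w[4])
{
    uint64_t total_err = 0;
    for (int p = 0; p < 16; p++)
    {
        for (int c = 0; c < 4; c++)
        {
            const int d = (int)orig[p][c] - (int)packed[p][c];
            total_err += (uint64_t)w[c] * (uint32_t)(d * d);
        }
    }
    return total_err;
}

With weights like { 1, 1, 1, 8 }, the encoder chases alpha at the direct expense of RGB, which is exactly the failure mode described above.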
Here's an example using a well-known CPU BC7 codec:
RGB image: kodim18
Alpha image: kodim17
Encoded using Intel's ispc_texcomp to BC7 profile alpha_slow:
RGB Average Error: Max: 40, Mean: 1.868, MSE: 7.456, RMSE: 2.731, PSNR: 39.406
Luma Error: Max: 26, Mean: 1.334, MSE: 3.754, RMSE: 1.938, PSNR: 42.386
Alpha Error: Max: 36, Mean: 1.932, MSE: 7.572, RMSE: 2.752, PSNR: 39.339
Encoded RGB:
Encoded A:
Experimental RDO BC7 codec (quantization disabled) with support for decorrelated alpha. Uses only modes 4, 6, and 7:
M4: 72.432457%, M6: 17.028809%, M7: 10.538737%
RGB Average Error: Max: 65, Mean: 2.031, MSE: 8.871, RMSE: 2.978, PSNR: 38.651
Luma Error: Max: 34, Mean: 1.502, MSE: 4.887, RMSE: 2.211, PSNR: 41.241
Alpha Error: Max: 29, Mean: 1.601, MSE: 5.703, RMSE: 2.388, PSNR: 40.570
Encoded RGB:
Encoded A:
Zoomed in comparison:
This experimental codec separately measures per-block RGB average and alpha PSNR. It prefers mode 4, and switches to modes 6 or 7 using this logic:
const float M7_RGB_THRESHOLD = 1.0f;
const float M7_A_THRESHOLD = 40.0f;
const float M7_A_DERATING = 12.0f;

const float M6_RGB_THRESHOLD = 1.0f;
const float M6_A_THRESHOLD = 40.0f;
const float M6_A_DERATING = 7.0f;

// Prefer mode 6 when its RGB quality beats both mode 4's and mode 7's by a
// threshold, as long as alpha stays above a floor and isn't derated too much.
if ((m6_rgb_psnr > (math::maximum(m4_rgb_psnr, m7_rgb_psnr) + M6_RGB_THRESHOLD)) &&
    (m6_a_psnr > M6_A_THRESHOLD) &&
    (m6_a_psnr > (math::maximum(m4_a_psnr, m7_a_psnr) - M6_A_DERATING)))
{
    block_modes[block_index] = 6;
}
// Otherwise consider mode 7 against mode 4 under the same kind of constraints.
else if ((m7_rgb_psnr > (m4_rgb_psnr + M7_RGB_THRESHOLD)) &&
    (m7_a_psnr > (m4_a_psnr - M7_A_DERATING)) &&
    (m7_a_psnr > M7_A_THRESHOLD))
{
    block_modes[block_index] = 7;
}
else
{
    block_modes[block_index] = 4;
}
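For reference, the per-block PSNR values used above can be derived from per-block MSE in the usual way. A minimal sketch (my own helper, not necessarily the codec's exact code):

#include <cmath>

// Per-block PSNR in dB from the block's mean squared error (averaged over the
// block's 16 pixels and the channels of interest). Exact blocks clamp to 100 dB.
static float block_psnr(double mse)
{
    if (mse <= 0.0)
        return 100.0f;
    return (float)(10.0 * log10((255.0 * 255.0) / mse));
}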
What this basically does: the encoder only switches to mode 6 or 7 when that mode's RGB PSNR beats mode 4's (and, for mode 6, also mode 7's) by a 1 dB threshold. But it only does so if alpha isn't degraded too much relative to the alternatives (a 7 dB derating for mode 6, 12 dB for mode 7), and if alpha quality stays above a minimum threshold (40 dB).
PSNR doesn't really capture the extra distortion being introduced here. It can help to load the alpha images into Photoshop or whatever and compare them closely as layers.
RGB PSNR went down a little with my experimental RDO BC7 codec, but alpha went up. Visually, alpha is greatly improved. My ispc non-RDO BC7 code currently has the same issue as Intel's ispc_texcomp with decorrelated alpha.
Monday, October 29, 2018
Tech company patterns
First off, I'm not finished with this blog post, so don't take it too seriously. It's not about any one company, some of these patterns aren't even mine, and it's incomplete - there are definitely more patterns. I do strongly believe you can lump most tech companies into a number of general categories, because these structures are good for business. There are good companies too.
A company can be a blend of multiple patterns. For each pattern's attributes I'm using examples from real-life companies I (or friends/ex-coworkers) have seen over the decades. The majority were inspired by multiple companies.
When you interview or interact with a company, it's a good idea to figure out what makes the company tick. Knowing patterns like these can help. Once you get some experience with this, you can figure out fairly quickly how a company is actually run and structured. I've shown this post to a few tech business people and they were like "Yup, I know of several companies that fit each category. This is obvious stuff and spot on."
1. The Famous Company Pattern
Massive subsidies from digital distribution or a large popular engine.
Due to intense developer marketing efforts this is a company everyone yearns to work for.
It doesn't matter how bad the company's working conditions are rumored to be. Developers still yearn to someday work for the famous company as a career goal.
Let them call you - *always*. Can be dehumanizing and super abusive.
The company doesn't really need you unless you work on the core product that makes tons of cash. They may be "collecting" employees to keep them away from competitors. Or, they may be hiring employees as a way to give the wealthy insiders something interesting to do.
Results don't really matter at all because the company is going to make money anyway.
The less results matter, the more you should expect things such as: Soviet-like purges. Insane, power mad staff. No accountability. Toxic teams.
Possibly lots of crunching, micromanagement.
May have a cult-like pseudo-leader/godhead used for PR/marketing.
2. The Megacorp Pattern
Massive Dilbert-like internal politics between and within groups. Can be decent if you find the right group that you fit into.
Results only matter due to internal politics and constant reorgs/layoffs, not due to any intrinsic need for profits.
Great for those who want a somewhat stable lifestyle, if you can tolerate the politics and culture.
Workers turn the megacorp into their corporate tribe and absolutely obsess over internal politics at virtually all times. (If you go to a bar and hang out with employees from megacorps, and all they talk about is company politics, well there you go. It becomes their obsession.)
Culture can appear absolutely batshit insane from the outside (some megacorps can be very insular).
May have illegally colluded with other megacorps to not hire each other's employees to keep wages and horizontal movement between firms down.
Company may be full of fiefdoms. Lots of shenanigans to protect skilled workers from the insane review system and yearly stack ranking.
Company may strategically spread rumors and PR that the "Company has Changed" and things are so much better now. This is possible, but be wary.
Insiders who get fired sometimes get massive payouts from other insiders on their way out.
3. The Acquired Company Pattern
Firm acquired by a large megacorp.
The key dynamic is how well the company and its management integrate into the new owner.
The former insiders/founders become mid-level management at the megacorp.
Firm may try to keep its unique identity and culture but usually fails.
Resistance is futile: Either the firm ultimately becomes fully absorbed into the megacorp or it will be shut down.
Don't join until you figure out what the megacorp actually thinks of the acquired company and its mid-level managers.
The former owners will be super tight.
The relationship between the former owners and corporate can turn into "us vs. them", which isn't healthy for studio stability.
Be wary if the acquired studio is cooking the books and has secret passion projects going in the background (like with pattern 8). If the studio is lying to corporate about who is working on what they'll eventually figure this out and heads will roll. The larger the secret projects, the more danger the studio is in.
If the acquired company is geographically far away from the corporate mothership, be very wary. If the mothership becomes unhappy with the former owners (now mid-level management) the acquired company will be laid off.
Company morale can drop over time once the company is acquired and the workers collectively realize that they now work for a faceless megacorp. This can lead to the Phoenix Pattern, below.
Insiders at the acquired company can become insiders at the new megacorp.
4. The Legendary Company Pattern
Products are legendary and set the bar.
There are two types of employees: The "Old Guard" that worked on the earlier legendary products, and everyone else.
Can be a very good choice if you can fit in and get shit done.
Don't expect to become an insider anytime soon, if ever. Only old guard workers can be insiders.
5. The Silicon Valley Startup Pattern
Founders and investors get in bed with each other.
Founders can appear absolutely batshit on social media, during public presentations etc.
CEO and closest insiders can be very tight knit. They will cover each other's asses.
For the gamblers. The earlier you get in, the higher the probability you'll get good stock.
Non-savvy founders eventually get pushed out or lose power.
These startups come and go constantly, so if the one you work for (almost inevitably) goes bust, just move your desk one block away.
If the startup is actually located in Silicon Valley: Employees may walk at the slightest issue and take a job (along with all your company know-how/IP/experience/etc.) at the megacorp next door. The company is only able to recruit the talent that isn't already working for the megacorp (or fully funded startup) up the street.
Talk to ex-employees before joining. If they had to sign NDAs or got threatened if they talked, avoid. If the company has lots of turnover, avoid.
6. The Self-Funded Startup Pattern
Formed by a small, passionate group of insiders wanting to recapture past glories or just be independent.
Can be good, but don't expect it to last when the insiders break up.
Founders can be super passionate about their project and will continue investing in it even after it becomes obvious to everyone else that it's never going to make a dime.
These startups have lifespans of a few years or so unless they have a big success.
Commonly seen in combination with patterns 7, 8, 9.
Investigate the backgrounds of the owners and obviously avoid if there are any red flags: multiple lawsuits, scams, ex-employees forced to sign NDAs, etc.
7. Single Publisher/Throw-Away Sweatshop Pattern
Beholden to a single publisher or customer. The primary (and only) customer is abusive, and the company puts up with it because it has no choice.
You will be treated like dogs. Crunch is expected.
Founders think they are making good money, but because they go for long periods without any income while still working, they actually aren't.
Company has zero leverage with its publisher because it doesn't have any alternatives.
Can be OK if you work there hourly, but avoid full-time contracts because you will be crunched to death and treated badly.
Darth Vader-like publisher will break all the rules, recruit your best staff, make changes to your team or contract, etc. at will - because it can.
Unlike the Multiple Publisher Pattern, you will be interacting with the publisher's employees and they will treat you like shit.
If the firm is bought out, the insiders will become mid-level managers at the new company (pattern 3). As the company has no leverage, no alternatives, and no IP of its own, it'll be more like a mass "acqui-hire". If you aren't a company insider near the top before the buyout, don't expect to earn much from it.
Several healthier variants of this pattern are possible. The key dynamics are the relationship with the single publisher and the team's talent and fame. Another single-publisher variant is possible where the team is just so overwhelmingly famous that it can choose virtually any publisher it wants.
8. Multiple Publisher Pattern
Company keeps multiple products in the pipeline with multiple publishers in an attempt to spread around risk and give the company some negotiation leverage.
Firm tends to lie to each publisher about who is working on what. (Publishers know this, too.)
Publishers are kept at arm's length and generally aren't allowed to interact with employees - always through managers. This is to prevent the flow of too much company information back to the publisher.
May have secret independent passion projects in the background, covertly funded with publisher money.
Fragile: if one team fails, the company is in trouble and layoffs are likely. If two or more teams fail, the company is toast.
Always talk to all the teams in the firm to build a picture of how healthy each product is.
Can be great places to work as long as you realize it probably won't last unless one or more products hit big or the company is bought out.
If the firm is bought out by a publisher the company switches to pattern 3 as the insider owners become mid-level managers at the new company.
9. The Phoenix/Small Town Pattern
Company formed after mass layoff or some other type of company trauma (a purge, or low morale after a megacorp acquisition).
Two groups: Insiders and Outsiders. Insiders are *tight*. Outsiders will never become insiders - new insiders will always be brought in and ordained as management.
Eventual Buyout Mentality: You will be constantly told that the company will eventually be sold and you'll become rich off your stock - just like last time.
Local shadowy investors prop the company up during hard times.
Stock is absolutely, totally worthless unless the Insiders love you during a buyout.
If you piss the insiders off but are still valuable, they will mess with your stock during the buyout to shortchange you.
Unstable until established. Buyout may never actually happen.
Small-town environment may make the company somewhat shady. Horizontal movement between tech companies in the same small town is virtually impossible due to illegal collusion between companies to not compete over employees.
If the company folds, a new company will be formed, sometimes literally across the street, and the best laid-off employees will be instantly hired. They'll be handed some fake stock and told they'll be wealthy someday once the new company is sold. (Right.)
The company actually exists to make the insiders wealthy and to give the upper management a decent lifestyle. Everyone else is expendable.
10. Wealthy Dictator Pattern
No "Insiders": There's the dictator-like owner, upper management, and everyone else.
Always meet and interview with the owner first. Avoid if they give you the creeps or a bad vibe.
Company is an expression of the owner's weird development philosophies. It's basically the owner's hobby or side company.
Best avoided if in a small city/town.
Check the background of the owner and figure out where their funding came from. If they are scam artists, have lots of ex-employees suing them, or have otherwise shady backgrounds, obviously avoid.
11. The World Domination Pattern
This large decentralized organization pattern was designed - not evolved. It follows a well-thought-out template and a plan.
The company controls an engine or a software product used by a large ecosystem of content creators.
At war with competing engine companies, which it absolutely hates.
Funded with large amounts of investor capital and through support contracts with large firms.
Massive, sprawling corporation consisting of multiple smaller firms spread over the entire globe.
The engine company workers actually wind up secretly hating the developers who use their engine.
Joining as a single developer gets you nowhere. It's best to be acquired by the firm as a small group and given your own office. The company actively looks for these small groups to hire/acquire.
Can be a good gig in the right city but don't expect to get anywhere. It's just a job on a large sprawling piece of engine software nobody fully understands anymore.
Company can employ talent virtually anywhere on the globe.
It's hinted in whispers that the eventual buyout will make everyone wealthy. (Right.)
Workers generally treated like crap. Contractors (especially in Eastern Europe or Russia) are massively underpaid and undervalued.
Company has a firm process and procedure for doing things and that's it.
Upper management layer is cult-like and very tight.
Each office has its own strange brand of small town-esque politics and culture.
12. The Master Psychological Manipulator Pattern
The owner has graduated from writing code to Programming Programmers. Owner is a master psychological manipulator. He locks in employees by doing things like co-signing their mortgages.
Possibly combined with pattern 10 (wealthy dictator).
Employees are afraid of the owner, and afraid of what happens if they leave.
You will be so well manipulated by companies following this pattern that everything will feel amazing and alright until the trap is sprung and you're in so deep you're afraid to leave.
The firm is constantly on the lookout for key "10x" engineers who can keep the product(s) functioning.
The recruiting process is scary to watch. The owner will get totally into the head of the new recruit and pave the way for them to enter the company as easily as possible. The owner can switch into Recruiting Mode and back on a dime.
Somehow the firm is actually not profitable and has almost gone under, but was bailed out by friends from other companies pumping in cash for strategic reasons.
The owner and his closest insider friends have their own strange subculture. It can be almost impossible to comprehend them while listening to them talk to each other.
The owner/founder is reclusive and rarely comes into the office, which stresses the employees due to the leadership vacuum. Alternately, you are so micromanaged and watched you can't breathe.
Special individual agreements are secretly struck with each employee. Some employees are paid massively more than others.
The firm's software isn't very good, but it has good marketing and appears stable from the outside.
Like other companies listed above, the owner appears to be absolutely batshit if you actually listen to them. Probably technically disconnected because they don't code themselves anymore. They are unable to run projects with more than a small handful of programmers at a time because their primary skill is manipulation of individuals, not project management.
Not recommended unless you're game for being psychologically profiled and manipulated.
Tuesday, July 24, 2018
This is why we're working on Basis.
Here's a very interesting graph of game install/on-device sizes from The Cost of Games:
This is a *log* graph. Notice the overall trend. Most of this data is texture data.
And so this is why our product is so valuable.
Thursday, July 12, 2018
A little ETC1S history
I've been talking about ETC1S for several years. I removed some of my earlier posts (to prevent others from stealing our work - which does happen) but they are here:
https://web.archive.org/web/20160913170247/http://richg42.blogspot.com/
We also covered our work with ETC1S and a universal texture format at CppCon 2016:
Just in case there's any confusion, we shipped our first ETC1S encoder to Netflix early last year, and developed all the universal stuff from 2016-early 2018.
Sunday, July 8, 2018
Basis status update
I sent this as a reply to someone by email, but it makes a good blog post too. Here's what Basis does today right now (i.e. this is what we ship for OSX/Windows/Linux):
1. RDO BC1-5: Like crunch's, but slower and with higher quality/smaller files (supports up to 32K codebooks and LZ-specific RDO optimizations - crunch is limited to 8K codebooks and has no LZ RDO).
This competes against crunch's original BC1-5 RDO solution, which is extremely fast (I wrote it for max speed) but lower quality at the same bitrate. The decrease in bitrate at the same quality depends completely on the content and the LZ codec you use, but it can be as high as 20% according to one large customer. On the other hand, for some textures it'll only be a few percent.
crunch's RDO is limited to 8K codebooks, so Basis can be used where crunch cannot due to quality concerns.
Some teams prefer fast encoding at lower quality, and some prefer higher quality (especially on normal maps) at lower speed. We basically gave away the lower quality option in crunch.
2. RDO ETC1: Up to 32K codebooks, no LZ-specific RDO optimizations yet.
Crunch doesn't support ETC1 RDO.
You could compress to ETC1 .CRN, then unpack that to .KTX, to give you a "poor man's" equivalent to direct ETC1 RDO, but you'll still be limited to 8K codebooks max (which is too low quality for many normal maps and some albedo textures).
3. .basis: universal (supports transcoding to BC1-5, BC7, PVRTC1 4bpp opaque, ETC1, more formats on the way)
crunch doesn't support this.
We provide all of the C++ decoder/transcoder source code, which builds using emscripten.
.basis started as a custom ETC1 system we wrote for Netflix, then I figured out how to make it universal. Note that I recently open sourced the key ETC1S->BC1 transcoding technique in crunch (to help the Khronos universal GPU texture effort along by giving them the key info they needed to implement their own solution):
4. Non-RDO BC7: superior to ispc_texcomp's. Written in ispc.
I'm currently working on RDO BC7 and better support for PVRTC. We are building a portfolio of encoders for all the formats, as fast as we can. We're going to keep adding encoders over the next few years.
Our intention is not to compete against crunch (that's commercial suicide). I put a ton of value into crunch, and after Alexander optimized .CRN further, its value went through the roof. A bunch of large teams are using it on commercial products because it works so well.
Sunday, June 17, 2018
PVRTC encoding examples
This is "testpat.png", which I got somewhere on the web. It's a surprisingly tricky image to encode to PVRTC. The gradients, various patterns, the transitions between these regions and even the constant-color areas are hard to handle in PVRTC. (Sorry, there is lena in there. I will change this to something else eventually.)
Note my encoder used clamp addressing for both encoding and decoding but PVRTexTool used wrap (not that it matters with this image). Here's the .pvr file for testpat.
Here's delorean (resampled to .25 original size):
Interestingly, on delorean you can see that PVRTC's handling of smooth gradients is clearly superior vs. BC1 with a strong encoder.
Here's xmen_1024:
"Y" is REC 709 Luma, SSIM was computed using OpenCV. The images marked "BC1" were compressed using crunch (uber quality, perceptual mode), which is a bit better than AMD Compressonator's output.
Tuesday, June 12, 2018
Real-time PVRTC encoding for a universal GPU texture format system
Here's one way to support PVRTC in a universal GPU texture format system that transcodes from a block based format like ETC1S.
First, study this PVRTC code:
https://bitbucket.org/jthlim/pvrtccompressor/src/default/PvrTcEncoder.cpp
Unfortunately, this library has several key bugs, but its core texture encoding approach is sound for real-time use.
Don't use its decompressor; it's not bit accurate vs. the GPU and doesn't unpack alpha properly. Use this "official" decoder as a reference instead:
https://github.com/google/swiftshader/blob/master/third_party/PowerVR_SDK/Tools/PVRTDecompress.h
Function EncodeRgb4Bpp() has two passes:
1. The first pass computes RGB(A) bounding boxes for each 4x4 block:
for(int y = 0; y < blocks; ++y)
{
    for(int x = 0; x < blocks; ++x)
    {
        ColorRgbBoundingBox cbb;
        CalculateBoundingBox(cbb, bitmap, x, y);

        PvrTcPacket* packet = packets + GetMortonNumber(x, y);
        packet->usePunchthroughAlpha = 0;
        packet->SetColorA(cbb.min);
        packet->SetColorB(cbb.max);
    }
}
Most importantly, SetColorA() must floor and SetColorB() must ceil. Note that the alpha version of the code in this library (function EncodeRgba4Bpp()) is very wrong: it assumes alpha 7=255, which is incorrect (it's actually (7*2)*255/15 or 238).
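For reference, here's the endpoint alpha expansion that math implies, as a small sketch (my own illustration of standard PVRTC behavior, not code from the library):

// A 3-bit PVRTC endpoint alpha expands to 4 bits with a left shift (the low bit
// stays 0), then to 8 bits by replicating the 4-bit value. So alpha 7 -> 14 ->
// 14 * 17 = 238, i.e. (7*2)*255/15, not 255.
static inline unsigned pvrtc_alpha3_to_8(unsigned a3)
{
    const unsigned a4 = a3 << 1; // 3 -> 4 bits; 7 becomes 14
    return a4 * 17;              // 4 -> 8 bits ((a4 << 4) | a4); 14 becomes 238
}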
This pass can be done while decoding ETC1S blocks during transcoding. The endpoint/modulation values need to be saved to a temporary buffer.
It's possible to swap the low and high endpoints and get an encoding that results in less error (I believe because the endpoint encoding precision of blue isn't symmetrical - it's 4/5 not 5/5), but you have to encode the image twice so it doesn't seem worth the trouble.
2. Now that the per-block endpoints are computed, you can compute the per-pixel modulation values. This function is quite optimizable without requiring vector code (which doesn't work on the Web yet):
for(int y = 0; y < blocks; ++y)
{
    for(int x = 0; x < blocks; ++x)
    {
        const unsigned char (*factor)[4] = PvrTcPacket::BILINEAR_FACTORS;
        const ColorRgba<unsigned char>* data = bitmap.GetData() + y * 4 * size + x * 4;

        uint32_t modulationData = 0;

        for(int py = 0; py < 4; ++py)
        {
            const int yOffset = (py < 2) ? -1 : 0;
            const int y0 = (y + yOffset) & blockMask;
            const int y1 = (y0 + 1) & blockMask;

            for(int px = 0; px < 4; ++px)
            {
                const int xOffset = (px < 2) ? -1 : 0;
                const int x0 = (x + xOffset) & blockMask;
                const int x1 = (x0 + 1) & blockMask;

                const PvrTcPacket* p0 = packets + GetMortonNumber(x0, y0);
                const PvrTcPacket* p1 = packets + GetMortonNumber(x1, y0);
                const PvrTcPacket* p2 = packets + GetMortonNumber(x0, y1);
                const PvrTcPacket* p3 = packets + GetMortonNumber(x1, y1);

                ColorRgb<int> ca = p0->GetColorRgbA() * (*factor)[0] +
                                   p1->GetColorRgbA() * (*factor)[1] +
                                   p2->GetColorRgbA() * (*factor)[2] +
                                   p3->GetColorRgbA() * (*factor)[3];

                ColorRgb<int> cb = p0->GetColorRgbB() * (*factor)[0] +
                                   p1->GetColorRgbB() * (*factor)[1] +
                                   p2->GetColorRgbB() * (*factor)[2] +
                                   p3->GetColorRgbB() * (*factor)[3];

                const ColorRgb<unsigned char>& pixel = data[py * size + px];
                ColorRgb<int> d = cb - ca;
                ColorRgb<int> p{pixel.r * 16, pixel.g * 16, pixel.b * 16};
                ColorRgb<int> v = p - ca;

                // PVRTC uses weightings of 0, 3/8, 5/8 and 1
                // The boundaries for these are 3/16, 1/2 (=8/16), 13/16
                int projection = (v % d) * 16;   // operator% is a dot product in this library
                int lengthSquared = d % d;

                if(projection > 3 * lengthSquared) modulationData++;
                if(projection > 8 * lengthSquared) modulationData++;
                if(projection > 13 * lengthSquared) modulationData++;

                modulationData = BitUtility::RotateRight(modulationData, 2);
                factor++;
            }
        }

        PvrTcPacket* packet = packets + GetMortonNumber(x, y);
        packet->modulationData = modulationData;
    }
}
The code above interpolates the endpoints in full RGB(A) space, which isn't necessary. You can sum each channel into a single value (like Luma, but just R+G+B), interpolate that instead (much faster in scalar code), then decide which modulation values to use in 1D space. Also, you can unroll the innermost px/py loops using macros or whatever.
Encoding from ETC1S simplifies things somewhat because, for each block, you can precompute the R+G+B values to use for each of the 4 possible input selectors.
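Here's a sketch of that scalar variant, keeping the same 16x fixed-point scaling as the RGB loop above. This is my own illustration; ca_sum, cb_sum and pixel_sum are assumed to be the summed-channel (R+G+B) analogs of ca, cb and p:

#include <cstdint>

// 1D modulation selection: project the pixel's R+G+B sum onto the axis between
// the bilinearly interpolated endpoint sums. For ETC1S input, pixel_sum can be
// one of 4 per-block precomputed values (one per selector). The products can
// exceed 32 bits here (sums are 3x larger than a single channel), hence int64_t.
static inline uint32_t pick_modulation_1d(int ca_sum, int cb_sum, int pixel_sum)
{
    const int d = cb_sum - ca_sum;
    const int v = pixel_sum * 16 - ca_sum;

    const int64_t projection = (int64_t)v * d * 16;
    const int64_t lengthSquared = (int64_t)d * d;

    // Same 3/16, 8/16, 13/16 boundaries as the RGB version
    uint32_t m = 0;
    if (projection > 3 * lengthSquared) m++;
    if (projection > 8 * lengthSquared) m++;
    if (projection > 13 * lengthSquared) m++;
    return m; // 0-3, i.e. PVRTC weights 0, 3/8, 5/8, 1
}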
That's basically it. If you combine this post with my previous one, you've got a nice real-time PVRTC encoder usable in WebAssembly/asm.js (i.e. it doesn't need vector ops to be fast). Quality is surprisingly good for a real-time encoder, especially if you add the optional 3rd pass described in my other post.
Opaque is tougher to handle, but the basic concepts are the same.
The encoder in this library doesn't support punch-through alpha, which is quite valuable and easy to encode in my testing.
Monday, June 11, 2018
Lookup table based real-time PVRTC encoding
I've found a table-based method of improving the output from a real-time PVRTC encoder. Fast real-time encoders first find the RGB(A) bounds of each 4x4 block to determine the block endpoints, then they evaluate the interpolated endpoints at each pixel to determine the modulation values which minimize the encoded error. This works okay, but the results are barely acceptable in practice due to banding artifacts on smooth features.
One way to improve the output of this process is to precompute, for all 256 possible 8-bit component values, the best PVRTC low/high endpoints to use to encode that value, assuming the modulation values in the 7x7 pixel region are all 1's or all 2's (or, more generally, all 0's, 1's, 2's, or 3's):
// Tables containing the 5-bit/5-bit L/H endpoints to use for each 8-bit value
static uint g_pvrtc_opt55_e1[256];
static uint g_pvrtc_opt55_e2[256];

// Tables containing the 5-bit/4-bit L/H endpoints to use for each 8-bit value
static uint g_pvrtc_opt54_e1[256];
static uint g_pvrtc_opt54_e2[256];

const int T = 120;

for (uint c = 0; c < 256; c++)
{
    uint best_err1 = UINT_MAX;
    uint best_l1 = 0, best_h1 = 0;
    uint best_err2 = UINT_MAX;
    uint best_l2 = 0, best_h2 = 0;

    for (uint l = 0; l < 32; l++)
    {
        const int lv = (l << 3) | (l >> 2);

        for (uint h = 0; h < 32; h++)
        {
            const int hv = (h << 3) | (h >> 2);

            if (lv > hv)
                continue;

            int delta = hv - lv;

            // Avoid endpoints that are too far apart to reduce artifacts
            if (delta > T)
                continue;

            uint e1 = (lv * 5 + hv * 3) / 8;   // interpolated value for modulation 1 (weight 3/8)
            int diff1 = math::iabs(c - e1);
            if (diff1 < best_err1)
            {
                best_err1 = diff1;
                best_l1 = l;
                best_h1 = h;
            }

            uint e2 = (lv * 3 + hv * 5) / 8;   // interpolated value for modulation 2 (weight 5/8)
            int diff2 = math::iabs(c - e2);
            if (diff2 < best_err2)
            {
                best_err2 = diff2;
                best_l2 = l;
                best_h2 = h;
            }
        }
    }

    g_pvrtc_opt55_e1[c] = best_l1 | (best_h1 << 8);
    g_pvrtc_opt55_e2[c] = best_l2 | (best_h2 << 8);
}

// 5-bit/4-bit loop is similar
Now that you have these tables, you can loop through all the 4x4 pixel blocks in the PVRTC texture and compute the 7x7 average RGB color surrounding each block (it's 7x7 pixels because you want the average of all colors influenced by each block's endpoint accounting for bilinear endpoint interpolation). You can look up the optimal endpoints to use for each component, set the block's endpoints to those trial endpoints, find the best modulation values for the impacted 7x7 pixels, and see if the error is reduced or not. The overall error is reduced on smooth blocks very often. You can try this process several times for each block using different precomputed tables.
For even more quality, you can also use precomputed tables for modulation values 0 and 3. You can also use two dimensional tables [256][256] that have the optimal endpoints to use for two colors, then quantize each 7x7 pixel area to 2 colors (using a few Lloyd algorithm iterations) and try those endpoints too. 2D tables result in higher quality high contrast transitions.
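The 2-color quantization step can be as simple as a couple of Lloyd iterations over the 49 influenced pixels. A minimal sketch (my own, assuming float RGB working values, with c0/c1 seeded from something like the darkest and brightest pixels):

struct f3 { float r, g, b; };

static inline float dist2(const f3 &a, const f3 &b)
{
    const float dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
    return dr * dr + dg * dg + db * db;
}

// 2-means (Lloyd): assign each pixel to the nearer center, then move each center
// to the mean of its cluster. 2-4 iterations is plenty for a 7x7 neighborhood.
static void lloyd_2means(const f3 px[49], f3 &c0, f3 &c1)
{
    for (int iter = 0; iter < 4; iter++)
    {
        f3 sum0 = { 0, 0, 0 }, sum1 = { 0, 0, 0 };
        int n0 = 0, n1 = 0;

        for (int i = 0; i < 49; i++)
        {
            if (dist2(px[i], c0) <= dist2(px[i], c1))
            {
                sum0.r += px[i].r; sum0.g += px[i].g; sum0.b += px[i].b; n0++;
            }
            else
            {
                sum1.r += px[i].r; sum1.g += px[i].g; sum1.b += px[i].b; n1++;
            }
        }

        if (n0) c0 = { sum0.r / n0, sum0.g / n0, sum0.b / n0 };
        if (n1) c1 = { sum1.r / n1, sum1.g / n1, sum1.b / n1 };
    }
}

The two resulting colors then index the [256][256] per-component 2D tables.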
Here's some pseudocode showing how to use the tables for a single modulation value (you can apply this process multiple times for the other tables):
// Compute average color of all pixels influenced by this endpoint
vec4F c_avg(0);

for (int y = 0; y < 7; y++)
{
    const uint py = wrap_or_clamp_y(by * 4 + y - 1);

    for (uint x = 0; x < 7; x++)
    {
        const uint px = wrap_or_clamp_x(bx * 4 + x - 1);

        const color_quad_u8 &c = orig_img(px, py);

        c_avg[0] += c[0];
        c_avg[1] += c[1];
        c_avg[2] += c[2];
        c_avg[3] += c[3];
    }
}

// Divide the sum through by the 49 contributing pixels
c_avg *= 1.0f / 49.0f;

// Save the 3x3 block neighborhood surrounding the current block
for (int y = -1; y <= 1; y++)
{
    for (int x = -1; x <= 1; x++)
    {
        const uint block_x = wrap_or_clamp_block_x(bx + x);
        const uint block_y = wrap_or_clamp_block_y(by + y);
        cur_blocks[x + 1][y + 1] = m_blocks(block_x, block_y);
    }
}

// Compute the rounded 8-bit average color
// c_avg is the average color of the 7x7 pixels around the block
c_avg += vec4F(.5f);
color_quad_u8 color_avg((int)c_avg[0], (int)c_avg[1], (int)c_avg[2], (int)c_avg[3]);

// Lookup the optimal PVRTC endpoints to use given this average color,
// assuming the modulation values will be all-1
color_quad_u8 l0(0), h0(0);
l0[0] = g_pvrtc_opt55_e1[color_avg[0]] & 0xFF;
h0[0] = g_pvrtc_opt55_e1[color_avg[0]] >> 8;
l0[1] = g_pvrtc_opt55_e1[color_avg[1]] & 0xFF;
h0[1] = g_pvrtc_opt55_e1[color_avg[1]] >> 8;
l0[2] = g_pvrtc_opt54_e1[color_avg[2]] & 0xFF;
h0[2] = g_pvrtc_opt54_e1[color_avg[2]] >> 8;

// Set the block's endpoints and evaluate the error of the 7x7 neighborhood
// (also choosing new modulation values!)
m_blocks(bx, by).set_opaque_endpoint_raw(0, l0);
m_blocks(bx, by).set_opaque_endpoint_raw(1, h0);

uint64 e1_err = remap_pixels_influenced_by_endpoint(bx, by, orig_img, perceptual, alpha_is_significant);

if (e1_err > current_best_err)
{
    // Error got worse, so restore the blocks
    for (int y = -1; y <= 1; y++)
    {
        for (int x = -1; x <= 1; x++)
        {
            const uint block_x = wrap_or_clamp_block_x(bx + x);
            const uint block_y = wrap_or_clamp_block_y(by + y);
            m_blocks(block_x, block_y) = cur_blocks[x + 1][y + 1];
        }
    }
}
Here's an example for kodim03 (cropped to 1k square due to PVRTC limitations). This image only uses 2 precomputed tables for modulation values 1 and 2 (because it's real-time):
RGB Average Error: Max: 86, Mean: 1.156, MSE: 9.024, RMSE: 3.004, PSNR: 38.577
RGB Average Error: Max: 79, Mean: 0.971, MSE: 6.694, RMSE: 2.587, PSNR: 39.874
The 2D table version looks better on high contrast transitions, but needs more memory. Using 4 1D tables followed by a single 2D lookup results in the best quality.
The lookup table example code above assumes the high endpoints will usually be >= the low endpoints. Whatever algorithm you use to create the endpoints in the first pass needs to be compatible with your lookup tables, or you'll lose quality.
You can apply this algorithm in multiple passes for higher quality. 2-3 passes seems sufficient.
For comparison, here's a grayscale ramp encoded using PVRTexTool (best quality), vs. this algorithm using 3 passes:
Original:
PVRTexTool:
Lookup-based algorithm:
Friday, June 8, 2018
ETC1S texture format encoding and how it's transcoded to BC1
I developed the ETC1S encoding method back in late 2016, and we talked about it publicly in our CppCon '16 presentation. It's good to see that this encoding is working well in crunch too (better bitrate for near-equal error). There are kodim statistics in Alexander's checkin notes:
https://github.com/Unity-Technologies/crunch/commit/660322d3a611782202202ac00109fbd1a10d7602
I described the format details and asked Alexander to support ETC1S so we could add universal support to crunch.
Anyhow, ETC1S is great because it enables simplified transcoding to BC1 using a couple of small lookup tables (one for the 5-bit DXT1 components, and the other for 6-bit). You can precompute the best DXT1 component low/high endpoints to use for each possibility of used ETC1S selectors (or low/high selector "ranges") and each way of remapping the ETC1S selectors to DXT1 selectors. The method I came up with supports a strong subset of these possible mappings (6 low/high selector ranges and 10 selector remappings).
So the basic idea to this transcoder design is that we'll figure out the near-optimal DXT1 low/high endpoints to use for a ETC1S block, then just translate the ETC1S selectors through a remapping table. We don't need to do any expensive R,G,B vector calculations here, just simple math on endpoint components and selectors. To find the best endpoints, we need the ETC1S base color (5,5,5), intensity table index (3 bits), and the used selector range (because ETC1/ETC1S heavily depends on endpoint extrapolation to reduce overall error, so for example sometimes the encoder will only use a single selector in the "middle" of the intensity range).
First, here are the selector ranges the transcoder supports (the most commonly used ones):
{ 0, 3 },
{ 1, 3 },
{ 0, 2 },
{ 1, 2 },
{ 2, 3 },
{ 0, 1 },
And here are the selector remapping tables:
{ 0, 0, 1, 1 },
{ 0, 0, 1, 2 },
{ 0, 0, 1, 3 },
{ 0, 0, 2, 3 },
{ 0, 1, 1, 1 },
{ 0, 1, 2, 2 },
{ 0, 1, 2, 3 },
{ 0, 2, 3, 3 },
{ 1, 2, 2, 2 },
{ 1, 2, 3, 3 },
So what does this stuff mean? In the first table, the first entry is { 0, 3 }. This index is used for blocks that use all 4 selectors. The 2nd one is for blocks that only use selectors 1-3, etc. We could support all possible ways that the 4 selectors could be used, but you reach a point of diminishing returns.
The second table is used to translate ETC1S selectors to DXT1 selectors. Again, we could support all possible ways of remapping selectors, but only a few are needed in practice.
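To make the precomputation itself concrete, here's a rough sketch of how one 5-bit table entry could be generated by brute force. This is my own illustration of the idea, not the actual crunch/basis table generator; it assumes the standard ETC1 intensity modifier tables, crunch-style uint8/uint32 typedefs, and a linearized (low-to-high) DXT1 palette order, with the final DXT1 selector encoding handled by the remapping/swap logic shown in the code further below:

// Standard ETC1 intensity modifier tables (selectors in low-to-high order).
static const int g_etc1_inten[8][4] = {
    {  -8,  -2,  2,   8 }, {  -17,  -5,  5,  17 }, {  -29,  -9,  9,  29 }, {  -42, -13, 13,  42 },
    { -60, -18, 18,  60 }, { -80, -24, 24,  80 }, { -106, -33, 33, 106 }, { -183, -47, 47, 183 } };

struct solution { uint8 m_lo, m_hi; uint32 m_err; };

// Best 5-bit DXT1 lo/hi pair for a single ETC1S component, given the block's
// intensity table, the used selector range [low_sel, high_sel] and one remapping.
static solution find_best_5bit(int base5, int inten, int low_sel, int high_sel, const uint8 mapping[4])
{
    const int base = (base5 << 3) | (base5 >> 2); // expand the 5-bit base component to 8 bits

    solution best = { 0, 0, 0xFFFFFFFF };

    for (int lo = 0; lo < 32; lo++)
    {
        for (int hi = 0; hi < 32; hi++)
        {
            const int l = (lo << 3) | (lo >> 2), h = (hi << 3) | (hi >> 2);

            // DXT1's 4-color palette for this component, in low-to-high order
            const int pal[4] = { l, (l * 2 + h) / 3, (l + h * 2) / 3, h };

            uint32 err = 0;
            for (int s = low_sel; s <= high_sel; s++)
            {
                // The 8-bit value this component decodes to for ETC1S selector s
                int etc = base + g_etc1_inten[inten][s];
                etc = (etc < 0) ? 0 : ((etc > 255) ? 255 : etc);

                const int d = etc - pal[mapping[s]];
                err += (uint32)(d * d);
            }

            if (err < best.m_err)
            {
                best.m_lo = (uint8)lo;
                best.m_hi = (uint8)hi;
                best.m_err = err;
            }
        }
    }

    return best;
}

Running something like this for every combination of intensity table, base component value, selector range and remapping fills out the 6*32*8*10 entry tables described below.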
So to translate an ETC1S block to BC1/DXT1:
- Scan the ETC1S selectors (which range from 0-3) to identify their low/high range, and map this to the best entry in the first table. This is the selector range table index, from 0-5.
(For crunch/basis this is precomputed for each selector codebook entry, so we don't need to do it for each block.)
- Now we have a selector range (0-5), three ETC1S base color components (5-bits each) and an ETC1S intensity table index (3-bits). We have a set of 10 precomputed tables (for each supported way of remapping the selectors from ETC1S->DXT1) for each selector_range/basecolor/inten_table possibility (6*32*8*10=15360 total tables).
- Each table entry holds DXT1 low/high endpoint values (either 5 or 6 bits) and an error value, but only for a single component. So we scan the 10 entries (one per supported way of remapping the selectors from ETC1S->DXT1) for all three components, sum up their total R+G+B error, and use the selector remapping method that minimizes the overall error. (We can only select one way to remap the selectors, because there's only a single selector for each pixel, and the best remapping for R may not be the best for G or B.)
If the block only uses a single selector, it's a fixed color block and you can use a separate set of precomputed tables (like stb_dxt uses) to convert it to the optimal DXT1 color.
In code:
// Get the best selector range table entry to use for the ETC1S block:
const uint selector_range_table = g_etc1_to_dxt1_selector_range_index[low_selector][high_selector];

// Now get pointers to the precomputed tables for each component:
// [32][8][RANGES][MAPPING]
const etc1_to_dxt1_56_solution *pTable_r =
    &g_etc1_to_dxt_5[(inten_table * 32 + base_color.r) * (NUM_ETC1_TO_DXT1_SELECTOR_RANGES * NUM_ETC1_TO_DXT1_SELECTOR_MAPPINGS) +
        selector_range_table * NUM_ETC1_TO_DXT1_SELECTOR_MAPPINGS];
const etc1_to_dxt1_56_solution *pTable_g =
    &g_etc1_to_dxt_6[(inten_table * 32 + base_color.g) * (NUM_ETC1_TO_DXT1_SELECTOR_RANGES * NUM_ETC1_TO_DXT1_SELECTOR_MAPPINGS) +
        selector_range_table * NUM_ETC1_TO_DXT1_SELECTOR_MAPPINGS];
const etc1_to_dxt1_56_solution *pTable_b =
    &g_etc1_to_dxt_5[(inten_table * 32 + base_color.b) * (NUM_ETC1_TO_DXT1_SELECTOR_RANGES * NUM_ETC1_TO_DXT1_SELECTOR_MAPPINGS) +
        selector_range_table * NUM_ETC1_TO_DXT1_SELECTOR_MAPPINGS];

// Scan to find the best remapping table (of the 10) to use:
uint best_err = UINT_MAX;
uint best_mapping = 0;

CRND_ASSERT(NUM_ETC1_TO_DXT1_SELECTOR_MAPPINGS == 10);
#define DO_ITER(m) { uint total_err = pTable_r[m].m_err + pTable_g[m].m_err + pTable_b[m].m_err; if (total_err < best_err) { best_err = total_err; best_mapping = m; } }
DO_ITER(0); DO_ITER(1); DO_ITER(2); DO_ITER(3); DO_ITER(4);
DO_ITER(5); DO_ITER(6); DO_ITER(7); DO_ITER(8); DO_ITER(9);
#undef DO_ITER

// Now create the DXT1 endpoints
uint l = dxt1_block::pack_unscaled_color(pTable_r[best_mapping].m_lo, pTable_g[best_mapping].m_lo, pTable_b[best_mapping].m_lo);
uint h = dxt1_block::pack_unscaled_color(pTable_r[best_mapping].m_hi, pTable_g[best_mapping].m_hi, pTable_b[best_mapping].m_hi);

// pSelectors_xlat is used to translate the ETC1S selectors to DXT1 selectors
const uint8 *pSelectors_xlat = &g_etc1_to_dxt1_selector_mappings1[best_mapping][0];

if (l < h)
{
    std::swap(l, h);
    pSelectors_xlat = &g_etc1_to_dxt1_selector_mappings2[best_mapping][0];
}

pDst_block->set_low_color(static_cast<uint16>(l));
pDst_block->set_high_color(static_cast<uint16>(h));

// Now use pSelectors_xlat[] to translate the selectors and we're done
So that's it. It's a fast and simple process to convert ETC1S->DXT1. The results look very good - the ETC1S and transcoded BC1 versions are within a fraction of a dB of each other. You can also use this process to convert ETC1S->BC7, etc.
Once you understand this process, almost everything else falls into place for the universal format. ETC1S->BC1 and ETC1S->PVRTC are the key transcoders, and all other formats use these basic ideas.
There are surely other "base" formats we could choose. I chose ETC1S because I already had a strong encoder for this format and because it's transcodable to BC1.
You can see the actual code here, in function convert_etc1_to_dxt1().
It's possible to add BC7-style pbits to ETC1S (1 or 3) to improve quality. Transcoders can decide to use these pbits, or not.