I've reduced the size of the key lookup tables used by the universal transcoder to approx 85KB. Some example encodings:
[Image set: Original, ETC1 near-optimal, ETC1S (the universal texture), DXT1, DXT5A]
Friday, November 24, 2017
Thursday, November 23, 2017
More universal GPU texture format examples
I've improved the quality of the ETC1->DXT1 conversion process. All of these images come from the exact same compressed data. Only a straightforward transform is required on the compressed texture bits to derive the DXT1/DXT5A versions, and it's simple and fast enough to do in a JavaScript transcoder.
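To make "straightforward transform" concrete, here's a minimal sketch of the kind of per-block conversion I mean, written in C++ for clarity. The table and structure names (g_etc1s_to_dxt1_endpoints, etc1s_block, and so on) are hypothetical and simplified, not the actual Basis transcoder: the point is only that each DXT1 block can be derived from the ETC1S base color, intensity table index, and selectors with a couple of table lookups and some bit repacking, with no per-pixel re-encoding.

// Hypothetical sketch only - illustrative names/layouts, not the real Basis transcoder.
#include <stdint.h>

struct etc1s_block { uint8_t base_r5, base_g5, base_b5; uint8_t inten_table; uint8_t selectors[4]; };
struct dxt1_block  { uint16_t color0, color1; uint8_t selectors[4]; };

struct dxt1_endpoint_pair { uint16_t low, high; };
// Precomputed offline: (quantized base component, ETC1S intensity table) -> good DXT1 endpoints.
extern const dxt1_endpoint_pair g_etc1s_to_dxt1_endpoints[32 * 8];

static void transcode_block_etc1s_to_dxt1(const etc1s_block &src, dxt1_block &dst)
{
    // A table lookup picks the DXT1 endpoints (shown here indexed off one component for simplicity).
    const dxt1_endpoint_pair &e = g_etc1s_to_dxt1_endpoints[src.base_g5 * 8 + src.inten_table];
    dst.color0 = e.high; // color0 > color1 selects DXT1's 4-color mode
    dst.color1 = e.low;

    // The 2-bit ETC1S selectors are remapped to 2-bit DXT1 selectors with a tiny fixed table.
    static const uint8_t s_sel_remap[4] = { 1, 3, 2, 0 }; // illustrative ordering only
    for (int i = 0; i < 4; i++)
    {
        uint8_t out = 0;
        for (int j = 0; j < 4; j++)
            out |= s_sel_remap[(src.selectors[i] >> (j * 2)) & 3] << (j * 2);
        dst.selectors[i] = out;
    }
}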
[Image comparison sets follow; each set shows the ETC1, DXT1, and DXT5A versions transcoded from the same compressed data.]
Universal GPU texture format: DXT5 support
Got grayscale ETC1 to DXT5A conversion working, using a small table. This work is for DXT5 support in the universal texture format. Now that this is working I can proceed to finishing the full universal encoder.
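To illustrate why a small table is enough (this is a sketch under my own assumptions, not the actual Basis table or code): a grayscale ETC1 block is a base value plus one of the eight standard ETC1 intensity modifier tables, while a DXT5A block is just two 8-bit alpha endpoints plus 3-bit selectors. So an 8 x 256 entry table mapping (intensity table, base value) to an alpha endpoint pair, built offline roughly like this, covers every case:

// Hypothetical offline table builder - illustrative only, not the table Basis actually uses.
#include <stdint.h>
#include <algorithm>

// The eight standard ETC1 intensity modifier tables (from the ETC1 specification).
static const int g_etc1_inten_tables[8][4] =
{
    {  -8,  -2,  2,   8 }, { -17,  -5,  5,  17 }, {  -29,  -9,  9,  29 }, {  -42, -13, 13,  42 },
    { -60, -18, 18,  60 }, { -80, -24, 24,  80 }, { -106, -33, 33, 106 }, { -183, -47, 47, 183 }
};

struct dxt5a_endpoints { uint8_t lo, hi; };

// For a given (intensity table, base gray value) pair, pick DXT5A endpoints that span the
// four gray levels the ETC1 block can actually produce. A real builder would also pick the
// selector remapping that minimizes error; here we just take the min/max.
static dxt5a_endpoints compute_table_entry(int inten_table, int base_gray)
{
    int lo = 255, hi = 0;
    for (int s = 0; s < 4; s++)
    {
        int v = std::min(255, std::max(0, base_gray + g_etc1_inten_tables[inten_table][s]));
        lo = std::min(lo, v);
        hi = std::max(hi, v);
    }
    dxt5a_endpoints e = { (uint8_t)lo, (uint8_t)hi };
    return e;
}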
Note that none of these images were created with my best ETC1 encoder. They use an early prototype from late 2016 that has so-so quality. The main point of these experiments is to prove that the idea is workable.
All stats are dB vs. the original image. This image's subtle gradients are hard to handle; you can see this in the DXT1 version.
To those who argue that a universal GPU texture format based on ETC1/DXT1 isn't high enough quality: you would be amazed at the low quality levels teams use with crunch/Basis. This tech isn't about achieving the highest texture quality. It's about enabling easy distribution of supercompressed GPU texture data. It's a "JPEG-like format for GPU texture data", usable on mobile or desktop.
Original
ETC1 near-optimal: 48.903 dB
ETC1S (universal format base image, ETC1 mode): 46.322 dB
ETC1S->DXT1: 45.664 dB
ETC1S green channel converted to DXT5A: 43.878 dB
Original
ETC1 near-optimal: 51.141 dB
ETC1S: 46.461 dB
ETC1S->DXT1: 44.865 dB
ETC1S green channel converted to DXT5A: 46.107 dB
Wednesday, November 22, 2017
"Universal" GPU texture/image format examples
All PSNR figures are luma PSNR. Each image was transcoded from the same compressed texture data. (A sketch of the luma PSNR computation appears after the figures below.)
ETC1 41.233
DXT1 40.9
ETC1 45.964
DXT1 45.322
ETC1 46.461
DXT1 44.865
ETC1 43.785
DXT1 43.406
ETC1 33.516
DXT1 33.339
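For reference, here's a minimal sketch of how a luma PSNR figure like the ones above can be computed. The Rec. 601 luma weights are my assumption; the post doesn't state which weighting was used.

#include <cmath>
#include <cstdint>

// Luma PSNR (dB) between an original and a decoded RGBA8 image (assumes Rec. 601 luma weights).
double luma_psnr(const uint8_t *orig, const uint8_t *decoded, int width, int height)
{
    double total_sq_err = 0.0;
    for (int i = 0; i < width * height; i++)
    {
        const uint8_t *a = &orig[i * 4], *b = &decoded[i * 4];
        double ya = 0.299 * a[0] + 0.587 * a[1] + 0.114 * a[2];
        double yb = 0.299 * b[0] + 0.587 * b[1] + 0.114 * b[2];
        total_sq_err += (ya - yb) * (ya - yb);
    }
    const double mse = total_sq_err / (width * height);
    if (mse == 0.0)
        return 999.0; // identical images, PSNR is infinite
    return 10.0 * log10((255.0 * 255.0) / mse);
}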
Sunday, November 12, 2017
On whiteboard coding interviews
I'm in a ranty mood this evening. Looking through my past, one thing that bothers me is the ritual called "whiteboarding".
I've taken and given a lot of these interviews. I personally find the process demeaning, dehumanizing, biased, and subjective. And if the company uses the terms "cultural fit" or "calibration" when teaching you how to whiteboard, be wary.
My first software development interview was in 1996. I walked in, showed my Game Developer Magazine articles and demos (in DOS of course), spoke with the developers and my potential manager, and they made me an offer. Looking back, I was so young, inexperienced and naive at 20 years old. It was a tough gig but we shipped a cool product (Montezuma's Return). There was no whiteboard, all that mattered was the work and results.
Anyhow, my interview at Blue Shift was similar. No whiteboard, just lots of meetings.
At Ensemble (Microsoft), I got a contract gig at first. This turned into a full-time gig. The interviews there were informal and very rarely (if ever) involved problem solving on a whiteboard.
Right before Ensemble, I also interviewed at Microsoft ATG. It was a stressful, heavy duty whiteboard interview with several devs. It was intense, and that night I fell asleep at the table of an unrelated dinner with friends. I got an offer, but Ensemble's was better. I later learned it was basically a form of "Trauma Bonding". Everyone else did it, so you had to go through it too to get "in". Overall, I remember the Microsoft engineers I interviewed with seemed to be all tired and somewhat stressed out, but they were very professional and respectful.
After Age3 shipped, I interviewed at Epic. I was tired from crunching on Age3, and was unprepared. It was the most horrific interview I've ever taken or seen. Incredibly unprofessional. The devs didn't want to be interviewing anyone. I flopped this interview (and probably dodged a bullet as the working conditions there at the time seemed really bad). Nobody at Ensemble knew I interviewed there, and I'm glad I didn't leave.
Years later, I interviewed at Valve. It was another exercise in Trauma Bonding. I was so stressed it was ridiculous, and I found Dune's "The Litany Against Fear" helpful. Somehow I got through, and looking back I think Gabe Newell (who visited Ensemble and met me there) might have helped get me in without my knowledge. I was lucky to get in at all, because I interviewed as a generalist. If I had interviewed as a graphics specialist I never could have gotten in. (Because at the time the gfx coders at Valve had a pact of sorts, and unless you were Carmack it was virtually impossible to survive the whiteboard. So many graphics specialists got turned down that after a while the high-ups at Valve took notice and changed things.)
Anyhow, one of my points is, I've been pretty lucky to get to work at these places. I learned a lot. Most of the companies I worked at didn't use whiteboarding. Interestingly, the cultures of the non-whiteboarding companies were much healthier.
I sometimes wonder: if I wasn't a white male, or overweight, with all other things unchanged, would I have got these gigs? I very highly doubt it.
I've implemented and shipped tons of algorithms, products, etc. But I hate whiteboarding.
I think the tech companies use this process to slow down horizontal movement between companies. It keeps labor in place, and developer wages/prices down. The "price" of moving between companies (in terms of stress, and potential "whiteboard defeat") is purposely held high. Independent of whether or not this is done purposely, this is the end result.
If you've got to whiteboard, it can't hurt to practice like crazy. And read a few whiteboard coding interview books. Also, tap your social network and find devs who interviewed at your target company, and ask them what happened. If companies are going to do this, at least make them put some effort into it.
One trick I've seen done: After a big layoff, a group of devs gets together and starts interviewing at various companies. After every dev interviews at a particular shop, everything about the interview, and the whiteboard questions, are discreetly shared with the group. The first dev to be sent to a particular company won't be expected to get in (and very well might not want to work there in the first place). Once developers start acting as a group the entire process gets "gamed" particularly effectively.
Wednesday, September 27, 2017
Things learned while running your own self-funded startup
Here's a brain dump of the things we've learned while running our business and shipping our first product (Basis).
My experience at Valve somewhat helped prepare me for doing this. Working at Valve was like a microcosm of working at your own company. You needed to find customers, interact with them, and figure out what was valuable to them. (You also needed to identify "competitors" and do your best to ignore or respond to whatever challenges they might throw your way.) Financial concerns weren't an issue, but time and your reputation at the company were. I noticed a feedback loop there: the more success you had at Valve, the easier it was to find projects to help out on. As you earned "Valve Bucks", doors opened much more easily.
Entering Valve with basically zero Valve Bucks was a big challenge. It wasn't enough to merely be a good engineer at Valve. If you were a good engineer with zero communication skills your chances at surviving and thriving when I was there were pretty low. If you acted like an asshole and didn't have many friends it didn't matter how good you were or how awesome your accomplishments were. People like this would be fired sooner or later.
Anyhow, running your own company has a number of additional challenges. There are no bi-weekly paychecks, no free lunches, no PTO, no yearly Hawaiian vacation, and no on-site lawyers. You are now in the real world, and you're leaving the high-school-like corporate drama behind. Everything, including staying financially solvent, is now your responsibility.
Some things we've learned:
1. This is a dramatically more mature way of working vs. full-timing. Your boss is basically the bank. Keeping your account in the green is like an optimization problem. If you fail you go under, or you wind up in the arms of potentially predatory investors.
2. You want a product ASAP. Contract work is basically linear income relative to time, while products can be exponential. Just choose a product and ship it. If it fails, try again and again, because the things you learned while working on the first product will help you immensely on your second.
3. Products can take a long time to develop and monetize. Contract work can bring in immediate income, but only a trickle. The big challenge is working on contracts to stay afloat in the short term, but also finding time to work on your product for long term success.
4. There are lots of ways to stay funded until your product takes off: You can use savings, loans from friends, investor funds, government grants, and income from short contracts. I would recommend staying away from investors as much as you can, because once you get in bed with investors you no longer totally own your company (and it can be basically taken away from you).
5. Every decision must be made extremely carefully. Bad decisions cost money.
6. The large companies move very slowly. Do not place any bets on getting paid quickly by large companies, no matter how happy they say they are with you.
So, at least with a software middleware product, I would first target small customers because they move more quickly.
7. If you have a product that a very large company really wants, they'll still do everything they can to delay purchasing it for the market price. They'll try to hire you or your partner(s) away individually, or they'll wait as long as possible to see if you encounter hard financial times and go under. They won't come and just offer to license your product or buy you out until they've exhausted all other possibilities.
8. If your product offers evaluation licenses, then be very careful with the eval time period. Some companies will purposely demand very long evals as a form of negotiation leverage.
9. A company can feign interest in licensing your product, get your lawyer bogged down negotiating terms of the license (or eval license), then pull away or suddenly change their mind at the last minute. This costs money. To avoid this, a "put up or shut up" mentality can help. Either the company accepts your eval license with little fuss, or just move on.
10. No Hard Sells: If the company you are negotiating with gets overly emotional about the terms in your eval license, then move on. Either they want the value your product offers, or they don't.
11. Research pricing: Your competitor(s) will publicly advertise low-ball prices to help lure in customers, but once you start negotiating with them the price goes up (sometimes massively). Talk to your competitor's customers and just ask them what they actually paid, and you'll be amazed at how much software middleware is actually worth in the market.
The publicly advertised price is basically just for corporate programmers, who generally don't understand the true market value of their code when properly packaged as a product. The public price is optimized so programmers won't feel bad about how underpaid they are, but it won't be too low so the coders will still perceive the product as having sufficient value.
Research the concept of "Death Prices". If the price is too low, it won't be perceived as having enough value to bother with, and low prices won't sustain your efforts. Set the price sufficiently high and let the market set the actual price. Most likely, if you're a programmer, you'll set the price too low because you've been brainwashed into thinking that your software doesn't have much value.
Large companies will pay high prices to be first to use your software, if it's perceived to be groundbreaking or awesome enough.
12. Find a good lawyer. Get your eval and software license figured out early. This is going to cost money, so save up. A lawyer with patent and software license experience is a huge bonus.
13. Interacting with real customers is priceless. Ask them what they want. For us, we were amazed at all the different ways the open source predecessor of Basis (crunch) was utilized. We pivoted our strategy to RDO encoders based on customer feedback. Our long term roadmap is based on what customers are actually doing with our software right now.
14. Open source is forever: Be extremely careful releasing open source software. "Thou shalt not release too much functionality or features as open source". Open sourcing your work is both a blessing and a curse, and can actually be dangerous from a patent troll perspective.
Open source is great, because potential customers will get a chance to try out your work without spending a dime or ever talking to you. This boosts your credibility. On the flip side, if you give away too much, you are basically competing against yourself when you attempt to monetize your work as a product.
Your open source release should be a demo of the product, and no more. Give things away with the goal of eventually converting the users of the free software into paying customers.
Even if you don't intend on turning the software into a product, always keep in mind ways it can be eventually monetized. Your time is worth something.
Talk to every user of your open source software that you can find. Gather intelligence about how they actually use your software.
15. Your company must both appear stable and be stable, at all costs. If one year your large and expensive GDC booth isn't present, people will notice and you'll lose business. Even the biggest game middleware vendors have had serious cashflow problems. One almost went under a few years ago until they were bailed out by a big player. Even the biggest players sometimes take contract work to stay in the green, because product income isn't reliable.
16. Align yourself well: Being associated with CoMotion Labs and Khronos was invaluable to us. At CoMotion we were exposed to tons of other startups, and this cultural immersion was valuable.
17. Perception and psychology are extremely important. If you're a programmer, you're probably going to suck at the skills needed to bring your software to market. Find a partner who complements you well.
18. You need friends, inside and outside of companies. Make as many friends as you can.
19. Some big corps can be very nasty:
"OMG, you can't do this due to patents!"
"You'll never take off because your price is too high"
"You'll run out of money and just come work for us, so we'll just wait you out"
"You must work for us because we're going to write this software ourselves and that'll impact your market share"
Corporate programmers at these megacorps can be horrifically nasty. Also, study and become acutely aware of triangulation when dealing with large hierarchical companies.
20. Some large teams have egos and won't want to license your software because of it. The challenge to licensing software in this situation will be overcoming this institutional ego, or just waiting to see how things pan out.
21. Be aware that companies talk to each other. A larger corp can use a smaller corp to help establish prices.
22. If you're at a company with deep pockets and you want to really have influence, offer the potential of a very large license fee for key software middleware. It works.
23. Do not blindly sign NDA's. Get a checklist from your lawyer and read them very carefully. If you try to negotiate over a clause in the NDA that totally sucks and the company refuses to budge, then move on.
Also, the NDA negotiation process can be very revealing. If the company is hard to deal with at this stage, then it's safer to just move on.
24. Treat your employees well, and with respect. We don't have any employees, but we've learned a lot while talking to employees at other middleware companies.
25. Your company will be defined just as much (if not more) by the customers you turn down as by the ones you take.
If you get a bad feeling from a potential customer, or they aren't respectful, or they treat you substantially differently vs. how they treat your partner, then it's probably best to move on. Selling software like this is actually the establishment of a relationship, and every new relationship you take on has both risks and rewards. We're careful about who we work with.
Thursday, August 17, 2017
Why crunch likes uncompressed texture data
We've recently gotten some interest in creating a RDO compressor specifically for already compressed textures, which is why I'm writing this.
crunch works best with (and is designed for) uncompressed RGBA texture data. You can feed crunch already compressed data (by compressing to DXT, unpacking the blocks, and throwing the unpacked pixels into the compressor), but it won't perform as well. Why, you ask?
crunch uses top-down clusterization on the block endpoints. It tries to create groups of blocks that share similar endpoints. Once it finds a group of blocks that seem similar enough, it then uses its DXT endpoint optimizers on these block clusters to create a near-optimal set of endpoints for that cluster. These clusters can be very big, which is why crunch/Basis can't use off-the-shelf DXT/ETC compressors, which assume 4x4 blocks.
DXT/ETC are lossy formats, so there is no single "correct" encoding for each input (ignoring trivial inputs like solid-color blocks). There are many possible valid encodings that will look very similar. Because of this, creating a good DXT/ETC block encoder that also performs fast is harder than it looks, and adding additional constraints or requirements on top of this (such as rate distortion optimization on both the endpoints and the selectors) just adds to the fun.
Anyhow, imagine the data has already been compressed, and the encoder creates a cluster containing just a single block. Because the data has already been compressed, the encoder now has the job of determining exactly which endpoints were used originally to pack that block. crunch tries to do this for DXT1 blocks, but it doesn't always succeed. There are many DXT compressors out there, each using different algorithms. (crunch could be modified to also accept the precompressed DXT data itself, which would allow it to shortcut this problem.)
What if the original compressor decided to use fewer than 4 colors spaced along the colorspace line? Also, the exact method used to interpolate the endpoint colors is only loosely defined for DXT1. It's a totally solvable problem, but it's not something I had the time to work on while writing crunch.
Things get worse if the endpoint clusterization step assigns 2+ blocks with different endpoints to the same cluster. The compressor now has to find a single set of endpoints to represent both blocks. Because the input pixels have already been compressed, we're now forcing the input pixels to lie along a quantized colorspace line (using 555/565 endpoints!) two times in a row. Quality takes a nosedive.
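To make the "quantized colorspace line" point concrete, here's a small sketch of the 565 endpoint quantization and the ideal DXT1 4-color interpolation. Real decoders are permitted to deviate slightly from the ideal interpolation, which is exactly why the interpolation method is only loosely defined from a compressor's point of view:

#include <stdint.h>

// Quantize an 8-bit channel to 5 or 6 bits and expand it back, the way DXT1 endpoints are stored.
static inline uint8_t quantize_expand_5(uint8_t v) { uint8_t q = (uint8_t)((v * 31 + 127) / 255); return (uint8_t)((q << 3) | (q >> 2)); }
static inline uint8_t quantize_expand_6(uint8_t v) { uint8_t q = (uint8_t)((v * 63 + 127) / 255); return (uint8_t)((q << 2) | (q >> 4)); }

// "Ideal" per-channel DXT1 4-color palette: the two quantized endpoints plus 1/3 and 2/3 interpolants.
// Hardware decoders may round slightly differently, so the exact palette isn't fully pinned down.
static void dxt1_palette_channel(uint8_t e0, uint8_t e1, uint8_t palette[4])
{
    palette[0] = e0;
    palette[1] = e1;
    palette[2] = (uint8_t)((2 * e0 + e1 + 1) / 3);
    palette[3] = (uint8_t)((e0 + 2 * e1 + 1) / 3);
}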
Basis improves this situation, although I still favor working with uncompressed texture data because that's what the majority of our customers work with.
Another option is to use bottom-up clusterization (which crunch doesn't use). You first compress the input data to DXT/ETC/etc., then merge similar blocks together so they share the same endpoints and/or selectors. This approach seems to be a natural fit to already compressed data. Quantizing just the selector data is the easiest thing to do first.
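Here's a very rough sketch of that bottom-up selector quantization idea (my own illustration, not crunch or Basis code): compress every block normally first, then greedily merge blocks whose selector patterns are close enough so they share a single codebook entry. A real encoder would measure actual pixel error against each block's endpoints rather than the crude selector distance used here.

#include <cstdint>
#include <cstdlib>
#include <vector>

// Hypothetical bottom-up selector quantization sketch for DXT1-style blocks.
struct selector_set { uint8_t sel[16]; }; // one 2-bit selector per texel, stored unpacked

static int selector_distance(const selector_set &a, const selector_set &b)
{
    int d = 0;
    for (int i = 0; i < 16; i++)
        d += std::abs((int)a.sel[i] - (int)b.sel[i]); // crude proxy for the induced pixel error
    return d;
}

// Build a selector codebook: each block either reuses an existing "close enough" entry or adds a new one.
static std::vector<selector_set> quantize_selectors(const std::vector<selector_set> &blocks, int max_dist)
{
    std::vector<selector_set> codebook;
    for (const selector_set &s : blocks)
    {
        bool found = false;
        for (const selector_set &c : codebook)
            if (selector_distance(s, c) <= max_dist) { found = true; break; }
        if (!found)
            codebook.push_back(s);
    }
    return codebook;
}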
Sunday, June 25, 2017
Seattle
I'm by no means an expert on anything San Diego, having been there only around 1.5 months since leaving Seattle. I did spend 8 years in Seattle though, and here's what I think:
- Seattle is just way too dark of a city for me to live there year round. Here's Seattle vs. San Diego's sunshine (according to city-data.com).
The winter rain didn't bother me much at all. It was the lack of sun. (Hint to Seattle-area corporate recruiters: Fly in candidates from sunnier climates like Dallas to interview during July-August.)
- There's a constant background noise and auditory clutter to Seattle and the surrounding areas that's just getting louder and louder as buildings pop up and people (and their cars) move in.
Eventually this background noise got really annoying. Even downtown San Diego is surprisingly peaceful and quiet by comparison.
- Seattle's density is both a blessing and a curse. It's a very walkable city, so going without a car is possible if you live and work in the right places.
The eastside and westside buses can be incredibly, ridiculously over packed. Seattle needs to seriously get its public transportation act together.
- As a pedestrian, I've found Seattle's drivers to be much nicer and more peaceful on the road vs. San Diego's. CA drivers seem a lot more aggressive.
- San Diego is loaded with amazing beaches. Seattle - not so much.
A few misc. thoughts on Seattle and the eastside tech workers I encountered:
I lived and worked on the eastside (near downtown Bellevue) and westside (U District) for enough time to compare and contrast the two areas. The people in Seattle itself are generally quite friendly and easy going. Things seem to change quickly once you get to the eastside, which feels almost like a different state entirely.
I found eastside people to be much less friendly and living in their own little worlds. I wish I had spent more of my time living in Seattle itself instead of Bellevue. Culturally Bellevue feels cold and very corporate.
The wealthier areas on the eastside seemed the worst. Wealth and rudeness seem highly correlated. So far, I've yet to meet a Bellevue/Redmond tech 10-100 millionaire (or billionaire) that I found to be truly pleasant to be around or work with. I also learned over and over that there is only a weak correlation between someone's wealth and their ability to actually code. In many cases someone's tech wealth seemed to be related to luck of the draw, timing, personality, and even popularity. Some of the wealthiest programmers I met here were surprisingly weak software engineers.
I've seen this happen repeatedly over the years: average software engineers get showered with mad cash and suddenly they turn inward, become raging narcissistic assholes, and firmly believe they and their code are godly. Money seems to bring out the worst personality traits in people.
Monday, June 19, 2017
Basis's RDO DXTc compression API
This is a work in progress, but here's the API to the new rate distortion optimizing DXTc codec I've been working on for Basis. There's only one encoding function (besides the version queries): basis_rdo_dxt_encode(). You call it with some encoding parameters and an array of input images (or "slices"), and it gives you back a blob of DXTc blocks which you then feed to any LZ codec like zlib, zstd, LZHAM, Oodle, etc.
The output DXTc blocks are organized in simple raster order, with slice 0's blocks first, then slice 1's, etc. The slices could be mipmap levels, or cubemap faces, etc. For highest compression, it's very important to feed the output blocks to the LZ codec in the order that this function gives them back to you.
On my near-term TODO list is to allow the user to specify custom per-channel weightings, and to add more color distance functions. Right now it supports either uniform weights, or a custom model for sRGB colorspace photos/textures. Also, I may expose optional per-slice weightings (for mipmaps).
I'm shipping the first version (as a Windows DLL) tomorrow.
// File: basis_rdo_dxt_public.h
#pragma once
#include <stdlib.h>
#include <memory.h>

#ifdef BASIS_DLL_EXPORTS
   #define BASIS_DLL_EXPORT __declspec(dllexport)
#else
   #define BASIS_DLL_EXPORT
#endif

#if defined(_MSC_VER)
   #define BASIS_CDECL __cdecl
#else
   #define BASIS_CDECL
#endif

namespace basis
{
   // The codec's current version number.
   const int BASIS_CODEC_VERSION = 0x0106;

   // The codec can accept rdo_dxt_params's from previous versions for backwards compatibility purposes. This is the oldest version it accepts.
   const int BASIS_CODEC_MIN_COMPATIBLE_VERSION = 0x0106;

   typedef unsigned int basis_uint;
   typedef basis_uint rdo_dxt_bool;
   typedef float basis_float;

   enum rdo_dxt_format
   {
      cRDO_DXT1 = 0,
      cRDO_DXT5,
      cRDO_DXN,
      cRDO_DXT5A,

      cRDO_DXT_FORCE_DWORD = 0xFFFFFFFF
   };

   enum rdo_dxt_encoding_speed_t
   {
      cEncodingSpeedSlowest,
      cEncodingSpeedFaster,
      cEncodingSpeedFastest
   };

   const basis_uint RDO_DXT_STRUCT_VERSION = 0xABCD0001;

   const basis_uint RDO_DXT_QUALITY_MIN = 1;
   const basis_uint RDO_DXT_QUALITY_MAX = 255;

   const basis_uint RDO_DXT_MAX_CLUSTERS = 32768;

   struct rdo_dxt_params
   {
      basis_uint m_struct_size;
      basis_uint m_struct_version;

      rdo_dxt_format m_format;
      basis_uint m_quality;

      basis_uint m_alpha_component_indices[2];

      basis_uint m_lz_max_match_dist;

      // Output block size to use in RDO optimization stage, note this does NOT impact the blocks written to pOutput_blocks by basis_rdo_dxt_encode()
      basis_uint m_output_block_size;

      basis_uint m_num_color_endpoint_clusters;
      basis_uint m_num_color_selector_clusters;
      basis_uint m_num_alpha_endpoint_clusters;
      basis_uint m_num_alpha_selector_clusters;

      basis_float m_l;
      basis_float m_selector_rdo_quality_threshold;
      basis_float m_selector_rdo_quality_threshold_low;
      basis_float m_block_max_y_std_dev_rdo_quality_scaler;

      basis_uint m_endpoint_refinement_steps;
      basis_uint m_selector_refinement_steps;
      basis_uint m_final_block_refinement_steps;

      basis_float m_adaptive_tile_color_psnr_derating;
      basis_float m_adaptive_tile_alpha_psnr_derating;

      basis_uint m_selector_rdo_max_search_distance;

      basis_uint m_endpoint_search_height;
      basis_uint m_endpoint_search_width_first_line;
      basis_uint m_endpoint_search_width_other_lines;

      rdo_dxt_bool m_optimize_final_endpoint_clusters;
      rdo_dxt_bool m_optimize_final_selector_clusters;

      rdo_dxt_bool m_srgb_metrics;
      rdo_dxt_bool m_debugging;
      rdo_dxt_bool m_debug_output;
      rdo_dxt_bool m_hierarchical_mode;
      rdo_dxt_bool m_multithreaded;
      rdo_dxt_bool m_use_sse41_if_available;
   };

   inline void rdo_dxt_params_set_encoding_speed(rdo_dxt_params *p, rdo_dxt_encoding_speed_t encoding_speed)
   {
      if (encoding_speed == cEncodingSpeedFaster)
      {
         p->m_endpoint_refinement_steps = 1;
         p->m_selector_refinement_steps = 1;
         p->m_final_block_refinement_steps = 1;
         p->m_selector_rdo_max_search_distance = 3072;
      }
      else if (encoding_speed == cEncodingSpeedFastest)
      {
         p->m_endpoint_refinement_steps = 1;
         p->m_selector_refinement_steps = 1;
         p->m_final_block_refinement_steps = 0;
         p->m_selector_rdo_max_search_distance = 2048;
      }
      else
      {
         p->m_endpoint_refinement_steps = 2;
         p->m_selector_refinement_steps = 2;
         p->m_final_block_refinement_steps = 1;
         p->m_endpoint_search_width_first_line = 2;
         p->m_endpoint_search_height = 3;
         p->m_selector_rdo_max_search_distance = 4096;
      }
   }

   inline void rdo_dxt_params_set_to_defaults(rdo_dxt_params *p, rdo_dxt_encoding_speed_t default_speed = cEncodingSpeedFaster)
   {
      memset(p, 0, sizeof(rdo_dxt_params));

      p->m_struct_size = sizeof(rdo_dxt_params);
      p->m_struct_version = RDO_DXT_STRUCT_VERSION;

      p->m_format = cRDO_DXT1;
      p->m_quality = 128;

      p->m_alpha_component_indices[0] = 0;
      p->m_alpha_component_indices[1] = 1;

      p->m_l = .001f;
      p->m_selector_rdo_quality_threshold = 1.75f;
      p->m_selector_rdo_quality_threshold_low = 1.3f;
      p->m_block_max_y_std_dev_rdo_quality_scaler = 8.0f;

      p->m_lz_max_match_dist = 32768;
      p->m_output_block_size = 8;

      p->m_endpoint_refinement_steps = 1;
      p->m_selector_refinement_steps = 1;
      p->m_final_block_refinement_steps = 1;

      p->m_adaptive_tile_color_psnr_derating = 1.5f;
      p->m_adaptive_tile_alpha_psnr_derating = 1.5f;

      p->m_selector_rdo_max_search_distance = 0;

      p->m_optimize_final_endpoint_clusters = true;
      p->m_optimize_final_selector_clusters = true;

      p->m_selector_rdo_max_search_distance = 3072;

      p->m_endpoint_search_height = 1;
      p->m_endpoint_search_width_first_line = 1;
      p->m_endpoint_search_width_other_lines = 1;

      p->m_hierarchical_mode = true;
      p->m_multithreaded = true;
      p->m_use_sse41_if_available = true;

      rdo_dxt_params_set_encoding_speed(p, default_speed);
   }

   const basis_uint RDO_DXT_MAX_IMAGE_DIMENSION = 16384;

   struct rdo_dxt_slice_desc
   {
      // Pixel dimensions of this slice. A slice may be a mipmap level, a cubemap face, a video frame, or whatever.
      basis_uint m_image_width;
      basis_uint m_image_height;
      basis_uint m_image_pitch_in_pixels;

      // Pointer to 32-bit raster image. Format in memory: RGBA (R is first byte, A is last)
      const void *m_pImage_pixels;
   };

} // namespace basis

extern "C" BASIS_DLL_EXPORT basis::basis_uint BASIS_CDECL basis_get_version();
extern "C" BASIS_DLL_EXPORT basis::basis_uint BASIS_CDECL basis_get_minimum_compatible_version();

extern "C" BASIS_DLL_EXPORT bool BASIS_CDECL basis_rdo_dxt_encode(
   const basis::rdo_dxt_params *pEncoder_params,
   basis::basis_uint total_input_image_slices, const basis::rdo_dxt_slice_desc *pInput_image_slices,
   void *pOutput_blocks, basis::basis_uint output_blocks_size_in_bytes);
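Here's a minimal usage sketch under a few assumptions of my own: a single DXT1 slice, the defaults from rdo_dxt_params_set_to_defaults() left mostly alone, an output buffer sized at 8 bytes per 4x4 block (my calculation, not something the header documents), and compress_with_your_lz() standing in for whichever LZ codec (zlib, zstd, LZHAM, Oodle, etc.) you feed the blocks to:

#include <cstddef>
#include <vector>
#include "basis_rdo_dxt_public.h"

// Hypothetical stand-in for your LZ codec of choice.
extern std::vector<unsigned char> compress_with_your_lz(const void *pData, size_t size);

bool encode_one_dxt1_slice(const unsigned int *pRGBA_pixels, unsigned int width, unsigned int height)
{
    basis::rdo_dxt_params params;
    basis::rdo_dxt_params_set_to_defaults(&params, basis::cEncodingSpeedFaster);
    params.m_format = basis::cRDO_DXT1;
    params.m_quality = 128;

    basis::rdo_dxt_slice_desc slice;
    slice.m_image_width = width;
    slice.m_image_height = height;
    slice.m_image_pitch_in_pixels = width;
    slice.m_pImage_pixels = pRGBA_pixels; // RGBA in memory, R first byte, A last

    // DXT1 is 8 bytes per 4x4 block - this sizing is my assumption, not something the API specifies.
    const basis::basis_uint blocks_x = (width + 3) / 4, blocks_y = (height + 3) / 4;
    std::vector<unsigned char> output_blocks(blocks_x * blocks_y * 8);

    if (!basis_rdo_dxt_encode(&params, 1, &slice, output_blocks.data(), (basis::basis_uint)output_blocks.size()))
        return false;

    // Feed the blocks to the LZ codec in exactly the order the encoder returned them.
    std::vector<unsigned char> lz_data = compress_with_your_lz(output_blocks.data(), output_blocks.size());
    // ... write lz_data to your asset file ...
    return true;
}

At load time you'd LZ-decompress the blob and hand the raw DXTc blocks straight to the GPU.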
Sunday, April 30, 2017
Binomial stuff
One MS employee recently told Stephanie (my partner) that, paraphrasing, "your company isn't stable and can't possibly last". My reply: We've been in business for over a year now, and our business is just a natural extension and continuation of our careers. I've been programming since 1985, and developing commercial data compression and other software since 1993. I've been doing this for a while and I'm not going to stop anytime soon.
Having my own small consulting company vs. just working full-time for a single corporation is just a natural next step to me. One thing I really liked about working at Valve was the ability to wheel my desk to virtually anywhere in the company and start adding value. I can now "wheel my desk" to anywhere in the world, and the freedom this gives us is amazing.
Binomial is a self-funded startup. We work on both development contracts and our current product (Basis). We haven't taken any investment money. Our "runway" is basically infinite.
Wednesday, April 19, 2017
Basis status
Just a small update. We've put like 99% of our effort into ETC1 and ETC1+DXT1 over the previous 5-6 months. Our ETC1 encoder supports RDO and an intermediate format, and has shipped on OSX/Linux/Windows. I've been modifying the ETC1 encoder to also support DXT1 (for our universal format) over the previous few weeks.
Our ETC1 encoder was written almost from scratch. The next major step is to roll all the improvements and lessons learned while implementing our ETC1 encoder back into our DXT-specific encoder. crunch's support for DXT has a bunch of deficiencies which hurt ratio. (Roy Eltham and Fabian Giesen have recently pointed this issue out to me. I've actually been aware of inefficiencies in crunch's codebook generator for a few months, since working on the new codebook generator for ETC1.) I'm definitely fixing this problem (and others!) in Basis.
Saturday, March 18, 2017
Probiotic yogurt making
Just got back from a wonderful business trip to Portland, Maine, visiting ForeFlight. Making more probiotic yogurt tonight because I ate up almost my entire stock on the trip. (It didn't help that we got stuck in a blizzard while there, but that turned out to be really fun.) The food in Portland is amazing!
The pot of boiling water is for sterilizing the growth medium, in this case 2% organic grassfed milk + raw sugar. After the milk is boiled (repasteurized) and cooled, I inoculate it using a 10-strain probiotic blend from Safeway. I tried a bunch of probiotics before finding this particular brand, which seems magical for me. Without this extremely strong yogurt I have no idea how long it would have taken my gut to heal after the antibiotics I had to take in 2015.
Yogurt making like this is tricky. Early on, around 30% of my attempts failed in spectacular ways. These days my success rate is almost 100%. Sterilization of basically everything (including tools, spoons, etc.) over and over throughout the process is critical to success.