Right now I'm wondering if the benefits of a 4×4-pixel tile justify changing the format definition, particularly for high-entropy blocks. Or maybe dynamically switching between 4×4 tiles (high entropy) and 16×16 tiles (low entropy), which adds one flag bit in front of each non-leaf tile.
It's the old "bad apple" syndrome : one outlying sample can wreck the compression ratio. The majority of the samples shouldn't "pay" for a single one; the goal is to keep that one sample from spilling over the whole dataset...
A smaller block also helps with code optimisation because some loops can be unrolled nicely.
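For example, a min/max scan over a fixed 4×4 tile needs no inner loop counter at all. The sketch below is plain C, not taken from the actual codec (the stride parameter is an assumption), but it shows the kind of unrolling that becomes trivial when the tile size is a small constant:

```c
/* A minimal sketch (not the project's actual code): with a fixed 4×4 tile,
   the inner scan can be fully unrolled instead of looping over a variable
   width. "stride" is the width of the source image in samples (assumed). */
#include <stdint.h>

static void tile4x4_minmax(const uint8_t *p, int stride,
                           uint8_t *min_out, uint8_t *max_out)
{
    uint8_t lo = 255, hi = 0;
    for (int y = 0; y < 4; y++, p += stride) {
        /* inner loop unrolled by hand: 4 samples per row */
        uint8_t a = p[0], b = p[1], c = p[2], d = p[3];
        if (a < lo) lo = a;  if (a > hi) hi = a;
        if (b < lo) lo = b;  if (b > hi) hi = b;
        if (c < lo) lo = c;  if (c > hi) hi = c;
        if (d < lo) lo = d;  if (d > hi) hi = d;
    }
    *min_out = lo;
    *max_out = hi;
}
```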
Now that I think of it : merging tiles into a larger one IS possible. More on this later...
Packet size : limited to 1024 bits, so the size field needs only 10 bits. Code 0000000000 means 1024, which is simpler than storing size−1 and adding +1 at decode. Empty packets make no sense, right ?
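In code, the convention is just a wrap-around on the encoder side and one test on the decoder side. The sketch below only illustrates the mapping (the actual bit packing is not shown and the function names are made up):

```c
/* A minimal sketch of the 10-bit size field convention (assumed helper
   names, not the project's bitstream code): sizes 1..1023 are stored
   as-is, and the otherwise-useless code 0 stands for 1024. */
#include <stdint.h>
#include <assert.h>

static uint32_t size_to_field(uint32_t size_bits)   /* 1..1024 -> 0..1023 */
{
    assert(size_bits >= 1 && size_bits <= 1024);
    return size_bits & 1023;          /* 1024 wraps to 0, others unchanged */
}

static uint32_t field_to_size(uint32_t field)        /* 0..1023 -> 1..1024 */
{
    return field ? field : 1024;      /* 0 decodes as the full 1024 bits */
}
```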
Progressive encoding and decoding : this requires significantly more memory and temporary storage, but is more resilient. And during decoding, if a sub-block is altered, the whole block can be filled with the average of the min and max values...
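As a rough sketch of that concealment step (assuming the decoder already knows the tile's min and max from the parent level; the interface below is hypothetical):

```c
/* A rough sketch of the concealment idea (assumed interface, not the
   actual decoder): when a sub-block can't be decoded, paint the whole
   tile with the midpoint of its known min/max bounds. */
#include <stdint.h>

static void conceal_tile(uint8_t *tile, int stride, int side,
                         uint8_t min_val, uint8_t max_val)
{
    uint8_t fill = (uint8_t)((min_val + max_val) / 2);  /* average of bounds */
    for (int y = 0; y < side; y++)
        for (int x = 0; x < side; x++)
            tile[y * stride + x] = fill;
}
```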
20171028 : how to save one flag bit ? In the parent tile, if the limits are identical (or sufficiently close) for the 2×2 or 4×4 sub-tiles, then the subtiles can switch sizes to 8×8 or 16×16. Since the encoder and decoder both see the same limits, they reach the same decision and there is no need for an explicit, space-wasting flag...
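One possible reading of that rule, as a sketch only (the threshold, the interface and the exact comparison are assumptions, not a frozen part of the format):

```c
/* A sketch of the implicit size-switch decision (assumed rule and
   threshold): if the sub-tiles of a parent have (nearly) identical limits,
   both encoder and decoder conclude, from data they already share, that
   the larger subtile size should be used. No flag bit is transmitted. */
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>

#define CLOSE_ENOUGH 2   /* hypothetical "sufficiently close" threshold */

typedef struct { uint8_t min, max; } limits_t;

static bool merge_to_larger_tiles(const limits_t *sub, int n)
{
    for (int i = 1; i < n; i++) {
        if (abs(sub[i].min - sub[0].min) > CLOSE_ENOUGH) return false;
        if (abs(sub[i].max - sub[0].max) > CLOSE_ENOUGH) return false;
    }
    return true;  /* limits agree: switch to the larger subtile size */
}
```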