While LLMs have been used for… a lot, this seems like one use where they're not only reliable but appear to outperform existing image compression methods. Being able to cram more data into less space tends to lead to interesting developments, so I'll be keeping my eye on this.
What do you guys think? Does it deserve less hype than I'm giving it? What kind of security holes do you think this could open?
An example of a compression algorithm that does support tuning its parameters beforehand is zstd: you can train it on a sample corpus to produce a dictionary that both sides share before any data is exchanged.
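A rough sketch of what that looks like with the python-zstandard bindings (`pip install zstandard`); the sample records here are made up, and the trained dictionary has to be shared out of band:

```python
# Sketch of zstd's pre-trained dictionary mode, via python-zstandard.
# The sample records are invented for illustration; the point is that
# the dictionary is agreed on ahead of time, before any data is sent.
import zstandard

# Many small, similar records: the case where a shared dictionary helps
# most, since each record alone is too short for zstd to find repetition.
samples = [
    f'{{"user": "user{i}", "event": "e{i % 7}", "ts": {1700000000 + i}}}'.encode()
    for i in range(2000)
]

# Train a dictionary (capped at 16 KiB here) on the sample corpus.
# Training needs a reasonably large, varied sample set to succeed.
dict_data = zstandard.train_dictionary(16 * 1024, samples)

# Compressor and decompressor must hold the exact same dictionary.
cctx = zstandard.ZstdCompressor(dict_data=dict_data)
dctx = zstandard.ZstdDecompressor(dict_data=dict_data)

record = b'{"user": "user1234", "event": "e3", "ts": 1700001234}'
compressed = cctx.compress(record)
assert dctx.decompress(compressed) == record
print(f"{len(record)} bytes -> {len(compressed)} bytes with the shared dictionary")
```

The dictionary plays the same role as the "pre-shared dataset" mentioned below: both ends pay for the shared knowledge once, up front, instead of inside every message.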
Even if something isn’t in a pre-shared dataset, I wonder if a sufficiently advanced LLM might be able to do well at compressing predictable but non-repeating data, such as “abc, bcd, cde, […]”.
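That's essentially what LLM-based compressors exploit: the model's next-token predictions feed an arithmetic coder, so anything predictable gets cheap regardless of whether it literally repeats. Here's a toy sketch of the idea, with a hand-rolled one-line "model" standing in for the LLM and a simpler hit/miss transform standing in for arithmetic coding (everything here is invented for illustration):

```python
# Toy illustration (no actual LLM): compress predictable-but-non-repeating
# data by replacing each character the model predicts correctly with a
# 0 byte, then entropy-coding the result. A real LLM compressor does the
# fancier version -- arithmetic coding against the model's next-token
# probabilities -- but the principle is the same: prediction, not
# literal repetition, is what gets exploited.
import zlib

def predict(context: str) -> str:
    """Hand-rolled stand-in for a model: guess the alphabet run continues."""
    if context and context[-1].isalpha():
        return chr(ord(context[-1]) + 1)
    return "a"

def model_transform(text: str) -> bytes:
    """Emit 0 on a correct prediction, the literal byte otherwise.

    Invertible: a decoder holding the same model replays predict() and
    substitutes its guess wherever it sees a 0 byte.
    """
    return bytes(
        0 if predict(text[:i]) == ch else ord(ch)
        for i, ch in enumerate(text)
    )

# "abc, bcd, cde, ..." -- predictable, but no triple ever repeats.
data = ", ".join(
    "".join(chr(ord("a") + i + j) for j in range(3)) for i in range(20)
)

transformed = model_transform(data)
print(f"model predicted {transformed.count(0)}/{len(data)} characters")
print(len(zlib.compress(data.encode())), "bytes: zlib alone")
print(len(zlib.compress(transformed)), "bytes: zlib after the model transform")
```

The stand-in model only knows about alphabetic runs, so the separators and each run's first character still cost full bytes; the bet in the comment above is that a sufficiently advanced LLM would pick up the ", " structure, the incrementing run starts, and far weirder patterns too.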