Oldie-but-goodie article with charts comparing webp, jpegxl, avif, jpeg, etc. AVIF is SLOW
Wow. Nice. Big improvement if JPEG and PNG can be replaced by one codec.
Works for me with Qubes OS.
This is in jest, but those are my pain points: the AMD ThinkPad I have can't run it, and the Intel one melts YubiKeys when decoding h264 video. The default lock screen can't read capital letters from the YubiKey's static password entry. Qubes caters to a certain kind of user; I really wish they could get enough money to cater to more use cases. It is not difficult to use if it works for you.
> Do you hate using most hardware?
Nobody uses "most hardware". If you're unlucky with your hardware, then it's a problem. Or you can specifically buy hardware that works with the OS you want.
> Do you like using Xorg?
What's wrong with Xorg?
It's slow for tasks requiring GPU, but allowing GPU for chosen, trusted VMs is planned: https://github.com/QubesOS/qubes-issues/issues/8552
Note that in that figure the formats are compared at the same SSIMULACRA2 score, not at the same file size. In the "very low quality" category, JPEG uses ~0.4 bpp (bits per pixel), while JPEG-XL and AVIF use ~0.13 bpp and ~0.1 bpp, respectively, so JPEG is given roughly 3 to 4 times as much space to work with. In the "med-low quality" category, JPEG-XL and AVIF use around 0.4 bpp, so perhaps you should compare the "very low quality" JPEG with the "med-low quality" JPEG-XL and AVIF.
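To put rough numbers on it (my own back-of-the-envelope illustration; the figure doesn't state a resolution, so assume a 1920x1080 image):

    fn main() {
        // bits per pixel -> approximate file size for an assumed 1920x1080 image
        let pixels = 1920u64 * 1080;
        for (codec, bpp) in [("JPEG", 0.40), ("JPEG-XL", 0.13), ("AVIF", 0.10)] {
            let bytes = pixels as f64 * bpp / 8.0;
            println!("{codec}: ~{:.0} KiB", bytes / 1024.0);
        }
        // Prints roughly: JPEG ~101 KiB, JPEG-XL ~33 KiB, AVIF ~25 KiB
    }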
After reading your comment, I assumed you had missed the bpp difference. Please excuse me if I assumed incorrectly.
If the encoder has obvious problems, it's not a big deal, but it doesn't bode well for the decoder.
Warning: http
That's not a great bar since both of them showed up around the same time. And importantly JXL hits many use cases that AVIF doesn't.
> while being written in an unsafe language
They put little emphasis on that part when they were rejecting JXL. If they wanted to call for a safer implementation they could have done that.
Concerns about the implementation only came up after years of pushback forced Google to reconsider.
I think for most modern software it's difficult to name the creator, but if you had to for webp, it would be hard to argue that it's anyone but Jyrki Alakuijala. He is in fact one of the co-creators of jpegxl and the person backing the long-term support of the Rust jxl-rs implementation. So I'm not even going to ask for a source here, because it's just not true.
No, memory safety is not security. Rust's memory guarantees eliminate some issues, but they also create a dangerous overconfidence: devs treat the compiler as a security audit and skip the hard work of threat modeling.
A vigilant C programmer who manually validates everything and uses the available tools at their disposal is less risky than a complacent Rust programmer who blindly trusts the language.
I agree with this. But for a component whose job is to parse data and produce pixels, the security worries I have are memory ones. It's not implementing a permissions model or anything where design and logic are really important. The security holes an image codec would introduce are the sort where a buffer overrun gives an execution primitive (etc.).
You can get an awful lot done very quickly in C if you aren't bothered about security - and traditionally, most of the profession has done exactly that.
What about against a vigilant Rust programmer who also manually validates everything and uses the available tools at their disposal?
So, a fairy-tale character?
https://github.com/search?q=repo%3Alibjxl%2Fjxl-rs%20unsafe&...
And my discovery (which basically anyone could have told me beforehand) was that ... "unsafe" rust is not really that different from regular rust. It lets you dereference pointers (which is not a particularly unusual operation in many other languages) and call some functions that need extra care. Usually the presence of "unsafe" really just means that you needed to interface with foreign functions or hardware or something.
This is all to say: implying that mere presence of an "unsafe" keyword is a sign that code is insecure is very, very silly.
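For anyone who hasn't written Rust, here's a toy example (mine, not taken from jxl-rs) of what the keyword actually gates:

    fn main() {
        let x: u32 = 42;
        let p: *const u32 = &x; // creating a raw pointer is ordinary safe code

        // Dereferencing it is one of the few operations gated behind `unsafe`;
        // the borrow checker and type system still apply to everything else.
        let y = unsafe { *p };
        assert_eq!(y, 42);
    }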
JXL is not yet widely supported, so I cannot really use it (videogame maps), but I hope its performance is similar to WebP with better quality, for the future.
I also have both compiled with -O3 and -march=znver2 in GCC (same for rav1e's RUSTFLAGS) through my Gentoo profile.
https://apps.microsoft.com/detail/9MZPRTH5C0TB?hl=en-us&gl=U...
Affinity supports it. Photoshop supports it. Microsoft Photos supports it. Gimp supports it. Apple has had systemwide support for it since iOS 17+ / macOS 12+, including in Safari and basically any app that uses the system image functions.
Chromium isn't on the bleeding edge here. They actually were when it first came out, but then retreated and waited, and now they're back again.
WhatsApp doesn't even support WebP though. Hopefully, if they ever get around to adding WebP, they'll throw JXL in, too.
There seems to be some support there, though I tested on iOS 26.
I wonder if this new implementation could be extended to incorporate support for the older JPEG format, and whether the total code size could then be reduced.
It is at least a very good transcoding target for the web, but it genuinely replaces many other formats in a way where the original source file can more or less be regenerated.
Let's say you want to store images losslessly. This means you won't tolerate loss of data, which means you don't want to risk it by using a codec that will compress the image lossily if you forget to enable a setting.
With PNG there is no way to accidentally make it lossy, which feels a lot safer for cases where you want lossless compression.
If you want a robust lossless workflow, PNG isn't the answer. Automating the fiddly parts and validating that the automation does what you want is the answer.
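For example, a conversion step in such a workflow can be followed by a check that decodes both files and asserts the pixels are bit-identical, so a silently lossy step fails loudly. A minimal sketch, assuming the Rust `image` crate and made-up file names:

    use image::open;

    // Assert that `converted` decodes to exactly the same pixels as `original`.
    fn assert_pixels_identical(original: &str, converted: &str) {
        let a = open(original).expect("decode original");
        let b = open(converted).expect("decode converted");

        // Compare at 16 bits per channel so an accidental 16-bit -> 8-bit
        // reduction also shows up as a difference.
        let (a, b) = (a.to_rgba16(), b.to_rgba16());
        assert_eq!(a.dimensions(), b.dimensions(), "dimensions changed");
        assert!(a.into_raw() == b.into_raw(), "pixel data changed");
    }

    fn main() {
        assert_pixels_identical("photo_original.png", "photo_converted.png");
    }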
16-bit PNG files can easily accidentally be reduced to 8-bit, which is of course a lossy operation. Animated PNG files can easily get converted into a still image (keeping only the first frame). CMYK images will have to be converted to RGB when saving them as PNG, which is also a lossy operation. It can happen that an image gets created as or converted to JPEG and then gets saved as PNG - which of course is a bad and lossy workflow, but it does happen.
So I don't agree that with PNG there is no way to accidentally make it lossy.
In any case: lossless or lossy is not a property of a format, but of a workflow. For keeping track of provenance information and workflow history, I would recommend looking into JPEG Trust / C2PA, which is a way to embed as metadata what happened to an image since it was captured/generated. Relying on the choice of image format for this is fragile and doesn't allow expressing the nuances, since reality is more complicated than just a binary "lossless or lossy".
> Specifically for JPEG files, the default cjxl behavior is to apply lossless recompression and the default djxl behavior is to reconstruct the original JPEG file (when the extension of the output file is .jpg).
You're right, however, that you do need to be careful and use the reference codec package for this, as tools like ImageMagick create loss during the decoding of the JPEG into pixels (https://github.com/ImageMagick/ImageMagick/discussions/6046) and ImageMagick sets quality to 92 by default. But perhaps that's something we can change.
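A cheap way to validate that round trip in a pipeline (my own sketch; it assumes cjxl and djxl are on the PATH, relies on their default behavior as described above, and the file names are just examples) is to recompress, reconstruct, and byte-compare:

    use std::process::Command;

    fn main() {
        // JPEG -> JXL (lossless recompression by default for JPEG input),
        // then JXL -> JPEG (reconstructs the original when output is .jpg).
        for (cmd, args) in [
            ("cjxl", ["photo.jpg", "photo.jxl"]),
            ("djxl", ["photo.jxl", "photo_reconstructed.jpg"]),
        ] {
            let status = Command::new(cmd).args(args).status().expect("run codec");
            assert!(status.success(), "{cmd} failed");
        }

        let original = std::fs::read("photo.jpg").expect("read original");
        let reconstructed = std::fs::read("photo_reconstructed.jpg").expect("read reconstruction");
        assert!(original == reconstructed, "round trip was not bit-exact");
    }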
Browser support for WebP is excellent now. The last browser to add it was Safari 14, on September 16, 2020: https://caniuse.com/webp
It got into Windows 10 1809 in October 2018, and into macOS Big Sur in November 2020.
Wikipedia has a great list of popular software that supports it: https://en.wikipedia.org/wiki/WebP#Graphics_software
Edit: After reading the comments, this doesn't seem to open in Photos App.
One customer of mine (fashion) has over 700k images in their DAM, and about 0.5% cannot be converted to webp at all using libwebp. They can without problem be converted to jpeg, png, and avif.
Certain pixel colour combinations in the source image appear to trip the algorithm to such a degree that the encoder will only produce a black image.
We know this because we have been able to encode the images by (in pure frustration) manually brute-forcing it: moving a black square across different locations of the source image and then trying to encode again. Suddenly it will work.
Images are pretty much always exported from Adobe, often smaller than 3000x3000 pixels. Images from the same camera, same size, same photo session, and same export batch will work, and then suddenly one out of a few hundred may come out black, and only the webp one, not the other formats; the rest of the photos will work for all formats.
A more mathematically inclined colleague tried to have a look at the implementation once, but was unable to figure it out because they could apparently not find a good written spec on how the encoder is supposed to work.
[0] https://developers.google.com/speed/webp/faq#what_is_the_max...
But I fully realize, there are vanishingly few cases with similar constraints.
From a quick look at various "benchmarks", JPEG XL seems to be flat-out better than WebP in both compression speed and size, so why has there been such reluctance from Chromium to adopt it? Are there WebP benefits I'm missing?
My only experience with WebP has been downloading what is nominally a `.png` file but then being told "WebP is not supported" by some software when I try to open it.
Also, from a security perspective, the reference implementation of JPEG-XL isn't great. It's over a hundred kLoC of C++, and given the public support for memory safety by both Google and Mozilla, it would be extremely embarrassing if a security vulnerability in libjxl led to a zero-click zero-day in either Chrome or Firefox.
The timing is probably a sign that Chrome considers the Rust implementation of JPEG-XL to be mature enough (or at least heading in that direction) to start kicking the tires.
I agree with the second part (useless hero images at the top of every post demonstrate it), but not necessarily the first. JPEG is supported pretty much everywhere images are, and it’s the de facto default format for pictures. Most people won’t even know what format they’re using, let alone that they could compress it or use another one. In the words of Hank Hill:
> Do I look like I know what a JPEG is? I just want a picture of a god dang hot dog.
* CNN (cnn.com): News-related photos on their front page
* Reddit (www.reddit.com): User-provided images uploaded to their internal image hosting
* Amazon (amazon.com): Product categories on the front page (product images are in WebP)
I wouldn't expect to see a lot of WebP on personal homepages or old-style forums, but if bandwidth costs were a meaningful budget line item then I would expect to see ~100% adoption of WebP or AVIF for any image that gets recompressed by a publishing pipeline.
I can completely see why the default answer to "should we add x" should be no unless there is a really good reason.
- jxl is better at high bpp, best in lossless mode
The issue was the use of C++ instead of Rust or WUFFS (that Chromium uses for a lot of formats).
The decode speed benchmarks are misleading. WebP has been hardware accelerated since 2013 in Android and 2020 in Apple devices. Due to existing hardware capabilities, real users will _always_ experience better performance and battery life with webp.
JXL is more about future-proofing. Bit depth, Wide gamut HDR, Progressive decoding, Animation, Transparency, etc.
JXL does flat-out beat AVIF (the image codec, not video) today. AVIF also pretty much doesn't have hardware decoding in modern phones yet. It makes more sense to invest NOW in JXL than in AVIF.
For what people use today, unfortunately there is no significant case to beat WebP with its existing momentum. The size vs. perceived-quality tradeoffs are not significantly different. For users, things will get worse (worse decode speeds and battery life due to lack of hardware decode) before they get better. That can take many years, because more features in JXL also means translating them to hardware die space will take more time. Just the software side of things is only now picking up.
But for what we all need, it's really necessary to start the JXL journey now.
Extra data transfer costs performance and battery life too.
so webp > jpegxl > png
What you're referring to is pngquant, which reduces colors (with dithering) to allow the PNG to compress to a smaller size.
So the “loss” is happening independent of the format.
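To make that concrete, here's a crude stand-in for the idea (my sketch using the Rust `image` crate; real pngquant does proper palette quantization plus dithering, this just posterizes each channel): the pixel data is already changed before any encoder sees it, so the same loss would occur no matter which format you save to.

    use image::open;

    fn main() {
        // Throw away the low 4 bits of every channel before encoding.
        // The information loss happens in the pixels, independent of the format.
        let mut img = open("input.png").expect("decode").into_rgba8();
        for p in img.pixels_mut() {
            for c in p.0.iter_mut() {
                *c &= 0xF0;
            }
        }
        img.save("posterized.png").expect("encode");
    }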
https://blog.cloudflare.com/uncovering-the-hidden-webp-vulne...
FWIW webp came from the same "research group in google switzerland" that later developed jpegxl.
The funny thing is all the places where Google's own ecosystem has ignored WebP. E.g., Go's golang.org/x/image module has a WebP decoder, but all of the encoders you'll find are CGo bindings to libwebp.
>>>
- Progressive decoding for improved perceived loading performance
- Support for wide color gamut, HDR, and high bit depth
- Animation support

(I don't know if any of this is true, but it sounds funny...)