Either both should have the magnifying glass or neither. This just makes it hard to see the difference.
Reduction? Shouldn't the tool be improving the quality of the image? If it's reducing the quality, why do it?
> The purpose of the zoomed-in before picture is to show a typical pixel misalignment.
Okay, but what does this supposed "misalignment" look like in the picture? Would I even notice it? If not, does it matter? Did they just zoom in and draw a misaligned grid over the zoomed-in image? Or are the grid fault lines visible in the gestalt?
> Aligned pixels can be easily imagined.
Everything can be easily imagined. Misaligned pixels can be imagined. They could just write "our processed images look better" and let me imagine how much nicer they are. The purpose of a comparison is to prove that they are nicer/better/crisper, whatever they want to claim.
People who are the target audience for this tool already know.
>Would I even notice it?
Yes.
>The purpose of a comparison is to prove that they are nicer/better/crisper whatever they want to claim.
They don't need to prove it to their target users. They already know the problem (for which several tools exist).
The exact way that pixels are misaligned is a feature of the specific AI models that generated the almost-pixel art.
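To make "misaligned" concrete: cleanup tools in this space typically search for the grid scale and phase that leave the least color variation inside each candidate cell; if the best-scoring offset is nonzero, the model's implied pixel grid doesn't line up with the output raster. A minimal brute-force sketch of that search (horizontal axis of a grayscale buffer only; the function name and buffer layout are my own illustration, not code from any tool mentioned here):

```rust
/// Estimate the horizontal cell size and offset of an upscaled
/// "pixel art" image given as a row-major grayscale buffer.
/// Sketch only: brute force, one axis, no tie-breaking.
fn estimate_grid_x(img: &[u8], width: usize, height: usize) -> (usize, usize) {
    let mut best = (1, 0);
    let mut best_score = f64::INFINITY;
    for scale in 2..=32 {
        for offset in 0..scale {
            // Sum edge energy at columns the candidate grid claims are
            // *inside* a cell; the true grid leaves cells nearly flat.
            let (mut energy, mut count) = (0u64, 0u64);
            for y in 0..height {
                for x in 1..width {
                    if (x + scale - offset) % scale != 0 {
                        let a = img[y * width + x] as i64;
                        let b = img[y * width + x - 1] as i64;
                        energy += (a - b).unsigned_abs();
                        count += 1;
                    }
                }
            }
            // Caveat: divisors of the true scale can tie with it;
            // real tools prefer the largest near-optimal scale.
            let score = energy as f64 / count as f64;
            if score < best_score {
                best_score = score;
                best = (scale, offset);
            }
        }
    }
    best
}
```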
I think it'd be worth calling out the differences.
Maybe it's the inconsistent lighting/shadows?
Maybe a pixel artist has the proper words to explain the issues.
1 - AI just tries to compress too many details into too few pixels.
When artists create pixel art, they usually add details along the way, and only the important ones, because otherwise it will look like rubbish on some screens.
It's also easier to, e.g., add different hats, heads, or weapons to the same body. AI-generated sprites are always too unique.
2 - AI tries to mimic realistic poses that look like the art is supposed to be animated in 3D.
For a real game, say an isometric tactical game, you'll never make tiles larger than 64x64 because of how much labour they take to animate. Each animation at 8 fps takes hours of work.
So pixel art is usually either high-fidelity and static or low-fi and animated in very basic ways.
Generated pixel art is, for now, in an 80-90% done state. To use it in production, the remaining issues have to be fixed, which seem to be the palette and some semantic issues. If you only generate small parts of the big picture with AI, it will be perfectly usable.
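A minimal sketch of the palette half of that fix: snap every pixel to the nearest color in a fixed palette. The function names and the use of plain RGB distance are my assumptions; real tools may quantize in a perceptual space like Oklab instead:

```rust
/// Squared Euclidean distance between two RGB colors.
fn dist2(a: [u8; 3], b: [u8; 3]) -> i32 {
    a.iter()
        .zip(b.iter())
        .map(|(&x, &y)| {
            let d = x as i32 - y as i32;
            d * d
        })
        .sum()
}

/// Snap each pixel to its nearest palette entry.
/// Illustrative sketch, not any particular tool's API.
fn snap_to_palette(pixels: &mut [[u8; 3]], palette: &[[u8; 3]]) {
    for px in pixels.iter_mut() {
        let original = *px;
        *px = *palette
            .iter()
            .min_by_key(|c| dist2(**c, original))
            .expect("palette must be non-empty");
    }
}
```

Deriving the reduced palette itself (e.g. median cut or k-means down to 16-32 colors) is the other half of the problem.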
Nano Banana beats it on many other dimensions, but this is one thing that gpt-image-1 usually does much better.
Sounds like a good use case for fixing this problem at the model layer: an image-gen model trained to make pixel-perfect art.
Are you talking about the LoRA by LuisaP?
Somewhat ironically, that LoRA's showcase images themselves exhibit the exact issues (non-square pixels, much higher color depth than pixel art, etc.) that stuff like this project / unfake.js / etc. are designed to fix.
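For reference, once the true grid scale is known (see the grid-search sketch upthread), the usual fix for non-square or smeared pixels is to collapse each cell to its dominant color, which also crushes the color depth back down. A sketch under those assumptions; the names are mine, not unfake.js's API:

```rust
use std::collections::HashMap;

/// Collapse an upscaled image back to its true resolution by taking
/// the most frequent color in each `scale x scale` cell. The modal
/// color is robust against the soft, antialiased cell borders that
/// AI upscales tend to produce. Sketch only.
fn downscale_modal(
    pixels: &[[u8; 3]],
    width: usize,
    height: usize,
    scale: usize,
) -> Vec<[u8; 3]> {
    let (out_w, out_h) = (width / scale, height / scale);
    let mut out = Vec::with_capacity(out_w * out_h);
    for cy in 0..out_h {
        for cx in 0..out_w {
            let mut counts: HashMap<[u8; 3], u32> = HashMap::new();
            for dy in 0..scale {
                for dx in 0..scale {
                    let p = pixels[(cy * scale + dy) * width + (cx * scale + dx)];
                    *counts.entry(p).or_insert(0) += 1;
                }
            }
            let modal = counts
                .into_iter()
                .max_by_key(|&(_, n)| n)
                .expect("scale >= 1")
                .0;
            out.push(modal);
        }
    }
    out
}
```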
Rust is a programming language.
People interested in Rust may find a tool written in Rust relevant to their interests where they otherwise might not.
On the OpenAI side, the gpt-image-1 model has actually had the ability to produce true alpha-transparent images for a while now. Too bad that, quality-wise, it's lagging pretty badly behind other models.
Or is it purely because the models just don't understand pixel art?