OP, were you inspired by this LLM comparison tool?

https://whatllm.vercel.app

The tables are very similar - though you've added a custom calculator which is a nice touch.

Also, for the Versus Comparison, it might be nice to have a checkbox that, when checked, highlights each LLM's superlative fields at a glance.

Gcam · 2 weeks ago
Data in this tool is from https://artificialanalysis.ai/ on October 13, 2024, and so is a little out of date.

This page has up-to-date information on all models and providers: https://artificialanalysis.ai/leaderboards/providers. On other pages we also cover Speech to Text, Text to Speech, Text to Image, and Text to Video.

Note I'm one of the creators of Artificial Analysis.

xnx · 2 weeks ago
Thanks for sharing. That's a better tool.
Both seem to have great value. Some information is missing from Vercel's tables.
I like the idea of more comparisons of models. Are there plans to add independent analyses of these models or is it only an aggregation of input limits?

How do you see this differing from or adding to other analyses such as:

https://artificialanalysis.ai

https://huggingface.co/spaces/TTS-AGI/TTS-Arena

https://huggingface.co/spaces/hf-audio/open_asr_leaderboard

https://huggingface.co/spaces/TIGER-Lab/GenAI-Arena

Great work on all the aggregation. The website is nice to navigate.

botro · 2 weeks ago
I made https://aimodelreview.com/ to compare the outputs of LLMs over a variety of prompts and categories, allowing a side-by-side comparison between them. I ran each prompt four times at different temperature values, and that's available as a toggle.

I was going to add reviews of each model but ran out of steam. Some users have messaged me saying the comparisons are still helpful to them in getting a sense of how different models respond to the same prompt and how temperature affects the same model's output on the same prompt.

Hey, this is pretty insightful! I wonder if, in the course of researching to build this website, you reached any conclusions as to which AI assistant is currently ahead.
rtsil · 2 weeks ago
I can confirm, it's still very helpful, thank you!

The Gradio UI looks ugly IMO; that's why I used shadcn and Next.js to make the website look good.

I'll try to make it as user-friendly as possible. Most of the websites are ugly + too technical.

I want to point out you dodged the data question, and there's a reason for it.

I like your work visually at first glance, and god knows you're right about Gradio, even if it's irrelevant.

But peddling extremely limited, out-of-date versions of other people's data trumps that, especially with this tagline: "A website to compare every AI model: LLMs, TTSs, STTs."

It is a handful of LLMs, then one TTS model, then one STT model, both with 0 data. And it's worth pointing out, since this endeavor is motivated by design trumping all: all the columns are for LLM data.

Now imagine going one step further and actually running a prompt across every AI model and showing you the best answer and the AI model that generated it.

Who decides what the best answer is?

The user who runs the prompt?

Those tools exist; they do not need to be imagined. Look into the related comments. Also, they do little but increase the labor of getting an answer. It's not exactly an improvement of AI for the user to spend more time reviewing AI answers.
Great! I wish there was a "bang for buck" value: some way to know the cheapest model I could use for reliably creating structured data from unstructured text. I'm using gpt-4o-mini, which is cheap, but I wouldn't know if anything cheaper could do the job too.
jampa · 2 weeks ago
Take a look at Gemini 1.5 Flash. I had videos I needed to turn into structured notes, and the result was satisfactory (even better than Gemini 1.5 Pro, for some reason). https://jampauchoa.substack.com/i/151329856/ai-studio.

According to this website, the cost is half that of gpt-4o mini: 0.15 vs. 0.07 per 1M tokens.

Seconding Gemini Flash for structured outputs. I have had some quite large jobs I've been happy with.
I haven't found a model at the price point of GPT-4o mini that is as capable. Based on the hype surrounding Llama 3.3 70B, it might be that one, though. On Deepinfra, input tokens are more expensive, but output tokens are cheaper, so I would say they are probably equivalent in price.

Also, best bang for the buck is very subjective, since one person might only need it to work for one use case while somebody else needs it for more.
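Whether two price schedules are "probably equivalent" depends entirely on your input/output token mix, so it's worth computing the blended cost for your actual workload. A minimal sketch, where all prices and token counts are illustrative placeholders (not quotes from any provider's pricing page):

```python
def blended_cost(input_price_per_m: float, output_price_per_m: float,
                 input_tokens: int, output_tokens: int) -> float:
    """Total cost in dollars for one request, given USD-per-1M-token prices."""
    return (input_price_per_m * input_tokens
            + output_price_per_m * output_tokens) / 1_000_000

# A typical structured-extraction job: long input, short JSON output.
# These counts and prices are assumptions for illustration only.
job = dict(input_tokens=8_000, output_tokens=500)

model_a = blended_cost(0.15, 0.60, **job)  # hypothetical "cheap input" model
model_b = blended_cost(0.23, 0.40, **job)  # hypothetical "cheap output" model

print(f"model A: ${model_a:.6f}/request, model B: ${model_b:.6f}/request")
```

For an extraction-heavy mix (much more input than output), the input price dominates, which is why two models with crossed input/output prices can still come out far apart in practice.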

I always plug openrouter.ai for making cross-model comparisons. It's my general go-to for random stuff. (I am not affiliated, just a user.)
I love the idea of openrouter. I hadn't realized until recently though that you don't necessarily know what quantization a certain provider is running. And of course context size can vary widely from provider to provider for the same model. This blog post had great food for thought https://aider.chat/2024/11/21/quantization.html
To expand a little, some providers may apply more aggressive optimization in periods of high load.
wslh · 2 weeks ago
I'd like to share a personal perspective/rant on AI that might resonate with others: like many, I'm incredibly excited about this AI moment. The urge to dive headfirst into the field and contribute is natural; after all, it's the frontier of innovation right now.

But I think this moment mirrors financial markets during times of frenzy. When markets are volatile, one common piece of advice is to “wait and see”. Similarly, in AI, so many brilliant minds and organizations are racing to create groundbreaking innovations. Often, what you're envisioning as your next big project might already be happening, or will soon be, somewhere else in the world.

Adopting a “wait and see” strategy could be surprisingly effective. Instead of rushing in, let the dust settle, observe trends, and focus on leveraging what emerges. In a way, the entire AI ecosystem is working for you: building the foundations for your next big idea.

That said, this doesn't mean you can't integrate the state of the art into your own (working) products and services.

Your proposal makes a lot of sense. I assume a number of companies are integrating SOTA models into their products.

That being said, there is no free lunch: when you're doing this, you're more reactive than proactive. You minimize risk, but you also lose any chance to have a stake [1] in the few survivors that will remain and be extremely valuable.

Do this long enough and you'll have no idea what people are talking about in the field. Watch the latest Dwarkesh Patel episode to get a sense of what I am talking about.

[1] stake to be understood broadly as: shares in a company, knowledge as an AI researcher, etc.

wslh · 2 weeks ago
Thank you for your thoughtful response! I completely agree that there's a tradeoff between being proactive and reactive in this kind of strategy: minimizing risk by waiting can mean missing out on opportunities to gain a broader "stake".

That said, my perspective focuses more on strategic timing than complete passivity. It's about staying engaged: understanding trends, staying informed, and preparing to act decisively when the right opportunity emerges. It's less about "waiting on the sidelines" and more about deliberate pacing, recognizing that it's not always necessary to be at the bleeding edge to create value.

I'll definitely check out Dwarkesh Patel’s latest episode. I assume it is the Gwern one, right? Thanks!

Tangent question: is there anything better on the desktop than ChatGPT's native client? I find it too simple to organize chats but I'm having a hard time evaluating the dozen or so apps (most are disguise for some company's API service). Any recommendations? macOS/Linux compatibility preferred.
Telosnex: native on every platform, and it also has web. Anthropic, OpenAI, Mistral, Groq, Gemini, and any local LLM on literally every platform. You can bring your own API keys, and the best search available. Pay as you go, with everything at cost if you pay $10/month; otherwise, free. Everything's stored in simple JSON.
There's https://www.typingmind.com/, which is local-only (no server) and built by an indie dev.
Personally I'm a Typing Mind user, but it got too slow and buggy with long chats. I ended up with BoltAI, which is a native Mac app, and found it very good after months of heavy use. I think it could also improve navigation coloring or iconography to help distinguish chats better, but it's my favorite so far.
I'm working on a native LLM client that is beautiful and fast[1], developed in Qt C++ and QML - so it can run on Windows, macOS, Linux (and mobile). Would love to get your feedback once it launches.

[1] https://rubymamistvalove.com/client.mp4

There are only two audio transcription models. Is this generally true? Are there no open-source ones, like Llama but for transcribing? Or is it just a small dataset on that site?
It looks like the site is only listing hosted models from major providers, not all models available on Hugging Face, civit.ai, etc. Looking at the image generation and chat lists, there are many more models on Hugging Face that are not listed.

See https://huggingface.co/models?pipeline_tag=automatic-speech-...

Note: Text to Speech and Audio Transcription/Automatic Speech Recognition models can be trained on the same data. They currently require training separately as the models are structured differently. One of the challenges is training time as the data can run into the hundreds of hours of audio.

Thank you both
There are lots and lots of models, covering various use cases (e.g., on-device, streaming/low-latency, specific languages). People somehow think OpenAI invented audio transcription with Whisper in 2022, when other models exist and have been used in production for decades (Whisper is the only one listed on that website).
Just saw that this was built for a hackathon. Huge kudos and congratulations!
Thank you! Although I wasn't able to win the hackathon, it was still a fun experience :)
Love the UI and table layout. Have you thought about showing the different VRAM requirements for models?
It would be great if lmarena leaderboard information would also appear, to compare performance vs. cost.
2 weeks ago
yep, will add this :)
xnx · 2 weeks ago
Nice resource. Almost too comprehensive for someone who doesn't know all the sub-version names. It would be great to have a column with the score from the lmarena leaderboard. Some prices are 0.00? Is there a page each row could link to for more detail?
Thank you! Some models have either N/A or 0.00; I found it is like that for the free models and ones that aren't available.

As for lmarena, I'll definitely add it; a lot of other people recommended it as well.

Over time I will make the website more descriptive and detailed!

And a link to the company page where one can use/subscribe to the model.
One thing that stands out playing with the sorting is that Google's Gemini claims to have a context window more than 10x that of most of its competition. Has anyone experimented with this to see if its useful context window is actually anything close to that?

In my own experiments with the chat models, they seem to lose the plot after about 10 replies unless constantly "refreshed", which is a tiny fraction of the supposed 128,000-token input length that 4o has. Does Gemini actually do something dramatically different, or is their multi-million-token context window pure marketing nonsense?

Per https://github.com/NVIDIA/RULER, results on benchmarks other than needle-in-a-haystack seem solid all the way to 128k.
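For context, the needle-in-a-haystack test that RULER extends is simple to sketch: bury one fact at a chosen depth in filler text and check whether the model retrieves it. Below is a minimal, hypothetical harness; the model call is abstracted as a plain callable (no real SDK is assumed), and the pass/fail check is a crude substring match:

```python
import random

def build_haystack(needle: str, total_words: int, depth: float, seed: int = 0) -> str:
    """Filler text with `needle` inserted at fractional position `depth` (0..1)."""
    rng = random.Random(seed)
    filler = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dogs"]
    words = [rng.choice(filler) for _ in range(total_words)]
    words.insert(int(depth * total_words), needle)
    return " ".join(words)

def probe(model, needle: str, question: str, total_words: int, depth: float) -> bool:
    """Crude pass/fail: does the model's answer contain the needle's key token?"""
    prompt = build_haystack(needle, total_words, depth) + "\n\n" + question
    return needle.split()[-1] in model(prompt)
```

Sweeping `total_words` and `depth` and plotting the pass rate gives the familiar recall heatmap; RULER's point is that models can ace this retrieval probe while degrading on harder long-context tasks.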
Thanks, this is exactly the kind of info I was hoping existed.
When they released it, they specifically focused on accurate recall across the context window. There are a bunch of demos of things like giving it a whole movie as input (a frame every N seconds plus the script or something) and asking for highly specific facts.

Anecdotally, I use NotebookLM a bit, and while that’s probably RAG plus large contexts (to be clear, this is a guess not based on inside knowledge), it seems very accurate.

What tactics do you use to refresh while using them?
I tend to use a sentence along these lines: "Give me a straightforward summary of what we discussed so far; someone who didn't read the above should understand the details. Don't be too verbose."

Then I just continue from there, or simply use this as a seed in another fresh chat.

I don't have a strategy that I like—it just amounts to having to say "you forgot about requirement X, try again keeping that in mind".
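The summarize-and-reseed tactic described above is easy to mechanize. A minimal sketch, where `complete` stands in for any chat-completion call (an assumed interface taking a message list and returning a string, not a real SDK), and the turn threshold and prompt wording are illustrative:

```python
SUMMARY_PROMPT = (
    "Give me a straightforward summary of what we discussed so far; "
    "someone who didn't read the above should understand the details. "
    "Don't be too verbose."
)

def refresh_if_needed(history: list, complete, max_turns: int = 10) -> list:
    """If the chat is too long, collapse it into a model-written summary seed."""
    if len(history) <= max_turns:
        return history
    summary = complete(history + [{"role": "user", "content": SUMMARY_PROMPT}])
    # Start a fresh context seeded with the summary, as described above.
    return [{"role": "user", "content": "Context from our earlier chat: " + summary}]
```

Calling `refresh_if_needed` before each turn keeps the active context short at the cost of losing detail the summary drops, which matches the "lose the plot after ~10 replies" experience this works around.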
You guys might also like http://llmprices.dev; it's similar, but automatically updated with the latest info every 24h.
One helpful addition would be Requests Per Minute (RPM), which varies wildly and is critical for streaming use cases -- especially with Bedrock where the quota is account wide.
Thanks for the comparison table! It would be great if the header were sticky so I don't get lost identifying which column is which.
These are hard to keep updated. I find they usually fall off. It would be cool to have one, but honestly, this one already doesn't even have 4o and pro on it, which it obviously would if it were being maintained. Updating a table shouldn't take days. It's like a one-minute event.
The reason 4o and pro don't exist is that they aren't available via API yet!

The website is updated, don't worry :)

Bigie · 2 weeks ago
I feel like the number of models is still a bit lacking, especially since many models made by Chinese companies are not represented, like speech-to-text ones.

As far as I know, there's Volcano Engine in China, which has impressive text-to-speech capabilities. Many local companies are using this model.

Hey this is great!

A small suggestion: a toggle to filter between "free" and hosted models.

The reason is, I'm obviously interested in seeing the cheaper models first, but I'm not interested in self-hosting the models that dominate the first chunk of results because they're "free".

This is missing... so many models... like most TTS and STT ones.

11labs, deepgram, etc.

tomp · 2 weeks ago
"every"

you're missing a lot

TTS: 11labs, PlayHT, Cartesia, iFLYTEK, AWS Polly, Deepgram Aura

STT: Deepgram (multiple models, including Whisper), Gladia Whisper, Soniox

just off the top of my head (it's my day job!)

Suggestions:

1. Maybe explain what Chat, Embedding, Image generation, Completion, Audio transcription, and TTS (Text to Speech) mean?

2. Put a running number on the left, or at least show the total?

mtkd · 2 weeks ago
It would possibly be further useful to have a release date column, license type, and whether it's EU-restricted, and also to right-align / comma-delimit those numeric cells.
Good idea, will look into adding this!
Logs emitted during the build, or test results, or metrics captured during the build (such as how long it took)... these can all themselves be build outputs.

I've got one where "deploying" means updating a few version strings and image references in a different repo. The "build" clones that repo, makes the changes in the necessary spots, and makes a commit. Yes, the side effect I want is that the commit gets pushed--which requires my SSH key, which is not a build input--but I sort of prefer doing that bit by hand.

It's weird that OpenAI has lower prices and Azure has higher prices for the same models. Can anyone explain?

BTW impressive idea and upvoted on PH as well.

Azure charges differently based on deployment zone/latency guarantees, OpenAI doesn't let you pick your zone so it's equivalent to the Global Standard deployment (which is the same cost).

[0] https://azure.microsoft.com/en-us/pricing/details/cognitive-...

tysm for the support!

OpenAI and Azure should be the same, it's weird that it shows it as different. I'll look into fixing this.

currently #2 on PH, any help would be appreciated!

This is interesting price-wise, but quality-wise, if you do not provide benchmark results, it's not that helpful a comparison.
Can you please consider adding sort options to the cost columns in the Pricing Calculator?
Would be great if it was possible to get to the page where the pricing was found to make it easier to use the model
It’s indeed quite intuitive to see the details of each AI model, but it feels a bit overwhelming with too much information.

I wonder if adding a chatbot might be a good idea. Users could ask specific questions based on their needs, and the bot could recommend the most suitable model. Perhaps this would add more value.

I was surprised: what is the model that costs the most per token? Luminous-Supreme-Control.
Some European safety AI company, haha. It's exactly what you would expect from Europe :)
DeepInfra prices are significantly better than what's listed for open-source models.
I'm missing the "IQ" column.
Thank you on behalf of my waifu!
NoZZz · 2 weeks ago
Stop feeding their machine.