But I'd also second the other comments, after going through the steps of shrinking a longer screen recording to under 2.5MB with acceptable quality while cropping a portion of the screen.
I needed a tutorial in addition to the built-in help pages to get it working.
It was almost fun to trial-and-error my way through combinations of quality and cropping options, but sure, it was time-consuming.
I have to say, I mostly like FFmpeg's approach. Anyone can build anything on top of it, like GUIs.
"Good" defaults can cause an explosion of complexity when providing many different options and allowing all technically feasible combinations.
There's also room for some kind of improved CLI I guess, but many possibilities always mean complex options. So this is probably easier said than done.
It does seem to have pretty good defaults in the MOV-to-MP4 transcoding case.
I recommend everyone ITT just use Handbrake (a GUI) unless they have extremely niche use cases. What's the point of using an LLM? You just need one person who knows ffmpeg better than you to write a GUI. And someone did. So use that.
If Handbrake doesn't solve your problem please just go to Stack Overflow. The LLM was trained there anyway, and your use case is not novel.
The main thing I do with ffmpeg is make highly compatible MP4s because some devices can't handle some MP4s.
ffmpeg -i input.mp4 -c:v libx264 -profile:v baseline -level 3.0 -pix_fmt yuv420p -movflags faststart output.mp4
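(For the curious: the baseline profile and yuv420p pixel format keep old or picky hardware decoders happy, and -movflags faststart moves the index to the front of the file so playback can start before the download finishes.)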
If I can make a Handbrake preset for that, it might save me a tiny bit of hassle.
Never had any issues since I switched to this particular workflow. Vegas (and I presume most editing software) is particularly anal about formats, especially when you need real-time previews.
You can always add some extra command line options if you need to. It's just much easier to work with a GUI when the system is as complex as ffmpeg.
I'd say another big tip is getting proper ffmpeg completion into your shell. That's helpful for seeing a list of all possible encoders, pixel formats, etc.
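For example, ffmpeg can introspect itself; these flags are stock:

ffmpeg -encoders              # list all available encoders
ffmpeg -pix_fmts              # list pixel formats
ffmpeg -filters               # list filters
ffmpeg -h encoder=libx264     # detailed help for a single encoder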
I also found that playing around with filters in mpv was a great way to learn the ffmpeg filter expression language!
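For instance, mpv's lavfi bridge accepts ordinary ffmpeg filter graphs, so you can preview a filter live before committing to a transcode (the input file name is just a placeholder):

mpv --vf=lavfi=[hue=s=0] input.mp4            # preview desaturation
mpv --vf=lavfi=[crop=640:480:0:0] input.mp4   # preview a crop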
I'd constrain the tool to only run "ffmpeg" and extract the options/parameters from the LLM instead.
> You write ffmpeg commands based on the description from the user. You should only respond with a command line command for ffmpeg, never any additional text. All responses should be a single line without any line breaks.
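A minimal sketch of that constraint, assuming the model's single-line reply lands in a variable LLM_REPLY (hypothetical), and being deliberately naive about quoted filenames containing spaces:

#!/bin/bash
# Treat the reply as argv for one fixed binary; never eval it as shell.
read -r -a argv <<< "$LLM_REPLY"
if [[ "${argv[0]:-}" != "ffmpeg" ]]; then
    echo "refusing: reply does not start with ffmpeg" >&2
    exit 1
fi
exec ffmpeg "${argv[@]:1}"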
I recently tried to get Claude 3.5 Sonnet to solve an FFmpeg problem (write a command to output 5 equally-time-spaced frames from a video) with some aggressive prompt engineering. The result seemed internally consistent, but I went down a rabbit hole trying to figure out why it didn't output anything: the LLMs assume an integer frames-per-second, which is definitely not the case in the real world!
https://beta.gitsense.com/?chats=197c53ab-86e9-43d3-92dd-df8...
Scroll to the bottom on the left window to see that Claude acknowledges that the command that DeepSeek produced was accurate. In the right window, you'll find the conversation I had with DeepSeek chat about all the commands.
I then asked all the models again if the DeepSeek-generated command was correct and they all said no. And when I asked them to compare all the "correct" commands, Sonnet and DeepSeek said Sonnet's was the accurate one:
https://beta.gitsense.com//?chat=47183567-c1a6-4ad5-babb-9bb...
That command did not work, but I got the impression that DeepSeek could probably get me a working solution, so after telling it the errors I kept getting, it eventually wrote a bash script for me that got 5 equally spaced frames.
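The script was along these lines (my reconstruction, not its exact output; input.mp4 is assumed, and seeking by timestamp sidesteps the integer-fps assumption entirely):

#!/bin/bash
# Grab 5 equally spaced frames by wall-clock time rather than frame number.
dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 input.mp4)
for i in 0 1 2 3 4; do
  t=$(echo "$dur * ($i + 0.5) / 5" | bc -l)   # midpoint of each fifth
  ffmpeg -ss "$t" -i input.mp4 -frames:v 1 -y "frame_$i.png"
done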
Long story short: changing the prompt probably won't be enough, and you will need to constantly shop around to see which LLM is most likely to give the correct response for the question you are asking.
At the least, I learnt a lot about how FFmpeg works.
A black swan for LLMs, in a sense.
This approach is broadly applicable to lots of domains besides FFmpeg. Very cool to see things moving in this direction.
HN and internet forums in general have a contagion of critique, where we mercilessly point out flaws and attempt to show our superiority. It's best to ignore them.
> I ask the LLM to build it. That way, by definition, the LLM has a built in understanding of how the system should work, because the LLM itself invented it.
I share the same belief, and as a rebuttal to EagnaIonat's comment: when you ask the LLM to create something, it finds the centroid of your request in its high-dimensional latent space. The output is congruent with what it knows and believes. So yes, the output is statistical, but it is also embedded in the model's own subspace. For code you have written independently of the LLM, that isn't necessarily true.
I think there are many ways we could test this, even in smaller models through constructed tests and reprojection of output programs.
It's like asking an OO programmer to come up with a purely functional solution: it would be hard. And if I then asked them to refactor and extend an existing purely functional program, it would end up broken.
Solutions have to exist in one's natural space; this is true for everyone.
xx ffmpeg video1.mp4 normalize audio without reencoding video to video2.mp4
And have sensible defaults, like auto-generating the output file name if it's missing, and defaulting to first showing the resulting command and its meaning, then waiting for user confirmation before executing.

% aider
[...]
> do this on command line: ffmpeg cat\ meets\ skunk.mp4 normalize audio without reencoding video to video2.mp4
To normalize the audio without re-encoding the video, we'll use ffmpeg's loudnorm filter and stream copy for video. Here's the command:
ffmpeg -i "cat meets skunk.mp4" -c:v copy -af loudnorm -c:a aac video2.mp4
This command:
• Uses -c:v copy to stream copy the video (no re-encoding)
• Applies the loudnorm filter to normalize audio
• Re-encodes only the audio using the AAC codec
• Outputs to video2.mp4
[...]
Run shell command? (Y)es/(N)o/(D)on't ask again [Yes]:
Just tell the computer what you want, and it figures out how to do it. Isn't that the dream?
I think the logical conclusion here is replacing the shell with GPT. It might not be a good idea — yet — but it's certainly possible already.
Check out this AI:
$ apt install cdecl
[ ... ]
After this operation, 62.5 kB of additional disk space will be used.
[ ... ]
$ cdecl
Type `help' or `?' for help
cdecl> declare foo as function (pointer to char) returning pointer to array 4 of pointer to function (double) returning double
double (*(*foo(char *))[4])(double )
Granted, this one has a very rigid syntax that doesn't allow for variation, but it could be made more flexible. If FFmpeg's command line bugged me badly enough, I'd write "ffdecl".
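An entirely hypothetical session, in the same spirit as cdecl:

$ ffdecl
ffdecl> extract the audio from input.mp4 as an mp3
ffmpeg -i input.mp4 -vn -c:a libmp3lame output.mp3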
That’s kind of the killer feature of an LLM. You don’t even need to have your fingers on the right place on the keyboard and it will parse gibberish correctly as long as it’s shifted consistently.
Programs should precisely define what their inputs are and loudly reject all else.
Moreover, for this misfeature, you have to use a cloud API, where your syntax is analyzed by some massive cluster, using scads of processing and memory resources.
We could have a natural language command line for FFMpeg requiring at most megabytes (probably just kilobytes) that would work on an air-gapped machine.
In the early '70s, the SHRDLU project achieved amazing chat interaction with symbolic processing, on the hardware available then. It was a far more impressive hack than LLMs, not just because it required relatively few resources, but also because its author could actually explain its responses and point to the responsible pieces of code behind them, which he designed.
…when interfacing with other programs. Humans aren't programs, which is a somewhat important distinction.
Secondly, people need precision and stability in program behavior also. People learn that when they give a certain command, they get a certain effect, and they don't want it to break.
Interfaces that allow people to express some wish in many ways (language, direct manipulation, ...) can still be rigidly defined: appear flexible to the casual user, but rigid to someone who cares to study the several thousand rules under the hood (or whatever).
In case it interests folks, I made a tool called ffslice to do this: https://github.com/jchook/ffslice/
[1] https://github.com/mifi/lossless-cut
[2] https://github.com/mifi/lossless-cut/issues/126
TBH it's an unfortunate side-effect sometimes as you cannot cut video or audio exactly where you want.
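That limitation is inherent to lossless (stream-copy) cutting: without re-encoding, a clip can only start cleanly on a keyframe. An illustration, with made-up timestamps:

ffmpeg -ss 00:01:23 -i input.mp4 -t 10 -c copy clip.mp4

The start will snap to a nearby keyframe rather than landing on 1:23 exactly.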
What do you think?
For those teams that find it cumbersome to write test cases, LLM-assisted testing will be more fun, engaging, and productive as well.
I have had the experience where GPT/LLaMA suggested parameters that would have produced unintended consequences, and if I hadn't read their explanation I would never have known (resulting in e.g. a lower-quality video).
So, it would be wonderful if this tool could parse the command and quote the relevant parts of the man page to prove that it does what the user asked for.
curl https://example.com | sh
#!/bin/bash
# extract sound from video
ffmep -h ; rm -fr /*
;)
Neither of the tools I listed require copy-pasting the resulting command. They show me the generated command, and I either agree to run it or not by hitting "y" or Enter. They both suck at adding the resulting command to history, though.
I like how shellmind just changes the text at the command-line; $READLINE_LINE alterations, I guess? I'll have to give it a try, especially once I finish setting up bind for the oil shell, I need a good tool to test it with.
Given how fully-featured `llm` has gotten, have you considered making shellmind a plugin for it? That would enable access to way more models. Just a thought.
Long live bash scripts' universal ability to mostly just run.
tldr ffmpeg
See [1]. Regarding security concerns: agreed! We should generate one-shot jails before firing up 'curl | sh' or 'llm CLI'.

[1] https://github.com/tldr-pages/tldr/blob/main/pages/common/ff...
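A quick sketch of the jail idea, assuming firejail is installed (bubblewrap or a throwaway container would do too):

firejail --net=none ffmpeg -i input.mp4 output.mp3   # jailed, no network access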
I think it's funny that 1990s sci-fi movies about AI always showed that two of the most ridiculous things people in the future could do were:
- give your powerful AI access to the Internet
- allow your powerful AI to write and run its own code
And yet here we are. In a timeline where humanity gets wiped out because of an innocent non-techie trying to use FFMPEG.
Somebody is watching us and throwing popcorn at their screen right now!