This phrase:
sd uses regex syntax that you already know from JavaScript and Python.
says it all.

I still haven't found a better short overview of various regex engines than that: https://web.archive.org/web/20130830063653/http://www.regula...
Well, for starters, you just `s/<regex>/<replacement>/` and try to use that in your everyday work. Just forget about the syntax. It's a search-and-replace tool.
That's the only way I used sed for years. I've learned more since then, but it's still the command I use the most. And that's also what `sd` focuses on.
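To make it concrete, a minimal example of that everyday use (the file name is made up):

# Replace every "foo" with "bar", writing the result to stdout:
sed 's/foo/bar/g' notes.txt

# The sd equivalent - same idea, regex syntax as in JavaScript/Python:
sd 'foo' 'bar' < notes.txt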
Also, if you want to replace newlines, just use `tr` (to pick up on the sd examples). It may seem annoying to use a different tool, but there are two major advantages:

1. you're learning about the existence, capabilities and limitations of more tools

2. both `sed` and `tr` are probably available in your next shitty embedded busybox-driven device, while `sd` probably is not
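For example (input.txt is made up):

# Join all lines into one, turning newlines into spaces:
tr '\n' ' ' < input.txt

# Or delete newlines outright:
tr -d '\n' < input.txt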
As you said, the value comes from being around for a long time and, probably more importantly, still being present on nearly any Unix-like system.
Earlier I did this
cat as1 | grep " 65" | sed -e 's/.* 0 65/65/' -e 's/[^ 0-9]//' | sort | uniq
Now some twat will come along and say my process should have been

cat as1

grep " 65" as1

grep " 65" as1 | sed -e (various different tries until the data looks useful)

grep " 65" as1 | sed -e (options) | sort | uniq
Because otherwise it's a "useless use of cat" and reformatting my line is well worth the time and cognitive load to save those extra forks.

Just go straight to the point that this isn’t available on a proprietary Unix that had its EOL fifteen years ago and that five people still use.
Skill issue. It's not necessary in the first place anyway
> Sed is the perfect programming language, especially for graph problems. It's plain and simple and doesn't clutter your screen with useless identifiers like if, for, while, or int. Furthermore since it doesn't have things like numbers, it's very simple to use.
"useless identifiers like if, for, while, or int"? Useless identifiers?
Some of the notable features include:
Preview variable values, both of them!
...
Its name is a palindrome
I do not mean to sound like “kids these days…”. I really like these modern systems that allow you to install a wide range of packages. It is a huge step forward. I just want to explain my perspective; perhaps others share it. It probably also explains why such tools continue to exist.
perl -MO=Deparse -w -naF: -le 'print $F[2]'
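(For anyone who hasn't seen it: B::Deparse prints the program perl actually runs, with all the implicit -n/-a/-F/-l machinery spelled out. On a recent perl the output looks roughly like this - exact details vary by version:)

BEGIN { $^W = 1; }
BEGIN { $/ = "\n"; $\ = "\n"; }
LINE: while (defined($_ = readline ARGV)) {
    chomp $_;
    our(@F) = split(/:/, $_, 0);
    print $F[2];
}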
That said, debugging is definitely a thing, and tools like this are awesome!
This has been quite annoying. So now I code it in C or assembly, fusing common-case code templates and ready-made build scripts to keep a comfortable dev loop.

In the end, I get roughly the same results, and I don't need those regular-expression languages and engines.
It is a clear win in that case.
Amīcitia nōn semper intellegitur sed sentītur. (Friendship is not always understood, but it is felt.)
which I'm always reminded of when using sed(1) in a script to provide, not this pattern, but that replacement.
> GNU sed actually provides pretty useful debugging interface, try it yourself with `--debug` flag.
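Any small substitution is enough to see it in action (the input here is made up):

echo 'hello world' | sed --debug 's/hello/goodbye/'

It prints the program back in canonical form and then annotates its execution step by step.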
There is just something incredibly freeing about knowing you can sit down at a freshly-reinstalled box and do productive work without having to install a single thing on the box itself first.
EDIT: https://hiandrewquinn.github.io/til-site/posts/what-programm... might be of interest if you want to know what you can work with right out of the box on Debian 12. Other distros might differ.
Though what's been a little frustrating is that there are anti-scraping measures and they break things. But they're always trivial to get around, so it's just annoying.
A big reason LLMs end up failing is that I need my scripts to work on both macOS and nix machines, so they're always suggesting things that work on one but not the other. They seem not to want to listen to my constraints, and grep is problematic for them in particular. Luckily man pages are great; I think they're often overlooked.
If that is not an option, go with Perl. It'd be a little slower, but you'll get consistent results. Plus, Perl has powerful regex, lots of standard libraries, etc.
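For example, the usual portable stand-in for an in-place sed edit (file name made up) behaves the same on macOS and Linux:

# Edit in place, keeping a .bak backup:
perl -i.bak -pe 's/foo/bar/g' config.txt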
Now I use bash for all sorts of stuff. I’ve been working with *nix for 20 years, but bash is so arcane and my needs always so immediate that I never did anything other than use it to run commands in sequence, with maybe a $1 or a $2 in there.
These tools are things I've used before but always found painful and confusing. Being able to ask Gippity for detailed explanations of what is happening, in particular being able to paste a failing command and have it explain what the problem is, has been a game changer.
In general, for those of us who never had a command line wizard colleague or mentor to show what is possible, LLMs are an absolute game changer both in terms of recommending tools and showing how to use them.
Only a tiny bit more complex but often an order of magnitude faster with today's CPUs.
Use -print0 on find with -0 on xargs to handle spaces in filenames correctly.
GNU parallel is another step up, but xargs is generally always to hand.
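Something like this (paths invented):

# NUL-delimited file names survive spaces and newlines:
find . -type f -name '*.log' -print0 | xargs -0 grep -l 'ERROR'

# The same idea with GNU parallel, one job per file across all cores:
find . -type f -name '*.log' -print0 | parallel -0 gzip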
find [...] -exec [...] {} +

as opposed to

find [...] -exec [...] {} \;
worked fine and was performant enough for my use-case. An example command was
find . -type f -name "*.html" -exec sed -i '' -e 's/\.\.\/\.\.\/\.\.\//\.\.\/\.\.\/\.\.\/source\//g' {} +
which took about 20s to run
find . -type f -name "*.html" -exec sed -i '' -e 's|\.\./\.\./\.\./|../../../source/|g' {} +
Using "/" as the delimiter for "s" patterns that themselves include "/" drives me batshit - almost as much as scripts that use the doublequote for strings that contain no variables but do contain doublequotes (looking at you, json literals in awscli examples).

Neither GNU nor BSD sed understands perl's \Q, but if you have perl handy, `perl -pe 's|\Q../../../|../../../source/|g'` gets rid of almost every escape character. I got you half way there because one need not escape the "." in the replacement pattern: "." isn't a metacharacter in the replacement space - what would that even mean?
I find him hard to listen to when he does things like this
- We never figured out how to package programs properly (Nix needs to become easier to use)
- For all kinds of smaller tasks we practically need to use those Unix tools
- Those everywhere tools are for hysterical raisins hard to use in a larger context (The Unix Philosophy in practice: use these five different tools but keep in mind that they are each different from each other across six dimensions and also they have defaults from the 70’s or 80’s)
- For a lot of “simple” things you need to remember the simple thing plus eight comments (on the StackOverflow answer which has 166 votes but that’s just because it was the first to answer the question) with nuance like “this won’t work for your coworker on Mac”
- So you don’t: you go to SO (see previous) and use snippets (see first point: we don’t know how to package programs, this is the best we got)
- This works fine until Google Search decides that you are too reliant on it for it to have to work well
- Now you don’t use “random stuff from StackOverflow” which can at least have an audit trail: now you use random weights from your LLM in order to make “simple” solutions (six Unix tools in a small Bash script which you can’t read because Bash is hard)
This is pretty much the opposite of what inspired me when studying computer science and programming.
What's the issue with apt, pacman, and the others? I think they're doing their job fine.
> For all kinds of smaller tasks we practically need to use those Unix tools
I mean, they’re good for what they do
> Those everywhere tools are for hysterical raisins hard to use in a larger context
Because each does a universal task you may want to do in the unix world of files and streams of text.
> For a lot of “simple” things you need to remember the simple thing plus eight comments
No, you just need the manuals. And there are books too. And yes the difference between BSD and GNU is not obvious at first glance. But they’re different software worked on by different people.
1. (the things you disagree with)
2. Using AI to compensate for (1)
So if you only disagree with (1) then I don’t know if I should get into it.
I dump about 150GB of Postgres logs a day (I know, it's over the top, but I only keep a few days' worth, and there have been several occasions where I was saved by being able to pick through them).
At that size you even need to give up on grepping, really. I've written a tiny bash script that exploits the fact that log lines start with a timestamp and uses `dd` for immediate extraction. This lets me quickly binary search for the location I'm interested in.

Then I can `dd` to dump the region of the file I want. After that I have a little awk script that collapses the sql lines (since they break across multiple lines) to make grepping really easy.

All in all it's a handful of old-school scripts that make an almost impossible task easy.
https://gist.github.com/aidos/5a6a3fa887f41f156b282d72e1b79f...
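(Not the linked gist - just a minimal sketch of the binary-search idea, assuming every line starts with a 19-character sortable timestamp like "2025-01-15 12:34:56":)

#!/usr/bin/env bash
# Usage: logseek FILE "2025-01-15 12:34:56"
# Binary-search a huge log in 4 KiB blocks with dd, then dump from the match.
file=$1; target=$2
bs=4096
lo=0
hi=$(( ($(wc -c < "$file") + bs - 1) / bs ))
while (( lo < hi )); do
  mid=$(( (lo + hi) / 2 ))
  # Read one block; drop the partial first line; take the next line's timestamp.
  ts=$(dd if="$file" bs=$bs skip=$mid count=1 2>/dev/null | sed -n '2s/^\(.\{19\}\).*/\1/p')
  if [[ -n $ts && $ts < $target ]]; then
    lo=$(( mid + 1 ))
  else
    hi=$mid
  fi
done
# Print from one block before the boundary; pipe into grep/awk from here.
dd if="$file" bs=$bs skip=$(( lo > 0 ? lo - 1 : 0 )) 2>/dev/null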
For anyone else, here's the awk for combining lines in the log files for making them greppable too: https://gist.github.com/aidos/44a9dfce3c16626e9e7834a83aed91...
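(Again a hypothetical sketch rather than the gist itself - any line that doesn't start with a date is folded into the previous entry:)

awk '
  /^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] / {  # a new entry starts with a date
    if (buf != "") print buf
    buf = $0
    next
  }
  { buf = buf " " $0 }                              # continuation line: append it
  END { if (buf != "") print buf }
' postgres.log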
(The datetime in the log message is presumably sorted, or nearly so).