Anecdotally I haven’t had many namespace collisions recently. I’ve also let myself go a bit after going into management. My tech skills are 10 years too old.
Any tips from anyone else on where they started in order to get hip again?
What I do is make tools to make my life easier. For example, if there's a web service at work I use for mundane lookups I'll find out if it has an API and write a CLI for it to speed up my daily grind. Once it's tuned to my liking I'll share it with the team. I do struggle to convince others to try it. Not sure why. But I don't really care because I use the tools every day.
Just to see some new perspectives and be conversant in the trends.
Some of the new stack now crosses over to work, and I have a deeper appreciation for some of the older pieces.
Not sure I understand this problem. I just put my bin directory at the front of $PATH rather than the end. To browse my commands, I simply `ls ~/bin`.
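For reference, that ordering is a one-liner in your shell rc (~/bin is the usual assumption):

export PATH="$HOME/bin:$PATH"   # personal commands now shadow system ones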
Pick your poison
The danger is also mitigated because I only modify my own user's shell rc file. Any daemons running as root or their own user are unaffected.
POTENTIALLY. This (for me, anyway) is a solution in search of a problem. I've been using some sort of *nix since the early 90's, writing scripts and commands and aliases and functions the whole time, and this has never once happened to me.
It obviously _can_, and PROBABLY has, to some. But it's more a "can" than "will". Maybe I just name my homebrew stuff in a way that's unlikely to collide, dunno.
I'm not on the comma train here because it makes the name ugly and confusing, and doesn't solve any problem that I have. But it is a clever hack for those who do have this problem.
If a tool looks up a command name "x" given to it, it just takes $PATH and goes through it: the same $PATH as in your shell when you call "x" directly.
Thinking more about it, you must have been thinking of something like putting "@daily mycommand ..." in crontab, then being annoyed by it not finding your command. Then the problem is not that some tools expect a "system path", but that some tools are defiant and override the inherited PATH of their own accord. Which is totally unnecessary: the environment is called an environment because it is prepared for you (the given program) to run in.
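For what it's worth, Vixie-style cron lets you set PATH at the top of the crontab instead of fighting the tool; a sketch (the paths and command name here are the thread's hypotheticals):

PATH=/home/me/bin:/usr/local/bin:/usr/bin:/bin
@daily mycommand ...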
I forget names all the time. I even forget I wrote entire projects sometimes. That's why I try to organise my systems in a way I can easily stumble upon things I haven't thought about in months, or years.
I find this article's approach actually solves a problem for me. I do find myself going back to the ~/bin folder once in a while to look for some script I use less often. So at least for N=2, it's a cool hack.
~/bin<tab><tab>

is enough. Not that short, but it is something done infrequently in my experience. Maybe I’d do it more if it was easier?

One of these names actually collides with a utility that is installed by default on some systems.
Doesn’t matter to me. I have my own bin dirs before the system dirs in my path, so mine “win”, and I’m not really interested at all in the tool that mine has a name collision with.
If someone were to make a useful to me tool that collided with one of my own tools, I’d probably sooner alias that other tool to a new name that didn’t collide with mine, than to change any of my own tool names.
It’s just too comfortable to use these two-character tools of mine.
https://askubuntu.com/questions/938606/dwarf-fortress-starti...
But at least I did not start them with a 'k' (KDE) :)
The line that separates system and user commands may be defined in different ways, and it may be fuzzy in some places, but if a user accidentally invokes a command that they didn't explicitly install and don't even know why it exists, then that's clearly a command that shouldn't be directly available in the global namespace.
The sbin directories are supposed to contain "system" (or superuser) commands, and regular users should NOT have those directories in their PATH.
This has been broken for a long time on every distribution I've looked at though.
I use Windows most of the time. Like the author, I have a bunch of CLI scripts (mainly in Python) which I put into my ~/bin/ equivalent.
After setting python.exe as the default program for the `.py` extension, and adding `.py` to `%pathext%`, I can now run my ~/bin/hello.py script from any path by just typing `hello`, which I do hundreds of times a day.
I now use Linux more and more (still a newbie) but I never got it to work similarly here.
Firstly, Linux seems to have no concept of an "associated program", so you can never "just" call a .py file and let the shell know to use python to execute it. Sure, you can chmod +x the script, but then you have to add a shebang line directly to the script itself, which always feels uncomfortable to me since it's hard-coded (what if in the future I don't want to execute my .py script with `/usr/bin/python` but `/usr/bin/nohtyp`?).
Furthermore, I failed to find any way to omit `.py` part when calling my script.
Again, none of the above is to question the design of Linux -- I know it comes with lots of advantages.
But I really, really just want to run `hello` to call a `hello.py` script that is in my $PATH.
> But I really, really just want to run `hello` to call a `hello.py` script that is in my $PATH.
On Linux I'd say the shebang is still the right tool for this. If you want a lightweight approach, just have a `my_python` symlink in your path, then your shebang can be `/usr/bin/env my_python` (or heck just `/foo/bar/baz/my_python`, /usr/bin/env is already an abstraction).
If you want a more principled approach, look at the `update-alternatives` tool, which provides this sort of abstraction in a more general way: https://linuxconfig.org/how-to-set-default-programs-using-up...
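A hedged sketch of what that could look like, reusing the `my_python` and `nohtyp` hypotheticals from above (the link path and priorities are assumptions):

sudo update-alternatives --install /usr/local/bin/my_python my_python /usr/bin/python3 10
sudo update-alternatives --install /usr/local/bin/my_python my_python /usr/bin/nohtyp 20
sudo update-alternatives --config my_python   # interactively choose which one wins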
Isn't that path and the behavior of the binary defined by POSIX though? I thought it's as stable as you can get.
That's why it's usually recommended that you use /usr/bin/env bash vs /bin/bash in the shebang, as the latter isn't defined by POSIX
I don't see anything about the path being defined. Certainly possible I missed it, though.
I'm not able to check right now but I vaguely recall that I've used a system in the past with env in a location other than /usr/bin.
On Linux (really, all platforms other than Windows), file extensions are much less of a thing; executables of any kind have no extension, just the +x flag (among other things, this means you can rewrite them in another language without breaking anything).
The .py extension is only relevant for modules meant to be imported; for scripts being run, if you really need to know, you are supposed to look at the shebang (usually #!/usr/bin/env python for externally-distributed scripts; this gets overwritten to #!/usr/bin/python or whatever for scripts packaged by the distro itself).
Note also that, while shebangs don't support multiple arguments, the GNU version of `env` supports a `-S` argument that can emulate them (the argument length problem remains though).
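For example (GNU env only; the -u flag is just an arbitrary second argument to show the splitting):

#!/usr/bin/env -S python3 -u
# without -S, the kernel would hand env the single argument "python3 -u"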
I'm not saying you're wrong, but let's be clear about what these are. I would point out that Linux inherited some, but not all of its naming conventions from Unix (as did macOS), but at least here, that is a secondary concern.
Carry on...
I can't think of a downside to the shebang. If you really wanted to run the script with a different interpreter, just specify it. "nohtyp hello" or whatever.
If that still bothers you too much, you could define an alias in your shell startup. For example, in bash, you might do:
alias hello="python3 /path/to/hello.py"
If you were so inspired, you could even write a short script to automatically create such aliases for the contents of a directory you specify.
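A rough sketch of such a script, meant to be sourced from your shell startup (the ~/bin location and .py naming scheme are assumptions):

# define an alias for each ~/bin/*.py script, minus the extension
for f in "$HOME"/bin/*.py; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    alias "$(basename "$f" .py)"="python3 '$f'"
done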
There is, but not in the shell syntax. It's an application concern normally delegated to the desktop/GUI.
For shell scripts, the executable is usually declared in the script itself, by adding a Shebang and making the file executable. Think of the Shebang like a file extension of sorts.
If

chmod +x ./malware.py
./malware.py

does not work, check the path the Shebang points to.

That being said, as long as an interpreter can execute the script as a regular argument, you should be able to get this behavior also for xdg-open:

xdg-open malware.py

if you really want that by default. This should be equivalent to double-clicking the file in the default file manager, IIRC (am on Mac now).
I had an alias "xop <file>" when using Linux as my primary desktop OS.
But I only used this for data files (images, documents etc) where the default already works.
Wouldn't recommend setting an interpreter as default for executable scripts.
You might want to not execute scripts by default, instead opening them in an editor for example.
xdg-open is a Gnome thing I think, but that doesn't mean it's unavailable for other desktops. I know it from Xubuntu (so Xfce).
So I'd really advise against that. But if you want to execute all Python files by default in any GUI context too, you could set this kind of default there.
'man xdg-open' might help, or maybe you could even select a specific Python executable as the default for .py files after double-clicking in the File Manager.
Again, bad advice
It won’t directly help you reach your goal, but it is only semi hard-coded. The ‘correct’ (but see https://unix.stackexchange.com/a/29620 for some caveats) way to write a shebang line is
#!/usr/bin/env python
That will make it run the first python in your path.

> what if in future I don't want to execute my .py script with `/usr/bin/python` but `/usr/bin/nohtyp`?
You could create a symlink called python to /usr/bin/nohtyp on your system in a directory that’s searched before /usr/bin (e.g. by adding ~/myCommandPreferences to the front of your PATH)
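Concretely, something like this (every name here is the thread's hypothetical):

mkdir -p ~/myCommandPreferences
ln -s /usr/bin/nohtyp ~/myCommandPreferences/python
export PATH="$HOME/myCommandPreferences:$PATH"   # searched before /usr/bin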
You might be able to use binfmt_misc for that.
https://www.kernel.org/doc/html/latest/admin-guide/binfmt-mi...
https://blog.cloudflare.com/using-go-as-a-scripting-language...
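A hedged sketch of the extension-matching variant (needs root, assumes binfmt_misc is mounted, and the rule name is arbitrary):

# run any executable file ending in .py via python3, even without a shebang
echo ':mypython:E::py::/usr/bin/python3:' | sudo tee /proc/sys/fs/binfmt_misc/register
chmod +x hello.py
./hello.py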
I keep all my scripts in ~/git/$Project and symlink them into ~/bin and I've added ~/bin to the end of my path.
- The shebang is only specially interpreted by the Linux loader, i.e. when executing the file directly.
- You can still run it with any other interpreter in the standard way: `nohtyp ~/bin/hello`. Python comments start with `#`, so the shebang does nothing with programs expecting Python code.
- This situation (a script without an extension) is common on Linux, so Linux-aware editors understand the shebang to indicate a file type. At least, vim understands this and automatically detects a python file type without the .py extension.
I get your wish of Windows-like behaviour, and even if you might be able to conspire to have Linux behave the way you want, it's certainly not how people expect it to work, so prefer the above scheme for any software you send to others. :)
/usr/bin/hello:

#!/usr/bin/bash
python3 /usr/bin/hello.py

/usr/bin/hello.py:

print("Hello, world!")

Console:

$ chmod +x /usr/bin/hello
$ hello
Hello, world!
Sublime, /usr/bin/subl:

#!/bin/sh
exec /opt/sublime_text/sublime_text "$@"

VS Code, /usr/bin/code:

...
# Launch
exec /opt/visual-studio-code/bin/code "$@" $CODE_USER_FLAGS
I believe the elegant solution to this is update-alternatives, which lets you tell the system which actual program to call. I haven't looked into it much, but it seems like it might interest you particularly. That's the closest equivalent to file association for the UNIX shell, I would guess.
You could also have a specific folder that you control in your PATH that symlinks to the Python you want to use.
This handles the default, but you can still call your script with the program you want if you ever wish to bypass that.
Use an alias which you set up in your initialization scripts. You alias "hello" to "python3 /yada/yada/hello.py" which is essentially what Windows is doing for you behind the scenes.
You're thinking too much; people have had that shebang for 20 years without any problems.
> But I really, really just want to run `hello` to call a `hello.py` script that is in my $PATH.
I don't really understand why you're so adamant about this: either make a python "hello" script with a shebang, or just tab-complete hell<tab>, which you should do with most commands anyway, so the .py doesn't matter.

Another option would be an alias, but you'd have to do that manually for every frequent script you need.
from a google search, https://stackoverflow.com/a/19305076
I'm not at the computer now to test though
I used to think this as well, but I've since come around to the opposite view. Having it as a "requirement" for what's likely the most popular CLI execution strategy enforces a (somewhat disorganised but still useful) de facto standard across all scripts. I can open up any repo/gist/pastebin in the world and chances are, if it's intended to be run, it'll contain this handy little statement of intent on the first line. I might not actually be able to run it (env-dependent), but I'm sure I can make it work.
On the env-sensitivity though, if e.g. you're running nohtyp, as another commenter mentioned, /usr/bin/env has that covered.
That’s an interesting feature for a shell to have. Thanks!
Linux is of Unix ancestry, which had no such concept as a file extension. It was the responsibility of the application or kernel to discern what type a file was, typically by the first few bytes of the file, and to handle it appropriately.
I personally am a fan of the Unix way but I can see why some might prefer the DOS convention.
Whether that is good or bad is left as an exercise to the reader.
You still have to say '.py' at the end, though.
To remove the .py just rename the file to “hello”, or keep “hello.py” and create a symlink or a shell alias called “hello” that points to it.
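Either is a one-liner (paths assumed):

ln -s ~/bin/hello.py ~/bin/hello       # symlink; relies on the shebang and +x bit
alias hello='python3 ~/bin/hello.py'   # or an alias, e.g. in ~/.bashrc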
One drawback is that this doesn't have the same tab completion ergonomics, which I have to admit is really nifty.
EDIT: And another is that collisions can still occur in scripts that need to be sourced rather than executed as a sub-process (like Python's venv activation scripts). But those are rare.
They do in zsh
Of course, your own custom scripts usually won't have such fancy completion; in any case you'd need to configure this, and setting it up for both the long and short version is not that much hassle.
The idea about starting my own scripts' names with a comma would have made the job go much faster, and I'm sure would have helped to jog some memories about why each script was written, before opening it.
You can list your personalized tooling using ~/bin/[Tab] for whatever value there is in that.
If you don't like the system's grep (e.g. Solaris grep or whatever) but prefer your own (e.g. GNU grep), why wouldn't you just want that to be "grep"?
https://news.ycombinator.com/item?id=22778988 (90 comments)
, macroexpand:
Start all of your commands with a comma (2009) - https://news.ycombinator.com/item?id=31846902 - June 2022 (121 comments)
Start all of your commands with a comma (2009) - https://news.ycombinator.com/item?id=22778988 - April 2020 (89 comments)
I usually do the same with commands where you are able to create sub-commands too, like git-,home (which allows you to run `git ,home add -p`; it conveniently sets GIT_DIR to something and GIT_WORK_TREE to $HOME). Sadly you can't do it with git aliases, so I have to live with them starting with a dot (via '[alias ""]' as a section).
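For anyone curious: git runs any executable named git-<name> found on PATH as `git <name>`, so a git-,home could be as small as this sketch (the repo location is an assumption):

#!/bin/sh
# git-,home: point git at a bare repo in ~/.home.git, with $HOME as the worktree
export GIT_DIR="$HOME/.home.git"
export GIT_WORK_TREE="$HOME"
exec git "$@"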
Agreed. I prefer using `!bang`s for expanding text, for the same reason.
If we could go back to the drawing board I'd say every system utility should have a verbose name, with some kind of aliasing system that provides easy shorthands to them. Then the shorthands could be replaced easily, with the verbose names being used during scripting.
This might seem like a moot point, since we can't go back to the drawing board, but many projects continue to make this problem worse by insisting on naming their binaries like we're still living with the constraints of the 80s. I guess because it gives them the flavour of "real" system utilities. It would be nice if projects stopped doing that, but oh well.
But honestly, while 2 or 3-letters aliases are tricky, I've very rarely had issues with 4-letter aliases. There are 456k possibilities. On my small opensuse install, my PATH contains only 105 4-letter executables.
and I went into PS and typed ,<tab> and it said:
> PS V:\> & '.\,foo.bar'
if I have ,foo.exe and ,cuda_install.exe in a directory (or on my path), it's two characters and then a tab, same as Linux, to run either of them: ,c || ,f
anyhow, it was for my own edification.
Just a few examples on this machine: backup-workstation-to-foo, backup-workstation-to-usb-restic, make-label-for-laptop-battery, set-x-keyboard-preferences, update-pocketbook
For one-letter and two-letter commands that might conceivably overlap with some command in some package someday (e.g., `gi` for `grep -i`), I only do those as interactive shell aliases. So they shouldn't break any scripts, and I'll know if someday I'm typing one of those and intending to get something different.
In a few cases, those one-letter aliases have been for very-often-used scripts of mine.
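For example, in ~/.bashrc (bash doesn't expand aliases in non-interactive shells, so scripts never see them):

alias gi='grep -i'   # interactive shorthand only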
$ cd ~/bin
$ for x in $(find . -type f -perm /a=x -exec basename {} \;) ; do echo $x ; done
temps
$ for x in $(find . -type f -perm /a=x -exec basename {} \;) ; do ln -s $x ,$x ; done
$ ls -l
total 4
lrwxrwxrwx 1 tanel tanel 5 Jun 23 16:38 ,temps -> temps
-rwxr--r-- 1 tanel tanel 251 May 30 23:26 temps
I do like the idea of autocompleting your own commands though.
- is your ~/bin directory a git repo?
- if you use git to manage your dot files, do you use hard links or soft links?
Every few weeks or months, I run a command on each system that gathers up any accumulated changes I've made to these files and syncs them to common machine that has all the repos. I merge those changes, then run another command to install the updates on all machines, so everything stays in-sync, over time.
I found that these ~/bin scripts and config files fell into a bit of a "donut hole" of development effort, where it was too much bother to maintain a full repo/build/install setup for every single script independently, but I did want to keep changes in sync and track them over time, rather than just having each system diverge forever.
So, my solution was to bundle lots of scripts together, into just a few repos, and sync/merge/etc them in bulk, to streamline the process.
A downside is lots of my commit notes are just a generic "gathering up latest changes" since I'm slurping up lots of edits to unrelated files at once. Hasn't really been a problem for me, though. I mostly just care about having the history.
git init --bare $HOME/.config/repo
alias config='/usr/bin/git --git-dir=$HOME/.config/repo --work-tree=$HOME'
config config --local status.showUntrackedFiles no
Then I can do things like "config add", "config commit", and "config push".

So .bashrc is a symlink to ~/.home/.bashrc, ~/.config/nvim to ~/.home/.config/nvim, etc.
It’s simple and only relies on having something sh-compatible available so portable now and in the future.
To manage per-system tweaks, I have places that include an additional file based on hostname. For example my .bashrc has something like:
if [ -f "$HOME/.bashrc.$HOSTNAME" ]; then
    source "$HOME/.bashrc.$HOSTNAME"
fi

Which will include a bashrc file specific to that host if it exists.

Been working well for me for… a decade now?
More complex (multi-file) tools are usually separate ts or python projects. Node has {npm,yarn} link, which puts a starter .cmd somewhere in PATH out of the box. Python scripts I usually run through .cmd "alias" files in c:/CmdTools; there's no `pip link` afaik.
I always have MSYS2 installed and here's my cmdbash prolog:
some.cmd:

:<<BATCH
@xbash %~dpf0 %*
@exit /b
BATCH

for i in {1..5}; do
    echo "Hello from bash!"
done
xbash.exe is just MSYS's bash.exe that I copied to avoid collisions with WSL (which is useless PR nonsense). Same with xgrep, xfind, xecho, xmkdir, xsort.

This setup has carried me for years and turns out to be a very reasonable Windows/Unix integration. I like the Unix architecture, but can't stand the Linux desktop. This year I got a project related to the Linux desktop, and I literally become so stressed using it sometimes that I have to take a break or vent loudly.
~/src is a git repo. One script evolved into its own project and became a submodule within ~/src.
For configuration files like ~/.foobar_rc and directories such as ~/.vim/, they again are not directly version controlled but are symlinked into ~/etc which is. I don't see any reason that ~/.foobar_rc couldn't be a hardlink, but it's not in my setup.
I used to maintain a single repository at ~ that included ~/src and ~/etc as submodules, with a build script for setting up links. Always being within a git repository became cumbersome, so I moved the build tools into their respective directories (~/src and ~/etc) and now clone and build each repository manually.
Lastly, since private repos aren't (weren't?) free, those submodule repos are really just branches of the same repo that share no common ancestors.
For my most important custom bins, they are written in Rust and published to crates.io so I cargo install them from there. It’s just one crate, with wrappers for the git commands that I use day to day
In addition to this, I have host specific repos with some scripts. These scripts are not in my path but are instead scripts that run on a schedule from cron. These scripts run and log various facts about the machine such as zpool status and list installed packages, and auto-commit and push those to their repo. And the other kind of script I have invokes zfs send and recv to have automatic backups of data that I care about.
In addition to this I have a couple other git repos for stuff that matters to me, which either runs via cron (retrieving data from 3rd parties on a schedule) or manually (processing some data).
For neovim I stopped caring about having a custom RC file at all. I just use vanilla neovim now on my servers. On my laptop I use full blown IDEs from JetBrains for doing work.
My dotfiles/configs are a mix of the following setup on boot:
- one-way copy of config file from $repo/$path to $path (prevents apps from modifying my curated config or adding noise)
- or make it a symlink (if I want it mutable)
- or make it a bind mount (if a symlink won't work; can be a file or folder)
- or make it a one way copy but add a file watcher to copy changes back to the repo (if none of the above work. Some programs fail if the file they need is a symlink or is bind mounted)
For dotfiles using a one-way copy, whenever I change a setting I want to persist I have to manually copy or edit the original $repo/$path. I can take a diff against the repo for a hint, or use `inotifywait -r -e modify,create,delete,move . ~/.local ~/.config -m` for something new.
Not using hard links since the dotfiles are likely to be on a different filesystem (or dataset) than their target paths.
Since I use `zsh`, I usually only symlink the `dotfiles/etc/zsh/.zshrc` to `$HOME/.zshrc`, while the `.zshrc` loads environment variables setting all required paths for my tools, e.g.:
export PATH="$HOME/bin:$HOME/dotfiles/scripts:$PATH"
export STARSHIP_CONFIG="$HOME/dotfiles/etc/starship/starship.toml"
export GIT_CONFIG_GLOBAL="$HOME/dotfiles/etc/git/.gitconfig"
export MYVIMRC="$HOME/dotfiles/etc/vim/vimrc"
export VIMINIT='source $MYVIMRC'
# ...
The only files of `dotfiles` I copy over are for ssh, because ssh checks file permissions for security reasons and does not allow symlinks.

Since the link directives are idempotent, you can run it on every login shell if you desire. I ended up setting up a shared jumpbox used by some contractors with it so they could work with our internal tooling without requiring excessive setup, and wrapped it into a shell startup script[2] and found it performant enough that I couldn't tell the difference.
1: https://github.com/anishathalye/dotbot
2: https://gist.github.com/RulerOf/f41259f493b965c9354c2564d85b...
alias dotfiles='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
And I init my dotfiles into a fresh home directory like this:

git clone --bare gitolite3@example.com:dotfiles $HOME/.dotfiles
git --git-dir=$HOME/.dotfiles --work-tree=$HOME config status.showUntrackedFiles no
git --git-dir=$HOME/.dotfiles --work-tree=$HOME reset --hard
I feel like these small web people's blogs were so much more accessible before link aggregators got this mainstream.
I've used a character for each company I've worked for, and a different one for common scripts. This way it is very easy to clean $HOME when I move.
do.item Add an item to the bottom of my todo list
do.soon List the items that I need to do soon
do.next List the next item to work on
do.mark Mark the current item done
How do you implement namespaces? Is it just that you do this with all the commands you create for this company?

Exactly. It's super barebones but it works haha.
Please note that brackets have no special meaning to the shell.
But as far as I can see, using a close-bracket as the first character in a command is safe, since it cannot be treated as part of such a pattern. Open-bracket (without a matching close-bracket) would work in many shells, but will get you a "bad pattern" error in zsh.
Together with "*" and "?", the brackets "[" and "]" have been used by the UNIX shell since some of its earliest versions (already many years before the Bourne shell) in pattern matching for pathname expansion (globbing).
For example, if you have in a directory 3 files named "file1", "file2" and "file3", then
"ls file?" will output
file1 file2 file3
while "ls file[13]" will output
file1 file3
https://news.ycombinator.com/item?id=22778988 (April 2020, 90 comments)
Ahem. Nice idea though, I think I'll start using it...
I didn't get it at first thought, thinking of the .nds file extension for Nintendo DS ROMs.
It's one more key press, but I'm pretty sure I would use underscore for the first character.
I do, however, like to comment my custom commands:
$ mv ~/Desktop/*pdf ~/Documents/PDF # pdfsync
$ for i in ~/Documents/Development/JUCE/JUCE_Projects/* ; do cowsay $i ; cd $i ; git pull ; git fetch --all ; git submodule update --init --recursive ; done # updatejuce
CTRL-R "updatejuce" or "pdfsync" .. and off we go ..

A much nicer way of finding my custom commands ..