I've been following this discussion about project-centric vs. environment-centric workflows, and I think UV actually enables both patterns quite well. For the "fiddle around until something emerges" workflow that @BrenBarn mentioned, you can absolutely create a general-purpose environment with `uv venv playground` and then use `uv pip install` to gradually build up your experimental dependencies. The project structure can come later.
What's interesting is how UV's speed makes the cost of switching between these approaches nearly zero. Want to quickly test something in isolation? Spin up a temporary environment. Want to formalize an experiment into a project? The migration is painless.
This mirrors what I've seen in other parts of the toolchain - tools like Vite for frontend dev or modern Docker practices all follow this pattern of "fast by default, but flexible when you need it." The velocity improvements compound when your entire toolchain operates on this principle.
IMO, uv is quickly becoming one of the best reasons to start a new project in Python. It’s fast, and brings a level of polish and performance that makes Python feel modern again.
Therefore, cars are useless and nobody should use one.
My point is if you put an airplane next to a car factory, it's very clear you can't build the airplane in the car factory.
My hope is that conda goes away completely. I run an ML cluster and we have multi-gigabyte conda directories and researchers who can't reproduce anything because just touching an env breaks the world.
It's still very immature but if you have a mixture of languages (C, C++, Python, Rust, etc.) I highly recommend checking it out.
It makes building FreeCAD pretty trivial, which is a huge deal considering FreeCAD’s really complex Python and non-python, cross-platform dependencies.
On the python front, however, I am somehow still an old faithful - poetry works just fine as far as I was ever concerned. I do trust the collective wisdom that uv is great, but I just never found a good reason to try it.
1. Installation & dependencies: Don't install Python directly, instead install pyenv, use pyenv to install python and pip, use pip to install venv, then use venv to install python dependencies. For any non-trivial project you have to be incredibly careful with dependency management, because breaking changes are extremely common.
2. Useless error messages: Outside of trivial examples with no external packages, I cannot remember a time when the error message actually pointed directly at the issue in the code. To give a quick example (pointing back to the point above), I got the error message "ImportError: cannot import name 'ChatResponse' from 'cohere.types'". A quick google search reveals that this happens if a) the cohere API key isn't set in ENV or b) you use langchain-cohere 0.4.4 with cohere 5.x, since the two aren't compatible.
3. Undisciplined I/O in libraries: Another ML library I recently deployed has a log-to-file mode. Fair enough, should be disabled before k8s deployment no biggie. Well, the library still crashes because it checks if it has rwx-permissions on a dir it doesn't need.
4. Type conversions in C-interop: Admittedly I was also on the edge of my own capabilities when I dealt with these issues, but we had issues with large integers breaking when using numpy/pandas in between to do some transforms. It was a pain to fix, because Python makes it difficult to understand what's in a variable, and what happens when it leaves Python.
1. and 4. are mainly issues with people doing stuff in Python that it wasn't really designed to do. Using Python as a scripting language or a thin (!) abstraction layer over C is where it really shines. 2. and 3. have more to do with the community, but it is compounded by bad language design.
2. I've used a ton of languages and frankly Python has the best tracebacks hands-down, it's not even close. It's not Python's fault a 3rd party library is throwing the wrong error.
3. Again, why is it bad language design that a library can do janky things with I/O?
4. FFI is tricky in general, but this sounds like primarily a "read the docs" problem. All of the major numeric acceleration libraries have fixed sized numbers, python itself uses a kind of bigint that can be any size. You have to stay in the arrays/tensors to get predictable behavior. This is literally python being "a thin abstraction layer over C."
2. I would argue that the ubiquity of needing stack traces in Python is the main problem. Why are errors propagating down so deep? In Rust I know I'm in trouble when I am looking at the stack trace. The language forces you to handle your errors, and while that can feel limiting, it makes writing correct and maintainable code much more likely. Python lets you be super optimistic about the assumptions of your context and data - which is fine for prototyping, but terrible for production.
3. I agree that this isn't directly a language design issue, but there's a reason I feel the pain in Python and not in Rust or Java. Dynamic typing means you don't know what side effects a library might have until runtime. But fundamentally it is a skill issue. When I deploy code written by Java, Go or Rust people, they generally know I/O is important and have spent the necessary time thinking about it. JS, Python or Ruby devs don't.
4. The issue is that Python's integer handling sets an expectation that numbers "just work," and then that expectation breaks at the FFI boundary. And once you're off the trodden path, things get really hard. The cognitive load of tracking which numeric type you're in at any moment sucks. I completely agree that this was a skill issue on my part, but I am quite sure that I would not have had that problem in a properly typed, compiled language.
Dynamic languages let you do a lot of things with meta classes and monkey-patching that allow you to use a library without needing to build your own special version of it. With some C or C++ library that did something bad (like the logging one you mentioned) there's nothing for it but to rebuild it. With Python you may well be able to monkey patch it and live to fight another day.
It is great when you're dealing with 3rd party things that you either don't have the source code for or where you cannot get them to accept a patch.
Static typing (in most industrially popular languages) doesn't tell you anything about side effects, only expected inputs and return values.
Perl was the language of the big hack. It had very little to offer in the way of abstractions and OO. So you were kind of excused from writing well structured code, and people wrote a lot of janky, untestable stuff in perl and carried on doing that in python.
In python, to get good code you absolutely have to have unit tests and some kind of integration tests. I worked on a roughly 50,000 line python program. It wasn't bad even though we didn't have type hints then. Occasionally we did discover embarrassing bugs that a statically typed language would not have permitted, but we wrote a test and closed that door.
I don’t understand why anybody would ever develop anything in Python other than “I want to write software but can’t be arsed to follow software design principles”, with all the mess that follows from it.
This is such a deeply unserious take. Do you have hundreds of thousands of hours to give out for free? No?
I wish my life had been like this. Unfortunately I always appear to end up needing to make this stuff work for everyone else (the curse of spending ten years on Linux, I suppose).
But then ML is a very broad church, and particularly if you're a researcher in a bigger company then I could see this being true for lots of people (again, I wish this was me).
https://docs.metaflow.org/scaling/dependencies https://outerbounds.com/blog/containerize-with-fast-bakery
Like, I had real issues with GDAL and SQLite/spatialite on MacOS (easy on Linux) which uv was of no help with. I ended up re-architecting rather than go back to conda as uv is just that good (and it was early stage so wasn't that big of a deal).
does that fit the bill?
The Conda packaging system and its registry are capable of understanding things like ABI and binary compatibility. It can resolve not only Python dependencies but binary dependencies too. Think more like dnf, yum, or apt, but OS-agnostic, including Windows.
As far as I know, (apart from blindly bundling wheels), neither PyPI nor Python packaging tools have the knowledge of ABIs or purely C/C++/Rust binary dependencies.
You can even use Conda just to get OS-agnostic C compiler toolchains, no Python or anything. I actually use Pixi for shipping an OS-agnostic libprotobuf version for my Rust programs. It is better than containers since you can directly interact with the OS, like the Windows GUI and device drivers or Linux compositors. Conda binaries are native binaries.
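If anyone wants to try that pattern, the setup is roughly this (a sketch; `libprotobuf` and `cxx-compiler` are conda-forge package names, check the pixi docs for the exact workflow):

  pixi init protobuf-env && cd protobuf-env
  pixi add libprotobuf cxx-compiler   # pulls native conda-forge binaries, no Python involved
  pixi shell                          # drops into a shell with the toolchain and libs on PATH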
Until PyPI and setuptools understand the binary intricacies, I don't think it will be able to fully replace Conda. This may mean that they need to have an epoch and API break in their packaging format and the registry.
uv, poetry etc. can be very useful when the binary dependencies are shallow and do not integrate deeply, or when you are simply happy living behind the Linux kernel in a container and distro binaries fulfill your needs.
When you need complex hierarchies of package versions where half of them are not compiled with your current version of the base image and you need to bootstrap half a distro (on all OS kernels too!), Conda is a lifesaver. There is nothing like it.
The Conda ecosystem is forced to solve this problem to a point, since ML libraries and their binary backends are terrible at keeping their binaries ABI-stable. Moreover, different GPUs have different capabilities and support different versions of GPGPU execution engines like CUDA. There is no easy way out without solving dependency hell.
It is also quite complex and demands a huge investment of time to understand its language, which isn't so nice to program in.
The number of cached combinations of various ABI and dependency settings is small with Nix. This means you need source compilation of a considerable number of dependencies. Conda generally contains every library built with the last 3 minor releases of Python.
Want to make sure a software stack works well on a Cray with MPI+cuda+MKL, macOS, and ARM linux, with both C++ and Python libraries? It’s possible with conda-forge.
It is probably the easiest way to install a lot of binary dependencies, good for people who don't have experience with software development and don't care about reproducibility.
Arbitrary examples, I know, but I moved a large software that was truly mixed C++ and python project to conda-forge and all sorts of random C++ dependencies were in there, which drastically simplified distribution and drastically reduced compile time.
If I had done it today, it might be nix+bazel, or maybe conda+bazel, but maintaining a world of C++ libraries for distribution as wheels does not sound like fun - especially because nobody is doing that work as a community now
Conda is like jQuery or Bootstrap: it was necessary before the official tools evolved. Now we don't need them any more, but they still are around for legacy reasons. You still need it, for example, for some molecular dynamics packages, but that's due to the package publishers choosing it.
  - Enums with inner types require hacks
  - No clean way to make your Rust enums into Python enums
  - More boilerplate than you should need. I think I will have to write my own macros/helpers to solve this. For example, you need getters and setters for each field. And the gotcha: you can't just macro them due to a PyO3 restriction.
  - No way to use other Python-exposed rust libs as dependencies directly. You have to re-expose everything. So, if your rust B lib depends on rust A, you will make a PyO3 Rust A package, but instead of re-using that in PyO3 Rust B, you will copy+paste your PyO3 Rust A boilerplate into PyO3 Rust B.
The curmudgeon in me feels the need to point out that fast, lightweight software has always been possible, it's just becoming easier now with package managers.
Rust is for me similar to C, just like you wrote: better and bigger, but not overwhelming the way C++ is (and Rust has cargo; I don't know if C++ has anything comparable).
I stayed for the native functional programming, first class enums, good parts of C++ and the ultimate memory safety.
Just look at this post: 1839 points and 1048 comments! That is insane. It's captured the hearts and minds of Python devs and I'm sure they know it.
I'm not against projects making money, just remember you'll likely pay a price later on once you invest in more of Astral's ecosystem. It's just temporarily free.
Progress is already underway. PEP 751 proposes a standardized format for lock files: https://peps.python.org/pep-0751/ This helps to reduce tool-specific lock-in.
uv is open source, so forking remains viable. Build metadata is committed, and conversion to other tools is feasible if needed.
However, we must all remain vigilant against the risk of lock-in.
The only thing that prevents lock-in is the religious zeal of most Python users to use anything presented by the PSF high priests, not technical merit.
The reason uv exists is the utter incompetence of PyPA.
If I write some OSS tool that becomes popular, and lose my job, I might just start monetizing it.
Active State, Enthought, Anaconda, now Astral.
[1] Discounting pure SaaS companies that just use Python but offer no tools.
I also like how you can manage Python versions very easily with it. Everything feels very "batteries-included" and yet local to the project.
I still haven't used it long enough to tell whether it avoids the inevitable bi-yearly "debug a Python environment day" but it's shown enough promise to adopt it as a standard in all my new projects.
You can also prepend the path to the virtual environment's bin/ (or Scripts/ on Windows). Literally all that "activating an environment" does is to manipulate a few environment variables. Generally, it puts the aforementioned directory on the path, sets $VIRTUAL_ENV to the venv root, configures the prompt (on my system that means modifying $PS1) as a reminder, and sets up whatever's necessary to undo the changes (on my system that means defining a "deactivate" function; others may have a separate explicit script for that).
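For reference, a minimal sketch of doing the same by hand on Linux/macOS (assuming a venv at ./.venv):

  export VIRTUAL_ENV="$PWD/.venv"
  export PATH="$VIRTUAL_ENV/bin:$PATH"   # now `python` and `pip` resolve inside the venv
  # "deactivating" is just restoring the old PATH (or closing the shell)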
I personally don't like the automatic detection of venvs, or the pressure to put them in a specific place relative to the project root.
> I also like how you can manage Python versions very easily with it.
I still don't understand why people value this so highly, but so it goes.
> the inevitable bi-yearly "debug a Python environment day"
If you're getting this because you have venvs based off the system Python and you upgrade the system Python, then no, uv can't do anything about that. Venvs aren't really designed to be relocated or to have their underlying Python modified. But uv will make it much faster to re-create the environment, and most likely that will be the practical solution for you.
``uv`` accomplishes the same thing, but it is another dependency you need to install. In some envs it's nice that you can do everything with the built-in Python tooling.
Well I do need some way to install multiple python versions in parallel, and ideally the correct python version would be used in each project automatically. I used to use pyenv for this, which puts shims in your path so that it can determine which python executable to run on the fly, but I found that it was sometimes very slow, and it didn’t work seamlessly with other tools. Specifically pipenv seemed to ignore it, so I’d have to use a workaround to point pipenv to the path to the pyenv-installed python executable.
When one tool does both python installs and dependency/venv management, then it can make these work seamlessly together, and I don’t need to screw up my path to make the version selection work either.
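For anyone curious, the uv version of that workflow looks roughly like this (a sketch; commands as documented in the current uv docs):

  uv python install 3.11 3.12   # fetch standalone interpreters, no system packages touched
  uv python pin 3.12            # writes .python-version for this project
  uv run python --version       # uv picks the pinned interpreter automatically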
At least major and minor, patch is rarely needed for python.
However, I also think many people, even many programmers, basically consider such external state "too confusing" and also don't know how they'd debug such a thing. Which I think is a shame since once you see that it's pretty simple it becomes a tool you can use everywhere. But given that people DON'T want to debug such, I can understand them liking a tool like uv.
I do think automatic compiler/interpreter version management is a pretty killer feature though, that's really annoying otherwise typically afaict, mostly because to get non-system wide installs typically seems to require compiling yourself.
uv, in addition to its raw speed, is very clever to record things in pyproject as the user interacts with it.
These are tools, if you choose to hold them wrong no one can stop you. uv didn't invent the screw driver or the knife, its novelty is as a Swiss Army Knife which put them all in one.
Why else is this discussion getting hundreds of comments?
For any random python tool out there, I had about a 60% chance it would work out of the box. uv is the first tool in the python ecosystem that has brought that number basically to 100%. Ironically, it's written in Rust because python does not lend itself well to distributing reliable, fast tools to end users.
I have managed reproducible Python services and software for multiple years now. This was solved already before uv; uv does it faster and maybe offers a bit more comfort, though I abstract away such tooling behind a simple Makefile anyway.
The reason you are having such a bad time getting random Python projects to work out of the box is that the people creating them did not spend the effort to make them reproducible, meaning that they do not ensure the setup has the same versions and checksums of every direct and transitive dependency. This can be facilitated using various tools these days: poetry, uv, and I am sure there are more. People are just clueless and think that a requirements.txt file with a few loose versions slung in is sufficient. It is not, and you end up with non-working project setups like the ones you refer to.
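For example, a sketch of one way to do that with uv's pip-compatible interface (pip-tools and poetry have equivalents), starting from a loose requirements.in:

  uv pip compile requirements.in --generate-hashes -o requirements.txt
  uv pip sync requirements.txt   # installs exactly the locked versions, nothing else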
Had, past tense, because of the metadata situation and the lack of pre-built wheels. The ecosystem has moved on.
> uv is the first tool in the python ecosystem that has brought that number basically to 100%.
Show me a Python tool that you can install and have work out-of-box with uv, but cannot install and have work out-of-box with pip.
> Ironically, it's written in Rust because python does not lend itself well to distributing reliable, fast tools to end users.
I have repeatedly shown that the slowness of pip is overwhelmingly due to its terrible (organically developed on top of legacy cruft from an era where people simply didn't have the same requirements) architecture, not due to being written in Python. Most of the work of installation is simply not CPU-bound — why would it be? — and the main task that is (optional pre-compilation of Python source to .pyc) is one of the few things where uv is dependent on the Python runtime (which, in turn, will do the work in C).
The pieces actually all existed for sure.
I can even write scripts to make it all happen.
But uv remains as a long overdue universal command line tool for python.
How does the rest of the world manage to survive without venvs? Config files in the directory. Shocking, really :-)))
The problem is, that would require support from the Python runtime itself (so that `sys.path` can be properly configured at startup) and it would have to be done in a way that doesn't degrade the experience for people who aren't using a proper "project" setup.
One of the big selling points of Python is that you can just create a .py file anywhere, willy-nilly, and execute the code with a Python interpreter, just as you would with e.g. a Bash script. And that you can incrementally build up from there, as you start out learning programming, to get a sense of importing files, and then creating meaningful "projects", and then thinking about packaging and distribution.
Node or PHP also work like normal Unix programs...
I wonder what good you think insults do? I could insult your use of English for example but would that make my argument better?
> PHP and node were not developed as general purpose scripting languages for use at the commandline and are very commonly used for specific purposes so there's no need for them to be like python.
Perl, Ruby, Lua, I can keep going. You're just nitpicking. Practically only Python uses the venv [1] approach from all the programming languages I've used.
[1] Manual activation needed, non portable pile of dependencies per project (by design, fairly sure the documentation mentions this is not a supported use case - even across quasi-identical machines !!!), etc. I stand by my decision to call venvs a "fractal of bad design".
As things are, I can share a venv with several projects, or have one for my account if I don't want to break the system tools. I can even write a trivial bash function to activate a venv automatically based on whether a venv exists in my project. It's so trivial to do and yet generates all this discussion.
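Something like this in .bashrc, for example (a rough sketch; the `cdv` name and the ./.venv location are just my choices):

  cdv() {
    cd "$@" || return
    [ -f .venv/bin/activate ] && source .venv/bin/activate
  }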
As for non-portability, that's the most specious and pathetic argument of the lot. Who can be bothered to make every library portable to every distro? What is the point of distros if one demands that one can run the same binaries everywhere? This is what containers were invented for. If you need the latest versions you might just, heaven forbid, have to compile something and fight with the problems of running it on a distro that it wasn't developed on.
Their single advantage over Python is that they are able to work fine without virtual environments, as they just load libraries from a relative path: That way, you can copy-paste a project directory, move it to another system with a copy of the interpreter binary, and… run the software. There is nothing clever about that; I’d even say Python's way of magically configuring a shell just to be able to locate files is the "clever" solution that nobody asked for!
Python venvs literally f*ed up the simplest form of deployment on the planet, scp. Yes, we have more complex solutions like Docker, another abomination (the software itself). Docker was invented in big part due to Python (not only, but it was a big factor).
Again, I use venvs. They're ok. But they're a stupid semi abstraction.
    python -m venv --copies .myvenvdir
There's nothing nice about this but it does protect you from a lot of issues where it might seem to work and then behave unexpectedly. e.g. if you had a different python on the destination.
Docker doesn't just help with scripts - it also manages the problem of binary compatibility with C and C++ (and whatever) libraries. You may not have had this problem with C/C++ code so you might imagine it's all about python but I can only say it's a misery that the C/C++ crowd have been suffering with for a very long time. How does one support 3 distros each with a different version of libffmpeg installed when you're making a media player? Sometimes there's a lot of "#if FFMPEG_MAJOR > 3" in the code to cope with it.
The distro developers build each package with a bunch of patches to adapt them to the distro they're in.
It's bad enough for the distro developers to curate all this when it's their job, but with python, devs are living in a more minimally curated world and in some applications like ML are now dealing with wide and deep dependency trees that rival a distro.
IMO perhaps "someone" should come up with some new distributions where they make everything work together.
So a lot of other major ecosystems are just self contained. All the "big" libraries are portable and written in the language itself, so they rarely plug into C/C++ (aka distribution/OS dependencies).
So Docker was basically built primarily for slow programming languages and I guess in a weird way, for C/C++, as you say? :-)))
For .pth files to work, they have to be in a place where the standard library `site` module will look. You can add your own logic to `sitecustomize.py` and/or `usercustomize.py` but then you're really no better off vs. writing the sys.path manipulation logic.
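The mechanism itself is tiny, for what it's worth: a .pth file is just a list of extra directories dropped into a site-packages directory that `site` already scans (a sketch; the lib/python3.12 path and the extra directory are illustrative, and the directory has to exist to be picked up):

  echo "$HOME/src/mylibs" >> .venv/lib/python3.12/site-packages/extra-paths.pth
  .venv/bin/python -c "import sys; print(sys.path)"   # the extra directory now shows up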
Many years ago, the virtual environment model was considered saner, for whatever reasons. (I've actually heard people cite performance considerations from having an overly long `sys.path`, but I really doubt that matters.) And it's stuck.
source - why are we using an OS level command to activate a programming language's environment
.venv - why is this hidden anyway, doesn't that just make it more confusing for people coming to the language
activate - why is this the most generic name possible as if no other element in a system might need to be called the activate command over something as far down the chain as a python environment
Feels dirty every time I've had to type it out and find it particularly annoying when Python is pushed so much as a good first language and I see people paid at a senior level not understand this command.
Because "activating an environment" means setting environment variables in the parent process (the shell that you use to run the command), which is otherwise impossible on Linux (see for example https://stackoverflow.com/questions/6943208).
> why is this hidden anyway, doesn't that just make it more confusing for people coming to the language
It doesn't have to be. You can call it anything you want, hidden or not, and you can put it anywhere in the filesystem. It so happens that many people adopted this convention because they liked having the venv in that location and hidden; and uv gives such venvs special handling (discovering and using them by default).
> why is this the most generic name possible as if no other element in a system might need to be called the activate command over something as far down the chain as a python environment
Because the entire point is that, when you need to activate the environment, the folder in question is not on the path (the purpose of the script is to put it on the path!).
If activating virtual environments shadows e.g. /usr/bin/activate on your system (because the added path will be earlier in $PATH), you can still access that with a full absolute path; or you can forgo activation and do things like `.venv/bin/python -m foo`, `.venv/bin/my-program-wrapper`, etc.
> Feels dirty every time I've had to type it out
I use this:
  $ type activate-local 
  activate-local is aliased to `source .local/.venv/bin/activate'
  $ cat .local/.gitignore 
  # Anything found in this subdirectory will be ignored by Git.
  # This is a convenient place to put unversioned files relevant to your
  # working copy, without leaving any trace in the commit history.
  *
If you know anyone who's hiring....
> which is otherwise impossible on Linux
Node, Rust, etc all manage it.
> Because the entire point is that...
I just mean there is a history of Python using overly generic naming: activate, easy-install. It just feels weird and dirty to me that you'd give such specific things names like these, and I think it's indicative of this ideology that Python is deep in the OS.
Maybe if I'd aliased the activate command a decade ago I wouldn't feel this way or think about it.
  $ (bash -c 'export foo=bar && echo $foo')
  bar
  $ echo $foo
  $
The git model is based on automatic detection of the .git folder, by having it in a location determined by convention. Many higher-level tools in the Python ecosystem have provided analogous handling of virtual environments over the years. Uv is doing it now; historically pyenv was used for that sort of thing, for example.
But when I said "which is otherwise impossible on Linux", I was specifically referring to the setting of environment variables, because OP asked why an activation script had to be sourced, and the reason is because that's what "activation" is.
This is a model that enough people liked using many years ago, to become dominant. It creates the abstraction of being "in" the virtual environment, while giving you the flexibility to put the actual file tree whereever you want.
Similar mindset to the original creators of venv, I imagine :-)
uv has increased my usage of python for production purposes because it's maintainable by a larger group of people, and beginners can become competent that much quicker.
Surely the effort of programming the actual code is so significant that starting a tool is a minor issue?
Why are people not using the system python? Perhaps it's too old or not old enough for some library that they have to use. This suggests there's a lot of change going on at the moment and it's not all synced up. I also suspect that people are using a very great number of different modules that change incompatibly all the time, and on top of that they need binary libraries of various kinds which are quite difficult to build and have all their own dependencies that python cannot install itself.
Rust has the advantage that they can build a world more completely out of rust and not worry as much about what's on the system already.
I'm glad uv is helping people.
If you're on a "stable" distro like Debian or Ubuntu LTS, that can be somewhere around 5 years old at the end of the stability period. And your system probably depends on its Python, so if you need a newer version of a library than the system's package manager provides you can't update it without risking breaking the system. Python itself has added several very nice new features in the last few versions, so anyone stuck on Ubuntu 22.04 LTS with Python 3.10 can't use newer Python features or libraries through their system's package manager.
I also value my time and life and some degree of standardization.
A language grows on its ability to create beginners, not to make the people who have learned it the harder way feel special at the expense of others.
Fortunately uv got written and we don't have a problem. I don't have to use it but I can when I want to.
If uv makes it invisible it is a step forward.
not that it's great to start with, but it does happen, no?
Installing a particular node version also becomes as easy as
    fnm install 24

Either the package manager is invoked with a different PATH (one that contains the desired Node/Java/whatever version as a higher priority item than any other version on the system).
Or the package manager itself has some way to figure that out through its config file.
Or there is a package manager launch tool, just like pyenv or whatever, which does that for you.
In practice it's not that a big of a deal, even for Maven, a tool created 21 years ago. As the average software dev you figure that stuff out a few weeks into using the tool, maybe you get burnt a few times early on for misconfiguring it and then you're on autopilot for the rest of your career.
Wait till you hear about Java's CLASSPATH and the idea of having a SINGLE, UNIFIED package dependency repo on your system, with no need for per-project dependency repos (node_modules), symlinks, or all of that stupidity.
CLASSPATH was introduced by Java in 1996, I think, and popularized for Java dependency management in 2004.
Activating a venv is just setting a few environment variables, including PATH, and storing the old values so that you can put them back to deactivate the environment.
The venvs created by the standard library `venv`, as well as by uv (and by the third-party `virtualenv` that formed the original basis for `venv`), also happen to include "activation" scripts that manipulate some environment variables. PYTHONPATH is not among these. The script prepends the venv's bin/ directory to PATH, so that its python symlink is found first. And it may unset PYTHONHOME.
  #!/usr/bin/env -S uv run --script
  # /// script
  # requires-python = ">=3.11"
  # dependencies = [ "modules", "here" ]
  # ///
But whoever runs this has to install uv first, so not really standalone.
"Lol, no I break into computer systems I am a hacker"
"Geeze hell no I have an axe, I am an OG hacker"
The two main runners I am aware of are uv and pipx. (Any compliant runner can be referenced in the shebang to make a script standalone where shebangs are supported.)
Small price to pay for escaping python dependency hell.
It will install and use distribution packages, to use PyPA's terminology; the term "module" generally refers to a component of an import package. Which is to say: the names you write here must be the names that you would use in a `uv pip install` command, not the names you `import` in the code, although they may align.
This is an ecosystem standard (https://peps.python.org/pep-0723/) and pipx (https://pipx.pypa.io) also supports it.
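A concrete example of that name distinction, in the same inline-metadata format (a sketch; beautifulsoup4 vs. bs4 is the classic case):

  #!/usr/bin/env -S uv run --script
  # /// script
  # requires-python = ">=3.11"
  # dependencies = ["beautifulsoup4"]   # the distribution name you'd `uv pip install`
  # ///
  import bs4                            # ...but the import package is called bs4
  print(bs4.__version__)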
linux coreutils have supported this since 2018 (coreutils 8.30); amusingly it is the same release that added `cp --reflink`. AFAIK you have to opt out by having `POSIX_CORRECT=1` or `POSIX_ME_HARDER=1` or `--pedantic` set in your environment. [1]
freebsd core utils have supported this since 2008
MacOS has basically always supported this.
---
1. Amusingly, despite `POSIX_ME_HARDER` not being official, a large swath of coreutils support it. https://www.gnu.org/prep/standards/html_node/Non_002dGNU-Sta...
This isn't a knock against UV, but more a criticism of dynamic dependency resolution. I'd feel much better about this if UV had a way to whitelist specific dependencies/dependency versions.
uv installing deps is hardly more risky.
Scanning for external dependencies is common but not so much internal private libraries.
I've used Tiger/Saint/Satan/COPS in the distant past. But I think they're somewhat obsoleted by modern packaging and security like apparmor and selinux, not to mention docker and similar isolators.
uv executes http://somemirror.com/some-version
most people like their distro to vet these things. uv et al. had a reason when Python 2 and 3 were a mess. I think that time is way behind us. pip is mostly to install libraries, and even that is mostly already done by the distros.
I meant it's easy to inspect your script's logic: just look at it. Much harder to audit the code in dependencies though…
But it’s much harder to inspect what the imports are going to do and be sure they’re free of any unsavory behavior.
It’s the script contents that count, not just dependencies.
Deno-style dependency version pinning doesn’t solve this problem unless you check every hash.
If you don't care about being ecosystem-compliant (and I am sure malware does not), it's only a few lines of Python to download the code and eval it.
curl -LsSf https://astral.sh/uv/install.sh | sh
Also isn't great. But that's how homebrew is installed, so ... shrug ... ?
Not to bash uv/homebrew, they are better than most _easy_ alternatives.
I will happily copy-paste this from any source I trust, for the same reason I'll happily install their software any other way.
For anything that I want to depend on, I prefer stronger auditability to ease of install. I get it, theoretically you can do the exact same thing with curl/sh as with git download/inspecting dependencies, installing the source and so on. But in reality, I'm lazy (and per another thread, a 70s hippie) and would like to nix any temptation to cut corners in the bud.
But then I'm a weirdo that takes personal offense at tools hijacking my rc / PATH, and keep things like homebrew at arm's length, explicitly calling shellenv when I need to use it.
(sadly, uv cannot detect the release date of some packages. I'm looking at you, yaml!)
The man page tells me:
  -S, --split-string=S
         process and split S into separate arguments; used to pass multi‐
         ple arguments on shebang lines
-S causes the string to be split on spaces and so the arguments are passed correctly.
So in fact "-S" is not passed as a separate argument, but as a prefix in the first (and only) argument, and env then extracts it and acts accordingly:
  $ /usr/bin/env "-S echo deadbeef"
  deadbeef
I want to be able to ship a bundle which needs zero network access to run, but will run.
It is still frustratingly difficult to make portable Python programs.
My current hobby language is janet. Creating a statically linked binary from a script in janet is trivial. You can even bring your own C libraries.
Although several variations on this theme already exist, I'm sure. https://github.com/pex-tool/pex/ is arguably one of them, but it's quite a bit bulkier than what I'm looking for.
As long as you have internet access, and whatever repository it's drawing from is online, and you may get different version of python each time, ...
But, yes, python scripts with in-script dependencies plus uv to run them doesn't change dependency distribution, just streamlines use compared to manual setup of a venv per script.
When I drop into a Node.js project, usually some things have changed, but I always know that if I need to, I can find all of my dependencies in my node_modules folder, and I can package up that folder and move it wherever I need to without breaking anything, needing to reset my PATH or needing to call `source` inside a Dockerfile (oh lord). Many people complain about Node and npm, but as someone who works on a million things, Node/npm is never something I need to think about.
Python/pip though… Every time I need to containerize or setup a Python project for some arbitrary task, there’s always an issue with “Your Linux distro doesn’t support that version of Python anymore”, forcing me to use a newer version than the project wants and triggering an avalanche of new “you really shouldn’t install packages globally” messages, demanding new —yes-destroy-my-computer-dangerously-and-step-on-my-face-daddy flags and crashing my automated scripts from last year.
And then there’s Conda, which has all of these problems and is also closed source (I think?) and has a EULA, which makes it an even bigger pain to automate cleanly (And yes I know about mamba, and miniconda, but the default tool everyone uses should be the one that’s easy to work with).
And yes, I know that if I was a full-time Python dev there’s a “better way” that I’d know about. But I think a desirable quality for languages/ecosystems is the ability for an outsider to drop in with general Linux/Docker knowledge and be able to package things up in a sometimes unusual way. And until uv, Python absolutely failed in this regard.
I think a lot of the decades old farce of Python package management would have been solved by this.
https://peps.python.org/pep-0582/
https://discuss.python.org/t/pep-582-python-local-packages-d...
Having dependency cache and build tool that knows where to look for it is much superior solution.
If you have local dependency repo and dependency manifest, during the build, you can either:
1. Check if local repo is in sync - correct build, takes more time
2. Skip the check - risky build, but fast
If the dependencies are only in the cache directory, you can have both - correct and fast builds.
E.g.
  $ ls -l ./node_modules/better-sqlite3
  ... node_modules/better-sqlite3 -> .pnpm/better-sqlite3@12.4.1/node_modules/better-sqlite3

Introducing a directory that needs to stay in sync with the dependency manifest will always lead to such problems. It is good that Python developers do not want to repeat such a mistake.
This is a problem I've never encountered in practice. And it's not like you don't have to update the dependencies in Python if they are different per-branch.
What's the "this" that is supposedly always your issue? Your comment is phrased as if you're agreeing with the parent comment but I think you actually have totally different requirements.
The parent comment wants a way to have Python packages on their computer that persist across projects, or don't even have a notion of projects. venv is ideal for that. You can make some "main" venv in your user directory, or a few different venvs (e.g. one for deep learning, one for GUIs, etc.), or however you like to organise it. Before making or running a script, you can activate whichever one you prefer and do exactly like parent commenter requested - make use of already-installed packages, or install new ones (just pip install) and they'll persist for other work. You can even switch back and forth between your venvs for the same script. Totally slapdash, because there's no formal record of which scripts need which packages but also no ceremony to making new code.
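Concretely, that slapdash-but-workable setup is just something like (a sketch; the venv name and packages are arbitrary):

  python -m venv ~/venvs/main          # one-time setup
  source ~/venvs/main/bin/activate     # before hacking on whatever script
  pip install requests rich            # persists for any later work in that venv
  python my_script.py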
Whereas your requirements seem to be very project-based - that sounds to me like exactly the opposite point of view. Maybe I misunderstood you?
    > Python/pip though… Every time I need to containerize or setup a Python project for some arbitrary task, there’s always an issue with “Your Linux distro doesn’t support that version of Python anymore” [...]
(1) How old must the Python version of those projects be, to not be supported any longer with any decent GNU/Linux distribution?
(2) Are you not using official Python docker images?
(3) What's pip gotta do with a Python version being supported?
(4) How does that "Your Linux distro doesn’t support that version of Python anymore" show itself? Is that a literal error message you are seeing?
    > [...] demanding new —yes-destroy-my-computer-dangerously-and-step-on-my-face-daddy flags and crashing my automated scripts from last year
(1) Why are you not using virtual environments?
(2) You are claiming Node.js projects to be better in this regard, but actually they are just creating a `node_modules` folder. Why then is it a problem for you to create a virtual environment folder? Is it merely, that one is automatic, and the other isn't?
    > This was always my issue with pip and venv: I don’t want a thing that hijacks my terminal and PATH, flips my world upside down and makes writing automated headless scripts and systemd services a huge pain.
Debian-13 defaults to Python-3.13. Between Python-3.12 and Python-3.13 the support for `pkg_config` got dropped, so pip projects like
https://pypi.org/project/remt/
break. What I was not aware of: `venv`s need to be created with the version of python they are supposed to be run with. So you need to have a downgraded Python executable first.
This is one of uv’s selling points. It will download the correct python version automatically, and create the venv using it, and ensure that venv has your dependencies installed, and ensure that venv is active whenever you run your code. I’ve also been bit by the issue you’re describing many times before, and previously had to use a mix of tools (eg pyenv + pipenv). Now uv does it all, and much better than any previous solution.
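In practice that looks something like this (a sketch, with 3.12 standing in for whatever version the project actually needs):

  uv venv --python 3.12 .venv   # downloads a 3.12 interpreter if none is installed
  uv pip install remt           # installs into the freshly created .venv
  uv pip list                   # confirm what ended up in the venv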
Would you help me make it work?
  docker run -it --rm -v$(pwd):/venv --entrypoint python python:3.12-alpine -m venv /venv/remt-docker-venv
  cd remt-docker-venv/
  source bin/activate
  python --version
  bash: python: command not found

You could also pass the `--copies` parameter when creating the initial venv, so it's a copy and not symlinks, but that is not going to work if you're on MacOS or Windows (because the binary platform is different to the Linux that's running the container), or if your development Python is built with different library versions than the container you're starting.
The problem is you are mounting a virtual environment you have built in your development environment into a Docker container. Inside your virtual environment there's a `python` binary that in reality is a symlink to the python binary in your OS:
  cd .venv
  ls -l bin/python
  lrwxr-xr-x@ 1 myuser  staff  85 Oct 29 13:13 bin/python -> /Users/myuser/.local/share/uv/python/cpython-3.13.5-macos-aarch64-none/bin/python3.13
The most basic fix would be recreating the virtual environment inside the container, so from your project (approximately, I don't know the structure):
   docker run -it --rm -v$(pwd):/app --entrypoint ash ghcr.io/astral-sh/uv:python3.12-alpine
  / # cd /app
  /app # uv pip install --system -r requirements.txt
  Using Python 3.12.12 environment at: /usr/local
  Resolved 23 packages in 97ms
  Prepared 23 packages in 975ms
  Installed 23 packages in 7ms
  [...]
  /app # python
  Python 3.12.12 (main, Oct  9 2025, 22:34:22) [GCC 14.2.0] on linux
  Type "help", "copyright", "credits" or "license" for more information.
  # First run
   docker run -ti --rm --volume .:/app --volume uvcache:/uvcache -e UV_CACHE_DIR="/uvcache" -e UV_LINK_MODE="copy" --entrypoint ash ghcr.io/astral-sh/uv:python3.12-alpine
  / # cd /app
  /app # uv pip install -r requirements.txt --system
  Using Python 3.12.12 environment at: /usr/local
  Resolved 23 packages in 103ms
  Prepared 23 packages in 968ms
  Installed 23 packages in 16ms
  [...]
  # Second run
   docker run -ti --rm --volume .:/app --volume uvcache:/uvcache -e UV_CACHE_DIR="/uvcache" -e UV_LINK_MODE="copy" --entrypoint ash ghcr.io/astral-sh/uv:python3.12-alpine
  / # cd /app
  /app # uv pip install -r requirements.txt --system
  Using Python 3.12.12 environment at: /usr/local
  Resolved 23 packages in 10ms
  Installed 23 packages in 21ms
https://docs.astral.sh/uv/guides/integration/docker/#develop...
---
Edit notes:
  - UV_LINK_MODE="copy" is to avoid a warning when using the cache volume
  - Creating the venv with `--copies` and mounting it into the container would fail 
    if your host OS is not exactly the same as the containers, and also defeats in a 
    way the use of a versioned Python container.

pip and venv are not such things. The activation script is completely unnecessary, and provided as a convenience for those to whom that workflow makes more sense.
> Every time I need to containerize or setup a Python project for some arbitrary task, there’s always an issue with “Your Linux distro doesn’t support that version of Python anymore“
I can't fathom why. First off, surely your container image can just pin an older version of the distro? Second, right now I have Python versions 3.3 through 3.14 inclusive built from source on a very not-special consumer Linux distro, and 2.7 as well.
> and triggering an avalanche of new “you really shouldn’t install packages globally” messages, demanding new —yes-destroy-my-computer-dangerously-and-step-on-my-face-daddy flags and crashing my automated scripts from last year.
Literally all you need to do is make one virtual environment and install everything there, which again can use direct paths to pip and python without sourcing anything or worrying about environment variables. Oh, and fix your automated scripts so that they'll do the right thing next time.
> I know that if I was a full-time Python dev there’s a “better way” that I’d know about.
Or, when you get the "you really shouldn't install packages globally" message, you could read it — as it gives you detailed instructions about what to do, including pointing you at the documentation (https://peps.python.org/pep-0668/) for the policy change. Or do a minimum of research. You found out that venvs were a thing; search queries like "python venv best practices" or "python why do I need a venv" or "python pep 668 motivation" or "python why activate virtual environment" give lots of useful information.
Literally, my case. I recently had to compile an abandoned six-year-old scientific package written in C with Python bindings. I wasn’t aware that modern versions of pip handle builds differently than they did six years ago — specifically, that it now compiles wheels within an isolated environment. I was surprised to see a message indicating that %package_name% was not installed, yet I was still able to import it. By the second day, I eventually discovered the --no-build-isolation option of pip.
This works because of the relative path to the pyenv.cfg file.
Python sticks out for having the arrogance to think that it’s special, that “if you’re using Python you don’t need Docker, we already solved that problem with venv and conda”. And like, that’s cute and all, but I frequently need to package Python code and code in another language into one environment, and the fact that their choice for “containerizing” things (venv/conda) plays rudely with every other language’s choice (Docker) is really annoying.
If that's not good enough for you, you could do some devops stuff and build a docker container in which you compile Python.
I don't see where it is different from some npm project. You just need to use the available resources correctly.
The shame is ... it never had to be that way. A venv is just a directory with a pyvenv.cfg, symlinks to an interpreter in bin, and a site-packages directory in lib. Running anything with venv/bin/python _is_ running in the virtual environment. Pip operations in the venv are just venv/bin/python -m pip ... . All the source/deactivate/shell nonsense obfuscating that reality did a disservice to a generation of python programmers.
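To make that concrete (a sketch; the exact layout varies slightly by platform):

  python -m venv demo
  ls demo                                   # bin/ include/ lib/ pyvenv.cfg (Scripts\ on Windows)
  demo/bin/python -m pip install requests   # "in" the venv, nothing sourced
  demo/bin/python -c "import requests; print(requests.__file__)"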
It isn't that way. Nothing is preventing you from running the venv's python executable directly.
But the original designer of the concept appears to have thought that activation was a useful abstraction. Setting environment variables certainly does a lot to create the feeling of being "in" the virtual environment.
Seriously, this is why we have trademarks. If Anaconda and Conda (a made-up word that only makes sense as a nickname for Anaconda and thus sounds like it’s the same thing) are two projects by different entities, then whoever came second needs to change their name, and whoever came first should sue them to force them. Footguns like this should not be allowed to exist.
Anaconda suddenly increased the licensing fees like Broadcom did with VMware; many companies stopped using it because of the sudden increase in costs.
https://blog.fulcrumgenomics.com/p/anaconda-licensing-change... https://www.theregister.com/2024/08/08/anaconda_puts_the_squ...
This is not anything like a fact. For three years now (since the 3.11 release) Python distributions on Linux have in fact taken special measures to prevent the user from using tools other than the system package manager to install into the global environment. And for thirteen years (since the 3.3 release) Python has offered standard library functionality to create isolated environments specifically to avoid that problem. (And that functionality is based on a third party library with public releases going back eighteen years.)
Pip is designed around giving you the freedom to choose where those environments are (by separately creating them) and your strategy for maintaining them (from a separate environment per-project as a dev, to a single-environment-still-isolated-from-the-system for everything as a data scientist, and everything in between).
Treating python as a project level dependency rather than a system level dependency is just an excellent design choice.
Nobody is treating Python as a project level dependency. Your Linux distro treats it as a system level dependency, which is exactly why you encountered the problem you did.
When you create a virtual environment, that does not install a Python version. It just makes symlinks to a base Python.
Building Python from source, and setting it up in a way that doesn't interfere with the package manager's space and will cause no problems, is easy on major distros. I have access to over a dozen builds right now, on Mint which is not exactly a "power user" distro (I didn't want to think about it too much when I initially switched from Windows).
Only if that "program that uses Python" is itself provided by a system package for global installation.
> so you have python packages bundled as system packages which can conflict with that same package installed with pip.
Right; you can choose whether to use system packages entirely within the system-package ecosystem (and treat "it's written in Python" as an irrelevant implementation detail); or you can create an isolated environment so as to use Python's native ecosystem, native packaging, and native tools.
I don't know why anyone would expect to mingle the two without problems. Do you do that for other languages? When I tried out Ruby, its "Bundler" and "gem" system was similarly isolated from the system environment.
https://uploads.dailydot.com/2024/04/damn-bitch-you-live-lik...
You can create venvs wherever you please and then just install stuff into them. Nobody forces the project onto you, at work we don't even use the .toml yet because it's relatively new, we still use a python_requirements.txt and install into a venv that is global to the system.
At work for us we use uv pip freeze to generate a more strict requirements file.
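i.e. roughly (a sketch; the .lock.txt name is just a convention I'm making up here):

  uv pip install -r python_requirements.txt      # loose, human-edited constraints
  uv pip freeze > python_requirements.lock.txt   # exact versions that actually got installed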
> [...] at work we don't even use the .toml yet because it's relatively new, we still use a python_requirements.txt and install into a venv that is global to the system.
Unless your `python_requirements.txt` also carries checksums, like uv's lock files or poetry's lock files, that is. Though of course things get spicy and non-reproducible again if you then have multiple projects/services, each with their own `python_requirements.txt`, all installing into that same global venv ...
I think you're basically suggesting that you'd have a VM or something that has system-high packages already preinstalled and then use UV on top of it?
I don't see anything resembling "environments" in the list of features or in the table of contents. In some sections there is stuff like "When working on a project with uv, uv will create a virtual environment as needed", but it's all about environments as tied to particular projects (and maybe tools).
You can use the `uv venv` and the `uv pip` stuff to create an environment and install stuff into it, but this isn't really different from normal venvs. And in particular it doesn't give me much benefit over conda/mamba.
I get that the project-based workflow is what a lot of people want, and I might even want it sometimes, but I don't want to be forced into foregrounding the project.
The advantage of being forced to do this is other people (including yourself on a new laptop) can clone your project, run uv install and get working. It's the death of "works on my machine" and "well it'll take them a couple of weeks to properly get up and running".
I know this might be a strange idea on HN, but tons of people writing code in Python, who need access to PyPI packages to do what they're doing, have no intention whatsoever of providing a clonable project to others, or sharing it in any other way, or indeed expecting anyone else on the planet to run the code.
It takes a couple of seconds to setup, and then you just use uv add instead of (uv) pip install to add things to the environment, and the project file is kept in sync with the state of the environment. I'm not really understanding what it is for the workflow you describe that you expect a tool to do that uv isn’t providing?
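i.e. something like (a sketch; `playground` and the packages are arbitrary):

  uv init playground && cd playground   # creates pyproject.toml (plus a stub script)
  uv add numpy pandas                   # records deps in pyproject.toml + uv.lock, installs into .venv
  uv run python                         # a REPL inside that environment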
How about the advantage of not taking an entire lunch break to resolve the environment every time you go to install a new library?
That was the biggest sticking point with conda/mamba for me. It's been a few years since I last used them but in particular with geospatial packages I would often run into issues.
   uv venv fooenv
   .\fooenv\Scripts\activate
   uv pip install numpy
   uv pip freeze
   uv pip uninstall numpy
   deactivate
This is covered in the section of the docs titled "The pip interface": https://docs.astral.sh/uv/pip/
Performance?
Note that I'm mostly in the research/hobby environments - I think this approach (and Python in general, re: some other discussions here about the language) works really well, especially for the latter, but the closer you get to "serious" work, the more sense the project environment approach makes of course
Example: https://treyhunner.com/2024/12/lazy-self-installing-python-s...
If not, where do you see a meaningful difference?
tbh this has been a sticking point for me too with uv (though I use it for everything now). I just want to start up a repl with a bunch of stuff installed so I can try out a bunch of stuff. My solution now is to have a ~/tmp dir where I can mess around with all kinds of stuff (not just python) and there I have a uv virtualenv with all kinds of packages pre-installed.
Right, it's this. I get the feeling a lot of people here don't work that way though. I mean I can understand why in a sense, because if you're doing something for your job where your boss says "the project is X" then it's natural to start with a project structure for X. But when I'm going "I wonder if this will work..." then I want to start with the code itself and only "productionize" it later if it turns out to work.
I hope the people behind UV or someone else address this: a repl/notebook thing that runs on a .venv preinstalled with stuff defined in some config file.
So, create a project as a playground, put what you want it to include (including something like Jupyter if you want notebooks) in the pyproject.toml and... use it for that?
What do you want a tool to do for that style of exploration that uv doesn't already do? If you want to extract stuff from that into a new, regular project, sure, that could use some new tooling.
Do you need a prepackaged set of things to define the right “bunch of stuff” for the starting point? Because that will vary a lot by what your area of exploration is.
    uv run --with=numpy,pandas python
uv.lock is a blessing.
That gets problematic if environments go out of sync, or you need different versions of python or dependencies.
So you are right, you probably won't benefit a lot if you just have one big environment and that works for you, but once you pull things into a project, uv is the best tool out there atm.
You could also just create a starter project that has all the things you want, and then later on pull it out, that would be the same thing.
I use the 'bare' option for this
or `uvx --with my-package ipython`
Could it be that you’re just used to separate environments causing so much pain that you avoid it unless you’re serious about what you’re doing?
It is still beneficial not to install stuff system-wide, since doing that makes it easy to forget which things you already have installed and which are actually missing dependencies.
Keeping track of dependencies is kind of part of a programmer's work, so as long as you're writing these things mostly for yourself, do whatever you like. And I say that as someone who treats everything like a project that I will forget about in 3 days and need to deploy on some server a year later.
uv has a script mode, a temp env mode, and a way to superimpose a temp env on top of an existing env.
See: https://www.bitecode.dev/p/uv-tricks
That's one of the selling points of the tool: you don't need a project, you don't need to activate anything, you don't even need to keep code around.
Yesterday I wanted to mess around with loguru in ipython. I just ran `uvx --with loguru ipython` and I was ready to go.
Not even a code file to open. Nothing to explicitly install nor to clean up.
For a tool that is that fantastic and creates such enthusiasm, I'm always surprised by how few of its features people know about. It can do crazy stuff.
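To make the script mode concrete: uv understands PEP 723 inline metadata, so a single Python file can declare its own dependencies and `uv run` will provision a throwaway environment for it. A minimal sketch (the file name and dependency are just examples):

    # demo.py -- run with: uv run demo.py
    # /// script
    # requires-python = ">=3.12"
    # dependencies = ["loguru"]
    # ///
    from loguru import logger

    logger.info("dependencies resolved into a temporary environment")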
That is exactly 100% what I demand. Projects should be - must be - completely isolated from one another.
Quite frankly anything else seems utterly insane to me.
Uv combined with type hints reaching critical mass in the Python ecosystem, and how solid PyLance is in VSCode, feels so good it has made me consider investing in Python as my primary language for everything. But then I remember that Python is dog slow compared to other languages with comparable ergonomics and first-class support for static typing, and...idk it's a tough sell.
I know the performance meta in Python is to...not use python (bind to C, Rust, JVM) - and you can get pretty far with that (see: uv), but I'd rather spend my limited time building expertise in a language that isn't constantly hemorrhaging resources unless your code secretly calls something written in another language :/
There are so many good language options available today that compete. Python has become dominant in certain domains though, so you might not have a choice - which makes me grateful for these big steps forward in improving the tooling and ecosystem.
The Python team need not feel any pressure to change in order to compete; Python has already done quite well and found its niche.
I am a user of pip binaries. Every few years one of them breaks.
As far as I understand, developers never cared about pinning their dependencies and python is fast to deprecate stuff.
  $ uvx remt
      Built pygobject==3.54.5
      Built remt==0.11.0
      Built pycairo==1.28.0
  Installed 12 packages in 9ms
  Traceback (most recent call last):
    File "/home/user/.cache/uv/archive-v0/BLXjdwASU_oMB-R4bIMnQ/bin/remt", line 27, in <module>
    import remt
  File "/home/user/.cache/uv/archive-v0/BLXjdwASU_oMB-R4bIMnQ/lib/python3.13/site-packages/remt/__init__.py", line 20, in <module>
    import pkg_resources
  ModuleNotFoundError: No module named 'pkg_resources'
  $ uvx maybe
  × Failed to build `blessings==1.6`
  ├─▶ The build backend returned an error
  ╰─▶ Call to `setuptools.build_meta:__legacy__.build_wheel` failed (exit status:
      1)
      [stderr]
      /home/user/.cache/uv/builds-v0/.tmpsdhgNf/lib/python3.13/site-packages/setuptools/_distutils/dist.py:289:
      UserWarning: Unknown distribution option: 'tests_require'
        warnings.warn(msg)
      /home/user/.cache/uv/builds-v0/.tmpsdhgNf/lib/python3.13/site-packages/setuptools/_distutils/dist.py:289:
      UserWarning: Unknown distribution option: 'test_suite'
        warnings.warn(msg)
      error in blessings setup command: use_2to3 is invalid.
      hint: This usually indicates a problem with the package or the build
      environment.
  help: `blessings` (v1.6) was included because `maybe` (v0.4.0) depends on
        `blessings==1.6`
Having said that, our team is having to do a bunch of work to move to a new python version for our AWS serverless stuff, which is not something I'd have to worry about with Go (for example). So I agree, there is a problem here.
If so, you also cannot attribute to Python the virtues of Python lib developers either (in particular, a large library ecosystem).
Also, the layout syntax is bad. I am not talking about layout itself; I do like Haskell syntax (despite it being weird about parens). But if I write `a = b +` in Python on one line, I get a syntax error, although the parser could instead assume that the expression is not terminated and must (obviously) continue on the next (indented) line. I hate that I need to use `\` or `(...)` to make this clear to the parser. I have written parsers myself, and I know that the parser knows what needs to follow; Python itself shows me that it knows, by raising a completely unnecessary syntax error.
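To spell out the complaint with a small example:

    b, c = 1, 2

    # a = b +
    #     c
    # ...is a SyntaxError: the parser refuses to assume the expression continues.

    a = (b +    # parentheses make the continuation explicit...
         c)

    a = b + \
        c       # ...as does a trailing backslash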
It feels to me that the Python language design confuses `simple and easy` with `primitive`. It feels like a design made without knowledge of programming language research and ergonomics. It feels to me like a dangerous toy language, and I am never sure which of my stupid mistakes will be found by the compiler/interpreter, and which will just be silently misinterpreted. And which of my perfectly valid requests will be rejected with an exception. In some aspects it feels less safe than C, particularly due to the lack of scoping and the danger of reuse of variables or introduction of new function local variables when actually, outer 'scope' variables were intended to be written.
This is not really meant as a rant, but it is a personal opinion, and I try to lay out my reasons. I am not trying to shame anyone who loves Python, but I just want to clarify that there are people who hate Python.
What would that even do? Is that the equivalent of `a = a or (i > 0)`? Python does not have a "||" operator.
> the `a if b else c` confusion
I'll agree that Python really dropped the ball on implementing a ternary operator, but I guess Guido really didn't want the C version.
> the weirdness of comprehensions (particularly nested `for`s)
If a comprehension is getting weird because of nesting, then I'd change it to not be a comprehension. I'd rather have nested `for` loops than nested comprehensions.
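For example, flattening a list of lists can be written either way; the comprehension's for-clauses read left to right (outer loop first), which is exactly the part people tend to get backwards:

    matrix = [[1, 2], [3, 4], [5, 6]]

    # Nested comprehension: outer loop first, then inner
    flat = [x for row in matrix for x in row]

    # Equivalent explicit loops, arguably clearer once nesting appears
    flat = []
    for row in matrix:
        for x in row:
            flat.append(x)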
> the exceptions that are raised for non-exceptional cases
I'd be interested in an example of this.
> like trying to delete a non-existing dict entry (need an additional `if` or a `try` block) or access a non-existing map entry (need to use `.get()` instead of `[]` or a `try` block)
I suggest thinking more about the Zen of Python. Specifically, explicit is better than implicit, and errors should never pass silently. If you're trying to delete a non-existing dict entry, or trying to access a non-existing entry, then in most cases, you have a bug somewhere. Basically, Python believes that your code is expecting that dict entry to exist. Forcing you to use .get or use an `if` is a measure to make your code explicitly declare that it's expected that the dict entry might not exist.
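Concretely, the explicit spellings for "this key might be missing" look like this:

    d = {"a": 1}

    # d["b"] or `del d["b"]` would raise KeyError: Python assumes a missing key is a bug
    value = d.get("b", 0)   # read with a default instead of an exception
    d.pop("b", None)        # delete-if-present, no exception
    try:
        del d["b"]
    except KeyError:
        pass                # absence is explicitly expected here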
> But if I write `a = b +` in Python on one line, then I get a syntax error [..]
Yeah, the parser could certainly be written to handle this case, but it was deliberately written not to.
> It feels to me like a dangerous toy language
Toy language, I could see. Dangerous? Not at all. I could call it opinionated, though.
> and I am never sure which of my stupid mistakes will be found by the compiler/interpreter, and which will just be silently misinterpreted.
Meanwhile, I'd look at C and think "I'm not sure which of my mistakes will lead to a memory leak or an exploitable buffer overflow."
> In some aspects it feels less safe than C, particularly due to the lack of scoping and the danger of reuse of variables or introduction of new function local variables when actually, outer 'scope' variables were intended to be written.
I'd argue the exact opposite: It's more dangerous to allow you to accidentally clobber a global variable when you meant to create a local one.
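Python actually errs on the side of not clobbering: assigning to a name inside a function makes it local, and writing to the module-level binding has to be declared explicitly. A small sketch:

    counter = 0

    def bump_wrong():
        counter += 1    # raises UnboundLocalError when called: the assignment made `counter` local

    def bump_right():
        global counter  # clobbering the module-level name must be spelled out
        counter += 1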
> This is not really meant as a rant, but it is a personal opinion, and I try to lay out my reasons.
I think the core issue is that Python tries to adopt a different paradigm than languages like C, and it's a paradigm that you just strongly disagree with. Personally, I love it. What's funny is that when I first saw Python, I was like "This language sucks, it makes things too easy and it holds your hand." After using it extensively at work though, I find myself saying "This language is great! It makes things so easy and holds your hand!"
Every Python codebase I've had to look after has rotted to the point it doesn't even build and is a maintenance nightmare.
I also hate whitespace instead of {}
like with everything else these days, it's about living with it and trying to make the best of the good parts of it
i remember getting told in the 00s that i would get used to and love the whitespace-based block definition, and boy i hate it now more than ever, with 1000s of hours spent looking at and coding in python
but it is what it is; for whatever reason it has become a must in many industries, a lot like Java took over some earlier on (although that seems to be fading), and javascript is a must in others
it really isn't just about programming languages these days: you either learn to live with some massive annoyances and practices you may hate, or withdraw entirely from society
Given the choice though, I typically don't reach for Python unless there's an obvious reason to (some good library for my task, or a team I'm helping is only Python people / devops, etc.)
Other platforms like Java and .NET enjoy one to two decades of life for source before it becomes mildly challenging to build.
Java enjoys months of life for source before it becomes impossible to build, because some prehistoric version of Gradle with shitty Groovy script stopped working.
Why?
It's why projects use so many packages in the first place.
The grandparent's scientific/Go interests suggest a need for a large, working ecosystem, and there are probably several places in ArrayMancer which need some love for "fallback cases" to work with tcc (or elsewhere in the, as you note, much smaller Nim ecosystem).
EDIT: E.g., for ArrayMancer you need to make stb_image.h work with tcc by adding a nim.cfg/config.nims `passC="-DSTBIW_NO_SIMD -DSTBI_NO_SIMD"` directive, though. And, of course, to optimize compile times, just generally speaking, you always want to import only exactly what you really need which absolutely takes more time/thought.
I try to learn the basics of a new programming language regularly: I write a small Lisp-like interpreter in it, giving myself a maximum of 2 days. It covers things like string handling, regexps, recursion, lambdas, garbage collection, ... and I run it through a tiny test suite.
In Python and JS it was easy, and the code was still very readable. In C++, the language I earn my money with, I had a bug I was not able to fix within the given time frame, happening only with gcc and not clang, presumably some undefined behavior. In C, I was able to add my own garbage collector with much less work than I expected ... but
Nim really impressed me: it really felt almost like writing Python, but it produced an executable that runs on its own and is quite a bit faster.
I work mostly in the embedded world, where ecosystem matters somewhat less. If any employer ever gave me the chance to choose a language myself, I would definitely try to write a first prototype in Nim.
It also compiles to JS.
Its syntax is significantly different from Python's, but it does have operator overloading.
Its performance is comparable to Go's, and it has good concurrency support, although it is different from Go's and there are still some rough edges with "async" code. Compile times aren't as good as Go's, though.
The type system is excellent, although I'm not really sure what you mean by "flexible".
And FFI support is great.
“Flexible” means the range from gradual typing (‘any’) to Turing complete conditional types that can do stuff like string parsing (for better or for worse). Structural typing vs instanceof and so on.
There’s really no comparison between Typescript’s type system and Rust’s. It’s worth noting though that Typescript is a bolted on typesystem that has explicitly traded soundness for flexibility. That’s the real tradeoff between Rust and TS IMHO. Rust is sound and expressive but not flexible, while Typescript is expressive and flexible but not sound.
So the flexibility means one gets to pretend they are doing typing, but in reality they get to sprinkle the code with void casts, because expressing ideas is apparently hard? For better or worse, that is probably the main pillar Rust is designed on.
Rust compiles fast if your translation units don’t need too much macro expansion. You add something like Diesel, and you can call for the lunch break.
It’s also worth mentioning Scala with Scala Native, and maybe Kotlin with Kotlin/Native. OpenJDK Project Panama FFM now gives a better FFI experience than JNI.
Worse, TypeScript may even run out of its allocated memory sometimes.
Go feels like C with training wheels.
Rust feels like riding a bike where one leg pedals the front wheel and another one pedals the back wheel, and you have one handlebar for each wheel as well and a very smart alarm system but it is very clunky to ride (and they tell you it's "flexible")
It's kind of a meme here on HN but while Rust compilation times are indeed higher than I wished they were, calling them “crippling” is a massive exaggeration.
My daily driver is a mid-range desktop computer from 2018 and I regularly code on a laptop from 2012, and even then it's completely manageable: cargo check is nigh instant on both, incremental compilation in debug mode is more than fast enough, and even incremental rebuilds in release mode and full debug builds are OK-ish (especially since they don't happen as often as the others above). The only thing that is arguably slow on the 2012 laptop is a full build in release mode (though on the project where it's the biggest problem, the majority of the time is spent compiling a C++ dependency), but then again it's a very obsolete piece of hardware, and this isn't supposed to happen more than every six weeks, when you update your compiler toolchain.
I’m not memeing here, I’ve struggled with this issue on a variety of different projects since I first started using Rust seven years ago.
Then you can build a DAG of crates and stick e.g. the Protobuf stuff in its own separate corner where it only needs to be recompiled on full rebuilds or when you work on it.
Feels a bit shitty to have to resort to managing crates instead of modules simply due to the compile times, but it is what it is.
And it makes total sense to me, it’s a way of organizing your dependency graph by the lifetimes of your components.
This will also simplify testing, development, and audit. You won’t need to recompile autogen schemas as often as the business logic implementation anyway. Depending on artifacts pushed through a release pipeline is even more reliable. You can QA everything in guaranteed isolation while keeping it conveniently a workspace monorepo.
One question, more out of curiosity than any genuine need: do you (or do you plan to) support any kind of trait/interface based polymorphism? E.g. it's a pretty common idiom to have a function that works on "any iterable type" and that sort of thing, which seems like it would be best modeled as an interface. I guess you could argue that's at odds with Python's tradition of "duck typing" but then again, so is static typing in general so I'm not sure.
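In Python's own typing module this idiom is usually spelled with `Iterable` or a structural `Protocol`, which is roughly the shape I'd imagine such a feature taking (just a sketch, not your tool's actual API):

    from typing import Iterable, Protocol

    def total(xs: Iterable[int]) -> int:
        # accepts lists, tuples, sets, generators: anything iterable
        return sum(xs)

    class Closable(Protocol):
        def close(self) -> None: ...

    def shutdown(resource: Closable) -> None:
        # structural ("duck") typing, but checkable statically
        resource.close()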
I only wish discussions happened elsewhere than Discord, e.g. Zulip, where you can have web-public channels, which is great for searchable documentation, and you can interact with channels by email if you so desire.
* Performance - the JVM is very competitive with C/C++ performance.
* Compile times - Not go fast, but not C/C++/Rust slow.
* Concurrency - Virtual threads (finalized in 21) bring in the concurrency capabilities of go to the JVM
* Type System Flexibility - Kotlin isn't quite as flexible as Typescript, but it's pretty close. It's more flexible than java but not as flexible as scala. I think it strikes a good middle ground.
* Native platform integration - This is probably the weakest part of the JVM but it's gotten a lot better with the work done on Project Panama (mostly delivered in 22). Jextract makes it a lot easier to make native integrations with Java. Definitely not as painful as the JNI days.
There's also kotlin native that you could play around with (I haven't).
You really just want operators when you're performing tons of operations, it's an absolute wall of text when it's all method calls.
Does it deliver on the bold claims of its designers?
In case it encourages you: a lot of uv's performance benefits come from things that are not the implementation language. In particular, it has a much more intelligent system for caching downloaded package artifacts, and when asked to pre-compile bytecode it can use multiple cores (this is coming soon to pip, to my understanding; actually the standard library already has a primitive implementation).
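If I'm reading that right, the stdlib piece being referred to is presumably `compileall`, which can already bytecode-compile a directory tree with multiple worker processes (the path below is just a placeholder):

    import compileall

    # Compile every .py under the given directory to .pyc using 4 worker processes
    compileall.compile_dir("path/to/site-packages", workers=4, quiet=1)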
Case in point: uv itself is not written in Python. It's a Rust tool.
It always amazes me when people work on an ecosystem for a language but then don't buy enough into that to actually use it to do the work.
Avoidance of dogfooding is a big red flag to me.
Python aims to be simple, not particularly fast (though it is getting faster)
I don't see a problem with that. Pick the language adapted to your problem. Python isn't aiming at solving every problem and that's okay.
Well, it wildly missed the mark there. Nothing about modern Python is simple. It's a very complex language hiding behind friendly syntax.
It's ok for IO bound but not for CPU bound work.
> Avoidance of dogfooding is a big red flag to me.
I'm making PAPER for a reason.
It's completely fair for a language to have a niche different from "quick start-up and runtime".
Would you write an assembler IDE in assembler?
I use Python for >90% of my code these days. I love uv for its ux and speed. I 0% care it wasn't written in Python. In fact, making it fully independent of any existing Python environment is a plus for the Python user. No weird bootstrap problems.
It does not make me switch to Rust for my own code.
The power of Python is that it's high level and very powerful and has a great community and ecosystem of tools/libraries. There's absolutely zero problem and totally a good thing if there are core libraries written in faster languages.
Tools, specifically CLI tools, are best written in statically typed compiled languages.
In two years I bet we’ll be seeing v8 level performance out of CPython.
It’s wildly optimistic to now expect a 10x speedup in two years, with fewer resources.
I also believe the JIT in v8 and Python are different, the latter relying on copy-and-patch while v8 uses a bunch of different techniques together.
  $ time python -c 'sum(range(1_000_000_000))'
  real 0m19.997s
  user 0m19.992s
  sys 0m0.005s
  $ time pypy -c 'sum(range(1_000_000_000))'
  real 0m1.146s
  user 0m1.126s
  sys 0m0.020s
I'd be quite delighted to see, say, 2x Python performance vs. 3.12. The JIT work has potential, but thus far little has come of it; in fairness, it's still early days for the JIT. The funding is tiny compared to V8. I'm surprised someone at Google, OpenAI et al isn't sending a little more money that way. Talk about shared infrastructure!
If you're using python because you have to then you might not like all that and might see it as something to toss out. This makes me sad.
But, they don't have the full compatibility with CPython, so nobody really picks them up.
... but then again neither pdm nor uv would have happened without poetry.
I recently had to downgrade one of our projects to 3.12 because of a dependency we needed. With uv, I can be sure that everybody will be running the project on 3.12, it just all happens automatically. Without uv, I'd get the inevitable "but your changes crashed the code, have you even tested them?"
Posts like these aptly describe why companies are downsizing in favor of AI assistants, and they are not wrong for doing so.
Yes, Python is "slow". The thing is, compute is cheap these days and development time is expensive. $1000 per month is considered expensive as hell for an EC2 instance, but no developer would work for $12000 a year.
Furthermore, in modern software dev, most bottlenecks are network latency. If your total end-to-end operation takes 200ms, mostly because of network calls, it doesn't matter as far as compute goes whether your code runs in 10ms or 5ms.
When it comes to development, the biggest uses of time are
1. Interfacing with some API or tool, for which you have to write code.
2. Making a change, testing a change, fixing bugs.
Python has both covered better than any other language. Just today it took me literally 10 minutes to write code for a menu bar for my Mac using the rumps Python library, so I have my most commonly used commands available without typing into a terminal, and that is without using an LLM. Go ahead and try to do the same in Java or Rust or C++ and I promise you that unless you have experience with Mac development, it's going to take you way more time.
Python has additional things like just putting breakpoint() where you want the debugger, Jupyter notebooks for prototyping, and lazy imports, where you import inside a function so large modules only get loaded when they run. No compilation step, no complex syntax.
Multiprocessing is very easy to use as a replacement for threading; I really don't know why people want to get rid of the GIL so much. Functionally the only differences are the overhead of launching a thread vs launching a process, and shared memory. But with the multiprocessing API you simply spin up a worker pool and send data over pipes, and it's pretty much just as fast as multithreading.
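The multiprocessing pattern I'm describing is only a few lines (a toy sketch; the work function is made up):

    from multiprocessing import Pool

    def crunch(n: int) -> int:
        # stand-in for some CPU-bound work
        return n * n

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            results = pool.map(crunch, range(100))
        print(sum(results))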
In the end, the things that matter are results. If LLMs can produce code that works, no matter how stringy it is, that code can run in production and start making company money, while they don't have to pay you money for multiple months to write the code yourself. Likewise, if you are able to develop things fast, and a company has to spend a bit more on compute, its a no brainer on using Python.
Meanwhile, things like strong typing, speed, the GIL, and the other popular complaints that get mentioned are all just echoes of the bullshit education you got in CS, and people repeat them without having any real-world experience. So what if you have weak typing and make mistakes: the code fails to run or produces incorrect results, you go and fix the code, and problem solved. People act like failing code makes your computer explode or something. There is no functional difference between a compilation failure and a runtime failure. And as far as production goes, there has never been a case of a strongly typed language being deployed without any bugs, because those bugs are all logic bugs in the actual code. And consequently, with Python, it's way easier to fix those bugs.
Youtube, Uber, and a bunch of other well used services all run Python backends for a good reason. And now with skilled LLM usage, a single developer can write services in days that would take a team of engineers to write in weeks.
So TL;DR: if you actually want to stay competitive, use Python. The next set of LLMs are all going to be highly specialized smaller models, and being able to integrate them into services with PyTorch is going to be a very valuable skill, and nobody who is hiring will give a shit how memory safe Rust is.
(Python does incur a hefty performance penalty for things that are actually CPU bound. But that doesn't describe most of the process of installing Python packages; and the main part that is CPU bound is implemented by CPython in C.)
I see it shine for scripts and AI but that's it.
If using Python instead of what we use, our cloud costs would be more than double.
And I can't go to CEO and CFO and explain to them that I want to double the cloud costs (which are already seen as high).
Then, our development speed won't really improve because we have large projects.
That being said, I think using Python for scripting is great in our case.
GP comment reeks of textbook "performance doesn't matter" rhetoric.
Moral: follow the rules.
And the game is worse for it :')
Players are incentivized to win due to specific decisions made by the league.
In Bananaball the league says, "practice your choreographed dance number before batting practice." And those same athletes are like, "Wait, which choreographed dance number? The seventh inning stretch, the grand finale, or the one we do in the infield when the guy on stilts is pitching?"
Edit: the grand finale dance number I saw is both teams dancing together. That should be noted.
Baseball has done a terrible job, but at least seems to have turned the corner with the pitch clock. Maybe they'll move the mound back a couple feet, make the ball 5.5oz, reduce the field by a player and then we'll get more entertainment and the players can still try their hardest to win.
Personally, I think it'd be interesting to see how the game plays if you could only have two outfielders (but you could shift however you choose.)
I'd guess MLB The Show video game wouldn't be a bad place to start. They should have a decent simulator built in.
But this is getting a bit off topic, I suppose.
"A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it." - Max Planck.
Western Europe in a VERY dense city BTW.
I don't think the implied claim is that there should be specifically a train to every particular address, if that's what you're counting as failure in the game, but rather that with good public transport (including trains) and pedestrian/cyclist-friendly streets it shouldn't be the case that most people need to drive.
Need to move 3 or 4 people? Driving the car may be cheaper.
Don't want to get rained on? Or get heatstroke? Or walk through snow? Or carry a bunch of stuff, like a week of groceries for the family or whatever else? Or go into the countryside/camping? Or move a differently-abled person? Or go somewhere outside public transport hours? Or, or... or.
Are there many cases where people should take public transport or ride a bike instead of their car? Obviously yes. But once you have a car to cover the exigent circumstances it is easy to use them for personal comfort reasons.
They’re also a joke when it comes to moving large numbers of people. I can’t imagine the chaos if everyone leaving a concert at Wembley Stadium decided to leave by car.
Fort Worth is worse for this!
Strongtowns is definitely worth a listen.
But people claiming that you can live a life without cars don't seem to realise the very many scenarios where cars are often easier and sometimes the only answer.
> Need to move 3 or 4 people? Driving the car may be cheaper.
That's the issue―the average car occupancy is <1.5. Our goal should be to raise it, by offering alternatives to cars in cases where they're not appropriate.
> Are there many cases where people should take public transport or ride a bike instead of their car? Obviously yes.
Not many, most. Cars are a niche, they're only economical when transporting a few people with cargo over medium distances. Everything else is more efficiently covered by another mode of transport.
And "obviously", huh? Look outside. It's all roads.
> But once you have a car to cover the exigent circumstances it is easy to use them for personal comfort reasons.
You'd be surprised. The Netherlands is the best example of this―the Dutch own almost as many cars per person as Americans do, yet they cycle orders of magnitude more.
It's a matter of designing our built environment to make the most efficient mode of transportation for the situation the most convenient option.
> > Need to move 3 or 4 people? Driving the car may be cheaper.
> That's the issue―the average car occupancy is <1.5. Our goal should be to raise it, by offering alternatives to cars in cases where they're not appropriate.
When I said this, I meant in terms of dollars to the individual making the choice. Apart from city parking costs and congestion charges, with modern phones being used so much for transport these days, could we do dynamic group discounts? I.e. my transport app shows a QR code, my friends who are coming with me scan it with their transport app, and by travelling together (beeping on and off at the same locations within the same timeslot) we get a discount?
> Not many, most. Cars are a niche, they're only economical when transporting a few people with cargo over medium distances. Everything else is more efficiently covered by another mode of transport.
I agree, in the context of city planning and public transport being a lot better than it is now. Otherwise, the last-mile problem is a hard one to get past. As soon as you walk or ride a bike to the station/bus stop, you've introduced constraints on cargo, physical fitness and weather, all of which are mostly easier with a car. Also, a car provides freedom/flexibility for midday decisions like "I'll do the groceries on the way home from work" or "my wife had an issue at work, so I'll go pick up the kids this afternoon" - harder to do if you've committed to public transport in the morning.
> And "obviously", huh? Look outside. It's all roads.
Where I am, public transport is buses. Bicycles are meant to ride on the road. So the roads are still used even if the car isn't.
> You'd be surprised. The Netherlands is the best example of this―the Dutch own almost as many cars per person as Americans do, yet they cycle orders of magnitude more.
This is one thing I find frustrating. But not everyone has a "default active" lifestyle. Many are quite sedentary. Also, a significant chunk of car costs - purchase/depreciation, yearly insurance and registration - are not mileage based. But it is frustrating that other options are not even considered. Again though, urban planning and current public transport shape the society we live in for generations. Maybe we'd all be more active if it was better done.
> It's a matter of designing our built environment to make the most efficient mode of transportation for the situation the most convenient option.
So much this. But there is a lot to overcome. Individualism, NIMBYs and cars themselves as a status symbol of freedom and "go anywhere, go anytime" flexibility. I don't see how to do it - but I'd support smart attempts to try.
In the states at least if you're using public transit it's generally as an intentional time / cost tradeoff. That's not a mystery and taking a point-to-point schedule and comparing that against public transit constraints doesn't really prove much.
If you want the freedom to move across vast amounts of open nature, then yeah the private automobile is a good approximation for freedom of mobility. But designing urban areas that necessitate the use of a private vehicle (or even mass transit) for such essentials as groceries or education is enslavement. I don't buy the density argument either. Places that historically had the density to support alternative modes of transportation, densities that are lower than they are today, are only marginally accessible to alternative forms of transportation today. Then there is modern development, where the density is decreased due to infrastructure requirements.
These things are different.
So no, I don't think Europeans who haven't been in America have quite absorbed just how vast America is. It stretches across an entire continent in the E-W direction, and N-S (its shortest border) still takes nearly a full day. (San Diego to Seattle is about 20 hours, and that's not even the full N-S breadth of the country since you can drive another 2.5 hours north of Seattle before reaching the Canadian border). In fact, I can find a route that goes nearly straight N-S the whole way, and takes 25 hours to drive, from McAllen, TX to Pembina, ND: https://maps.app.goo.gl/BpvjrzJvvdjD9vdi9
Train travel is sometimes feasible in America (I am planning Christmas travel with my family, and we are planning to take a train from Illinois to Ohio rather than fly, because the small Illinois town we'll be in has a train station but no airport; counting travel time to get to the airport, the train will be nearly as fast as flying but a lot cheaper). But there are vast stretches of the country where trains just do not make economic sense, and those whose only experience is in Europe usually don't quite realize that until they travel over here. For most people, they might have an intellectual grasp of the vastness of the United States, but it takes experiencing it before you really get it deep down. Hence why the very smart German engineer still misread the map: his instincts weren't quite lined up with the reality of America yet, and so he forgot to check the scale of the map.
There are plenty of city pairs where high speed trains do make economic sense and America still doesn't have them. [1] is a video "56 high speed rail links we should've built already" by CityNerd. And that's aside from providing services for the greater good instead of for profit - subsidizing public transport to make a city center more walkable and more profitable and safer and cleaner can be a worthwhile thing. The US government spends a lot subsidizing air travel.
> So no, I don't think Europeans who haven't been in America have quite absorbed just how vast America is
China had some 26,000 miles of high speed rail two years ago, almost 30,000 miles now connecting 550 cities, and adding another couple of thousand miles by 2030. A hundred plus years ago America had train networks coast to coast. Now all Americans have is excuses why the thing you used to have and tore up is impossible, infeasible, unafordable, unthinkable. You have reusable space rockets that can land on a pillar of fire. If y'all had put as much effort into it as you have into special pleading about why it's impossible, you could have had it years ago.
This is, of course, a massively broad generalization, and there will be plenty of voters who don't fit that generalization. But the average American voter, as best I can tell, recoils from the words "high-speed rail" like Dracula would recoil from garlic. And I do believe that California's infamous failure (multiple failures, even) to build the high-speed rail they have been working on for years has a lot to do with that "high-speed rail is a boondoggle and a waste of taxpayer dollars" knee-jerk reaction that so many voters have.
The forests and wilderness of the PNW are much, much, much, much more remote and wild than virtually anywhere you’d go in Europe. Like not even close.
Or have a "car-cabin-without-engine-and-wheels" and treat it like a packet on a network of trains and "skateboard car platforms".
Such "freedom"...
They just don't have to use them all the time since they can take the more efficient public transport, and they can buy one after college even, they don't need to drive one from 16 yo just to be able to get around...
I'm curious how this changes (in your mind) if "trains" can be expanded to "trains, buses, bicycle", or if you consider that to be a separate discussion.
The Atlanta Metro has 6.5 million people across TWENTY THOUSAND square kilometers.
Trains just don't make sense for this. Everything is too spread out. And that's okay. Cites are allowed to have different models of transportation and living.
I like how much road infra we have. That I can visit forests, rivers, mountains, and dense city all within a relatively short amount of time with complete flexibility.
Autonomous driving is going to make this paradise. Cars will be superior to trains when they drive themselves.
Trains lack privacy and personal space.
I live in NYC which has 29,000/sqkm in Manhattan and 11,300/sqkm overall. Public transportation is great here and you don't need a car.
but at 240/sqkm, that's really not much public trans per person!
How did we get here from the post about uv?
I'm so stoked for what uv is doing for the Python ecosystem. requirements.txt and the madness around it has been a hell for over a decade. It's been so pointlessly hard to replicate what the authors of Python projects want the state of your software to be in.
uv has been much needed. It's solving the single biggest pain point for Python.
Public transport is to move people around, not to make money.
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
> Eschew flamebait. Avoid generic tangents. Omit internet tropes.
> a society that glamorizes everyone driving the biggest trucks and carrying the largest rifles
Did you forget to support yourself? You're saying Rheinland has three times the population density of Atlanta, with convenient passenger rail, and that demonstrates that low population density isn't an obstacle to passenger rail in Atlanta?
https://www.nytimes.com/interactive/2019/08/14/magazine/traf...
Calgary apparently also does a good job of clearing its bike lanes.
And I do my Costco shopping by bike year-round. I think I've used the car for large purchases at Costco twice in the last year.
I _rarely_ drive my car anywhere in Toronto, and find the streets on bike safer than most of the sidewalks in January -- they get plowed sooner than most homeowners and businesses clear the ice from their sidewalks.
And in Toronto we're rank amateurs at winter biking. Look at Montreal, Oslo, or Helsinki for even better examples. Too bad we've got an addle-brained carhead as our premier who doesn't understand public safety or his own provincial responsibilities.
Personally I've also biked to work (and everywhere, really) in sub-zero degrees many times, because the bicycle lanes are cleared and salted. It's really not too bad. It actually gets a bit too hot even, because you start out by wearing so much.
I used to bike to work in just-above-freezing temperatures. That wasn't so bad.
The one time it started to rain mid-journey, that was bad.
Do the opposite thought experiment for me: Pick any two points of interest on the map and see how well connected they are with roads. Keep doing it until you find somewhere not accessible via car. See the issue yet?
We've paved over the entire planet to the point that you can get anywhere you'd like with a car. We have not done so whatsoever for any other mode of transportation. Pedestrian walkways come close but we prioritize vehicles over those too. The investment into public transport & cycling infrastructure is a statistical error in comparison to roadways.
So no shit it's more convenient for you to take a car than a train, that's the entire point―it shouldn't be.
A 20 lane highway should be a train track, intra-city roads should be dedicated to bikes, not cars.
Depending on how expensive gasoline is in your country, people underestimate the cost of a car trip by a factor of two to five, because they don't count the depreciation of their vehicle's value and the maintenance costs (and sometimes even the insurance price) driven by the kilometers covered during the trip.
I guess Europeans will never find out how great the US is :-)
And you have to get lots of permits to have an AC installed legally. If you do not have a permit, you will have to pay a really hefty fee when the inspectors come.
So yeah, buying an AC is what most people would do, but they do not because of the damn permits they most likely will not get. It is a shitty situation.
Actually this idea of just buying things at "the store" is relatively new too. Historically people would make more things themselves, and more food would be purchased directly from farmers who had grown it.
Sure, this is just my experience, but I use Python a lot and use a lot of tools written in Python.
Usually happens to me when I find code for some research paper. Even something that's just three months old can be a real pain to get running
Still, I would think it's rare that package versions of different packages become incompatible?
To be fair to the GP comment, this is how I feel about Ruby software. I am not nearly as practiced at installing and upgrading in that ecosystem so if there was a way to install tools in a way that lets me easily and completely blow them away, I would be happier to use them.
One of the commentors above explained what the problem really is (basically devs doing "pip install whatever" for their dependencies, instead of managing them properly). That's more a problem of bad development practices though, no?
God, I hate Python. Why is it so hard to not break code?
Exactly my case. I had to move back to Debian from Ubuntu, where I had installed Chatterbox without much difficulty, and it was hell. You pretty much need Anaconda. With it, it's a cinch.
>what are open-source voice synth which have been working for you.
I tried a few, although rather superficially. Keeping in mind that my 3090 is on my main (Windows) machine, I was constrained to what I could get running on it without too much hassle. Considering that:
* I tried Parler for a bit, although I became disillusioned when I learned all models have an output length limit, rather than doing something internally to split the input into chunks. What little I tried with it sounded pretty good if it stayed within the 30-second window, otherwise it became increasingly (and interestingly) garbled.
* Higgs was good. I gave it one of Senator Armstrong's lines and made it generate the "mother of all omelettes" one, and it was believable-ish; not as emphatic but pretty good. But it was rather too big and slow and required too much faffing around with the generation settings.
* Chatterbox is what I finally settled with for my application, which is making audiobooks for myself to listen to during my walks and bike rides. It fits in the 3070 I have on the Linux machine and it runs pretty quick, at ~2.7 seconds of audio per second.
These are my notes after many hours of listening to Chatterbox:
* The breathing and pauses sound quite natural, and generally speaking, even with all the flaws I'm about to list, it's pleasing to listen to, provided you have a good sample speaker.
* If you go over the 40-second limit, it handles it somewhat more gracefully than Parler (IMO): instead of generating garbage, it just cuts off abruptly. In my experience, splitting text at 300-350 characters works fairly well, and keeping paragraphs intact where possible gives the best results (see the sketch after this list).
* If the input isn't perfectly punctuated it will guess at the sentence structure to read it with the correct cadence and intonation, but some things can still trip it up. I have one particular text where the writer used commas in many places where a period should have gone, and it just cannot figure out the sentence structure like that.
* The model usually tries to guess emotion from the text content, but it mostly gets it wrong.
* It correctly reads quoted dialogue in the middle of narration, by speaking slightly louder. If the text indicates a woman is speaking the model tries to affect a high pitch, with varying degrees of appropriateness in the given context. Honestly, it'd be better if it kept a consistent pitch. And, perplexingly, no matter how much the surrounding text talks about music, it will read "bass" as "bass", instead of "base".
* Quite often the model inserts weird noises at the beginning and end of a clip which will throw you off until you learn to ignore them. It's worse for short fragments, like chapter titles and the like. Very rarely it inserts what are basically cut-off screams, like imagine a professional voice actor is doing a recording and just before he hit stop someone was murdered inside the booth.
* It basically cannot handle numbers more than two digits long. Even simple stuff like "3:00 AM" it will read as complete nonsense like "threenhundred am".
* It also has problems with words in all caps. It's a tossup if it's going to spell it out, yell it, or something in between. In my particular case, I tried all sorts of things to get it to say "A-unit" (as in a unit with the 'A' designation) properly, but sometimes it still manages to fuck it up and go "ah, ah, ah, ah, ah, ah unit".
* Sometimes it will try to guess the accent it should use based on the grammar. For example, I used a sample from a Lovecraft audiobook, with a British speaker, and the output will sometimes turn Scottish out of nowhere, quite jarringly, if the input uses "ya" for "you" and such.
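For what it's worth, the splitting heuristic I mentioned above (stay under ~300-350 characters, keep paragraphs intact where possible) is only a few lines of Python. A rough sketch of what I do, not anything from Chatterbox itself:

    def chunk_text(text: str, limit: int = 350) -> list[str]:
        """Greedily pack whole paragraphs into chunks of at most `limit` characters."""
        chunks: list[str] = []
        current = ""
        for para in text.split("\n\n"):
            candidate = f"{current}\n\n{para}" if current else para
            if len(candidate) <= limit:
                current = candidate
            else:
                if current:
                    chunks.append(current)
                current = para  # over-long paragraphs pass through unsplit
        if current:
            chunks.append(current)
        return chunks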
This is the entire problem. You gonna put that in a lock file or just tell your colleagues to run the same command?
I guess this is mostly about data science code and maybe people who publish software in those communities are just doing very poor packaging, so this idea of a "lock file" that freezes absolutely everything with zero chance for any kind of variation is useful. Certainly the worst packaged code I've ever seen with very brittle links to certain python versions and all that is typically some ML sort of thing, so yeah.
This is all anathema to those of us who know how to package and publish software.
In 2025, the overall developer experience is much better in (1) Rust compared to C++, and (2) Java/DotNet(C#) compared to Python.
I'm talking about type systems/memory safety, IDEs (incl. debuggers & compilers), package management, etc.
Recently, I came back to Python from Java (for a job). Once you take the drug of a virtual machine (Java/DotNet), it is hard to go back to native binaries.
Last, for anyone unfamiliar with this quote, the original is from Winston Churchill:
    Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.
I don't really know why this is, at a high level, and I don't care. All I know is that Python is, for me, with the kinds of things I tend to need to build, the absolute fucking worst. I hope uv gets adopted and drives real change.
My last dance with Python was trying to build Ardupilot, which is not written in Python but does have a build that requires a tool written in Python, for whatever reason. I think I was on my Mac, and I couldn't get this tool from Homebrew. Okay, I'll install it with Pip—but now Pip is showing me this error I've never seen before about "externally managed environments", a concept I have no knowledge of. Okay, I'll try a venv—but even with the venv activated, the Ardupilot makefile can't find the tool in its path. Okay, more googling, I'll try Pipx, as recommended broadly by the internet—I don't remember what was wrong with this approach (probably because whatever pipx does is totally incomprehensible to me) but it didn't work either. Okay, what else? I can do the thing everybody is telling me not to do, passing `--break-system-packages` to plain old Pip. Okay, now the fucking version of the tool is wrong. Back it out and install the right version. Now it's working, but at what cost?
This kind of thing always happens, even if I'm on Linux, which is where I more usually build stuff. I see errors nobody has ever posted about before in the entire history of the internet, according to Google. I run into incomprehensible changes to the already incomprehensible constellation of Python tooling, made for incomprehensible reasons, and by incomprehensible I mean I just don't care about any of it, I don't have time to care, and I shouldn't have to care. Because no other language or build system forces me to care as much, and as consistently, as Python does. And then I don't care again for 6 months, a year, 2 years, until I need to do another Python thing, and whatever I remember by then isn't exactly obsolete but it's still somehow totally fucking useless.
The universe has taught me through experience that this is what Python is, uniquely. I would welcome it teaching me otherwise.
uv is making me give Python a chance for the first time since a renpy project I did for fun back in 2015.
One could argue that this is one difference between npm and such, and what many people use in the Python ecosystem: npm and cargo and so on create lock files automatically. Even people who don't understand why that is important might commit them to their repositories, whereas in the Python ecosystem people who don't understand it think that committing only a requirements.txt (without checksums) is OK.
However, it is wrong to claim that in the Python ecosystem we didn't have the tools to do it right. We did have them, well before uv. It took a bit more care, though, which is apparently already too much for many people.
The unnecessary work of a `git commit`?
Having the file be versioned creates no requirement to update its contents any more frequently than before, and it streamlines "publishing alongside the release". The presence of the lockfile in the repo doesn't in any way compel devs to use the lockfile.
The C/C++ code we often had to compile used "make", which I'll admit I'm better at than conda/pip.
I suspect this is because the C/C++ code was developed by people with more of a comp-sci background. Configure / make / make install... I remember compiling this one:
https://mafft.cbrc.jp/alignment/software/source.html
If the software made it into BioGrids, life was easier.
But a lot of the languages had their own quirks and challenges (Perl CPAN, Java…). Containerization kinda helps.
Honorable mention: Compiling someone else's C code. Come on; C compiles to a binary; don't make the user compile.
I'm assuming a Linux-based system here, but consider the case where you have external dependencies. If you don't want to require that the user installs those, then you gotta bundle them or link them statically, which is its own can of worms.
Not to mention that a user with an older glibc may not be able to run your executable, even if they have your dependencies installed. Which you can, for example, solve by building against musl or a similar glibc alternative. But in the case of musl, the cost is a significant overhead if your program does a lot of allocations, due to it lacking many of the optimizations found in glibc's malloc. Mitigating that is yet another can of worms.
There's a reason why tools like Snap, AppImage, Docker, and many more exist, each of which is its own can of worms.
  - Distribute a single binary (or zip with a Readme, license etc) for Windows
  - Distribute a single binary (or zip etc) for each broad Linux distro; you can cover the majority with 2 or 3. Make sure to compile on an older system (or WSL edition), as you generally get forward compatibility, but not backwards.
  - If someone's running a Linux distro other than what you built for, they can `cargo build --release`, and it will *just work*.

    $ rustup target add x86_64-unknown-linux-musl
    $ cargo build --target x86_64-unknown-linux-musl --release

Similarly for cross-compiling for Windows.
The musl wiki lists a number of differences between it and glibc that can have an impact:
https://wiki.musl-libc.org/functional-differences-from-glibc...
C compiles to many different binaries depending on the target architecture. The software author doesn't necessarily have the resources to cross-compile for your system.
Incidentally, this is probably exactly the thing that has made most of those Python installations problematic for you. Because when everything is available as a pre-built wheel, very much less can go wrong. But commonly, Python packages depend on included C code for performance reasons. (Pre-built results are still possible that Just Work for most people. For example, very few people nowadays will be unable to install Numpy from a wheel, even though it depends on C and Fortran.)
Unless you’re on a different architecture, then having the source code is much more useful.
Guess which part of the build I spent time fixing the other day... It wasn't the ~200,000 lines of C/C++ or the 1000+ line bash script. No. It was 100 lines of Python that was last touched two years ago. Python really doesn't work as a scripting language.
pip freeze > requirements.txt
pip install -r requirements.txt
Way before "official" lockfile existed.
Your requirements.txt becomes a lockfile, as long as you accept to not use ranges.
Having this all in a single tool, sure, why not, but I don't understand the hype when it was basically already there.
With pip, you update a dependency; it won't work if the versions aren't compatible, and it will work if they are. Not sure where the issue is?
This is very new behavior in pip. Not so long ago, imagine this:
You `pip install foo`, which depends on `bar==1.0`. It installs both of those packages. Now you `pip install baz`, which depends on `bar==2.0`. It installs baz, and updates bar to 2.0. Better hope foo's compatible with the newer version!
I think pip only changed in the last year or two to resolve conflicts, or die noisily explaining why it couldn't be done.
It can get complicated. The resolver in uv is part of its magic.
You include the security patches of whatever your dependencies are in your local vetted PyPI repository. You control what you consider liabilities, and you don't get shocked by breakages in what should be minor versions.
Of course, you have to be able to develop software, and not just snap Legos together, to manage a setup like that. Which is why uv is so popular.
Inevitably, these versions are out-of-date. Sometimes, they are very, very out of date. "Sorry, I can only install [version from 5 years ago.]" is always great for productivity.
I ran into this recently with a third-party. You'd think a 5 year old version would trigger alarm bells...
But when you're developing software, you want the newer stuff. Would you use MySQL 5.0 from 2005? No, you'd be out of your mind.
Sensible defaults would completely sidestep this, that's the popularity of uv. Or you can be an ass to people online to feel superior, which I'm sure really helps.
Which makes you part of the people the GP is referring to? Try using it in anger for a week; you'll come to understand.
It's like Sisyphus rolling a cube up a hill and being offered a sphere instead: "no thanks, I just push harder when I have to overcome the edges."
As far as I know, files like requirements.txt, package.json, cargo.toml are intended to be used as a snapshot of the dependencies in your project.
In case you need to update dependency A that also affects dependency B and C, I am not sure how one tool is better than other.
cargo can also update transitive dependencies (you need `--locked` to prevent that).
Ruby's Bundler does not, which is preferred and is the only correct default behaviour. Elixir's mix does not.
I don't know whether uv handles transitive dependencies correctly, but lockfiles should be absolute and strict for reproducible builds. Regardless, uv is an absolute breath of fresh air for this frequent Python tourist.
(It also removed all untracked dependencies in node_modules, which you should also never have unless you've done something weird.)
I switched to pnpm as my preferred package manager a couple of years ago because of this, and even that still requires explicit specification.
It was an unpleasant surprise, to say the least.
uv does it a lot faster and generates requirements.txts that are cross-platform, which is a nice improvement.
Pip's solver could still cause problems in general on changes.
UV having a better solver is nice. Being fast is also nice. Mainly, though, the fact that it feels like a tool that is maintained and can be improved upon without ripping one's hair out is a godsend.
- dev dependencies (or other groups)
- distinguishing between direct and indirect dependencies (useful if you want to cut some fat from a project)
- dependencies with optional extra dependencies (if you remove the main, it will delete the orphans when relevant)
It's not unachievable with pip and virtualenvs, but verbose and prone to human error.
Like C: if you're careful enough, it can be memory safe. But teams would rather rely on memory safe languages.
But the main reason shouldn't be the "lockfile". I was replying to the parent comment mainly for that particular thing.
What you SHOULD solve are conflicts in the packages/project file. Once solved, just create a new lockfile and replace the old one.
This applies to lockfiles on any project python or non-python.
That being said, the uv experience is much nicer (also insanely fast).
[1] https://pip.pypa.io/en/stable/user_guide/#constraints-files
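For reference, the constraints-file approach from [1] looks like this: keep your ranges in requirements.txt and your exact pins in a separate constraints file, then install with both:

  # constraints.txt holds exact pins; requirements.txt keeps the ranges
  pip install -r requirements.txt -c constraints.txt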
Now having said that, I suspect PyEnv is doing some voodoo behind the scenes, because I occasionally see messages like "Package X wants version N, but you have version N1". I've never investigated them though, since both old and new packages seem to work just fine regardless.
You don't… you use the same versions for everything :)
Honestly, I can't think of a single good reason not to want to use a venv for Python.
For a long time there were even incompatibilities between the RHEL host python version and the python version the Red Hat Ansible team were shipping.
So I keep hearing ;)
Meanwhile, on my machines ...
Pipenv tried to be what uv is, but it never did seem to work right, and it had too many weird corner cases ("why is it suddenly taking 3 hours to install packages? why it is literally impossible to get it to upgrade one single dependency and not all the others?") to ever be a contender.
I would probably use something like this: https://stackoverflow.com/questions/17803829/how-to-customiz...
FWIW I use zsh with auto-completion / completion-as-you-type, so just hitting `p` on an empty command line will recall the most recent command starting with `p` (which was likely `pnpm`), and you can refine with further keystrokes and accept longer prefixes (like I always do with `git add` to choose between typical ways to complete that statement). IMO people who don't use auto-completion are either people who have a magical ability to hammer text into their keyboards with the speed of light, or people who don't know about anything and hence don't know about auto-completion, or terminally obsessive types who believe that only hand-crafting each line is worthwhile.
I don't know which type of person you are but since typing `pnpm` instead of `npm` bothers you to the degree you refuse to use `pnpm`, I assume you must be of the second type. Did you know you can alias commands? Did you know that no matter your shell it's straightforward to write shell scripts that do nothing but replace obnoxious command invocations with shorter ones? If you're a type 3 person then of course god forbid, no true hacker worth their salt will want to spoil the purity of their artisanal command line incantations with unnatural ersatz-commands, got it.
It even has some (I feel somewhat rudimentary) support for workspaces and isolated installs (what pnpm does)
Maven worked fine without version ranging and lock files.
Edit: Changed "semantic versioning" to "version ranging"
No, it actually has the exact same problem. You add a dependency, and that dependency specifies a sub-dependency against, say, version `[1.0,)`. Now you install your dependencies on a new machine and nothing works. Why? Because the sub-dependency released version 2.0 that's incompatible with the dependency you're directly referencing. Nobody likes helping to onboard the new guy when he goes to install dependencies on his laptop and stuff just doesn't work because the versions of sub-dependencies are silently different. Lock files completely avoid this.
Version ranges are a really bad idea, as we can see in NPM.
Before version ranging, maven dependency resolution was deterministic.
Should just be a version bump in one place.
In the general case Java and maven doesn’t support multiple versions of the same library being loaded at once(not without tricks at least, custom class loaders or shaded deps), so it shouldn’t matter what transitive dependencies depend on.
It effectively means I can only have versions of dependencies that rely on the exact version that I'm updating to. Have a dependency still on 1.0.1 with no upgrade available? You're stuck.
Even worse, let's say you depend on A which depends on B, and B has an update to 1.0.2; if A doesn't support the new version of B, you're equally stuck.
But the problem with that is when you need a version of a library that is not in that edition. For example, when a backdoor or CVE gets discovered that you have to fix asap, you might not want to wait for the next Maven release. Furthermore, Maven is Java ecosystem stuff, where things tend to move quite slowly (enterprisey), and it comes with its own set of issues.
Coming from ruby. However, I think uv has actually now surpassed bundler and the ruby standard toolset for these things. Definitely surpassed npm, which is also not fine. Couldn't speak for cargo.
Some time ago I found out it does work with authentication, but their "counter ASCII animation" just covers it… the bug has been open for years now…
uv actually works.
Funny how these things get forgotten to history. There's lots of prior art when it comes to replacing pip.
edit: here's an HN thread about pipenv, where many say the same things about it as they are about UV and Poetry before https://news.ycombinator.com/item?id=16302570
However, I have zero reservations about uv. I have not encountered bugs, and the features that are present are ready for complete adoption. Plus there are massive speed improvements. There is zero downside to using uv anywhere it can be used, and there are real advantages.
Agree that uv is way way way faster than any of that and really just a joy to use in its simplicity.
Also, the ability to have a single script with deps, using TOML in the header, super easily.
Also also, the ability to use a random python tool in effectively seconds with no faffing about.
Even then though, the core developers made it clear that breaking everyone’s code was the only thing they were willing to do (remember Guido’s big “No 2.8” banner at PyCon?), which left the community with no choice.
Why?
It’s almost too easy to add one compared to writing your own functions.
Now compare that to adding a dependency to a c++ project
I think it's more like Rust devs using Python and thinking what the fuck why isn't this more like rustup+cargo?
The environment and dependency experience created so much friction compared to everything else. It changed my perspective on Docker for local dev.
Glad to hear it seems to finally be fixed.
And inspired by uv, we now have rv for RoR!
Yes, though poetry has lock files, and it didn't create the same positive feelings uv does :)
good god no thank you.
>cargo
more like it.
My default feeling towards using Python in more ways than I did was "no", because the tooling wasn't there for others to handle it, no matter how easy it was for me.
I feel uv will help python go even more mainstream.
You've been able to have the exact same setup forever with pyenv and pyenv-virtualenv except with these nothing ever has to be prefixed. Look, uv is amazing and I would recommend it over everything else but Python devs have had this flow forever.
No, you aren't.
> It doesn't change any of the moving pieces
It literally does, though it maintains a mostly-parallel low-level interface; the implementation is replaced with an improved one (in speed, in dependency solving, and in other areas). You are using virtual environments (but not venv/virtualenv) and the same sources that pip uses (but not pip).
> You've been able to have the exact same setup forever with pyenv and pyenv-virtualenv except with these nothing ever has to be prefixed.
Yes, you can do a subset of what uv does with those without prefixes, and if you add pipx and hatch (though with hatch you’ll be prefixing for much the same reason as in uv) you’ll get closer to uv’s functionality.
> Look, uv is amazing and I would recommend it over everything else but Python devs have had this flow forever.
If you ignore the parts of the flow built around modern Python packaging standards like pyproject.toml, sure, pieces of the flow have been around and supported by the right constellation of other standard and nonstandard tools for a while.
> If you ignore the parts of the flow
I don't get this; pip has worked with pyproject.toml since its standardization https://peps.python.org/pep-0621/. You don't need any constellation of tools, the only pieces that aren't provided by upstream are the version manager and the virtualenv manager. The new packaging flow has also worked with the authoritative pypa tools since their standardization https://peps.python.org/pep-0517/ https://peps.python.org/pep-0518/ https://peps.python.org/pep-0751/
Again, uv is great, I just think people are giving this one tool too much credit for the standardization process (that uv is an implementation of) that actually addressed Python packaging issues. Like for example uv run, that's all https://peps.python.org/pep-0751/
I do prefer uv but it's not like sane python env management hasn't existed
This is the most insulting take in the ongoing ruination of Python. You used to be able to avoid virtualenvs and install scripts and dependencies directly runnable from any shell. Now you get endlessly chastised for trying to use Python as a general purpose utility. Debian was a bastion of sanity with the split between dist-packages and site-packages, but that's ruined now too.
With PEP 723 and comfortable tooling (like uv), you now get scripts that are "actually directly runnable", not just "fake directly runnable oops forgot to apt-get install something sorta runnable", and that work reliably even when stuff around you is updated.
This wasn't really the case; in principle anything you installed in the system Python environment, even "at user level", had the potential to pollute that environment and thus interfere with system tools written in Python. And if you did install it at system level, that became files within the environment your system package manager is managing, that it doesn't know how to deal with, because they didn't come from a system package.
But it's worse now because of how many system tools are written in Python — i.e., a mark of Python's success.
Notably, these tools commonly include the system package manager itself. Since you mentioned Debian (actually this is Mint, but ya know):
  $ file `which apt`
  /usr/local/bin/apt: Python script, ASCII text executable
No, you don't. Nothing prevents you from running scripts with the system Python that make use of system-provided libraries (including ones that you install later with the system package manager).
If you need something that isn't packaged by your distro, then of course you shouldn't expect your distro to be able to help with it, and of course you should expect to use an environment isolated from the distro's environment. In Python, virtual environments are the method of isolation. All reasonable tooling uses them, including uv.
> Debian was a bastion of sanity with the split between dist_packages and site_packages but that's ruined now too.
It's not "ruined". If you choose to install the system package for pip and to use it with --break-system-packages, the consequences are on you, but you get the legacy behaviour back. And the system packages still put files separately in dist-packages. It's just that... doing this doesn't actually solve all the problems, fundamentally because of how the Python import system works.
Basically the only thing missing from pip install being a smooth experience is something like npx to cleanly run modules/binary files that were installed to that directory. It's still futzing with the PATH variable to run those scripts correctly.
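For what it's worth, uv does grow an npx-shaped answer to this: `uvx` is shorthand for `uv tool run`, so (using ruff only as an example tool) you can do things like:

  uvx ruff check .        # run a tool in a throwaway, isolated environment
  uv tool install ruff    # or install it persistently onto your PATH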
This could still cause problems if you run system tools as that user.
I haven't checked (because I didn't install my distro's system package for pip, and because I use virtual environments properly) but I'm pretty sure that the same marker-file protection would apply to that folder (there's no folder there, on my system).
This ideology is what caused all the problems to begin with: the base python is built as if it's the only thing in the entire operating system's environment, while its entire packaging system is built in a way that makes that impossible without manually juggling package conflicts/incompatibilities.
I do agree it is annoying, and what they need to do is just provide an automatic "userspace" virtualenv for anything a user installs themselves... but that is a Pandora's box tbh. (Do you do it per user? How does the user become aware of this?)
But that's probably not practical to retrofit given the ecosystem as it is now.
So far it seems like they have a bunch of these high performance tools. Is this part of an upcoming product suite for python or something? Just curious. I'm not a full-time python developer.
"What I want to do is build software that vertically integrates with our open source tools, and sell that software to companies that are already using Ruff, uv, etc. Alternatives to things that companies already pay for today. An example of what this might look like [...] would be something like an enterprise-focused private package registry."
There's also this interview with Charlie Marsh (Astral founder): https://timclicks.dev/podcast/supercharging-python-tooling-a... (specifically the "Building a commercial company with venture capital" section)
There are apparently 10 million Python developers in the world and pretty soon all of them will be using uv. I doubt it is that hard to monetise.
The "install things that have complex non-Python dependencies using pip" story is much better than several years ago, because of things like pip gaining a new resolver in 2020, but in large part simply because it's now much more likely that the package you want offers a pre-built wheel (and that its dependencies also do). A decade ago, it was common enough that you'd be stuck with source packages even for pure-Python projects, which forced pip to build a wheel locally first (https://pradyunsg.me/blog/2022/12/31/wheels-are-faster-pure-...).
Another important change is that for wheels on PyPI the installer can now obtain separate .metadata files, so it can learn what the transitive dependencies are for a given version of a given project from a small plain-text file rather than having to speculatively download the entire wheel and unpack the METADATA file from it. (This is also possible for source distributions that include PKG-INFO, but they aren't forced to do so, and a source distribution's metadata is allowed to have "dynamic" dependencies that aren't known until the wheel is built (worst case) or a special metadata-only build hook is run (requires additional effort for the build system to support and the developer to implement)).
With uv it just works. With pip, technically you can make it work, and I bet you'll screw something up along the way.
This is different as of Python 3.11. Please see https://peps.python.org/pep-0668/ for details. Nowadays, to install a package globally, you first have to have a global copy of pip (Debian makes you install that separately), then you have to intentionally bypass a security marker using --break-system-packages.
Also, you don't have to activate the venv to use it. You can specify the path to the venv's pip explicitly; or you can use a different copy of pip (e.g. a globally-installed one) passing it the `--python` argument (you have been able to do this for about 3 years now).
(Pedantically, yes, you could use a venv-installed copy of pip to install into the system environment, passing both --python and --break-system-packages. I can't prove that anyone has ever done this, and I can't fathom a reason beyond bragging rights.)
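Concretely, either of these works without activating anything (assuming the venv was created with pip bootstrapped; `requests` is just an example package):

  .venv/bin/pip install requests
  # or drive one shared copy of pip (pip 22.3 or newer)
  pip --python .venv/bin/python install requests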
> - really easy to distinguish [dev] and main dependencies
As of 25.1, pip can install from dependency groups described in pyproject.toml, which is the standard way to group your dependencies in metadata.
> distinguish direct dependencies from indirect dependencies, making it easy to find when a package is not needed anymore
As of 25.1, pip can create PEP 751 standard lockfiles.
> easily use different python versions for different projects
If you want something to install Python for you, yes, that was never in pip's purview, by design.
If you want to use an environment based off an existing Python, that's what venv is for.
I'm still mostly on poetry
Wake me up when pip can do any of that.
This is a matter of opinion. Pip exists to install the packages and their dependencies. It does not, by design, exist to manage a project for you.
If anything, pip is a dependency installer, while working with even trivial projects requires a dependency manager. Parent's point was that pip is actually good enough that you don’t even need uv anymore, but as long as pip doesn’t satisfy 80% of the requirements, that’s just plain false.
A majority of HN users might agree with you, but I'd guess that a majority of developers, to paraphrase Don Draper, don't think about it at all.
Some people don't have, or don't care about, the additional requirements you have in mind.
Or by asyncio.
Currently they are a bit pointless. Sure, they aid in documentation, but they are effort and cause you pain when making modifications (mind you, with half-arsed agentic coding it's probably less of a problem).
What would be better is to have a strict mode where instead of duck typing it's pre-declared. It would also make a bunch of things faster (along with breaking everything and the spirit of the language).
I still don't get the appeal of UV, but that's possibly because I'm old and have been using pyenv and venv for many, many years. This means that anything new is an attack on my very being.
However, if it means that conda fucks off and dies, then I'm willing to move to UV.
I've been using it professionally and it's been a big improvement for code quality.
It's the python version of fink vs macports vs homebrew. Or apt vs deb. or pkgsrc vs ports.
But I don't think "its just another" gets the value proposition here. It's significantly simpler to deploy in practice for people like me, writing ad hoc scripts and running git downloaded scripts and codelets.
Yes, virtualenv and pip existed. No, they turned out to be a lot more fiddly to run in practice than UV.
That UV is written in Rust is funny, but not in a terrible way. The LLVM compiler toolchain is written in C++ but compiles other languages. Using one language to do things for another language isn't such a terrible outcome.
I hope UV supplants the others. Not to disrespect their authors, but UV is better for end users. If it's worse for package maintainers I think the UV authors should be told.
Types save you cognitive effort and catch errors earlier, while writing code, not later when running or testing.
Then again it's not so bad if you're willing to make AI add all the types and not even care.
1. It tries to do too many things. Please just do one thing and do it well. It's simultaneously trying to replace pip, pyenv, virtualenv, and ruff in one command.
2. You end up needing to use `uv pip` so it's not even a full replacement for pip.
3. It does not play well with Docker.
4. It adds more complexity. You end up needing to understand all of these new environmental variables: `UV_TOOL_BIN_DIR`, `UV_SYSTEM_PYTHON`, `UV_LINK_MODE`, etc.
pip and virtualenv also add a ton of complexity and when they break (which happens quite often) debugging it is even harder despite them being "battle tested" tools.
The alternative, of course, is having Python natively support a combined tool. Which you can support while also not liking `uv` for the above reason.
It's the same sort of deal with pyenv--the Python version is itself a dependency of most libraries, so it's a little silly to have a dependency manager that only manages some dependencies.
I started using NodeJS more after lots of Python experience. Packages make so much more sense there. Even imports. You know how hard it is to do the equivalent of "require '../foo.js'" in Python?
`virtualenv` is a heavy-duty third-party library that adds functionality to the standard library venv. Or rather, venv was created as a subset of virtualenv in Python 3.3, and the projects have diverged since.
The standard library `venv` provides "obvious thing that a dependency manager does" functionality, so that every dependency manager has the opportunity to use it, and so that developers can also choose to work at a lower level. And the virtual-environment standard needs to exist so that Python can know about the pool of dependencies thus stored. Otherwise you would be forced to... depend on the dependency manager to start Python and tell it where its dependency pool is.
Fundamentally, the only things a venv needs are the `pyvenv.cfg` config file, the appropriate folder hierarchy, and some symlinks to Python (stub executables on Windows). All it's doing is providing a place for that "pool of dependencies" to exist, and providing configuration info so that Python can understand the dependency path at startup. The venvs created by the standard library module — and by uv — also provide "activation" scripts to manipulate some environment variables for ease of use; but these are completely unnecessary to making the system work.
Fundamentally, tools like uv create the same kind of virtual environment that the standard library does — because there is only one kind. Uv doesn't bootstrap pip into its environments (since that's slow and would be pointless), but you can equally well disable that with the standard library: `python -m venv --without-pip`.
> the Python version is itself a dependency of most libraries
This is a strange way of thinking about it IMO. If you're trying to obtain Python libraries, it's normally because you already have Python, and want to obtain libraries that are compatible with the Python you already have, so that you can write Python code that uses the libraries and works under that Python.
If you're trying to solve the problem of deploying an application to people who don't have Python (or to people who don't understand what Python is), you need another layer of wrapping anyway. You aren't going to get end users to install uv first.
“…I can't see any valid use case for a machine-global pool of dependencies…” - Rhetorical question for OP, but how do you run an operating system without having said operating system's dependencies available to everything else?
> how do you run an operating system without having said operating systems dependencies available to everything else?
I’m not sure if I understand your question, but I’ll answer based on what I think you mean. The OS gets compiled into an artifact, so the dependencies aren’t available to the system itself unless they are explicitly added.
> This is a strange way of thinking about it IMO. If you're trying to obtain Python libraries, it's normally because you already have Python, and want to obtain libraries that are compatible with the Python you already have, so that you can write Python code that uses the libraries and works under that Python.
“normally” is biased by what the tooling supports. If Python tooling supported pinning to an interpreter by default then perhaps it would seem more normal?
I write a lot of Go these days, and the libs pin to a version of Go. When you build a project, the toolchain will resolve and (if necessary) install the necessary Go dependency just like all of the other dependencies. It’s a very natural and pleasant workflow.
It's easier to work with if you're new to Python, lazy, or just not generally familiar with the concept of a "project". Tons of people use Python through Jupyter notebooks and install libraries to play with them in a notebook, and have no real reason to think about which installations are required for the current notebook until there's a conflict, or until they want to share their work (which might never happen).
Also as you're well aware, Python existed for a long time before the virtual environment concept.
Top that off with first-class programming capabilities and modularization, and I can share common configuration and packages across systems. And add that those same customized packages can be directly included in a dev shell, making all of the amazing software out there available for tooling and support. Really has changed my outlook, and I have so much fun now not EVER dealing with tooling issues, except when I explicitly upgrade my shell and nixpkgs version.
I just rebuilt our CI infrastructure with Nix and was able to configure multiple isolated dockerd daemons per host, calculate the subnet spread for all the networks, and write scripts configuring the env so you can run docker1 and hit daemon 1. Now we can saturate our CI machines with more parallel work without them fighting over Docker system resources like ports. I never would have attempted doing this without Nix. Being able to generate the entire system config tree and inspect systemd service configs before even applying them to a host reduced my iteration loop to an all-time low, in infrastructure land where the norm is 10-15 minute lead times of building images to find out I misspelled Kafka as "kakfa" somewhere and now need to rebuild again for 15 minutes. Now I get almost instant feedback for most of these types of errors.
Yep: Nix
I think there are more cases where pip, pyenv, and virtualenv are used together than not. It makes sense to bundle the features of the three into one. uv does not replace ruff.
> 2. You end up needing to use `uv pip` so it's not even a full replacement for pip.
uv pip is there for compatibility and to facilitate migration, but once you are fully on the uv workflow you rarely, if ever, need `uv pip`.
> 3. It does not play well with Docker.
In what sense?
> 4. It adds more complexity. You end up needing to understand all of these new environmental variables: `UV_TOOL_BIN_DIR`, `UV_SYSTEM_PYTHON`, `UV_LINK_MODE`, etc.
You don't need to touch them at all
uv doesn’t try to replace ruff.
> You end up needing to use `uv pip` so it's not even a full replacement for pip.
"uv pip" doesn't use pip, it provides a low-level pip-compatible interface for uv, so it is, in fact, still uv replacing pip, with the speed and other advantages of uv when using that interface.
Also, while I’ve used uv pip and uv venv as part of familiarizing myself with the tool, I’ve never run into a situation where I need either of those low-level interfaces rather than the normal high-level interface.
> It does not play well with Docker.
How so?
Happened to buy a new machine and decided to jump in the deep end and it's been glorious. I think the difference from your comment (and others in this chain) and my experience is that you're trying to make uv fit how you have done things. Jumping all the way in, I just . . . never needed virtualenvs. Don't really think about them once I sorted out a mistake I was making. uv init and you're pretty much there.
>You end up needing to use `uv pip` so it's not even a full replacement for pip
The only time I've used uv pip is on a project at work that isn't a uv-powered project. uv add should be doing what you need and it really fights you if you're trying to add something to global because it assumes that's an accident, which it probably is (but you can drop back to uv pip for that).
>`UV_TOOL_BIN_DIR`, `UV_SYSTEM_PYTHON`, `UV_LINK_MODE`, etc.
I've been using it for six months and didn't know those existed. I would suggest this is a symptom of trying to make it be what you're used to. I would also gently suggest those of us who have decades of Python experience may have a bit of Stockholm Syndrome around package management, packaging, etc.
- uv add <package_name>
- uv sync
- uv run <command>
Feels very ergonomic, I don't need to think much, and it's so much faster.
In my experience it generally does all of those well. Are you running into issues with the uv replacements?
> 2. You end up needing to use `uv pip` so it's not even a full replacement for pip.
What do you end up needing to use `uv pip` for?
I disagree with this principle. Sometimes what I need is a kitset. I don't want to go shopping for things, or browse multiple docs. I just want it taken care of for me. I don't use uv so I don't know if the pieces fit together well but the kitset can work well and so can a la carte.
The uv docs even have a whole page dedicated to Docker; you should definitely check that out if you haven't already: https://docs.astral.sh/uv/guides/integration/docker/
Needing pip and virtualenvs was enough to make me realize uv wasn't what I was looking for. If I still need to manage virtualenvs and call pip I'm just going to do so with both of these directly.
I had been hoping someone would introduce the non-virtualenv package management solution that every single other language has where there's a dependency list and version requirements (including of the language itself) in a manifest file (go.mod, package.json, etc) and everything happens in the context of that directory alone without shell shenanigans.
Isn't that exactly a pyproject.toml via the uv add/sync/run interface? What is that missing that you need?
Ah ok I was missing this and this does sound like what I was expecting. Thank you!
If you are using uv, you don’t need to do shell shenanigans, you just use uv run. So I'm not sure how uv with pyproject.toml doesn't meet this description (yes, the venv is still there, it is used exactly as you describe.)
I have worked on numerous projects that started with pipenv and it has never "just worked", ever. Either there's some trivial dependency conflict that it can't resolve, or it's slow as molasses, or something or other. pipenv has been horrible to use. I started switching projects to pip-tools and now I recommend using uv.
  uv venv ~/.venvs/my_new_project --python 3.13
  source ~/.venvs/my_new_project/bin/activate
  python3 -m ensurepip --upgrade
  cp -r /path/from/source/* .
  python3 -m pip install -r requirements.txt
Someone, please tell me what's wrong with this. To me, this seems much less complicated than some uv-centric .toml config file, plus some uv-centric commands for more kinds of actions.
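Nothing is wrong with it, strictly; for comparison, the uv-centric version of the same setup is roughly the following (I believe `uv add -r` can import an existing requirements.txt, but double-check the docs):

  uv init my_new_project --python 3.13
  cd my_new_project
  cp -r /path/from/source/* .
  uv add -r requirements.txt    # creates the venv, resolves, installs, writes uv.lock
  uv run python your_script.py  # no activation step needed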
Why do you need to use uv pip?
What problems you have in Docker?
I don't understand any of those env variables you listed, yet I use uv without problems.
I'm using uv in two dozen containers with no issues at all. So not sure what you mean that it doesn't play well with Docker.
So I do uv pip install ipdb.
But then, after uv add somepackage,
uv sync happens and cleans up all extras. To keep extras, you need to run uv sync --inexact. But there is no env var for `--inexact`, so I end up doing the sync manually.
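A workaround that may fit your flow: track ipdb as a dev dependency rather than an extra, since `uv sync` installs the dev group by default and won't strip it out on later syncs:

  uv add --dev ipdb
  uv sync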
- resorting to logical fallacies, or
- relying on your unstated assumption that all complexity is bad
No you don't. That's just a set of compatibility approaches for people who can't let go of pip/venv. Move to uv/PEP723, world's your oyster.
> It does not play well with Docker.
Huh? I use uv both during container build and container runtime, and it works just fine?
> You end up needing to understand all of these new environmental variables
Not encountered the need for any of these yet. Your comments on uv are so far out of line of all the uses I've seen, I'd love to hear what you're specifically doing that these become breaking points.
UV is great but I use it as a more convenient pip+venv. Maybe I'm not using it to its full potential.
uv is probably much more of a game changer for beginner python users who just need to install stuff and don't need to lint. So it's a bigger deal for the broader python ecosystem.
You aren't, but that's fine. Everyone has their own idea about how tooling should work and come together, and I happen to be in your camp (from what I can tell). I actively don't want an all-in-one tool to do "project management".
But where it isn't a matter of opinion is speed. I've never met anyone who, given the same interface, would prefer a process taking 10x longer to execute.
pyenv was problematic because you needed the right concoction of system packages to ensure it compiled python with the right features, and we have a mix of MacOS and Linux devs so this was often non-trivial.
uv is much faster than both of these tools, has a more ergonomic CLI, and solves both of the issues I just mentioned.
I'm hoping astral's type checker is suitably good once released, because we're on mypy right now and it's a constant source of frustration (slow and buggy).
> uv is much faster than both of these tools
conda is also (in)famous for being slow at this, although the new mamba solver is much faster. What does uv do in order to resolve dependencies much faster?
- Representing version numbers as single integer for fast comparison.
- Being implemented in rust rather than Python (compared to Poetry)
- Parallel downloads
- Caching individual files rather than zipped wheels, so installation is just hard-linking files, zero copy (on Unix at least). Also makes it very storage efficient.
Arguably this article is missing one of the biggest benefits: Being able to make Python scripts truly self-contained by including dependencies via a PEP 723 inline header and then running them via `uv run <script.py>` [1].
It's made Python my language of choice for one-off scripts easily shareable as gists, scp-able across systems etc.
[1] https://pybit.es/articles/create-project-less-python-utiliti...
`uv init --script myscript.py`
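For anyone who hasn't seen it, that command drops a PEP 723 metadata block at the top of the script, which ends up looking roughly like this (the `requests` dependency is only an example):

  # /// script
  # requires-python = ">=3.12"
  # dependencies = ["requests"]
  # ///

Then `uv run myscript.py` resolves and installs those dependencies into a cached environment before running the script.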
General comment: using Rust for utilities and libraries has revitalized Python.
    > Instead of 
    >
    > source .venv/bin/activate
    > python myscript.py
    >
    > you can just do
    >
    > uv run myscript
    >
Side rant: yes I get triggered whenever someone tells me "you can just" do this thing that is actually longer and worse than the original.
The `uv run` command is an optional shortcut for avoiding needing to activate the virtual environment. I personally don't like the whole "needing to activate an environment" before I can run commands "natively", so I like `uv run`. (Actually for the last 10 years I've had my `./manage.py` auto-set up the virtual environment for me.)
The `uv add` / `uv lock` / `uv sync` commands are still useful without `uv run`.
There is a new standard mechanism for specifying the same things you would specify when setting up a venv with a python version and dependencies in the header of a single file script, so that tooling can setup up the environment and run the script using only the script file itself as a spec.
uv (and PyPA’s own pipx) support this standard.
> yes I get triggered whenever someone tells me "you can just" do this thing that is actually longer and worse than the original.
"uv run myscript" is neither longer nor worse than separately manually building a venv, activating it, installing dependencies into it, and then running the script.
Apologies for triggering you in advance, but in case you or others find it useful, here’s how to do the equivalent env-activation commands with uv: https://news.ycombinator.com/item?id=44360892
In principle, you can ‘activate’ this new virtual environment like any typical virtual environment that you may have seen in other tools, but the most ‘uv-onic’ way to use uv is simply to prepend any command with uv run. This command automatically picks up the correct virtual environment for you and runs your command with it. For instance, to run a script — instead of
   source .venv/bin/activate
   python myscript.py
   uv run myscript.py

No; they are plain virtual environments. There is no special kind of virtual environment. Uv simply offers its own command structure for managing those environments. In particular, `uv run` just ensures a venv in a specific location, then uses it.
There is no requirement to activate virtual environments in order to use them (unless you have some other tooling that specifically depends on the environment variables being set). You can, similarly, "just do"
  .venv/bin/python myscript.py
> This command automatically picks up the correct virtual environment for you
Some people dislike such magic, especially since it involves uv having an opinion about where the virtual environment is located.
`uv run` will also sync the environment to be sure it exists and meets the correct specifications.
But yes, it's optional. You can also just do `uv sync` to sync the environment and then activate it like normal.
Or use `uv venv`, `uv pip` commands and just take the speed advantage.
  - Faster dependency resolution. In fact, everything uv does is extremely fast.
  - Better ergonomics in a dozen ways (`uv run` instead of activating the virtual env, support for script metadata to run scripts with dependencies, uv add to modify the pyproject.toml (that it created for you), etc.)
  - Stack of one tool instead of four+
  - Easier Python installation (although I usually use both pyenv and uv on my machine)

UV means getting more strings attached with VC-funded companies and leaning on their infrastructure. This is a high risk for any FOSS community and history tells us how this ends….
Speaking of history, I was very sympathetic to the "we are open-source volunteers, give us a break" kind of stuff for the first N years.. but pypa has a pattern of creating problems, ignoring them, ignoring criticism, ignoring people who are trying to help, and pushing talent+interest elsewhere. This has fragmented the packaging ecosystem in a way that confuses newcomers, forces constant maintenance and training burden on experts, and damages the credibility of the language and its users. Hatch is frankly too little too late, and even if it becomes a wonderful standard, it would just force more maintenance, more confusion for a "temporary" period that lasts many, many years. Confidence is too far gone.
As mentioned elsewhere in the thread, there are tons of conflicting tools in the space already, and due to the fragmentation, poetry etc could never get critical mass. That's partly because pypa stuff felt most "official" and a safer long term bet than anything else, but partly because 33% better was never good enough to encourage widespread adoption until it was closer to 200% better. But uv actually IS that much better. Just let it win.
And let pypa be a case-study in how to NOT do FOSS. Fragmentation is fine up to a point, but you know what? If it wasn't for KDE / Gnome reinventing the wheel for every single kind of individual GUI then we'd have already seen the glorious "year of the linux desktop" by now.
yep, I've been saying this for years, and astral have proved it in the best way: with brilliant, working software
python was a dying project 10 years ago, after the python 3000 debacle
the talent left/lost interest
then the machine learning thing kicked off (for some reason using python), and now python is everywhere and suddenly massively important
and the supporting bureaucracies, still in their death throes, are unable to handle a project of its importance
uv is MIT licensed so if they rug pull, you can fork.
So, will uv install psychopy (say, version 3.2.4)?
Poetry was the worst for me. It doesn’t even try to manage the Python distribution, so it’s only a partial solution. It was so slow our CICD would timeout and fail. And I watched the maintainers actively refuse to fix super annoying bugs for YEARS, blaming others for the problem.
You do understand that psychopy switched to calver immediately after that version, and that this was over five years ago? That when that package was released, the oldest currently officially supported version of Python had barely started development? And that it's packaged according to legacy standards that weren't even following the best practices for 2019 (it offers only a source distribution despite only having Python code itself)?
That said, it looks like current versions would be able to install (on Mac, only up to Python 3.11, because that's what's supported) even with pip.
I have yet to be shown a package that uv can cleanly install into a new environment but pip cannot.
This time for real!
And here: https://pyproject-nix.github.io/uv2nix/FAQ.html#why-doesnt-u...
I'd love for uv to lock build dependencies, but due to the dynamic nature of Python package metadata it's quite a hard problem. It'll be supported eventually though.
(I work on uv)
And yes, build dependencies is the big elephant of why Python packaging sucks.
For example:
    1. uv sync should update by default (like poetry)
    2. uv lock revision and dependency resolver keep changing and it makes it hard to figure out if changes to our uv.lock are real or due to separate versions of uv among developers
    3. uv pre-release dependency rules should be able to be disabled with either a sys_marker or specific case like pinning a version
 In reality, nobody checks checksums of binaries they download, so piping curl into bash makes no difference.
Piping curl to bash, especially a copy/paste from a random blog, is way too easy to exploit. Most people might not realize if the Unicode they copied from a website silently translates to a different location than what they thought they read on the screen.
But you don't have to. Brew and other package managers hold uv in their registries.
It’s really excellent stuff
Definitely lightyears faster than mypy though.
It's just simpler to use, and better overall. It's reduced friction significantly.
I think the Python community should put it as a first preference vehicle, and be respectful to the prior arts, and their developers, but not insist they have primacy.
I would love to see them compete with the likes of Conda and try to handle the Python C extension story.
But in the interim, I agree with everyone else who has already commented, Pixi which is partly built atop of UV’s solver is an even bigger deal and I think the longer term winner here.
Having a topologically complete package manager that can speak both Conda and PyPI is amazing.
virtualenv, venv, pyenv, pipenv... I think at one point the recommended option changed because it was integrated into Python, but I can't even remember which is which anymore.
Such a pleasure to finally have just one, for maybe... ~99% of my needs.
> I'd get suspicious if a developer is picky about python versions or library versions
Certain library versions only support certain python versions. And they also break API. So moving up/down the python versions also means moving library versions which means stuff no longer works.
Pip can install from dependency groups in a pyproject.toml file, and can write PEP 751 lockfiles, and work is under way to allow it to install from those lockfiles as well.
I don't know what you mean about a "standard dependency dir". When you make a venv yourself, you can call it what you want, and put it where you want. If you want to put it in a "standard" place, you can trivially make a shell alias to do so. (You can also trivially make a shell alias for "activate the venv at a hard-coded relative path", and use that from your project root.)
Yes, pip installation is needlessly slow for a variety of reasons (that mostly do not have to do with being implemented in Python rather than Rust). Resolving dependencies is also slow (and Rust may be more relevant here; I haven't done detailed testing). But your download speed is still going to be primarily limited by your internet connection to PyPI.
> The alternatives are to use higher-level management like uv does,
The question was specifically what's wrong with pip, venv and pyproject toml, i.e. what issues uv is trying to address. Well of course the thing trying to address the problem addresses the problem....
> I don't know what you mean about a "standard dependency dir".
like node's node_modules, or cargo's ~/.cargo/registry. You shouldn't have to manually create and manage that. installing/building should just create it. Which is what uv does and pip doesn't.
> the same as what you get with `python -m venv --without-pip`
The thing that should be automatic. And even if it is not it should at least be less arcane. An important command like that should have been streamlined long ago. One of the many improvements uv brings to the table.
> and work is under way to allow it to install from those lockfiles as well.
Yeah well, the lack up until now is one of those "what is wrong" things.
> But your download speed is still going to be primarily limited by your internet connection to PyPI.
Downloading lots of small package dependencies serially leaves a lot of performance on the table due to latency and non-instantaneous response from congestion controllers. Downloading and installing concurrently reduces wall time further.
The point is that it is a thing trying to address the "problem", and that not everyone considers it a problem.
> Which is what uv does and pip doesn't.
The point is that you might want to install something not for use in a "project", and that you might want to explicitly hand-craft the full contents of the environment. Pip is fundamentally a lower-level tool than uv.
> The thing that should be automatic.
Bootstrapping pip is the default so that people who have barely learned what Python is don't ask where pip is, or why pip isn't installing into the (right) virtual environment.
Yes, there are lots of flaws in pip. The problem is not virtual environments. Uv uses the same virtual environments. Neither is the problem "being a low-level tool that directly installs packages and their dependencies". I actively want to have that tool, and actively don't want a tool that tries to take over my entire project workflow.
How many commands are required to build up a locally consistent workspace?
Modern package managers do that for you.
Implementation-wise, there's nothing wrong in my view with venv. Or rather, everything is compelled to use virtual environments, including uv, and venv is just a simple tool for doing so manually. Pip, on the other hand, is slow and bulky due to poor architecture, a problem made worse by the expectation (you can work around it, but it requires additional understanding and setup, and isn't a perfect solution) of re-installing it into each virtual environment.
(The standard library venv defaults to such installation; you can disable this, but then you have to have a global pip set up, and you have to direct it to install into the necessary environment. One sneaky way to do this is to install Pipx, and then set up some script wrappers that use Pipx's vendored copy of pip. I describe my techniques for this in https://zahlman.github.io/posts/2025/01/07/python-packaging-....)
Edit: by "design" above I meant the broad strokes of how you use pip, installing single packages with their transitive dependencies etc. There's a lot I would change about the CLI syntax, and other design issues like that.
Pip also generates PEP 751 lockfiles, and installing from those is on the roadmap still (https://github.com/pypa/pip/issues/13334).
venv is lower-level tooling. Literally all it does is create a virtual environment — the same kind that uv creates and manages. There's nothing to "integrate".
There have also been PoCs on serving malicious content only when piped to sh rather than saved to file.
If you want to execute shell code from the internet, at the very least store it in a file first and store that file somewhere persistent before executing it. It will make forensics easier
Versioning OTOH is often more problematic with distro package managers that can't support multiple versions of the same package.
Also inability to do user install is a big problem with distro managers.
Of course unpredictability itself is also a security problem. I'm not even supposed to run partial updates that at least come from the same repository. I ain't gonna shovel random shell scripts into the mix and hope for the best.
Maybe if you trust the software, then trusting the install script isn't that big of a stretch?
Also, many of the "distribution" tools like brew, scoop, winget, and more are just "PR a YAML file with your zip file URL, the name of your EXE to add to PATH, and a checksum hash of the zip to this git repository". We're at about the minimum effort needed to generate a "distribution" point in software history, so it's interesting that shell scripts to install things seem to have picked up instead.
Looking at the install script or at a release page (eg. https://github.com/astral-sh/uv/releases/tag/0.9.6 ) shows they have pretty broad hardware support in their pre-compiled binaries. The most plausible route to being disappointed by the versatility of this install script is probably if you're running an OS that's not Linux, macOS, or Windows—but then, the README is pretty clear about enumerating those three as the supported operating systems.
The software is not written in a scripting language where forgetting quote marks regularly causes silent `rm -rf /` incidents. And even then, I probably don't explicitly point the software at my system root/home and tell it to go wild.
You can `pip install uv` or manually download and extract the right uv-*.tar.gz file from github: https://github.com/astral-sh/uv/releases
Also, most reasonable developers should already be running with the ExecutionPolicy RemoteSigned; it would be nice if code signing these install scripts were a little more common, too. (There was even a proposal for icm [Invoke-Command] to take signed script URLs directly for a much safer alternative code-golfed version of iwr|iex. Maybe that proposal should be picked back up.)
/just guessing, haven't tried it
No, that's how you get malware. Make a package. Add it to a distro. Then we will talk.
I hate it too.
It's moving pretty quick.
> Do they have good influence on what python's main ecosystem is moving to?
Yes, they're an early adopter/implementer of the recent pyproject.toml standards.
It’s hard to demonstrate the speed difference in a pitch deck.
Hopeful that a lot of this will be even more resolved next time I'm looking to make decisions.
Python for me is great when things can remain as simple to wrap your head around as possible.
It mentions uv at the end and rye at first (which use uv internally).
Now with uv everything just works and I can play around easily with all the great Python projects that exist.
It has always been enough to place installations in separate directories, and use the same bash scripts for environment variables configuration for all these years.
Having used it personally, uv is quite fast and nice to work with at first. It's definitely nice if you work in a team that fully utilises its potential. However, I felt a lot of parallels with the node.js universe, and switching to .venv / localized environments bloats up the system when you work with a boilerplate env that is the same across projects.
The additional files generated are also less human-readable compared to a requirements.txt, and the workflow felt a bit more involved for individual users imho. It definitely has a place in the ecosystem, but personally I don't find it ready to replace everything else yet.
> uv is an incredibly powerful simplification for us that we use across our entire tech stack. As developers, we can all work with identical Python installations, which is especially important given a number of semi-experimental dependencies that we use that have breaking changes with every version. On GitHub Actions, we’re planning to use uv to quickly build a Python environment and run our unit tests. In production, uv already manages Python for all of our servers.
> It’s just so nice to always know that Python and package installation will always be handled consistently and correctly across all of our machines. That’s why uv is the best thing to happen to the Python ecosystem in a decade.
I can only conclude that the author of the article, and perhaps even the organization they work in, is unaware of other tools that did the job long before uv. If they really value reproducibility that much, how come they didn't look into the matter before? Things must have been really hastily stitched together if no one ever looked at existing tooling before, and only now are they making things reproducible.
I guess reproducibility is still very much a huge problem, especially in jobs, where it should be one of the most important things to take care of: Research. ("Astronomer & Science Communicator" it says on the website). My recommendation is: Get an actual software developer (at least mid-level) to support your research team. A capable and responsibly acting developer would have sorted this problem out right from the beginning.
I am glad they improved their project setups to the level they should be at, if they want to call it research.
Yes, Poetry has had lock files for years, and pyenv has been able to manage installations, but uv is "an incredibly powerful simplification" that makes it easy to do everything really well with just one tool.
There’s a bigger conversation about open source maintenance there, but if I have to get my job done it’s increasingly tempting to take the simplifications and speed.
I'm not convinced. "pip" uses one's weaker ring finger; "v" is adjacent to "f"; and alternating hands should be easier for a touch typist.
But nice read!
Over the years, I've tried venv, conda, pipenv, poetry, plain pip with requirements.txt. I've played with uv on some recent projects and it's a definite step up. I like it.
Uv actually fixes most of the issues with what came before and actually builds on existing things. Which is not a small compliment, because the state of the art before uv was pretty bad. Venv, pip, etc. are fine; they are just not enough by themselves. Uv embraces both. Without that, all we had was a lot of puzzle pieces that barely worked together and didn't really fit together that well. I tried making conda + pipenv work at some point. Pipenv shell makes your shell stateful, which just adds a lot of complexity; none of the IDEs I tried figured that out properly. I had high hopes for poetry but it ended up a bit underwhelming and still left a lot of stuff to solve. Uv succeeds in providing a bit more of an end-to-end solution: everything from project-specific python installation, to venv by default without hassle, to dependency management, etc.
My basic needs are simple. I don't want to pollute my system python with random crap I need for some project. So, like uv, I need to have whatever solution deal with installing the right python version. Besides, the system python is usually out of date and behind the current stable version of python which is what I would use for new projects.
Maven has always been a very good solution. I think Bazel is too, but haven't had much experience with it.
To me, Python's best feature is the ability to quickly experiment without a second thought. Conda is nice since it keeps everything installed globally so I can just run `python` or iPython/Jupyter anywhere and know I won't have to reinstall everything every single time.
One thing I did recently was create a one-off script with functions to exercise a piece of equipment connected to the PC via USB, and pass that to my coworkers. I created a `main.py` and uv add'ed the library. Then when I wanted to use the script in the REPL, I just did `uv run python -i main.py`.
This let me just call functions I defined in there, like `set_led_on_equipment(led='green', on=True)` directly in the REPL, rather than having to modify the script body and re-run it every time.
Edit: another idea that I just had is to use just[0] and modify your justfile accordingly, e.g. `just pything` and in your justfile, `pything` target is actually `uv run --with x,y,z ipython`
Edit edit: I guess the above doesn't even require just, it could be a command alias or something, I probably am overengineering that lol.
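i.e., keeping the placeholders from the comment above, it can be as small as:

  # x,y,z = whatever packages you want on hand (ipython among them, or already a project dep)
  alias pything='uv run --with x,y,z ipython'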
   [project]
   name = "my_project"
   version = "1.0.0"
   requires-python = ">=3.9,<3.13"
   dependencies = [
     "astropy>=5.0.0",
     "pandas>=1.0.0,<2.0",
   ]
You may have a library that's been globally installed, and you have multiple projects that rely on it. One day you may need to upgrade the library for use in one project, but there are backward incompatibile changes in the upgrade, so now all of your other projects break when you upgrade the global library.
In general, when projects are used by multiple people across multiple computers, it's best to have the specific dependencies and versions specified in the project itself so that everyone using that project is using the exact same version of each dependency.
For recreational projects it's not as big of a deal; it just makes recreating your environment harder.
Because it being available in the system environment could cause problems for system tools, which are expecting to find something else with the same name.
And because those tools could include your system's package manager (like Apt).
> So there is a massive possibility I am simply wrong and pip-installing something globally is a huge risk. I'm just not understanding it.
I assume you're referring to the new protections created by the EXTERNALLY-MANAGED marker file, which will throw up a large boilerplate warning if you try to use pip to install packages in the system environment (even with --user, where they can still cause problems when you run the system tools without sudo).
You should read one or more of:
* the PEP where this protection was introduced (https://peps.python.org/pep-0668/);
* the Python forum discussion explaining the need for the PEP (https://discuss.python.org/t/_/10302);
* my blog post (https://zahlman.github.io/posts/2024/12/24/python-packaging-...) where I describe in a bit more detail (along with explaining a few other common grumblings about how Python packaging works);
* my Q&A on Codidact (https://software.codidact.com/posts/291839/) where I explain more comprehensively;
* the original motivating Stack Overflow Q&A (https://stackoverflow.com/questions/75608323/);
* the Python forum discussion (https://discuss.python.org/t/_/56900) where it was originally noticed that the Stack Overflow Q&A was advising people to circumvent the protection without understanding it, and a coordinated attempt was made to remedy that problem.
Or you can watch Brodie Robertson's video about the implementation of the PEP in Arch: https://www.youtube.com/watch?v=35PQrzG0rG4.
https://dotslash-cli.com/docs/
DotSlash to get the interpreter for your platform, and uv to get the dependencies.
Perfect for corporate setups with custom mirrors etc.
The home page should be a simplified version of this page buried way down in the docs: https://docs.astral.sh/uv/guides/projects/
"Just pipe a random script from the internet into your shell! What could possibly go wrong?"
Run my command through an LLM and tell me "don't do this" once, I'm out to a different distro :-).
Also, if people copy-paste stuff they don't understand in a terminal (and running a script like this is pretty much "running stuff one does not understand"), I don't think there is anything you can do for them.
They shouldn't, though...
  dependencies = [
      "torch==2.8.0+rocm6.4",
      "torchvision==0.23.0+rocm6.4",
      "pytorch-triton-rocm==3.4.0",
  ...
  ]
(Transparently, I'm posting this before I've completed the article.)
uv's biggest advantage is speed. It claims a 10-100x performance speedup over pip and Conda [1]. uv can also manage Python versions and supports using Python scripts as executables via inline dependencies [2].
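For reference, the inline-dependency format in [2] is just a PEP 723 metadata block at the top of the script, which `uv run` reads to build a cached throwaway environment; the `requests` dependency below is an arbitrary example:

  # /// script
  # requires-python = ">=3.12"
  # dependencies = ["requests"]
  # ///
  import requests

  print(requests.get("https://example.com").status_code)

Run it with `uv run script.py` and the dependencies are resolved on the fly, with no project or venv setup needed.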
But Conda is better for non-Python usage and is more mature, especially for data science related uses.
[1]: https://github.com/astral-sh/uv/blob/main/BENCHMARKS.md
[2]: https://docs.astral.sh/uv/#scripts
The fact that it's a binary, not written in python, also simplifies bootstrapping. So you don't need python+dependencies installed in order to install your python+dependencies.
Some foundations have moved into the stdlib. This means that newer tools are much more compatible with each other and mainly just differ in implementation rather than doing different things altogether. The new stuff is working on a much more standard base and can leave behind many dark crufty corners.
Unravelling the legacy stuff and putting the standards in place seems to have taken 15+ years?
Standards are developed to allow existing tools to inter-operate; this entails allowing new tools to appear (and inter-operate), too.
This system was in some regards deliberate, specifically to support competition in "build backends". The background here is that many popular Python projects must interface to non-Python code provided with the project; in many cases this is code in compiled languages (typically C, Fortran or Rust) and it's not always possible to pre-build for the user's system. This can get really, really complicated, and people need to connect to heavyweight build systems in some cases. The Python ecosystem standards are designed with the idea that installers can automatically obtain and use those systems when necessary.
And by doing all of this, Python core developers get to focus on Python itself.
Another important concern is that some bad choices were made initially with Setuptools, and we have been seeing a very long transition because of a very careful attitude towards backwards compatibility (even if it doesn't seem that way!) which in turn is motivated by the battle scars of the 2->3 transition. In particular, it used to be normal and expected that your project would use arbitrary Python code (in `setup.py` at the project root) simply to specify metadata. Further, `setup.py` generally expects to `import setuptools`, and might require a specific version of Setuptools; but it can't express its build-time Setuptools version requirement until the file is already running - a chicken-and-egg scenario.
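To make that chicken-and-egg concrete, here is a sketch of the old style (project name and pins made up for illustration): the metadata only exists as the side effect of running arbitrary code, and the code needs some Setuptools imported before it can say which Setuptools it needs.

  # setup.py, old style: metadata is whatever this code happens to do.
  from setuptools import setup  # already requires *some* Setuptools here

  setup(
      name="legacy-widget",           # made-up project name
      version="0.9.3",
      install_requires=["requests"],  # runtime deps, declared imperatively
      # The build-time requirement ("Setuptools >= X") has nowhere to live:
      # by the time this line runs, a Setuptools version is already loaded.
  )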
Modern projects use a declarative TOML file for "abstract" metadata instead (which is the source for concrete metadata included in the actual build artifacts), but the whole ecosystem still has to support a lot of really outdated ways of doing things, in part because of how much abandonware is out there.
[0]: Wheels are zip-compressed, and Python can run code from a zip file, with some restrictions. The pip project is designed to make sure that this will work. The standard library provides a module "ensurepip" which locates this wheel and runs a bootstrap script from that wheel, which will then install into the current environment. Further, the standard library "venv", used to create virtual environments, defaults to using this bootstrap in the newly created environment.
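As a standard-library-only sketch of that bootstrap path: venv creates the environment and, when asked to include pip, runs the ensurepip bootstrap inside it, which installs pip from the bundled wheel (the target directory here is arbitrary).

  import venv

  # with_pip=True matches what `python -m venv` does by default; it triggers
  # the ensurepip bootstrap described above inside the new environment.
  venv.EnvBuilder(with_pip=True).create("demo-env")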
With Python over the years I can think of pip, pipx, setuptools, easy_install, distutils, venv, conda, wheel, the .egg and wheel formats, and now uv.
PHP stabilized with Composer, Perl with CPAN, and Go with `go mod` and `go get` (built in).
Java and Swift had some competition with Gradle/Maven and SwiftPM/CocoaPods, but nothing as egregious.
File tree, dep tree, task DAG: how many ways can they be written?
Almost literally: https://wheelnext.dev/
> how many ways can they be written?
It's not just a matter of how they're written. For Python specifically, build orchestration is a big deal. But also, you know, there are all the architecture ideas that make uv faster than pip. Smarter (and more generous) caching; hard-linking files where possible rather than copying them; parallel downloads (I tend to write this off but it probably does help a bit, even though the downloading process is intermingled with resolution); using multiple cores for precompiling bytecode (the one real CPU-intensive task for a large pure-Python installation).
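To illustrate just that last item (this is not uv's code, only the standard-library equivalent of the work being parallelized): compileall can fan byte-compilation out across all cores after a large pure-Python install.

  import compileall

  # workers=0 means "use one worker process per CPU"; the path is a
  # placeholder for a freshly populated site-packages directory.
  compileall.compile_dir("path/to/site-packages", workers=0, quiet=1)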
1. Ruby tooling wasn't superior when Python became the standard.
2. Until Rails, IIRC, Ruby had limited visibility outside of Japan, whereas Python already had deep penetration in lots of fields.
3. Ruby for quite a while, IIRC, had a pretty bad story on Windows compared to Python.
For that matter, when Python became the standard, the modern conception of a "language ecosystem" scarcely existed.
Ruby's metaprogramming often leads to too much magic and a lack of understanding.
The machine-learning world, especially Google's "Google Brain" research team, figured out that NumPy was an awesome piece of software for dealing with large arrays of numbers and matrix multiplication. They built TensorFlow on top of it around 2015, and it became very popular. Facebook followed suit and released PyTorch in 2016.
IPython/Jupyter notebooks (for Julia, Python and R) from 2015 were another factor, also adopted by the AI/ML community.
The alternative data-science languages at the time were Mathematica, MATLAB, SAS, Fortran, Julia, R, etc, but Python probably won because it was general purpose and open source.
I suspect Python would not have survived the 2/3 split very well if it wasn't for AI/ML adopting Python as its main language.
> when the tooling was so inferior
Since 2012, Conda/Anaconda has been the go-to installer in the SciPy/NumPy world which also solves a lot of problems that uv solves.
No need to clone/manually install packages first. E.g. `uvx --from "git+https://github.com/richstokes/meshtastic_terminal.git" meshtastic-tui`
Does "any" version include custom homebrew builds of Python, e.g. backports of Python 3.12 to Windows Vista/7?
But why is it that the Windows installation method is to execute a script off the Internet with security protections bypassed?
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
Nix does not play well with Python dependencies. It's so nice to have that part taken care of in a reproducible manner by uv.
As an outsider to the python ecosystem I've wanted to learn the _how_ behind uv as well, but that hasn't been immediately clear
I had to update some messy Python code and I was looking for a tool that could handle Python versions, package updates, etc. with the least documentation to read and the least troubleshooting.
Rye was that for me! Next time I write python I'm definitely going to use uv.
Please see https://news.ycombinator.com/item?id=45753142.
What strikes me about uv is that it seems to understand that not everyone launching a Python-based project has a CS degree. That accessibility matters—especially in the era where more non-engineers are building products.
Curious: for those who've switched to uv, did you notice any friction when collaborating with team members who were still on traditional setups? I'm thinking about adoption challenges when you're not a solo builder.
I'm teaching (strongly recommending/forcing using) uv in all my courses now.
uv is a clear improvement over pip and venv, for sure.
But I do everything in dev containers these days. Very few things get to install on my laptop itself outside a container. I've gotten so used to this that tools that uninstall/install packages on my box on the fly give me the heebie-jeebies.
Yes, it was the NPM supply chain issues that really forced this on me. Now I install, fetch, and build in an interactive Docker container.
This whole discussion has the same vibes as digital photography 15 years ago. Back then, some people spent more time discussing the tech specs of their cameras than taking photos. Now some people spend more time discussing the pros and cons of different Python environment management solutions than building real things.
The last time I had to touch one of my dockerized environments was when Miniconda and Miniforge were merged. I told the agent "fix the dockerfile", and the third attempt worked. Another time, one dependency was updated and I had to switch to Poetry. Once again, I told the agent "refactor the repository to Poetry" and it worked. Maybe that's because all my Python package versions are frozen and I only update them when they break or when I need the functionality of a new version.
Whenever this topic pops up in real life, I always ask back: what was the longest you managed the same Python service in the cloud? In most cases, the answer is never. The last time, someone said one year, and after a while that service was turned into two .py files.
I don't know. Maybe I'm just too far away from FAANG level sorcery. Everything is a hammer if all you have to deal with are nails.
Does that mean they aren't running unit tests _at all_ in CI yet, or they just use a totally different, newer system in production than they do for CI? Either way, brave of them to admit that in public.
It wasn't anything like the radical change to how CI works that you seem to be envisioning. It was just deleting a lot of Python environment setup and management code that has a history of being obnoxious to maintain, and replacing it with a one-liner that, at least thus far, has given us zero fuss.
I don't know how the author's company manages their stack, so I can't speak to how they do their testing. But I do know that in many companies run-time environment management in production is not owned by engineering and it's common for ops and developers to use different methods to install run-time dependencies in the CI environment and in the production environment. In companies that work that way, testing changes to the production runtime environment isn't done in CI; it's done in staging.
If that's at all representative of how they work, then "we didn't test this with the automated tests that Engineering owns as part of their build" does not in any way imply, "we didn't test this at all."
Tangentially, the place I worked that maintained the highest quality and availability standards (by far) did something like this, and it was a deliberate reliability engineering choice. They wanted a separate testing phase and runtime environment management policy that developers couldn't unilaterally control as part of a defense in depth strategy. Jamming everything into a vertically integrated, heavily automated CI/CD pipeline is also a valid choice, but one that has its roots in Silicon Valley culture, and therefore reaches different solutions to the same problems compared to what you might see in older industries and companies.
I'm very happy the python community has better tooling.
How do I install it globally on a system? Debian doesn't let me install packages via pip outside of a venv or similar.
You can go from no virtual environment, and just "uv run myfile.py" and it does everything that's needed, nearly instantly.
  $ time pip install
  ERROR: You must give at least one requirement to install (see "pip help install")
  real 0m0.356s
  user 0m0.322s
  sys 0m0.036s
I've always wondered why Linux OSes that rely on python scripts don't make their own default venv and instead clobber the user's default python environment...
The wheel basically contains a compiled ~53MB (huh, it's grown in recent versions) Rust executable and a few boilerplate files and folders to make that play nice with the Python packaging ecosystem. (It actually does create an importable `uv` module, but this basically just defines a function that tells you the path to the executable.)
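If I remember right, that helper is exposed as `uv.find_uv_bin()`; treat the exact name as an assumption, but from inside an environment with the wheel installed it looks roughly like this:

  import uv

  # Prints the filesystem path of the bundled Rust executable.
  print(uv.find_uv_bin())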
If you want it in your system environment, you may be out of luck, but check your full set of options at https://docs.astral.sh/uv/getting-started/installation/ .
The install script does a ton of system introspection. It seems to be structured quite similarly to the Julia installer, actually.
That was silly of me; since all you need is the compiled executable, you can just move it to an appropriate place (that doesn't interfere with the system package manager; so, /usr/local/bin, or /opt/bin) after user installation.
Using uv at build time can dramatically reduce your build times if you properly handle the uv cache. https://docs.astral.sh/uv/guides/integration/docker/#caching
It's also easy:
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
Pip is also not conda, but uv is way faster than pip.
> Reminds me of that competing standards xkcd.
Yes, for years I've sat on the sidelines avoiding the fragmented Poetry, pyenv, pipenv, pipx, pip-tools/pip-compile, rye, etc., but uv now finally does seem to be the all-in-one solution succeeding where other tools have failed.
In general, you can use your preferred package management tool with their code. The developers are just showing you their own workflow, typically.
Not a Python developer, so I'm not sure it's equivalent, since the npm registry is shared between everyone.
uv has implemented experimental support, which they announced here [3].
[0] https://wheelnext.dev/proposals/pepxxx_wheel_variant_support...
[1] https://us.pycon.org/2025/schedule/presentation/100/
I'm interested if you have any technical documentation about how conda environments are structured. It would be nice to be able to interact with them. But I suspect the main problem is that if you use a non-conda tool to put something into a conda environment, there needs to be a way to make conda properly aware of the change. Fundamentally it's the same issue as with trying to use pip in the system environment on Linux, which will interfere with the system package manager (leading to the PEP 668 protections).
Since I mostly avoid non-reproducible use cases, like stating dependencies inside the Python scripts themselves, without checksums, only with versions, and stuff like that, I am not really benefiting that much. I guess I'm just not writing enough throwaway code to benefit from those use cases.
Some people here act like uv is the first tool ever to install dependencies the way npm and cargo and so on do. Well, I guess they didn't use Poetry before, which did just that.
Have you tried uv?
If you haven't spent 5 minutes trying it out, you don't know what you're missing.
If you're worried about getting addicted like everyone else, I could see that as a valid reason to never try it in the first place.
Other than speed and consolidation, pip, pipx, hatch, virtualenv, and pyenv together roughly do the job (though pyenv itself isn’t a standard python tool.)
> Why uv over, lets say, conda?
Support for Python standard packaging specifications and consequently also easier integration with other tools that leverage them, whether standard or third party.
I don’t think people would think twice about the legitimacy (if you want to call it that) of uv except for all the weird fawning over it that happens, as you noticed. It makes it seem more like a religion or something.
For example, installing on an air gapped system, where uv barely has support.
I don't really get it: uv solves all these problems I've never encountered. Just making a venv and using it seems to work fine.
For me package installation is way, way faster with uv, and I appreciate not needing to activate the virtual environment.
I don't love that UV is basically tied to a for profit company, Astral. I think such core tooling should be tied to the PSF, but that's a minor point. It's partially the issue I have with Conda too.
I just... build from source and make virtual environments based off them as necessary. Although I don't really understand why you'd want to keep older patch versions around. (The Windows installers don't even accommodate that, IIRC.) And I can't say I've noticed any of those "significant improvements and differences" between patch versions ever mattering to my own projects.
> I don't love that UV is basically tied to a for profit company, Astral. I think such core tooling should be tied to the PSF, but that's a minor point. It's partially the issue I have with Conda too.
In my book, the less under the PSF's control, the better. The meager funding they do receive now is mostly directed towards making PyCon happen (the main one; others like PyCon Africa get a pittance) and to certain grants, and to a short list of paid staff who are generally speaking board members and other decision makers and not the people actually developing Python. Even without considering "politics" (cf. the latest news turning down a grant for ideological reasons) I consider this gross mismanagement.
The PSF is busy with social issues and doesn't concern itself with trivia like this.
Edit: or was it ruff? Either way. I thought they created the tools first, then the company.
Wonderful project
Don't do this shit, especially if you were told to do this shit.
If you are going to do this shit, separate the commands and read the bash script (scroll all the way through it, and if another file is downloaded, download that manually and inspect it).
If you are going to ask people to do this shit, split the command into two. Someone that asks me to do something insecure is either a malicious actor that is trying to compromise me, or someone careless enough to be compromised themselves.
I don't care what uv is, I can pip install stuff, thank you. I install 2 or 3 things, tops. I don't install 500 packages; that sounds like a security nightmare.
Change your ways or get pwned, people. Don't go the way of node/npm.
p.s: Stop getting cute with your TLDs, use a .com or the TLD from your country, using gimmick TLDs is opaque and adds an unnecessary vector, in this case on the politics of the British Overseas Territory of Saint Helena, Ascension, and Tristan da Cunha.
The idea behind most package managers including apt and pip is that they help you build the software and try to make it easier for you without actually downloading and trusting binaries.
Because you can easily make changes to the software, not because it's way less likely to be backdoored.
>The idea behind most package managers including apt and pip is that they help you build the software and try to make it easier for you without actually downloading and trusting binaries.
I'm so deeply confused
Compare this to the Go community, who celebrate rewrites from other languages into Go. They rewrote their compiler in Go even though that made it worse (slower) than the original C version, because they enjoy using their own language and recognise the benefits of dogfooding.
Python is an interpreted scripting language that was not originally designed with high performance computing in mind. It's perfectly normal for languages like that to have their tooling written in a systems programming language. It's also perfectly normal for languages like that to have components that do need to be performant written in a systems programming language. We call this, "Using the right tool for the job."
It's true that a lot of historical Python toolchain was written in Python. That was also using the right tool for the job. It's a holdover from a time when Python was still mostly just a scripting language, so projects were smaller and packages were smaller and dependency trees were smaller and there just generally weren't as many demands placed on the toolchain.
Go, by contrast, is itself a systems programming language. And so naturally they'd want to have all the systems components written in Go, and the sooner the better. It wouldn't inspire much confidence if the maintainers of a systems programming language didn't trust it with systems programming tasks.
I wouldn't describe having a culture that isn't exclusivist fanaticism as not liking the language, though.
3.14 is a big deal.
But otherwise, people on this forum and elsewhere are praising uv for: speed, single-file executable, stability, and platform compatibility. That's just a summary of the top reasons to write in Rust!
I agree 3.14 is a big deal as far as Python goes, but it doesn't really move the needle for the language toward being able to author apps like uv.
Which is fine, Python is not for everything.
But I’m utterly shocked that UV doesn’t support “system dependencies”. It’s not a whole conda replacement. Which is a shame because I bloody hate Conda.
Dependencies like Cuda and random C++ libraries really really ought to be handled by UV. I want a true genuine one stop shop for running Python programs. UV is like 80% of the way there. But the last 20% is still painful.
Ideally UV would obsolete the need for Docker. Docker shouldn't be a requirement to reliably run a program.
Currently, there isn't a way for the packages to specify these dependencies.
In part because of the complexity of explaining exactly what is needed (and where to look for it, if you're expecting the system to provide it outside of what Python manages itself).
See https://peps.python.org/pep-0725/ and https://pypackaging-native.github.io/ for details.
I've switched to running any and all Python projects in Docker as a way to ensure that low-effort supply chain attacks don't easily get everything in my home dir. So even if I use uv, I'd only do that in a Docker image for now.
Docker images are a productivity killer. I don’t want to waste even 1 second building an image. And all the hoops you have to jump through to enable rapid iteration aren’t worth it.
Docker Images are fine - I guess - for deployment. But for development I absolutely hate them.
EDIT: Looks like I fell hook, line, and sinker for the troll. Shame on me.
My brother in christ, we are all just names without bodies or even faces on this digital ocean of the internet. Letting people know how they should address you isn't "disconnected from reality", it's grounded in the very real reality that we, as people, like talking to each other. We should all be so thankful for their foresight in allowing us the opportunity of avoiding an otherwise unavoidable faux pas of calling everyone in the world "hey you".
No, the same uv that people have been regularly (https://hn.algolia.com/?q=uv) posting about on HN since its first public releases in February of 2024 (see e.g. https://news.ycombinator.com/item?id=39387641).
> How many are there now?
Why is this a problem? The ecosystem has developed usable interoperable standards (for example, fundamentally uv manages isolated environments by using the same kind of virtual environment created by the standard library — because that's the only kind that Python cares about; the key component is the `pyvenv.cfg` file, and Python is hard-coded to look for and use that); and you don't have to learn or use more than one.
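A quick way to see the "only kind Python cares about" point in practice (just a standard-library check, nothing uv-specific): inside any environment built around a `pyvenv.cfg`, `sys.prefix` diverges from `sys.base_prefix`.

  import sys

  # In a virtual environment (uv-created or otherwise), prefix points at the
  # environment and base_prefix at the interpreter it was built from.
  print(sys.prefix)
  print(sys.base_prefix)
  print("in a venv?", sys.prefix != sys.base_prefix)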
There are competing options because people have different ideas about what a "package manager" should or shouldn't be responsible for, and about the expectations for those tasks.
So you are back to having to use conda and the rest. Now you have yet another package manager to handle.
I wouldn't be harsh to engineers at astral who developed amazing tooling, but the issue with the python ecosystem isn't lack of tooling, it is the proliferation and fragmentation. To solve dependency management fully would be to incorporate other package descriptors, or convert them.
Rsbuild, another Rust project, did just that for the Node ecosystem, for building and bundling. They came up with Rspack, which has broad compatibility with the webpack config.
You find a webpack repo? Just add Rsbuild and Rspack and you are pretty much ready to go, without the slow (Node-native) webpack.
It's been a joy for owning some dependencies that haven't been maintained much.
Mostly just using codex web/claude code web and it's doing wonders.
Conda solves a completely orthogonal set of problems, and is increasingly unnecessary. You can `pip install scipy` for example, and have been able to for a while.
I was referring to the interfaces of other packaging tools. I use uv and it's excellent on its own.
You get a repo and it's using Playwright; what do you do now? You install all the dependencies found in its dependency descriptor and then sync to create a uv descriptor, or you compose a descriptor that uv understands.
It's repetitive and rather systematic, so it could be automated. I should volunteer for a PR, but my point is that introducing yet another tool to an ecosystem suffering from a proliferation of build and dependency management tooling expands the issue. It would have been helpful from the get-go to support existing and prolific formats.
pnpm understands package.json. It didn't reinvent the wheel, because we have millions of wheels out there. It created its own pnpm lock file, but that's a file a user isn't meant to touch, so the transition from npm to pnpm goes seamlessly. It's almost the same when migrating from webpack to Rsbuild.
You only have to look at the Ruby ecosystem and the recent mass expulsion of long-term developers from rubygems/bundler via RubyCentral going full corporate mode ("we needs us some more moneeeeeys now ... all for the community!!!" - or something). While one COULD find pros in everything, is what is happening in different programming languages really better for both users and developers? I am not quite convinced here.
I am not saying the prior status quo was perfect. What I am saying is that I am not quite convinced the proposed benefits are real. In fact, I find managing multiple versions actually annoying. I already handle that mostly the GoboLinux way (Name/Version/ going into a central directory; this is also similar to what Homebrew does, and to some extent NixOS, except that NixOS stores things under a unique hash, which is less elegant). For instance, on GoboLinux I would have /Programs/Ruby/3.3.0/ - that's about as simple as can possibly be. I really don't want a tool I don't understand to inject itself here and add more complications. My system is already quite simple and I don't really need anything it describes to me as "you need this".
I also track and handle dependencies on my own. This is more work initially, but past that point I just run "ue" on the command line to update to the latest version, where ue is simply an alias to a Ruby class called UpdateEntry, which updates an entry in a .yml file; that file is then used to populate a SQL database and also to download, repackage and optionally compile/install the given package. For example, "ue mesa" would just update the mesa .tar.xz locally. I usually don't compile automatically, though, so I mostly use "ue" to update or change a program version; it also accepts a URL, of course, so users can override this behaviour as they see fit.