I now use uv for everything Python. The reason for the switch was a shared server where I did not have root and there were all sorts of broken packages/drivers and I needed pytorch. Nothing was working and pip was taking ages. Each user had 10GB of storage allocated and pip's cache was taking up a ton of space & not letting me change the location properly. Switched to uv and everything just worked
If you're still holding out, really just spend 5 minutes trying it out, you won't regret it.
pip + config file + venv requires you to remember ~2 steps to get the right venv - create one and install stuff into it - and for each test run, script execution and such, you need to remember a weird shebang format or to activate the venv. And the error messages don't help. I don't think they could help, as this setup is not standardized or blessed. You just have to beat the association between ImportErrors and venvs into your brain.
It's workable, but teaching this to people unfamiliar with it has reminded me how... squirrely the whole tooling can be, for lack of a better word.
Now, team members need to remember "uv run", "uv add" and "uv sync". It makes the whole thing so much easier and less intimidating to them.
There's also some additional integration which I haven't tried yet: https://mise.jdx.dev/mise-cookbook/python.html#mise-uv
I use Just inside (and outside) mise, almost exclusively with embedded shell/Python scripts. I've used mise tasks a little, but they didn't add enough for me to switch.
Or do you mean justfiles with shebangs?
Either way, curious what problem you hit (and may be able to unblock ya)
Possibly patched now, but I'm definitely going to try breaking it before moving any more stuff into it.
For moving to uv, I haven't heard a good story for what uv provides over Poetry other than "it's fast". The only unique thing that I am currently aware of is that uv can install Python itself, which gets rid of tools like pyenv. I'm interested because of that, but "it's fast" isn't enough of a reason.
I've converted multiple large Python codebases to ruff, and each time I just configure ruff as close to the previous tools as possible, then reformat the entire codebase with ruff and remove all the previous tools. The speed increase when linting alone is worth the minor formatting changes to me.
If you really insist on keeping isort's sorting then you could at least replace black and pylint, which would reduce the total number of tools by one.
Even your most complicated projects should be able to switch within a day. The only reason I took any time was having to restructure my docker containers to work with uv instead of poetry. And that's mostly with my inexperience with docker, not because uv is complicated.
Is it better about storage use? (And if so, how? Is it just good at sharing what can be shared?)
I'd avoid workflows that lean on it, if anything else for security's sake.
Really? :)
requirements.txt is just hell and torture. If you've ever used modern project/dependency management tools like uv, Poetry, PDM, you'll never go back to pip+requirements.txt. It's crazy and a mess.
uv is super fast and a great tool, but it still has rough edges and bugs.
# Makefile
compile-deps:
	uv pip compile pyproject.toml -o requirements.txt

compile-deps-dev:
	uv pip compile --extra=dev pyproject.toml -o requirements.dev.txt
Yes. Azure, for instance, looks for requirements.txt if you deploy a web app to Azure App Service.
If you’re doing a code-based deployment, it works really well. Push to GitHub, it deploys.
You can of course do a container-based deployment to Azure App Service and I’d assume that will work with uv.
Damn I'm getting old
Secondly Docker only solves a subset of problems. It's fine if you're developing a server that you will be deploying somewhere. It's inconvenient if you're developing an end user application, and it's completely useless if you're developing a library you want people to be able to install.
[1] https://docs.astral.sh/uv/guides/tools/#commands-with-plugin...
You don't have that problem with Poetry. You go make a cup of coffee for a couple minutes, and it's usually done when you come back.
Rust's speed advantages typically come from one of a few places:
1. Fast start-up times, thanks to pre-compiled native binaries.
2. Large amounts of CPU-level concurrency with many fewer bugs. I'm willing to do ridiculous threading tricks in Rust I wouldn't dare try in C++.
3. Much lower levels of malloc/free in Rust compared to some high-level languages, especially if you're willing to work a little for it. Calling malloc in a multithreaded system is basically like watching the Millennium Falcon's hyperdrive fail. Also, Rust encourages abusing the stack to a ridiculous degree, which further reduces allocation. It's hard to "invisibly" call malloc in Rust, even compared to a language like C++.
4. For better or worse, Rust exposes a lot of the machinery behind memory layout and passing references. This means there's a permanent "Rust tax" where you ask yourself "Do I pass this by value or by reference? Who owns this, and who just borrows it?" But the payoff for that work is good memory locality.
So if you put in a modest amount of effort, it's fairly easy to make Rust run surprisingly fast. It's not an absolute guarantee, and there are a couple of traps for the unwary (like accidentally forgetting to buffer I/O, or benchmarking debug binaries).
tl;dw Rust, a fast SAT solver, micro-optimisation of key components, caching, and hardlinks/CoW.
1. The way they get the metadata for a package.
Packages are in zip files, and zip files have their TOC at the end. So instead of downloading the entire zip, they just fetch the end of the file, read the TOC, and from that download just the metadata part (a rough Python sketch of this follows below).
I've written that code before for my own projects.
2. They cache unzipped packages and then link them into your environment.
This means no files are copied on the second install. Just links.
Both of those are huge time wins that would be possible in any language.
3. They store their metadata as a memory dump
So, on loading there is nothing to parse.
Admittedly this is hard (impossible?) in many languages. Certainly not possible in Python and JavaScript. You could load binary data but it won't be useful without copying it into native numbers/strings/ints/floats/doubles etc...
I've done this in game engines to reduce load times in C/C++ and to save memory.
It'd be interesting to write some benchmarks for the first 2. The 3rd is a win but I suspect the first 2 are 95% of the speedup.
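Here is a minimal sketch of the first trick (the Range-request metadata read), just to make it concrete. This is not uv's actual code, and the wheel URL is a placeholder: the idea is to wrap HTTP Range requests in a seekable file-like object and hand it to Python's zipfile, which then fetches only the central directory at the end of the archive plus the single member you read, instead of the whole wheel. It assumes the server supports Range requests and returns a Content-Length on HEAD.

    import io
    import urllib.request
    import zipfile

    class HttpRangeFile(io.RawIOBase):
        """Seekable read-only view of a remote file, backed by HTTP Range requests."""

        def __init__(self, url):
            self.url = url
            self.pos = 0
            # HEAD request to learn the total size without downloading the body.
            head = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(head) as resp:
                self.size = int(resp.headers["Content-Length"])

        def seekable(self):
            return True

        def readable(self):
            return True

        def tell(self):
            return self.pos

        def seek(self, offset, whence=io.SEEK_SET):
            if whence == io.SEEK_SET:
                self.pos = offset
            elif whence == io.SEEK_CUR:
                self.pos += offset
            elif whence == io.SEEK_END:
                self.pos = self.size + offset
            return self.pos

        def read(self, n=-1):
            if n < 0:
                n = self.size - self.pos
            if n == 0 or self.pos >= self.size:
                return b""
            end = min(self.pos + n, self.size) - 1
            req = urllib.request.Request(
                self.url, headers={"Range": f"bytes={self.pos}-{end}"}
            )
            with urllib.request.urlopen(req) as resp:
                data = resp.read()
            self.pos += len(data)
            return data

    # Placeholder URL: point this at any real wheel on a server that supports Range.
    wheel = HttpRangeFile("https://example.com/somepackage-1.0-py3-none-any.whl")
    with zipfile.ZipFile(wheel) as zf:
        meta = next(n for n in zf.namelist() if n.endswith(".dist-info/METADATA"))
        print(zf.read(meta).decode())  # only the TOC and this one member were fetched

(These days PyPI can also serve a wheel's METADATA as a separate file per PEP 658, which avoids even the partial download, but the Range trick works against any static host that supports it.)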
Even on a single core, this turns out to be simply false. It isn't that hard to either A: be doing enough actual computation that faster languages are in fact perceptibly faster, even, yes, in a web page handler or other such supposedly-blocked computation or B: without realizing it, have stacked up so many expensive abstractions on top of each other in your scripting language that you're multiplying the off-the-top 40x-ish slower with another set of multiplicative penalties that can take you into effectively arbitrarily-slower computations.
If you've never profiled a mature scripting language program, it's worth your time. Especially if nobody on your team has ever profiled it before. It can be an eye-opener.
Then it turns out that for historical path reasons, dynamic scripting languages are also really bad at multithreading and using multiple cores, and if you can write a program that can leverage that you can just blow away the dynamic scripting languages. It's not even hard... it pretty much just happens.
(I say historical path reasons because I don't think an inability to multithread is intrinsic to the dynamic scripting languages. It's just they all came out in an era when they could assume single core, it got ingrained into them for a couple of decades, and the reality is, it's never going to come fully out. I think someone could build a new dynamic language that threaded properly from the beginning, though.)
You really can see big gains just taking a dynamic scripting language program and turning it into a compiled language with no major changes to the algorithms. The 40x-ish penalty off the top is often in practice an underestimate, because that number is generally from highly optimized benchmarks in which the dynamic language implementation is highly tuned to avoid expensive operations; real code that takes advantage of all the conveniences and indirection and such can have even larger gaps.
This is not to say that dynamic scripting languages are bad. Performance is not the only thing that matters. They are quite obviously fast enough for a wide variety of tasks, by the strongest possible proof of that statement. That said, I think it is the case that there are a lot of programmers who have no idea how much performance they are losing in dynamic scripting languages, which can result in suboptimal engineering decisions. It is completely possible to replace a dynamic scripting language program with a compiled one and possibly see 100x+ performance improvements on very realistic code, before adding in multithreading. It is hard for that not to manifest in some sort of user experience improvement. My pitch here is not to give up dynamic scripting languages, but to have a more realistic view of the programming language landscape as a whole.
What would a dynamic scripting language look like that wasn't subject to this limitation? Any examples? I don't know of contenders in this design space--- I am not up on it.
But because of the way cache coherency for shared, mutated memory works, parallel refcounting is slow as molasses and will always remain so.
I think Ruby has always used a tracing GC, but it also still has a GIL for some reason?
I don't know python but in JavaScript, triggering 1000 downloads in parallel is trivial. Decompressing them, like in python, is calling out to some native function. Decompressing them in parallel in JS would also be trivial (no idea about python). Writing them in parallel is also trivial.
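For what it's worth, a rough Python equivalent is short too (placeholder URLs, not anything from the thread): a thread pool overlaps the downloads with the decompression, since the socket reads and most of the zlib/gzip work happen outside the GIL.

    import concurrent.futures
    import gzip
    import urllib.request

    # Placeholder URLs; point these at real .gz files to try it.
    urls = [f"https://example.com/pkg-{i}.tar.gz" for i in range(1000)]

    def fetch_and_decompress(url):
        # Download the compressed payload, then decompress it in this worker thread.
        with urllib.request.urlopen(url) as resp:
            return gzip.decompress(resp.read())

    with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
        payloads = list(pool.map(fetch_and_decompress, urls))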
....
Unfortunately, there seems to be a problem here.
When reality and theory conflict, reality wins.
It sounds like you've drunk the same Kool-Aid I was referring to in my post. It's not true. When you're playing with 50x-100x slowdowns, if not more, it's really quite easy to run into user-perceptible slowdowns. A lot of engineers grotesquely underestimate how slow these languages are. I suspect it may be getting worse over time due to evaporative cooling, as engineers who do understand it also tend to have one reason or another to leave the language community at some point, and I believe (though I cannot prove) that as a result the dynamic scripting language communities are actually getting worse and worse at realizing how slow their languages are. They're really quite slow.
I watched the video linked above on uv. They went over the optimizations. The big wins had nothing to do with rust and everything to do with design/algo choices.
You could have also done without the insults. You have no idea who I am and my experiences. I've shipped several AAA games written in C/C++ and assembly. I know how to optimize. I also know how dynamic languages work. I also know when people are making up bullshit about "it's fast because it's in rust!". No, that is not why it's fast.
Instead of "It's fast because it's in rust", I'd say: "It's fast because they chose to use rust for their python tool, which means they care a lot about speed."
Conda rewrote their package resolver for similar reasons
The improvements came from lots of work from the entire python build system ecosystem and consensus building.
Sure, other tools could handle the situation, but being baked into the tooling makes it much easier to bootstrap different configurations.
uv does the Python ecosystem better than any other tool, but it's still the standard Python ecosystem as defined in the relevant PEPs.
It creates a venv. Note we're talking about the concept of a virtual environment here, PEP 405, not the Python module "venv".
This is the entire purpose of the standards.
> This is the entire purpose of the standards.
That seems to amount to saying that the purpose of the standards is to prevent progress and ensure that the mistakes of early Python project management tools are preserved forever. (Which would explain some things about the last ~25 years of Python project management I guess). The parts of uv that follow standards aren't the parts that people are excited about.
I disagree. Had uv not followed these standards and instead gone off and done their completely own thing, it could not function as a drop in replacement for pip and venv and wouldn't have gotten anywhere near as much traction. I can use uv personally to work on projects that officially have to support pip and venv and have it all be transparent.
The standards have nothing to do with the last 25 years of Python project management; the most important ones (PEP 517/518) are less than 10 years old.
Don't know, don't care. It thinks about these things, not me.
The good thing about reinventing the wheel is that you can get a round one.
https://scripting.wordpress.com/2006/12/20/scripting-news-fo...
Personally the only thing I miss from it is support for binary data - you end up having to base64 binary content which is a little messy.
The comma rules introduce diff noise on unrelated lines.
At this point XML is the backbone of many important technologies that many people won't use or won't use directly anymore.
This wasn't the case circa 2010, when I doubt any dev could have really avoided XML for a bunch of years.
I do like XML, though.
My primary vehicle has off-road capable tires that offer as much grip as a road-only tire would have 20-25 years ago, thanks to technology allowing Michelin to reinvent what a dual-purpose tire can be!
Can you share more about this? What has changed between tires of 2005 and 2025?
https://www.caranddriver.com/features/a15078050/we-drive-the...
> In the last decade, the spiciest street-legal tires have nearly surpassed the performance of a decade-old racing tire, and computer modeling is a big part of the reason
(written about 8 years ago)
A metal wheel is still just a wheel. A faster package manager is still just a package manager.
“Find the dependencies — and eliminate them.” When you're working on a really, really good team with great programmers, everybody else's code, frankly, is bug-infested garbage, and nobody else knows how to ship on time.
We had a similar attitude, although I'd say that we were a bit more humble. We didn't think that everyone else was producing garbage but, we also didn't assume that we couldn't produce something comparable to what we could buy for a tenth of the cost. From talking to folks at some competitors, there was a pretty big cultural difference between how we operated and how they operated. It simply didn't occur to them that they didn't have to buy into the standard American business logic that you should focus on your core competencies, that you can think through whether or not it makes sense to do something in-house on the merits of the particular thing instead of outsourcing your thinking to a pithy saying.[0]
Hopefully this can disabuse others of similar mistaken memory.
off topic, but i wonder why that phrase gets used rather than 10x which is much shorter.
Long answer: Because if you put a number, people expect it to be accurate. If it was 6x faster, and you said 10x, people may call you out on it.
In common conversation, the multiplier can vary from 2x to 10x. In the context of some algorithms, orders of magnitude can be over the delta rather than absolutes, e.g. an algorithm sees a 1.1x improvement over the previous 10 years; a change that shows a 1.1x improvement by itself overshadows an order of magnitude more effort.
For salaries, I've used order-of-magnitude to mean 2x. Good way to show a step change in a person's perceived value in the market.
Order of magnitude faces less of that baggage, until it does :)
- 10x is a meme
- what if it's 12x better
You can use the env variable UV_CONCURRENT_DOWNLOADS to limit this. In my case it needed to be 1 or 2. Anything else would cause timeouts.
An extreme case, I know, but I think that uv is too aggressive here (a download thread for every module). And should use aggregate speeds from each source server as a way of auto-tuning per-server threading.
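For reference, it's a plain environment variable, so something like UV_CONCURRENT_DOWNLOADS=2 uv sync does it; I believe the same knob is also exposed as a concurrent-downloads setting in uv's configuration.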
uv add <mydependencies> --script mycoolscript.py
And then shoving #!/usr/bin/env -S uv run
on top so I can run Python scripts easily. It's great!

Claude 4's training cutoff date is March 2025 though; I just checked, and it turns out Claude Sonnet 4 can do this without needing any extra instructions:
Python script using uv and inline script dependencies
where I can give it a URL and it scrapes it with httpx
and beautifulsoup and returns a CSV of all links on
the page - their URLs and their link text
Here's the output, it did the right thing with regards to those dependencies: https://claude.ai/share/57d5c886-d5d3-4a9b-901f-27a3667a8581

I add something like this to my instructions: "If you need to run these scripts, use 'uv run script-name.py'. It will automatically install the dependencies. Stdlibs don't need to be specified in the dependencies array." since e.g. Cursor often gets confused because the dependencies are not installed and it doesn't know how to start the script. The last sentence is for when LLMs get confused and want to add "json" for example to the dependency array.

Instant reactive reproducible app that can be sent to others with minimal prerequisites (only uv needs to be installed).
Such a hot combo.
- https://everything.intellectronica.net/p/the-little-scripter
~~That mutates the project/env in your cwd. They have a lot in their docs, but I think you’d like run --with or uv’s PEP723 support a lot more~~
Also love Ruff from the Astral team. We just cut our linting + formatting across from pylint + Black to Ruff.
Saw lint times drop from 90 seconds to < 1.5 seconds. Crazy stuff.
https://docs.astral.sh/ruff/faq/#how-does-ruffs-linter-compa...
It prevents uv from making a virtual environment and does some optimizations like compiling byte code once when your dependencies get installed.
It was well worth the switch. I noticed a ~10x improvement for speed compared to pip (30s to 3s to install all dependencies). Proper lock file support is nice too. Funny enough I wrote about and made a video about switching to uv about a week ago here https://nickjanetakis.com/blog/switching-pip-to-uv-in-a-dock....
#!/usr/bin/env -S uv --quiet run --script
# /// script
# requires-python = ">=3.13"
# dependencies = [
# "python-dateutil",
# ]
# ///
#
# [python script that needs dateutil]
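Make the file executable and it runs like any other script; on the first run uv resolves python-dateutil into a cached environment, so later runs start almost instantly.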
So there isn't really much to do to make it simpler.
Or maybe create a second binary or symlink called `uvs` (short for uv script) that does the same thing.
* Redis -> Redict, Valkey
* Elasticsearch -> OpenSearch
* Terraform -> OpenTofu
(Probably a few more but those are the ones that come to mind when they "go rogue")
The answer is that we replaced all of our 'pip' with 'uv pip'. Shit goes south and we simply change it back at the cost of speed.
It's nice software.
I don't see a way to change current and global versions of python/venvs to run scripts, so that when I type "python" it uses that, without making an alias.
1. Modify your PATH:

   export PATH="$(uv run python -BISc 'import sys; print(sys.base_exec_prefix)')/bin:$PATH"

2. Put this in an executable file called "python" early in your PATH:

   #!/bin/sh
   exec uv run python "$@"

Those are basically what pyenv does (via a symlink and PATH entry). The second option will always pick up the Python local to the project directory you're in, if any; the former (if you put it in your shell profile) will not.
https://docs.astral.sh/uv/guides/scripts/#declaring-script-d...
I specifically want to run "python", rather than subcommands of some other command, since I often want to pass arguments to the python interpreter itself along with my script.
You really shouldn't be doing this. Utilities on your host rely on the system python being stable, unchanged, pinned to a very specific python version; also with exact versions for dependencies.
For example in fedora/rhel, changing the default platform-python will prevent you from using yum/dnf to apply system updates.
I've written a lightweight replacement script to manage named central virtual envs using the same command syntax as virtualenvwrapper. Supports tab completion for zsh and bash: https://github.com/sitic/uv-virtualenvwrapper
A problem I have now, though, is when I jump to def in my editor it no longer knows which venv to load because it's outside of the project. This somehow used to work with virtualenvwrapper but I'm not sure how.
I can see how if you've had issues with dependencies you would rave about systems that let you control down to the commit what an import statement actually means, but I like the system that requires the least amount of typing/thinking and I imagine I'm part of a silent majority.
You are probably part of the silent majority because yes, most people have relatively simple needs for every popular tool.
uv pip install --system requests
but it's more typing. If I type 5 characters per second, making me also type "uv --system" is the same as adding 2 seconds of runtime to the actual command, except even worse because the chance of a typo goes up and typing takes energy and concentration and is annoying.

Hopefully one of those things isn't your backup cronjob.
If they do pursue that idea, I wonder if they'd also have a public component to it that could act as a better PyPI?
Just `git clone someproject`, `uv run somescript.py`, then mic drop and walk away.
There are times when you do NOT want the wheel version to be installed (which is what --no-binary implements in pip), but so many package managers including uv don't provide that core, basic functionality. At least for those that do use pip behind the scenes, like pipenv, one can still use the PIP_NO_BINARY environment variable to ensure this.
So I'll not be migrating any time soon.
See https://docs.astral.sh/uv/reference/environment/#uv_no_binar...
Many reasons: you need more control, specialized hardware, testing newer versions of the library, maintaining an internal fork of a library, security, performance, the dev team maintains both the native library and python package and needs to keep them independent, or simply preference for dynamic linking against system libraries to avoid duplication.
No, the question is why a package would need to decide for its users that the package and its dependencies must be installed the gentoo way. That's quite obviously different from why an end user would decide to install from source despite the availability of binary packages.
uv is still quite new though. Perhaps you can open an issue and ask for that?
[tool.uv]
no-binary = true

Or for a specific package:

[tool.uv]
no-binary-package = ["ruff"]
https://docs.astral.sh/uv/reference/settings/#no-binary

When, why? Should I be doing this?
other than that, it's invaluable to me, with the best features being uvx and PEP 723
uv add --dev uv-bump
uv-bump
Agree that something like this should be built in.

What I want is: if my project depends on `package1==0.4.0` and there are new versions of package1, for uv to try to install the newer version, and to do that a) for all the deps, simultaneously, b) without me explicitly stating the dependencies on the command line since they're already written in the pyproject.toml. An `uv refresh` of sorts.
pyproject.toml’s dependency list specifies compatibility: we expect the program to run with versions that satisfy constraints.
If you want to specify an exact version as a validated configuration for a reproducible build with guaranteed functionality, well, that’s what the lock file is for.
In serious projects, I usually write that dependency section by hand so that I can specify the constraints that match my needs (e.g., what is the earliest version receiving security patches or the earliest version with the functionality I need?). In unserious projects, I’ll leave the constraints off entirely until a breakage is discovered in practice.
If `uv` is adding things with `==` constraints, that’s why upgrades are not occurring, but the solution is to relax the constraints to indicate where you are okay with upgrades happening.
Yeah, that's pretty much what I've been doing with my workaround script. And btw most of my projects are deeply unserious, and I do understand why one should not do that in any other scenario.
Still, I dream of `uv refresh` :D
pyproject.toml is meant to encode the actual constraints for when your app will function correctly, not hardcode exact versions, which is what the lockfile is for.
Though I do think with Python in particular it's probably better to manually upgrade when needed, rather than opportunistically require the latest, because Python can't handle two versions of the same package in one venv.
https://packaging.python.org/en/latest/specifications/depend...
package1>=0.4.0 means 0.4.0, 0.4.1, 0.4.100, 0.4.100.1 and so on
package1>=0.4 includes the above plus 0.5.0, 0.5.1, 0.6.0, 0.100.0 and so on
I think you're just specifying your dependency constraints wrong. What you're asking for is not what the `==` operator is for; you probably want `~=`.
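For reference, PEP 440's compatible-release operator works like this: package1~=0.4.2 means >=0.4.2, ==0.4.* (so patch upgrades are allowed), while package1~=0.4 means >=0.4, ==0.* (so minor upgrades are allowed as well).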
[1]: I do sometimes write the title or the description. But never the deps themselves
uv add "example>=0.4.0"
Then it will update as you are thinking.
Much prefer not thinking about venvs.
Same with uv. They are doing very nice tricks, like sending Range requests to only download the metadata part from the ZIP file from PyPI, resolve them in memory and only after that downloading the packages. No other package manager does this kind of crazy optimization.
Yes I know Rust is not a GC language… go a level deeper
https://deepwiki.com/search/point-to-the-top-3-rust-specif_4...
This is very very useful for robotics or other places where you might end up with a big list of pip installed packages in some Dockerfile, want them pinned, with just one version of each library across the whole space, but don't want to necessarily do the song and dance of a full python package. Because uv is fast it just kind of works and checking the projects doesn't take any time at all compared to the other build work being done.
Just today I set it up on 20 PCs in a computer lab that doesn't have internet, along with vs code and some main packages. Just downloaded the files, made a powershell script and it's all working great with Jupyter etc... Now to get kids to be interested in it...
After that many years of optimization, pure Python still seems to be wishful thinking. Its AI/ML success also comes only from being a shim language around library calls.
What was super unclear was how I develop locally with uv. Figuring out I needed `uv sync --extra` and then `uv run --project /opt/aider aider` to run it was a lot of bumbling in the dark. I still struggle to find good references for everyday project-running use with uv.
It was amazing though. There were so many pyproject and other concerns that it just knew how to do. I kept assuming I was going to have to do a lot more steps.
Fast is a massive factor.
I haven't used it much, but being so fast, I didn't even stop to think "is it perfect at dependency management?" "does it lack any features?".
Maybe that functionality isn't implemented the same way for uvx.
You could try this equivalent command that is under "uv run" to see if it behaves differently: https://docs.astral.sh/uv/concepts/tools/#relationship-to-uv...
e.g.
$ uv tool install asciinema
$ asciinema play example.cast
One possible alternative is Pants. It's also written in Rust for performance, but has more flexibility baked into the design.
uv is basically souped-up pip.
Pants is an entire build/tooling system, analogous to something like Bazel. It can handle multiple dependency trees, multiple types of source code, building and packaging, even running tests.
It also has workspaces and subprojects features that are sneaking toward something like the full-fledged multi-artifact project support you get in Pants (or Poetry, with plugins). Except that their decision that the entire project must have one and only one global dependency solution means that there's no escape hatch if you ever end up in a dependency hell situation. Which is fairly common in Python for a variety of reasons. And, even when it does work, subprojects all sharing a single virtualenvironment means it's really easy to accidentally create an undeclared dependency and never notice at development time because it's already in the venv due to a sister project already having declared it.
That's a major reason why Python development culture decided to go with many project-specific virtualenvironments instead of a single global one like what uv is trying to drive toward. And it's true that it still allows you to do independent projects that reference each other at development time using path dependencies. But, unlike some of its more mature alternatives, it doesn't give you any help with replacing those path dependencies with named, version dependencies at build time. So if you have a project that builds multiple packages, you're on your own for implementing a solution to ensure packages with sane dependency specs get published to the archive.
If you're intentionally not trying it simply because you don't want to get addicted like everyone else clearly is, I could see that as a valid reason to never try it in the first place.
I usually avoid jumping on bandwagons, so I've always stuck with vanilla pip/venv, but at this point it really is clear to me that uv really is the "One True (tm) python package management solution", and probably will be for the next 10 years.
I have one complaint though, I want ./src to be the root of my python packages such that
> from models.attention import Attention
Works if I have a directory called models with a file called attention.py in it (and __init__.py) etc. The only way this seems to work correctly is if I set PYTHONPATH=./src
Surely the environment manager could set this up for me? Am I just doing it wrong?
https://docs.astral.sh/uv/concepts/projects/init/#packaged-a...
uv init --package example-pkg
cd example-pkg
uv run python
>>> from example_pkg import main
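If I understand the docs correctly, `--package` gives you a src/ layout and uv installs the project itself into the environment, so the import works without setting PYTHONPATH.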
I have read a few tickets saying uv won’t support this so everyone running my project will have to read the README first to get anything to run. Terrible UX.
I also appreciate that it handles most package conflicts and it constantly maintains the list of packages as you move. I have gotten myself into a hole or two with packages and dependencies; I can usually solve it by deleting the venv and just using uv to reinstall.
I switched everything over and haven’t looked back.
It’s everything I hoped poetry would be, but 10x less flakey.
Then I looked at uv and oh boy is it better. Everything just works and is blazing fast. Kudos to the team.
Perhaps uv will continue its ascendancy and get there naturally. But I’d like to see uv be a little more aggressive with “uv native” workflows. If that makes sense.
After the switch, the same dependency resolution was done in seconds. This tool single-handedly made iteration possible again.
However I really like installing uv globally on my Windows systems and then using uvx to run stuff without caring about venvs and putting stuff to path.
Or would it be possible to go this fast in python if you cared enough about speed?
Is it a specific thing that rust has an amazing library for? Like Network or SerDe or something?
Using Rust is responsible for a lot of speed gains too, but I believe it's the hard linking trick (which could be implemented in any language) that's the biggest win.
pip could be made faster based on this, but maybe not quite as fast.
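To make that concrete, here is a toy sketch of the cache-plus-hardlink idea (not uv's actual layout or code, and the directory names are made up): unpack a wheel once into a shared cache, then "install" it into each environment by hardlinking the files instead of copying them, falling back to a copy when the cache and the venv are on different filesystems.

    import os
    import shutil

    def link_tree(cache_dir, site_packages):
        # Mirror cache_dir into site_packages using hardlinks where possible.
        for root, _dirs, files in os.walk(cache_dir):
            rel = os.path.relpath(root, cache_dir)
            dest_dir = os.path.join(site_packages, rel)
            os.makedirs(dest_dir, exist_ok=True)
            for name in files:
                src = os.path.join(root, name)
                dst = os.path.join(dest_dir, name)
                if os.path.exists(dst):
                    continue
                try:
                    os.link(src, dst)       # instant: shares data blocks, copies nothing
                except OSError:
                    shutil.copy2(src, dst)  # cross-filesystem (or unsupported) fallback

A second environment that needs the same package then costs almost nothing beyond the directory entries.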
I don't want to charge people money to use our tools, and I don't want to create an incentive structure whereby our open source offerings are competing with any commercial offerings (which is what you see with a lot of hosted-open-source-SaaS business models).
What I want to do is build software that vertically integrates with our open source tools, and sell that software to companies that are already using Ruff, uv, etc. Alternatives to things that companies already pay for today.
An example of what this might look like (we may not do this, but it's helpful to have a concrete example of the strategy) would be something like an enterprise-focused private package registry. A lot of big companies use uv. We spend time talking to them. They all spend money on private package registries, and have issues with them. We could build a private registry that integrates well with uv, and sell it to those companies. [...]
But the core of what I want to do is this: build great tools, hopefully people like them, hopefully they grow, hopefully companies adopt them; then sell software to those companies that represents the natural next thing they need when building with Python. Hopefully we can build something better than the alternatives by playing well with our OSS, and hopefully we are the natural choice if they're already using our OSS.
Let's be honest, all attempts to bring up a CPython alternative have failed (niche boosters like PyPy are a separate story, but it's not up to date, and not an exact match). For some reason, people think that 1:1 compatibility is not critical and too costly to pursue (hello, all LLVM-based compilers). I think it's doable and there's a solid way to solve it. What if Astral thinks so too?
It seems easy to imagine Astral following a similar path and making a significant amount of money in the process.
One day they're going to tell me I have to pay $10/month per user and add a bunch of features I really don't need just because nobody wants to prioritize the speed of pip.
And most of that fee isn't going to go towards engineers maintaining "pip but faster", it's going to fund a bunch of engineers building new things I probably don't want to use, but once you have a company and paying subscribers, you have to have developers actively doing things to justify the cost.
1: https://old.reddit.com/r/Python/comments/12rk41t/astral_next...
Also, it seems like a sign that even Python tooling needs to not be written in Python now to get reasonable performance.
I’d be surprised if there wasn’t an env var for it though.
The suffix "written in Rust" is getting cringy though.
echo 'import antigravity' | uv run -
Thank you astral!
Many languages have many package management tools, but in most languages there are one or two really popular ones.
For python you just have to memorize this basically:
- Does the project have a setup.py? if so, first run several other commands before you can run it. python -m venv .venv && source .venv/bin/activate && pip install -e .
- else does it have a requirements.txt? if so python -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt
- else does it have a pyproject.toml? if so poetry install and then prefix all commands with poetry run
- else does it have a pipfile? pipenv install and then prefix all commands with pipenv run
- else does it have an environment.yml? if so conda env create -f environment.yml and then look inside the file and conda activate <environment_name>
- else I have not had to learn the rules for uv yet
Thank goodness these days I just open up a cursor tab and say "get this project running"
> - else does it have a pyproject.toml? if so poetry install and then prefix all commands with poetry run
That's not even correct. Not all projects with pyproject.toml use poetry (but poetry will handle everything with a pyproject.toml)
Just try uv first. `uv pip install .` should work in a large majority of cases.
pipenv is on the way out. bare `setup.py` is on the way out. `pyproject.toml` is the present and future, and the nice thing about it is it is self-describing in the tooling used to package.
I didn't say "all projects with pyproject.toml use poetry"
Rather, pip was broken intentionally two years ago and they are still not interested in fixing it:
https://github.com/pypa/packaging/issues/774
I tried uv and it just worked.
Automatically generated, not compatible with zsh