I really like uv, and it's the first package manager in a while where I haven't felt like it's just a minor improvement on what I'm using and that ultimately something better will come out a year or two later. I'd love it if we standardized on it as a community as the de facto default, especially for new folks coming in. I personally now recommend it to nearly everyone, instead of the "welllll, I use poetry but pyenv works or you could use conda too" spiel.
I never used anything other than pip. I never felt the need to use anything other than pip (with virtualenv). Am I missing anything?
Couple of things.
- pip doesn't handle your Python executable, just your Python dependencies. So if you want/need to swap between Python versions (3.11 to 3.12 for example), it doesn't give you anything. Generally people use an additional tool such as pyenv to manage this. Tools like uv and Poetry do this as well as handling dependencies
- pip doesn't lock transitive dependencies (dependencies of dependencies); it will only respect version pinning for dependencies you explicitly specify. So for example, say I am using pandas and I pin it to version X. If a dependency of pandas (say, numpy) isn't pinned as well, the underlying version of numpy can still change when I reinstall dependencies. I've had many issues where my environment stopped working despite none of my specified dependencies changing, because underlying dependencies introduced breaking changes. To get around this with pip you would need an additional tool like pip-tools, which allows you to pin all dependencies, explicit and nested, to a lock file for true reproducibility. uv and Poetry do this out of the box.
- Tool usage. Say there is a python package you want to use across many environments without installing in the environments themselves (such as a linting tool like ruff). With pip, you need to install another tool like pipx to install something that can be used across environments. uv can do this out of the box.
Plus there is a whole host of jobs that tools like uv and poetry aim to assist with that pip doesn't, namely project creation and management. You can use uv to create a new Python project scaffolding for applications or python modules in a way that conforms with PEP standards with a single command. It also supports workspaces of multiple projects that have separate functionality but require dependencies to be in sync.
You can accomplish a lot/all of this using pip with additional tooling, but it's a lot more work. And not all use cases will require these features.
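For a rough sense of what that looks like day to day, here's a sketch using uv's documented commands (the project name is just a placeholder):
uv python install 3.12         # download and manage the interpreter itself
uv init myproject && cd myproject   # scaffold a pyproject.toml-based project
uv add pandas                  # resolve, then update pyproject.toml and the uv.lock lockfile
uv sync                        # recreate the exact locked environment anywhere
uv tool install ruff           # install a tool once, isolated, usable across projects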
Yes, generally people already use an additional tool for managing their Python executables, like their operating system's package manager:
$> sudo apt-get install python3.10 python3.11 python3.12
And then it's simple to create and use version-specific virtual environments:
$> python3.11 -m venv .venv3.11
$> source .venv3.11/bin/activate
$> pip install -r requirements.txt
You are incorrect about needing to use an additional tool to install a "global" tool like `ruff`; `pip` does this by default when you're not using a virtual environment. In fact, this behavior is made more difficult by tools like `uv` or `pipx` if they're trying to manage Python executables as well as dependencies.
It’s like a whole post of all the things you’re not supposed to do with Python, nice.
> sudo apt-get install python3.10 python3.11 python3.12
This assumes the Python version you need is available from your package manager's repo. This won't work if you want a Python version either newer or older than what is available.
> You are incorrect about needing to use an additional tool to install a "global" tool like `ruff`; `pip` does this by default when you're not using a virtual environment.
True, but it's not best practice to do that because while the tool gets installed globally, it is not necessarily linked to a specific python version, and so it's extremely brittle.
And it gets even more complex if you need different tools that have different Python version requirements.
>This assumes the Python version you need is available from your package manager's repo. This won't work if you want a Python version either newer or older than what is available.
And of course you could be working with multiple distros, or multiple versions of the same distro; production and dev might be different environments; and there are tons of other concerns. You need something that just works across all of them.
Surely you just use Docker for production, right?
most developers I know do not use the system version of python. We use an older version at work so that we can maximize what will work for customers and don't try to stay on the bleeding edge. I imagine others do want newer versions for features, hence people find products like UV useful
(I've just learned about uv, and it looks like I have to pick it up since it performs very well.)
I just use pipx. Install guides suggest it, and it is only one character different from pip.
With Nix, it is very easy to run multiple versions of same software. The path will always be the same, meaning you can depend on versions. This is nice glue for pipx.
My pet peeve with Python and Vim is all these different package managers. Every once in a while a new one is out and I don't know if it will gain momentum. For example, I use Plug now in Vim but notice documentation often refers to different alternatives these days. With Python it is pip, poetry, pip search no longer working, pipx, and now uv (I probably forgot some things).
Sometimes I feel like my up vote doesn't adequately express my gratitude.
I appreciate how thorough this was.
Oh wow, it actually can handle the Python executable? I didn't know that, that's great! Although it's in the article as well, it didn't click until you said it, thanks!
pip's resolving algorithm is not sound. If your Python projects are really simple it seems to work, but as your projects get more complex the failure rate creeps up over time. You might `pip install` something and have it fail, then go back to zero and restart and have it work, but at some point that will fail too. conda has a correct resolving algorithm, but the packages are out of date and add about as many quality problems as they fix.
I worked at a place where the engineering manager was absolutely exasperated with the problems we were having with building and deploying AI/ML software in Python. I had figured out pretty much all the problems after about nine months and had developed a 'wheelhouse' procedure for building our system reliably, but it was too late.
Not long after I sketched out a system that was a lot like uv, but it was written in Python and thus had problems with maintaining its own stable Python environment (e.g. poetry seems to trash itself every six months or so.)
Writing uv in Rust was genius because it gives the system a stable surface to stand on instead of pipping itself into oblivion, never mind that it is much faster than my system would have been. (My system had the extra feature that it used HTTP range requests to extract the metadata from wheel files before pypi started letting you download the metadata directly.)
I didn't go forward with developing it because I argued with a lot of people who, like you, thought it was "the perfect being the enemy of the good" when it was really "the incorrect being the enemy of the correct." I'd worked on plenty of projects where I was right about the technology and wrong about the politics and I am so happy that uv has saved the Python community from itself.
May I introduce you to our lord and saviour, Nix, and its most holy child nixpkgs! With only a small tithe of your sanity and your ability to interop with any other dependency management, you can free yourself of all dependency woes forever![0]
[0] For various broad* definitions of forever.
[*] Like, really, really broad**
[**] Maybe a week if you're lucky
Except the Python builders in nixpkgs are really brain damaged because of the way they inject the search path, which for example breaks if you try to execute a separate Python interpreter that assumes the same library environment...
Within the holy church of Nix the sect of Python is a troubled one; it can however be tamed into use via vast tomes of scripture. Sadly these tomes can only be written by those who have truly given their mind and body over to the almighty Nix.
It's not as bad as Common Lisp support which stinks to high heavens of someone not learning the lessons of the Common-Lisp-Controller fiasco
Lisp is of the old gods, only the most brave of Nix brethren dare tread upon their parenthesised ways.
>May I introduce you to our lord and saviour, Nix and it's most holy child nixpkgs!
In this case, instead of working with Python, you change how you manage everything!
The Nix of Python, conda, was already mentioned.
> add about as many quality problems as they fix
I used to have 1 problem, then I used Nix to fix it, now I have 'Error: infinite recursion' problems.
Nix is really the best experience I've had with Python package management but only if all the dependencies are already in nixpkgs. If you want to quickly try something off github it's usually a pain in the ass.
Ugh, I hate writing this but that's where docker and microservices comes to the rescue. It's a pain in the butt and inefficient to run but if you don't care about the overhead (and if you do care, why are you still using Python?), it works.
My experience was that docker was a tool data scientists would use to speedrun the process of finding broken Pythons. For instance we'd inexplicably find a Python had Hungarian as the default charset, etc.
The formula was:
Docker - Discipline = Chaos
Docker + Discipline = Order
but - Docker + Discipline = Order
If you can write a Dockerfile to install something you can write a bash script. Circa 2006 I was running web servers on both Linux and Windows with hundreds of web sites on them with various databases, etc. It really was as simple then as "configure a filesystem path" and "configure a database connection", and I had scripts that could create a site in 30 seconds or so.
Sure, today you might have five or six different databases for a site but it's not that different in my mind. Having way too many different versions of things installed is a vice, not a virtue.
> If you can write a Dockerfile to install something you can write a bash script.
Docker is great for making sure that magic bash script that brings the system up actually works again on someone else’s computer or after a big upgrade on your dev machine or whatever.
So many custom build scripts I’ve run into over the years have some kind of unstated dependency on the initial system they were written on, or explicit dependencies on something tricky to install, and as such are really annoying to diagnose later on, especially if they make significant system changes.
Docker is strictly better than a folder full of bash scripts and a Readme.txt. I would have loved having it when I had to operate servers like that with tons of websites running on them. So much nicer to be able to manage dependency upgrades per-site rather than server-wide, invariably causing something to quietly break on one of 200 sites.
Unfortunately sometimes you get to host things not written by you, or which have existed for a long time, and thus there's a lot of history involved that prevents keeping things nice and tidy.
My first production use of Kubernetes started out because we put the entirety of what we had to migrate to new hosting into a spreadsheet, with columns for the various parts of the stack used by the websites, and figured we would go insane trying to pack it up - or we would lose the contract because we would be as expensive as the last company.
Could we package it nicely without docker? Yes, but the effort to package it in docker was smaller than packaging it in a way where it wouldn't conflict on a single host, because the simple script becomes way harder when you need to handle multiple versions of the same package, something that most distro do not support at all (these days I think we could have done it with NixOS, but that's a different kettle of deranged fishes)
And then the complexity of managing the stack was quickly made easier by turning each site into separate artifact (docker container) handled by k8s manifests (especially when it came to dealing with about 1000 domains across those apps).
So, theoretically discipline is enough; the practical world is much dirtier, though.
>For instance we'd inexplicably find a Python had Hungarian as the default charset, etc.
Sounds quite explicable: Docker image created by Hungarian devs perhaps?
>and if you do care, why are you still using Python?
Because I get other advantages of it. Giving in to overhead on one layer, doesn't mean I'm willing to give it up everywhere.
Respectively, yes. The ability to create venvs so fast, that it becomes a silent operation that the end user never thinks about anymore. The dependency management and installation is lightning quick. It deals with all of the python versioning
and I think a killer feature is the ability to inline dependencies in your Python source code, then use: uv run <scriptname>
Your script code would look like:
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "...",
#     "..."
# ]
# ///
Then uv will make a new venv, install the dependencies, and execute the script faster than you think. The first run is a bit slower due to downloads and etc, but the second and subsequent runs are a bunch of internal symlink shuffling.
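For example, assuming the script above were saved as hello.py (a made-up name):
chmod +x hello.py
./hello.py        # the env -S shebang hands it off to `uv run --script`
uv run hello.py   # equivalent, without needing the executable bit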
It is really interesting. You should at least take a look at a YT or something. I think you will be impressed.
Good luck!
If you switch to uv, you’ll have fewer excuses to take coffee breaks while waiting for pip to do its thing. :)
Yeah, it unifies the whole env experience with the package installation experience. No more forgetting to activate the virtualenv first. No more pip installing into the wrong virtual env or accidentally borrowing from the system packages. It’s way easier to specify which version of python to use. Everything is version controlled, including the Python version and variant like CPython, PyPy, etc. It’s also REALLY REALLY fast.
I am not a python developer, but sometimes I use python projects. This puts me in a position where I need to get stuff working while knowing almost nothing about how python package management works.
Also I don’t recognise errors and I don’t know which python versions generally work well with what.
I’ve had it happen so often with pip that I’d have something setup just fine. Let’s say some stable diffusion ui. Then some other month I want to experiment with something like airbyte. Can’t get it working at all. Then some days later I think, let’s generate an image. Only to find out that with pip installing all sorts of stuff for airbyte, I’ve messed up my stable diffusion install somehow.
Uv clicked right away for me and I don’t have any of these issues.
Was I using pip and asdf incorrectly before? Probably. Was it worth learning how to do it properly in the previous way? Nope. So uv is really great for me.
This is not just a pip problem. I had the problem with Anaconda a few years ago where upgrading the built-in editor (Spyder?) pulled versions of packages which broke my ML code, or made dependencies impossible to reconcile. It was a mess, wasting hours of time. Since then I use one pip venv for each project and just never update dependencies.
My life got a lot easier since I adopted the habit of making a shell script, using buildah and podman, that wrapped every python, rust, or golang project I wanted to dabble with.
It's so simple!
Create a image with the dependencies, then `podman run` it.
Performance and correctness mostly.
Also you can set the python version for that project. It will download whatever version you need and just use it.
in my view, depending on your workflow you might have been missing out on pyenv in the past but not really if you feel comfortable self-managing your venvs.
now though, yes unequivocally you are missing out.
Pip only has requirements.txt and doesn't have lockfiles, so you can't guarantee that the bugs you're seeing on your system are the same as the bugs on your production system.
I’ve always worked around that by having a requirements.base.txt and a requirements.txt for the locked versions. Obviously pip doesn’t do that for you but it’s not hard to manage yourself.
Having said that, I’m going to give uv a shot because I hear so many good things about it.
This works until you need to upgrade something, pip might upgrade to a broken set of dependencies. Or if you run on a different OS and the dependencies are different there (because of env markers), your requirements file won't capture that. There are a lot of gotchas that pip can't fix.
With pip the best practice is to have a requirements.txt with direct requirements (strictly or loosely pinned), and a separate constraints.txt file [1] with strictly pinned versions of all direct- and sub-dependencies (basically the output of `pip freeze`). The latter works like a lock file.
[1] https://pip.pypa.io/en/stable/user_guide/#constraints-files
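A minimal sketch of that workflow (file names are just the usual conventions, nothing pip mandates):
pip install -r requirements.txt                       # first install: loosely pinned direct deps
pip freeze > constraints.txt                          # snapshot every resolved version, incl. sub-dependencies
pip install -r requirements.txt -c constraints.txt    # later installs: held to the pinned set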
I’m grouchy because I finally got religion on poetry a few years ago, but the hype on uv is good enough that I’ll have to give it a shot.
With the new major release of Poetry that just came out I also feel like it might be a good time to switch to Uv rather than adapt to this new version: https://python-poetry.org/blog/announcing-poetry-2.0.0/
I freaking love Poetry. It was a huge breath of fresh air after years of pip and a short detour with Pipenv. If uv stopped existing I’d go back to Poetry.
But having tasted the sweet nectar of uv goodness, I’m onboard the bandwagon.
"pip freeze" generates a lockfile.
No, that generates a list of currently installed packages.
That’s very much not a lock file, even if it is possible to abuse it as such.
The requirements.txt file is the lockfile. Anyways, this whole obsession with locked deps or "lockfiles" is such an anti-pattern, I have no idea why we went there as an industry. Probably as a result of some of the newer stuff that is classified as "hipster-tech" such as docker and javascript.
Just because you don't understand it, it's ok to call it an "anti-pattern"?
Reproducibility is important in many contexts, especially CI, which is why in Node.js world you literally do "npm ci" that installs exact versions for you.
If you haven't found it necessary, it's because you haven't run into situations where not doing this causes trouble, like a lot of trouble.
Just because someone has a different perspective than you doesn't mean they don't "understand".
Lockfiles are an anti-pattern if you're developing a library rather than an application, because you can't push your transitive requirements onto the users of your library.
If you're developing a library, and you have a requirement for what's normally a transitive dependency, it should be specified as a top-level dependency.
The point is that if I'm writing a library and I specify `requests == 1.2.3`, then what are you going to do in your application if you need both my library and `requests == 1.2.4`?
This is why libraries should not use lockfiles, they should be written to safely use as wide a range of dependencies' versions as possible.
It's the developers of an application who should use a lockfile to lock transitive dependencies.
The lock file is for developers of the library, not consumers. Consumers just use the library’s dependency specification and then resolve their own dependency closure and then generate a lock file for that. If you, as a library developer, want to test against multiple versions of your dependencies, there are other tools for that. It doesn’t make lock files a bad idea in general.
That’s not the perspective that OP was sharing, though.
You literally phrased it as "I have no idea why". You can't be upset if someone feels you don't understand why.
I’m pretty sure it was a sarcasm.
Cool story bro.
I've used pip, pyenv, poetry, all are broken in one way or another, and have blind spots they don't serve.
If your needs are simple (not mixing Python versions, simple dependencies, not packaging, etc) you can do it with pip, or even with tarballs and make install.
Much of the Python ecosystem blatantly violates semantic versioning. Most new tooling is designed to work around the bugs introduced by this.
To be fair, Python itself doesn’t follow SemVer. Not in a “they break things they shouldn’t,” but in a “they never claim to be using SemVer.”
Well, for one you can't actually package or add a local requirement (for example, a vendored package) to the usual pip requirements.txt (or with pyproject.toml, or any other standard way) afaik.
I saw a discourse reply that cited some sort of possible security issue but that was basically it and that means that the only way to get that functionality is to not use pip. It's really not a lot of major stuff, just a lot of little paper cuts that makes it a lot easier to just use something else once your project gets to a certain size.
Sure you can.
It's in their example for how to use requirements.txt: https://pip.pypa.io/en/stable/reference/requirements-file-fo...
Maybe there's some concrete example you have in mind though?
I don't think so, though maybe I didn't explain myself correctly. You can link to a relative package wheel I think, but not to a package repo. So if you have a repo, with your main package in ./src, and you vendor or need a package from another subfolder (let's say ./vendored/freetype) , you can't actually do it in a way that won't break the moment you share your package. You can't put ./vendored/freetype in your requirements.txt, it just fails.
That means you either need to use pypi or do an extremely messy hack that involves adding the vendored package as a sub package to your main source, and then do some importlib black magic to make sure that everything uses said package.
https://github.com/pypa/pip/issues/6658
https://discuss.python.org/t/what-is-the-correct-interpretat...
Pip doesn't resolve dependencies for you. On small projects that can be ok, but if you're working on something medium to large, or you're working on it with other people you can quickly get yourself into a sticky situation where your environment isn't easily reproducible.
Using uv means your project will have well defined dependencies.
My bad, see PaulHoule's comment for what I was getting at.
Oh wow it doesn’t? What DOES it do then?
As I commented here just now I never got pip. This explains it.
The guy doesn't know what he's talking about as pip certainly has dependency resolution. Rather get your python or tech info from a non-flame-war infested thread full of anti-pip and anti-python folk.
(I will likely base a blog post in my packaging series off this comment later.)
What people seem to miss about Pip is that it is, by design, not a package manager. It's a package installer, only. Of course it doesn't handle the environment setup for you; it's not intended for that. And of course it doesn't keep track of what you've installed, or make lock files, or update your `pyproject.toml`, or...
What it does do is offer a hideously complex set of options for installing everything under the sun, from everywhere under the sun. (And that complexity has led to long-standing, seemingly unfixable issues, and there are a lot of design decisions made that I think are questionable at best.)
Ideas like "welllll I use poetry but pyenv works or you could use conda too" are incoherent. They're for different purposes and different users, with varying bits of overlap. The reason people are unsatisfied is because any given tool might be missing one of the specific things they want, unless it's really all-in-one like Uv seems like it intends to be eventually.
But once you have a truly all-in-one tool, you notice how little of it you're using, and how big it is, and how useless it feels to have to put the tool name at the start of every command, and all the specific little things you don't like about its implementation of whatever individual parts. Not to mention the feeling of "vendor lock-in". Never mind that I didn't pay money for it; I still don't want to feel stuck with, say, your build back-end just because I'm using your lock-file updater.
In short, I don't want a "package manager".
I want a solid foundation (better than Pip) that handles installing (not managing) applications and packages, into either a specified virtual environment or a new one created (not managed, except to make it easy to determine the location, so other tools can manage it) for the purpose. In other words, something that fully covers the needs of users (making it possible to run the code), while providing only the bare minimum on top of that for developers - so that other developer tools can cooperate with that. And then I want specialized tools for all the individual things developers need to do.
The specialized tools I want for my own work all exist, and the tools others want mostly exist too. Twine uploads stuff to PyPI; `build` is a fine build front-end; Setuptools would do everything I need on the back-end (despite my annoyances with it). I don't need a lockfile-driven workflow and don't readily think in those terms. I use pytest from the command line for testing and I don't want a "package manager" nor "workflow tool" to wrap that for me. If I needed any wrapping there I could do a few lines of shell script myself. If anything, the problem with these tools is doing too much, rather than too little.
The base I want doesn't exist yet, so I've started making it myself. Pipx is a big step in the right direction, but it has some arbitrary limitations (I discuss these and some workarounds in my recent blog post https://zahlman.github.io/posts/2025/01/07/python-packaging-... ) and it's built on Pip so it inherits those faults and is that much bigger. Uv is even bigger still for the compiled binary, and I would only be using the installation parts.
This:
> I haven't felt like it's a minor improvement on what I'm using
means that this:
> I'd love if we standardized on it as a community as the de facto default
…probably shouldn’t happen. The default and de facto standard should be something that doesn’t get put on a pedestal but stays out of the way.
It would be like replacing the python repl with the current version of ipython. I’d say the same thing, that it isn’t a minor improvement. While I almost always use ipython now, I’m glad it’s a separate thing.
> The default and de facto standard should be something that doesn’t get put on a pedestal but stays out of the way.
The problem is that in the python ecosystem there really isn't a default de facto standard yet at all. It's supposed to be pip, but enough people dislike pip that it's hard as a newcomer to know if it's actually the standard or not.
The nice thing about putting something like this on a pedestal is that maybe it could actually become a standard, even if the standard should be simple and get out of the way. Better to have a standard that's a bit over the top than no standard at all.
It feels even more standard than it used to, with python -m pip and python -m venv making it so it can be used with a virtualenv even if only python or python3 is in your path.
Oh, it's certainly more standard than it used to be, and maybe it's on the way to being fully standard. But it definitely hasn't arrived in the spot that npm, cargo, hex, bundler, and similar have in their respective ecosystems.
Npm is a pretty good example of what pip should be. Npm has had to compete with other package managers for a long time but has remained the standard simply because it actually has all the basic features that people expect out of a package manager. So other package managers can spin up using the npm registry providing slightly better experiences in certain ways, but npm covers the basics.
Pip really does not even cover the basics, hence the perpetual search for a better default.
> The default and de facto standard should be something that doesn’t get put on a pedestal but stays out of the way.
This to me is unachievable. Perfection is impossible. On the way there, though, if the community and developers coalesced around a single tool, then maybe we can start heading down the road toward perfection.
I mean, stays out of the way for simple uses.
When I first learned Python, typing python and seeing >>> and having it evaluate what I typed as if it appeared in a file was a good experience.
Now that I use python a lot, ipython is more out of the way to me than the built-in python repl is, because it lets me focus on what I'm working on, than limitations of a tool.
+1 uv now also supports system installation of python with the --default --preview flags. This probably allows me to replace mise (rtx) and go uv full time for python development. With other languages, I go back to mise.
What is the deal with uv's ownership policy? I heard it might be VC backed. To my mind, that means killing pip and finding some kind of subscription revenue source which makes me uneasy.
The only way to justify VC money is a plot to take over the ecosystem and then profit off of a dominant position. (e.g. the Uber model)
I've heard a little bit about UV's technical achievements, which are impressive, but technical progress isn't the only metric.
It’s dual MIT and Apache licensed. Worst case, if there’s a rug pull, fork it.
I usually stay away far FAR from shiny new tools but I've been experimenting with uv and I really like it. I'm a bit bummed that it's not written in Python but other than that, it does what it says on the tin.
I never liked pyenv because I really don't see the point/benefit of building every new version of Python you want to use. There's a reason I don't run Gentoo or Arch anymore. I'm very happy that uv grabs pre-compiled binaries and just uses those.
So far I have used it to replace poetry (which is great btw) in one of my projects. It was pretty straightforward, but the project was also fairly trivial/typical.
I can't fully replace pipx with it because 'uv tool' currently assumes every Python package only has one executable. Lots of things I work with have multiple, such as Ansible and Jupyterlab. There's a bug open about it and the workarounds are not terrible, but it'd be nice if they are able to fix that soon.
Heck, you can get even cleaner than that by using uv’s support for PEP 723’s inline script dependencies:
# /// script
# requires-python = ">=3.12"
# dependencies = [
# "pandas",
# ]
# ///
h/t https://simonwillison.net/2024/Dec/19/one-shot-python-tools/
So it's like a shebang for dependencies. Cool.
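And if I remember the CLI right, uv can even write that block for you instead of editing it by hand (example.py here is a hypothetical file):
uv add --script example.py pandas   # inserts/updates the inline metadata block
uv run example.py                   # runs it in an ephemeral environment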
Is it possible for my IDE (vscode) to support this? Currently my IDE screams at me for using unknown packages and I have no type hinting, intellisense, etc.
I don't understand how things like this get approved into PEPs.
Seems like a great way to write self documenting code which can be optionally used by your python runtime.
As in, you think this shouldn't be possible or you think it should be written differently?
The PEP page is really good at explaining the status of the proposal, a summary of the discussion to date, and then links to the actual detailed discussion (in Discourse) about it:
I see this was accepted (I think?); is the implementation available in a released python version? I don't see an "as of" version on the pep page, nor do lite google searches reveal any official python docs of the feature.
It's not a python the language feature, it's for packaging. So no language version is relevant. It's just there for any tool that wants to use it. uv, an IDE, or anything else that manages virtual environments would be the ones who implement it independent of python versions.
And people were laughing at PHP comments configuring framework, right?
Python was always the late born twin brother of PHP with better hair and teeth, but the same eyes that peered straight into the depths of the abyss.
Why come types?
I don't think this IS a PEP, I believe it is simply something the uv tool supports and as far as Python is concerned it is just a comment.
No, this is a language standard now (see PEP 723)
And for the Jupyter setting, check out Trevor Manz's juv:
I used to have pyenv, asdf or mise to manage python versions (never use conda unless I need DL lib like pytorch). Now just uv is enough.
As a NodeJS developer it's still kind of shocking to me that Python still hasn't resolved this mess. Node isn't perfect, and dealing with different versions of Node is annoying, but at least there's none of this "worry about modifying global environment" stuff.
Caveat: I'm a node outsider, only forced to interact with it
But there are a shocking number of install instructions that offer $(npm i -g) and if one is using Homebrew or nvm or a similar "user writable" node distribution, it won't prompt for sudo password and will cheerfully mangle the "origin" node_modules
So, it's the same story as with python: yes, but only if the user is disciplined
Now ruby drives me fucking bananas because it doesn't seem to have either concept: virtualenvs nor ./ruby_modules
It's worth noting that Node allows two packages to have the same dependency at different versions, which means that `npm i -g` is typically a lot safer than a global `pip install`, because each package will essentially create its own dependency tree, isolated from other packages. In practice, NPM has a deduplication process that makes this more complicated, and so you can run into issues (although I believe other package managers can handle this better), but I rarely run into issues with this.
That said, I agree that `npm i -g` is a poor system package manager, and you should typically be using Homebrew or whatever package manager makes the most sense on your system. That said, `npx` is a good alternative if you just want to run a command quickly to try it out or something like that.
Because you don’t need virtualenvs or ruby_modules. You can have however many versions of the same gem installed it’s simply referenced by a gemfile, so for Ruby version X you are guaranteed one copy of gem version Y and no duplicates.
This whole installing the same dependencies a million times across different projects in Python and Node land is completely insane to me. Ruby has had the only sane package manager for years. Cargo too, but only because they copied Ruby.
Node has littered my computer with useless files. Python’s venv eat up a lot of space unnecessarily too.
Ruby has a number of solutions for this - rvm (the oldest, but less popular these days), rbenv (probably the most popular), chruby/gem_home (lightweight) or asdf (my personal choice as I can use the same tool for lots of languages). All of those tools install to locations that shouldn't need root.
Yes, I am aware of all of those, although I couldn't offhand tell anyone the difference in tradeoffs between them. But I consider having to install a fresh copy of the whole distribution a grave antipattern. I'm aware that nvm and pyenv default to it and I don't like that
I did notice how Homebrew sets env GEM_HOME=<Cellar>/libexec GEM_PATH=<Cellar>/libexec (e.g. <https://github.com/Homebrew/homebrew-core/blob/9f056db169d5f...>) but, similar to my node experience, since I am a ruby outsider I don't totally grok what isolation that provides
Python has been cleaning up a number of really lethal problems like:
(i) wrongly configured character encodings (suppose you incorporated somebody else's library that does a "print" and the input data contains some invalid characters that wind up getting printed; that "print" could crash a model trainer script that runs for three days if error handling is set wrong and you couldn't change it when the script was running, at most you could make the script start another python with different command line arguments)
(ii) site-packages; all your data scientist has to do is `pip install --user` the wrong package and they'd trash all of their virtualenvs, all of their condas, etc. Over time the defaults have changed so Pythons aren't looking into the site-packages directories, but I wasted a long time figuring out why a team of data scientists couldn't get anything to work reliably.
(iii) "python" built into Linux by default. People expected Python to "just work" but it doesn't "just work" when people start installing stuff with pip, because you might be working on one thing that needs one package and another thing that needs another package, and you could trash everything you're doing with Python in the process of trying to fix it.
Unfortunately python has attracted a lot of sloppy programmers who think virtualenv is too much work and that it's totally normal for everything to be broken all the time. The average data scientist doesn't get excited when it crumbles and breaks, but you can't just call up some flakes to fix it. [1]
I don’t exactly remember the situation but a user created a python module named error.py.
Then in their main code they imported the said error.py but unfortunately numpy library also has an error.py. So the user was getting very funky behavior.
... it's tricky. In Java there's a cultural expectation that you name a package like
package organization.dns.name.this.and.that;
but real scalability in a module system requires that somebody else can package things up as
package this.and.that;
and that you can make the system look at a particular wheel/jar/whatever and make it visible under a prefix you specify, like
package their.this.and.that;
Programmers seem to hate rigorous namespace systems though. My first year programming Java (before JDK 1.0) the web site that properly documented how to use Java packages was at NASA, and you still had people writing Java classes that were in the default package.
But let's all be real here: the ability of __init__.py to do FUCKING ANYTHING IT WANTS is insanity made manifest
I am kind of iffy on golang's import (. "some/package/for/side-effects") but at least it cannot suddenly mutate GOPATH[0]="/home/jimmy/lol/u/fucked" as one seems to be able to do on the regular with python
I am acutely aware that is (programmer|package|organization|culture)-dependent but the very idea that one can do that drives us rigorous people stark-raving mad
> Programmers seem to hate rigorous namespace systems though
Pretty much a nothing burger in Rust, so I disagree that programmers necessarily hate the concept. Maybe others haven’t done a good job with the UX?
> Python has been cleaning up a number of really lethal problems like
I wish they would stick to semantic versioning tho.
I have used two projects that got stuck in incompatible changes in the 3.x Python.
That is a fatal problem for Python. If a change in a minor version makes things stop working, it is very hard to recommend the system. A lot of work has gone down the drain, by this Python user, trying to work around that
I've only recently started with uv, but this is one thing it seems to solve nicely. I've tried to get into the mindset of only using uv for python stuff - and hence I haven't installed python using homebrew, only uv.
You basically need to just remember to never call python directly. Instead use uv run and uv pip install. That ensures you're always using the uv installed python and/or a venv.
Python based tools where you may want a global install (say ruff) can be installed using uv tool
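In practice my routine looks roughly like this (a sketch; file names are placeholders):
uv python install 3.13               # uv-managed interpreter, no brew/apt python needed
uv venv --python 3.13                # project venv pinned to that interpreter
uv pip install -r requirements.txt   # installs into the venv, no activation step
uv run python script.py              # always the venv's python
uv tool install ruff                 # global-ish tools, isolated from projects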
> Python based tools where you may want a global install (say ruff) can be installed using uv tool
uv itself is the only Python tool I install globally now, and it's a self-contained binary that doesn't rely on Python. ruff is also self-contained, but I install tools like ruff (and Python itself) into each project's virtual environment using uv. This has nice benefits. For example, automated tests that include linting with ruff do not suddenly fail because the system-wide ruff was updated to a version that changes rules (or different versions are on different machines). Version pinning gets applied to tooling just as it does to packages. I can then upgrade tools when I know it's a good time to deal with potentially breaking changes. And one project doesn't hold back the rest. Once things are working, they work on all machines that use the same project repo.
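Concretely, that just means treating ruff as a pinned dev dependency of the project (a sketch, assuming a uv-managed pyproject.toml):
uv add --dev ruff     # recorded in pyproject.toml and uv.lock like any other dep
uv run ruff check .   # every machine and CI run gets the same ruff version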
If I want to use Python based tools outside of projects, I now do little shell scripts. For example, my /usr/local/bin/wormhole looks like this:
#!/bin/sh
uvx \
--quiet \
--prerelease disallow \
--python-preference only-managed \
--from magic-wormhole \
wormhole "$@"
Because node.js isn't a dependency of the Operating system.
Also we don't have a left pad scale dependency ecosystem that makes version conflicts such a pressing issue.
Oh, tell us how the OS can’t venv itself a separate Python root and keep itself away from whatever the user invents to manage deps. That's a non-explanation appealing to authority; it’s clearly just a mess lacking any thought. It just happens to work like this.
> we don't have a left pad scale dependency ecosystem that makes version conflicts such a pressing issue
TensorFlow.
This. IME, JS devs rarely have much experience with an OS, let alone Linux, and forget that Python literally runs parts of the OS. You can’t just break it, because people might have critical scripts that depend on the current behavior.
Hot take: pnpm is the best dx, of all p/l dep toolchains, for devs who are operating regularly in many projects.
Get me the deps this project needs, get them fast, get them correctly, all with minimum hoops.
Cargo and deno toolchains are pretty good too.
Opam, gleam, mvn/gradle, stack, npm/yarn, nix even, pip/poetry/whatever-python-malarkey, go, composer, …what other stuff have I used in the past 12 months… c/c++ doesn’t really have a first-class standard other than global sys deps (so I'll refer back to nix or OS package managers).
Getting the stuff you need where you need it is always doable. Some toolchains are just above and beyond, batteries included, ready for productivity.
Have you used bun? It's also great. Super fast
I swear I'm not trolling: what do you not like about modern golang's dep management (e.g. go.mod and go.sum)?
I agree that the old days of "there are 15 dep managers, good luck" was high chaos. And those who do cutesy shit like using "replace" in their go.mod[1] is sus but as far as dx $(go get) that caches by default in $XDG_CACHE_DIR and uses $GOPROXY I think is great
1: https://github.com/opentofu/opentofu/blob/v1.9.0/go.mod#L271
pnpm is the best for monorepos. I've tried yarn workspaces and npm's idea of it and nothing comes close to the DX of pnpm
What actually, as an end user, about pnpm is better than Yarn? I've never found an advantage with pnpm in all the times I've tried it. They seem very 1:1 to me, but Yarn edges it out thanks to it having a plugin system and its ability to automatically pull `@types/` packages when needed.
> automatically pull `@types/` packages when needed
Wait, what? Since when?
I wouldn't really say it's that black and white. It was only recently that many large libraries and tools recommended starting with "npm i -g ...". Of course you could avoid it if you knew better, but the same is true for Python.
Half the time something breaks in a javascript repo or project, every single damn javascript expert in the team/company tells me to troubleshoot using the below sequence, as if throwing spaghetti on a wall with no idea what's wrong.
1. Run npm install.
2. Delete node_modules and wait 30 minutes because it takes forever to delete 500MB worth of 2 million files.
3. Do an npm install again (or yarn install, or that third one that popped up recently?).
4. Uninstall/upgrade npm (or is it Node? No wait, npx I think. Oh well, it used to be node + npm, now it's something different.)
5. Then do steps 1 to 3 again, just in case.
6. Hmm, maybe it's the lockfile? Delete it, one of the juniors pushed their version to the repo without compiling maybe. (Someone forgot to add it to the gitignore file?)
7. Okay, do steps 1 to 3 again, that might have fixed it.
8. If you've gotten here, you are royally screwed and should try the next javascript expert, he might have seen your error before.
So no, I'm a bit snarky here, but the JS ecosystem is a clustermess of chaos and should rather fix it's own stuff first. I have none of the above issues with python, a proper IDE and out of the box pip.
So you’re not experiencing exactly this with pip/etc? I hit this “just rebuild this 10GB venv” scenario like twice a day while learning ML. Maybe it’s just ML, but then regular node projects don’t have complex build-step / version-clash deps either.
The pain is real. Most of the issues are navigable, but often take careful thought versus some canned recipe. npm or yarn in large projects can be a nightmare. starting with pnpm makes it a dream. Sometimes migrating to pnpm can be rough, because projects that work may rely on incorrect, transitive, undeclared deps actually resolving. Anyway, starting from pnpm generally resolves this sort of chaos.
Most package managers are developed.
Pnpm is engineered.
It’s one of the few projects I donate to on GitHub
What kind of amateurs are you working with? I’m not a Node.js dev and even I know about npm ci command.
Sounds like a tale from a decade ago, people now use things like pnpm and tsx.
It is kind of solved, but not default.
This makes a big difference. There is also the social problem of the Python community being too loudly opinionated to settle on a good, robust default solution.
But same has now happened for Node with npm, yarn and pnpm.
The activation of the virtualenv is unnecessary (one can execute pip/python directly from it), and the configuring of your local pyenv interpreter is also unnecessary, it can create a virtual environment with one directly:
pyenv virtualenv python3.12 .venv
.venv/bin/python -m pip install pandas
.venv/bin/python
Not quite one command, but a bit more streamlined, I guess.
Note that in general calling the venv python directly vs activating the venv are not equivalent.
E.g. if the thing you run invokes python itself, it will use the system python, not the venv one in the first case.
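You can see the difference without activating anything; this one-liner (a sketch) prints the interpreter actually running versus what a bare `python` on PATH would resolve to:
.venv/bin/python -c "import sys, shutil; print(sys.executable); print(shutil.which('python'))"
The first line is the venv interpreter; the second is whatever PATH finds (the system python, or None), which is what a subprocess launched as plain `python` would get.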
Indeed, you're right ;).
Why can’t Python just adopt something like yarn/pnpm and effing stop patch-copying its binaries into a specific path? And pick up site_packages from where it left them last time? Wtf. How hard is it to just pick a python-modules directory and a python-project.json and sync them into correctness by symlinking/mklink-ing missing folders from a package cache in a few seconds?
Every time I have to reorganize or upgrade my AI repos, it’s yet another 50GB of writes to my poor SSD. Half of it is torch, the other half auto-downloaded models that I cannot stop because they become “downloaded” and you never know how to resume them or even find where they are, because Python logging culture is just barbaric.
There's so many more!
1. `uvx --from git+https://github.com/httpie/cli httpie`
2. uv in a shebang: https://simonwillison.net/2024/Aug/21/usrbinenv-uv-run/
Yes! since that Simon Willison article, I've slowly been easing all my scripts into just using a uv shebang, and it rocks! I've deleted all sorts of .venvs and whatnot. really useful
The uv shebang is definitely the killer feature for me, especially with so much of the AI ecosystem tied up in Python. Before, writing Python scripts was a lot more painful requiring either a global scripts venv and shell scripts to bootstrap them, or a venv per script.
I’m sure it was already possible with shebangs and venv before, but uv really brings the whole experience together for me so I can write python scripts as freely as bash ones.
Super neat re Willison article.. would something like this work under powershell though?!
Uv also bundles the uvx command so you can run Python tools without installing them manually:
uvx --from 'huggingface_hub[cli]' huggingface-cli
And there's also the `uv run script.py` where you can have dependencies indicated as comments in the script, see eg https://simonwillison.net/2024/Dec/19/one-shot-python-tools/
Neat!
Ridiculous post:
The author says that a normal route would be:
- Take the proper route:
- Create a virtual environment
- pip install pandas
- Activate the virtual environment
- Run python
Basically, out of the box, when you create a virtualenv it is immediately activated. And you would obviously need to have it activated before doing a pip install...
In addition, in my opinion this is the thing that would suck about uv: having different functions tied to a single tool execution.
It is a breeze to be able to activate a venv and be done with it: being able to run your program multiple times in one go, even with crashes, being able to install more dependencies, test it in the REPL, ...
You can still use traditional venvs with UV though, if you want.
For anyone that used rye, it's worth noting that the creator of rye recommends using uv. Also, rye is going to be continually updated to just interface with uv until rye can be entirely replaced by uv.
What's the point if you have other binary dependencies?
Use Nix for Python version as well as other bin deps, and virtualenv + pip-tools for correct package dependency resolution.
Waiting 4s for pip-tools instead of 1ms for uv doesn't change much if you only run it once a month.
Python package management has always seemed like crazyland to me. I've settled on Anaconda as I've experimented with all the ML packages over the years, so I'd be interested to learn why uv, and also what/when are good times to use venv/pip/conda/uv/poetry/whatever else has come up.
NeutralCrane has a really helpful comment below[0], would love to have a more thorough post on everything!
Sometimes, only a specific wheel is available (e.g. on Nvidia's Jetson platform where versions are dictated by the vendor).
Can uv work with that?
OK, I'm convinced. I just installed uv. Thanks for sharing!
Ditto. This is pretty cool!
What would be interesting is if you could do something similar for IPython/Jupyter Notebooks: while front-ends like JupyterLab and VS Code Notebooks do let you select a .venv if present in the workspace, it's annoying to have to set one up and build one for every project.
uv implements PEP 723.
https://packaging.python.org/en/latest/specifications/inline...
Especially useful if the script has dependencies on packages in private repos.
Small misorder in the "right route" - you should first activate the virtual environment just created and then install pandas.
I love this, the biggest problem I have right now with python scripts is distributing my single file utility scripts (random ops scripts).
I wish there was a way to either shebang something like this or build a wheel that has the full venv inside.
There’s a shebang now. as of PEP 722 you can declare dependencies in a comment at the top of a single file script that a package manager can choose to read and resolve.
uv has support for it: https://docs.astral.sh/uv/guides/scripts/#running-a-script-w... (which only helps if your team is all in on uv, but maybe they are)
How does that work with the shebang?
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "click>8",
# "rich",
# ]
# ///
and that's it.
Do other package managers support this yet?
PEP 722 was rejected. You are thinking of PEP 723, which was very similar to 722 in goals.
https://discuss.python.org/t/pep-722-723-decision/36763 contains the reasoning for accepting 723 and rejecting 722.
You mean like pyinstaller https://pyinstaller.org that takes your Python and makes a standalone, self-extracting or onedir archive, to convert your ops script plus dependencies into something you can just distribute like a binary?
You are in luck https://docs.astral.sh/uv/guides/scripts/#declaring-script-d...
https://peps.python.org/pep-0723/ is at the very least related. It's a way of specifying the metadata in the script, allowing other tools to do the right thing. One of the use cases is:
> A user facing CLI that is capable of executing scripts. If we take Hatch as an example, the interface would be simply hatch run /path/to/script.py [args] and Hatch will manage the environment for that script. Such tools could be used as shebang lines on non-Windows systems e.g. #!/usr/bin/env hatch run
https://micro.webology.dev/2024/08/21/uv-updates-and.html shows an example with uv:
> With this new feature, I can now instruct users to run uv run main.py without explaining what a venv or virtualenv is, plus a long list of requirements that need to be passed to pip install.
That ends:
> PEP 723 also opens the door to turning a one-file Python script into a runnable Docker image that doesn’t even need Python on the machine or opens the door for Beeware and Briefcase to build standalone apps.
I do like uv and hope to try it soon but I don't get the point of the article.
Pyenv + poetry already gives you ability to "pull in local dependencies". Yes, you have to create a virtual environment and it's not "ad-hoc".
But if you're going to pull in a bunch of libraries, WHY would you want to invoke python and all your work dependencies on a one liner? Isn't it much better and easier to just spell-out the dependencies in a pyproject.toml? How "ad-hoc" are we talking here?
But I still need pip to install uv, right? Or download it using a one-liner alternatively.
You can install it in several ways without pip, easiest is standalone installer (which can upgrade itself)
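For example (this is the installer documented at astral.sh; check the docs for the current command on your platform):
curl -LsSf https://astral.sh/uv/install.sh | sh   # macOS / Linux standalone installer
uv self update                                    # later upgrades, no pip involved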
one useful UV alias I use is uvsys='uv pip install --system'
So I can just do uvsys {package} for a quick and dirty global install. I'm so used to pip install being global by default that just making this shorthand makes things a bit easier.
Can you also specify which version of pandas to use?
Of course!
uv run -q --with pandas==2.1.4 python -c "import pandas; print(pandas.__version__)"
2.1.4
That’s like a killer app type feature. However it says adhoc so you probably can’t get back to that setup easily
uv does not have (nor do they plan to add) support for conda, and that is a deal-breaker.
I can't see why anyone is using Conda in 2025. In 2018, yeah, pip (now uv) was hard and you could get a "just works" experience installing Tensorflow + NVIDIA on Conda. In 2023 it was the other way around and it still is.
Well, when you're building python packages that have non python dependencies and a big chunk of your users are on Windows, conda is the only option, even in 2025 :)
Examples include, quant libraries, in-house APIs/tools, etc.
Circa 2018, I figured out how to pack up the CUDA libraries inside conda for Windows so I could have different conda environments with different versions of CUDA which was essential back then because if you had a model that was written w/ a certain version of Tensorflow you had to have a matching CUDA and if you used NVIDIA's we-need-your-email-address installers you could only have one version of CUDA installed at a time.
Worked great except for conda making the terrible mistake of compressing package files with bzip2 which took forever to decompress for huge packages.
I see no reason you can't install any kind of non-Python thing that a Python system wants with uv because a wheel is just a ZIP file, so long as it doesn't need to be installed in a particular place you can just unpack it and go.
Conda worked for me in the past, but at some point I was getting inexplicable segfaults from Python scripts. I switched back to just pip and everything worked fine again. And installation was much faster.
That was basically my experience. At one time conda made my life easier, eventually it made it impossible.
I’m on Windows and I categorically refuse to install Conda. It’s not necessary.
Why would it be a deal breaker? uv would replace conda. And I hope it does. Conda has been such a headache for me when I've used it in the past. If the Python (particularly ML/academic community) could move on from conda it would be a great thing.
uv can’t replace conda, any more than it can replace apt or nix.
Conda packages general binary packages, not just python packages. uv is just python packages.
Python packages (distributed via the Python package index, PyPI) can also be general binary packages. Try pip install cmake, for example.
Pixi might be something worth looking for, if you want a uv conda equivalent
Fun fact: pixi uses uv as a library to install pypi packages!
pixi seems fine, but it also is just using mamba on the backend so you might as well continue to use miniforge
That doesn't make sense, respectfully.
I honestly really hate the venv ergonomics but uv does still depend on it as the golden path if you don’t use the --with flags (in my understanding). Is there a way to do a clean break with just the new --script inline dependencies, or is that wrong/suboptimal?
You can definitely do that — it's just sub-optimal when you have multiple files that share dependencies.
I had to write a simple Python script recently and I was genuinely angry at how stupid the Python ecosystem has become.
- you need a virtual environment for some reason
- installing packages under sudo doesn't make them available to other users
- on ubuntu it seems pip has been replaced with 'python-*' debian packages
I was willing to gloss over the dumbness of Python itself while it was braindead-easy to run a script.
I've replaced the linkbait title with an attempt at saying what the feature is. If there's a more accurate wording, we can change it again.
I don't feel strongly, but as a uv author, I found "local dependencies" misleading. It's more like "uv's killer feature is making ad-hoc environments easy".
When we talk about local dependencies in the Python packaging ecosystem, it's usually adding some package on your file system to your environment. The existing title made me think this would be about the `[tool.uv.sources]` feature.
Really, it's about how we create environments on-demand and make it trivial to add packages to your environment or try other Python versions without mutating state.
Sorry dang, didn't know the practice + got a bit emotional haha. I agree with the remark above; my message is rather about how easy it is to run Python scripts with dependencies (without mutating the state).
No worries!
Happy to take correction from an author! I've switched the wording above.
Thanks!
uh, thanks I guess.
It's just standard practice here. See https://news.ycombinator.com/newsguidelines.html.
see my reply above
How about "A UV feature that intrigues me most"?
That's still clickbait, although less tropey. The common definition of clickbait is intentionally omitting information that incentives the user to click.
Still clickbait if you don't say what the feature is.
wow, they've re-invented a tiny bit of Nix, purely legend!
That you can use without having 4 PhDs. It's pretty good. You should try it sometime when you're done fully ingesting algebraic topology theory or whatever the fuck Nix requires you to know just to install figlet.
Try flox [0]. It's an imperative frontend for Nix that I've been using. I don't know how to use nix-shell/flakes or whatever it is they do now, but flox makes it easy to just install stuff.
[0]: https://flox.dev/
Oh come on, it's not that hard even for packaging stuff (let alone usage). Quite trivial compared to leetcode grind I'd say.
You say that, but there seems be a vast chasm between “can solve LC hards” and “can administer an OS,” even though the latter is generally not at all abstract, extremely well-documented, and almost certainly has associated man pages.
Everybody will reinvent Nix if they are in software engineering long enough ...
If only... Majority will use whatever is shoved up their a## be it docker or anything else.
A few months ago I saw someone hacking the linker to get mundane package management working in Nix. It was bubbling up to the top of my "to try" list and that bumped it back down. It'll be good eventually, I'm sure.
Nix people are more annoying than Rust Defense Force.
I use Windows, and not WSL. Nix does literally nothing for me.