uv finally feels like Python scripts can Just Work™ without a virtualenv scavenger hunt.
Now if only someone could do the same for shell scripts. Packaging, dependency management, and reproducibility in shell land are still stuck in the Stone Ages. Right now it’s still curl | bash and hope for the best, or a README with 12 manual steps and three missing dependencies.
Sure, there’s Nix... if you’ve already transcended time, space, and the Nix manual. Docker? Great, if downloading a Linux distro to run sed sounds reasonable.
There’s got to be a middle ground: simple, declarative, and built for humans.
Nix is overkill for any one of the things it can do, and writing a simple portable script is no exception.
But: it’s the same skill set for every one of those things. This is why it’s an investment worth making, IMO. If you’re only ever going to use it for one single thing, it’s not worth it. But once you’ve learned it, you’ll be able to leverage it everywhere.
Python scripts with or without dependencies, uv or no uv (through the excellent uv2nix, which I can’t plug enough, no affiliation), bash scripts with any dependencies you want, etc. Suddenly it’s your choice and you can actually choose the right tool for the job.
Not trying to derail the thread but it feels germane in this context. All these little packaging problems go away with Nix, and are replaced by one single giant problem XD
I don't think nix is that hard for this particular use case. Installing nix on other distros is pretty easy, and once it's installed you just do something like this
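For example, a nix-shell shebang roughly like this (a sketch; `jq` just stands in for whatever dependencies the script needs):

    #!/usr/bin/env nix-shell
    #!nix-shell -i bash -p jq
    jq --version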
This is a hack, but I still found it helpful. If you do want to force a certain version, without worrying about flakes [1], this can be your bash shebang, with something similar for nix configuration.nix or interactive nix-shell. It just tells nix to use a specific git hash for its base instead of whatever your normal channel is.
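Presumably something along these lines, with the commit hash as a placeholder for whichever nixpkgs revision you want to pin:

    #!/usr/bin/env nix-shell
    #!nix-shell -i bash -p jq
    #!nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<nixpkgs-commit-sha>.tar.gz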
For my use case, most things I don't mind tracking mainline, but some things I want to pin (chromium is very large, python changes a lot, or some version broke things).
I will say this wholeheartedly: my Arch Linux install broke and I wanted to try out Nix.
The most shocking part about Nix is nix-shell (I know I can use it on other distros, but hear me out): it's literally so cool to install programs for one-off use.
Want to record your desktop? That's one of those tasks I do quite infrequently, and I didn't like how on Arch I either had to keep OBS around (and updated) as a permanent dependency or uninstall it afterwards. Ephemerality was a concept I was looking for before Nix, since I like to try out new software and keep my home system fairly minimalist. Cool: `nix-shell -p obs-studio`, then `obs`, and you've got it.
Honestly, I like a lot of things about Nix. I still haven't gone far into the flakes side of things and sadly just use it imperatively, but I found out that Nix builds are sandboxed, so I had the idea of using it as a sandbox to run code from Reddit, and I think I am going to do something cool with it (building something like codapi; codapi's creator is kinda cool, if you are reading this mate, I'd love to talk to ya).
I also feel as if some software could truly be made plug-and-play. Imagine Hetzner offering NixOS machines (currently I've heard its support is finicky): we could get something really close to DigitalOcean droplets, plug and play, without the isolation Docker provides. Docker has its own use cases, but I find managing Docker stuff harder than managing Nix stuff; feel free to correct me, I'm just describing how it feels using Nix.
I also wish there were something like a functional Lua (does that exist??) -> Nix transpiler, because I'd like to write Lua instead of Nix to manage my system, but I guess Nix is fine too!
Hi there, Since you mentioned Hetzner, I thought I would respond here. While we do not have NixOS as one of our standard images for our cloud products, it is part of our ISO library. Customers can install it manually. To do this, create a cloud server, click on it, and then on the "ISO" in the menu, and then look for it listed alphabetically. --Katie
Hey Hetzner. I am just a 16 year old boy (technically I'm turning 17 on 2nd July haha, but I want nothing from ya haha) who has heard great things about your service being affordable, but I've never tried it because I don't have a credit card / I'm a really frugal person at the moment haha. I was reading one of your own documents, if I remember correctly, and it said that the support isn't the best (but I guess I was wrong).
I guess I will try out nix on hetzner for sure one day.
This is really cool!!! Thanks! I didn't expect you to respond. This is really really cool. You made my day, whoever responded with this.
THANKS A LOT KATIE. LOTS OF LOVE TO HETZNER. MAY YOU BE THE WAY YOU ARE, SINCE Y'ALL ARE PERFECT.
Hi again, I'm happy that I made your day! You seem pretty easy to please if that is all it takes.
Keep in mind that customers must be 18 years old. I believe that is a legal requirement here in Germany, where we are based. Until then, if you're a fan, maybe you'd enjoy seeing what we're up to. We're on YouTube, reddit, Mastodon, Instagram, Facebook, and X. --Katie
> Packaging, dependency management, and reproducibility in shell land are still stuck in the Stone Ages.
IMO it should stay that way, because any script that needs those things is way past the point where shell is a reasonable choice. Shell scripts should be small, 20 lines or so. The language just plain sucks too much to make it worth using for anything bigger.
My rule of thumb is that as soon as I write a conditional, it's time to upgrade bash to Python/Node/etc. I shouldn't have to search for the nuances of `if` statements every time I need to write them.
This is a decent heuristic, although (IMO) you can usually get away with ~100 lines of shell without too much headache.
Last year I wrote (really, grew like a tumor) a 2000 line Fish script to do some Podman magic. The first few hundred lines were great, since it was "just" piping data around - shell is great at that!
It then proceeded to go completely off the rails when I went full sunk cost fallacy and started abusing /dev/shm to emulate hash tables.
E: just looked at the source code. My "build system" was another Fish script that concatenated several script files together. Jeez. Never again.
An if statement in, for instance, bash just runs any command and then runs one of two blocks of code based on the exit status of that command. If the exit status is truthy, it runs what follows the `then`. If it's falsey, it runs what follows the `else`. (`elif` is admittedly gratuitous syntax; it would be better if it were just implemented as an if inside an else statement.) This seems quite similar to other programming languages and like not very much to remember.
I'll admit that one thing I do in my shell scripts is avoid "fake syntax": I never use `[` or `[[` because these obscure the real structure of the statements for the sake of cuteness. I just write `test`, which makes clear that it's just an ordinary command, and also signals to someone who isn't sure what it's doing that they can find out just by running `man test`, `help test`, `info test`, etc., from the same shell.
I also agree that if statements and if expressions should be kept few and simple. But in some ways it's actually easier to do this in shell languages than in many others! Chaining && and/or || can often get you through a substantial script without any if statements at all, let alone nested ones.
I mean, there are 3 equally valid ways to write an if statement: `test`, `[`, and `[[`. In the case of the latter two, there are a mess of single-letter flags to test things about a file or condition[0]. I'm not sure what makes them "fake syntax", but I also don't know that much about bash.
It's all reasonable enough if you go and look it up, but the script immediately becomes harder to reason about. Conditionals shouldn't be this hard.
You don't need any of those to write an if statement. I frequently write if statements like this one:
if ! grep -qF something /etc/some/config/file 2>/dev/null; then
    do_something
fi
The `test` command is there if you want to use it, but it's just another command.
In the case of Bash, `test` is a built-in command rather than an external program, and it also has two other names, `[` and `[[`. I don't like the latter two because they look, to a naive reader, like special syntax built into the shell, like something the parser sees as unique and different and that bears a special relationship to if-statements; but they aren't and they don't. And in fact you can use them in other shells that don't have them as built-ins, if you implement them as external commands. (You can probably find a binary called `[` on your system right now.)
(Actually, it looks like `[[` is even worse than "fake syntax"... it's real special syntax. It changes how Bash interprets `&&` and `||`. Yikes.)
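A quick illustration of the difference (just a sketch):

    # inside [[ ]], && is parsed as a logical operator on the test expressions:
    [[ "$A" = "$B" && "$C" = "$D" ]] && echo both match
    # with plain test/[ , && belongs to the shell, so you chain two ordinary commands:
    [ "$A" = "$B" ] && [ "$C" = "$D" ] && echo both match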
But if you don't like `test`, you don't have to use it; you can use any command you like!
For instance, you might use `expr`:
if expr "1 > 0"; then
echo this will always run
else
echo this will never run
fi
Fish has some built-ins that fall into a similar niche that are handy for simple comparisons like this, namely `math` and `string`, but there are probably others.
If you really don't like `test`, you don't even need to use it for checking the existence or type (dir, symlink, socket, etc.) of files! You can use GNU `find` for that, or even sharkdp's `fd` if you ache for something new and shiny.
Fish actually has something really nice here in the `path` built-in, which includes long options like you and I both wish `test` had. You can write:
if path is --type=dir a/b/c
    touch a/b/c/some-file
end
You don't need `test` for asking about or asserting equality of variables, either:
grep -qxF "$A" <<< "$B"
is equivalent to
test "A" = "$B"
or with the Fish `string` built-in
string match --entire $A $B
The key is that in a shell, every command is effectively truthy or falsey in terms of its exit status. `&&` and `||` let you combine those exit statuses in exactly the way you'd expect, as do the (imo much more elegant) `and` and `or` combiner commands in Fish.
Finally, there's no need to use `test`'s own operators for combining conditions. I certainly never do. You can just write
test "$A" = "$B" && test "$C" = "$D"
instead of something like
[ "$A" = "$B" -a "$C" = "$D" ]
If-statements in shell languages are so simple that there's practically nothing to them. They just take a single command (any!) and branch based on its exit status! That's it.
As for readability: any program in any language is difficult to understand if you don't know the interfaces or behaviors of the functions it invokes. `[`/`test` is no different from any such function, although it appears that `[[` is something weirder and, imo, worse.
Historically, my rule of thumb is: as soon as I can't see the ~entire script without scrolling, it's time to rewrite in Python/Ansible. I think about the rewrite, but it usually takes a while to actually do it (if ever).
When you solve the dependency management issue for shell scripts, you can also use newer language features because you can ship a newer interpreter the same way you ship whatever external dependencies you have. You don't have to limit yourself to what is POSIX, etc. Depending on how you solve it, you may even be able to switch to a newer shell with a nicer language. (And doing so may solve it for you; since PowerShell, newer shells often come with a dependency management layer.)
> any script that needs those things
It's not really a matter of needing those things, necessarily. Once you have them, you're welcome to write scripts in a cleaner, more convenient way. For instance, all of my shell scripts used by colleagues at work just use GNU coreutils regardless of what platform they're on. Instead of worrying about differences in how sed behaves with certain flags, on different platforms, I simply write everything for GNU sed and it Just Works™. Do those scripts need such a thing? Not necessarily. Is it nicer to write free of constraints like that? Yes!
Same thing for just choosing commands with nicer interfaces, or more unified syntax... Use p7zip for handling all your archives so there's only one interface to think about. Make heavy use of `jq` (a great language) for dealing with structured data. Don't worry about reading input from a file and then writing back to it in the same pipeline; just throw in `sponge` from moreutils.
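For instance, something like this (the file name is made up):

    # sort a JSON array in place, no temp-file dance
    jq 'sort_by(.name)' items.json | sponge items.json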
> The language just plain sucks too much
There really isn't anything better for invoking external programs. Everything else is way clunkier. Maybe that's okay, but when I've rewritten large-ish shell scripts in other languages, I often found myself annoyed with the new language. What used to be a 20-line shell script can easily end up being 400 lines in a "real" language.
I kind of agree with you, of course. POSIX-ish shells have too much syntax and at the same time not enough power. But what I really want is a better shell language, not to use some interpreted non-shell language in their place.
Nice, if only you could count on having it installed on your fleet, and your fleet being 100% Linux: no AIX, no HP-UX, no Solaris, no SUSE on IBM Power...
Been there, tried to, got a huge slap in the face.
Been there, done that. I am so glad I don’t have to deal with all that insanity anymore. In the build farm I was responsible for, I was always happy to work on the Linux and BSD boxes. AIX and HPUX made me want to throw things. At least the Itanium junk acted like a normal server, just a painfully slow one.
I will never voluntarily run a bunch of non-Linux/BSD servers again.
At the time (10 years ago) I worked for a company with enormous customers who had all kinds of different deployment targets. I bet that list is a lot shorter today.
I have a couple of projects consisting of a bit over 1k lines of Bash each. :) Not to boast, but it is pretty easy to read and maintain. It is complete as well: I tested all of its functionality and it just works(tm). Were it another language, it may have been more than just around 1k LOC, or more difficult to maintain. I call some external programs a lot, so I stuck with a shell script.
I simply do not write shell scripts that use or reference binaries/libraries that are not pre-installed on the target OS (which is the correct approach; writing shell scripts for portability is silly).
There is no package manager that is going to make a shell script I write for macOS work on Linux if that script uses commands that only exist on macOS.
That's a shame, as I got to monk-level Python jujitsu. I can fix any problem, you name it: https nightmares, brew version vs pyenv, virtualenv shenanigans. Now all this knowledge is a bad investment of time.
Knowing the Python packaging ecosystem, uv could very well be replaced by something else. It feels different this time, but we won't know for a while yet.
Agreed. I migrated ~all my personal things to uv, but I'm sure once I start adopting it widely at work I'll find edge cases where you need to know the weeds to figure out / work around.
I'm unable to resist responding that clearly the solution is to run Nix in Docker as your shell since packaging, dependency management, and reproducibility will be at theoretical maximum.
For the specific case of solving shell script dependencies, Nix is actually very straightforward. Packaging a script is a writeShellApplication call and calling it is a `nix run`.
I guess the issue is just that nobody has documented how to do that one specific thing so you can only learn this technique by trying to learn Nix as a whole.
So perhaps the thing you're envisaging could just be a wrapper for this Nix logic.
> finally feels like Python scripts can Just Work™ without a virtualenv scavenger hunt.
Hmm, last time I checked, uv installs into ~/.local/share/uv/python/cpython-3.xx and cannot be installed globally, e.g. inside a minimal Docker image without any other Python.
Like the author, I find myself going more for cross-platform Python one-offs and personal scripts for both work and home and ditching Go. I just wish Python typechecking weren't the shitshow it is. Looking forward to ty, pyrefly, etc. to improve the situation a bit
Speed is one thing; the type system itself is another. You are basically guaranteed to hit 5-10 issues with Python's weird type system before you start grasping some of the oddities.
I wouldn't describe Python type checking as a shit-show. pyright is pretty much perfect. One nit against it perhaps is that it doesn't support non-standard typing constructs like mypy does (for Django etc). That's an intentional decision on the maintainer's part. And I'm glad he made that decision, because that spurred efforts to make the standard typing constructs more expressive.
I'm also looking forward to the maturity of Rust-based type checkers, but solely because one can almost always benefit from an order of magnitude improvement in speed of type checking, not because Python type-checking is a "shit show".
I do grant you that for outsiders, the fact that the type checker from the Python organization itself is actually a second rate type checker (except for when one uses Django, etc, and then it becomes first-rate) is confusing.
I've never particularly liked Go for cross-platform code anyway; I've always found it pretty tightly wedded to Unix. Python has its fair share of issues on Windows as well, though. I've been stuck debugging weird DLL issues with libraries for far too long in my life.
Strangely, I've found myself building personal cross platform apps in game engines because of that.
I do hope the community will converge on one type checker like ty. The fact that multiple type checkers exist is really hindering to the language as a whole.
uv has been fantastic to use for little side projects. Combining uv run with `uv tool run` AKA `uvx` means one can fetch, install within a VM, and execute Python scripts from Github super easily. No git clone, no venv creation + entry + pip install.
And uv is fast. I mean REALLY fast. Fast to the point of suspecting something went wrong and silently errored, when in fact it did just what I wanted, but 10x faster than pip.
It's a little rough around the edges (especially the docs), but it's bold enough and good enough that I'm willing to use it nonetheless.
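For example (the GitHub URL is a placeholder; any pip-installable repo with an entry point should work):

    uvx ruff check .                                          # fetch and run a PyPI tool in a throwaway venv
    uvx --from git+https://github.com/<user>/<repo> <tool>    # run a tool straight from GitHub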
I know there are real reasons for slow Python startup time, with every new import having to examine swaths of filesystem paths to resolve itself, but it really is a noticeable breath of fresh air working with tools implemented in Go or Rust that have sub-ms startup.
You don't have to import everything just to print the help. I try to avoid top-level imports until after the CLI arguments have been parsed, so the only import until then is `argparse` or `click`. This way, startup appears to be instant even in Python.
Example:
if __name__ == "__main__":
    from myapp.cli import parse_args

    args = parse_args()
    # The program will exit here if `-h` is given

    # Now do the heavy imports
    from myapp.lib import run_app

    run_app(args)
Another pattern, though, is that a top level tool uses pkg_resources and entry_points to move its core functionality out to verb plugins— in that case the help is actually the worst case scenario because not only do we have to scan the filesystem looking for what plugins are available, they all have to be imported in order to ask each for its help strings.
An extreme version of this is the colcon build tool for ROS 2 workspaces:
The Python startup latency thing makes sense, but I really don't understand why it would take `pyenv` a long time to print each line of its "usage" output (the one that appears when invoking it with `--help`) once it's already clearly in the code branch that does only that.
It feels like it's doing heavy work between each line printed! I don't know any other CLI tool doing that either.
The "slowness" and the utter insanity of trying to make a "works on my computer" Python program work on another computer pushed me to just rewrite all my Python stuff in Go.
About 95% of my Python utilities are now Go binaries cross-compiled to whatever env they're running in. The few remaining ones use (API) libraries that aren't available for Go or aren't mature enough for me to trust them yet.
I agree uv is amazing, but it's not a virtual machine, it's a virtual environment. It runs the scripts on top of your OS without any hardware virtualization. The virtual environment only isolates the Python dependencies.
Is there a reason you didn’t explicitly pull in mkdocs as a dependency in that invocation? I guess uv will expose it/let you run it anyways due to the fact that it’s required by everything else you did specify.
Very nice, I believe Rust is doing something similar too which is where I initially learned of this idea of single-file shell-type scripts in other languages (with dependency management included, which is how it differs from existing ways of writing single-file scripts in e.g. scripting languages) [0].
Hopefully more languages follow suit on this pattern as it can be extremely useful for many cases, such as passing gists around, writing small programs which might otherwise be written in shell scripts, etc.
I’ve been a python dev for nearly a decade and never once thought dep management was a problem.
If I’ve ever had to run a “script” in any type of deployed ENV, it’s always been done in that ENV’s Python shell.
So I still don’t see what the fuss is about?
I work on a massive Python code base and the only benefit I’ve seen from moving to uv is that it has sped up dep installation, which has had a positive impact on local and CI setup times.
Thankfully some newer systems will error by default if you try to mess with them via pip instead of your system's package manager. Easy to override if you want to, and saves a lot of effort fixing accidental screw ups.
I guess this is why people need to get out of this “Python dev” or “JS dev” mindset and try other languages to see why those coming to Python complain so much about dependency management.
People complain because the experience is less confusing in many other languages. Think Go, Rust, or even JS. All the tooling chaos and virtual environment jujitsu are real deterrents for newcomers. And it’s not just beginners complaining about Python tooling. Industry veterans like Armin Ronacher do that all the time.
uv is a great step in the right direction, but the issue is that as long as the basic tooling isn’t built into the language binary, like Go’s tools or Rust’s Cargo, more tools will pop up and fragment the space even further.
Confusing is underselling it. That implies that Python dependency management is working fine, it's just complex. But it's not working fine: there's no such thing as lock files, which makes reproducible installs a gamble and not a given. For small scripts this is probably "okay", but if you're working in a team or want to deploy something on a server, then it's absolutely not fine because you want deterministic builds and that's simply impossible without a decent package manager.
Tools like uv solve the "it works on my machine" problem. And it's also incredibly fast.
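A minimal sketch of what that looks like in a uv project:

    uv lock    # resolve dependencies and write uv.lock
    uv sync    # make the environment match uv.lock exactly

Commit uv.lock and every machine (and CI) gets the same resolution.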
The issue is that since there is no standardized build tool (pip and uv are both third party), there are a zillion ways of generating this lockfile, unlike go.mod or Cargo.toml. So it doesn't work in many scenarios and it's confusing as hell.
My view is that I’m an engineer first and foremost and I use the tools which are best for the task at hand. That also means what’s best for the business in terms of others working on the project; this has meant Python with some sort of framework.
People have suggested using other languages that might be faster, but the business always chooses what’s best for everyone to work with.
Sure, it depends on the type and maturity of the business, as well as the availability of local talent. I've worked at three companies that started out with Python and Django, then transitioned to other technologies as the business scaled. In those environments, there were two kinds of developers: those who quickly adapted and picked up new languages, and those who wanted to remain "Python devs." The latter group didn’t have a great time moving forward.
What I don't like about the "Python + Framework + Postgres" argument is that it often lacks context. This is a formidable combination for starting a business and finding PMF. But unless you've seen Python and Postgres completely break under 100k RPS and petabyte-scale data, it's hard to understand the limitations just from words. Python is fantastic, but it has its limits and there are cases where it absolutely doesn't work. This “single language everywhere” mindset is how we ended up with JavaScript on the backend and desktop.
Anyone can write Python, and with LLMs, there's not much of a moat around knowing a single language. There's also no reason not to familiarize yourself with others, since it broadens your perspective. Of course, some businesses scale quite well with Python or JavaScript. But my point isn't to abandon Python. It's to gain experience in other languages so that when people criticize Python’s build tools, you can genuinely empathize with those concerns. Otherwise, comments like “Python tooling is fine” from people who have mostly worked with only Python are hard to take seriously.
> it’s always been done in that ENVs python shell .
What if you don't have an environment set up? I'm admittedly not a python expert by any means but that's always been a pain point for me. uvx makes that so much easier.
I wrote PHP/JS/Java before Python. Been doing Python for nearly a decade too, and like 4dregress haven't had the need to worry much about dep management. JS and PHP had all sorts of issues, Maven & Gradle are still the ones that gave me less trouble. With Python I found that most issues could be fixed by finding the PEP that implemented what I needed, and by trying to come up with a simple workflow & packaging strategy.
Nowadays I normally use `python venv/bin/<some-executable>`, or `conda run -n <some-env> <some-executable>`, or package it in a Singularity container. And even though I hear a lot of good things about uv, given that my job uses public money for research, we try to use open source and standards as much as possible. My understanding is that uv is still backed by a company, and at least when I checked it some time ago (in PEP discussions & GH issues) they were not implementing the PEPs that I needed. Even if they did, we would probably still stay with simple pip/setuptools to avoid having to use research budget to update our build if the company ever changed its business model (e.g. what Anaconda did some months/a year(?) ago).
Digressing: the Singularity container is useful for research & HPC too, as it creates a single archive, which is faster to load on distributed filesystems like the two I work on (GPFS & LustreFS) instead of loading many small files over network.
Virtual environments alone are not enough. They don't guarantee deterministic builds. What do you do to ensure that your production environment runs the same code as your local dev environment? How do you solve that problem without dependency managers like uv or poetry?
I've been a python dev for nearly 3 decades and feel that uv is removing a lot of the rough edges around dependency management. So maybe "problem" is the wrong word; I've been able to solve dependency management issues usually without too much trouble, but I have also spent a significant amount of time dealing with them. For close to a decade I was managing other people's Python environments on production systems, and that was a big mess, especially with trying to ensure that they stayed updated and secure.
If you don't see what the fuss is about, I'm happy for you. Sounds like you're living in a fairly isolated environment. But I can assure you that uv is worth a lot of fussing about, it's making a lot of our lives a lot easier.
So far I've only run into one minor ergonomic issue when using `uv run --script` with embedded metadata which is that sometimes I want to test changes to the script via the Python REPL, but that's a bit harder to do since you have to run something like:
$ uv run --python=3.13 --with-requirements <(uv export --script script.py) -- python
>>> from script import X
I'd love if there were something more ergonomic like:
$ uv run --with-script script.py python
Edit: this is better:
$ "$(uv python find --script script.py)"
>>> from script import X
That fires up the correct python and venv for the script. You probably have to run the script once to create it.
You can make `--interactive` (or whatever you want) a CLI flag of the script itself. I often make these small Typer CLIs with something like that; in another dev script like this, I have `--sql` for entering a DuckDB SQL REPL.
Between yesterday's thread and this thread I decided to finally give uv a shot today - I'm impressed, both by the speed and how easy it is to manage dependencies for a project.
I think their docs could use a little bit of work; in particular, there should be a defined path to switch from a requirements.txt-based workflow to uv. Also I felt it's a little confusing how to define a Python version for a specific project (it's defined in both .python-version and pyproject.toml).
This would’ve been really handy for me a few weeks ago when I ended up working this out for myself (not a huge job, but more effort than reading your documentation would’ve been). While I can’t think of anything missing off the top of my head, I do think a PR to uv to update the official docs would help a lot of folk!
Actually, I’ve thought of something! Migrating from poetry! It’s something I’ve been meaning to look at automating for a while now (I really don’t like poetry).
This is wonderful. When I was learning I found the documentation inadequate and gpt4 ran in circles as I did not know what to ask (I did not realize “how do I use uv instead of conda/pip?” was a fundamentally flawed question).
> it's defined in both .python-version and pyproject.toml
The `requires-python` field in `pyproject.toml` defines a range of compatible versions, while `.python-version` defines the specific version you want to use for development. If you create a new project with uv init, they'll look similar (>=3.13 and 3.13 today), but over time `requires-python` usually lags behind `.python-version` and defines the minimum supported Python version for the project. `requires-python` also winds up in your package metadata and can affect your callers' dependency resolution, for example if your published v1 supports Python 3.[old] but your v2 does not.
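Concretely, `.python-version` just holds a single version string like `3.13`, while the range lives in `pyproject.toml`, roughly:

    [project]
    name = "myproject"
    requires-python = ">=3.10"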
> how to define a python version for a specific project (it's defined in both .python-version and pyproject.toml)
pyproject.toml is about allowing other developers, and end users, to use your code. When you share your code by packaging it for PyPI, a build backend (uv is not one, but they seem to be working on providing one - see https://github.com/astral-sh/uv/issues/3957 ) creates a distributable package, and pyproject.toml specifies what environment the user needs to have set up (dependencies and python version). It has nothing to do with uv in itself, and is an interoperable Python ecosystem standard. A range of versions is specified here, because other people should be able to use your code on multiple Python versions.
The .python-version file is used to tell uv specifically (i.e. nobody else) specifically (i.e., exact version) what to do when setting up your development environment.
(It's perfectly possible, of course, to just use an already-set-up environment.)
Read-only TOML support is in the standard library since Python 3.11, though. And it's based on an easily obtained third-party package (https://pypi.org/project/tomli/).
(If you want to write TOML, or do other advanced things such as preserving comments and exact structure from the original file, you'll want tomlkit instead. Note that it's much less performant.)
Same, although I think it doesn't support my idiosyncratic workflow. I have the same files sync'd (via dropbox at the moment) on all my computers, macos and windows and wsl alike, and I just treat every computer likes it's the same computer. I thought this might be a recipe for disaster when I started doing it years ago but I have never had problems.
Some stuff like npm or dotnet do need an npm update / dotnet restore when I switch platforms. At first attempt uv seems like it just doesn't really like this and takes a fair bit of work to clean it up when switching platforms, while using venvs was fine.
You should probably look to have the uv-managed venvs completely excluded from being synced, forcing every machine to build its own venv. Given how fast and consistent uv is, there's no real reason to share the actual venvs between machines anymore.
I agree the docs are not there yet. There is a lot of documentation but it's a description of all the possible options that are available (which is a lot). But it doesn't really tell me how to actually _use_ it for a certain type of workflow, or does a mediocre job at best.
> Before this I used to prefer Go for one-off scripts because it was easy to create a self-contained binary executable.
I still do because:
- Go gives me a single binary
- Dependencies are statically linked
- I don’t need any third-party libs in most scenarios
- Many of my scripts make network calls, and Go has a better stdlib for HTTP/RPC/Socket work
- Better tooling (built-in formatter, no need for pytest, go vet is handy)
- Easy concurrency. Most of my scripts don’t need it, but when they do, it’s easier since I don’t have to fiddle with colored functions, external libs, or, worse, threads.
That said, uv is a great improvement over the previous status quo. But I don’t write Python scripts for reasons that go beyond just tooling. And since it’s not a standard tool, I worry that more things like this will come along and try to “improve” everything. Already scarred and tired in that area thanks to the JS ecosystem. So I tend to prefer stable, reliable, and boring tools over everything else. Right now, Go does that well enough for my scripting needs.
I needed to process a 2 GB xml file the other day. While my Python script was chugging away, I had Claude translate it to Go. The vibe-coded Go program then processed the file before my original Python script terminated. That was the first time I ever touched Go, but it certainly won't be the last.
I still use both Go and Python. But Python gives me access to a lot more libraries that do useful stuff. For example the YouTube transcript example I wrote about in the article was only possible in Python because afaik Go doesn't have a decent library for transcript extraction.
Between how good ChatGPT/Claude are at writing Python, and discovering uv + PEP 723, I'm creating all sorts of single file python scripts. Some of my recent personal tools: compression stats for resources when gzipped, minify SVGs, a duplicate file tool, a ping testing tool, a tool for processing large CSVs through LLMs one row at a time, etc.
uv is the magic that deals with all of the rough edges/environment stuff I usually hate in Python. All I need to do is `uv run myFile.py` and uv solves everything else.
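For reference, the inline metadata those scripts carry looks roughly like this (the dependencies are just examples):

    # /// script
    # requires-python = ">=3.12"
    # dependencies = [
    #     "requests",
    #     "rich",
    # ]
    # ///
    import requests
    from rich import print

    print(requests.get("https://api.github.com/zen").text)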
I've recently updated a Python script that I originally wrote about 10 years ago. I'm not a programmer - I just have to get stuff done - think sysops.
For me there used to be a clear delineation between scripting languages and compiled languages. Python has always seemed to want to be both and I'm not too sure it can really. I can live with being mildly wrong about a concept.
When Python first came out, our processors were 80486 at best and RAM was measured in MB at roughly £30/MB in the UK.
"For the longest time, ..." - all distros have had scripts that find the relevant Python or Java or whatevs so that's simply daft. They all have shebang incantations too.
So we now have uv written in Rust for Python. Obviously you should install it via a shell script directly from curl!
I love all of the components involved here but please for the love of a nod to security at least suggest that the script is downloaded first, looked over and then run.
I recently came across a Github hosted repo with scripts that changed Debian repos to point somewhere else and install ... software. I'm sure that's all fine too.
curl | bash is cute and easy and very, very insecure.
Note the quite professional looking README.md and think about the audience for this thing - kittens hitting the search bong and trying to get something very complicated working.
Read the scripts: they are pretty short and could put your hypervisor in the hands of someone else who may not be too friendly.
Now pip has the same problem except you don't normally go in with a web browser first.
I raised an issue to at least provide a hint to casual browsers and also raised it with the github AI bottie complaint thang which doesn't care about you, me or anything else for that matter.
1) Subscribe to the GitHub repo for tag/release updates.
2) When I get a notification of a new version, I run a shell function (meup-uv and meup-ruff) which grabs the latest tag via a GET request and runs an install. I don't remember the semantics off the top of my head, but it's something like:
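(A sketch; the GitHub API call and cargo flags are approximations.)

    meup-uv() {
        local tag
        tag=$(curl -s https://api.github.com/repos/astral-sh/uv/releases/latest | jq -r .tag_name)
        cargo install --git https://github.com/astral-sh/uv --tag "$tag" --locked uv
    }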
Of course this implies I'm willing to wait the ~5-10 minutes for these apps to compile, along with the storage costs of the registry and source caches. Build times for ruff aren't terrible, but uv is a straight up "kick off and take a coffee break" experience on my system (it gets 6-8 threads out of my 12 total depending on my mood).
> For me there used to be a clear delineation between scripting languages and compiled languages. Python has always seemed to want to be both and I'm not too sure it can really. I can live with being mildly wrong about a concept.
Eh. There's a lot of space in the middle to "well actually" about, but Python really doesn't behave like a "compiled" language. The more important question is: what do you ship to people, and how easily can they use it? Lots of people in this thread are bigging up Go's answer of "you ship a thing which can run immediately with no dependencies". For users that solves so many problems.
Quite a few python usecases would benefit from being able to "compile" applications in the same sense. There are py-to-exe solutions but they're not popular or widely used.
It works far more of the time than people give it credit for. There are a lot of good XKCDs, but that one is by far the worst one ever made, as far as being a damaging meme goes.
If you want to manage envs manually and you're using conda, you can activate the env in a shell wrapper for your Python script, like so:
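(A sketch; the env and script names are placeholders.)

    #!/usr/bin/env bash
    # activate the conda env, then hand off to the real script
    source "$(conda info --base)/etc/profile.d/conda.sh"
    conda activate my-env
    exec python /path/to/my_script.py "$@"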
If momentum for uv in the community continues, I’d love to see it distributed more broadly. uv can already be installed easily on macOS via Homebrew (like pyenv). uv can also be installed on Windows via WinGet (unlike pyenv). It would be nice to see it packaged for Linux as well.
This seems timely. `uv` is a complete revelation for me and has made working with Python extremely convenient... the Python "just works" era has arrived.
I'm not a Python master but I've struggled with all the previous package managers, and uv is the first tool that does everything easily (whether it's installing or generating packages or formatting or checking your code).
I don't know why there is such a flurry of posts since it's a tool that is more than a year old, but it's the one and only CLI tool that I recommend when Python is needed for local builds or on a CI.
Hatch was a good contender at the time but they didn't move fast enough, and the uv/ruff team ate everybody's lunch. uv is really good and IMHO it's here to stay.
Anyway, try it for yourself. It's not a high-level tool that hides everything; it's fast and powerful and yet you stay in control. It feels like a first-party tool that could be included in the Python installer.
I started doing Python before 2.0 launched. I understand perfectly where you’re coming from.
The answer is an unequivocal yes in this case. uv is on a fast track to become the de facto standard, relegating pip to the ‘reference implementation’ tier.
I also went through a similar enlightenment of just sticking to pip, but uv convinced me to switch and I’m so glad I did. You can dip your toe in by just using the ‘uv pip’ subcommand as a drop-in replacement for pip, but way faster.
Yes, IMO it does. I wrote my first lines of Python 16 years ago and have worked with raw pip & venv, PDM and Poetry. None of those solutions come close to how easy it is to use (and migrate to) uv. Just give it a try for half an hour, you likely won't want to use anything else after that.
I’m a moron when it comes to python tooling but switching a project to uv was a pleasant experience. It seems well thought out and the speed is genuinely a feature compared to other python tooling I’ve used.
A lot of people like all-in-one tools, and uv offers an opinionated approach that works. It's essentially the last serious attempt at this since Poetry, except that uv is also supporting a variety of new Python packaging standards up front (most notably https://peps.python.org/pep-0621/ , which Poetry lagged on for years - see https://github.com/python-poetry/roadmap/issues/3 ) and seems committed to keeping on top of new ones.
How much you can benefit depends on your use case. uv is a developer tool that also manages installations of Python itself (and maintains separate environments for which you can choose a Python version). If you're just trying to install someone else's application from PyPI - say https://pypi.org/project/pycowsay/ as an example - you'll likely have just as smooth of an experience via pipx (although installation will be even slower than with pip, since it's using pip behind the scenes and adding its own steps). On the other hand, to my understanding, to use uv as a developer you'll still need to choose and install a build backend such as Flit or Hatchling, or else rely on the default Setuptools.
One major reason developers are switching to uv is lockfile support. It's worth noting here that an interoperable standard for lockfiles was recently approved (https://peps.python.org/pep-0751/), uv will be moving towards it, and other tools like pip are moving towards supporting it (the current pip can write such lockfiles, and installing from them is on the roadmap: https://github.com/pypa/pip/issues/13334).
If you, like me, prefer to follow the UNIX philosophy, a complete developer toolchain in 2025 looks like:
* Ability to create virtual environments (the standard library takes care of this; some niche uses are helped out by https://virtualenv.pypa.io/)
* Package installer (Pip can handle this) and manager (if you really want something to "manage" packages by installing into an environment and simultaneously updating your pyproject.toml, or things like that; but just fixing the existing environment is completely viable, and installers already resolve dependencies for whatever it is they're currently installing)
* Build backend (many options here - by design! but installers will assume Setuptools by default, since the standard requires them to, for backwards compatibility reasons)
* Some version of Python (the one provided with a typical Linux distribution will generally work just fine; Windows users should usually just install the current version, with the official installer, unless they know something they want to install isn't compatible)
* Ability to create virtual environments and also install packages into them (https://pipx.pypa.io/stable/ takes care of both of these, as long as the package is an "application" with a defined entry point; I'm making https://github.com/zahlman/paper which will lift that restriction, for people who want to `import` code but not necessarily publish their own project)
* Ability to actually run the installed code (pipx handles this by symlinking from a standard application path to a wrapper script inside the virtual environment; the wrappers specify the absolute path to the virtual environment's Python, which is generally all that's needed to "use" that virtual environment for the program. It also provides a wrapper to run Pip within a specific environment that it created. PAPER will offer something a bit more sophisticated here, for both aspects.)
It is difficult to use Python for utility scripts on the average Linux machine. Deploying Python projects almost require using a container. Popular distros try managing Python packages through the standard package manager rather than pip but not all packages are readily available. Sometimes you're limited by Python version and it can be non-trivial to have multiple versions installed at once. Python packaging has become a shit show.
If you use anything outside the standard library the only reliable way to run a script is installing it in a virtual environment. Doing that manually is a hassle and pyenv can be stupidly slow and wastes disk space.
With uv it's fast and easy to set up throw away venvs or run utility scripts with their dependencies easily. With the PEP-723 scheme in the linked article running a utility script is even easier since its dependencies are self-declared and a virtual environment is automatically managed. It makes using Python for system scripting/utilities practical and helps deploy larger projects.
> Deploying Python projects almost require using a container.
Really? `apt install pipx; pipx install sphinx` (for example) worked flawlessly for me. Pipx is really just an opinionated wrapper that invokes a vendored copy of Pip and the standard library `venv`.
The rest of your post seems to acknowledge that virtual environments generally work just fine. (Uv works by creating them.)
> Sometimes you're limited by Python version and it can be non-trivial to have multiple versions installed at once.
I built them from source and make virtual environments off of them, and pass the `--python` argument to Pipx.
> If you use anything outside the standard library the only reliable way to run a script is installing it in a virtual environment. Doing that manually is a hassle and pyenv can be stupidly slow and wastes disk space.
If you're letting it install separate copies of Python, sure. (The main use case for pyenv is getting one separate copy of each Python version you need, if you don't want to build from source, and then managing virtual environments based off of that.) If you're letting it bootstrap Pip into the virtual environment, sure. But you don't need to do either of those things. Pip can install cross-environment since 22.3 (Pipx relies on this).
Uv does save disk space, especially if you have multiple virtual environments that use the same packages, by hard-linking them.
> With uv it's fast and easy to set up throw away venvs or run utility scripts with their dependencies easily. With the PEP-723 scheme in the linked article running a utility script is even easier since its dependencies are self-declared and a virtual environment is automatically managed.
Pipx implements PEP 723, which was written to be an ecosystem-wide standard.
Firstly, I have been an HN viewer for a long time, and this PEP-style Python script thing is the one topic that always gets to the top of the Hacker News leaderboard as each person discovers it for themselves.
I don't mean to discredit the author. His work was simple and clear to understand. I am just sharing my thesis that if someone wants karma on Hacker News for whatever reason, this might be the best topic. (Please don't pitchfork me, since I don't mean offense to the author.)
Also, can anybody please explain how to create that PEP metadata with uv from just a Python script and nothing else? Like some command which can take a Python script and add the PEP block to it. I am pretty sure that uv has a flag for this, but I feel the author might have missed this feature. When coding one-off scripts in Python using AI (Gemini), it had some trouble with the PEP format, so I always had to paste uv's documentation. If anybody knows an easier way to create the metadata using the CLI, please tell me! Thanks in advance!!
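(For what it's worth, the feature being hinted at is presumably uv's script flags, something like:)

    uv init --script example.py --python 3.13   # create the script with an empty metadata block
    uv add --script example.py requests rich    # write the dependencies into the block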
Thanks a lot, friend. But one of the issues with this is that I need to know the package names, and sometimes they can be different; I had actually created a CLI tool called uvman which wanted to automate that part too.
But my tool was really finicky, and I guess it was built by AI, so um, yeah. I guess you all can try it; it's on PyPI. I think it has a lot of niche cases where it doesn't work. Maybe someone can modify it to make it better, as I built it 3-4 months ago if I remember correctly and I have completely forgotten how things worked in uv.
Last time I looked at switching from poetry to uv I had an issue with pinning certain dependencies to always install from a private PyPI repository. Is there a way to do that now?
(also: possible there's always been a way and I'm an idiot)
Some years ago I thought it would be interesting to develop a tool to make a python script automatically install its own dependencies (like uvx in the article), but without requiring any other external tool, except python itself, to be installed.
The downside is that there are a bunch of seemingly weird lines you have to paste at the beginning of the script :D
Grace Hopper technology: A well formed Python program shall define an ENVIRONMENT division that specifies the environment in which the program will be compiled and executed. It outlines the hardware and software dependencies. This division is crucial for making COBOL^H^H^H^H^HPython programs portable across different systems.
Each environment itself only takes a few dozen kilobytes to make some folders and symlinks (at least on Linux). People think of Python virtual environments as bloated (and slow to create) because Pip gets bootstrapped into them by default. But there is no requirement to do so.
The packages take up however much space they take up; the cost there is unavoidable. Uv hard-links packages into the separate environments from its cache, so you only pay a disk-space cost for shared packages once (plus a few more kilobytes for more folders).
(Note: none of this depends on being written in Rust, but Pip doesn't implement this caching strategy. Pip can, however, install cross-environment since 22.3, so you don't actually need the bootstrap. Pipx depends on this, managing its own vendored copy of Pip to install into multiple environments. But it's still using a copy of Pip that interacts with a Pip-styled cache, so it still can't do the hard-link trick.)
Yes, it creates a separate environment for each script. No, it doesn’t create a lot of bloat. There’s a separate cache and the packages are hard-linked into the environments, so it’s extremely fast and efficient.
The venv is created and then discarded once the script finishes execution. This is well suited to one-off scripts like what is demonstrated in the article.
In a larger project you can manage venvs like this using `uv venv`, where you end up with a familiar .venv folder.
uv actually reuses environments when dependencies match, creating a content-addressed store that significantly reduces disk usage compared to traditional per-script virtualenvs.
Using Guix (guix shell) it was already possible to run Python scripts one-off. I see others have also commented about doing it using Nix.
Also that would be reproducible, in contrast to what is shown in the blog post. To make that reproducible, one would have to keep the lock file somewhere, or state the checksums directly in the Python script file, which seems rather un-fun.
I like uv run and uvx like the swiss army knifes of python that they are, but PEP 723 stuff I think is mostly just a gimmick. I'm not convinced it's more than a cool trick.
It's useful for people who don't want to create a "project" or otherwise think about the "ecosystem". People who, if they share their code at all, will email it to coworkers or something. It lets you get by without a pyproject.toml file etc.
Probably not. NPM has its problems but Python packaging has always been significantly messier (partly because, Python is much older than Node and, indeed, much older than the very concept of resolving dependencies over the internet).
The upside in Python is that dependencies tend to be more coarse grained and things break less when you update. With JS you have to be on the treadmill constantly to avoid bitrot, and because packages tend to be so small and dependency trees so large, there's a lot of potential points of failure when updating anything.
The bigger problem in Python has been its slowness and reliance on C dependencies.
Maven solved Java packaging circa 2005, for example. Yes, XML is verbose, but it's an implementation detail. Python still lags on many fronts, 20 years later.
An example: even now it makes zero sense to me why virtual envs are not designed to be portable between machines with the same architecture (!). Or why venvs need to be activated with shell-variety-specific code.
None of this example has anything to do with performance or reliance on C dependencies, but ok.
> even now it makes 0 sense to me why virtual envs are not designed and supposed to be portable between machines with the same architecture (!).
They aren't designed to be relocatable at all - and that's the only actual stumbling block to it. (They may even contain activation scripts for other platforms!)
That's because a bunch of stuff in there specifies absolute paths. In particular, installers (Pip, at least) will generate wrapper scripts that specify absolute paths. This is so that you can copy them out of the environment and have them work. Yes, people really do use that workflow (especially on Windows, where symlinking isn't straightforward).
It absolutely could be made to work - probably fairly easily, and there have been calls to sacrifice that workflow to make it work. It's also entirely possible to do a bit of surgery on a relocated venv and make it work again. I've done it a few times.
The third-party `virtualenv` also offers some support for this. Their documentation says there are some issues with this. I'm pretty sure they're mainly talking about that wrapper-script-copying use case.
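The surgery is roughly this kind of thing (a sketch assuming GNU tools; paths are illustrative):

    # after relocating a venv from /old/path to /new/path:
    # 1. fix the absolute shebangs pip wrote into the entry-point scripts
    grep -rlZ '/old/path/.venv' /new/path/.venv/bin | xargs -0 sed -i 's|/old/path/.venv|/new/path/.venv|g'
    # 2. check that `home = ...` in pyvenv.cfg still points at a real base interpreter
    # 3. check that bin/python and bin/python3 are symlinks to an interpreter that exists here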
> Or why venvs need to be activated with shell-variety specific code.
The activation sets environment variables for the current shell. That isn't possible (at least in a cross-platform way) from Python since the Python process would be a child of that shell. (This is also why you have to e.g. use `source` explicitly to run the Linux versions.)
But venvs generally don't need to be activated at all. The only things the activation script effectively does:
* Set the path environment variable so that the virtual environment's Python (or symlink thereto) will be found first.
* Put some fancy stuff in the prompt so that you can feel like you're "in" the virtual environment (a luxury, not at all required).
* Set `VIRTUAL_ENV`, which some Python code might care about (but they could equally well check things like `sys.executable`)
* Unset (and remember) `PYTHONHOME` (which is a hack that hardly anyone has a good use case for anyway)
* (on some systems that don't have a separate explicit deactivate script) set up the means to undo all those changes
The actually important thing is the path variable change, and even then you don't need that unless the code is going to e.g. start a Python subprocess and ask the system to find Python. (Or, much more commonly, because you have a `#!/usr/bin/env python` shebang somewhere.) You can just run the virtual environment's Python directly.
In particular, you don't have to activate the virtual environment in order to use its wrapper scripts, as long as you can find them. And, in fact, Pipx depends on this.
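Concretely:

    python3 -m venv .venv
    .venv/bin/pip install requests
    .venv/bin/python script.py    # runs with the venv's packages, no activation needed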
> None of this example has anything to do with performance or reliance on C dependencies, but ok.
<C dependencies>
You'd realize why I wrote that if you used Java/Maven. Java is by and large self-contained. Stuff like Python, Ruby, PHP, Javascript[1] etc, are not, they depend on system libraries.
So when you install something on Solaris, FreeBSD, MacOS, Windows, well, then you have to deal with the whole mess.
1. Is the C dependency installed on the system?
2. Is it the correct major version or minor version?
3. Has it been compiled with the correct flags or whatnot?
4. If it's not on the system, can the programming language specific package manager pull a binary from a repo for my OS-arch combo?
5. If there's no binary, can the programming language specific package manager pull the sources and compile them for my OS-arch combo?
All of those steps can and do fail, take time, and sometimes you have to handle them yourself because of bugs.
Java is fast enough that almost everything can be written in Java, so 99% of the libraries you use only have 1 artifact: the universal jar, and that's available in Maven repos. No faffing around with wheels or whatnot, or worse, with actual system dependencies that are implicitly (or explicitly) required by dependencies written in the higher level programming language.
<Virtual envs>
I won't even bother to address in detail the insanity that you described about virtual envs, it's just Stockholm syndrome. Almost every other programming language does just fine without venvs. Also I don't really buy that issue with the lack of portability; it's just a bunch of bad design decisions made early on in Python's history. Even for Windows there are better possibilities (symlinks are feasible, you just need admin access).
I say this as someone who's used basically all mainstream programming languages over the years.
The sane way to do virtual envs is to have them be just... folders. No absolute paths, no activation, just have a folder and a configuration file. The launcher automatically detects a configuration file with the default name and the configuration file in turn points the launcher and Python to use the stuff in the folder.
Deployment then becomes... drumroll just copying the folder to another machine (usually zipping/unzipping it first).
* * *
[1] Javascript is a bit of a special case but it's still slower than Java, on average.
> Stuff like Python, Ruby, PHP, Javascript[1] etc, are not, they depend on system libraries.
The Python runtime itself may depend on system libraries.
Python packages usually include their own bundled compiled code (either directly in the wheel, or else building an sdist will build complete libraries that end up in site-packages rather than just wrappers). A wheel for NumPy will deliver Python code that uses an included copy of OpenBLAS, even if your system already had a BLAS implementation installed.
Regardless, that has no bearing on how virtual environments work.
> I won't even bother to address in detail the insanity that you described about virtual envs, it's just Stockholm syndrome. Almost every other programming language does just fine without venvs.
You say this like you think there's something difficult or onerous about using virtual environments. There really isn't.
> Also I don't really buy that issue with the lack of portability
The use of absolute paths is the only thing preventing you from relocating venvs, including moving them to another machine on the same architecture. I know because I have done the "surgery" to relocate them.
They really do have the reason for using absolute paths that I cited. Here's someone from the Pip team saying as much a few days ago: https://discuss.python.org/t/_/96177/3 (Using a relative path in the wrapper script would of course also require a little more work to make it relative to the script rather than to the current working directory. It's worse for the activation script; I don't even know if that can be done in pure sh when the script is being sourced.)
Yes, it's different between Linux and Windows, because installers will create actual .exe wrappers (stub executables that read their own file name and then `CreateProcess` a Python process) instead of Python wrapper scripts with a shebang. They do this explicitly because of the UX that Windows users expect.
> Even for Windows there are better possibilities (symlinks are feasible, you just need admin access).
Please go survey all the people you know who write code on Windows and see how many of them have ever heard of a symlink or understand what they are. But also, giving admin rights to these Python tools all the time is annoying, and bad security practice (since they don't actually need to modify any system files).
> The sane way to do virtual envs is to have them be just... folders. No absolute paths, no activation, just have a folder and a configuration file.
A virtual environment is just a folder (hierarchy) and a configuration file. Like I said, activation is not required to use them. And the fact that certain standard tools create and expect absolute paths is not essential to what a virtual environment is. If you go in and replace the absolute paths with relative paths, you still have a virtual environment, and it still works as you'd expect - minus the tradeoffs inherent to relative paths. And like I said, there is third-party tooling that will do this for you.
Oh, I guess you object because there are actual symlinks (or copies by default on Windows) to Python in there. That's the neat thing: because you start Python from that path, you don't need a launcher. (And you also, again, don't need to activate. But you can if you want, for a UX that some prefer.) Unless by "launcher" you meant the wrapper script? Those are written to invoke the venv's Python - again, you don't need to activate. Which is why this can work:
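A sketch of what that looks like in practice (black is just an example of a tool installed into the venv):

```
# the wrapper's shebang points at the venv's own Python, so this works from anywhere,
# with nothing activated
~/myproject/.venv/bin/black --check src/
```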
Again, yes, the wrapper scripts would have to be a little more complex in order to make relative paths work reliably - but that's a Pip issue, not a venv issue.
> But also, giving admin rights to these Python tools all the time is annoying, and bad security practice (since they don't actually need to modify any system files).
You only need to do it once, when the symlink is created...
My point is that venvs are a code smell. Again, there's a reason basically no other programming language ecosystem needs them. They're there because in Python's history, that was the best idea they had 20 years ago (or whenever they were created), when they didn't even bother to do their homework and see what other ecosystems did to solve that specific problem.
> You say this like you think there's something difficult or onerous about using virtual environments. There really isn't.
They're a bad, leaky abstraction. They provide value through a mechanism that is cumbersome and doesn't even work well compared to basically all competing solutions used outside of Python.
* * *
Anyway, I've used Python for long enough and I've seen many variations of your argument often enough that I'm just... bored. Python packaging is a cluster** and it has sooo many braindead ideas (like setup.py having arbitrary code in it :-| ) that yes, with a ton of work invested in it, if you squint enough, it basically works.
But that doesn't excuse the ton of bad design around it.
> Again, there's a reason basically no other programming language ecosystem needs them.
You have failed to explain how they are meaningfully different from what other programming language ecosystems use to create isolated environments, such that I should actually care about the difference.
Again: activating the venv is not in any way necessary to use it. The only relevant components are some empty folders, `pyvenv.cfg` and a symlink to the Python executable (Windows' nonsense notwithstanding).
> 20 years ago (or whenever they were created), where they didn't even bother to do their homework and see what other ecosystems did to solve that specific problem.
Feel free to state your knowledge of what other ecosystems did to solve those problems at the time, and explain what is substantively different from what a venv does and why it's better to do it that way.
No, a .jar is not comparable, because either it vendors its dependencies or you have to explain what classpath to use. And good luck if you want to have multiple versions of Java installed and have the .jar use the right one.
> They're a bad, leaky abstraction. They provide value through a mechanism that is cumbersome and doesn't even work well compared to basically all competing solutions used outside of Python.
You have failed to demonstrate this, and it does not match my experience.
> it has sooo many braindead ideas (like setup.py having arbitrary code in it :-| )
You know that setup.py is not required for packaging pure-Python projects, and hasn't been for many years? And that it only appears in a source distribution and not in wheels? And that in other ecosystems where packages contain arbitrary code in arbitrary foreign languages that needs to be built on the end user's machine, the package also includes arbitrary code that gets run at install time in order to orchestrate that build process? (For that matter, Linux system packages do this; see e.g. https://askubuntu.com/questions/62534 .)
Yes, using arbitrary code to specify metadata was braindead. Which is why pyproject.toml exists. And do keep in mind that the old way was conceived of in a fundamentally different era.
> And do keep in mind that the old way was conceived of in a fundamentally different era.
Maven first appeared in 2004 (and took the Java world by storm, it was widely adopted within a few years). Not studying prior art seems to happen a lot in our field.
> Feel free to state your knowledge of what other ecosystems did to solve those problems at the time, and explain what is substantively different from what a venv does and why it's better to do it that way.
Maven leverages the Java CLASSPATH to avoid them entirely.
There is a single, per-user, shared repository.
So every dependency is stored only once.
The local repository is actually more or less an incomplete clone of the remote repository, which makes the remote repository really easy to navigate with basic tools (the remote repo can be a plain Apache web server hosted anywhere, basically).
The repository is name spaced and things are very neatly grouped up by multiple levels (groupId, artifactId, version, type).
When you build or run something through Maven, you need:
1. The correct JAVA in your PATH.
2. A pom.xml config file in your folder (yes, Maven is THAT old, from back when XML was cool).
That's it.
You don't need to activate anything, ever.
You don't need to care about locations of whatamajigs in project folders or temp folders or whatever.
You don't need symlinks.
One of the many Maven packaging plugins spits out the correct package format you need for your platform.
Maven does The Right Thing™, composing the correct CLASSPATH for your specific project/folder.
There is NO concept of a "virtual env", because ALL envs, by default, are "virtual". They're all compartmentalized by default. Nobody's stepping on anyone else's toes.
You take that plus Java's speed, so no need for slightly faster native dependencies (except in very rare cases), and installing or building a Maven project you've never seen in your life is trivial (unless the authors went to great lengths to avoid that for some weird reason).
Now THAT's design.
Python is a hodge-podge of a programming language ecosystem with a brilliant, beginner-friendly language syntax (most languages in the future will basically look like Python's pseudo-pseudocode), and it's only slowly starting to look like something that was actually designed as a programming ecosystem. Similar story to Javascript, actually.
Anyway, this was more of a rant. I know Python is fixing these sins of its infancy.
I'm happy it's doing that because it's making my life a bit more liveable.
I don't think running with uv vs uvx imposes any extra limitations on how you specify dependencies. Either way you should be able to reference dependencies not just from PyPI but also by git repo or local file path in a [tool.uv.sources] table, the same as you would in a pyproject.toml file.
Ok I didn’t know about this pep. But I love uv. I use it all day long. Going to use this to change up a lot of my shell scripts into easily runnable Python!
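For anyone who hasn't tried it yet, the shape of it is roughly this (the script name and dependency are just illustrative):

```
cat > fetch_status.py <<'EOF'
# /// script
# requires-python = ">=3.12"
# dependencies = ["requests<3"]
# ///
import requests

print(requests.get("https://example.com").status_code)
EOF

uv run fetch_status.py   # uv resolves requests into a cached environment and runs the script
```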
There's no lockfile or anything with this approach right? So in a year or two all of these scripts will be broken because people didn't pin their dependencies?
> So in a year or two all of these scripts will be broken because people didn't pin their dependencies?
People act like this happens all the time but in practice I haven't seen evidence that it's a serious problem. The Python ecosystem is not the JavaScript ecosystem.
I think it's because you don't maintain much python code, or use many third party libraries.
An easy way to prove that this is the norm is to take some existing code you have now, update its dependencies to their latest versions, and watch everything break. You don't see a problem because those dependencies pin or heavily restrict the versions they depend on, which hides the frequency of the problem from you. You'll also see that, in their issue trackers, they've closed all sorts of version-related bugs.
Are you sure you’re reading what I wrote fully? Getting pip, or any of them, to ignore all version requirements, including those listed by the dependencies themselves, required modifying source, last I tried.
I’ve had to modify code this week due to changes in some popular libraries. A couple of recent examples: NumPy 2.0 broke most code that used numpy. They changed the C side (full interpreter crashes with trimesh) and removed/moved common functions, like array.ptp(). SciPy moved a bunch of stuff lately, and fully removed some image-related things.
If you think python libraries are somehow stable in time, you just don’t use many.
... So if the installer isn't going to ignore the version requirements, and thereby install an unsupported package that causes a breakage, then there isn't a problem with "scripts being broken because people didn't pin their dependencies". The packages listed in the PEP 723 metadata get installed by an installer, which resolves the listed (unpinned) dependencies to concrete ones (including transitive dependencies), following rules specified by the packages.
I thought we were talking about situations in which following those rules still leads to a runtime fault. Which is certainly possible, but in my experience a highly overstated risk. Packages that say they will work with `foolib >= 3` will very often continue to work with foolib 4.0, and the risk that they don't is commonly-in-the-Python-world considered worth it to avoid other problems caused by specifying `foolib >=3, <4` (as described in e.g. https://iscinumpy.dev/post/bound-version-constraints/ ).
The real problem is that there isn't a good way (from the perspective of the intermediate dependency's maintainer) to update the metadata after you find out that a new version of a (further-on) dependency is incompatible. You can really only upload a new patch version (or one with a post-release segment in the version number) and hope that people haven't pinned their dependencies so strictly as to exclude the fix. (Although they shouldn't be doing that unless they also pin transitive dependencies!)
That said, the end user can add constraints to Pip's dependency resolution by just creating a constraints file and specifying it on the command line. (This was suggested as a workaround when Setuptools caused a bunch of legacy dependencies to explode - not really the same situation, though, because that's a build-time dependency for some packages that were only made available as sdists, even pure-Python ones. Ideally everyone would follow modern practice as described at https://pradyunsg.me/blog/2022/12/31/wheels-are-faster-pure-... , but sometimes the maintainers are entirely MIA.)
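For instance, reusing the hypothetical foolib from above:

```
# a constraints file caps versions without adding anything to the install set
echo "foolib<4" > constraints.txt
pip install -r requirements.txt -c constraints.txt
```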
> Numpy 2.0 is a very recent example that broke most code that used numpy.
This is fair to note, although I haven't seen anything like a source that would objectively establish the "most" part. The ABI changes in particular are only relevant for packages that were building their own C or Fortran code against Numpy.
> `foolib >= 3` will very often continue to work with foolib 4.0,
Absolute nonsense. It's industry standard that major versions are widely accepted as/reserved for breaking changes. This is why you never see >= in any sane requirements list; you see `foolib == 3.*`. For anything you want to work for a reasonable amount of time, you see `== 3.4.*`, because deprecations often still happen within major versions, breaking all code that used those functions.
Breaking changes don't break everyone. For many projects, only a small fraction of users are broken any given time. Firefox is on version 139 (similarly Chrome and other web browsers); how many times have you had to reinstall your plugins and extensions?
For that matter, have you seen any Python unit tests written before the Pytest 8 release that were broken by it? I think even ones that I wrote in the 6.x era would still run.
For that matter, the Python 3.x bytecode changes with every minor revision and things get removed from the standard library following a deprecation schedule, etc., and there's a tendency in the ecosystem to drop support for EOL Python versions, just to not have to think about it - but tons of (non-async) new code would likely work as far back as 3.6. It's not hard to avoid the := operator or the match statement (f-strings are definitely more endemic than that).
> For the longest time, I have been frustrated with Python because I couldn’t use it for one-off scripts.
Bruh, one-off scripts is the whole point of Python. The cheat code is to add "break-system-packages = true" to ~/.config/pip/pip.conf. Just blow up ~/.local/lib/pythonX.Y/site-packages/ if you run into a package conflict (exceedingly rare) and reinstall. All these venv, uv, metadata peps, and whatnot are pointless complications you just don't need.
I'm not a python dev, but if you read HN even semi-regularly you have surely come across it several times in at least the past few months if not a year by now. It is all the rage these days in python world it seems.
And so, if you are the kind of person who has not heard of it, you probably don't read blogs about python, therefore you probably aren't reading _this_ blog. No harm no foul.
> uv is an extremely fast Python package and project manager, written in Rust.
Is there a version of uv written in Python? It's weird (to me) to have an entire ecosystem for a language and a highly recommended tool to make your system work is written in another language.
Similar to ruff, uv mostly gathers ideas from other tools (with strong opinions and a handful of thoughtful additions and adjustments) and implements them in Rust for speed improvements.
Interestingly, the speed is the main differentiator from existing package and project management tools. Even if you are using it as a drop-in replacement for pip, it is just so much faster.
There are many competing tools in the space, depending on how you define the project requirements.
Contrary to the implication of other replies, the lion's share of uv's speed advantage over Pip does not come from being written in Rust, from any of the evidence available to me. It comes from:
* not bootstrapping Pip into each new environment (if you make a new environment the standard way, you might not know that you don't actually have to bootstrap Pip into it; see https://zahlman.github.io/posts/2025/01/07/python-packaging-... for some hints; my upcoming post will be more direct about it - unfortunately I've been putting it off...)
* being designed up front to install cross-environment (if you want to do this with Pip, you'll eventually and with much frustration get a subtly broken installation using the old techniques; since 22.3 you can just use the `--python` flag, but this limits you to environments where the current Pip can run, and re-launches a new Pip process taking perhaps an additional 200ms - but this is still much better than bootstrapping another copy of Pip!)
* using heuristics when solving for dependencies (Pip's backtracking resolver is exhaustive, and proceeds quite stubbornly in order)
* having a smarter caching strategy (it stores uncompressed wheels in its cache and does most of the "installation" by hard-linking these into the new environment; Pip goes through a proxy that uses some opaque cache files to simulate re-doing the download, then unpacks the wheel again)
* not speculatively pre-loading a bunch of its own code that's unlikely to execute (Pip has large complex dependencies, like https://pypi.org/project/rich/, which it vendors without tree-shaking and ultimately imports almost all of, despite using only a tiny portion)
* having faster default behaviours; e.g. uv defaults to not pre-compiling installed packages to .pyc files (since Python will do this on the first import anyway) while Pip defaults to doing so
* not (necessarily) being weighed down by support for legacy behaviours (packaging worked radically differently when Pip first became publicly available)
* just generally being better architected
None of these changes require a change in programming language. (For example, if you use Python to make a hard link, you just use the standard library, which will then use code written in C to make a system call that was most likely also written in C.) Which is why I'm making https://github.com/zahlman/paper .
But also, because it's written in rust. There are tools written in python that do these smart caching and resolving tricks as well, and they are still orders of magnitude slower
Poetry doesn't do this caching trick. It creates its own cache with the same sort of structure as Pip's, and as far as I can tell it uses its own reimplementation of Pip's core installation logic from there (including `installer`, which is a factored-out package for the part of Pip that actually unpacks the wheel and copies files).
uv wins precisely because it isn't written in python. As various people have pointed out, it can complete its run before competing python implementations have finished handling their imports.
Besides, the most important tool for making python work, the python executable itself, is written in C. People occasionally forget it's not a self-hosting language.
Well, I use Debian and Bash: pretty much everything to make my system work, including and especially Python development, is written in C, another language!
finally feels like Python scripts can Just Work™ without a virtualenv scavenger hunt.
Now if only someone could do the same for shell scripts. Packaging, dependency management, and reproducibility in shell land are still stuck in the Stone Ages. Right now it’s still curl | bash and hope for the best, or a README with 12 manual steps and three missing dependencies.
Sure, there’s Nix... if you’ve already transcended time, space, and the Nix manual. Docker? Great, if downloading a Linux distro to run sed sounds reasonable.
There’s got to be a middle ground simple, declarative, and built for humans.
Nix is overkill for any of the things it can do. Writing a simple portable script is no exception.
But: it’s the same skill set for every one of those things. This is why it’s an investment worth making IMO. If you’re only going to ever use it for one single thing, it’s not worth it. But once you’ve learned it you’ll be able to leverage it everywhere.
Python scripts with or without dependencies, uv or no uv (through the excellent uv2nix which I can’t plug enough, no affiliation), bash scripts with any dependencies you want, etc. suddenly it’s your choice and you can actually choose the right tool for the job.
Not trying to derail the thread but it feels germane in this context. All these little packaging problems go away with Nix, and are replaced by one single giant problem XD
> Nix is overkill for any of the things it can do. Writing a simple portable script is no exception.
ChatGPT writes pretty good nix now. You can simply paste any errors in and it will fix it.
I don't think nix is that hard for this particular use case. Installing nix on other distros is pretty easy, and once it's installed you just do something like this
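(a sketch of the usual nix-shell shebang; the two packages are just whatever the script happens to need)

```
#!/usr/bin/env nix-shell
#! nix-shell -i bash -p imagemagick cowsay
# everything below runs with imagemagick and cowsay on PATH, fetched on demand
convert input.png -resize 50% output.png
cowsay "done"
```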
Sure all of nixos and packaging for nix is a challenge, but just using it for a shell script is not too bad.
Last time I checked,[0] this works great - as long as you don't particularly care which specific versions of imagemagick or cowsay you want.
If you do care, then welcome to learning about niv, flakes, etc.
[0]: admittedly 3 years ago or so.
This is a hack but I still found it helpful. If you do want to force a certain version, without worrying about flakes [1] this can be your bash shebang, with similar for nix configuration.nix or nix-shell interactive. It just tells nix to use a specific git hash for it's base instead of whatever your normal channel is.
For my use case, most things I don't mind tracking mainline, but some things I want to fix (chromium is very large, python changes a lot, or some version broke things)
``` #! nix-shell -i bash -p "cowsay" '(import (fetchTarball { url="https://github.com/NixOS/nixpkgs/archive/eb090f7b923b1226e8b... sha256 = "15iglsr7h3s435a04313xddah8vds815i9lajcc923s4yl54aj4j";}) {}).python3' ```
[1] flakes really aren't bad either, especially if you think about it as just doing above, but automatically
I will say this with a whole heart. My arch linux broke and I wanted to try out nix.
The most shocking part about nix is the nix-shell (I know I can use it in other distros but hear me out once), its literally so cool to install projects for one off.
Want to record a desktop? Its one of those tasks that for me I do just quite infrequently and I don't like how in arch, I had to update my system with obs as a dependency always or I had to uninstall it. Ephemerality was a concept that I was looking for before nix since I always like to try out new software/keep my home system kind of minimalist-ish Cool. nix-shell -p obs-studio & obs and you got this.
honestly, I like a lot of things about nix tbh. I still haven't gone too much into the flake sides of things and just use it imperatively sadly but I found out that nix builds are sandboxed so I found a unique idea of using it as a sandbox to run code on reddit and I think I am going to do something cool with it. (building something like codapi , codapi's creator is kinda cool if you are reading this mate, I'd love talking to ya)
And I personally almost feel as if some software could truly be made plug n play (like imagine hetzner having nix os machines (currently I have heard that its support is finnicky) but then somehow a way to get hetzner nix os machines and then I almost feel as if we can get something really really close to digital ocean droplets/ plug n play without any isolation that docker provides because I guess docker has its own usecases but I almost feel as if managing docker stuff is kinda harder than nix stuff but feel free to correct me as I am just saying what I am feelin using nix.
I also wish there were something like a functional Lua (does functional Lua exist??) -> Nix transpiler, because I'd like to write Lua instead of Nix to manage my system, but I guess nix is fine too!
Hi there, Since you mentioned Hetzner, I thought I would respond here. While we do not have NixOS as one of our standard images for our cloud products, it is part of our ISO library. Customers can install it manually. To do this, create a cloud server, click on it, and then on the "ISO" in the menu, and then look for it listed alphabetically. --Katie
Hey hetzner. I am just a 16 year old boy (technically I am turning 17 on 2nd july haha but I want nothing from ya haha) who has heard great things about your service while being affordable but never have tried them because I guess I just don't have a credit card/I guess I am a really frugal person at this moment haha. I was just reading one of your own documents if I feel correct and it said that the support isn't the best(but I guess I was wrong)
I guess I will try out nix on hetzner for sure one day. This is really cool!!! Thanks! I didn't expect you to respond. This is really really cool. You made my day to whoever responded with this. THANKS A LOT KATIE. LOTS OF LOVE TO HETZNER. MAY YOU BE THE WAY YOU ARE, SINCE Y'ALL ARE PERFECT.
Hi again, I'm happy that I made your day! You seem pretty easy to please if that is all it takes. Keep in mind that customers must be 18 years old. I believe that is a legal requirement here in Germany, where we are based. Until then, if you're a fan, maybe you'd enjoy seeing what we're up to. We're on YouTube, reddit, Mastodon, Instagram, Facebook, and X. --Katie
and I've been using nixos on hetzner, nothing crazy but it's always worked great :-). A nice combination with terraform
If you think nix-shell is cool, try out comma. https://github.com/nix-community/comma
When there's some random little utility I need I don't always bother to install it. It's just `, weirdlittleutil`.
> Packaging, dependency management, and reproducibility in shell land are still stuck in the Stone Ages.
IMO it should stay that way, because any script that needs those things is way past the point where shell is a reasonable choice. Shell scripts should be small, 20 lines or so. The language just plain sucks too much to make it worth using for anything bigger.
My rule of thumb is that as soon as I write a conditional, it's time to upgrade bash to Python/Node/etc. I shouldn't have to search for the nuances of `if` statements every time I need to write them.
This is a decent heuristic, although (IMO) you can usually get away with ~100 lines of shell without too much headache.
Last year I wrote (really, grew like a tumor) a 2000 line Fish script to do some Podman magic. The first few hundred lines were great, since it was "just" piping data around - shell is great at that!
It then proceeded to go completely off the rails when I went full sunk cost fallacy and started abusing /dev/shm to emulate hash tables.
E: just looked at the source code. My "build system" was another Fish script that concatenated several script files together. Jeez. Never again.
What nuances are there to if statements, exactly?
An if statement in, for instance, bash just runs any command and then runs one of two blocks of code based on the exit status of that command. If the command succeeds (exit status 0), it runs what follows the `then`. If it fails (nonzero exit status), it runs what follows the `else`. (`elif` is admittedly gratuitous syntax; it would be better if it were just implemented as an if inside an else statement.) This seems quite similar to other programming languages and like not very much to remember.
I'll admit that one thing I do in my shell scripts is avoid "fake syntax"— I never use `[` or `[[` because these obscure the real structure of the statements for the sake of cuteness. I just write `test`, which makes clear that it's just an ordinary command, and also signals to someone who isn't sure what it's doing that they can find out just by running `man test`, `help test`, `info test`, etc., from the same shell.
I also agree that if statements and if expressions should be kept few and simple. But in some ways it's actually easier to do this in shell languages than in many others! Chaining && and/or || can often get you through a substantial script without any if statements at all, let alone nested ones.
I mean, there are 3 equally valid ways to write an if statement: `test`, `[`, and `[[`. In the case of the latter two, there are a mess of single-letter flags to test things about a file or condition[0]. I'm not sure what makes them "fake syntax", but I also don't know that much about bash.
It's all reasonable enough if you go and look it up, but the script immediately becomes harder to reason about. Conditionals shouldn't be this hard.
[0]: https://tldp.org/LDP/Bash-Beginners-Guide/html/sect_07_01.ht...
You don't need any of those to write an if statement. I frequently write if statements like this one
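(a sketch of the kind of thing meant; grep is only an example, any command works)

```
if grep -q "root" /etc/passwd; then
    echo "found it"
else
    echo "not there"
fi
```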
The `test` command is there if you want to use it, but it's just another command.
In the case of Bash, `test` is a built-in command rather than an external program, and it also has two other names, `[` and `[[`. I don't like the latter two because they look, to a naive reader, like special syntax built into the shell— like something the parser sees as unique and different and bears a special relationship to if-statements— but they aren't and they don't. And in fact you can use them in other shells that don't have them as built-ins, if you implement them as external commands. (You can probably find a binary called `[` on your system right now.)
(Actually, it looks like `[[` is even worse than "fake syntax"... it's real special syntax. It changes how Bash interprets `&&` and `||`. Yikes.)
But if you don't like `test`, you don't have to use it; you can use any command you like!
For instance, you might use `expr`:
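(a sketch; the numeric comparison is just an example)

```
count=42
# expr exits 0 when the expression evaluates to something non-zero/non-empty
if expr "$count" '>' 10 > /dev/null; then
    echo "count is big"
fi
```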
Fish has some built-ins that fall into a similar niche that are handy for simple comparisons like this, namely `math` and `string`, but there are probably others.
If you really don't like `test`, you don't even need to use it for checking the existence or type (dir, symlink, socket, etc.) of files! You can use GNU `find` for that, or even sharkdp's `fd` if you ache for something new and shiny.
Fish actually has something really nice here in the `path` built-in, which includes long options like you and I both wish `test` had. You can write:
You don't need `test` for asking about or asserting equality of variables, either; the same comparison can be written with other commands, for example with the Fish `string` built-in. The key is that in a shell, every command acts as a condition via its exit status. `&&` and `||` let you combine those exit statuses in exactly the way you'd expect, as do the (imo much more elegant) `and` and `or` combiner commands in Fish.
Finally, there's no need to use the likes of `test` for combining conditions. I certainly never do. You can just chain the commands themselves (see the sketch below) instead of packing everything into a single `test` invocation.
If-statements in shell languages are so simple that there's practically nothing to them. They just take a single command (any!) and branch based on its exit status! That's it. As for readability: any program in any language is difficult to understand if you don't know the interfaces or behaviors of the functions it invokes. `[`/`test` is no different from any such function, although it appears that `[[` is something weirder and, imo, worse.
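A sketch of that chaining style (the file name and pattern are made up):

```
config=./app.conf
# two conditions combined by chaining commands, not by flags inside one test expression
if test -f "$config" && grep -q "enabled" "$config"; then
    echo "feature is on"
fi
```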
Historically, my rule of thumb is: as soon as I can't see the ~entire script without scrolling, it's time to rewrite in Python/ansible. I think about the rewrite, but it usually takes awhile to do it (if ever).
When you solve the dependency management issue for shell scripts, you can also use newer language features because you can ship a newer interpreter the same way you ship whatever external dependencies you have. You don't have to limit yourself to what is POSIX, etc. Depending on how you solve it, you may even be able to switch to a newer shell with a nicer language. (And doing so may solve it for you; since PowerShell, newer shells often come with a dependency management layer.)
> any script that needs those things
It's not really a matter of needing those things, necessarily. Once you have them, you're welcome to write scripts in a cleaner, more convenient way. For instance, all of my shell scripts used by colleagues at work just use GNU coreutils regardless of what platform they're on. Instead of worrying about differences in how sed behaves with certain flags, on different platforms, I simply write everything for GNU sed and it Just Works™. Do those scripts need such a thing? Not necessarily. Is it nicer to write free of constraints like that? Yes!
Same thing for just choosing commands with nicer interfaces, or more unified syntax... Use p7zip for handling all your archives so there's only one interface to think about. Make heavy use of `jq` (a great language) for dealing with structured data. Don't worry about reading input from a file and then writing back to it in the same pipeline; just throw in `sponge` from moreutils.
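For instance, the kind of one-liner this makes comfortable (the file and field names are made up):

```
# keep only the active items, rewriting the file in place: jq transforms, sponge writes back
jq '.items |= map(select(.active))' data.json | sponge data.json
```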
> The language just plain sucks too much
There really isn't anything better for invoking external programs. Everything else is way clunkier. Maybe that's okay, but when I've rewritten large-ish shell scripts in other languages, I often found myself annoyed with the new language. What used to be a 20-line shell script can easily end up being 400 lines in a "real" language.
I kind of agree with you, of course. POSIX-ish shells have too much syntax and at the same time not enough power. But what I really want is a better shell language, not to use some interpreted non-shell language in their place.
Nice, if only you could count on having it installed on your fleet, and your fleet is 100% Linux, no AIX, no HPUX, no Solaris, no SUSE on IBM Power....
Been there, tried to, got a huge slap in the face.
Been there, done that. I am so glad I don’t have to deal with all that insanity anymore. In the build farm I was responsible for, I was always happy to work on the Linux and BSD boxes. AIX and HPUX made me want to throw things. At least the Itanium junk acted like a normal server, just a painfully slow one.
I will never voluntarily run a bunch of non-Linux/BSD servers again.
I honestly don't get why there are still a bunch of non-Linux/BSD servers, at least if the goal is to do UNIX-y stuff.
I haven't touched AIX or HPUX in probably a decade and I thought they were a weird idea back then: proprietary UNIX? Is it still 1993?
At the time (10 years ago) I worked for a company with enormous customers who had all kinds of different deployment targets. I bet that list is a lot shorter today.
I hope so, for their sake. shudder
Broke: Dependency management used for shell scripts
Woke: Dependency management used for installing an interpreter for a better programming language to write your script in it
Bespoke: Dependency management used for installing your script
Unfortunately there’s basically no guarantee that even the simplest scripts work.
Has multiple possible problems with it.
I have a couple of projects consisting of around >1k lines of Bash. :) Not to boast, but it is pretty easy to read and maintain. It is complete as well. I tested all of its functionalities and it just works™. Were it another language, it may have been more than just around 1k LOC, however, or more difficult to maintain. I call some external programs a lot, so I stuck with a shell script.
I simply do not write shell scripts that use or reference binaries/libraries that are not pre-installed on the target OS (which is the correct target; writing shell scripts for portability is silly).
There is no package manager that is going to make a shell script I write for macOS work on Linux if that script uses commands that only exist on macOS.
fwiw (home)brew exists on both platforms
Why bother writing new shell scripts?
If you're allowed to install any deps go with uv, it'll do the rest.
I'm also kinda in love with https://babashka.org/ check it out if you like Clojure.
Check out mise: https://mise.jdx.dev/
We use it at $work to manage dev envs and its much easier than Docker and Nix.
It also installs things in parallel, which is a huge bonus over plain Dockerfiles
I declared nix bankruptcy earlier this year and moved to mise. It does 90% of what I need for only 1% of the effort of nix.
That's a shame, as I had gotten to monk-level Python jujitsu. I can fix any problem, you name it: https nightmares, brew version vs pyenv, virtualenv shenanigans. Now all this knowledge is a bad investment of time.
Never say never.
Knowing the Python packaging ecosystem, uv could very well be replaced by something else. It feels different this time, but we won't know for a while yet.
Agreed. I migrated ~all my personal things to Uv; but I'm sure once I start adopting widely at work I'll find edge cases you need to know the weeds to figureout/work around.
+1 for Mise, it has just totally solved the 1..N problem for us and made it hilariously easy to be more consistent across local dev and workflows
I'm unable to resist responding that clearly the solution is to run Nix in Docker as your shell since packaging, dependency management, and reproducibility will be at theoretical maximum.
For the specific case of solving shell script dependencies, Nix is actually very straightforward. Packaging a script is a writeShellApplication call and calling it is a `nix run`.
I guess the issue is just that nobody has documented how to do that one specific thing so you can only learn this technique by trying to learn Nix as a whole.
So perhaps the thing you're envisaging could just be a wrapper for this Nix logic.
Guix is easier to grok than Nix, if anyone is looking to save themselves some effort.
> Great, if downloading a Linux distro to run sed sounds reasonable.
There's a reason why distroless images exist. :)
I use Nix for this with resholve and I like it a lot.
Consider porting your shell scripts to Python? The language is vastly superior and subprocess.check_call is not so bad.
> finally feels like Python scripts can Just Work™ without a virtualenv scavenger hunt.
Hmm, last time I checked, uv installs into ~/.local/share/uv/python/cpython-3.xx and cannot be installed globally, e.g. inside a minimal docker image without any other python.
So basically it still runs in a venv.
https://docs.astral.sh/uv/reference/settings/#pip_system
I mean: how do I get `uv python install` to install a Python system-wide?
No matter what I tried it's always a symlink into ~/.local
Would homebrew do the job?
Homebrew does a great job @ initial setup; it does a poor job of keeping a system clean and updated over time.
This is really great, and it seems that it's becoming more popular. I saw it first on simonw's blog:
https://simonwillison.net/2024/Dec/19/one-shot-python-tools/
And there was a March discussion of a different blog post:
https://news.ycombinator.com/item?id=43500124
I hope this stays on the front page for a while to help publicize it.
Like the author, I find myself going more for cross-platform Python one-offs and personal scripts for both work and home and ditching Go. I just wish Python typechecking weren't the shitshow it is. Looking forward to ty, pyrefly, etc. to improve the situation a bit
Speed is one thing, the type system itself is another thing, you are basically guaranteed to hit like 5-10 issues with python's weird type system before you start grasping some of the oddities
I wouldn't describe Python type checking as a shit-show. pyright is pretty much perfect. One nit against it perhaps is that it doesn't support non-standard typing constructs like mypy does (for Django etc). That's an intentional decision on the maintainer's part. And I'm glad he made that decision because that spurred efforts to make the standard typing constructs more expressive.
I'm also looking forward to the maturity of Rust-based type checkers, but solely because one can almost always benefit from an order of magnitude improvement in speed of type checking, not because Python type-checking is a "shit show".
I do grant you that for outsiders, the fact that the type checker from the Python organization itself is actually a second rate type checker (except for when one uses Django, etc, and then it becomes first-rate) is confusing.
I've never particularly liked go for cross platform code anyway. I've always found it pretty tightly wedded to Unix. Python has its fair share of issues on Windows as well though; I've been stuck debugging weird .DLL issues with libraries for far too long in my life.
Strangely, I've found myself building personal cross platform apps in game engines because of that.
I do hope the community will converge on one type checker like ty. The fact that multiple type checkers exist is really hindering to the language as a whole.
uv has been fantastic to use for little side projects. Combining uv run with `uv tool run` AKA `uvx` means one can fetch, install within a VM, and execute Python scripts from Github super easily. No git clone, no venv creation + entry + pip install.
And uv is fast — I mean REALLY fast. Fast to the point of suspecting something went wrong and silently errored, when it fact it did just what I wanted but 10x faster than pip.
It (and especially its docs) are a little rough around the edges, but it's bold enough and good enough I'm willing to use it nonetheless.
Truly. uv somehow resolves and installs dependencies more quickly than pyenv manages to print its own --help output.
I know there are real reasons for slow Python startup time, with every new import having to examine swaths of filesystem paths to resolve itself, but it really is a noticeable breath of fresh air working with tools implemented in Go or Rust that have sub-ms startup.
You don't have to import everything just to print the help. I try to avoid top-level imports until after the CLI arguments have been parsed, so the only import until then is `argparse` or `click`. This way, startup appears to be instant even in Python.
Example:
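(a minimal sketch; pandas here is just a stand-in for any heavy dependency)

```
cat > cli.py <<'EOF'
import argparse

def convert(path):
    import pandas as pd   # heavy import deferred until the command actually runs
    print(pd.read_csv(path).head())

parser = argparse.ArgumentParser(prog="cli")
parser.add_argument("path")
args = parser.parse_args()
convert(args.path)
EOF

python cli.py --help   # prints instantly; pandas is never imported for --help
```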
Another pattern, though, is that a top level tool uses pkg_resources and entry_points to move its core functionality out to verb plugins— in that case the help is actually the worst case scenario because not only do we have to scan the filesystem looking for what plugins are available, they all have to be imported in order to ask each for its help strings.
An extreme version of this is the colcon build tool for ROS 2 workspaces:
https://github.com/colcon/colcon-core/blob/master/setup.cfg#...
Unsurprisingly, startup time for this is not great.
The Python startup latency thing makes sense, but I really don't understand why it would take `pyenv` a long time to print each line of its "usage" output (the one that appears when invoking it with `--help`) once it's already clearly in the code branch that does only that.
It feels like it's doing heavy work between each line printed! I don't know any other cli tool doing that either.
There's a launcher wrapper shell script + Python startup time that contributes to pyenv's slow launch times.
Not to derail the Python speed hate train but pyenv is written in bash.
It's a tool for installing different versions of Python, it would be weird for it to assume it already had one available.
Oh, that might actually explain the slow line printing speed. Thank you, solves a long standing low stakes mystery for me :)
The "slowness" and the utter insanity of trying to make a "works on my computer" Python program work on another computer pushed me to just rewrite all my Python stuff in Go.
About 95% of my Python utilities are now Go binaries cross-compiled to whatever env they're running in. The few remaining ones use (API) libraries that aren't available for Go or aren't mature enough for me to trust them yet.
Last time I looked, pyenv contributors were considering implementing a compiled launcher for that reason.
But that ship has sailed for me and I'm a uv convert.
I agree uv is amazing, but it's not a virtual machine, it's a virtual environment. It runs the scripts on top of your OS without any hardware virtualization. The virtual environment only isolates the Python dependencies.
No more dependency problems with mkdocs I ran into before every other month:
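(roughly this shape; the plugin names are just examples)

```
uvx --with mkdocs-material --with mkdocs-awesome-pages-plugin mkdocs serve
```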
Funnily enough it also feels like it is starting faster.
Is there a reason you didn’t explicitly pull in mkdocs as a dependency in that invocation? I guess uv will expose it/let you run it anyways due to the fact that it’s required by everything else you did specify.
it's a `uvx` call, so the tool being invoked is `mkdocs`, and all the other dependencies are additions on top of that
Very nice, I believe Rust is doing something similar too which is where I initially learned of this idea of single-file shell-type scripts in other languages (with dependency management included, which is how it differs from existing ways of writing single-file scripts in e.g. scripting languages) [0].
Hopefully more languages follow suit on this pattern as it can be extremely useful for many cases, such as passing gists around, writing small programs which might otherwise be written in shell scripts, etc.
[0] https://rust-lang.github.io/rfcs/3424-cargo-script.html
C# too: https://devblogs.microsoft.com/dotnet/announcing-dotnet-run-...
I’ve been a python dev for nearly a decade and never once thought dep management was a problem.
If I’ve ever had to run a “script” in any type of deployed ENV it’s always been done in that ENVs python shell .
So I still don’t see what the fuss is about?
I work on a massive python code base and the only benefit I’ve seen from moving to UV is it has sped up dep installation which has had positive impact on local and CI setup times.
How did you tell other people/noobs to run your python code (or how did you run it yourself after 5+ years of not touching older projects)?
run script
"missing x..."
pip install x
run script
"missing y..."
pip install y
> y not found
google y to find package name
pip install ypackage
> conflict with other package
realize I forgot a venv and have contaminated my system python
check pip help output to remember how to uninstall a package
clean up system python
create venv at cwd
start over
...
</end of time>
Thankfully some newer systems will error by default if you try to mess with them via pip instead of your system's package manager. Easy to override if you want to, and saves a lot of effort fixing accidental screw ups.
>realize I forgot a venv and have contaminated my system python
>check pip help output to remember how to uninstall a package
>clean up system python
>create venv at cwd
>start over
This hits disturbingly close to home.
This is like seeing someone complain they have to turn their computer on to do work
Python's dependency management has been terrible until very recently compared to nearly every other mainstream language.
I guess this is why people need to get out of this “Python dev” or “JS dev” mindset and try other languages to see why those coming to Python complain so much about dependency management.
People complain because the experience is less confusing in many other languages. Think Go, Rust, or even JS. All the tooling chaos and virtual environment jujitsu are real deterrents for newcomers. And it’s not just beginners complaining about Python tooling. Industry veterans like Armin Ronacher do that all the time.
uv is a great step in the right direction, but the issue is that as long as the basic tooling isn’t built into the language binary, like Go’s tools or Rust’s Cargo, more tools will pop up and fragment the space even further.
Confusing is underselling it. That implies that Python dependency management is working fine, it's just complex. But it's not working fine: there's no such thing as lock files, which makes reproducible installs a gamble and not a given. For small scripts this is probably "okay", but if you're working in a team or want to deploy something on a server, then it's absolutely not fine because you want deterministic builds and that's simply impossible without a decent package manager.
Tools like uv solve the "it works on my machine" problem. And it's also incredibly fast.
There is a lock file now.
https://packaging.python.org/en/latest/specifications/pylock...
The issue is that since there is no standardized build tool (pip and uv are both third party), there are a zillion ways of generating this lockfile, unlike go.mod or Cargo.toml. So it doesn't work in many scenarios and it's confusing as hell.
My view is I’m an engineer first and foremost and I use the tools which are best for the task at hand. That also means what’s best for the business in terms of others working on the project, this has meant python with some sort of framework.
People have suggested using other languages that might be faster, but the business always chooses what's best for everyone to work with.
Sure, it depends on the type and maturity of the business, as well as the availability of local talent. I've worked at three companies that started out with Python and Django, then transitioned to other technologies as the business scaled. In those environments, there were two kinds of developers: those who quickly adapted and picked up new languages, and those who wanted to remain "Python devs." The latter group didn’t have a great time moving forward.
What I don't like about the "Python + Framework + Postgres" argument is that it often lacks context. This is a formidable combination for starting a business and finding PMF. But unless you've seen Python and Postgres completely break under 100k RPS and petabyte-scale data, it's hard to understand the limitations just from words. Python is fantastic, but it has its limits and there are cases where it absolutely doesn't work. This “single language everywhere” mindset is how we ended up with JavaScript on the backend and desktop.
Anyone can write Python, and with LLMs, there's not much of a moat around knowing a single language. There's also no reason not to familiarize yourself with others, since it broadens your perspective. Of course, some businesses scale quite well with Python or JavaScript. But my point isn't to abandon Python. It's to gain experience in other languages so that when people criticize Python’s build tools, you can genuinely empathize with those concerns. Otherwise, comments like “Python tooling is fine” from people who have mostly worked with only Python are hard to take seriously.
> it’s always been done in that ENVs python shell .
What if you don't have an environment set up? I'm admittedly not a python expert by any means but that's always been a pain point for me. uvx makes that so much easier.
I wrote PHP/JS/Java before Python. Been doing Python for nearly a decade too, and like 4dregress haven't had the need to worry much about dep management. JS and PHP had all sorts of issues, Maven & Gradle are still the ones that gave me less trouble. With Python I found that most issues could be fixed by finding the PEP that implemented what I needed, and by trying to come up with a simple workflow & packaging strategy.
Nowadays I normally use `python venv/bin/<some-executable>`, or `conda run -n <some-env> <some-executable>`, or package it in a Singularity container. And even though I hear a lot of good things about uv, given that my job uses public money for research, we try to use open source and standards as much as possible. My understanding is that uv is still backed by a company, and at least when I checked it some time ago (in PEP discussions & GH issues) they were not implementing the PEPs that I needed -- even if they did, we would probably still stay with simple pip/setuptools to avoid having to use research budget to update our build if the company ever changed its business model (e.g. what anaconda did some months/year? ago).
Digressing: the Singularity container is useful for research & HPC too, as it creates a single archive, which is faster to load on distributed filesystems like the two I work on (GPFS & LustreFS) instead of loading many small files over network.
Create a virtual environment:
python3 -m venv venv
Activate the virtual environment:
source venv/bin/activate
Deactivate the virtual environment:
deactivate
Or: `uvx ruff`
Which one is easier to run, especially for someone who doesn't use python everyday?
The one they definitely won't have to re-learn in a few years.
It's still easier if you use virtual environments so infrequently that you have to look up how to do it every time.
Virtual environments alone are not enough. They don't guarantee deterministic builds. What do you do to ensure that your production environment runs the same code as your local dev environment? How do you solve that problem without dependency managers like uv or poetry?
I've been a python dev for nearly 3 decades and feel that uv is removing a lot of the rough edges around dependency management. So maybe "problem" is the wrong word; I've been able to solve dependency management issues usually without too much trouble, I have also spent a significant amount of time dealing with them. For close to a decade I was managing other peoples Python environments on production systems, and that was a big mess, especially with trying to ensure that they stayed updated and secure.
If you don't see what the fuss is about, I'm happy for you. Sounds like you're living in a fairly isolated environment. But I can assure you that uv is worth a lot of fussing about, it's making a lot of our lives a lot easier.
So far I've only run into one minor ergonomic issue when using `uv run --script` with embedded metadata, which is that sometimes I want to test changes to the script via the Python REPL, but that's a bit harder to do, since you have to run something more involved to get a REPL into the script's environment.
I'd love if there were something more ergonomic. Edit: found something better, which fires up the correct python and venv for the script. You probably have to run the script once to create it.
I think you're looking for something like this (the important part being embedding a REPL call toward the end after whatever setup): https://gist.github.com/lostmygithubaccount/77d12d03894953bc...
You can make `--interactive` or whatever you want a CLI flag from the script. I often make these small Typer CLIs with something like that (or in this case, in another dev script like this, I have `--sql` for entering a DuckDB SQL repl)
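A minimal sketch of that pattern, assuming a hypothetical PEP 723 script (the `--interactive` flag, the `setup()` helper and the `requests` dependency are all just placeholders):

```python
# /// script
# dependencies = ["requests"]
# ///
"""Hypothetical one-off script that can drop into a REPL after its setup."""
import argparse
import code

import requests  # stands in for whatever the script actually needs


def setup():
    # whatever work the script normally does before the interesting part
    return {"session": requests.Session()}


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--interactive", action="store_true")
    args = parser.parse_args()
    state = setup()
    if args.interactive:
        # embed a REPL with the script's objects available, inside the env uv built
        code.interact(local={**globals(), **state})
```

Then `uv run myscript.py --interactive` gives you a prompt with the script's dependencies importable.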
you are welcome
If I may ask, why `unlink` instead of `rm`?
This is rather silly.
Between yesterday's thread and this thread I decided to finally give uv a shot today - I'm impressed, both by the speed and how easy it is to manage dependencies for a project.
I think their docs could use a little bit of work; in particular, there should be a defined path to switch from a requirements.txt based workflow to uv. Also I felt like it's a little confusing how to define a Python version for a specific project (it's defined in both .python-version and pyproject.toml).
I'm writing an ebook on Python developer tooling. I've attempted to address some of the weaknesses in the official documentation.
How to migrate from requirements.txt: https://pydevtools.com/handbook/how-to/migrate-requirements.... How to change the Python version of a uv project: https://pydevtools.com/handbook/how-to/how-to-change-the-pyt...
Let me know if there are other topics I can hit that would be helpful!
This would’ve been really handy for me a few weeks ago when I ended up working this out for myself (not a huge job, but more effort than reading your documentation would’ve been). While I can’t think of anything missing off the top of my head, I do think a PR to uv to update the official docs would help a lot of folk!
Actually, I’ve thought of something! Migrating from poetry! It’s something I’ve been meaning to look at automating for a while now (I really don’t like poetry).
https://pydevtools.com/handbook/how-to/how-to-migrate-from-p...
You don't have to pip install it before calling uvx, do you?
This is wonderful. When I was learning I found the documentation inadequate and gpt4 ran in circles as I did not know what to ask (I did not realize “how do I use uv instead of conda/pip?” was a fundamentally flawed question).
This is a great resource, thank you for putting this together
> it's defined in both .python-version and pyproject.toml
The `requires-python` field in `pyproject.toml` defines a range of compatible versions, while `.python-version` defines the specific version you want to use for development. If you create a new project with uv init, they'll look similar (>=3.13 and 3.13 today), but over time `requires-python` usually lags behind `.python-version` and defines the minimum supported Python version for the project. `requires-python` also winds up in your package metadata and can affect your callers' dependency resolution, for example if your published v1 supports Python 3.[old] but your v2 does not.
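Concretely (illustrative values): pyproject.toml carries the supported range, while .python-version is just a one-line file containing something like `3.13`.

```toml
# pyproject.toml -- this ends up in your published package metadata
[project]
name = "example"
version = "0.1.0"
requires-python = ">=3.10"
```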
> how to define a python version for a specific project (it's defined in both .python-version and pyproject.toml)
pyproject.toml is about allowing other developers, and end users, to use your code. When you share your code by packaging it for PyPI, a build backend (uv is not one, but they seem to be working on providing one - see https://github.com/astral-sh/uv/issues/3957 ) creates a distributable package, and pyproject.toml specifies what environment the user needs to have set up (dependencies and python version). It has nothing to do with uv in itself, and is an interoperable Python ecosystem standard. A range of versions is specified here, because other people should be able to use your code on multiple Python versions.
The .python-version file is used to tell uv specifically (i.e. nobody else) specifically (i.e., exact version) what to do when setting up your development environment.
(It's perfectly possible, of course, to just use an already-set-up environment.)
I have never researched this, but I thought the .python-version file only exists to benefit other tools which may not have a full TOML parser.
Read-only TOML support is in the standard library since Python 3.11, though. And it's based on an easily obtained third-party package (https://pypi.org/project/tomli/).
(If you want to write TOML, or do other advanced things such as preserving comments and exact structure from the original file, you'll want tomlkit instead. Note that it's much less performant.)
> there should be a defined path to switch from a requirements.txt based workflow to uv
Try `uvx migrate-to-uv` (see https://pypi.org/project/migrate-to-uv/)
Same, although I think it doesn't support my idiosyncratic workflow. I have the same files sync'd (via dropbox at the moment) on all my computers, macos and windows and wsl alike, and I just treat every computer likes it's the same computer. I thought this might be a recipe for disaster when I started doing it years ago but I have never had problems.
Some stuff like npm or dotnet do need an npm update / dotnet restore when I switch platforms. At first attempt uv seems like it just doesn't really like this and takes a fair bit of work to clean it up when switching platforms, while using venvs was fine.
You should probably look to have the uv managed venvs completely excluded from being synced, and forcing every machine to build its own venv. Given how fast and consistent uv is, there’s no real reason to share the actual venvs between machines anymore.
Thank you! I wrap all my tools in very simple shell+batch scripts anyway so just specifying a different venv for each does the trick.
I agree the docs are not there yet. There is a lot of documentation but it's a description of all the possible options that are available (which is a lot). But it doesn't really tell me how to actually _use_ it for a certain type of workflow, or does a mediocre job at best.
> Before this I used to prefer Go for one-off scripts because it was easy to create a self-contained binary executable.
I still do because:
- Go gives me a single binary
- Dependencies are statically linked
- I don’t need any third-party libs in most scenarios
- Many of my scripts make network calls, and Go has a better stdlib for HTTP/RPC/Socket work
- Better tooling (built-in formatter, no need for pytest, go vet is handy)
- Easy concurrency. Most of my scripts don’t need it, but when they do, it’s easier since I don’t have to fiddle with colored functions, external libs, or, worse, threads.
That said, uv is a great improvement over the previous status quo. But I don’t write Python scripts for reasons that go beyond just tooling. And since it’s not a standard tool, I worry that more things like this will come along and try to “improve” everything. Already scarred and tired in that area thanks to the JS ecosystem. So I tend to prefer stable, reliable, and boring tools over everything else. Right now, Go does that well enough for my scripting needs.
I needed to process a 2 GB xml file the other day. While my Python script was chugging away, I had Claude translate it to Go. The vibe-coded Go program then processed the file before my original Python script terminated. That was the first time I ever touched Go, but it certainly won't be the last.
Go is pretty awesome. I’m sure that spending some time with the script would have made it at least 50 times faster than Python.
(author of post here)
I still use both Go and Python. But Python gives me access to a lot more libraries that do useful stuff. For example the YouTube transcript example I wrote about in the article was only possible in Python because afaik Go doesn't have a decent library for transcript extraction.
Yeah that's a fair point. I still do a ton of Python for work. The language is fine; it's mostly tooling that still feels 30 years old.
Good for you. I don't see how this is relevant to this topic.
> Before this I used to prefer Go for one-off scripts because it was easy to create a self-contained binary executable.
Here's how it's relevant :)
Between how good ChatGPT/Claude are at writing Python, and discovering uv + PEP 723, I'm creating all sorts of single file python scripts. Some of my recent personal tools: compression stats for resources when gzipped, minify SVGs, a duplicate file tool, a ping testing tool, a tool for processing large CSVs through LLMs one row at a time, etc.
uv is the magic that deals with all of the rough edges/environment stuff I usually hate in Python. All I need to do is `uv run myFile.py` and uv solves everything else.
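For anyone who hasn't tried it, the whole trick is a specially formatted comment block at the top of the file (hypothetical example; the shebang needs an `env` that supports `-S`):

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "requests",
# ]
# ///
import requests

print(requests.get("https://example.org").status_code)
```

`uv run example.py` (or just `./example.py` with the shebang) resolves the dependencies into a cached environment and runs the script.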
I've recently updated a Python script that I originally wrote about 10 years ago. I'm not a programmer - I just have to get stuff done - think sysops.
For me there used to be a clear delineation between scripting languages and compiled languages. Python has always seemed to want to be both and I'm not too sure it can really. I can live with being mildly wrong about a concept.
When Python first came out, our processors were 80486 at best and RAM was measured in MB at roughly £30/MB in the UK.
"For the longest time, ..." - all distros have had scripts that find the relevant Python or Java or whatevs so that's simply daft. They all have shebang incantations too.
So we now have uv written in Rust for Python. Obviously you should install it via a shell script directly from curl!
I love all of the components involved here but please for the love of a nod to security at least suggest that the script is downloaded first, looked over and then run.
I recently came across a Github hosted repo with scripts that changed Debian repos to point somewhere else and install ... software. I'm sure that's all fine too.
curl | bash is cute and easy and very, very insecure.
> Obviously you should install it via a shell script directly from curl!
No? You can install it via pip.
I was going off on a bit of a tangent but take a look at this horror, which is still up:
https://github.com/InboraStudio/Proxmox-VGPU
Note the quite professional looking README.md and think about the audience for this thing - kittens hitting the search bong and trying to get something very complicated working.
Read the scripts: they are pretty short and could put your hypervisor in the hands of someone else who may not be too friendly.
Now pip has the same problem except you don't normally go in with a web browser first.
I raised an issue to at least provide a hint to casual browsers and also raised it with the github AI bottie complaint thang which doesn't care about you, me or anything else for that matter.
You can do both, but the official recommendation is shell + curl[0].
Not an expert but I think there's performance gains to calling the binary directly rather than through python.
[0]: https://docs.astral.sh/uv/
My solution to this is:
1) Subscribe to the GitHub repo for tag/release updates.
2) When I get a notification of a new version, I run a shell function (meup-uv and meup-ruff) which grabs the latest tag via a GET request and runs an install. I don't remember the semantics off the top of my head, but it's something like:
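I don't know their exact function, but a sketch of that kind of build-from-source workflow might look like this (hypothetical body for the meup-uv function, assuming curl, jq and a Rust toolchain are installed):

```bash
# hypothetical: fetch the latest release tag, then build and install from source
meup-uv() {
    local tag
    tag=$(curl -fsSL https://api.github.com/repos/astral-sh/uv/releases/latest |
        jq -r '.tag_name')
    cargo install --git https://github.com/astral-sh/uv --tag "$tag" uv
}
```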
Of course this implies I'm willing to wait the ~5-10 minutes for these apps to compile, along with the storage costs of the registry and source caches. Build times for ruff aren't terrible, but uv is a straight up "kick off and take a coffee break" experience on my system (it gets 6-8 threads out of my 12 total depending on my mood).
> For me there used to be a clear delineation between scripting languages and compiled languages. Python has always seemed to want to be both and I'm not too sure it can really. I can live with being mildly wrong about a concept.
Eh. There's a lot of space in the middle to "well actually" about, but Python really doesn't behave like a "compiled" language. The more important question is: what do you ship to people, and how easily can they use it? Lots of people in this thread are bigging up Go's answer of "you ship a thing which can run immediately with no dependencies". For users that solves so many problems.
Quite a few python usecases would benefit from being able to "compile" applications in the same sense. There are py-to-exe solutions but they're not popular or widely used.
What's going on? This whole thread reads like paid amazon reviews
What's going on is "we have 14 standards so we need to create a 15th" actually worked this time
To be fair, I've used Poetry for years and it works/worked amazingly well. It's just not as fast as uv.
It works far more of the time than people give it credit for. There are a lot of good XKCDs, but that one is by far the worst one ever made, as far as being a damaging meme goes.
It's survival bias. You'd never see the confusion from would-have-failed standards-wannabes that xkcd927 helped prevent.
"xkcd 927 Considered Harmful" ?
Fantastic comment
Because with uv, Python dependency management is finally not a shitshow. The faster we get everyone to switch the better.
Occasionally the reviews match reality.
If you want to manually manage envs and you're using conda, you can activate the env in a shell wrapper for your python script, like so (this is with conda)
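Presumably something along these lines (hypothetical env and script names):

```bash
#!/usr/bin/env bash
# wrapper: activate a named conda env, then hand off to the real script
eval "$(conda shell.bash hook)"
conda activate my-env
exec python /path/to/my_script.py "$@"
```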
Admittedly this isn't self contained like the PEP 723 solution.
If momentum for uv in the community continues, I'd love to see it distributed more broadly. uv can already be installed easily on macOS via Homebrew (like pyenv). uv can also be installed on Windows via WinGet (unlike pyenv). It would be nice to see it packaged for Linux as well.
Mise has a very similar feature with its shebangs: https://mise.jdx.dev/tips-and-tricks.html#shebang
It makes throwing together bash scripts with dependencies very enjoyable.
This seems timely; `uv` is a complete revelation for me and has made working with Python extremely convenient ... the Python "just works" time has arrived.
I'm building yt-dlp / uvx based WebUI
- https://github.com/ocodo/uvxytdlp
Still work in progress but shaping up nicely.
There has been a flurry of `uv` posts on HN recently. I don't have any experience with it, is it really the future, or is it a fad?
As Ive gotten older I've grown weary of third party tools, and almost always try to stick with the first party built in methods for a given task.
Does uv provide enough benefit to make me reconsider?
I'm not a Python master but I've struggled with all the previous package managers, and uv is the first tool that does everything easily (whether it's installing or generating packages or formatting or checking your code).
I don't know why there is such a flurry of posts since it's a tool that is more than a year old, but it's the one and only CLI tool that I recommend when Python is needed for local builds or on a CI.
Hatch was a good contender at the time but they didn't move fast enough, and the uv/ruff team ate everybody's lunch. uv is really good and IMHO it's here to stay.
Anyway try it for yourself but it's not a high-level tool that is hiding everything, it's fast and powerful and yet you stay in control. It feels like a first-party tool that could be included in the Python installer.
I started doing Python before 2.0 launched. I understand perfectly where you’re coming from.
The answer is an unequivocal yes in this case. uv is on a fast track to be the defacto standard and make pip relegated to the ‘reference implementation’ tier.
I also went through a similar enlightenment of just sticking to pip, but uv convinced me to switch and I’m so glad I did. You can dip your toe in by just using the ‘uv pip’ submodule as a drop in replacement for pip but way faster.
The learning curve is so low that yes.
Try it for <20mins and if you don't like it, leave it behind. These 20mins include installation, setup, everything.
Yes, IMO it does. I wrote my first lines of Python 16 years ago and have worked with raw pip & venv, PDM and Poetry. None of those solutions come close to how easy it is to use (and migrate to) uv. Just give it a try for half an hour, you likely won't want to use anything else after that.
I’m a moron when it comes to python tooling but switching a project to uv was a pleasant experience. It seems well thought out and the speed is genuinely a feature compared to other python tooling I’ve used.
A lot of people like all-in-one tools, and uv offers an opinionated approach that works. It's essentially the last serious attempt at this since Poetry, except that uv is also supporting a variety of new Python packaging standards up front (most notably https://peps.python.org/pep-0621/ , which Poetry lagged on for years - see https://github.com/python-poetry/roadmap/issues/3 ) and seems committed to keeping on top of new ones.
How much you can benefit depends on your use case. uv is a developer tool that also manages installations of Python itself (and maintains separate environments for which you can choose a Python version). If you're just trying to install someone else's application from PyPI - say https://pypi.org/project/pycowsay/ as an example - you'll likely have just as smooth of an experience via pipx (although installation will be even slower than with pip, since it's using pip behind the scenes and adding its own steps). On the other hand, to my understanding, to use uv as a developer you'll still need to choose and install a build backend such as Flit or Hatchling, or else rely on the default Setuptools.
One major reason developers are switching to uv is lockfile support. It's worth noting here that an interoperable standard for lockfiles was recently approved (https://peps.python.org/pep-0751/), uv will be moving towards it, and other tools like pip are moving towards supporting it (the current pip can write such lockfiles, and installing from them is on the roadmap: https://github.com/pypa/pip/issues/13334).
If you, like me, prefer to follow the UNIX philosophy, a complete developer toolchain in 2025 looks like:
* Python itself (if you want standalone binaries like the ones uv uses, you can get them directly; you can also build from source like I do; if you want to manage Python installations then https://github.com/pyenv/pyenv is solid, or you can use the multi-language https://asdf-vm.com/guide/introduction.html with https://github.com/asdf-community/asdf-python I guess)
* Ability to create virtual environments (the standard library takes care of this; some niche uses are helped out by https://virtualenv.pypa.io/)
* Package installer (Pip can handle this) and manager (if you really want something to "manage" packages by installing into an environment and simultaneously updating your pyproject.toml, or things like that; but just fixing the existing environment is completely viable, and installers already resolve dependencies for whatever it is they're currently installing)
* Build frontend (the standard is https://build.pypa.io/en/stable/; for programmatic use, you can work with https://pyproject-hooks.readthedocs.io/en/latest/ directly)
* Build backend (many options here - by design! but installers will assume Setuptools by default, since the standard requires them to, for backwards compatibility reasons)
* Support for uploading packages to PyPI (the standard is https://twine.readthedocs.io/en/stable/)
* Optional: typecheckers, linters, an IDE etc.
A user on the other hand only needs
* Some version of Python (the one provided with a typical Linux distribution will generally work just fine; Windows users should usually just install the current version, with the official installer, unless they know something they want to install isn't compatible)
* Ability to create virtual environments and also install packages into them (https://pipx.pypa.io/stable/ takes care of both of these, as long as the package is an "application" with a defined entry point; I'm making https://github.com/zahlman/paper which will lift that restriction, for people who want to `import` code but not necessarily publish their own project)
* Ability to actually run the installed code (pipx handles this by symlinking from a standard application path to a wrapper script inside the virtual environment; the wrappers specify the absolute path to the virtual environment's Python, which is generally all that's needed to "use" that virtual environment for the program. It also provides a wrapper to run Pip within a specific environment that it created. PAPER will offer something a bit more sophisticated here, for both aspects.)
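To make that concrete, a minimal sketch of the developer-side flow with those pieces, assuming a project already described by a pyproject.toml:

```bash
python -m venv .venv                  # standard-library virtual environment
.venv/bin/pip install build twine     # installer, build frontend, upload tool
.venv/bin/python -m build             # produces dist/*.whl and dist/*.tar.gz
.venv/bin/twine upload dist/*         # publish to PyPI
```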
It is difficult to use Python for utility scripts on the average Linux machine. Deploying Python projects almost require using a container. Popular distros try managing Python packages through the standard package manager rather than pip but not all packages are readily available. Sometimes you're limited by Python version and it can be non-trivial to have multiple versions installed at once. Python packaging has become a shit show.
If you use anything outside the standard library the only reliable way to run a script is installing it in a virtual environment. Doing that manually is a hassle and pyenv can be stupidly slow and wastes disk space.
With uv it's fast and easy to set up throw away venvs or run utility scripts with their dependencies easily. With the PEP-723 scheme in the linked article running a utility script is even easier since its dependencies are self-declared and a virtual environment is automatically managed. It makes using Python for system scripting/utilities practical and helps deploy larger projects.
> Deploying Python projects almost require using a container.
Really? `apt install pipx; pipx install sphinx` (for example) worked flawlessly for me. Pipx is really just an opinionated wrapper that invokes a vendored copy of Pip and the standard library `venv`.
The rest of your post seems to acknowledge that virtual environments generally work just fine. (Uv works by creating them.)
> Sometimes you're limited by Python version and it can be non-trivial to have multiple versions installed at once.
I built them from source and make virtual environments off of them, and pass the `--python` argument to Pipx.
> If you use anything outside the standard library the only reliable way to run a script is installing it in a virtual environment. Doing that manually is a hassle and pyenv can be stupidly slow and wastes disk space.
If you're letting it install separate copies of Python, sure. (The main use case for pyenv is getting one separate copy of each Python version you need, if you don't want to build from source, and then managing virtual environments based off of that.) If you're letting it bootstrap Pip into the virtual environment, sure. But you don't need to do either of those things. Pip can install cross-environment since 22.3 (Pipx relies on this).
Uv does save disk space, especially if you have multiple virtual environments that use the same packages, by hard-linking them.
> With uv it's fast and easy to set up throw away venvs or run utility scripts with their dependencies easily. With the PEP-723 scheme in the linked article running a utility script is even easier since its dependencies are self-declared and a virtual environment is automatically managed.
Pipx implements PEP 723, which was written to be an ecosystem-wide standard.
In Ruby, this feature is built-in with its default package manager: [bundler/inline](https://bundler.io/guides/bundler_in_a_single_file_ruby_scri...).
I have a lot of opinions about this.
Firstly, I have been an HN reader for a long time, and this is the one topic, PEP 723 Python scripts, that always gets to the top of the Hacker News front page as each person discovers it for themselves.
I don't mean to discredit the author. His work was simple and clear to understand. I am just sharing my thesis that if someone wants karma on Hacker News for whatever reason, this might be the best topic. (Please don't pitchfork me since I don't mean offense to the author.)
Also, can anybody please explain how to create that PEP metadata with uv from just a Python script, without anything else? Like some command that takes a Python script and adds the PEP 723 block to it. I'm pretty sure uv has a flag for this, but I feel the author might have missed the feature. When coding one-off scripts in Python using AI (Gemini), I always had to paste uv's documentation to get the metadata right. So if anybody knows an easier way to create the PEP block from the CLI, please tell me! Thanks in advance!!
One can use uv to add packages to the dependencies list:
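If I'm remembering the uv docs right, it's along these lines (hypothetical file and package names), and it writes the PEP 723 block into the script for you:

```bash
uv init --script example.py --python 3.12   # create a script with an empty metadata block
uv add --script example.py requests rich    # add dependencies to the inline metadata
```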
Thanks a lot, friend! But one of the issues with this is that I need to know the package name (e.g. requests), and sometimes package names differ from import names. I had actually created a CLI tool called uvman which tried to automate that part too.
But my tool was really finicky, and I guess it was built by AI, so um yeah. You all can try it; it's on PyPI. I think it has a lot of niche cases where it doesn't work. Maybe someone can modify it to make it better, as I built it 3-4 months ago if I remember correctly and I have completely forgotten how things worked in uv.
Last time I looked at switching from poetry to uv I had an issue with pinning certain dependencies to always install from a private PyPI repository. Is there a way to do that now?
(also: possible there's always been a way and I'm an idiot)
Yes, see: https://docs.astral.sh/uv/concepts/projects/dependencies/#in...
You mean something like https://docs.astral.sh/uv/concepts/indexes/ ?
Some years ago I thought it would be interesting to develop a tool to make a python script automatically install its own dependencies (like uvx in the article), but without requiring any other external tool, except python itself, to be installed.
The downside is that there are a bunch of seemingly weird lines you have to paste at the beginning of the script :D
If anyone is curious, it's on PyPI (pysolate).
Also this thing that never took off: https://github.com/fal-ai/isolate
Not quite the same but interesting!
Grace Hopper technology: A well formed Python program shall define an ENVIRONMENT division that specifies the environment in which the program will be compiled and executed. It outlines the hardware and software dependencies. This division is crucial for making COBOL^H^H^H^H^HPython programs portable across different systems.
Does this create a separate environment for each script? If so, won't that create lots of bloat?
It does create separate environments.
Each environment itself only takes a few dozen kilobytes to make some folders and symlinks (at least on Linux). People think of Python virtual environments as bloated (and slow to create) because Pip gets bootstrapped into them by default. But there is no requirement to do so.
The packages take up however much space they take up; the cost there is unavoidable. Uv hard-links packages into the separate environments from its cache, so you only pay a disk-space cost for shared packages once (plus a few more kilobytes for more folders).
(Note: none of this depends on being written in Rust, but Pip doesn't implement this caching strategy. Pip can, however, install cross-environment since 22.3, so you don't actually need the bootstrap. Pipx depends on this, managing its own vendored copy of Pip to install into multiple environments. But it's still using a copy of Pip that interacts with a Pip-styled cache, so it still can't do the hard-link trick.)
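You can see the bootstrap cost directly (timings vary by machine, but the difference is stark):

```bash
time python3 -m venv --without-pip quick-env   # just folders, a config file and a symlink
time python3 -m venv full-env                  # same, plus bootstrapping pip into it
```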
Yes, it creates a separate environment for each script. No, it doesn’t create a lot of bloat. There’s a separate cache and the packages are hard-linked into the environments, so it’s extremely fast and efficient.
Is the environment located in the .venv folder under the same directory as the script?
The venv is created and then discarded once the script finishes execution. This is well suited to one-off scripts like what is demonstrated in the article.
In a larger project you can manage venvs like this using `uv venv`, where you end up with a familiar .venv folder.
uv actually reuses environments when dependencies match, creating a content-addressed store that significantly reduces disk usage compared to traditional per-script virtualenvs.
This is very cool.
Note that PEP 723 is also supported by pipx run:
https://pipx.pypa.io/latest/examples/#pipx-run-examples
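So, under the same assumptions as the article, something like this should also work with a script carrying inline metadata (hypothetical filename):

```bash
pipx run ./transcript.py   # pipx reads the PEP 723 block and builds a temporary venv
```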
Using Guix (guix shell) it was already possible to run Python scripts one-off. I see others have also commented about doing it using Nix.
Also that would be reproducible, in contrast to what is shown in the blog post. To make that reproducible, one would have to keep the lock file somewhere, or state the checksums directly in the Python script file, which seems rather un-fun.
I like uv run and uvx like the swiss army knifes of python that they are, but PEP 723 stuff I think is mostly just a gimmick. I'm not convinced it's more than a cool trick.
It's useful for people who don't want to create a "project" or otherwise think about the "ecosystem". People who, if they share their code at all, will email it to coworkers or something. It lets you get by without a pyproject.toml file etc.
It’s amazing for one-offs.
Pretty nice!
Some Python devs told me it's an awesome language, but that they envy the Node.js ecosystem for its package management.
Seems like uv finally removed that roadblock.
I think they must have been joking!
Probably not. NPM has its problems but Python packaging has always been significantly messier (partly because, Python is much older than Node and, indeed, much older than the very concept of resolving dependencies over the internet).
The upside in Python is that dependencies tend to be more coarse grained and things break less when you update. With JS you have to be on the treadmill constantly to avoid bitrot, and because packages tend to be so small and dependency trees so large, there's a lot of potential points of failure when updating anything.
The bigger problem in Python has been its slowness and reliance on C dependencies.
Maven solved Java packaging circa 2005, for example. Yes, XML is verbose, but it's an implementation detail. Python still lags on many fronts, 20 years later.
An example: even now it makes 0 sense to me why virtual envs are not designed and supposed to be portable between machines with the same architecture (!). Or why venvs need to be activated with shell-variety specific code.
> An example:
None of this example has anything to do with performance or reliance on C dependencies, but ok.
> even now it makes 0 sense to me why virtual envs are not designed and supposed to be portable between machines with the same architecture (!).
They aren't designed to be relocatable at all - and that's the only actual stumbling block to it. (They may even contain activation scripts for other platforms!)
That's because a bunch of stuff in there specifies absolute paths. In particular, installers (Pip, at least) will generate wrapper scripts that specify absolute paths. This is so that you can copy them out of the environment and have them work. Yes, people really do use that workflow (especially on Windows, where symlinking isn't straightforward).
It absolutely could be made to work - probably fairly easily, and there have been calls to sacrifice that workflow to make it work. It's also entirely possible to do a bit of surgery on a relocated venv and make it work again. I've done it a few times.
The third-party `virtualenv` also offers some support for this. Their documentation says there are some issues with this. I'm pretty sure they're mainly talking about that wrapper-script-copying use case.
> Or why venvs need to be activated with shell-variety specific code.
The activation sets environment variables for the current shell. That isn't possible (at least in a cross-platform way) from Python since the Python process would be a child of that shell. (This is also why you have to e.g. use `source` explicitly to run the Linux versions.)
But venvs generally don't need to be activated at all. The only things the activation script effectively does:
* Set the path environment variable so that the virtual environment's Python (or symlink thereto) will be found first.
* Put some fancy stuff in the prompt so that you can feel like you're "in" the virtual environment (a luxury, not at all required).
* Set `VIRTUAL_ENV`, which some Python code might care about (but they could equally well check things like `sys.executable`)
* Unset (and remember) `PYTHONHOME` (which is a hack that hardly anyone has a good use case for anyway)
* (on some systems that don't have a separate explicit deactivate script) set up the means to undo all those changes
The actually important thing is the path variable change, and even then you don't need that unless the code is going to e.g. start a Python subprocess and ask the system to find Python. (Or, much more commonly, because you have a `#!/usr/bin/env python` shebang somewhere.) You can just run the virtual environment's Python directly.
In particular, you don't have to activate the virtual environment in order to use its wrapper scripts, as long as you can find them. And, in fact, Pipx depends on this.
> None of this example has anything to do with performance or reliance on C dependencies, but ok.
<C dependencies>
You'd realize why I wrote that if you used Java/Maven. Java is by and large self-contained. Stuff like Python, Ruby, PHP, Javascript[1] etc, are not, they depend on system libraries.
So when you install something on Solaris, FreeBSD, MacOS, Windows, well, then you have to deal with the whole mess.
1. Is the C dependency installed on the system? 2. Is it the correct major version or minor version? 3. Has it been compiled with the correct flags or whatnot? 4. If it's not on the system, can the programming language specific package manager pull a binary from a repo for my OS-arch combo? 5. If there's no binary, can the programming language specific package manager pull the sources and compile them for my OS-arch combo?
All of those steps can and do fail, take time, and sometimes you have to handle them yourself because of bugs.
Java is fast enough that almost everything can be written in Java, so 99% of the libraries you use only have 1 artifact: the universal jar, and that's available in Maven repos. No faffing around with wheels or whatnot, or worse, with actual system dependencies that are implicitly (or explicitly) required by dependencies written in the higher level programming language.
<Virtual envs>
I won't even bother to address in detail the insanity that you described about virtual envs; it's just Stockholm syndrome. Almost every other programming language does just fine without venvs. Also, I don't really buy the issue with the lack of portability; it's just a bunch of bad design decisions made early on in Python's history. Even for Windows there are better possibilities (symlinks are feasible, you just need admin access).
I say this as someone who's used basically all mainstream programming language over the years.
The sane way to do virtual envs is to have them be just... folders. No absolute paths, no activation, just have a folder and a configuration file. The launcher automatically detects a configuration file with the default name and the configuration file in turn points the launcher and Python to use the stuff in the folder.
Deployment then becomes... drumroll just copying the folder to another machine (usually zipping/unzipping it first).
* * *
[1] Javascript is a bit of a special case but it's still slower than Java, on average.
> Stuff like Python, Ruby, PHP, Javascript[1] etc, are not, they depend on system libraries.
The Python runtime itself may depend on system libraries.
Python packages usually include their own bundled compiled code (either directly in the wheel, or else building an sdist will build complete libraries that end up in site-packages rather than just wrappers). A wheel for NumPy will deliver Python code that uses an included copy of OpenBLAS, even if your system already had a BLAS implementation installed.
Regardless, that has no bearing on how virtual environments work.
> I won't even bother to address in detail the insanity that you described about virtual envs, it's just Stockholm syndrome. Almost every other programming language does just fine without venvs.
You say this like you think there's something difficult or onerous about using virtual environments. There really isn't.
> Also I don't really buy that issue with the lack of portability
The use of absolute paths is the only thing preventing you from relocating venvs, including moving them to another machine on the same architecture. I know because I have done the "surgery" to relocate them.
They really do have the reason for using absolute paths that I cited. Here's someone from the Pip team saying as much a few days ago: https://discuss.python.org/t/_/96177/3 (Using a relative path in the wrapper script would of course also require a little more work to make it relative to the script rather than to the current working directory. It's worse for the activation script; I don't even know if that can be done in pure sh when the script is being sourced.)
Yes, it's different between Linux and Windows, because installers will create actual .exe wrappers (stub executables that read their own file name and then `CreateProcess` a Python process) instead of Python wrapper scripts with a shebang. They do this explicitly because of the UX that Windows users expect.
> Even for Windows there are better possibilities (symlinks are feasible, you just need admin access).
Please go survey all the people you know who write code on Windows and see how many of them have ever heard of a symlink or understand what they are. But also, giving admin rights to these Python tools all the time is annoying, and bad security practice (since they don't actually need to modify any system files).
> The sane way to do virtual envs is to have them be just... folders. No absolute paths, no activation, just have a folder and a configuration file.
A virtual environment is just a folder (hierarchy) and a configuration file. Like I said, activation is not required to use them. And the fact that certain standard tools create and expect absolute paths is not essential to what a virtual environment is. If you go in and replace the absolute paths with relative paths, you still have a virtual environment, and it still works as you'd expect - minus the tradeoffs inherent to relative paths. And like I said, there is third-party tooling that will do this for you.
Oh, I guess you object because there are actual symlinks (or copies by default on Windows) to Python in there. That's the neat thing: because you start Python from that path, you don't need a launcher. (And you also, again, don't need to activate. But you can if you want, for a UX that some prefer.) Unless by "launcher" you meant the wrapper script? Those are written to invoke the venv's Python - again, you don't need to activate. Which is why this can work:
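Presumably something like this (hypothetical package; `http` is the wrapper script that httpie installs):

```bash
python3 -m venv demo-env
demo-env/bin/pip install httpie          # install into the venv from the outside
demo-env/bin/http --version              # run the installed wrapper script directly
demo-env/bin/python -c "import httpie"   # or invoke the venv's interpreter
```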
A venv is created, and used without activation. Again, yes, the wrapper scripts would have to be a little more complex in order to make relative paths work reliably - but that's a Pip issue, not a venv issue.
> But also, giving admin rights to these Python tools all the time is annoying, and bad security practice (since they don't actually need to modify any system files).
You only need to do it once, when the symlink is created...
My point is that venvs are a code smell. Again, there's a reason basically no other programming language ecosystem needs them. They're there because in Python's history, that was the best idea they had 20 years ago (or whenever they were created), where they didn't even bother to do their homework and see what other ecosystems did to solve that specific problem.
> You say this like you think there's something difficult or onerous about using virtual environments. There really isn't.
They're a bad, leaky, abstraction. They provide value through a mechanism that is cumbersome and doesn't even work well compared to basically all competing solution used outside of Python.
* * *
Anyway, I've used Python for long enough and I've seem many variations of your argument often enough that I'm just... bored. Python packaging is a cluster** and it has sooo many braindead ideas (like setup.py having arbitrary code in it :-| ) that yes, with a ton of work invested in it, if you squint enough, it basically works.
But that doesn't excuse the ton of bad design around it.
> Again, there's a reason basically no other programming language ecosystem needs them.
You have failed to explain how they are meaningfully different from what other programming language ecosystems use to create isolated environments, such that I should actually care about the difference.
Again: activating the venv is not in any way necessary to use it. The only relevant components are some empty folders, `pyvenv.cfg` and a symlink to the Python executable (Windows' nonsense notwithstanding).
> 20 years ago (or whenever they were created), where they didn't even bother to do their homework and see what other ecosystems did to solve that specific problem.
Feel free to state your knowledge of what other ecosystems did to solve those problems at the time, and explain what is substantively different from what a venv does and why it's better to do it that way.
No, a .jar is not comparable, because either it vendors its dependencies or you have to explain what classpath to use. And good luck if you want to have multiple versions of Java installed and have the .jar use the right one.
> They're a bad, leaky, abstraction. They provide value through a mechanism that is cumbersome and doesn't even work well compared to basically all competing solution used outside of Python.
You have failed to demonstrate this, and it does not match my experience.
> it has sooo many braindead ideas (like setup.py having arbitrary code in it :-| )
You know that setup.py is not required for packaging pure-Python projects, and hasn't been for many years? And that it only appears in a source distribution and not in wheels? And that in other ecosystems where packages contain arbitrary code in arbitrary foreign languages that needs to be built on the end user's machine, the package also includes arbitrary code that gets run at install time in order to orchestrate that build process? (For that matter, Linux system packages do this; see e.g. https://askubuntu.com/questions/62534 .)
Yes, using arbitrary code to specify metadata was braindead. Which is why pyproject.toml exists. And do keep in mind that the old way was conceived of in a fundamentally different era.
> And do keep in mind that the old way was conceived of in a fundamentally different era.
Maven first appeared in 2004 (and took the Java world by storm, it was widely adopted within a few years). Not studying prior art seems to happen a lot in our field.
> Feel free to state your knowledge of what other ecosystems did to solve those problems at the time, and explain what is substantively different from what a venv does and why it's better to do it that way.
Maven leverages the Java CLASSPATH to avoid them entirely.
There is a single, per-user, shared repository.
So every dependency is stored only once.
The local repository is actually more or less an incomplete clone of the remote repository, which makes the remote repository really easy to navigate with basic tools (the remote repo can be an Apache hosted anywhere, basically).
The repository is name spaced and things are very neatly grouped up by multiple levels (groupId, artifactId, version, type).
When you build or run something through Maven, you need:
1. The correct JAVA in your PATH. 2. A pom.xml config file in your folder (yes, Maven is THAT old, from back when XML was cool).
That's it.
You don't need to activate anything, ever.
You don't need to care about locations of whatamajigs in project folders or temp folders or whatever.
You don't need symlinks.
One of the many Maven packaging plugins spits out the correct package format you need for your platform.
Maven does The Right ThingTM, composing the correct CLASSPATH for your specific project/folder.
There is NO concept of a "virtual env", because ALL envs, by default, are "virtual". They're all compartmentalized by default. Nobody's stepping on anyone else's toes.
You take that plus Java's speed, so no need for slightly faster native dependencies (except in very rare cases), and installing or building a Maven project you've never seen in your life is trivial (unless the authors went to great lengths to avoid that for some weird reason).
Now THAT's design.
Python has a hodge-podge of a programming language ecosystem with a brilliant beginner-friendly programming language syntax UX (most languages in the future will basically look like Python's pseudo-pseudocode), that's slowly starting to look like something that's actually been designed as a programming ecosystem. Similar story to Javascript, actually.
Anyway, this was more of a rant. I know Python is fixing these sins of its infancy.
I'm happy it's doing that because it's making my life a bit more liveable.
https://web.archive.org/web/20250624191820/https://www.cotto...
Thanks! I'm guessing it's blocked on your work/uni network too? Stupid over-eager firewall.
I also tried to open it got notified it was blocked due to "pornography" - lovely!
If PEP 723 is only an enhancement proposal does it work only because `uv` happens to support it?
Can you not use `uvx` with your script because it only works on packages that are installed already or on PyPi?
PEP 723 was incorporated (with modifications) into the official Python packaging specifications: https://packaging.python.org/en/latest/specifications/inline...
I don't think running with uv vs uvx imposes any extra limitations on how you specify dependencies. You should either way be able to reference dependencies not just from PyPi but also by git repo or local file path in a [tool.uv.sources] table, the same as you would in a pyproject.toml file.
PEP 723 is final and most relevant tools will support it:
https://discuss.python.org/t/40418/82
uvx is useful to run scripts inside PyPi packages. It does not support running Python scripts directly
You can use uvx to run scripts with a combination of the --with flag to specify the dependencies and invoking python directly. For example:
uvx --with youtube-transcript-api python transcript.py
But you won't get the benefit of PEP 723 metadata.
Ok I didn’t know about this pep. But I love uv. I use it all day long. Going to use this to change up a lot of my shell scripts into easily runnable Python!
I honestly don't like that this is expressed as a comment but I guess it makes the implementation easy and backwards compatible...
been doing this with Pipenv before, but uv is like Pipenv on steroids.
My only question is: who asked for faster pip?
uv has a lot more perks! It makes distributing python tooling easier too
Comparing apples and oranges here. uv's scope is so much more than pip's.
There's no lockfile or anything with this approach right? So in a year or two all of these scripts will be broken because people didn't pin their dependencies?
I like it though. It's very convenient.
> There's no lockfile or anything with this approach right?
There are options to both lock the dependencies and limit by date:
https://docs.astral.sh/uv/guides/scripts/#locking-dependenci...
https://docs.astral.sh/uv/guides/scripts/#improving-reproduc...
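To the best of my recollection of those docs: `uv lock --script example.py` writes a lockfile next to the script, and you can cap resolution by date with an `exclude-newer` entry in the inline metadata, roughly like so:

```python
# /// script
# dependencies = ["requests"]
# [tool.uv]
# exclude-newer = "2025-01-01T00:00:00Z"
# ///
```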
> So in a year or two all of these scripts will be broken because people didn't pin their dependencies?
People act like this happens all the time but in practice I haven't seen evidence that it's a serious problem. The Python ecosystem is not the JavaScript ecosystem.
I think it's because you don't maintain much python code, or use many third party libraries.
An easy way to prove that this is the norm is to take some existing code you have now, update the packages your dependencies rely on to their latest versions, and watch everything break. You don't see a problem because those dependencies use pinned/very restricted versions, hiding the frequency of the problem from you. You'll also see that, in their issue trackers, they've closed all sorts of version-related bugs.
> An easy way to prove that this is the norm is to take some existing code you have now, and update to the latest versions your dependencies are using
I have done this many times and watched everything fail to break.
Are you sure you’re reading what I wrote fully? Getting pip, or any of them, to ignore all version requirements, including those listed by the dependencies themselves, required modifying source, last I tried.
I’ve had to modify code this week due to changes in some popular libraries. Some recent examples: Numpy 2.0 broke most code that used numpy. They changed the C side (full interpreter crashes with trimesh) and removed/moved common functions, like array.ptp(). Scipy moved a bunch of stuff lately, and fully removed some image-related things.
If you think python libraries are somehow stable in time, you just don’t use many.
... So if the installer isn't going to ignore the version requirements, and thereby install an unsupported package that causes a breakage, then there isn't a problem with "scripts being broken because people didn't pin their dependencies". The packages listed in the PEP 723 metadata get installed by an installer, which resolves the listed (unpinned) dependencies to concrete ones (including transitive dependencies), following rules specified by the packages.
I thought we were talking about situations in which following those rules still leads to a runtime fault. Which is certainly possible, but in my experience a highly overstated risk. Packages that say they will work with `foolib >= 3` will very often continue to work with foolib 4.0, and the risk that they don't is commonly-in-the-Python-world considered worth it to avoid other problems caused by specifying `foolib >=3, <4` (as described in e.g. https://iscinumpy.dev/post/bound-version-constraints/ ).
The real problem is that there isn't a good way (from the perspective of the intermediate dependency's maintainer) to update the metadata after you find out that a new version of a (further-on) dependency is incompatible. You can really only upload a new patch version (or one with a post-release segment in the version number) and hope that people haven't pinned their dependencies so strictly as to exclude the fix. (Although they shouldn't be doing that unless they also pin transitive dependencies!)
That said, the end user can add constraints to Pip's dependency resolution by just creating a constraints file and specifying it on the command line. (This was suggested as a workaround when Setuptools caused a bunch of legacy dependencies to explode - not really the same situation, though, because that's a build-time dependency for some packages that were only made available as sdists, even pure-Python ones. Ideally everyone would follow modern practice as described at https://pradyunsg.me/blog/2022/12/31/wheels-are-faster-pure-... , but sometimes the maintainers are entirely MIA.)
> Numpy 2.0 is a very recent example that broke most code that used numpy.
This is fair to note, although I haven't seen anything like a source that would objectively establish the "most" part. The ABI changes in particular are only relevant for packages that were building their own C or Fortran code against Numpy.
> `foolib >= 3` will very often continue to work with foolib 4.0,
Absolute nonsense. It's industry standard that major versions are widely accepted as/reserved for breaking changes. This is why you never see >= in any sane requirements list; you see `foolib == 3.*`. For anything you want to work for a reasonable amount of time, you see == 3.4.*, because deprecations often still happen within major versions, breaking all code that used those functions.
Breaking changes don't break everyone. For many projects, only a small fraction of users are broken any given time. Firefox is on version 139 (similarly Chrome and other web browsers); how many times have you had to reinstall your plugins and extensions?
For that matter, have you seen any Python unit tests written before the Pytest 8 release that were broken by it? I think even ones that I wrote in the 6.x era would still run.
For that matter, the Python 3.x bytecode changes with every minor revision and things get removed from the standard library following a deprecation schedule, etc., and there's a tendency in the ecosystem to drop support for EOL Python versions, just to not have to think about it - but tons of (non-async) new code would likely work as far back as 3.6. It's not hard to avoid the := operator or the match statement (f-strings are definitely more endemic than that).
On the flip side, you can never really be sure what will break someone. Semver is an ideal, not reality (https://hynek.me/articles/semver-will-not-save-you).
And lots of projects are on calver anyway.
PEP 723 allows you to specify version numbers for direct dependencies, but of course indirect dependencies aren't guaranteed to be the same.
> For the longest time, I have been frustrated with Python because I couldn’t use it for one-off scripts.
Bruh, one-off scripts is the whole point of Python. The cheat code is to add "break-system-packages = true" to ~/.config/pip/pip.conf. Just blow up ~/.local/lib/pythonX.Y/site-packages/ if you run into a package conflict (exceedingly rare) and reinstall. All these venv, uv, metadata peps, and whatnot are pointless complications you just don't need.
> If you are not a Pythonista (or one possibly living under a rock)
That's bait! / Ads are getting smarter!
I would also have accepted "unless you're geh", "unless you're a traitor to the republic", "unless you're not leet enough" etc.
I'm not a python dev, but if you read HN even semi-regularly you have surely come across it several times in at least the past few months if not a year by now. It is all the rage these days in python world it seems.
And so, if you are the kind of person who has not heard of it, you probably don't read blogs about python, therefor you probably aren't reading _this_ blog. No harm no foul.
Why do I feel like I’m in an infomercial?
> uv is an extremely fast Python package and project manager, written in Rust.
Is there a version of uv written in Python? It's weird (to me) to have an entire ecosystem for a language and a highly recommended tool to make your system work is written in another language.
Similar to ruff, uv mostly gathers ideas from other tools (with strong opinions and a handful of thoughtful additions and adjustments) and implements them in Rust for speed improvements.
Interestingly, the speed is the main differentiator from existing package and project management tools. Even if you are using it as a drop-in replacement for pip, it is just so much faster.
They are not making a Python version.
There are many competing tools in the space, depending on how you define the project requirements.
Contrary to the implication of other replies, the lion's share of uv's speed advantage over Pip does not come from being written in Rust, from any of the evidence available to me. It comes from:
* bootstrapping Pip into the new environment, if you make a new environment and don't know that you don't actually have to bootstrap Pip into that environment (see https://zahlman.github.io/posts/2025/01/07/python-packaging-... for some hints; my upcoming post will be more direct about it - unfortunately I've been putting it off...)
* being designed up front to install cross-environment (if you want to do this with Pip, you'll eventually and with much frustration get a subtly broken installation using the old techniques; since 22.3 you can just use the `--python` flag, but this limits you to environments where the current Pip can run, and re-launches a new Pip process taking perhaps an additional 200ms - but this is still much better than bootstrapping another copy of Pip!)
* using heuristics when solving for dependencies (Pip's backtracking resolver is exhaustive, and proceeds quite stubbornly in order)
* having a smarter caching strategy (it stores uncompressed wheels in its cache and does most of the "installation" by hard-linking these into the new environment; Pip goes through a proxy that uses some opaque cache files to simulate re-doing the download, then unpacks the wheel again)
* not speculatively pre-loading a bunch of its own code that's unlikely to execute (Pip has large complex dependencies, like https://pypi.org/project/rich/, which it vendors without tree-shaking and ultimately imports almost all of, despite using only a tiny portion)
* having faster default behaviours; e.g. uv defaults to not pre-compiling installed packages to .pyc files (since Python will do this on the first import anyway) while Pip defaults to doing so
* not (necessarily) being weighed down by support for legacy behaviours (packaging worked radically differently when Pip first became publicly available)
* just generally being better architected
None of these changes require a change in programming language. (For example, if you use Python to make a hard link, you just use the standard library, which will then use code written in C to make a system call that was most likely also written in C.) Which is why I'm making https://github.com/zahlman/paper .
But also, because it's written in rust. There are tools written in python that do these smart caching and resolving tricks as well, and they are still orders of magnitude slower
Such as?
Poetry doesn't do this caching trick. It creates its own cache with the same sort of structure as Pip's, and as far as I can tell it uses its own reimplementation of Pip's core installation logic from there (including `installer`, which is a factored-out package for the part of Pip that actually unpacks the wheel and copies files).
uv wins precisely because it isn't written in python. As various people have pointed out, it can complete its run before competing python implementations have finished handling their imports.
Besides, the most important tool for making python work, the python executable itself, is written in C. People occasionally forget it's not a self-hosting language.
Well, I use Debian and Bash: pretty much everything to make my system work, including and especially Python development, is written in C, another language!
pip?
A tool written in Python is never going to be as fast as one written in Rust. There are plenty of Python alternatives and you're free to use them.