Uv 0.3 – Unified Python packaging (astral.sh)
163 points by gaspb 4 months ago | 56 comments



To me the Astral folks have a lot of credibility because both their ruff linter and formatter have been fantastic. Elevates this kind of announcement from yet another Python packaging thingy to something worth paying attention to.

I like the idea of the single-file Python script with inline dependency info, but it's probably going to be a bummer in terms of editor experience. I doubt the typical LSP servers will be aware of dependencies specified that way, and so won't offer autocompletion etc. for those libraries.


Good thing we've been investing in our LSP server lately[1]! We're still a ways out from integrating uv into the LSP (and, more generally, providing auto-completions) but it's definitely on our minds.

The script dependency metadata _is_ standardized[2], so other LSP servers could support a good experience here (at least in theory).
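
For reference, the inline metadata is just a TOML block in comments at the top of the script (per PEP 723; the `requests` dependency here is only an example):

  # /// script
  # requires-python = ">=3.12"
  # dependencies = ["requests"]
  # ///
  import requests
  print(requests.get("https://example.com").status_code)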

[1] The Ruff Language Server: https://astral.sh/blog/ruff-v0.4.5

[2] Inline script metadata: https://peps.python.org/pep-0723/


Do you intend to reach feature parity with pyright for ruff-lsp? I wasn't aware you were aiming to make it a full-fledged LSP, even though I use it daily.


> I doubt the typical LSP servers will be aware of dependencies specified in that way and so won't offer autocompletion etc for those libraries.

Given this is a recently accepted standard (PEP 723), why would language servers not start to support features based on it?


Well it's not that they can't, but it's definitely work because it's a departure from the traditional model.

Consider an editor feature, e.g. goto-definition. When working in a normal Python environment (global or virtual) the code of your dependencies actually exists on the filesystem. With one of these scripts with inline dependency information, that dependency code perhaps doesn't exist on the filesystem at all, and possibly won't ever until after the script has been run in its special way (e.g. with a `pipx run` shebang?).
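
Something like this, I mean (sketch; both `uv run` and `pipx run` read the inline metadata and materialize a throwaway environment on first run, as I understand their docs):

  $ head -4 script.py
  #!/usr/bin/env -S uv run
  # /// script
  # dependencies = ["requests"]
  # ///
  $ ./script.py   # the environment is created (and cached) only at this point

Until that first run, there's nothing on disk for the editor to index.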


I love the idea of Rye/uv/PyBi managing the interpreter, but I get queasy that they are not official Python builds. Probably no issue, but it only takes one subtle bug to ruin my day.

Plus the potential for a supply chain attack. I know official Python releases are as good as I can expect from a free open source project, while a third-party Python distribution is probably being built in Nebraska.


We'd love it if there _were_ official portable Python binaries, but there just aren't. We're not just distributing someone else's builds though, we're actively getting involved in the project, e.g., we did the last five releases.

We've invested quite a bit of effort into finding system Python interpreters though and support for bringing your own Python versions isn't going anywhere.


I'm from Nebraska. Unfortunately, if your Python is compiled in a datacenter in Iowa, it's more likely that it was powered by wind energy. Claim: Iowa has better clean energy PPAs for datacenters than Nebraska (mostly due to rational wind energy subsidies).

Anyway: software supply chain security means signing Python and package builds, and then signing the containers too.

Conda-forge's builds are probably faster than the official CPython builds. conda-forge/python-feedstock/recipe/meta.yaml: https://github.com/conda-forge/python-feedstock/blob/main/re...

Conda-forge also has OpenBLAS, BLIS, accelerate, netlib, and Intel MKL; conda-forge docs > switching BLAS implementation: https://conda-forge.org/docs/maintainer/knowledge_base/#swit...

From "Building optimized packages for conda-forge and PyPI" at EuroSciPy 2024: https://pretalx.com/euroscipy-2024/talk/JXB79J/ :

> Since some time, conda-forge defines multiple "cpu-levels". These are defined for sse, avx2, avx512 or ARM Neon. On the client-side the maximum CPU level is detected and the best available package is then installed. This opens the doors for highly optimized packages on conda-forge that support the latest CPU features.

> We will show how to use this in practice with `rattler-build`

> For GPUs, conda-forge has supported different CUDA levels for a long time, and we'll look at how that is used as well.

> Lastly, we also take a look at PyPI. There are ongoing discussions on how to improve support for wheels with CUDA support. We are going to discuss how the (pre-)PEP works and synergy possibilities of rattler-build and cibuildwheel

Linux distros build and sign Python and python3-* packages with GPG keys or similar, and then the package manager optionally checks the per-repo keys for each downloaded package. Packages should include a manifest of files to be installed, with per-file checksums. Package manifests, and/or the package containing the manifest, should be signed, so that tools like debsums and rpm --verify can detect changes to disk-resident executables, scripts, data assets, and configuration files.
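
Concretely, verification on the installed system looks roughly like:

  $ debsums python3                      # Debian: verify installed files against package checksums
  $ rpm --verify python3                 # RPM: flag changed size/mode/digest of installed files
  $ rpm -qi python3 | grep -i signature  # show which key signed the package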

Virtualenvs can be mounted as a volume at build time with -v with some container image builders, or copied into a container image with the ADD or COPY instructions in a Containerfile. Whatever is added to the virtualenv should have a signature and a version.
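
A sketch of the COPY route (caveat: venvs hard-code absolute paths in their shebangs, so build the venv at the path it will occupy in the image, or rebuild it there):

  $ cd /app && python -m venv .venv
  $ .venv/bin/pip install --require-hashes -r requirements.txt   # requirements pinned with versions and hashes
  $ cat Containerfile
  FROM python:3.12-slim
  WORKDIR /app
  COPY .venv /app/.venv
  ENV PATH="/app/.venv/bin:$PATH"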

ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/ :

  rpm-ostree rebase ostree-image-signed:registry:<oci image>
  rpm-ostree rebase ostree-image-signed:docker://<oci image>
> Fetch a container image and verify that the container image is signed according to the policy set in /etc/containers/policy.json (see containers-policy.json(5)).

So, when you sign a container full of packages, you should check the package signatures; and verify that all package dependencies are identified by the SBOM tool you plan to use to keep dependencies upgraded when there are security upgrades.

e.g. Dependabot - if working - will regularly run and send a pull request when it detects that the version strings in e.g. a requirements.txt or environment.yml file are out of date and need to be changed because of reported security vulnerabilities in ossf/osv-schema format.
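
The config for that is small; for the pip ecosystem it's roughly:

  $ cat .github/dependabot.yml
  version: 2
  updates:
    - package-ecosystem: "pip"   # also covers requirements.txt and pyproject.toml
      directory: "/"
      schedule:
        interval: "weekly"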

Is there already a way to, as a developer, sign Python packages built with cibuildwheel with Twine and TUF or sigstore to be https://SLSA.dev/ compliant?


We changed the URL from https://github.com/astral-sh/uv/releases/tag/0.3.0 to the project page. Interested readers probably should look at both.


Is there any reason to still use Rye now? It looks like this release adds all the things I would have missed from Rye, but I don't think I use all of Rye's features.



I think uv hits everything that rye does, and with a solid implementation.

With love, rye is all vision, philosophy and duct tape. uv is built by a full-time team.


Aren't they the same team?


Rye was a one-man project by mitsuhiko before it was adopted by Astral. I think it's basically just his work.


See "Rye and Uv: August Is Harvest Season for Python Packaging"[1]

[1] https://lucumr.pocoo.org/2024/8/21/harvest-season/



Does it support building native extensions and Cython modules or are setuptools still the only reasonable way to do this?


Uv is an installer, not a build backend. It's similar to pip. If you install a library with uv, it will call a backend like setuptools as needed. It is not a replacement for setuptools.
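
In other words, the backend is whatever your pyproject.toml declares, and uv just invokes it, e.g.:

  $ cat pyproject.toml
  [build-system]
  requires = ["setuptools>=64", "wheel"]
  build-backend = "setuptools.build_meta"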


Does it support NumPy and PyTorch?


The astral team is definitely doing great work, and it's wonderful that these tools are permissively licensed, but what happens if astral doesn't work out as a business?


Does anyone know if Astral has plans to build an LSP like pyright? There are many projects that try to replicate pyright's functionality, pylyzer comes to mind, but don't have sufficient coverage (e.g. missing Generic support). Having a team like Astral's behind creating a fast and good LSP for Python would be great.


Ruff (the linter/formatter from Astral) has its own LSP right now: https://github.com/astral-sh/ruff-lsp. Although Ruff itself now ships with a language server already integrated, so I have no idea what the plan is long term.


What does the Astral team recommend for setting up their tools on a new machine? Install uv with curl, manage python versions and projects from there and install ruff with uv?


Not on the Astral team, but to the first step, I'd get uv from your distro package manager (e.g. https://build.opensuse.org/package/show/openSUSE:Factory/uv) and then the rest as you say ("manage python versions and projects from there and install ruff with uv").

If you have some other tool manager on your system (e.g. mise) then you can likely install uv through that.
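
Either way the bootstrap is short; roughly (the install URL is the one from uv's docs, but read the script before piping it to sh):

  $ curl -LsSf https://astral.sh/uv/install.sh | sh
  $ uv python install 3.12   # managed interpreter
  $ uv tool install ruff     # ruff as a standalone tool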


Yeah, I think so. Their documentation includes a note on how to bootstrap from a fresh Docker image here: https://docs.astral.sh/uv/guides/integration/docker/


Congrats! What’s impressive is not just the speed of the tools Astral develops but also the speed of delivery.

I wonder if you plan to extend the functionality for building and publishing packages, for example support for dynamic versions (from GitHub) and trusted publishers.


I've been closely following the Astral team's work. I am excited by this release and look forward to trying out uv again.

I have been working with python for over 10 years and have standardized my workflow to only use pip & setuptools for all dependency management and packaging [1]. It works great as a vanilla setup and is 100% standards based. I think uv and similar tools mostly shine when you have massive codebases.
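
For reference, the whole vanilla loop is roughly (the "dev" extra is just an example name):

  $ python -m venv .venv && . .venv/bin/activate
  $ python -m pip install -e '.[dev]'   # assumes a "dev" extra in pyproject.toml
  $ python -m pip install build && python -m build   # sdist + wheel in dist/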

1: https://dubnest.com/blog/vanilla-python-packaging/


It's too bad that the blog post explicitly mentions rye but does not explain how the new features included in uv 0.3 will affect the role that rye will play going forward.


We wanted to cover this in a dedicated discussion. See https://github.com/astral-sh/rye/discussions/1342


Wonder if there’s any reason to stick with hatch or move fully to uv now? Seems maybe hatch still handles the testing matrix whereas uv doesn’t?

EDIT: I still like how hatch allows for defining multiple envs for a given project with the ability to switch between / run in them by name.


Love that you’ve kept the workspace support from Rye. Is there any support for workspace _builds_? That is, resolving the graph of interdependencies between packages to build a wheel/Dockerfile/whatever?

Fwiw I’m building a thing [1] that does this. Current docs suggest Rye but will s/rye/uv/ shortly. It’s basically just some CLI commands and a Hatch/PDM plugin that injects needed stuff at build-time.

[1] https://una.rdrn.me/


Just did some playing around and workspace support has definitely evolved from Rye [1], but there isn't yet (afaict) any mechanism that supports building workspace packages with their "internal" dependencies.

[1] https://docs.astral.sh/uv/concepts/workspaces/
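
For reference, the workspace config itself is minimal (the member name below is made up):

  $ cat pyproject.toml
  [tool.uv.workspace]
  members = ["packages/*"]
  [tool.uv.sources]
  my-lib = { workspace = true }   # resolve this dep from the workspace, not PyPI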


I'm happy to see uv is adopted in Pixi, which is a new personal favorite:

https://prefix.dev/blog/uv_in_pixi

Reasons for liking pixi, over e.g. poetry:

- Like poetry, it keeps everything in a project-local definition file and environment

But unlike poetry, it also:

- Can install python itself, even ancient versions

- Can install conda packages

- Is extremely fast
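
A typical session looks roughly like this (commands from the pixi docs; double-check the exact spec syntax):

  $ pixi init myproject && cd myproject
  $ pixi add python=3.9 numpy   # conda packages, including old Pythons
  $ pixi run python -c "import numpy; print(numpy.__version__)"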


The documentation suggests that `uv python list` should be able to see my pyenv Pythons as well... but it doesn't appear to?


Seems they need to be on $PATH to get picked up. In Rye there's also a command to "register" a Python install at a specific path.

https://docs.astral.sh/uv/concepts/python-versions/#discover...

https://rye.astral.sh/guide/toolchains/#registering-toolchai...
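
A quick sanity check, something like:

  $ echo "$PATH" | tr ':' '\n' | grep -i pyenv   # is pyenv actually on PATH?
  $ uv python list                               # what uv discovered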


My <HOME>/.pyenv/shims directory is on the path, but I'm still getting nothing from uv about the pyenv versions specifically.


File an issue on their GitHub if you don't already see one; they're very responsive.



Why is it called “uv”?


How's the work on a type checker coming along?


Imagine mypy with the speed of uv…


What's the recommended/opinionated


This:


Ahhhh yes they did it. They delivered workspaces and editable dependencies, which is literally all I needed.

I can't wait to set this up, I'm very confident they'll iron out any remaining bugs or gotchas (e.g. CUDA) quickly.


Ok so stupid question.. even though the majority of package documentation seems to mention pip, it's very dated, right? And I have only been trying to focus a bit more on python the last couple of years, so actually I just found out about poetry like last month. But poetry is also very uncool now right?

Most people not on the bleeding edge use conda, not poetry? But people who are hip use rye and uv? Up until today and now they only use uv if possible?

I'm actually building a system around user-installed plugins. Where there is a UI to search for and install plugins on the fly.

Also one other thing just to double check, it is now very uncool or considered bad practice to use dynamic or flexible types in Python?


Conda is mainly used by ML, AI, and data science people and to a certain extent feels like its own separate ecosystem. In other areas, like web dev, conda use is pretty rare.


Ok so for web dev, are most projects using poetry, then, or something else?


Most projects do just fine with pip + venv, and that’s what they stick to.

The exception is if they have specific dependencies outside the CPython ecosystem - in which case they’ll probably be using conda. Examples of such dependencies include nodejs/cuda/cublas/specific versions of gcc. Webdev generally doesn’t have as many of these dependencies compared to the data world, which is why conda is less popular there.

Speaking in sweeping generalities here: you probably don’t need poetry, uv, or kin at all. But there’s nothing wrong with choosing to use them if you prefer to.
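
i.e., the split is roughly:

  $ python -m venv .venv && . .venv/bin/activate && pip install -r requirements.txt   # the common case
  $ conda create -n proj python=3.11 nodejs             # the "deps outside CPython" case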


ok.. thanks.. but the reaction in this thread seems to slightly contradict that? I didn't see anyone say "we use pip + venv, it works fine".


Think of it this way: the release of VS Code didn’t mean people suddenly stopped using or updating Emacs/Vim. VS Code simply offered a more polished, beginner-friendly way of setting up and building software projects than the old TUI editors.

In the same way, none of the “fancy-pip-replacement” projects will outright obsolete pip or conda. They’re just tools that can work a bit more intuitively for new users and provide a bit of polish/UX value - but their niche fills the exact same role as pip/conda: managing the set of binaries on your PATH.



I think they meant "unified" here in the same sense in which BusyBox is unified, i.e. several things you'd normally use different tools for combined into one executable/project. They're not trying to invent any new standards, most of these tasks already have PEPs that the existing tools and uv both implement. Maybe not the best word choice though because I normally have the same instinctive reaction in response to someone claiming something is "unified".
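
Concretely, the consolidation looks something like this (subcommand names per the uv docs):

  $ uv venv                  # ~ python -m venv / virtualenv
  $ uv pip install requests  # ~ pip
  $ uv python install 3.12   # ~ pyenv
  $ uv tool run ruff check   # ~ pipx run
  $ uv lock                  # ~ pip-compile / poetry lock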


Unfortunate that Python dev these days requires dependency manager managers to set up an entire for-purpose Python for each project, as opposed to being able to run on the system Python (like every other scripting language). A victim of its popularity.

Now, since uv is in Rust, we'll need a dependency manager manager manager to compile it on any OS that isn't rolling-release, since what rustc is changes every 3 months and breaks forward compatibility.

I'll check back in 3 years (3x longer than Astral has existed so far) and see if uv still exists and has become stable enough to use.


Python is hardly unique there. Other trends include using docker images for tools (using all kinds of for-purpose frozen version language runtimes)


Probably will need a K8S management cluster to manage your python project management tools that will of course need another cluster manager for managing the project management tools…


That was my instinctive reaction as well, but as others have pointed out, the team does have credibility from e.g. the Ruff linter.



