To me the Astral folks have a lot of credibility, because both their Ruff linter and formatter have been fantastic. That elevates this kind of announcement from yet another Python packaging thingy to something worth paying attention to.
I like the idea of the single-file Python script construct with inline dependency info, but it's probably going to be a bummer in terms of editor experience. I doubt the typical LSP servers will be aware of dependencies specified that way, and so won't offer autocompletion etc. for those libraries.
Good thing we've been investing in our LSP server lately[1]! We're still a ways out from integrating uv into the LSP (and, more generally, providing auto-completions) but it's definitely on our minds.
The script dependency metadata _is_ standardized[2], so other LSP servers could support a good experience here (at least in theory).
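For reference, the standardized inline metadata block (PEP 723) looks like this; the dependency is just an example:

    # /// script
    # requires-python = ">=3.12"
    # dependencies = [
    #     "requests",
    # ]
    # ///
    import requests  # an LSP that parses the block above could, in principle, resolve this

    print(requests.get("https://astral.sh").status_code)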
Do you intend to reach feature parity with pyright for ruff-lsp? I wasn't aware you were aiming to make it a full-fledged LSP, despite using it daily.
Well it's not that they can't, but it's definitely work because it's a departure from the traditional model.
Consider an editor feature, e.g. goto-definition. When working in a normal Python environment (global or virtual), the code of your dependencies actually exists on the filesystem. With one of these scripts with inline dependency information, that dependency code may not exist on the filesystem at all, and possibly won't until the script has been run in its special way (e.g. with a `pipx run` shebang?).
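To illustrate, a runnable version of that scenario might look like the sketch below; the shebang and dependency choice are just illustrative. The point is that `rich` only materializes in some cache once a runner like pipx or uv actually executes the script:

    #!/usr/bin/env -S pipx run
    # /// script
    # dependencies = ["rich"]
    # ///
    from rich import print  # nothing for goto-definition to jump to until the runner fetches it

    print("[bold]it works, but where does rich live?[/bold]")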
I love the idea of Rye/uv/PyBi for managing the interpreter, but I get queasy that they are not official Python builds. Probably no issue, but it only takes one subtle bug to ruin my day.
Plus the potential for supply-chain attacks. I know official Python releases are as good as I can expect of a free open-source project, while a third-party Python distribution is probably being built in Nebraska.
We'd love it if there _were_ official portable Python binaries, but there just aren't. We're not just distributing someone else's builds though, we're actively getting involved in the project, e.g., we did the last five releases.
We've invested quite a bit of effort into finding system Python interpreters, though, and support for bringing your own Python versions isn't going anywhere.
I'm from Nebraska. Unfortunately if your Python is compiled in a datacenter in Iowa, it's more likely that it was powered with wind energy. Claim: Iowa has better Clean Energy PPAs for datacenters than Nebraska (mostly due to rational wind energy subsidies).
Anyways: software supply-chain security, Python and package build signing, and then containers and signing them too.
> For some time now, conda-forge has defined multiple "cpu-levels". These are defined for sse, avx2, avx512, and ARM Neon. On the client side, the maximum CPU level is detected and the best available package is then installed. This opens the door for highly optimized packages on conda-forge that support the latest CPU features.
> We will show how to use this in practice with `rattler-build`
> For GPUs, conda-forge has supported different CUDA levels for a long time, and we'll look at how that is used as well.
> Lastly, we also take a look at PyPI. There are ongoing discussions on how to improve support for wheels with CUDA support. We are going to discuss how the (pre-)PEP works and the synergy possibilities of rattler-build and cibuildwheel.
Linux distros build and sign Python and python3-* packages with GPG keys or similar, and the package manager then optionally checks the per-repo keys for each downloaded package. Packages should include a manifest of files to be installed, with per-file checksums. Package manifests and/or the package containing the manifest should be signed, so that tools like debsums and rpm --verify can detect changes to disk-resident executables, scripts, data assets, and configuration files.
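As a rough sketch of what that per-file check amounts to (assuming a made-up manifest format of `<sha256> <path>` lines; real tools like rpm --verify and debsums read their own databases):

    import hashlib
    import sys

    def verify_manifest(manifest_path):
        """Report files whose on-disk sha256 no longer matches the (signed) manifest."""
        mismatches = 0
        with open(manifest_path) as manifest:
            for line in manifest:
                expected, path = line.split(maxsplit=1)
                path = path.strip()
                with open(path, "rb") as f:
                    actual = hashlib.sha256(f.read()).hexdigest()
                if actual != expected:
                    print(f"MODIFIED: {path}", file=sys.stderr)
                    mismatches += 1
        return mismatches

    if __name__ == "__main__":
        sys.exit(1 if verify_manifest(sys.argv[1]) else 0)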
Virtualenvs can be mounted as a volume at build time with -v in some container image builders, or copied into a container image with the ADD or COPY instructions in a Containerfile. Whatever is added to the virtualenv should have a signature and a version.
    rpm-ostree rebase ostree-image-signed:registry:<oci image>
    rpm-ostree rebase ostree-image-signed:docker://<oci image>
> Fetch a container image and verify that the container image is signed according to the policy set in /etc/containers/policy.json (see containers-policy.json(5)).
So, when you sign a container full of packages, you should check the package signatures and verify that all package dependencies are identified by the SBOM tool you plan to use to keep dependencies upgraded when there are security upgrades.
E.g. Dependabot - if working - will run regularly and send a pull request when it detects that the version strings in e.g. a requirements.txt or environment.yml file are out of date and need to be changed because of security vulnerabilities reported in the ossf/osv-schema format.
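The lookup such tools do is simple enough to sketch against the public OSV API (endpoint and payload shape as documented at osv.dev; the package and version here are just examples):

    import json
    import urllib.request

    # Ask OSV whether a specific PyPI package version has known vulnerabilities.
    query = {"package": {"name": "jinja2", "ecosystem": "PyPI"}, "version": "2.4.1"}
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        vulns = json.load(response).get("vulns", [])
    print(f"{len(vulns)} known vulnerabilities for this pin")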
Is there already a way, as a developer, to sign Python packages built with cibuildwheel using Twine and TUF or sigstore, so as to be https://SLSA.dev/ compliant?
Is there any reason to still use Rye now? Looks like this release adds all the things I would have missed from Rye, but I don't think I use all of Rye's features.
uv is an installer, not a build backend; it's similar to pip in that respect. If you install a library with uv, it will call a backend like setuptools as needed. It is not a replacement for setuptools.
The astral team is definitely doing great work, and it's wonderful that these tools are permissively licensed, but what happens if astral doesn't work out as a business?
Does anyone know if Astral has plans to build an LSP like pyright? There are many projects that try to replicate pyright's functionality, pylyzer comes to mind, but don't have sufficient coverage (e.g. missing Generic support). Having a team like Astral's behind creating a fast and good LSP for Python would be great.
Ruff (the linter/formatter from Astral) has its own LSP right now: https://github.com/astral-sh/ruff-lsp. Although Ruff itself now ships with a language server already integrated, so I have no idea what the plan is long term.
What does the Astral team recommend for setting up their tools on a new machine? Install uv with curl, manage python versions and projects from there and install ruff with uv?
Not on the Astral team, but to the first step, I'd get uv from your distro package manager (e.g. https://build.opensuse.org/package/show/openSUSE:Factory/uv) and then the rest as you say ("manage python versions and projects from there and install ruff with uv").
If you have some other tool manager on your system (e.g. mise) then you can likely install uv through that.
Congrats! What’s impressive is not just the speed of the tools Astral develops but also the speed of delivery.
I wonder if you plan to extend the functionality for building and publishing packages, for example with support for dynamic versions (from GitHub) and trusted publishers.
I've been closely following the Astral team's work. I am excited by this release and look forward to trying out uv again.
I have been working with python for over 10 years and have standardized my workflow to only use pip & setuptools for all dependency management and packaging [1]. It works great as a vanilla setup and is 100% standards based. I think uv and similar tools mostly shine when you have massive codebases.
It's too bad that the blog post explicitly mentions rye but does not explain how the new features included in uv 0.3 will affect the role that rye will play going forward.
Love that you’ve kept the workspace support from Rye. Is there any support for workspace _builds_? That is, resolving the graph of interdependencies between packages to build a wheel/Dockerfile/whatever?
Fwiw I’m building a thing [1] that does this. Current docs suggest Rye but will s/rye/uv/ shortly. It’s basically just some CLI commands and a Hatch/PDM plugin that injects needed stuff at build-time.
Just did some playing around and workspace support has definitely evolved from Rye [0] but there isn't yet (afaict) any mechanism that supports building workspace packages with their "internal" dependencies.
Ok, so stupid question: even though the majority of package documentation seems to mention pip, it's very dated, right? I have only been trying to focus a bit more on Python for the last couple of years, so I actually just found out about Poetry like last month. But Poetry is also very uncool now, right?
Most people not on the bleeding edge use conda, not Poetry? But people who are hip use Rye and uv? And as of today they only use uv, if possible?
I'm actually building a system around user-installed plugins, where there is a UI to search for and install plugins on the fly.
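Roughly, the stdlib way to discover such plugins is entry points; a minimal sketch, where the group name `myapp.plugins` and the `run()` interface are made up for illustration:

    from importlib.metadata import entry_points

    # Installed packages can register plugins in their own metadata, e.g. in pyproject.toml:
    #   [project.entry-points."myapp.plugins"]
    #   greeter = "myapp_greeter:GreeterPlugin"
    for ep in entry_points(group="myapp.plugins"):  # hypothetical group name
        plugin_cls = ep.load()
        plugin_cls().run()  # assumes plugins expose a run() method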
Also, one other thing just to double-check: is it now very uncool or considered bad practice to use dynamic or flexible types in Python?
Conda is mainly used by ML, AI, and data science people and to a certain extent feels like its own separate ecosystem. In other areas, like web dev, conda use is pretty rare.
Most projects do just fine with pip + venv, and that’s what they stick to.
The exception is if they have specific dependencies outside the CPython ecosystem - in which case they’ll probably be using conda. Examples of such dependencies include nodejs/cuda/cublas/specific versions of gcc. Webdev generally doesn’t have as many of these dependencies compared to the data world, which is why conda is less popular there.
Speaking in sweeping generalities here: you probably don't need poetry, uv, or kin - at all. But there's nothing wrong with choosing to use them if you prefer to, either.
Think of it this way: the release of VS Code didn’t mean people suddenly stopped using or updating Emacs/Vim. VS Code simply offered a more polished, beginner-friendly way of setting up and building software projects than the old TUI editors.
In the same way, none of the “fancy-pip-replacement” projects will outright obsolete pip or conda. They’re just tools that can work a bit more intuitively for new users and provide a bit of polish/UX value - but their niche fills the exact same role as pip/conda: managing the set of binaries on your PATH.
I think they meant "unified" here in the same sense in which BusyBox is unified, i.e. several things you'd normally use different tools for combined into one executable/project. They're not trying to invent any new standards, most of these tasks already have PEPs that the existing tools and uv both implement. Maybe not the best word choice though because I normally have the same instinctive reaction in response to someone claiming something is "unified".
Unfortunate that Python dev these days requires dependency-manager managers to set up an entire for-purpose Python for each project, as opposed to being able to run on the system Python (like every other scripting language). A victim of its popularity.
Now, since uv is in Rust, we'll need a dependency-manager-manager manager for it on any OS that isn't rolling-release, just to compile it, since what rustc is changes every 3 months and breaks forwards compatibility.
I'll check back in 3 years (3x longer than Astral has existed so far) and see if uv still exists and has become stable enough to use.
Probably will need a K8S management cluster to manage your python project management tools that will of course need another cluster manager for managing the project management tools…