> In unstable you get a timely fix from upstream, in stable you get a fix by the security team.
For anything that is serious enough, you will get a security fix straight away for both unstable and testing (through the testing-security repository).
For things that are not really that important, yes, you will get the fix later.
> There is also --remote-header-name (-J) which takes the remote file name from the header instead of the URL, which is what wget does.
I don't think that's the case; that behavior is opt-in, as indicated in wget's manpage:
> This can currently result in extra round-trips to the server for a "HEAD" request, and is known to suffer from a few bugs, which is why it is not currently enabled by default.
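For reference, both tools expose this as an explicit opt-in flag (the URL below is just a placeholder):

```shell
# Opt in to naming the downloaded file from the Content-Disposition
# header. curl: -J/--remote-header-name must be combined with
# -O/--remote-name.
curl -O -J https://example.org/download

# wget: the equivalent opt-in flag.
wget --content-disposition https://example.org/download
```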
As others pointed out, you can do that. You can also set the flags in .curlrc, or write a script if you want to allow multiple URLs to be downloaded in parallel (not possible with an alias), or now you can just use wcurl :)
Note: wcurl sets a few more flags than that; it also encodes whitespace in the URLs and downloads multiple URLs in parallel.
Sadly I can't, since it depends on the util-linux version of getopt, which means it fails on BSD and macOS systems. Understandable, since that version is always available on the specific target the script was written for, and it does make life easier.
I knew getopt was Linux-specific, but I thought the only impact was that the long-form arguments (--opt) would not work. It turns out the script doesn't run at all instead.
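For portability, the POSIX getopts shell builtin works everywhere, at the cost of long options. A sketch with made-up -o/-v flags:

```shell
#!/bin/sh
# Portable short-option parsing with the POSIX getopts builtin.
# Unlike util-linux getopt(1), this runs on Linux, BSD, and macOS,
# but it cannot parse long options like --output.
output=""
verbose=0
while getopts "o:v" opt; do
    case "$opt" in
        o) output="$OPTARG" ;;   # -o takes an argument
        v) verbose=1 ;;          # -v is a boolean flag
        *) echo "usage: $0 [-v] [-o file] url..." >&2; exit 2 ;;
    esac
done
shift $((OPTIND - 1))            # drop parsed options, keep URLs
echo "output=$output verbose=$verbose remaining=$*"
```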
We should be able to fix this within the next few days, thank you!
I can't speak for other people, but for myself I'd rather have it as a --option than as another command. I dislike tools that install a lot of different commands; it just gets harder to remember them all.
I was wondering if that should be the default for wcurl, but the bar only works when downloading a single file. I was afraid users would think there's a bug whenever they downloaded more than one file and the bar wasn't there anymore.
It does not work because they intentionally disable PipeWire support. You can binary-patch the Slack executable to enable Wayland screen sharing: https://askubuntu.com/a/1492207
Yeah screen sharing seems to be Wayland's biggest weakness. It doesn't work for me in Chrome, Firefox, OBS Studio, or VMWare Horizon (which explicitly says "Wayland is not supported").
Most of the others are just buggy, e.g. it "works" in Chrome but the window selector always shows the file browser icon at like 10x size so I can't select a window.
This is on RHEL 8, so maybe they've fixed things in the meantime.
I can't speak for Chrome or VMware, but Firefox and OBS have received "Wayland native" screen capturing functionality through XDG portals and PipeWire, as far as I'm aware.
As a side-note, I've recently discovered a really cool project[0] that enables incredibly fast screen capture for OpenGL and Vulkan applications, mostly tailored to games. Tried it with a bunch of stuff and it works much better than both X's and Pipewire's screen grabbing. I can actually capture videos at my monitor's native refresh rate (144Hz).
I'm one of the Debian maintainers of curl and we are close to enabling http3 on the gnutls libcurl we ship.
We have also started discussing the plan for enabling http3 on the curl CLI in time for the next stable release.
Right now the only option is to switch the CLI to use the gnutls libcurl, but it looks like it might be possible to stay with openssl, depending on when non-experimental support lands and how good openssl's implementation is.
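Once that lands, checking for and using HTTP/3 from the CLI is straightforward (the server URL is a placeholder, and `--http3` errors out on builds without QUIC support):

```shell
# The Features line of `curl --version` lists "HTTP3" when the build
# supports it.
curl --version | grep HTTP3

# Ask for HTTP/3 explicitly; -w prints the protocol version actually
# negotiated with the server.
curl --http3 -s -o /dev/null -w '%{http_version}\n' https://example.org/
```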
What's up with the Red Hat mention?
One of the first things the presenter says is that his job has nothing to do with this.
Edit: mentioning the presenter's name might be better.
FTR the video is titled: "Glorious Eggroll (ProtonGE Developer) Talks at the Ubuntu Summit (Day 2)"
Red Hat has more name recognition (at least for now), even though the GE fork of Proton is, IMO, a more technically impressive achievement for one person than a Red Hat job. Creating a popular fork of a public tool that a large company maintains with a whole team is something.
edit: Proton itself is also a massive achievement that often gets overlooked. Somehow, emulation on a grand scale, with many edge cases and optimizations, can run 95% of Windows games at near-native performance and often with working multiplayer. If you think of Linux gaming in, say, 2014, this is kind of insane.
Just a bit: it's not emulation but compatibility. Proton (and Wine) implement and/or translate calls made by Windows programs without requiring any virtualization or emulation. In some ways it's closer to an open-source implementation of Windows and its runtime.
After all, Wine (on which Proton is layered atop) stands for "Wine is not emulation."
I still treat it as black magic since it just kind of slowly started almost always working in the last 5 years. As if some great curse was lifted and we were freed from the shoddy install scripts and long debug sessions etc. etc.
I would have laughed at you and said there’s no way to get all the FOSS developers to implement a win32 shim like this. Here I am now, foot in mouth, because it was done by people who had a vested interest, Valve et al. It’s like if someone said: “Hey kid, 10 years from now you’ll be on the moon.” and then, a decade later, you’re literally on the moon doing science or moonwalks to Michael Jackson or potato farming.
afaik Valve mostly packaged and polished components like Wine and DXVK when making Proton, but the bulk of the Win32 API was implemented by FOSS devs in the decades before
Exactly. The one really big thing, at least for me, was making controller/joypad usage easy, or more or less hassle-free. Before that, you had to set it up manually, which was a PITA.
https://samueloph.dev/blog/debian-curl-now-supports-http3/
Daniel also has a more up-to-date post on HTTP/3:
https://daniel.haxx.se/blog/2024/06/10/http-3-in-curl-mid-20...