samueloph's comments | Hacker News

HTTP/3 is enabled in the curl package on Debian:

https://samueloph.dev/blog/debian-curl-now-supports-http3/

Daniel also has a more up-to-date post on HTTP/3:

https://daniel.haxx.se/blog/2024/06/10/http-3-in-curl-mid-20...
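
If you want to check whether a local build has it already, a quick sanity check (assuming a curl built with HTTP/3; example.com is just a placeholder):

    # HTTP3 shows up in the Features line of a build that supports it
    curl --version | grep -i features

    # Ask for HTTP/3 explicitly; --http3 falls back to older versions
    # if negotiation fails, while --http3-only does not
    curl --http3 -sI https://example.com/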


> In unstable you get a timely fix from upstream, in stable you get a fix by the security team.

For anything serious enough, you will get a security fix straight away in both unstable and testing (the latter through the testing-security repository).

For things that are not really that important, yes, you will get the fix later.


> There is also --remote-header-name (-J) which takes the remote file name from the header instead of the URL, which is what wget does.

I don't think that's the case; that behavior is opt-in in wget, as indicated in its manpage:

> This can currently result in extra round-trips to the server for a "HEAD" request, and is known to suffer from a few bugs, which is why it is not currently enabled by default.


As others pointed out, you can do that. You can also set the options in .curlrc, write a script if you want multiple URLs to be downloaded in parallel (not possible with an alias), or now just use wcurl :)

Note: wcurl sets a few more flags than that; it also encodes whitespace in the URL and downloads multiple URLs in parallel.
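
For illustration, the alias route mentioned above could look roughly like this (the flags here are just a reasonable pick for the common case, not wcurl's exact set):

    # Hypothetical alias: fail on HTTP errors, follow redirects,
    # keep the remote file name, retry transient failures
    alias dl='curl --fail --location --remote-name --retry 5'

    dl https://example.com/file.tar.gz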


“or now you can just use wcurl :)”

Sadly I can’t, since it depends on the util-linux version of getopt, which means it fails on BSD and macOS systems. Understandable, since it is always available on the specific target the script was written for, and it does make life easier.


argh, we are looking into fixing that now.

I knew getopt was Linux-specific, but I thought the only impact was that the long-form arguments (--opt) would not work. It turns out it doesn't run at all instead.

We should be able to fix this within the next few days, thank you!


> We should be able to fix this within the next few days, thank you!

It's fixed now; it should work in non-Linux environments.
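
For the curious, one portable way to avoid the problem (not necessarily what wcurl ended up doing) is to skip getopt entirely and parse arguments in a plain POSIX sh loop:

    # Sketch of getopt-free long-option parsing in POSIX sh; the option
    # names are illustrative, not wcurl's actual interface
    while [ $# -gt 0 ]; do
        case "$1" in
            --opts=*) OPTS="${1#--opts=}" ;;
            --opts)   shift; OPTS="$1" ;;
            --)       shift; break ;;
            -*)       echo "unknown option: $1" >&2; exit 1 ;;
            *)        break ;;
        esac
        shift
    done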


That might be a nice improvement, but I believe people want a single command without any arguments for this use case.


I can't speak for other people, but I'd rather have it as a --option than another command. I dislike tools that install a lot of different commands; it just gets harder to remember them all.


That argument doesn't make sense at first glance: how is lots of different commands harder to remember than lots of different --options?


One command can list all its options with tool -h?


alias wget='curl --wget', then?


"curl --progress-bar" does that

"wcurl --opts="--progress-bar" URL"

I was wondering if that should be the default for wcurl, but the bar only works when downloading a single file; I was afraid users would think there's a bug whenever they downloaded more than one file and the bar wasn't there anymore.


> Opens up blog post about pattern matching in Rust.

> Not a single "match" statement.

This post could increase its reach by explaining why a let statement is pattern matching, or at least pointing to a reference.

Although I understand people who don't know this already are not the target audience.

Here's the reference for those who need it: https://doc.rust-lang.org/book/ch18-01-all-the-places-for-pa...


Thanks


The only issue I have with Wayland is that screen sharing from Slack does not work (when using the app).


IIRC, that's because Slack uses an older version of Electron/Chromium.


It does not work because they intentionally disable PipeWire support. You can binary-patch the Slack executable to enable Wayland screen sharing: https://askubuntu.com/a/1492207


Thank you for the correction. I knew it had something to do on Slack's end. I must've gotten the cause confused with something else, perhaps Discord.


Yeah screen sharing seems to be Wayland's biggest weakness. It doesn't work for me in Chrome, Firefox, OBS Studio, or VMWare Horizon (which explicitly says "Wayland is not supported").

Most of the others are just buggy, e.g. it "works" in Chrome but the window selector always shows the file browser icon at like 10x size so I can't select a window.

This is on RHEL 8, so maybe they've fixed things in the meantime.


I can't speak for Chrome or VMware, but Firefox and OBS have received "Wayland native" screen capturing functionality, through XDG portals and PipeWire, as far as I'm aware.

As a side-note, I've recently discovered a really cool project[0] that enables incredibly fast screen capture for OpenGL and Vulkan applications, mostly tailored to games. Tried it with a bunch of stuff and it works much better than both X's and Pipewire's screen grabbing. I can actually capture videos at my monitor's native refresh rate (144Hz).

[0]: https://github.com/nowrep/obs-vkcapture


Nice write-up.

I'm one of the Debian maintainers of curl, and we are close to enabling HTTP/3 on the GnuTLS libcurl we ship.

We have also started discussing the plan for enabling HTTP/3 on the curl CLI in time for the next stable release.

Right now the only option is to switch the CLI to use the GnuTLS libcurl, but it looks like it might be possible to stay with OpenSSL, depending on when non-experimental support lands and how good OpenSSL's implementation is.


Any chance of WebSocket being enabled too?


That's still an experimental feature on curl's side so I'm not sure. https://everything.curl.dev/helpers/ws/support
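
If you want to check a given build, "ws" and "wss" appear in the Protocols line when the (experimental) WebSocket support is compiled in:

    # WebSocket-enabled builds list ws and wss among the protocols
    curl --version | grep -i protocols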


Maybe the right time to clean up the unexpected and awkward set of libs that are currently installed, too?


What's up with the Red Hat mention? One of the first things the presenter says is that his job has nothing to do with this.

Edit: mentioning the presenter's name might be better. FTR the video is titled: "Glorious Eggroll (ProtonGE Developer )Talks at the Ubuntu Summit (Day 2)"


Red Hat has more name recognition (at least for now), even though the GE fork of Proton is, IMO, way more technically impressive for one person than a Red Hat job. Creating a fork of a public tool maintained by a large company with a team, and having that fork actually get popular, is something.

edit: Proton itself is also a massive achievement that often gets overlooked. Somehow emulation on a grand scale with many edge cases and optimizations can run 95% of Windows games, at near native performance and often with multiplayer. If you think of Linux gaming in, say, 2014, this is kind of insane.


Just a nitpick: it's not emulation but compatibility. Proton (and Wine) implement and/or translate calls made by Windows programs without requiring any virtualization or emulation. In some ways it's closer to an open-source implementation of Windows and its runtime.

After all, Wine (which Proton is layered atop) stands for "Wine Is Not an Emulator."


Fair enough, it's in the name.

I still treat it as black magic since it just kind of slowly started almost always working in the last 5 years. As if some great curse was lifted and we were freed from the shoddy install scripts and long debug sessions etc. etc.


I would have laughed at you and said there’s no way to get all the FOSS developers to implement a win32 shim like this. Here I am now, foot in mouth, because it was done by people who had a vested interest, Valve et al. It’s like if someone said: “Hey kid, 10 years from now you’ll be on the moon.” and then, a decade later, you’re literally on the moon doing science or moonwalks to Michael Jackson or potato farming.


AFAIK Valve mostly packaged and polished existing components like Wine and DXVK when making Proton, but the bulk of the Win32 API was implemented by FOSS devs in the decades before.


Exactly. The one really big thing, at least for me, was making controller/joypad usage easy, or more or less hassle-free. Before that, you had to do that manually, which was a PITA.


Gabe used to work on Windows; he has a long memory.


And the submitted title should reflect that

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

