kaeso's comments

Nice post @javierhonduco, really interesting read!

Since you mention that you are looking into loosening the minimum kernel requirements: which primitive(s) currently dictate the minimum required version? And how do you plan to sidestep that?


Thanks! :)

Good question! We are building atop the incredible work that the BPF community does. BPF is a restricted environment [0]: among other limitations, the stack size is very small, programs have to be proven to terminate, and there's no access to arbitrary kernel functions.

We've already done most of the work to loosen the minimum required kernel, sorry if that wasn't clear: the minimum supported kernel is 4.18.

More context, for those interested:

In order to provide an API to interact with the rest of the kernel, BPF exposes helpers, which are library-like functions (sorta "syscalls") that BPF programs can call.

Not every BPF helper is available in a given kernel. The BCC project keeps a comprehensive and up-to-date list of the different helpers and other features, along with the commits that introduced them [1].
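
For a concrete feel of what a helper call looks like, here is a minimal sketch using BCC's Python bindings; the program and its names are purely illustrative, not taken from our agent. It attaches a kprobe to execve and calls the bpf_get_current_pid_tgid() helper from BPF code:

    from bcc import BPF

    # Illustrative only: a tiny BPF program calling one helper,
    # bpf_get_current_pid_tgid(), loaded via BCC's Python bindings.
    prog = r"""
    int trace_exec(void *ctx) {
        u64 pid_tgid = bpf_get_current_pid_tgid();  // BPF helper call
        bpf_trace_printk("execve from pid %d\n", pid_tgid >> 32);
        return 0;
    }
    """

    b = BPF(text=prog)
    b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")
    b.trace_print()  # needs root; prints one line per execve on the system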

The minimum kernel that we can support is then decided by the most modern helper or feature we use. In our case, the most modern features we require are:

- BPF tail calls: since 4.2

- BPF Type Format (BTF): since 4.18

[0]: https://www.kernel.org/doc/html/latest/bpf/bpf_design_QA.htm...

[1]: https://github.com/iovisor/bcc/blob/master/docs/kernel-versi...
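
To make the "most modern feature sets the floor" rule concrete, a trivial and purely hypothetical startup check could compare the running kernel against those two requirements:

    # Hypothetical sketch: report whether the running kernel predates
    # the newest BPF features relied upon (tail calls >= 4.2, BTF >= 4.18).
    import platform

    REQUIRED = {"BPF tail calls": (4, 2), "BPF Type Format (BTF)": (4, 18)}

    def kernel_version():
        # platform.release() is e.g. "4.18.0-193.el8"; keep major.minor
        major, minor = platform.release().split(".")[:2]
        return int(major), int("".join(c for c in minor if c.isdigit()) or 0)

    running = kernel_version()
    for feature, floor in sorted(REQUIRED.items(), key=lambda kv: kv[1]):
        status = "ok" if running >= floor else "MISSING"
        print(f"{feature} (needs {floor[0]}.{floor[1]}): {status}")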


FWIW, the prodfiler.com agent has had no-symbol stack unwinding since summer 2021, and a minimum kernel version of 4.15 :-), and happens to have a lower footprint than the solution discussed here.


In a funny twist of events, the author of the grandparent comment above yours was indeed building that OS (Container Linux) at CoreOS :)


> So we were already potentially vulnerable to the DOS [...]

> the security org at the big tech company I worked at and reported this to

I'm confused about these two statements, because I did not find any recent CVEs for log4j in the DoS category, nor related to format lookup (other than CVE-2021-44228 of course).

Perhaps I misread it, but are you basically saying that (after you reported the issue to them internally) the security team at your previous company could not successfully report a DoS vulnerability in the default configuration of a widely used (by them, at least) Apache library and make sure a CVE got assigned to track it?

If so, it would be interesting to know where the CVE/vuln-reporting chain broke, possibly to reduce the blast radius for similar future cases.

Hypothetically speaking, a CVE in March for a DoS in a problematic design/feature could have resulted in flipping the default setting earlier, instead of chasing a live RCE in the wild in December.


No, they're saying they discovered that their Log4j setup using interpolation was so slow that it had the potential of causing a DoS at their company.


> have disappeared like locksmith

Locksmith's[0] implementation is tightly coupled to a specific update daemon, so it can't be directly re-used outside of Container Linux or without update-engine[1].

Its logic has been ported over to Zincati[2], which performs reboot management on top of rpm-ostree[3].

[0] https://github.com/coreos/locksmith

[1] https://github.com/coreos/update_engine

[2] https://github.com/coreos/zincati

[3] https://github.com/coreos/rpm-ostree


> Is there any benefit to making this [compute_pi_digits()] function awaitable?

If you introduce suspension points in that (e.g. every 100 computed digits), then you can co-schedule other tasks (e.g. a similar `compute_phi_digits`) or handle graceful cancellation (e.g. if a deadline is exceeded, or its parent task was aborted in the meantime).
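
As a sketch (the original compute_pi_digits() isn't shown, so Gibbons' spigot algorithm stands in for it): yielding to the event loop every `checkpoint` digits lets sibling tasks run, and lets a deadline cancel the computation at the next suspension point.

    import asyncio

    def pi_digits():
        # Gibbons' unbounded spigot: yields the decimal digits of pi forever
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    async def compute_pi_digits(count, checkpoint=100):
        digits = []
        for i, d in enumerate(pi_digits()):
            if i == count:
                break
            digits.append(d)
            if i % checkpoint == checkpoint - 1:
                await asyncio.sleep(0)  # suspension point: co-schedule, allow cancellation
        return digits

    async def main():
        try:
            # deadline handling: cancellation lands at the next suspension point
            digits = await asyncio.wait_for(compute_pi_digits(100_000), timeout=0.5)
            print(len(digits), "digits computed")
        except asyncio.TimeoutError:
            print("deadline exceeded, task cancelled gracefully")

    asyncio.run(main())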


Kind of. Starting from version 49, the <keygen> feature needs to be whitelisted per website. Client certificates are no longer imported automatically, only downloaded (user action is needed to load them into the keystore).


> It seems like rust bundles its own version of llvm. Are patches from the rust community making it in slowly? Or are there some fundamental differences?

https://github.com/rust-lang/llvm/commits/rust-llvm-2016-02-...

The delta is minimal, mostly consisting of bugfixes and optimizations. All changes are typically forwarded upstream. For a long time now, rustc has been buildable without the embedded LLVM, and several distributions (e.g. Debian) do that.


Reality check: ~30% of active ASes worldwide don't drop spoofed packets originating from their networks.

http://spoofer.cmand.org/summary.php


It is a bit like polluting the ocean by dumping waste: it costs less for the polluters if they are not caught.

For "typical AS router", is it easy or cheap to block spoofed packets?

I wonder if that test software/website can or should "out" and shame the ASes routing those subnets as "Major Internet Polluters", publishing monthly reports to shame that 30% of polluters.
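
Mechanically, the filtering itself is simple; the cost is mostly operational. A toy sketch of BCP38-style ingress filtering (prefixes here are illustrative): an edge router only accepts packets whose source address falls inside the prefixes allocated to that customer port.

    import ipaddress

    # Prefixes allocated to the customer behind this port (illustrative)
    ALLOCATED = [ipaddress.ip_network("192.0.2.0/24")]

    def accept(src):
        # BCP38 idea: drop anything whose source can't legitimately be here
        src = ipaddress.ip_address(src)
        return any(src in net for net in ALLOCATED)

    print(accept("192.0.2.77"))    # True: legitimate customer source
    print(accept("198.51.100.9"))  # False: spoofed, drop at the edge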


This seems to be a common opinion recently, see https://tools.ietf.org/html/draft-thomson-postel-was-wrong-0...


But Jon Postel didn't mean what people now think he did.

His famous principle is about border cases, when the spec is vague, handwavy or thought by some to be vague. It's not about the other cases.

Remember that Jon Postel was the RFC editor. He didn't want anyone to ignore the RFCs, he wanted the RFCs to be readable and pleasant, and he wanted implementers to do the right thing when an RFC erred on the side of readability.

FWIW I wrote a blog post about this a few years ago, http://rant.gulbrandsen.priv.no/postel-principle


Here's an example of the problem with that, though: a mailing[0] by someone in February of this year asking if there's a formal grammar for the DNS zone (master) file format. This is a format that was first loosely specified in an RFC almost 32 years ago, and there still isn't a rigorous definition. BIND now specifies a de facto interpretation with lots of liberal "treat this as a warning" options[1], and new gTLD registries now insist on a subset of the original specification.
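
To see how much is left to the implementer, here is a toy parser (entirely illustrative, not BIND's behavior) for a tiny subset of the format. Even at this size it has to hard-code decisions that RFC 1035 only describes loosely: a blank owner field inherits the previous owner, "@" means the origin, ";" starts a comment.

    ORIGIN = "example.com."

    def parse_zone(text):
        records, last_owner = [], None
        for line in text.splitlines():
            line = line.split(";", 1)[0].rstrip()   # strip comments
            if not line:
                continue
            inherited = line[0] in " \t"            # leading blank: reuse owner
            fields = line.split()
            owner = last_owner if inherited else fields.pop(0)
            if owner == "@":
                owner = ORIGIN                      # "@" means the origin
            last_owner = owner
            records.append((owner, fields))
        return records

    zone = """\
    @        3600 IN SOA ns1 hostmaster 2016010101 7200 3600 1209600 300
    www      300  IN A   192.0.2.10
             300  IN A   192.0.2.11   ; same owner as the line above
    """
    for rec in parse_zone(zone):
        print(rec)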

HTTP also has corner cases that widely-used implementations simply aren't handling consistently, because the original RFCs are vague, or because the ideas being conveyed are buried in even older RFCs that nobody has the incentive to drill into, or that simply aren't known to them.

IMHO the IETF really should move to a wiki format, where information and wording changes on a particular protocol can be seen in one place. Plaintext snapshots of particular versions could still be published.

[0] https://www.ietf.org/mail-archive/web/dnsop/current/msg13349...

[1] https://kea.isc.org/wiki/ZoneLoadingRequirements#a3.3RFCimpl...


BTW there's a reason for that. The IETF decided (it must be a couple of decades ago) to restrict itself to matters of the internet. Things like file formats are thus out of scope for RFCs. There have been exceptions, RFC5952 is a good example and I know at least two others, but by and large RFCs are about the internet now, not about file formats or other worthy subjects.


RFC 6120 and 6121 are for XMPP (chat); they define that XML is to be used, and even go into the exact structure of the XML "packets".


Bad example. XML is the wire format in XMPP RFCs, not a file format.


RFC7159? It even recommends a file extension.


That's a perfect example: Publishable as an RFC because the format is used in many APIs on the general internet, but also says a little about a local matter, in this case the file names.

Is the rule a bit messy? Yes it is!


There's a reason why I try to use the tinydns zonefile format (http://cr.yp.to/djbdns/tinydns-data.html) whenever I can.

It's so much simpler to use, and less problematic.
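
For a taste, paraphrasing the linked page: records are one line each, a leading character selects the type, and fields are colon-separated, so there is very little room for interpretation.

    # NS (and SOA) for the zone:
    .example.com::ns1.example.com:3600
    # A record plus the matching reverse PTR:
    =www.example.com:192.0.2.10:300
    # A record only:
    +alias.example.com:192.0.2.10:300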


The problem isn't so simple in the world of large-scale protocol design: standards are rarely successful when imposed; they're usually adopted as a reflection of the current implementations. And when you're dealing with multiple independent implementations, the variance can be subtle, and the standards are often broken, or at best under-specified, at first.

When dealing with integration among many parties, there is tremendous pressure to just "make it work". The web arguably is an example of this - the standards were post-facto representations of what's already implemented.

Of course we all hate the long-term implications for our codebases, but "let's force everyone to do it one way through strict behaviour" seems to discount the social dynamics of interoperability.

Moving away from Postel's principle in production will not lead to successful open and interoperable implementations; rather, it will trend towards one single implementation, likely open source, that is shared and tweaked by all. That has some positive (interop!) and negative implications (limited ability to innovate / being dragged down into programmer religions, etc.).


I wouldn't say he is wrong, so much as you just need a dev mode where strict acceptance is the order of the day. You need people to learn to produce correct results, but still be resilient in the field.
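
A sketch of that idea (all names here are made up): one code path, with the acceptance policy flipped by an environment toggle, so producers get loud errors during development while production stays liberal.

    import os
    import warnings

    # Hypothetical toggle: strict acceptance in dev, tolerant in production
    STRICT = os.environ.get("APP_ENV") == "dev"

    def parse_header(line):
        name, sep, value = line.partition(":")
        if not sep or name != name.strip():
            msg = f"malformed header: {line!r}"
            if STRICT:
                raise ValueError(msg)   # dev: fail loudly so producers get fixed
            warnings.warn(msg)          # prod: accept liberally, keep interop
        return name.strip(), value.strip()

    print(parse_header(" X-Frob : yes"))  # warns here; raises in dev mode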


This x10. Strict Dev modes are helpful. Strict everywhere just means that there will only be one successful implementation that everyone uses.


It's not a particularly new opinion, just one people refuse to learn. Prior example from 2008: http://www.joelonsoftware.com/items/2008/03/17.html


Rocket-Internet SE - https://www.rocket-internet.com - Berlin, Germany (VISA) - Security team

# About the job #

Rocket-Internet's security team is seeking talented and motivated security professionals to help us protect our key information assets.

We currently have two open positions:

* Security Engineer - https://goo.gl/pdCRkM

* IT Security / Penetration Tester - https://goo.gl/pGYSvR

If interested (and for questions/doubts), please drop me an e-mail at luca DOT bruno AT rocket-internet DOT de

# About Rocket-Internet #

Rocket is the largest Internet platform outside of China and the United States. We identify and build proven Internet business models and transfer them to new, underserved or untapped markets where we seek to scale them into market leading online companies. We are focused on online business models that satisfy basic consumer needs across three sectors: e-commerce, marketplaces and financial technology. Our company was founded in 2007 and now has more than 25,000 employees across its network of companies, which operate in more than 100 countries on five continents.

We currently have several open engineering positions, not only in Berlin: https://www.rocket-internet.com/join-us/engineering

