CTF challenges are to exploit development what cooking competitions are to being a restaurant cook. There are time limits, practicality is less of a concern, and everyone knows the toy constraints are there because nobody wants to watch you stare at IDA for three weeks.
I hope so! There’s not much to share yet because I’m still evaluating how well it would work, but the aim is to make n-day exploits infeasible to deploy against devices that are still vulnerable. It’s difficult because we have very little to work with when dealing with an attacker who can fully compromise the device, but some preliminary analysis of the strategy against recent exploits, and approximations of how those exploits might change if we ship, is promising. The key point is that we expect our strategies to “expire” on a given timeline, so we need to explicitly design in a way to respond to changing techniques that is highly asymmetric in our favor. We’ve found that the closer to the actual bug you place a mitigation, the harder it becomes to work around, and we think we have a new way to get very close cheaply.
It would be cool for one of these folks (or anyone, really) to show us why these things won't work. I see people I respect in that thread, but I'm also very tired of hearing about how trivial these things are and not seeing someone spend what is, according to them, very little time to bypass them.
Obviously, I'm not smart enough to do it or else I'd be doing it. However, I'm not going around making wild claims either. I think something like that would help rather than hinder OpenBSD.
People have done this in the past; at this point most people are just going to meme about it rather than respond. The response they get is always “if it’s so easy, why don’t you hack it?”, which is frankly more effort than anyone wants to spend on an OS that doesn’t really harm anyone, just sitting by itself layering all sorts of “mitigations” on itself. They’re basically completely divorced from what any real-world exploit looks like these days (blind ROP, really?) or how attackers work (“99% secure will stop them!!”), yet somehow always really convoluted and optimized to stop one very specific exploit flow rather than a general technique. The real solution for stopping ROP/JOP is going to be CFI, shadow stacks, etc., rather than trying to kludge something onto hardware that doesn’t support it.
I hear you. I guess I'd just like to see more hacking and less of the memes. For me I think again that it would help more than hurt.
I'm an old man now and maybe I've gone a bit soft but I don't see much benefit in mocking and am more interested in helping even if that means wasting a bit of time.
While I'm not saying that's what this is, it's also a classic tactic for discrediting something that could be a problem. The strategy has been used against other projects: if you're worried something has the potential to ruin your project or income, you shit-talk it.
Some years ago there was a leak of plans to do exactly that with Tor: spreading FUD so less secure systems get used, discrediting contributors, turning people against each other, and so on.
Common theme. If someone has a way to break something, they'd at least gain publicity from it. If they have any positive interest, they'll at least cite a source or give a chance for rebuttal (the whole point of the scientific method). If neither happens, be at least skeptical.
Take a breath. Read it again. If you still don't understand something, think about what it is you don't understand, then state your issue clearly so I can help you.
I'm not sure what to tell you. It didn't make sense. It doesn't make sense. There was nothing technical about what you wrote. Other people don't need to confirm that something doesn't make sense to me. I'm not making a claim, I'm telling you that it didn't make sense. What do you not understand about this?
I'm not trying to insult you or dispute what you wrote. I'm just telling you, again, that it does not make sense.
You keep repeating that all OpenBSD tries to do is audit to find bugs, but that is very obviously not all that they do to prevent exploitation and post-exploitation issues. I'm not sure why you keep doing that. This is one of the parts that doesn't make sense to me.
You say that unveil or pledge aren't enough or aren't as good as SELinux. There is nothing technical about saying that. That's just your opinion that others do not share. I'm not even commenting on whether or not I agree or disagree with you about that. However, you aren't making a point in expressing this opinion. That's something else that doesn't make sense to me.
So, do you want to try to explain what point you're trying to make again? The whole thing. All I'm getting from the things you're saying is that you love SELinux and you have almost no understanding of any other aspect of what OpenBSD does beyond auditing code.
No, you ARE making a claim, and that claim is bullshit. Certain knowledge is required to understand what I wrote and why it makes sense. And it does make sense, which is why other people were able to meaningfully respond to what I wrote.
It's fine that it doesn't make sense to YOU, but you shouldn't confuse that with it not making sense objectively, which it does.
OpenBSD doesn't literally ONLY audit for bugs, but that is the bulk of their work, and they prioritize it over adding or improving mechanisms that lock down processes and prevent exploitation.
Others can insist that pledge and unveil are as good as SELinux, but they would be wrong. It isn't a subjective issue, and you would have to be rather ignorant of the differences to claim it is. SELinux removes the concept of an all-powerful root user and can grant every process the specific minimum access it needs. Pledge and unveil don't come close to offering anything like that.
Then you say I'm not making a point...even though you clearly disagree here with the point I supposedly didn't make. Are you by chance on the spectrum? Just trying to understand your issues with what I wrote; it's quite odd.
I'm not interested in explaining anything further, as I don't think it would be productive in proportion to the effort I'd have to expend. At this point I'm mainly just curious to see where this goes.
Not trying to fight at all, but I have no time for someone who has such trouble comprehending something and then blames the author instead of being able to self-reflect.
It'll never happen. Every single time this comes up, it turns into the same thing. You must remember the person who said they'd "write a blogpost bypassing OpenBSD mitigations next week"; that was well over a month ago and, surprise, there's no blog post.
Everything OpenBSD does is wrong and trivial to bypass, yet everyone's too busy to demonstrate it. Maybe the dumbest part is that nobody on the other side is claiming these mitigations are perfect in any way.
Qualys bypassed some of OpenBSD's malloc hardening features recently, but they don't go around making wild or insulting claims about how wrong and trivial those features are either. Go figure.
Just replying to provide some context [1] for those who come across this comment as I think it's pretty interesting. Also OpenBSD are working on something for this [2].
i really hope you do this because it's super annoying to see all these folks talk about how these mitigations don't work but nobody really _shows_ it.
it's always the same thing whenever openbsd is mentioned.
"these mitigations don't work."
- "okay, please show us that they don't work"
"well, i don't use openbsd"
rinse. repeat.
it would also be nice to see some patches/fixes/suggestions/etc sent to the mailing lists after you've bypassed/defeated/whatever these things. i don't suppose you'd agree to that?
I don't believe these mitigations are fixable; they are based on a fundamental misunderstanding of how people actually write exploits and what attackers are capable of. The issue is that it is extremely hard to limit what an attacker can do once they already have code execution in a process. There is a reason most low-level exploit mitigations apply before that point: once an attacker has code execution, trying to protect the integrity of the compromised process is largely a lost cause. The way we mitigate after code execution is by taking a step up and using things like sandboxing to prevent the compromise from impacting the rest of the system.
Also, I don't think it's quite right of you to be miffed that people aren't writing exploit PoCs to prove these mitigations are moot. These mitigations are trivially wrong to anyone with experience in exploit development, and being told to "show proof then" is baffling. It's like trying to explain to your uncle that no, vaccines will not make your child autistic, and having him demand proof. Obviously, that's neither how autism nor vaccines work, and demonstrating it takes significantly more effort than almost anyone cares to put into a random (internet) argument. But I am a fool with too much time, so I'll bite.
For a while you could just execute another binary and it would run without the restrictions imposed on the (pledged) parent. This stands in stark contrast to Capsicum, where monotonicity (i.e. the fact that once you lose permission to something you never get it back, unless it is explicitly passed to you again) is one of the fundamental assumptions behind the design.