
Exactly! AD FS is part of Tier 0 in the same way as Active Directory itself and needs to be treated and secured as such. Of course, security goes a long way when it's part of a holistic approach like zero trust.

Mitigation is also not really possible when using SSO. One option would be to have the target service require a second factor in addition to a valid SAML token, but then each user needs to keep their second factor, whatever it might be, current in each target service. This gets unmanageable quite quickly, not to mention that there are basically no SaaS or self-hosted applications out there that support SSO and a second factor at the same time.
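To make the burden concrete, here is a minimal stdlib-only sketch (the function names and the per_service_secrets store are my own invention, not any real product's API) of a service that only grants access when it sees both a valid SAML assertion and a TOTP code against a secret that this particular service has to store and the user has to keep current:

    import base64, hashlib, hmac, struct, time

    def totp_now(secret_b32, step=30, digits=6):
        # RFC 6238 TOTP with HMAC-SHA1, the common default
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // step)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def grant_session(saml_assertion_valid, user_id, submitted_code, per_service_secrets):
        # A valid SAML token alone is not enough: this service also
        # keeps its own per-user TOTP secret -- the per-service state
        # that makes the approach unmanageable at scale.
        secret = per_service_secrets.get(user_id)
        if not saml_assertion_valid or secret is None:
            return False
        return hmac.compare_digest(totp_now(secret), submitted_code)

Multiply that secret store by every user and every target service and the management problem becomes obvious.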


Yes, Apple is supporting older devices, but it has made my SE 2020 nearly unusable (slow as hell, horrible UI bugs when typing) after the update to iOS 17. Everything worked perfectly until then. It seems as though Apple wants me to buy a more expensive phone. A friend had the exact same problem and has since upgraded to a newer model.


Exactly, as a user you need to be aware of this added risk. Anonymous OSS maintainers are the issue here. When I want to use a project and reduce that risk, I need to vet the maintainers in a similar way to how I would vet a company I want to invest in.


I don't think there is a need to have someone's reputation score to use their software.

There are plenty of pieces of open source software running on operating systems that were contributed by people who are effectively anonymous.

Also, a contributor's overall character doesn't seem like a great measure of how malicious their contributions might be, as evidenced by plenty of examples of assumed-good people eventually doing bad things.


I don’t understand why people believe that open source inherently makes software secure and trustable. Yes, you have access to look through the code, but I usually don’t have the expertise to understand what I’m looking at. I wouldn’t know how to look for well hidden exploits or malicious intent. I’m still reliant on others to find these issues.

At the moment, I do rely on reputation before I trust open source software. But in the case of an app store, I can trust the reputation of the store. I can trust that the app store has to work to uphold their reputation which is their motivation for maintaining a good track record of identifying problem apps. I agree this is far from perfect, but I think it’s much safer than relying on open source.

I love the idea of open source and I hope that it will never be replaced by app stores, but I don’t feel that software is inherently more trustworthy if it’s open source.


I like to distinguish "trustworthy" from "trustable". Trustworthy software is worthy of trust: it is not malicious or unacceptably buggy. Trustable software is software which can, in theory, be verified to be trustworthy. OSS is trustable, but not necessarily trustworthy. Closed-source software might be trustworthy, but it's not trustable (since trustworthiness can't be verified).


I believe it’s not necessary to fully verify a piece of software before it can be trusted. We humans are all black boxes, no one can read our minds, but we can trust each other through our reputations. I treat software the same way: as long as it comes from a reputable developer, I’ll give it the benefit of the doubt until proven otherwise.

Verified trustworthy is too high a standard to hold software to. Take Log4j, for example: an open source logging library used by many enterprise Java apps worldwide, it had a huge vulnerability sitting in its code base for over 7 years. Even with its widespread use and open sourced code, the exploit was not reported in a timely fashion.

Thus I’m left with reputation as the only practical means of determining trust, imperfect as it may be.


Exactly, in fact the reddit post talks about this exact situation -- the code that sends sensitive information is right there on GitHub, but nobody saw it before OP did. And what could happen is that the developer maintains two codebases: a "clean" version on GitHub and a "dirty" version that is almost identical except for the part where it secretly sends your password, and uses that version to build the iOS app. How would you ever know that?


There is a solution: even more telemetry!


I mean this kinda is an argument for telemetry. I only write very basic code, but stuff like "UI thing X doesn't work" is super painful to debug - let alone a million different use cases of big chunky code like Windows.


While I mostly agree with them, I don't regard them as fallacies. Those are different stances on the idealism-realism spectrum.


I disagree with the article that server-side iterations in this case are useless. They are used for access control.

Bitwarden's API likely doesn't permit just anybody to access anybody's encrypted blobs. You have to authenticate with the server to be able to access your blob. Since the iteration count for producing the master key, and therefore the master password hash, might be low, the server must treat the master password hash as just another password and iterate the hash heavily itself (100,000x).
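Roughly, the scheme looks like this (a stdlib-only sketch of my understanding; salts and parameter names are simplified and not taken from Bitwarden's actual code):

    import hashlib

    # Client side: stretch the master password into the master key with
    # the user-configured (possibly low) iteration count, then derive
    # the login hash that gets sent to the server.
    def client_login_hash(master_password, email_salt, client_iterations):
        master_key = hashlib.pbkdf2_hmac(
            "sha256", master_password, email_salt, client_iterations)
        return hashlib.pbkdf2_hmac("sha256", master_key, master_password, 1)

    # Server side: treat the received login hash as just another
    # password and stretch it again before storing/comparing, so a low
    # client-side count doesn't weaken access control to the blobs.
    def server_stored_hash(login_hash, server_salt, server_iterations=100_000):
        return hashlib.pbkdf2_hmac(
            "sha256", login_hash, server_salt, server_iterations)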

Assuming no malicious insider and no outside attacker gets their hands on the encrypted blobs, this is the most important attack prevention.


I live in Berlin and I don't know any German personally who is not for nuclear power.


And it's not possible to change/withdraw consent after allowing it. I searched for 5 minutes and found no link or widget that would get me to that screen.



Part of my work consists of reviewing answers to security questionnaires.

These are reasonable questions, and I see quite a lot of value in them if they are filled out extensively and in good faith. Most of the answers to typical security questionnaires can be deduced from the responses to this DSQ.

I really have a problem with Q6:

> Have you had any security breaches in the last two years?

> If yes: please explain the breach, and provide copies of any postmortem/root cause analysis/after-action reports.

Almost nobody will answer this truthfully. I see a couple of options:

1. There was a breach and it was public. Then why are you asking? Do your research!

2. There was a breach and it was not made public. The company will likely not admit it to you.

3. There was a breach, but it was a) not relevant to your case / b) internal / c) the data lost was not customer data / d) we forgot that there was one / etc.

While lying in case 2 might make the vendor liable (IANAL), they might be able to argue that case 3 actually applied.

