Anti-Virus is little more than snake oil. If you need to secure a Windows box, get EMET and read http://decentsecurity.com and you'll eliminate most of your attack surface.
Everyone can be secure.
It is with those four words this website is founded. Computer, smartphone,
and online security does not require a degree or years of experience. All
it requires is someone to show you the way.
You've been sold a lie. You can't buy computer security. It is something
obtained through configuration and knowledge. Tragically, these aren't even
hard to do or obscure to learn. But no one makes money telling you how to
use what you already have. What you need is someone who doesn't care about
your money or looking smart by spouting off fancy words of no consequence -
just that you not be a victim.
It pains me to see people who distrust and fear their computers, and who
feel powerless in that fear. Because that's not what I see when I look at
computers and phones and websites. I see tools I trust with the story of my
life, and the secrets I leave out when I tell that story to others. Everyone
should be able to feel like that.
This site does not sell anything. This site does not take donations. This
site has no one's name on it.
This site is to fix what is broken. Which is how we teach security.
If you were wondering because it looked familiar, it's run by the same person behind @SwiftOnSecurity.
> You can't buy computer security. It is something obtained through configuration and knowledge.
Tragically, I believe this is true. But it isn't a great and noble thing that people must gain knowledge to overcome their powerless fear of computer technology; it is a failure of technology creators to provide people with simple tools that they can use without fear.
The problem isn't how we teach security, because hardly anybody should have to learn security in the first place. That the mainstream public is even aware of a concern called "security" having to do with their computing tools is already a failure. I can't think of any other mainstream product that people have to be so careful with, where they are told it is their own fault for not having gained the expertise necessary to use it without problems.
> I can't think of any other mainstream product that people have to be so careful with...
Cars. Those also tend to kill people, not just wipe out some baby photos. It's not an accident that almost every country requires licensing before you're allowed to use a car.
Good point! Although for most careless computer users the most dangerous outcome is identity and/or financial theft rather than losing baby photos, and that can arguably have as severe an impact as many types of auto accidents. But the point that it won't kill you is a good one.
Sadly, the current popular 'solution' to this problem is for massive centralised gatekeepers (Google/Apple/Microsoft) to control software distribution, and to varying degrees prevent you from installing anything that didn't come from their store.
This undeniably makes security easier for non-technical users, but I hardly need to point out the downside on HN: these companies get to decide what programs people can install and distribute to others. They're not held to the standards we expect of governments, like due process and accountability - even though there are probably now more Android users than citizens of any one country [1].
Kudos to anyone working on alternative ways to make security easy without these gatekeepers.
[1] 1.4bn Android users in September 2015, according to Techcrunch, vs 1.38bn estimated population of China in 2015. Android is growing faster.
I (and many others far more impressive than myself) am trying to solve this problem at a fundamental level: give developers tools that are secure by default (e.g. libsodium, not mcrypt) and teach better development habits. Make it easier to do the secure thing than the insecure thing.
It might take years, but I believe these initiatives will trickle up and make the software everyone uses more secure at a base, so it will require less cognitive load from the end users to communicate safely with each other.
That's the idea, anyway. Time will tell if we can succeed.
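To make "secure by default" concrete, here is a minimal sketch of what that looks like from the developer's side - my own illustration, not the parent's project - using PyNaCl, the Python binding to libsodium. The caller never chooses a cipher, mode, nonce, or padding; the library picks safe defaults and authenticates the message, so the easy path is also the secure path:

    # Sketch: a secure-by-default API (PyNaCl, the Python binding to libsodium).
    # The caller never selects a cipher, mode, nonce, or padding scheme.
    import nacl.secret
    import nacl.utils

    # Generate a random key of the correct size for SecretBox.
    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
    box = nacl.secret.SecretBox(key)

    # encrypt() generates a random nonce and authenticates the ciphertext automatically.
    ciphertext = box.encrypt(b"attack at dawn")

    # decrypt() verifies the authentication tag before returning the plaintext;
    # a tampered message raises nacl.exceptions.CryptoError instead of returning garbage.
    assert box.decrypt(ciphertext) == b"attack at dawn"

Compare that with an mcrypt-style API, where the developer picks the algorithm, mode, IV, and padding by hand, and any one wrong choice silently produces weak output.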
Wouldn't you be better off solving it with sandboxing? That is, don't allow programs to do bad things in the first place, rather than trying to get all programmers to be perfect. Basically the web (and/or some phone OSes).
Sandboxing is good at containing memory corruption and privilege escalation bugs. It's not very useful against cryptographic implementation flaws, logic errors, out-of-date software, etc.
Those problems are better solved by giving developers better tools and frameworks that solve these problems for them, that are simple to use and don't introduce massive security foot-cannons.
(This comment is a minor spoiler to my current project, I suppose.)
The problem with sandboxing is that "bad" has no formal specification. There are legitimate reasons to access contacts, intercept system calls or key presses, use raw sockets, etc.
If you try to make those things not possible then people who need them have to use a different platform, which tends to cause other people who need to interact with those people to use the same platform (and so on) until the original platform is in decline. And the effect is worse the more you lock things down. It doesn't help anybody to have an ultra-secure platform that nobody uses.
I completely agree! This is one of the reasons I'm bullish on Rust; in the long run it will be nice to have a (more-)secure-by-default systems level language.
I see your point, but it's at least partly a feedback loop. If software is going to protect users from themselves, it has to be opinionated and not allow the user to shoot themselves in the foot. This approach is not popular with users.
A good example is the backlash browser vendors get when they try to make TLS errors fatal (without a "continue anyway" button). Users will cry bloody murder until the option to bypass the warning screen is re-added, at which point everyone returns to clicking through all warnings and we're back at "you need to know what you're doing to have a secure machine".
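The same dynamic shows up in code, not just in browser UI. As a rough illustration (my own, not anything a browser vendor ships), Python's requests library treats a certificate failure as fatal by default, and the one-keyword override plays the role of the "continue anyway" button:

    # Sketch: the "click through the warning" pattern, in code.
    # Certificate verification is on by default, and a failure raises an
    # exception - the programmatic equivalent of a fatal TLS error page.
    import requests

    url = "https://expired.badssl.com/"  # test host with a deliberately expired cert

    try:
        requests.get(url, timeout=10)  # verify=True is the default
    except requests.exceptions.SSLError as exc:
        print(f"TLS verification failed: {exc}")
        # The one-line "continue anyway" button. Once people learn it exists,
        # it tends to get pasted into every script that ever hit a cert error.
        response = requests.get(url, timeout=10, verify=False)
        print(response.status_code)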
But the users screaming bloody murder are right! (Or rather, they are exaggerating, but they are in essence right.) Most TLS errors are not the result of someone trying to spy on you; they are the result of someone letting the certificate expire or something equally silly.
Furthermore, if all the people advocating HTTPS everywhere get their wish, then the people screaming bloody murder will become even more right! If I'm trying to load the HN homepage, and heaven forfend I get a security error, you better believe I'll ignore it, because even in the unlikely case that someone is spying on me, I can't think of how someone knowing which HN threads I read is going to hurt me in some way.
I'm not sure why you've been downvoted, I think you're quite right. A warning that's shown too often when there's no real threat gets ignored. Making the warnings bigger and scarier is just crying wolf ever louder, and it doesn't work.
I don't think the solution is to avoid HTTPS, however. I think sysadmins need monitoring and automation tools so that expired certificates can be an exceedingly rare event. Letsencrypt has taken a big step towards this by making a fully automated process to get a certificate.
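Part of that monitoring is easy to automate today. A rough sketch of an expiry check that could run from cron and nag well before a certificate lapses (the host list and 21-day threshold are placeholders, not a recommendation):

    # Sketch: warn when a site's TLS certificate is close to expiring.
    import socket
    import ssl
    from datetime import datetime, timezone

    HOSTS = ["example.com", "news.ycombinator.com"]  # placeholder host list
    WARN_DAYS = 21

    def days_until_expiry(host, port=443):
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # notAfter looks like "Jun  1 12:00:00 2026 GMT"
        not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        not_after = not_after.replace(tzinfo=timezone.utc)
        return (not_after - datetime.now(timezone.utc)).days

    for host in HOSTS:
        remaining = days_until_expiry(host)
        status = "OK" if remaining > WARN_DAYS else "RENEW SOON"
        print(f"{host}: {remaining} days left ({status})")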
I agree that, as a computer-literate(?) person, I mostly know how to avoid viruses. I've never run any anti-virus software (I could just be getting lucky).
My family, on the other hand, can't avoid clicking "Yes" or "OK" to anything their computers ever ask of them. Their machines get massively gunked up and infected, and nothing I tell them changes their behavior, because at a base level they just don't have the awareness. They're very smart people, but what the computer is doing or might do in response to their actions is just not something they think about.
Sadly, most of that advice will only work for those who work in IT directly. For those who use IT as one tool in the box to get something else done, or as an internet appliance, most of the suggestions will not fly. They will just hit yes on every UAC prompt, and approve every outgoing connection.
Sure, but instead of throwing our arms up and accepting defeat, initiatives like Decent Security are trying to move the needle away from "insecure by default".
I'm trying to do the same thing with developers. :)
I wonder if science is doing us a disservice here. I get the feeling that just a single vulnerability (no matter how complicated it may be to exploit) is enough to claim "fundamentally insecure" - as if we are approaching the topic the way we would try to disprove a scientific hypothesis.