From the article: "Proprietary security software is an oxymoron -- if the user is not fundamentally in control of the software, the user has no security."
I cannot agree with you. A user, on a given day, interacts with countless interfaces, and not all of those interfaces have to do with technology.
If you give everyone 100% control over every interface, this will drive the world crazy. How much control do you have over your house lock - do you take it apart every day? How much control do you have over the temperature regulation in your fridge, beyond the temperature dial?
People in computing must realize that a computer is just another interface for a user. For a programmer, I agree with you. But if I made cupcakes for a living, I'd want control over my oven, not my computer. As a programmer though, I trust that my oven company did their job, and I eat food out of it every day.
> How much control do you have on your house lock - do you take it apart everyday?
No, but I can hire any locksmith of my choosing to look at it.
The point isn't that end-users are in control of it, but that end-users can hire experts to adjust it if needed. Giving a manufacturer complete aftermarket control of their product is entirely different.
That's not really a valid analogy. Why? Well, the user of an open source product gets the same sort of interface as the user of a proprietary product. It can be a pretty GUI that only presents the things you care about.
The difference comes when you want to customize its behavior--in the open source version, you could dive into the code yourself or, more likely, hire somebody to look into it for you. Similarly, if you were very worried about security, you could run an independent audit of the code yourself. Neither of these options exists for a proprietary product.
Basically, you're arguing against a straw man. Nobody is suggesting giving the user more options in the interface. Rather, the suggestion is to give the user access to the source code in case they want something modified.
The "fundamental control" in question is about who gets to see and use the source code, not about what the average user can do from a GUI.
> The difference comes when you want to customize its behavior
What does that have to do with the claim that using the unmodified version means the user has no security?
Everything you said only shows that you can be more sure that open source code is secure. But it in no way proves that "proprietary security software means no security at all" (EDIT: better phrased as "proprietary security software means the user has no security").
That was addressed in the parent poster's very next sentence:
> Similarly, if you were very worried about security, you could run an independent audit of the code yourself. Neither of these options exists for a proprietary product.
> How much control do you have on your house lock - do you take it apart everyday
I actually have taken apart the locking mechanism once to repair it. It got stuck, the door wouldn't open, so I fixed it. I didn't take out the barrel or dig too deep in, but I didn't need to. If I hadn't been technically competent to do that, I would have been able to hire someone else to do it for me.
The software mentioned in the article already gives you that level of control, through the use of exceptions. But this is just as much control as my fridge gives me with its temperature dial.
What the FSF is arguing for is "fundamental" and full control of the inner workings of the software, which is what I find a bit narrow.
Make software secure by default, make it difficult to override the defaults by accident, and easy to override them on purpose. This seems to be the thought process behind, for example, CyanogenMod's recent decision to disable root access by default.
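As a toy illustration of that principle (all names here - `SecuritySettings`, `enable_root_access`, the confirmation string - are invented, not from any real product):

```python
# Hypothetical sketch of "secure by default, hard to override by accident,
# easy to override on purpose". Names and API are made up for illustration.

class SecuritySettings:
    def __init__(self):
        # Secure defaults: nothing is opened up unless explicitly asked for.
        self.root_access = False
        self.remote_debugging = False

    def enable_root_access(self, confirm: str) -> None:
        # Overriding a default requires a deliberate, explicit step,
        # so it cannot happen via a stray flag or a typo.
        if confirm != "I understand the risks":
            raise ValueError("refusing to enable root access without explicit confirmation")
        self.root_access = True


settings = SecuritySettings()
assert settings.root_access is False                  # secure by default
settings.enable_root_access("I understand the risks") # easy on purpose
assert settings.root_access is True
```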
To me, the essence of that quote is that any sane security model must include the vendor as a threat, but when I do not have control over my own hardware and software, I have little to no ability to respond to that threat. Moreover, without access to source code, I have little to no ability to audit my own security. Fundamentally, I do not believe in the idea that security can reasonably exist without auditability.
The problem is that, in most cases, the user is unable to judge quality. The difference here is between software that anyone with knowledge can audit and software that only the vendor can.
And the user has to trust that these 'knowledgeable people' are indeed knowledgeable. The exact same thing they have to do with a vendor.
Open-source and closed-source software really aren't all that different for users. All the differences require you to become a professional in the field, or a subset of the field, before they mean anything.
The difference is that, once you have chosen who to trust, you are not forced to trust them forever; you can change who you trust whenever you want and for whatever reason you may have.
Can you say "I would like to keep using Exchange, but I do not trust Microsoft, so I will switch to another vendor"?
PS: this has to do both with open source and open formats as well.
I don't trust anybody. That applies to both open- and closed-source apps.
Security is one concern of many. The overriding one being business value. Using Exchange as an example, it would be difficult for me to come up with something that matches its value. I don't, however, trust it. As an aside, I trust Gmail even less, because at least Exchange runs in an environment I control.
And therein lies the solution. Defence in depth. It's a process, and should not rely on any single product. Open- or closed-source.
I might choose to protect myself against external, internal and vendor attacks (unintentional as well as malicious) by installing a network firewall, a proxy service, and an application firewall. I might then deploy access controls that authenticate users and authorise their access based on certain criteria. I'd devise a patch strategy. I'd implement an audit policy. I'd do a whole heap of stuff.
Frankly, the argument for either open- or closed-source is getting tired. Any threat can be mitigated. Its success depends entirely on the value of the asset being protected, and the amount of money you're prepared to spend protecting that asset.
If we are talking about open-source versus proprietary software, you shouldn't offer Gmail as an alternative to Exchange, since both are proprietary. In fact, I believe your perception of security is wrong - if someone wanted to steal your e-mails from your Exchange server, all they have to do is compromise a sysadmin inside your organization (or your co-lo provider). To get someone's email from Gmail, you'd have to compromise a sysadmin within Google with access to the information you want. Even pinpointing one is, most likely, hard.
I did point out that the Gmail comparison is an aside. An aside because as you point out, neither is open source. Gmail in comparison to Exchange does, however, demonstrate the benefits of controlling the environment.
Apologies, I wasn't very clear.
That said, compromising a sysadmin account in Exchange does not yield email content, assuming service accounts have been correctly configured (ref. security policy). However, a Gmail sysadmin can have (and has, in the past, had) access to users' email content.
If I wanted access to anyone's email account, I'd opt for a social engineering attack on that user directly. Given history, that's the easiest vector.
If you think advocating defense in depth as a strategy betrays a lack of understanding of security then, well, damn.
> And the user has to trust that these 'knowledgeable people' are indeed knowledgeable
Actually, the user doesn't have to trust anyone: the user can ask more than one party to audit the code. In fact, with open source, there are, usually, many different parties already auditing the code.
The advantage of open source is that the source code will (or at least should/could) be audited by a variety of people, from different organizations and private individuals, who will have differing motivations; therefore it is much more difficult to get away with something that may be advantageous to one group but not to all the others.
Unless you consider the "user" to be the employee sitting at the desk - which is a topic for another discussion - the FSF would have to be claiming that website blocking by an employer is user abuse.
Edit: There's speculation that the gambling classification was triggered by a pattern matching algorithm because of accepting bitcoin which mostly gambling sites tend to do. Maybe that's why only the donation page got blocked.
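To make the speculation concrete, here is a purely hypothetical toy of the kind of naive keyword-based classifier that could misfire this way (the category names and keyword list are invented):

```python
# Toy keyword classifier, purely illustrative. A page that merely *mentions*
# bitcoin gets swept into "Gambling" because the keyword list conflates
# payment method with site purpose.

GAMBLING_KEYWORDS = {"casino", "poker", "betting", "bitcoin"}

def classify(page_text: str) -> str:
    words = set(page_text.lower().split())
    if words & GAMBLING_KEYWORDS:
        return "Gambling"
    return "Uncategorised"

print(classify("Donate to the FSF via paypal or bitcoin"))  # -> Gambling
print(classify("Read our latest campaign news"))            # -> Uncategorised
```

This would also explain why only the donation page (the one naming accepted payment methods) tripped the filter while the rest of the site did not.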
> The users are in control of the software, they can disable the gateway or add exceptions to it.
Still, you have to trust Microsoft that the software will run as expected. With proprietary software, you have no option but to trust the provider. No matter how much they claim the software is secure, you will never know.
> BadVista campaign pages were conspicuously absent from Microsoft's live.com search results, even though the same pages had been appearing on the first page of "windows vista" Google results for some time. Many people contacted Microsoft about this, and eventually the pages began appearing as one would expect.
I wonder whether services like DuckDuckGo, which aggregate search results across multiple providers, are effective at bypassing this kind of censorship[1]. For example, if Yahoo started filtering out sites that are negative towards Yahoo, and Microsoft did the same for their brand, and Google for theirs, I think DuckDuckGo would still be able to provide a more balanced result set.
(Unfortunately, the place where DDG seems to be least effective is when I'm looking for a particular article by searching for a certain phrase; Google always seems to find it, while with DDG I have to dig around. So hopefully that article isn't anti-Google...)
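The idea can be sketched as a simple union-and-vote merge - a result suppressed by one provider still surfaces if any other provider returns it. This is a hypothetical sketch, not how DDG actually merges results; the provider names and lists are invented:

```python
# Hypothetical multi-provider result merge: rank URLs by how many providers
# returned them (stable sort keeps first-seen order among ties), so a URL
# filtered by one provider is demoted but not erased.

def aggregate(results_by_provider: dict) -> list:
    counts = {}
    order = []
    for urls in results_by_provider.values():
        for url in urls:
            if url not in counts:
                counts[url] = 0
                order.append(url)
            counts[url] += 1
    return sorted(order, key=lambda u: -counts[u])

results = {
    "providerA": ["example.com/visit", "critic.example.org"],  # no filtering
    "providerB": ["example.com/visit"],                        # filters the critic
}
print(aggregate(results))  # critic.example.org still appears in the merged list
```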
The software in question appears to be Microsoft Forefront Threat Management Gateway. From the features page [1], it states:
"Forefront TMG 2010 blocks malicious sites more effectively by using aggregated data from multiple URL filtering vendors and the anti-phishing and anti-malware technologies that also protect Internet Explorer 8 users. The highly accurate categorization of websites also blocks sites that may violate corporate policies."
Unless IE8 also blocks fsf.org, we can assume that the "multiple URL filtering vendors" are the source. Does anyone know who those URL filtering vendors might be?
This seems like a Hanlon's razor situation to me. A lot of these lists are purchased from third parties who do this for a living (make blacklists).
While we cannot say anything for certain until Microsoft responds - I think their response will essentially blame a "partner" and release an update which removes the FSF from the gambling blacklist.
Grey's Law? Because Snidely Whiplash at Microsoft really and fundamentally cares about what the FSF is going to say to people, right? They care so deeply that they're going to unleash a bwa-ha-ha eeeeeevil campaign to call the FSF's site a gambling site?
Or, shock of shocks, maybe one person, working for the third-party that sold Microsoft that particular "dodgy domains" list, clicked the wrong checkbox on that one domain (of many hojillions checked per week).
Could someone explain how a mistake like this happens?
If there was a person typing IP addresses into a list I can imagine them making a typo, but obviously that's absurd and these lists are auto-created. So how does a website get labelled as a gambling site?
Based on how non-Microsoft software does this, what happens is that uncategorised sites are fed back upstream, so the software provider receives a list of pages.
They will then forward this to a partner, likely headquartered in the West but outsourcing most of the actual work somewhere cheap, like India or perhaps China.
Then you have a bunch of people who come into work and work their way through a massive list of web pages, trying to spend no more than a few seconds on each (metrics, etc.), putting them into boxes:
- Adult
- Web Mail
- Social Media
- News
- Entertainment
- Gambling
- et al
These lists are then sold to many companies: to Microsoft, to firewall vendors like SonicWall, to parental-control software like NetNanny, and also on to people who write anti-spam software.
Except that no one involved really gives a shit. This is enterprise software bought by some manager to restrict access for the hoi-polloi in order to satisfy some HR rule. It's at least three levels removed from reality.
Not really, since false negatives are the bigger issue. For companies that use this software, keeping porn and viruses off the network is a higher priority than letting charity donation pages through.
These companies have very little trust in their employees. A flip side of this is a lack of respect. Blocking non-work-related sites is an acceptable, if not desirable, result.
Yes; but they are paid professionals, not some random person on the internet.
There is normally a way to "complain" if something is incorrectly tagged but it is after the fact. The local administrators (or parents) can normally add it to a whitelist while it gets resolved.
Each of these systems is probably different, but the way one vendor's system was explained to me is that an automated classifier will eventually visit most sites and come up with a computed category for the main page only. The system does not want to give full access to a site with a lot more potential harm, so if a customer visits one of these automatically classified sites and sees it is restricted, they can submit a classification, at which point an employee will verify the classification and unlock the full site. Sometimes the restrictions are bizarre, such as allowing the HTML content to be viewed but not the CSS. This process takes up to 24 hours.

From what I've seen, these automated classifiers are not always friendly to hostnames tacked onto a domain name (e.g., fsf.org/donate would probably fare better than donate.fsf.org). They also routinely prevent visiting new domain names thrown up in Show HN posts until the URL is submitted for classification and gets whitelisted.
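The workflow as described (auto-classify, customer disputes, employee verifies and unlocks) can be sketched roughly like this - all names (`SiteDB`, the status strings, the categories) are assumptions for illustration, not the vendor's actual system:

```python
# Minimal sketch of an auto-classify -> dispute -> human-verify workflow.
# Everything here is hypothetical; real vendor systems will differ.

PENDING, VERIFIED = "pending", "verified"

class SiteDB:
    def __init__(self):
        self.sites = {}  # domain -> {"category": str, "status": str}

    def auto_classify(self, domain: str, category: str) -> None:
        # The crawler's guess covers only the main page and stays tentative.
        self.sites[domain] = {"category": category, "status": PENDING}

    def submit_correction(self, domain: str, category: str) -> None:
        # A customer disputes the category; it is queued for human review.
        self.sites[domain] = {"category": category, "status": PENDING}

    def verify(self, domain: str) -> None:
        # An employee confirms the submitted category and unlocks the site.
        self.sites[domain]["status"] = VERIFIED

db = SiteDB()
db.auto_classify("donate.fsf.org", "Gambling")        # automated misfire
db.submit_correction("donate.fsf.org", "Non-profit")  # customer disputes
db.verify("donate.fsf.org")                           # human review (up to 24h)
print(db.sites["donate.fsf.org"])  # {'category': 'Non-profit', 'status': 'verified'}
```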
At the end they ask people to ask their employers to stop using Microsoft software like the kind causing this problem.
First, I really doubt all but a handful of companies would actually do this on account of the FSF site being blocked. It's a simple cost-benefit analysis: the cost of replacing such software is high, and occurrences of such mistakes that would actually hurt a company are rare. Therefore it's not going to happen in the vast majority of cases.
Secondly, I'm surprised the FSF is using a service that has anything to do with proprietary, closed source, non-free software. Given their philosophy you'd think they'd have found some way to collect donations that uses free software from top to bottom. Maybe I'm way off on this but one of my first thoughts was that maybe even complaining about this takes away just a bit of the FSF's credibility. They preach the gospel of free software but when it's time to fundraise they make an exception? Is this a "do as I say, not as I do" situation now?
I'm not trying to be overly critical and I realize this may be a bit pedantic too. It's not a big deal to me, just thought it interesting. Food for thought maybe.
> Secondly, I'm surprised the FSF is using a service that has anything to do with proprietary, closed source, non-free software.
Where does FSF's post say anything about FSF using proprietary software? As far as I can tell, the post is simply about how some r/GNU redditors [1] noticed how Microsoft's gateway was incorrectly categorizing donate.fsf.org, not that FSF itself is using a service that is proprietary (unless you mean PayPal).
When you mention "a service", are you referring to PayPal? In fairness, there are zero payment gateways (as far as I know) that are open source, sadly, so they don't have much choice. I guess the only alternative they have is Bitcoin, which they accept; however, as that hasn't caught on, they have little choice but to use other services too.
So, when it comes to walking the walk, are you saying FSF is just like everyone else: free software is great, as long as it doesn't stand in the way of making money?
Without online donations, they wouldn't be able to raise as much money to support their campaigns. Personally I believe that it is worth it, just as it is worth using a closed-source machine to save somebody's life, rather than let them die for the sake of ideals. Of course the FSF would rather use free software to take donations (accepting bitcoin was an attempt to enable this), however by sticking to ideals in this situation it would harm free software more than it would help it.
This reminds me of working at a place where the Sonicwall filtering service would block the browser "Opera Desktop Blog" link claiming it was a swimsuit/modeling site.
"This reminds me of another situation several years ago, when BadVista campaign pages were conspicuously absent from Microsoft's live.com search results,..."
I could not agree more.