> Mozilla expects that the add-on limits data collection whenever possible, in keeping with Mozilla's Lean Data Practices and Mozilla's Data Privacy Principles, and uses the data only for the purpose for which it was originally collected.
There are a number of browser extensions maintained by antivirus companies that use security as a disguise to collect and monetize user data. It is time for Google and Mozilla to act on this issue and protect users from these predatory practices.
Benign extensions that are genuinely useful and don't have a company behind them do not get this treatment [1]; they are blocked without preliminary contact with the developers [2].
I want to second this sentiment and EXPLICITLY call out Mozilla.
I am the lead dev for a relatively small Firefox extension. We do not do ANY tracking.
We were rejected and removed simply for including the SDK for Microsoft Outlook add-ins as a script in one of our HTML pages (we share the codebase with an Outlook add-in as well). This script is well documented and published by Microsoft.
I find the hypocrisy here staggering.
I know that Firefox gets a lot of love, particularly on HN because it feels like Mozilla is still a trustworthy company. I want to clearly express that I no longer believe this. They want addons in the store that they can use for marketing and sales. Period.
Thanks for posting this. I have had similarly odd experiences recently with the Firefox addon store, which my startup has been in since 2013. All of a sudden we got yanked, among other things because we modify third party libraries. Apparently we would have been fine if we did exactly the same thing but wrote the code ourselves, but since we used a library, and then modified it, we were in violation. To be clear, we provide all our source code in the review process, so it is 100% clear what we are doing. And we don't do anything that is remotely privacy- or security-compromising, which is very clear from the code.
We tried to understand this bizarre no-modifying-third-party-libraries policy and see how we could fix it, but they stopped responding and eventually even deleted our extension from the browsers where our users had previously installed it. (Even Apple doesn't do this when it yanks apps — only rarely, if there is proven bad behavior, will they pull an already-installed/paid-for app from a device.)
I happen to know a couple very high-up people at Mozilla, and one of them was able to flag our mistreatment, and the reviewers now seem to be walking back the previously-described global ban on modifying third-party libraries, but we're still not back in the addon store (it's been months).
The (alleged?) policy makes no sense to me, and I also don't understand why Mozilla is now blocking users from installing any addon that hasn't been blessed by Mozilla. I understand that they want to vet addons that are listed in their store, but they've assured me that users can't even install off our website unless Mozilla signs off. That seems very un-Mozilla-ish to me. What happened to the open web?
For the record, I used to love Mozilla/Firefox, and have used their browsers for decades. I now use Brave, both because of experiences like this one, and because it's much faster on my Mac.
The most charitable explanation I can come up with is that they want to avoid confusion attacks on their reviewers. But then again they already have a list of hashes for trusted libs, so clearly they could just diff the modified library against the canonical version when a hash doesn't match.
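A minimal sketch of what that reviewer-side check could look like (the trusted-hash table and filenames below are made up for illustration, not Mozilla's actual tooling):

```python
import difflib
import hashlib

# Hypothetical table of trusted-library hashes; the comment above notes
# Mozilla already keeps such a list. Entries here are placeholders.
TRUSTED_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000":
        "example-lib-1.0.min.js",
}

def review_library(submitted: str, canonical: str) -> list:
    """If a bundled file's hash isn't in the trusted list, return the
    unified diff against the canonical release, so a reviewer only has
    to audit the author's modifications rather than the whole file."""
    digest = hashlib.sha256(submitted.encode()).hexdigest()
    if digest in TRUSTED_SHA256:
        return []  # byte-identical to a known release: nothing to review
    return list(difflib.unified_diff(
        canonical.splitlines(keepends=True),
        submitted.splitlines(keepends=True),
        fromfile="canonical", tofile="submitted",
    ))
```

A reviewer would then only need to audit the handful of changed lines instead of rejecting the submission outright.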
> but they've assured me that users can't even install off our website unless Mozilla signs off. That seems very un-Mozilla-ish to me. What happened to the open web?
I have clashed with their addons staff over similar issues. Their argument is basically security absolutism, motivated by (some) users easily being tricked. Any inch of rope you give the user they will use to hang themselves. So any security-related decisions they make are based on this dumbest possible user. Additionally they consider it a reputational issue, if an extension screws up firefox then firefox is blamed, or so they say.
Firefox stable basically means nannying. If you want more control you're supposed to use developer edition or (on linux) the distribution builds. They don't say it quite that directly, but that's what their answers boil down to.
> Their argument is basically security absolutism, motivated by (some) users easily being tricked.
Honestly, most of the people I know who use FF these days are pretty tech-savvy. This wasn't always the case — my mom used to use Firefox. But Chrome/Safari/Edge are the default browsers these days, and they're pretty good. The only people installing other browsers are either pretty savvy, or being helped out by people who are savvy.
I realize that Mozilla perhaps yearns for the days when it had a much larger market share (and therefore many more easily-tricked users). But those days seem to have passed.
Likewise, they may be trying to position themselves to become the go-to browser for the unwise masses. And perhaps someday they will retake that crown. But in the meantime, these nanny-state restrictions piss off their developer-heavy base.
This article, along with some other sites and my own digging, made me leave Firefox for good. It's a good read if you value privacy and think Mozilla is the good guy in a field dominated by evil corporations. At first it might seem biased, but the author makes very good points and backs them up with tons of sources that can't really be argued with - most of the article is reviewing privacy policies and terms-of-service agreements that are publicly available to check.
For now I've stuck with Pale Moon, but I'm considering some other browsers like Ungoogled Chromium for my daily private and work use. Pale Moon feels kinda janky and old-school, in a not-good way. But it's fast and (with some minor tweaking: https://spyware.neocities.org/guides/palemoon.html) respects your privacy 100%.
I've read through the first bit of it, and the author comes off as a huge pedant.
They spend most of the introduction ranting about browser features being removed in updates - because their expectation seems to be that once Mozilla ships a feature, it must forevermore be a part of the latest version of the browser, from now, until the heat death of the universe...
Disclaimer: I don't actually use Firefox; I don't have a horse in this race.
isn't this generally true of every extension/app/smart appliance? Everything wants to collect and sling your data on the side. Browser and phone vendors do try to protect users with permissions controls in addition to the store curation that you point out is imperfectly executed. How do you propose they enact a perfect system of vetting for the long tail of benign extensions?
Equally enforcing policies regardless of who's the publisher of an extension would be a good start.
This is how the Avast blocklisting request should have been handled:
- Immediately blocklist [1] the extension because it harvests personal data without user consent, notify Avast
- Offer to reenable the extension for the existing user base when Avast reaches out and stops data collection in an extension update
Because the extension was not blocklisted, but only temporarily removed from the store, personal data was siphoned off without the consent of existing users for weeks, with the knowledge and implicit approval of Mozilla, until Avast released an update.
Users that have configured Firefox to update extensions manually may continue to use an extension version which steals personal data, despite Mozilla being capable of disabling those extension instances, and despite their recent commitment [2] to use blocklisting more proactively when an extension is circumventing user consent or control.
Mozilla did not respect its own policies and has put the interests of Avast before the privacy and safety of its users.
The extensions were temporarily removed from AMO, but blocklisting was not enacted, so around 1 million existing installations have remained operational while Mozilla was waiting for weeks for Avast to submit an update.
The initial update for one extension was also approved despite the extension ignoring a declined consent and resuming data collection after a browser restart [1].
Well they could start by banning extensions that make the news for sending all of their visited urls to a third party.
Ultimately we need to give the user more control over what software on their system is doing. None of these entities (mozilla, google, apple, microsoft, amazon, facebook) can be trusted to act in our best interest
The article seems a little sensationalized, but even viewed in a generous light it's still downright creepy.
...clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Sephora, Home Depot, Condé Nast, Intuit, and many others.
It is possible to determine from the collected data what date and time the anonymized user visited YouPorn and PornHub, and in some cases what search term they entered into the porn site and which specific video they watched.
Although the data does not include personal information such as users' names, it still contains a wealth of specific browsing data, and experts say it could be possible to deanonymize certain users.
From the PCMag article, it seems that Jumpshot's marketing materials claimed they'd worked with some companies that denied actually doing business with them when contacted by reporters, hence the "potential" clients - it's not clear who's lying.
The article is actually rather generous, particularly when it says "could be possible to deanonymize certain users." Having done my own research on that [1], I would have formulated this as "allowed to deanonymize most users."
IIUC, porn is used as an example to highlight that these are very personal details being sold, and porn preferences are something people often fear leaking more than - for example - their financial data.
John Oliver, in his segment on the NSA scandal, had Snowden explain individual NSA programs in terms of "dick pics", to help people visualize and clarify the consequences of what is otherwise just generically described as "selling/stealing your data".
Yeah people always assume the buyers of the data are companies like Google, and not some group who wants to blackmail you if you don't give them five Bitcoins.
> porn preferences are things people often fear more of leaking than - for example - their financial data
I remember this being the opposite - for nude pictures instead of porn preferences at least.
Avast did a study where 77% of US respondents said that, between nude pictures and financial data, they would rather have nude pictures of themselves leaked [0].
Maybe it's because financial data is directly tied to one's well-being but except for certain professions nude photos aren't.
Then again nude photos aren't necessarily the same as porn.
Why concentrate on intended use? More interesting is what, er, unanticipated value-add might be there.
Could a malicious actor create a blackmail-bot? It wouldn't even need to be that great - just something a little more believable than the "I took over your computer and videoed you masturbating" spam.
Could a more subtle, targeted blackmail operation involve this data? There is a lot of this sort of thing in politics.
We know PIs, bail bondsmen, and other folks on the fuzzy line between LE and the commercial coercion industry were heavy users of realtime cellphone tracking. I imagine the more creative ones have considered the value of these data pits, too.
Avast/Jumpshot data is bought by many marketing companies too and packaged as SEO tools, market research/analysis. They've been really proud to talk about their ability to collect all this data in the past [1]. But recently it became clear that all the data is stolen without user permission.
My dad had me remove Avast (which he paid for) from his laptop over Christmas because he had started to get mailers based on his browsing activity. It literally locked up his computer for an hour while uninstalling, showing prompts that tried to get you to accidentally cancel the uninstall.
Kind of amusing that this is currently the #2 story, while #1 is "Trust Is at the Core of Software Marketing".
If you had told me five years ago that I would stand up exfil monitors on my home network because commercial and criminal surveillance was so pervasive, I would have said that would be crazy talk. And yet here we are.
Basically, DNS logging forwards to a daemon I wrote that detects Base64/UU/other encodings in DNS requests and asks the network manager[1] to shut off connectivity to the client asking such questions. There is a volume-of-queries timer, and I have some other ideas to add to it.
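For illustration, here's a rough Python reconstruction of that detection idea (my own sketch, not the commenter's actual daemon; the label-length threshold is an arbitrary assumption):

```python
import base64
import re

# Plain hostname labels: short runs of lowercase letters, digits, hyphens.
PLAIN_LABEL = re.compile(r"^[a-z0-9-]{1,20}$")

def looks_like_exfil(qname: str) -> bool:
    """Crude heuristic: a long DNS label that decodes cleanly as Base64
    is a common sign of DNS tunnelling/exfiltration."""
    for label in qname.rstrip(".").split("."):
        if PLAIN_LABEL.match(label.lower()):
            continue  # looks like an ordinary hostname label
        # pad to a multiple of 4 and attempt a strict Base64 decode
        padded = label + "=" * (-len(label) % 4)
        try:
            base64.b64decode(padded, validate=True)
            return True
        except Exception:
            pass
    return False
```

It's crude and has false positives (very long plain-lowercase labels also decode as valid Base64), which is presumably why you'd pair it with something like the volume-of-queries timer mentioned above.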
I'm working on a TLS-inspecting proxy with squid and sslsplit, as I expect there's a lot to look at. I very much want to know if something emits any of various magic numbers - SSN, bank account numbers, address book entries, IMEIs, etc.
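A hedged sketch of that magic-number scan (the patterns below are assumptions for illustration; a real deployment would tune them and add checksum validation, e.g. a Luhn check for card and IMEI numbers):

```python
import re

# Naive shape-based patterns for sensitive identifiers (illustrative only).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN: 3-2-4 digits
    "imei": re.compile(r"\b\d{15}\b"),            # bare 15-digit IMEI
}

def scan_payload(payload: str) -> set:
    """Return the names of sensitive-data patterns found in a
    decrypted payload passing through the proxy."""
    return {name for name, rx in PATTERNS.items() if rx.search(payload)}
```

Both patterns will match plenty of innocent digit runs, so in practice this would be an alerting signal to investigate, not an automatic block.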
As far as more general blocking, I use parts of Pihole, which merges with some other data sources, to defeat DNS resolution for folks I don't want on my internet. And I use Maxmind's geoip data to generate iptables rules to block most of the world on public facing infra - my tiny user base is not in most of it.
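The geoip-to-iptables step can be sketched like this (the chain name and CIDR blocks are made up; Maxmind's GeoIP downloads provide the per-country network blocks that would feed the list):

```python
def iptables_rules(cidrs, chain="GEOBLOCK"):
    """Yield iptables commands that drop traffic from the given CIDRs
    via a dedicated chain hooked into INPUT."""
    yield "iptables -N {}".format(chain)
    for cidr in cidrs:
        yield "iptables -A {} -s {} -j DROP".format(chain, cidr)
    # finally, route all inbound traffic through the blocking chain
    yield "iptables -A INPUT -j {}".format(chain)

# Example with documentation-reserved CIDRs:
rules = list(iptables_rules(["203.0.113.0/24", "198.51.100.0/24"]))
```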
I've often felt that antivirus software is like the rock that keeps the tigers away. I haven't used it since the AOL days, and that's also the last time I was actually hit with malware (something from Kazaa, I would guess).
It's also worth noting that every corporate IT department I've ever seen installs antivirus software on employee machines, so it must be good for something? I'm curious what the actual statistics are for caught viruses.
That industry seems like a dumpster fire. I once saw an assessment that includes a regular firewall scan. Apparently everyone locks down their firewall, triggers the scan, re-opens their firewall.
We were provided with a very large list of AV software (~2 printed pages long). Windows Defender was not on it.
It was decided the risk of using software not on the list was not worth it.[0]
[0] It was the 2nd annual assessment/audit I was involved in -- during the first one the sticking point was that our user-facing website was exposed to users via a CDN that was detected by the vulnerability scanner as some PHP application, which caused us to fail that test. Needless to say we did not want to deal with such mess again.
Antivirus was more necessary back in the days when firewalls and security elevation privileges were rarely enabled on consumer machines. But more modern security software has rather weakened the value proposition for AV, and they've started branching into areas that aren't really helpful at all and sometimes counterproductive (e.g., web browsing protection--your AV does a much, much worse job of validating TLS than browser vendors do).
Given the dirty things that AV does to "work" correctly, you can characterize modern AV as malware that tries to keep other malware out.
I don't think this is true. Generally, I think that privilege levels are a red herring, and that being able to get malware running at the privilege of the user installing it is sufficient; to the extent that this is changing, it's changing only recently.
I'd say that the problem with AV is that it has never worked at all; it's always been a "deck chairs on the Titanic" performative spot clean situation – at best, a smart team that reimages a machine with a confirmed "infection" can say they're using AV as a weak sort of host intrusion detection system.
AV or no AV, for as long as I can remember (back into the 1990s), if your desktop is "infectable", you're screwed. You'd think that the Summer of Worms would have confirmed that for everyone, but AV is an extremely powerful marketing product category, because of its built-in per-seat multiplier.
Do they have a 100% track record of preventing viruses from being installed on your system? Once your system is compromised, it's compromised. There's nothing the AV can do. It's the OS (and to a lesser extent user practice) that's going to limit the damage.
>It's also worth noting that every corporate IT department I've ever seen installs antivirus software on employee machines, so it must be good for something?
Yes, it absolutely is good for something. Antivirus software is excellent for making companies like McAfee highly profitable, and it's also really good for slowing down your computer.
Avast's owners, two Czech guys, rank among the wealthiest Czechs ever. One has a net worth of over 1 billion USD, and the other at least as much.
I had some respect for them some time ago, but these days their product is shit and worse than having nothing. Plus with Windows defender, who still actually pays for it?
For corporate IT, it’s a compliance requirement. It is the equivalent of putting a sign in the bathroom that says to wash your hands, only less effective. AV does almost nothing and costs almost nothing.
The newer products are different and more effective, and come with an appropriate price tag. (The Microsoft solution costs as much as O365!)
For those that might not understand the compliance requirement, PCI compliance is a good example. If you process credit card payments, you need to be PCI (Payment Card Industry) compliant. And PCI DSS Requirement 5.1 [1] states
> Deploy anti-virus software on all systems commonly affected by malicious software (particularly personal computers and servers).
So most enterprise companies have to have AV on their workstation and servers (yes Mac and Linux too) in order to keep processing credit card payments.
In that case Square is the payment processor. The vendor using the Square reader is not required to be PCI DSS compliant, only Square is. Same for accepting Paypal, Stripe, etc.
As soon as the charges cross a certain rolling dollar amount, one needs to complete a security assessment that requires AV in addition to a pile of other idiotic things - such as a specific set of signatures returned by the web servers in a scan by TrustWave. Unrecognized signature? Failed!
I've done PCI/DSS consulting. For the biggest provider of online loan servicing, in fact. Over 80% of all online loans are processed through their system, mostly done as a white-label service for virtually every big bank, credit union, or other financial services provider in existence.
The reality of PCI/DSS is a bit more ... complex.
What it really comes down to is whatever your auditor says you have to do in order to meet the requirements. And history has already taught us that if your auditor is one of the Big Accounting firms, then they can be ... flexible ... if you're a big enough customer.
So, there's the letter of the law, and then there's what you actually have to implement in your code. And the two may have relatively little to do with each other.
Just make sure that you document everything you do within an inch of your life and the life of the code in question, so that when there is a breach (and there WILL be a breach), you're not the one that is being left out in the cold.
> For corporate IT, it’s a compliance requirement. It is the equivalent of putting a sign in the bathroom that says to wash your hands, only less effective. AV does almost nothing and costs almost nothing.
Corporate IT at my previous employer even allowed and aided us in uninstalling the resource-hogging McAfee installation that comes default with a company machine. That stuff was crippling our machines.
We loved it at a previous employer when the corporate-approved Endpoint Protection Solution silently installed a new firewall component, without telling anyone. Of course, it is (was?) a requirement of the corporate-approved Mobile Device Management system.
Broke everyone in the company for weeks (?), because the only thing it allowed was ports 80 and 443.
No one could figure out what the problem was, because the OS-standard firewall configuration clearly showed that all the required ports were open, and therefore it couldn't possibly be a firewall problem.
Well, at least it broke all those people who had the corporate MDM software installed on their machines. Those who had a different MDM installed and so could not install the corporate MDM, well those people were able to get regular work done.
Oh man. Are you talking about Triton? I'm forced to endure that for now as well. It's terrible.
It actively attempts to inject its own version of libwebp into nearly every single build and breaks them because it's not signed!
It's maddening, and crippling! We've recently moved umbrella companies, so that software will hopefully be going the way of the buffalo, but I can't even run Vagrant on my machine because of it.
If you do any business with the US federal/state/local government, healthcare, criminal justice, Tax collections, most financial institutions, you've signed a contract where you agreed to do a bunch of stuff, including this.
Well... do you want to stand up in court some day and explain why you didn't install A/V software? It reduces risk, just maybe not the risk we expect it to.
Look at Avast's privacy policy:
"Enable use of your personal data to create a de-identified data set that is provided to Jumpshot to build trend analytics products and services."[1]
Where does the "consent" mentioned there occur? They claim to be GDPR compliant; under the GDPR, consent has to be explicit and separate from other agreements.
AV software acts like malware and treats your computer pretty much the same way malware does, so it should not be surprising that the companies behave the same way.
They're in Prague, would be nice to see the teeth of the GDPR used to put the fear of god into parties in this space. Maximum penalty times the number of clients they had should do the job nicely.
Is there any chance what they're doing might fall under some sort of 'unauthorized access' computer hacking criminal law? Something that could get individuals sent to prison rather than just getting the company fined? There is no way they had meaningful informed consent for any of this.
I don't think shit like this will stop unless executives and the engineers who humor them start getting their lives justifiably destroyed.
Can we discuss how ridiculous it is for PCI compliance to require antivirus software? Particularly on macOS, where antivirus software is the primary attack surface for introducing viruses.
Why hasn't this requirement been updated to something more sensible?
Keep in mind that PCI requirements only apply to machines within the cardholder environment - everything else is out of scope.
What this means is that as long as you isolate your cardholder environment, you don't need to deploy AV across your whole company - only on those in scope.
I'm not a huge fan of AV, but I do advocate for a layered approach to security, and have to concede that AV may have some value as a "last chance" layer if malware somehow manages to get past your other defences.
I'm not surprised. The antivirus sector seems to prey on people as much as the virus makers do. I had my PC infected accidentally by Norton antivirus a few years ago. They snuck it onto my system by hiding it in an Oracle Java update, and it didn't install right away; it waited a bit so I couldn't immediately link it back to what caused it.

The uninstall function in Programs and Settings was a misdirection and threw an obscure error. The uninstaller downloadable from their website would pretend to uninstall, require a reboot, and after the reboot the malware would reseat itself on my system. Eventually I had to start up in safe mode and painstakingly go through my system file by file, and registry key by registry key, to root it out.

They know all the techniques the viruses use, and they use most of them.
That was when I decided to stop using Oracle Java on all my personal systems and minimize my use of Oracle products on a professional basis. Learning they were a malware distributor was the drop that tipped the bucket.
Glad I'm not the only one. I was a Java dev for many years and also used it for private projects. When they started shipping unwanted malware together with Java was when I decided to remove Java from my toolset.
> When they started shipping unwanted malware together with
> Java was when I decided to remove Java from my toolset.
Would just like to say, OpenJDK is quite a solid drop-in replacement for Oracle's Java: https://openjdk.java.net/
I've used it in production, and I've built apps with it that I then _had to_ run on Oracle's Java (without problems). Older and newer versions alike have been very stable - I would very much recommend it. I had one issue 8 years ago between versions, but nothing since.
If this happens to you (or anyone else) again, I've found the go-to solution from my previous job as a repair technician. There are antivirus removal tools that basically do automatically what you did manually, purge the system of the antivirus files and registry keys. I've found them to work very well, even on later AV versions.
For McAfee it's called the McAfee Removal Tool, and for any Symantec product, there is CleanWipe as well as the Norton Removal Tool.
I am not sure of the legality of posting these tools publicly, but they are probably easy to find via search engine or torrent. May not help you if you're working with corporate machines, but I use them all the time for any machine I inherit or when helping out friends and family.
> Avast was collecting the browsing data of its customers who had installed the company's browser plugin, which is designed to warn users of suspicious websites.
No, Chrome and Firefox nowadays don’t allow extensions to be automatically installed and enabled. Any extensions running were specifically enabled by the user at some point.
I know this because I helped someone install McAfee antivirus a few days ago. Both Chrome and Firefox showed a small, non-modal popover on the next launch saying that an extension had tried to install itself. The popover contained two buttons, one for enabling the extension and one for keeping it disabled (Chrome) or uninstalling it (Firefox). If you never clicked inside the popover (as many alert-blind users might do), the extension would stay disabled.
Until Windows 10 (which has a great built-in virus detection system), I would always push people to Avast. I am shamefully disappointed that they've fallen this far.
Most antivirus software feels sketchy in general (snake oil).
The only antivirus I run at the moment is Windows Defender. For me the main reason is that Microsoft actually benefits from a virus free system because it improves Windows as a platform. Antivirus software vendors benefit from viruses spreading around because it improves sales of their products.
Also, Windows Defender is fairly non-intrusive: it doesn't advertise a sketchy VPN, a sketchy adblocker, a sketchy torrent client, or any of the other tools that antivirus vendors usually try to bundle.
Microsoft and antivirus vendors have very different goals. Microsoft wants to quietly protect your computer without making a big deal out of viruses, while antivirus vendors always want to let you know that they're hard at work protecting you and that the world is a dangerous place, so keep giving them money (or, in the case of free versions, keep it installed so they can keep vacuuming up that data).
I think one of the worst parts of this kind of stuff is that paying for the product doesn't mean they stop selling all your data. It just means they make even more money off of you.
That's the standard these days unfortunately. Why turn these options off when most users won't even notice and you get more money in the end?
I wish that my various "premium" subscriptions to services like Spotify or various other apps meant that my privacy will be respected, but I have absolutely no illusions about it. That's why I'll never ever consent to turning off uBlock: I want the freemium/ad-driven model to die; it's the original evil that makes all these practices worthwhile.
> Why turn these options off when most users won't even notice and you get more money in the end?
Because it's the right thing to do? Why is it so hard for people in technology to simply do what's right?
I don't buy the hackneyed HN arguments about "A company is required to maximize profits for the shareholder!" or high school thought exercises like, "Who gets to decide what is right?" It's simply not that hard.
I wonder if software engineers had to pass ethics classes like other types of engineers if the world would be a different place today.
> I wonder if software engineers had to pass ethics classes like other types of engineers if the world would be a different place today.
I had to take one as part of my ABET program. I highly doubt it made a difference. If anything, it demonstrated how much ethical behavior varies between people. I distinctly remember conversations about how awesome computer-aided sniper rifles would be (before these became a literal thing). That seems completely unethical to me, but I went to a college full of people getting back from tours in Afghanistan, so they had a very different perspective on the topic.
They do in some universities. Even when I was going. Most just treat it as a "Yup, okay fine" course, and many instructors are really bad about making it clear just the kind of heinous things to watch out for.
Example:
What is an acceptable use case:
A) Utilizing photo manipulation to place an individual somewhere they haven't been before.
B) The same except for them to enter in a contest.
C) The same, for someone who just wants the picture to flesh out a personal scrapbook with a photo of them and their SO having done something they wanted to do, but the SO is no longer alive.
They don't do a great job at preparing you to understand how industry gets you to actually do unethical things, or how to react when they do.
I.e., it's rarely, in my experience, "Hey, do blatantly unethical thing." Though that happens. I've had some slip past my filters in the form of:
>Hey welcome aboard! <Several months of innocuous work later>. Okay guys, time for <skeevy thing>. Business really needs this, so it's pretty high visibility.
Generally it escapes your notice because for once you're just happy to have clear requirements, and to not be getting jerked around, so you do it. Minimal complaint is encouraged, or will at least be pretended to be entertained, relying on the cushy salary and in-house benefits to carry you through to job completion.
But the ball keeps rolling regardless. You can decide not to work on certain stuff, but that generally gets sloughed off to another team that doesn't share your ethical proclivities.
The sad thing is, you almost need a structure like a labor union or professional organization to be capable of keeping major divergence from the ethical straight and narrow in check, because otherwise you can pay lip service to higher ethical and moral standards, but when it comes to paycheck time, most are going to go with that phat paycheck.
I'm working on considering my next link in the career chain, and ethical implications are strongly influencing my choice. I'm sick and tired of being someone else's money vacuum mechanic. I'd like to work on software that actually helps people.
And I'll stop anyone who suggests "but those systems are capital pumps, and help people because the free market." No they don't, not in any but the most indirect of ways. If people overall weren't having all their money siphoned off of them, I'd wager there would be a much higher degree of self-actualization or entrepreneurial development in the hands of less capital-starved people.
I know a positive feedback loop when I see one, and you can damn well bet that's why every tech company is scrambling to become a fintech in one way or another.
Well yeah, as long as you can't have your engineering degree stripped away for doing unethical things like the OP describes, and be forbidden from even coding for other people for free - as AFAIK is the case for doctors violating the Hippocratic Oath - the situation will not get better!
To be clear I wasn't condoning this behavior at all, it was just a statement of fact. I also wish people valued ethics over short term profits but here we are. Like my grandmother used to say, there is no such thing as ethical consumption under capitalism.
There’s an argument that growing awareness will lead to backlash, some of which will carry legal weight. I’m not sure how compelling long-term brand reputation is to most CEOs, however — Apple has been doing well but that’s a single data point with lots of confounds.
True, as long as keeping outdated software/practices is facilitated. The attack surface is relatively small if someone installs only apps from an App Store - no matter which one that is - and from well-known vendors.
That said, Microsoft has created a nice breeding ground for viruses by keeping all the software backwards compatible but accepting that users might need to get new hardware for every major update. At least the major updates have become quite cheap...
Using software from official app stores comes with even worse issues... (and didn't prevent like a quarter of top-100 Android apps from sending data to Facebook!)
I think that has been the conclusion for a while. I went poking around for old stories, and the best I could find was a thread from 2016: “AV is my single biggest impediment to shipping a secure browser,” https://news.ycombinator.com/item?id=13079569
I believe there have been academic studies on the non-effectiveness of antivirus software, but I can't find them. (If others can, I'd like to read them.) I find the argument that antivirus software increases attack surface compelling because it's third-party software that tries to get into everything. This is not the '90s anymore; all of the big players care about security. I think that the engineers at Microsoft, Google, Apple and Firefox are more qualified to secure their own software than outside vendors are.
One issue with all antivirus on Windows, including Defender, is the performance impact. It seems to be dreadfully single-threaded, or at least to make single-threaded workloads even worse. Monitoring CPU use during updates shows this easily: there are periods where a single core is busy with the Defender service and nothing else is happening.
Another issue is that while Microsoft's incentives may be better aligned, Defender still fundamentally is an antivirus and thus an increase in attack surface. There have been exploits against it[0], and it turned out that it did things with SYSTEM privileges and no sandboxing.
I don't recall seeing significant locking in the stack samples, just that the scanning happens synchronously during some IO (e.g. closing file handles) and >90% of the CPU cycles are spent in Defender. That effectively boils down to a task that should be IO-bound becoming bottlenecked by waiting on Defender.
I think there are several different underlying causes of Windows feeling more secure (not needing antivirus) these days:
1. Security on desktops has improved quite a bit. You can no longer simply email someone a .exe file that's the equivalent of `sudo rm -rf /` and have it work; multiple levels of safeguards (from the browser, to the email virus scan, to Windows Defender) catch it. OSes also simply won't run unsigned binaries anymore unless you force them to, which is (maybe?) a good thing.
2. Piracy has been on a downtick, and people seem to be more open to paying for software since App stores made it somewhat more acceptable. Less likelihood that people download sketchy binaries off the internet and run them on their PCs.
3. More hours are spent on mobile OSes, which are inherently more secure. Casual PC usage at home is declining.
Speaking of this, is there a recommended way to run a Windows binary within Ubuntu 18.04? I don't want to dual-boot Windows because I read that installing Windows after Ubuntu isn't recommended, but I don't have experience running a Windows environment within Ubuntu either.
Other than Wine, you can run any virtualisation system (qemu, VirtualBox, VMWare, Xen, ...) which will effectively give you a Windows (or other) OS running within your Linux system.
It won't have direct local OS access, but files and networking can be shared through most virtualisation systems, generally as SMB/CIFS shares.
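For the question above, a minimal sketch of both routes on Ubuntu 18.04; the package names are what Ubuntu ships, but the `.exe` path is a placeholder:

```shell
# Option 1: Wine translates Windows API calls directly, no VM needed.
# Works well for many apps, poorly for some (check appdb.winehq.org).
sudo apt update
sudo apt install wine64
wine /path/to/program.exe   # placeholder path to your Windows binary

# Option 2: a full Windows VM under VirtualBox (heavier, most compatible).
sudo apt install virtualbox
# Then create a VM in the VirtualBox GUI and install Windows from an ISO.
```

Wine is worth trying first since it's lighter; fall back to a VM if the app misbehaves.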
> For me the main reason is that Microsoft actually benefits from a virus free system because it improves Windows as a platform.
From the article:
> Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Sephora, Home Depot, Condé Nast, Intuit, and many others.
Microsoft doesn't need its AV products to do shady data collection because it outsources this to third-party shady AV products, therefore keeping its own reputation clean as your post demonstrates.
> Microsoft declined to comment on the specifics of why it purchased products from Jumpshot, but said that it doesn't have a current relationship with the company.
Which seems to indicate that Microsoft is in the list of past, not potential clients.
I remember when it came out: it would scan away and do its thing, and it was pretty much unnoticeable. I was like, "yeah, this is it," uninstalled whatever I was using at the time, and never looked back.
That was a big plus in a market where as you say everything feels sketchy.
This is the same logic motivating me to (infrequently) buy Windows computers from the Microsoft Store. They don't benefit from a brand-new Windows PC that runs like molasses, so there's no bloatware* or other performance-sapping software like on a Dell PC or, worse, your typical Best Buy computer. The prices are on par despite Microsoft not getting the kickbacks from which most manufacturers generate the majority of their profit.
* except for that which is or comes with Windows. Unfortunately this has grown substantially - Windows 7 seems like something of a high-water mark for performance/polish/lean...
Except that M$ was bundling ads and apps in Windows 10 (a paid OS) themselves. It's a sad state where other actors are so terrible that MS starts to look like a savior.
Agree with the non-intrusive part. However Defender does slow down programs that do a lot of filesystem IO. I tend to disable it when I update bigger programs or run other IO heavy tasks.
As a dev house I agree on the I/O, but our solution is to add our dev folders to the exclusion list. One of our build times went from 10 min to start a debug session to less than 1 min...
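For anyone wanting to do the same, a sketch using the built-in Defender PowerShell cmdlets from an elevated prompt; the path and process name below are placeholders, not recommendations:

```shell
# PowerShell (run as Administrator); Windows-only, shown here for illustration.
Add-MpPreference -ExclusionPath "C:\src\myproject"    # stop scanning the dev tree
Add-MpPreference -ExclusionProcess "msbuild.exe"      # or exclude a build process
Get-MpPreference | Select-Object -ExpandProperty ExclusionPath   # verify the list
```

Keep the exclusion list as narrow as possible; every excluded folder trades a bit of safety for the speedup.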
Doubling down on this, I do the same and have had maybe 1 instance where Defender didn't catch something in what seems like the last 10 years I've used it almost exclusively for 'virus' protection.
More often than not you'll find some "helpful" changes to your browsing experience, and/or "team.bitcoinz" running at 100% CPU, or your files are encrypted.
Most viruses aren't stealthy, although a few are, and you're right, those you'll have a hard time with if they're well hidden and undetected.
IMO - Something like Glasswire + Windows Defender is pretty robust. Nearly any virus will need network connectivity after all.
Yes? But most viruses don't lay dormant waiting for you to type in a bank account password and quietly send it off (some do for sure).
A lot of viruses sell "installs" - where other less moral software devs will pay per installation of their software. Or they spam ads across your entire computer etc. Or they encrypt your files and demand a ransom. None of these things are quiet or stealthy - anybody with decent IT experience will notice them pretty fast.
It's kinda like regular crime. Sure, every once in a while there's a thief who quietly lockpicks his way into somebody's home after spying on them and knowing they've left on vacation, leaving little evidence that anything was taken at all. Most of the time somebody just does a smash-and-grab, quick and dirty.
I keep running into the same phenomenon everywhere; everyone thinks they know whether they've been infected or not. I suppose everyone is a malware expert, so there's no need to share the data with anyone. I'm quite out of the loop here.
> I keep running into the same phenomenon everywhere; everyone thinks they know whether they've been infected or not.
I mean I get what you're saying. My point is that most viruses aren't trying to be hidden, so they're rather obvious. Of course there's a chance they try to be sneaky, and you're right you have no way of knowing if those are present and well hidden.
But anti-virus only does so much, if you're past Windows Defender you're past a bunch of other tools too. A lot of anti-malware software is very generic, and relies on some rather dumb techniques.
I don't recall what specifically I was doing at the time, but maybe 7 or 8 years ago my computer got a virus and it took me close to a week to fully regain control of my machine. That was the only time I got actual warnings from Defender about something it couldn't erase.
funny, i said the same thing before but it came off wrong and people didn't get it or something. i only run windows defender too, not because it's efficient -- i don't know, i have never seen it catch anything -- but because it's the least sketchy of the bunch!
What is there to prevent Microsoft from having Defender monitor traffic and report it back to MS once it has gained a big enough foothold?
MS already forcefully collects a lot of data with Win10, and adding a little more through Defender would give them a really huge (nearly complete) picture.
It's amusing to watch how slowly this is becoming common knowledge. The company I worked for at the time was buying complete Jumpshot URL logs four years ago, and everyone knew where the data was coming from.
I don't use any antivirus software nowadays. They basically seem to slow down the computer (though I'm not quite sure about this), and here I got one more reason not to use one. I guess Windows Defender is okay and sufficient for most cases, and you can always switch to Linux.
I thought this was well known. I wrote code to ingest Jumpshot data years ago. You have to wonder about the quality of the data though - maybe those using Avast aren't the best measure of all things web related.
I'm glad Vice is writing about this, but how many MB of browser-tracking crap do you think Jane Sixpack gets alongside that article if she doesn't have a (real) adblocker and NoScript?
Eset and Windows Defender are pretty much the only two antiviruses left that I still kinda trust. The rest have become either bloatware or straight-up spyware.
Well, for example, they don't spam you with ads for their other products. Other AVs like to tell you that your PC is slow and show you their nice tools that should fix it (I know AVG does that; not sure if Avast does too, but they're supposed to be the same company now). People tend to freak out if they see something like that popping out of the taskbar. Eset doesn't do any of this and keeps quiet unless there is a problem.
They also don't add snake-oil features like registry cleaners to their products, and they don't install extensions. The most obtrusive thing they do is probably MITMing TLS connections, which I don't agree with but at least kinda understand.
Computer salesmen have talking points about how WD doesn't catch 95% of the viruses out there. Of course, they leave out the part about how WD focuses on the 5% of viruses actually found in the wild, as continuously surveyed by Microsoft, not the 30k variants of known viruses that aren't relevant to a Windows 10 machine.
My go to is to say, the security lead for Google Chrome once tweeted, "the biggest impediment to implementing a secure browser is AV software."
I've seen that quote posted quite often on HN. I always smile at it, because Google Chrome (on Windows only) includes a cleanup tool based on a popular AV engine.
I think his original criticism was that the OS hooks that typical AV install are a source of vulnerabilities and bugs.
I don't think he took issue with the idea of a well-maintained database of signatures used to identify malicious code, just with most vendors' implementations of checking against such a database.
Windows Defender is a little too understated IMO. Most people don't even know that it's a decent antivirus.
I wouldn't mind if better marketing for Windows Defender made a bunch of shady antivirus companies go bankrupt, but Microsoft probably doesn't want to get into another round of antitrust lawsuits.
I wish there were a way to punish all the people participating in these data-sharing schemes, because it is endemic, a gold rush even, with every single individual involved lacking scruples as they sell their fellow man out.
It could be exaggeration, or “potential” in this case could include data from people and companies who expressed interest in the product captured during lead generation.
I work in Prague and I know some people who work for Avast. It's an open secret that the company got pretty rich on selling customer data from their antivirus (that was before GDPR, though). I am glad that somebody confirmed the rumor.
My question on avast is: How do we know it has completely uninstalled from our devices?
Given the reputation of this company, who's to say they don't leave something behind after we uninstall?
It would be great if someone with the means could do a before and after test to verify this.
Avast also portscans your local network. I kept getting alarms on my Symantec laptop and couldn't figure out why. It took running TCPNetView full-time on the two Avast machines and waiting for the alarm to fire to figure it out.
I just tried to uninstall the Avast desktop software, but "avast.hub" and avast "worker" processes remained running even after the "Uninstall Avast" process quit.
I don't use Avast myself (for obvious reasons), but how do they get around HTTPS encryption? Do they install a self-signed root certificate into the OS/browser trust store? Wouldn't that be easily detectable?
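Interception-style AV products typically do install their own root CA into the system trust store and re-sign sites' certificates on the fly, and it is detectable by inspecting the issuer of the certificate the browser actually receives. A sketch with openssl (the CN below is invented, not any vendor's real certificate name):

```shell
# On an intercepted machine, the issuer shown here is the AV vendor's own
# root CA instead of a public CA like DigiCert or Let's Encrypt.
openssl s_client -connect example.com:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer

# Offline illustration: mint a throwaway self-signed root (a stand-in for
# an AV vendor's injected CA) and read its issuer back the same way.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/av-key.pem -out /tmp/av-root.pem \
  -subj "/CN=Example AV Web Shield Root" 2>/dev/null
openssl x509 -in /tmp/av-root.pem -noout -issuer
```

So yes, it is easy to detect if you look, but most users never inspect certificate chains, which is why the practice persists.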
NB this article also notes the selling of data - from the article it cites:
> Jumpshot is the data arm of Avast[...] This suite of products, in order to function, must collect and analyze every URL visited by every browser of every machine on which its installed. [...] Because Avast has to see and process all these URLs anyway (in order to serve their function of providing web security), they anonymize, aggregate, and remove any personally-identifiable information from the browser URL visits and then provide them to Jumpshot, who then makes estimates about broad web usage behavior. In my opinion, this is both an ethical way to gather crucial data about what’s happening on the web[...]
The data is heavily anonymized and aggregated before being sold. Avast is a Czech company under GDPR, with regulators breathing down its neck.
“Selling data to Google” is true as much as when my github project is cloned by Google guy and I claim it’s used by Google :)
It's a free product and it's written in the ToS; why is Vice being so sensational?
> The data obtained by Motherboard and PCMag includes Google searches, lookups of locations and GPS coordinates on Google Maps, people visiting companies' LinkedIn pages, particular YouTube videos, and people visiting porn websites.
> "It's very granular, and it's great data for these companies, because it's down to the device level with a timestamp," the source said, referring to the specificity and sensitivity of the data being sold
> Jumpshot gave Omnicom access to all click feeds[...] The product includes [...] "the entire URL string"
> A set of Jumpshot data obtained by Motherboard and PCMag shows how each visited URL comes with a precise timestamp down to the millisecond, which could allow a company with its own bank of customer data to see one user visiting their own site, and then follow them across other sites in the Jumpshot data.
How can you _possibly_ anonymise that at scale? Maybe someone searches an email address, or searches a unique phrase, such as a nickname, that identifies them. If someone makes multiple searches for directions from/to a particular place, it's probably their home or work. If you are logged in to a company's site, they likely know exactly who you are, and can correlate their logs with the Jumpshot logs (URL+millisecond timestamp could very well be unique) to follow you across other sites. The article notes that some products include "inferred gender" and "inferred age" - what _else_ can you infer from the provided data that may be enough to ID you?
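As a toy illustration of the correlation attack described above (all identifiers, timestamps, and URLs below are made up), standard Unix `join` on the millisecond timestamp is enough to tie an "anonymous" device ID back to an account:

```shell
# Hypothetical Jumpshot-style click feed: device_id, ms-timestamp, URL
cat > /tmp/jumpshot.csv <<'EOF'
abc123,1579870800123,https://shop.example/checkout
abc123,1579870912456,https://other.example/page
EOF

# A retailer's own server log for the same moment: ms-timestamp, account email
cat > /tmp/retailer.csv <<'EOF'
1579870800123,alice@example.com
EOF

# Join on the millisecond timestamp (field 2 of the feed, field 1 of the log):
# the "anonymous" device id abc123 is now tied to alice@example.com, and every
# other row carrying abc123 can be followed across the rest of the feed.
join -t, -1 2 -2 1 /tmp/jumpshot.csv /tmp/retailer.csv
```

A millisecond timestamp plus a full URL is close to a unique key, which is exactly why "device-level data with a timestamp" resists meaningful anonymization.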
Even if they _aren't_ directly selling it, it's a uncomfortable amount of information for a company to have (at best).
I can see how the data could be anonymised, but when Google gets it, the access time coupled with the referer (sic) info could completely de-anonymise some of the data for Google, e.g. when you arrived there from a Google search.
We regularly call out random browser extensions doing the same thing, it's not sensationalized to call out a top 10 manufacturer in the "security" space on this behaviour. Anonymization of search histories has, time and time again, been shown to be largely ineffective.
https://www.avast.com/eula does not mention Jumpshot, and grepping around does not indicate any sensible anonymization and aggregation efforts, just the default legalese. Meanwhile, the demo video on https://www.jumpshot.com/solutions/industry/retail leads me to believe that while they might not tie histories to a single user, they show statistics like "XX% of users shopping at A went on to buy at B," which indicates at least some level of unaggregated data / tracking.
If the Vice article is to be believed, that's certainly enough to at least raise an eyebrow. The opt-in the article talks about is likely an underspecified mess intended to deceive the user; this functionality simply has no place in an AV package. Let's not act like it's hard to get users to press a shiny green button these days. Even if that were found GDPR-compliant in court, it still wouldn't be morally right.
You’re correct about the EULA, any idea when it was last changed?
I'll look into this and correct myself if needed, but since I know a few people working there, I've heard stories about the process and how regulated it is.
The top of that page says "Version 1.11 (Revised April 1, 2019)", the history seems to be this (linked at the bottom): https://www.avast.com/eula-legacy
I'll give them the benefit of the doubt but if the Vice report is accurate, the business practice needs changing, not their terms of service.
Blocklisting the extensions would disable existing installations and stop ancillary data collection, which is prohibited by Firefox Add-on Policies.
https://extensionworkshop.com/documentation/publish/add-on-p...
> Mozilla expects that the add-on limits data collection whenever possible, in keeping with Mozilla's Lean Data Practices and Mozilla's Data Privacy Principles, and uses the data only for the purpose for which it was originally collected.
There are a number of browser extensions maintained by antivirus companies that use security as a disguise to collect and monetize user data. It is time for Google and Mozilla to act on this issue and protect users from these predatory practices.
Benign extensions that are genuinely useful and don't have a company behind them do not get this treatment [1]; they are blocked without preliminary contact with the developers [2].
[1] https://www.jeremiahlee.com/posts/page-translator-is-dead/
[2] https://www.ghacks.net/2019/11/05/mozilla-bans-all-extension...