I think when I was younger I’d assumed the sheer number of technical users out there would mean it would be hard for companies to get away with things like this, but these days I realize this sort of analysis and public exposition is actually rare, and the number of skilled developers investing time in it is small.
Perhaps collectively as a community we can create public bug and privacy bounties that enable and incentivise more work like this
It is becoming harder and harder to troubleshoot which apps are talking to which servers, and what they're sending, because all of them, including the OS, are talking to something all the time.
Back in the day, if a user told me they had a problem accessing some Internet content, I would instruct them to close all their applications while I started dumping their traffic on the firewall and proxy. There wouldn't be any traffic from their IP address. Then, when they started the application, I would see whether its traffic went through the proxy or directly through the firewall, and make adjustments, like putting the destination domain on an exclusion list in the proxy or the destination IP and port on an exclusion list in the firewall.
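To make that workflow concrete, here's a minimal sketch of the "is this client sending anything at all?" check. It assumes Python with scapy on the firewall/proxy box and uses a placeholder client address; the original approach just used whatever packet dump facility the firewall or proxy already provided.

```python
# Minimal sketch: watch for any traffic from one client machine.
# Run on the firewall/proxy host with root privileges; 192.0.2.15 is a
# placeholder for the user's IP address.
from scapy.all import sniff

CLIENT_IP = "192.0.2.15"

def show(pkt):
    # Print a one-line summary of each packet so you can see whether it
    # went to the proxy or straight out through the firewall.
    print(pkt.summary())

# The BPF filter limits the capture to traffic to/from the client.
# With all applications closed this should stay silent; once the user
# starts the problem application, its destinations show up here.
sniff(filter=f"host {CLIENT_IP}", prn=show, store=False)
```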
Nowadays, Windows 10 with no applications started sends hundreds of requests per minute to dozens of IPs. Some of it respects the global proxy settings, some of it doesn't. I guess Android is even worse.
Protip: you can configure Windows Firewall to block all outbound traffic on a per-process basis. It’s not advertised as a feature though. There’s also ‘netsh http’. (I’m not sure how to block HTTP requests originating in the kernel though.)
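For a concrete example of the per-process outbound block, here's a small sketch that shells out to netsh advfirewall. The executable path is a placeholder, not a confirmed Wacom install location; point it at whichever process you want to cut off, and run it from an elevated prompt.

```python
import subprocess

# Placeholder path -- substitute the executable you want to cut off.
PROGRAM = r"C:\Program Files\Tablet\Wacom\Wacom_Tablet.exe"

# Add an outbound block rule scoped to that one program.
# Windows Firewall's default outbound policy is allow, and block rules
# override it, so only this process loses network access.
subprocess.run(
    [
        "netsh", "advfirewall", "firewall", "add", "rule",
        "name=BlockTabletDriverOutbound",
        "dir=out",
        "action=block",
        f"program={PROGRAM}",
        "enable=yes",
    ],
    check=True,
)
```

The same rule can of course be created by hand in the Windows Firewall with Advanced Security UI, or typed straight into an elevated prompt.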
“Responsible disclosure” is a concept mostly proposed by companies looking to accommodate their own willful irresponsibility. This is even more true in the case of intentional privacy violations by software vendors. The responsible thing is to immediately put these companies on blast the moment this kind of spying is uncovered.
I do see your point, but I still think a standardised way to at least make sure the vendor is aware of the issue would be needed if we're talking about a formal program. Not necessarily holding off publishing to do so though.
But I don't mean to back the side of vendors unduly here...
> Software vendors do this on purpose; they don't need notification.
I must admit I didn't put much thought into my comment on ethics, but I guess what I had in mind was a scenario where the behaviour is not actually intentional, and the vendor should at least be properly informed that there may be leakage (to them) of private data, as opposed to just jumping straight to blogging about it.
So rather than "responsible disclosure" perhaps just a code of conduct to ensure that such a program doesn't just attract people looking for glory and blog posts, but actually has a standardised way to report these issues to the vendor and give them an opportunity to fix and/or respond.
I don't mean to dilute the core of the idea though, it's a good one, and it definitely needs to be geared towards being in favour of the consumer rather than letting the vendor off the hook.
The only value in responsible disclosure is protection of users. If you figure out there's a way to harm a boatload of people, it's nice to do what you can to ensure it can't happen before telling everybody how. It makes sense. But there's a very good reason it comes with a not-too-distant deadline before you give up on it.
But this? We're talking about finding ways that people are being actively harmed. How does "responsible disclosure" come into play here?
The only thing it would seem to do is to protect companies and their bad decisions. That's not the point. At best they've screwed up, and at worst they're actively malicious. How do users not deserve to know that they are being harmed as soon as possible? How do potential users not deserve to know that they will be harmed by using the product, and that the company is either doing a poor job of protecting them or actively trying to exploit them?
There's no reason to try to attach any ideas of "responsible disclosure" here unless you're explicitly trying to protect the vendor.
I’m not suggesting a delay. Responsible disclosure was the wrong term to use.
The distinction I was trying to draw is this: rather than just blogging about it or unleashing a Twitter storm and jumping straight to an adversarial public crucifixion of the vendor (and by all means do that as well), there should be a standardised process of also contacting the vendor directly and engaging with them, giving them an opportunity to fully understand what is being reported, reproduce the issue (in the case of it being unexpected), and fix the problem. Some vendors won’t engage or will stick their head in the sand, but others may actually choose to address the problem. This is also in the users’ best interests.
Some issues will hit Hacker News or gain visibility in other ways, but other issues that are published may not naturally reach the eyes of someone at a vendor unless the person publishing actually takes steps to contact them. That’s the point I was trying to get across.
Not suggesting any of that is a prerequisite to publishing anything publicly in parallel.
Not only is this kind of technical user rare, but public response to a post like this is also demonstrably rare. Of the people who read this, only a fraction will consider changing their configuration to stop Wacom from sending this data.
Incentives are well aligned for a corporation to just try it.
I think all of us, as end users, could use a bit more support in getting access to good information on configuring the things we use, so we can make better and more effective choices overall.
For example, one thing that jumped to mind is that we seem to lack any objective measures of the speed of various OS versions, so everybody is always upgrading and claiming it's faster; but is it objectively faster every time? What kind of regressions might happen?
There's nobody that is spending the time figuring out this kind of information, so everyone is kind of uninformed and there's more pressure to always upgrade.
That’s an interesting idea, specifically the privacy bounty part. I imagine one of the difficulties here would be who handles adjudicating whether something is a privacy issue or not, and therefore whether you get paid or not.
As much as I struggle to personally get behind crypto, this is exactly the sort of motivating use case that DAOs (decentralized autonomous organizations) are intended to solve. Users would pool their cryptocurrency, gaining voting rights in the process; researchers could submit to the pool to collect a bounty, and members of the DAO would get the opportunity to vote on bounty release. All of this would be built to happen autonomously.
It's a funky idea; at the moment I'm suggesting it more as a curiosity than because I think it's the right approach for something like this.
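For what it's worth, here's a toy, non-blockchain sketch of the flow being described: stake to get voting weight, researchers submit claims, a stake-weighted majority releases the bounty. All names and the majority threshold are made up for illustration; a real DAO would implement this as a smart contract rather than plain Python.

```python
from dataclasses import dataclass, field

@dataclass
class BountyPool:
    deposits: dict = field(default_factory=dict)  # member -> amount staked
    claims: dict = field(default_factory=dict)    # claim id -> claim record

    def deposit(self, member: str, amount: int) -> None:
        # Staking funds is what grants voting weight.
        self.deposits[member] = self.deposits.get(member, 0) + amount

    def submit_claim(self, claim_id: str, researcher: str) -> None:
        # A researcher submits a writeup (here just an id) against the pool.
        self.claims[claim_id] = {"researcher": researcher, "votes": {}}

    def vote(self, claim_id: str, member: str, approve: bool) -> None:
        # One vote per member; weight comes from their stake at tally time.
        self.claims[claim_id]["votes"][member] = approve

    def tally(self, claim_id: str) -> str | None:
        votes = self.claims[claim_id]["votes"]
        yes = sum(self.deposits.get(m, 0) for m, ok in votes.items() if ok)
        total = sum(self.deposits.values())
        # Release the bounty only if an absolute stake-weighted majority
        # approved; return the researcher to pay, else None.
        return self.claims[claim_id]["researcher"] if yes * 2 > total else None
```

Note that with two members staking 60 and 40, the larger staker can approve a payout alone, which is exactly the concentration worry with this kind of scheme.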
My immediate criticism is you could end up with an organisation that _is_ ultimately centralized but now the major players would be hidden. Currently crypto seems to generally tend towards oligarchic growth, so I imagine you'd have a few players that control most of the shares and many people controlling negligible portions. Perhaps these issues (not to mention the energy costs) can be solved, but right now I'm curious but skeptical about these ideas
There are tons of highly skilled and wealthy folks in the HN community, who are upset about issues such as privacy in tech, poor security in public sector organizations (utilities, education), etc. With some good, informal, leadership, we could put the community's resources to good use and help solve these problems.
Folks who don't see meaning in their regular jobs could find contributing their skill or money to this and similar projects fulfilling and rewarding.
Wacom collecting kinda intrusive information (likely justified internally as needing to know what applications to test their product against) is not some sort of oligarchic/royalty-based plot.
I mean, everything you wrote sounds great, it just doesn't mean anything or have any relation to reality.
Wacom (ワコム, or wakomu) is also a Japanese company, not American.
It's weird to presuppose that the US even matters when Wacom is a multinational company that operates domestically in Japan and also in Europe. This is probably a GDPR issue for Europeans.