
Bypassing the discussion of whether one actually needs rootkit-powered endpoint surveillance software such as CS, perhaps an open-source solution would be a killer way to move this whole sector toward more ethical standards. The main tool would be open source, so it would be transparent what exactly it does and that it is free of backdoors or really bad bugs. It could be audited by the public. On the other hand, it could still be a business model to supply malware signatures as a security team feeding this system.



I'd say no. Kolide is one such attempt, and their practices, and how it's used in companies, are as insidious as those of a proprietary product. As a user, it gives me no assurance that an open-source surveillance rootkit is better tested and developed, or that it has my best interests in mind.

The problem is the entire category of surveillance software. It should not exist. Companies that use it don't understand security, and don't trust their employees. They're not good places to work at.


> Companies that use it don't understand security

What should these companies understand about security exactly?

And aren’t they kinda right to not trust their employees if they employ 50,000 people with different skills and intentions?


"And aren’t they kinda right to not trust their employees if they employ 50,000 people with different skills and intentions?"

Yes, in a 50k employee company, the CEO won't know every single employee and be able to vouch for their skills and intentions.

But in a non-dysfunctional company, you have a hierarchy of trust, where each management level knows and trusts the people above and below them. You also have siloed data, where people have access to the specific things they need to do their jobs. And you have disaster mitigation mechanisms for when things go wrong.

Having worked in companies of different sizes and with different trust cultures, I do think that problems start to arise when you add things like individual monitoring and control. You're basically telling people that you don't trust them, which makes them see their employer in an adversarial role, which actually makes them start to behave in a less trustworthy way, which further diminishes trust across the company, harms collaboration, and eventually harms productivity and security.


Setting aside the possibility of deploying an EDR like Crowdstrike just being a box ticking exercise for compliance or insurance purposes, can something like an EDR be used not because of a lack of trust but a desire to protect the environment?

A user doesn't have to do anything wrong for the computer to become compromised. And even if they do, being able to limit the blast radius and lock down the computer, or at least to have collected the data needed to identify after the fact what went wrong, seems important.

How would you secure a network of computers without an agent that can do anti-virus, detect anomalies, and remediate them? That is to say, how would you manage to secure it without doing something that has monitoring and lockdown capabilities? In your words, signaling that you do not trust the users?


This. From all the comments I've seen in the multiple posts and threads about the incident, this simple fact seems to be the least discussed. How else to protect a complex IT environment with thousands of assets in the form of servers and workstations, without some kind of endpoint protection? Sure, these solutions like CrowdStrike et al are box-checking and risk-transferring exercises in one sense, but they actually work as intended when it comes to protecting endpoints from novel malware and TTPs. As long as they don't botch their own software, that is :D


> How else to protect a complex IT environment with thousands of assets in the form of servers and workstations, without some kind of endpoint protection?

There is no straightforward answer to this question. Assuming that your infrastructure is "secure" because you deployed an EDR solution is wrong. It only gives you a false sense of security.

The reality is that security takes a lot of effort from everyone involved, and it starts by educating people. There is no quick bandaid solution to these problems, and, as with anything in IT, any approach has tradeoffs. In this case, and particularly after the recent events, it's evident that an EDR system is as much of a liability as it is an asset—perhaps even more so. You give away control of your systems to a 3rd party, and expect them to work flawlessly 100% of the time. The alarming thing is how much this particular vendor was trusted with critical parts of our civil infrastructure. It not only exposes us to operational failures due to negligence, but to attacks from actors who will seek to exploit that 3rd party.


I totally agree. In my current work environment, we do deploy EDR but it is primarily for assets critical for delivering our main service to customers. Ironically, this incident caused them all to be unavailable and there is for sure a lesson to be learned here!

It is not considered a silver bullet by the security team, rather a last-resort detection mechanism for suspicious behavior (for example if the network segmentation or access control fails, or someone managed to get a foothold by other means). It also helps them identify which employees need more training, as they keep downloading random executables from the web.


> starts by educating people

Any security certification has a section on regularly educating employees on the topic.

To your point, I agree that companies are attempting to bypass the hard work by deploying a tool and thinking they are done.


Absolutely, training is key. Alas, managers don't seem to want their employees spending time on anything other than delivering profit and so the training courses are zipped through just to mark them as completed.

Personally, I don't know how to solve that problem.


It is a good question. Is there a possibility of fundamentally fixing software/hardware to eliminate the vectors that malware exploits to gain a foothold at all? e.g. not storing the return address on the stack, or not letting it be manipulated by the callee? Memory bounds enforcement, either statically at compile time or with the help of hardware, to prevent writing past memory that isn't yours? (Not asking about the feasibility of coexisting with or migrating from the current world, just about the possibility of fundamentally solving this at all...)


Economic drivers spring to mind, possibly connected with civil or criminal liability in some cases.

But this will be the work of at least two human generations; our tools and work practices are woefully inadequate, so even if the pointy-haired bosses come to fear imprisonment for gratuitous failure and the grasping, greedy investors come to fear the destruction of their “hard earned” capital, it’s not going to be done in the snap of our fingers, not least because the people occupying the technology industry - and this is an overgeneralisation, but I’m pretty angry so I’m going to let it stand - Just Don’t Care Enough.

If we cared, it would be nigh on impossible for my granny to get tricked into popping her Windows desktop by opening an attachment in her email client.

It wouldn’t be possible to sell (or buy!) cloud services for which we don’t get security data in real time and a signal about what our vendor advises us to do if worst comes to worst.

And on and on.


"But in a non-dysfunctional company, you have a hierarchy of trust, where each management level knows and trusts the people above and below them. "

Even in a company of two, sometimes a husband or a wife betrays the trust. Now multiply that probability by 50,000.


Yet we don't apply total surveillance to people. The reason isn't just ethics and the US constitution, but also that it's just not possible without destroying society. The same perhaps applies to computer systems.


Which is a completely different argument


I don't think it is. I think that the kind of security the likes of CrowdStrike promise is fundamentally impossible to have, and pursuing it is a fool's errand.


I disagree. You seem to start from a premise that all people are honest, except those who aren't, but that you never work with or meet dishonest people unless the employer sets itself up in an adversarial role?

As the other reply to your comment said: the world is not 'fair' or 'honest', that's just a lie told to children. Apart from genuinely evil people, there are unlimited variables that dictate people's behavior. Culture, personality, nutrition, financial situation, mood, stress, bully coworkers, intrinsic values, etc etc. To think people are all fair and honest "unless" is a really harmful worldview to have and, in my opinion, the reason for a lot of bad things being allowed to happen and continue (throughout all of society, not just work).

Zero-trust in IT is just the digitized version of "trust is earned". In computers you can be more crude and direct about it, but it should be the same for social connections and interactions.


> You seem to start from a premise that all people are honest

You have to start with that premise, otherwise organizations and society fail. Every hour of every day, even people in high-security organizations have opportunities to betray the trust bestowed on them. Software and processes are about keeping honest people honest. The dishonest ones you cannot do much about, beyond hoping you limit the damage they can cause.

If everyone is treated as dishonest then there will eventually be an organizational breakdown. Creativity, high productivity, etc... do not work in a low/zero trust environment.


That’s a lie we tell children so they think the world is fair.

A Marxist reading would suggest alienation, but a more modern one would realize that it is a bit more than that: to enable modern business practices (both good and bad!) we designed systems of management to remove or reduce trust and accountability in the org, yet maintain results similar to those of a world more in line with the one you believe is possible.

A security professional, though, would tell you that even in such a world, you can not expect even the most diligent folks to be able to identify all risks (e.g. phishing became so good, even professionals can’t always discern the real from the fake), or practice perfect opsec (which probably requires one to be a psychopath).


Security is a process not a product. Anyone selling you security as a product is scamming you.

These endpoint security companies latch onto the people making decisions; those people want security, and these software vendors promise to make the process as easy as possible. No need to change the way a company operates, just buy our stuff and you're good. That's the scam.


Exactly, well said.

Truthfully, it must be practically infeasible to transform the security practices of a large company overnight. Most of the time they buy into these products because they're chasing a security certification (ISO 27001, SOC2, etc.), and by just deploying this to their entire fleet they get to sidestep the actually difficult part.

The irony is that at the end of this they're not any more "secure" than they were before, but since they have the certification, their customers trust that they are. It's security theater 101.


Whether you morally agree with surveillance software's purpose is not the same as whether a particular piece of surveillance software works well or not.

I would imagine an open source version of crowdstrike would not have had such a bad outcome.


I disagree with the concept of surveillance altogether. Computer users should be educated about security, given control of their devices, and trusted that they will do the right thing. If a company can't do that, that's a sign that they don't have good security practices to begin with, and don't do a good job at hiring and training.

The only reason this kind of software is used is so that companies can tick a certification checkbox that gives the appearance of running a tight ship.

I realize it's the easy way out, and possibly the only practical solution for a large corporation, but then this type of issue is unavoidable. Whether the product is free or proprietary makes no difference.


Most people do not understand, or care to understand, what "security" means.

You highlight training as a control. Training is expensive - to reduce cost and enhance effectiveness, how do you focus training on those that need it without any method to identify those that do things in insecure ways?

Additionally, I would say a major function of these systems is not surveillance at all - it is the preventive controls that stop your systems from being compromised.

Overall, your comment strikes me as naive and not based on operational experience.


This type of software is notorious for severely degrading employees' ability to do their jobs, occasionally preventing it entirely. It's a main reason why "shadow IT" is a thing - bullshit IT restrictions and endpoint security malware can't reach third-party SaaS' servers.

This is to say, there are costs and threats caused by deploying these systems too, and they should be considered when making security decisions.


Explain exactly how any AV prevents a user from checking e-mails and opening Word?

The years I spent doing IT at that level, every time, every single time, I got a request for admin privileges to be granted to a user or for software to be installed on an endpoint, we already had a solution in place for exactly what the user wanted, installed and tested on their workstation, taught in onboarding, and they simply "forgot".

Just like the users whose passwords I had to reset every Monday because they forgot them. It's an irritation, but that doesn't mean they didn't do their job well. They met all performance expectations; they just needed to be hand-held with technology.

The real world isn't black and white and this isn't Reddit.


> Explain exactly how any AV prevents a user from checking e-mails and opening Word?

For example by doing continuous scans that consume so much CPU the machine stays thermally throttled at all times.

(Yes, really. I've seen a colleague raising a ticket about AV making it near-impossible to do dev work, to which IT replied the company will reimburse them for a cooling pad for the laptop, and closed the issue as solved.)

The problem is so bad that Microsoft, despite Defender being by far the lightest and least bullshit AV solution, created "dev drive", a designated drive that's excluded by design from Defender scanning, as a blatant workaround for corporate policies preventing users and admins from setting custom Defender exclusions. Before that, your only alternative was to run WSL2 or a regular VM, which are opaque to AVs, but that tends to be restricted by corporate too, because "sekhurity".

And yes, people in these situations invent workarounds, such as VMs, unauthorized third-party SaaS, or using personal devices, because at the end of the day, the work still needs to be done. So all those security measures do is reduce actual security.


Most AV and EDR solutions support exceptions, either on specific assets or fleets of assets. You can make exceptions for some employees (for example developers or IT) while keeping (sane) defaults for everybody else. Exceptions are usually applied on file paths, executable image names, file hashes, signature certificates or the complete asset. It sounds like people are applying these solutions wrong, which of course has a negative outcome for everybody and builds distrust.
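
To make the mechanism concrete, here is a rough sketch of what such exclusion rules amount to; the field names and values are invented for illustration and are not any vendor's actual rule format:

    # Sketch of how endpoint exclusion rules are commonly expressed: skip
    # scanning when a file matches an allowlisted path pattern, process
    # image name, or known-good hash. All values here are hypothetical.
    import fnmatch

    EXCLUSIONS = {
        "paths": ["C:/dev/*", "/home/*/src/*"],      # hypothetical build trees
        "image_names": {"cl.exe", "cargo", "node"},  # hypothetical dev tools
        "sha256": {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
    }

    def is_excluded(path, image_name, sha256_digest):
        """Return True if scanning should be skipped for this file/process."""
        if any(fnmatch.fnmatch(path, pattern) for pattern in EXCLUSIONS["paths"]):
            return True
        if image_name in EXCLUSIONS["image_names"]:
            return True
        return sha256_digest in EXCLUSIONS["sha256"]

    # Example: a compiler output under an excluded path is skipped.
    print(is_excluded("C:/dev/app/build/main.obj", "cl.exe", "deadbeef"))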


In theory, those solutions could be used right. In practice, they never are.

People making decisions about purchasing, deploying and configuring those systems are separated by many layers from rank-and-file employees. The impact on business downstream is diffuse and doesn't affect them directly, while the direct incentives they have are not aligned with the overall business operations. The top doesn't feel the damage this is doing, and the bottom has no way of communicating it in a way that will be heard.

It does build distrust, but not necessarily in the sense that "company thinks I'm a potential criminal" - rather, just the mundane expectation that work will continue to get more difficult to perform with every new announcement from the security team.


I'm going to just echo my sibling comment here. This seems like a management issue. If IT wouldn't help it was up to your management to intervene and say that it needs to be addressed.

Also I'm unsure I've ever seen an AV even come close to stressing a machine I would spec for dev work. Likely misconfigured for the use case, but I've been there and definitely understand the other side of the coin; sometimes a beer or pizza with someone high up at IT gets you much further than barking. We all live in a society with other people.

I would also hazard a guess that the Dev Drive is more a matter of just making it easier for IT to do the right thing, requested by IT departments more than likely. I personally have my entire dev tree excluded from AV purely because of false positives on binaries and just unnecessary scans, because the files change content so regularly. That can be annoying to do with group policy if where that data is stored isn't mandated, and then you have engineers who would be babies about "I really want my data in %USERPROFILE%/documents instead of %USERPROFILE%/source". Now IT can much more easily just say that the Microsoft-blessed solution is X and you need to use it.

Regarding WSL, if it's needed for your job then go for it and have your manager put in a request. However, if you are only doing it to circumvent IT restrictions, well, don't expect anyone to play nice.

On the personal devices note: if there's company data on your device, it and all its contents can be subpoenaed in a court case. You really want that? Keep work and personal separate, it really is better for all parties involved.


> sometimes a beer or pizza with someone high up at IT gets you much further than barking. We all live in a society with other people.

That's true, but it gets tricky in a large multinational, when the rules are set by some team in a different country, whose responsibilities are to the corporate HQ, and the IT department of the merged-in company I worked for has zero authority on the issue. I tried, I've also sent tickets up the chain, they all got politely ignored.

From the POV of all the regular employees, it looks like this: there are some annoying restrictions here and there, and you learn how to navigate the CPU-eating AV scans; you adapt and learn how to do your work. Then one day, some sneaky group policy update kills one of your workarounds and you notice this by observing that compilation takes 5x as long as it used to, and git operations take 20x as long as they should. You find a way to deal (goodbye small commits). Then one day, you get an e-mail from corporate IT saying that they just partnered with ESET or CrowdStrike or ZScaler or whatnot, and they'll be deploying the new software to everyone. Then they do, and everything goes to shit, and you need to start tripling every estimate from now on, as the new software noticeably slows down everything across the board. You think to yourself, at least corporate gave you top-of-the-line laptops with powerful CPUs and an absurd amount of RAM; too bad for sales and managers who are likely using much weaker machines. And then you realize that sales and management were doing half their work in random third-party SaaS, and there is an ongoing process to reluctantly in-house some of the shadow IT that's been going on.

Fortunately for me, in my various corporate jobs, I've always managed to cope by using Ubuntu VMs or (later) WSL2, and this always managed to stay "in the clear" with company security rules. Even if it meant I had to figure out some nasty hacks to operate Windows compilers from inside Linux, or to stop the newest and bestest corporate VPN from blackholing all network traffic to/from WSL2 (it was worth it, at least my work wasn't disrupted by the Docker Desktop licensing fiasco...). I never had to use personal devices, and I learned long ago to keep a firm separation between private and work hardware, but for many people, this is a fuzzy boundary.

There was one job where corporate installed a blatant keylogger on everyone's machines, and for a while, with our office IT's and our manager's blessing, our team managed to stave it off - and keep local admin rights - by conveniently forgetting to sign the relevant consent forms. The bad taste this left was a major factor in me quitting that job a few months later, though.

Anyway, the point to these stories is, I've experienced first-hand how security in medium and large enterprises impacts day-to-day work. I fought both alongside and against IT departments over these. I know that most of the time, from the corporate HQ's perspective, it's difficult to quantify the impact of various security practices on everyone's day-to-day work (and I briefly worked in cybersecurity, so I also know it isn't even obvious to people that this should be considered!). I also know that large organizations can eat a lot of inefficiency without noticing it, because at that size, they have huge inertia. The corporate may not notice the work slowing down 2x across the board, when it's still completing million-dollar contracts on time (negotiated accordingly). It just really sucks to work in this environment; the inefficiency has a way of touching your soul.

EDIT:

The worst is the learned helplessness. One day, you get fed up with Git taking 2+ minutes to make a goddamn commit, and you whine a bit on the team channel. You hope someone will point out you're just stupid and holding it wrong, but no - you get couple people saying "yeah, that's how it is", and one saying "yeah, I tried to get IT to fix that; they told me a cooling stand for the laptop should speed things a bit". You eventually learn that security people just don't care, or can't care, and you can only try to survive it.

(And then you go through several mandatory cybersecurity trainings, and then you discover a dumb SQL injection bug in a new flagship project after 2 hours of playing with it, and start questioning your own sanity.)


Look I'm not disagreeing with you that it sucks. I just know I've been on the other side of the fence and people like to throw shade at IT when they themselves are just trying to do their jobs.

And let's see if we can agree that corporate multinationals are probably a bad thing, or at least micromanaging from the stratosphere when you cannot see how your decisions affect things. That however is likely a management antipattern, and if it is really negatively affecting your mental health but you are still meeting performance expectations, I'm not against you making a decision to walk.

Sometimes the only way to solve those problems is to cause turnover and make management look twice, and a lot of the time one key person leaving can cause an exodus that will force change.

Not being negative here, sometimes you are just in a toxic relationship and need to get out.


> Computer users should be educated about security, given control of their devices, and trusted that they will do the right thing.

Imagine you are a bank. Imagine you have no way to ensure no employee is a crook.

It does happen.


> Imagine you have no way to ensure no employee is a crook.

Wait, are you saying we have gotten rid of all the crooks in banks, or among those that handle money?


I'm curious about this bad 'news' about Kolide. Could you tell me more about your experience with it?


I don't have first-hand experience with Kolide, as I refused to install it when it was pushed upon everyone in a company I worked for.

Complaints voiced by others included false positives (flagging something as a threat when it wasn't, or alerting that a system wasn't in place when it was), being too intrusive and affecting their workflow, and privacy concerns (reading and reporting all files, web browsing history, etc.). There were others I'm not remembering, as I mostly tried to stay away from the discussion, but it was generally disliked by the (mostly technical) workforce. Everyone just accepted it as the company deemed it necessary to secure some enterprise customers.

Also, Kolide's whole spiel about "honest security"[1] reeks of PR mumbo jumbo whose only purpose is to distance themselves from other "bad" solutions in the same space, when in reality they're not much different. It's built by Facebook alumni, after all, and relies on FB software (osquery).

[1]: https://honest.security/


I think some of the information here is misleading and a bit unfair.

> being too intrusive and affecting their workflow

Kolide is a reporting tool; it doesn't, for example, remove files or put them in quarantine. You also cannot execute commands remotely like in Crowdstrike. As you mentioned, it's based on osquery, which makes it possible to query machine information using SQL. Usually, Kolide is configured to send a Slack message or email if there is a finding, which I guess can be seen as intrusive, but IMO not very.
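
For anyone who hasn't used osquery: the queries are plain SQL against virtual tables. A minimal sketch of running one locally, assuming the osqueryi binary is installed; the table and columns come from osquery's public schema and are nothing Kolide-specific:

    # Minimal sketch: run a read-only osquery query, roughly the kind of
    # machine information a reporting agent collects. Assumes `osqueryi`
    # is installed and on PATH.
    import json
    import subprocess

    QUERY = "SELECT name, version FROM os_version;"

    result = subprocess.run(
        ["osqueryi", "--json", QUERY],
        capture_output=True, text=True, check=True,
    )
    for row in json.loads(result.stdout):
        print(row)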

> reading and reporting all files

It does not read and report all files as far as I know, but I think it's possible to make SQL queries to read specific files. But all files or file names aren't stored in Kolide or anything like that. And that live query feature is audited (end users can see all queries run against their machines) and can be disabled by administrators.

> web browsing history

This is not directly possible as far as I know (maybe via a file-read query), but it's not something built in out of the box by default. And again, custom queries are transparent to users and can be disabled.

> Kolide's whole spiel about "honest security"[1] reeks of PR mumbo jumbo whose only purpose is to distance themselves from other "bad" solutions in the same space

While it's definitely a PR thing, they might still believe in it and practice what they preach. To me it sounds like a good thing to differentiate oneself from bad actors.

Kolide gives users full transparency of what data is collected via their Privacy Center, and they allow end users to make decisions about what to do about findings (if anything) rather than enforcing them.

> It's built by Facebook alumni, after all, and relies on FB software (osquery).

For example, React and Semgrep are also built by Facebook/Facebook alumni, but I don't really see the relevance other than some ad-hominem.

Full disclosure: No association with Kolide, just a happy user.


Great news - Kolide has a new integration with Okta that'll prevent you from logging into anything if Kolide has a problem with your device!


I concede that I may be unreasonably biased against Kolide because of the type of software it is, but I think you're minimizing some of these issues. My memory may be vague on the specifics, but there were certainly many complaints in the areas I mentioned in the company I worked at.

That said, since Kolide/osquery is a very flexible product, the complaints might not have been directed at the product itself, but at how it was configured by the security department. There are definitely some growing pains until the company finds the right balance of features that everyone finds acceptable.

Re: intrusiveness, it doesn't matter that Kolide is a report-only tool. Although, it's also possible to install extensions[1,2] that give it deeper control over the system.

The problem is that the policies it enforces can negatively affect people's workflow. For example, forcing screen locking after a short period of inactivity has dubious security benefits if I'm working from a trusted environment like my home, yet it's highly disruptive. (No, the solution is not to track my location, or give me a setting I have to manage...) Forcing automatic system updates is also disruptive, since I want to update and reboot at my own schedule. Things like this add up, and the combination of all of them is equivalent to working in a babyproofed environment where I'm constantly monitored and nagged about issues that don't take any nuance into account, and at the end of the day do not improve security in the slightest.

Re: web browsing history, I do remember one engineer looking into this and noticing that Kolide read their browser's profile files, and coming up with a way to read the contents of the history data in SQLite files. But I am very vague on the details, so I won't claim that this is something that Kolide enables by default. osquery developers are clearly against this kind of use case[3]. It is concerning that the product can, in theory, be exploited to do this. It's also technically possible to pull any file from endpoints[4], so even if this is not directly possible, it could easily be done outside of Kolide/osquery itself.
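
To illustrate why that concern isn't far-fetched: browser history is just an SQLite file in the user's profile, so any agent with a generic file-read capability could reach it. A minimal sketch using Python's sqlite3 directly (the Chrome-on-Linux path and the urls table are assumptions; they vary by browser, OS and profile):

    # Minimal sketch: browser history is an ordinary SQLite file, so a
    # generic file-read capability is enough to get at it. The path below
    # is an assumption (Chrome on Linux, default profile).
    import shutil, sqlite3, tempfile
    from pathlib import Path

    history_db = Path.home() / ".config/google-chrome/Default/History"

    # Work on a copy: the browser keeps the live database locked.
    with tempfile.TemporaryDirectory() as tmp:
        copy = Path(tmp) / "History"
        shutil.copy(history_db, copy)
        con = sqlite3.connect(copy)
        for url, title in con.execute("SELECT url, title FROM urls LIMIT 5"):
            print(title, url)
        con.close()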

> Kolide gives users full transparency of what data is collected via their Privacy Center

Honestly, why should I trust what that says? Facebook and Google also have privacy policies, yet have been caught violating their users' privacy numerous times. Trust is earned, not assumed based on "trust me, bro" statements.

> For example, React and Semgrep are also built by Facebook/Facebook alumni, but I don't really see the relevance other than some ad-hominem.

Facebook has historically abused their users' privacy, and even has a Wikipedia article about it.[5] In the context of an EDR system, ensuring trust from users and handling their data with the utmost care w.r.t. their privacy are paramount. Actually, it's a bit silly that Kolide/osquery is so vocal in favor of preserving user privacy, when this goes against working with employer-owned devices, where employee privacy is definitely not expected. In any case, the fact this product is made by people who worked at a company built by exploiting its users is very relevant considering the type of software it is. React and Semgrep have an entirely different purpose.

[1]: https://github.com/trailofbits/osquery-extensions

[2]: https://github.com/hippwn/osquery-exec

[3]: https://github.com/osquery/osquery/issues/7177

[4]: https://osquery.readthedocs.io/en/stable/deployment/file-car...

[5]: https://en.wikipedia.org/wiki/Privacy_concerns_with_Facebook


> For example, forcing screen locking after a short period of inactivity has dubious security benefits if I'm working from a trusted environment like my home, yet it's highly disruptive.

There is a better alternative too. Make it fair game for coworkers to send an invitation for a beer from the forgetful worker's machine to the whole company/department. It works wonders.


If your company is large enough, you can’t really trust your employees. Do you really think Google can trust that not a single one of their employees does something stupid, or is even actively malicious?


Limit their abilities using OS features? Have the vendor fix security issues rather than a third party incompetently slapping on a band-aid?

It's like you let one company build your office building and then bring in another contractor to randomly add walls and have others removed while having never looked at the blueprints and then one day "whoopsie, that was a supporting wall I guess".

Why is it not just completely normal but even expected that an OS vendor can't build an OS properly, or that the admins can't properly configure it, but instead you need to install a bunch of crap that fucks around with OS internals in batshit crazy ways? I guess because it has a nice dashboard somewhere that says "you're protected". Checkbox software.


The sensor basically monitors everything that's happening on the system and then uses heuristics and known attack vectors and behaviors to, for example, lock compromised systems down. Think of fileless malware that connects to a C&C server, begins to upload all local documents and stored passwords, then slowly enumerates every service the employee has access to, looking for vulnerabilities.
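
As a toy illustration of that kind of behavioral heuristic (this says nothing about how the CrowdStrike sensor is actually implemented; the event shape, allowlist and threshold are all made up for the sketch):

    # Toy behavioral rule over process telemetry: flag a process that both
    # reads many user documents and talks to a host not on an allowlist.
    # Event format, allowlist and threshold are invented for illustration.
    from collections import defaultdict

    KNOWN_HOSTS = {"update.vendor.example", "mail.corp.example"}  # hypothetical
    DOC_EXTS = (".docx", ".xlsx", ".pdf")
    READ_THRESHOLD = 50

    doc_reads = defaultdict(int)
    unknown_conns = defaultdict(set)

    def alert(pid, hosts):
        # A real sensor would isolate the host or kill the process here.
        print(f"suspicious exfiltration pattern: pid={pid} hosts={hosts}")

    def on_event(event):
        pid = event["pid"]
        if event["type"] == "file_read" and event["path"].endswith(DOC_EXTS):
            doc_reads[pid] += 1
        elif event["type"] == "net_connect" and event["host"] not in KNOWN_HOSTS:
            unknown_conns[pid].add(event["host"])
        if doc_reads[pid] > READ_THRESHOLD and unknown_conns[pid]:
            alert(pid, unknown_conns[pid])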

If you manage a fleet of tens of thousands of systems and you need to protect against well-funded organized crime? Employees running malicious code under their user is a given and can't be prevented. Buying the CrowdStrike sensor doesn't seem like such a bad idea to me. What would you do instead?


> What would you do instead?

As said, limit the user's abilities as much as possible with features of the OS and software in use. Maybe, if you want those other metrics, use a firewall, but not a TLS-breaking virus-scanning abomination that has all the same problems; a simple one that can warn you about unusual traffic patterns. If someone from accounting starts uploading a lot of data, or connects to Google Cloud when you don't use any of their products, that should be odd.
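
A rough sketch of the kind of simple egress check meant here, working over flow records; the record format, per-host baseline and spike factor are assumptions for illustration:

    # Rough sketch: warn when a host sends far more data to unexpected
    # destinations than its usual baseline. Flow format, baseline and
    # threshold are all assumptions.
    from collections import defaultdict

    EXPECTED_DESTS = {"internal", "office365"}  # hypothetical destination labels
    SPIKE_FACTOR = 10

    # Learned per-host daily baseline in bytes (placeholder default).
    baseline_bytes = defaultdict(lambda: 1_000_000)

    def check_flows(flows):
        """flows: iterable of {'src': host, 'dest_label': str, 'bytes': int}."""
        totals = defaultdict(int)
        for flow in flows:
            if flow["dest_label"] not in EXPECTED_DESTS:
                totals[flow["src"]] += flow["bytes"]
        for host, sent in totals.items():
            if sent > SPIKE_FACTOR * baseline_bytes[host]:
                print(f"warn: {host} sent {sent} bytes to unexpected destinations")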

If we're talking about organized crime, I'm not convinced crowdstrike in particular doesn't actually enlarge the attack surface. So we had what now as the cause, a malformed binary ruleset that the parser, running with kernel privileges, choked on and crashed the system. Because of course the parsing needs to happen in kernel space and not in a sandboxed process. That's enough for me to make assumptions about the quality of the rest of the software, and answer the question regarding attack surface.

Before this incident nobody ever really looked at this product at all from a security standpoint, maybe because it is (supposed to be) a security product and thus cannot have any flaws. But it seems now security researchers all over the planet start looking at this thing and are having a field day.

Bill Gates sent that infamous email in the early 2000s, I think after Sasser hit the world, saying that security should be made the no. 1 priority for Windows. As much as I dislike Windows for various reasons, I think overall Microsoft does a rather good job on this. Maybe it's time the companies behind these security products started taking security seriously too?


> Before this incident nobody ever really looked at this product at all from a security standpoint

If you only knew how absurd of a statement that is. But in any case, there are just too many threats that network IDS/IPS solutions won't help you with; any decent C2 will make it trivial to circumvent them. You can't limit the permissions of your employees to the point of being effective against such attacks while still letting them do their jobs.


> If you only knew how absurd of a statement that is.

You don't seem to know either, since you don't elaborate on this. As said, people are picking this apart on Twitter and Mastodon right now. Give it a week or two and I bet we'll see a couple of CVEs from this.

For the rest of your post you seem to ignore the argument regarding attack surface, as well as the fact that there are companies not using this kind of software and apparently doing fine. But I guess we can just claim they are fully infiltrated and just don't know because they don't use crowdstrike. Are you working for crowdstrike by any chance?

But sure, at the end of the day you're just gonna weigh the damage this outage did to your bottom line and the frequency you expect this to happen with, against a potential hack - however you even come up with the numbers here, maybe crowdstrike salespeople will help you out - and maybe tell yourself it's still worth it.


In a sense the secure platform already exists. You use web apps as much as possible. You store data in cloud storage. You restrict local file access and execute permissions. Authenticate using passkeys.

The trouble is that people still need local file access, and use network file shares. You have hundreds of apps used by a handful of users that need to run locally. And a few intranet apps that are mission critical and have dubious security. That creates the necessity for wrapping users in firewalls, VPNs, TLS interception, endpoint security, etc. And the less well it all works, the more you need to fill the gaps.


Next you'll be saying "I don't need an immune system..."

Fun fact: an attacker only needs to steal credentials from the home directory to jump into a company's AWS account where all the juicy customer data lives, so there are reasons we want this control.

Frankly I'd like to see the smart people complaining help write better solutions rather than hinder.


If that’s all it takes for an attacker, you’re doing AWS wrong.


Problem is that many do.

Doing it right requires very capable individuals and a significant effort. Less than it used to take, more than most companies are ready to invest.


This is the real world, everyone is doing something wrong.

The alternative is to replace you with AI yes?


people get lazy


There is an open source alternative. GRR:

https://github.com/google/grr

Every Google client device has it.


There are lots of variants of this. Wazuh, Velociraptor, etc. They have several problems. One is that user-mode EDR is just not very efficient and effective, and kernel mode requires Microsoft driver signing. There are some hoops for that, and I don't know how hard they are, but I don't know of any of these products that seems to be jumping through them.

The other issue is that detection engineering is really expensive, so the detections that are included with CrowdStrike out of the box are your problem if you're using a free product. From a cost perspective you're not getting off a lot cheaper, and trying to sell open source plus a detection engineer's salary to a CISO who can just buy CrowdStrike instead is understandably a pretty tough sell. Or it was until this weekend, anyway.


It sounds really interesting. But the only thing it does not do is scan for viruses/malware, although this could be implemented using GRR I guess. How does Google mitigate malware threats in-house?


> Bypassing the discussion of whether one actually needs rootkit-powered endpoint surveillance software such as CS, perhaps an open-source solution would be a killer way to move this whole sector toward more ethical standards.

As a red teamer developing malware for my team to evade the EDR solutions we come across, I can tell you that EDR systems are essential. The phrase "rootkit-powered endpoint surveillance" is a mischaracterization, often fueled by misconceptions from the gaming community. These tools provide essential protection against sophisticated threats, and they catch them. Without them, my job would be 90% easier when doing a test where Windows boxes are included.

> The main tool would be open source, so it would be transparent what exactly it does and that it is free of backdoors or really bad bugs.

Open-source EDR solutions, like OpenEDR [1], exist but are outdated and offer poor telemetry. Assembling the various GitHub POCs that exist into a production EDR is impractical and insecure.

The EDR sensor itself becomes the target. As a threat actor, the EDR is the only thing in your way most of the time. Open sourcing them increases the risk of attackers contributing malicious code to slow down development or introduce vulnerabilities. It becomes a nightmare for development, as you can't be sure who is on the other side of the pull request. TAs will do everything to slow down the development of a security sensor. It is a very adversarial atmosphere.

> On the other hand, it could still be a business model to supply malware signatures as a security team feeding this system.

It is actually the other way around. Open-source malware heuristic rules do exist, such as Elastic Security's detection rules [2]. Elastic also provides EDR solutions that include kernel drivers and is, in my experience, the harder one to bypass. Again, please make an EDR without drivers for Windows; it makes my job easier.

> It could be audited by the public.

The EDR sensors already do get "audited" by security researchers and the threat actors themselves, who reverse engineer and debug the sensors to spot weaknesses that can be "abused." If I spot things like the EDR just plainly accepting kernel-mode shellcode and executing it, I will, of course, publicly disclose that. EDR sensors are under a lot of scrutiny.

[1] https://github.com/ComodoSecurity/openedr

[2] https://github.com/elastic/detection-rules


> Open sourcing them increases the risk of attackers contributing malicious code to slow down development or introduce vulnerabilities.

This is such a tired non-sequitur argument, with no evidence whatsoever to back up the claim that the risk is actually higher for open source versus closed source.

I can just as easily argue that a state or non-state actor could buy[1], bribe or simply threaten to get weak code into a proprietary system, without users having any means to ever find out. On the other hand, it is always easier (easier, not easy) to discover a compromise in open source, like it happened with xz[2], and to verify such reports independently.

If there is no proof that compromise is less likely with closed source, and it is far easier to discover compromises in open source, the logical conclusion is simply that open source is better for security libraries.

Funding defensive security infrastructure that is open source and freely available for everyone to use, even with 1/100th of the NSA budget that is effectively only offensive, would improve info-security enormously for everyone, not just against nation-state actors but also against scammers etc. Instead we get companies like CS that have an enormous vested interest in seeing that this never happens, and that try to scare the rest of us into thinking open source is bad for security.

[1] https://en.wikipedia.org/wiki/Dual_EC_DRBG

[2] https://en.wikipedia.org/wiki/XZ_Utils_backdoor


I could see an open source solution with "private" or vendor-specific definition files. But I think I'd disagree with the statement that open sourcing everything wouldn't cause any problem. Engineering isn't necessarily about peer-reviewed studies; it's about empirical observations and applying the engineering method (which can be complemented by a more scientific one but shouldn't be confused for it). It's clear that this type of stuff is a game of cat and mouse. Attackers search for any possible vulnerability, bypass, etc. It does make sense that exposing one side's machinery will make it easier for the other side to see how it works. A good example of that is how active hackers are at finding different ways to bypass Windows Defender by using certain types of Office file formats, or certain combinations of file conversions, to execute code. Exposing the code would just make all of those immediately visible to everyone.

Eventually that's something that gets exposed anyways, but I think the crucial part is timing and being a few steps ahead in the cat and mouse game. Otherwise I'm not sure what kind of proof would even be meaningful here.


> open sourcing everything wouldn't cause any problem

That is not what I am saying. I am saying open sourcing doesn't cause more problems than proprietary systems, which is the argument OP was making.

Open source is not a panacea, it is just not objectively worse as OP implies.


I actually agree there is no intrinsic advantage in having this piece of software as open source - closed teams tend to have a more contained collaborator "blast radius", and you don't have 500 forks with patches that may modify behaviour in a subtle way and that are somehow conflated with the original project.

On the other hand, anyone serious about malware development already has "the actual source code", whether for defensive or offensive operations.


Open source doesn't mean the bazaar; plenty of projects have a cathedral-style development.

The bazaar works absolutely fine for security. The Linux kernel is one project which does this, and all security infrastructure uses it one way or another. In 30 years, the tens of thousands of patches and forks have not once been found to contain an intentionally submitted subtle bug or vulnerability of that kind.

There seems to be a lot of misconceptions in this thread what open source is or can do. Most of my points have been made by people much better than me for decades now.


I have a different take on this.

I feel having the solution open sourced isn't bad from a code security standpoint, but rather that it is simply not economically viable. To my knowledge, most of the major open source technologies are currently funded by FAANG, purely because they need them to conduct business, and the moment it becomes inconvenient for them to support something they fork it or develop their own; see Terraform/Redis...

I also cannot get behind a government funding model, purely because it will simply become a design-by-committee nightmare because this isn't flashy tech. Just see how many private companies have beaten NASA to market in a pretty well funded and very flashy industry. The very government you want to fund these solutions is currently running on private companies' infrastructure for all its IT needs.

Yes, open-sourcing is definitely amazing and if executed well will be better, just like communism.


Plenty of fundamental research and development happens in academia fairly effectively.

The government has to fund it, not run it, just like any other grant works today. The existing foundations and non-profits like Apache, or even mixed ones like Mozilla, are fairly capable of handling the grants.

Expecting private companies or dedicated volunteers to maintain mission-critical libraries like xz, as we are doing now, is not a viable option.


Seems like we agree then. There is a middle point and I would actually prefer for it to be some sort of open source one.


> The phrase "rootkit-powered endpoint surveillance" is a mischaracterization, often fueled by misconceptions from the gaming community.

How exactly is this a mischaracterization? Technically these EDR tools are identical to kernel-level anticheat, and they are identical to rootkits, because fundamentally they're all the same thing, just with a different owner. If you disagree it would be nice if you explained why.

As for open source EDRs becoming the target, this is just as true of closed source EDR. Cortex for example was hilariously easy to exploit for years and years until someone was nice enough to tell them as much. This event from CrowdStrike means that it's probably just as true here.

The fact that the EDR is 90% of the work of attacking a Windows network isn't a sign that we should continue using EDRs. It means that nothing privileged should be in a Windows network. This isn't that complicated: I've administered such a network where everything important was on Linux while end users could run Windows clients, and if anything it's easier than doing a modern Windows/AD deployment. Good luck pivoting from one computer to another when they're completely isolated through a Linux server you have no credentials for. No endpoint should have any credentials that are valid anywhere except on the endpoint itself, and no two endpoints should be talking to each other directly: this is in fact not very restrictive to end users and completely shuts down lateral movement - it's a far better solution than convoluted and insecure EDR schemes that claim to provide zero-trust but fundamentally can't, while following this simple rule actually provides you zero trust.

Look at it this way - if you (and other redteamers) can economically get past EDR systems for the cost of a pentest, what do you think competent hackers with economies of scale and million-dollar payouts can do? For now there are enough systems without EDRs that many just won't bother, but as it spreads more, they will just be exploited more. This is true as well of the technical analogue in kernel anticheat, which you and I can bypass in a couple days of work.

Where we are is that we're using EDRs as a patch over a fundamentally insecure security model in a misguided attempt to keep the convenience that insecurity brings.


Mischaracterization is quite a good term to use

People don't go around complaining that Microsoft Defender is "rootkit-powered endpoint surveillance". Its intent is to protect the system.

There is a lot more suspicion around kernel-level anti-cheat software developed by the likes of Epic Games, due to their ownership, than there is around CrowdStrike or Microsoft.


People don't complain about kernel code from Microsoft because Microsoft wrote the kernel. You don't have a choice but to trust Microsoft with that.

People have been complaining about rootkit powered antimalware for a long time. It didn't start with CrowdStrike: there was a whole debacle about it in the Windows XP days when Microsoft stopped antiviruses from patching the kernel.


The value CrowdStrike provides is the maintenance of the signature database, and being able to monitor attack campaigns worldwide. That takes a fair amount of resources that an open source project wouldn’t have. It’s a bit more complicated than a basic hash lookup program.
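
For context, the "basic hash lookup program" baseline really is only a few lines; a deliberately naive sketch (the signature file is hypothetical, and real products also match on behavior, fuzzy hashes and cloud telemetry):

    # Deliberately naive baseline: exact-match file hashes against a local
    # signature list. This part is easy; curating and shipping the
    # signatures at scale is what vendors actually sell. "signatures.txt"
    # is a hypothetical file with one SHA-256 hex digest per line.
    import hashlib
    import sys

    with open("signatures.txt") as f:
        BAD_HASHES = {line.strip() for line in f if line.strip()}

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    for path in sys.argv[1:]:
        verdict = "MALICIOUS" if sha256_of(path) in BAD_HASHES else "ok"
        print(f"{verdict}\t{path}")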


Security isn't really a product you can just buy or outsource, but here we are.


Crowdstrike is a gun. A tool. But not the silver bullet. Or training to be able to fire it accurately under pressure at the werewolf.

You can very easily shoot your own foot off instead of slaying the monster, use the wrong ammunition and be ineffective, or, as in this case, have a poorly crafted gun explode in your hand while you are holding it.


There used to be Winpooch Watchguard, based on ClamAV. Stopped using it when it caused Bluescreens. A "Killer" indeed.


There are a number of OSS EDRs. They all suck.

DAT-style content updates and signature-based prevention are very archaic. Directly loading content into memory and a hard-coded list of threats? I was honestly shocked that CS was still doing DAT-style updates in an age of ML and real-time threat feeds. There are a number of vendors who've offered it for almost a decade. We use one. We have to run updates a couple of times a year.

SMH. The 90's want their endpoint tech back.


There are no "ethical standards" to move to. Nobody should be able to usurp control of our computers. That should simply be declared illegal. Creating contractual obligations that require people to cede control of their computers should also be prohibited. Anything that does this is malware and malware does not become justified or "ethical" when some corporation does it. Open source malware is still malware.


What does “our computer” mean when it is not owned by you, but issued to you to perform a task with by your employer? Does that also apply to the operator at a switchboard in a nuclear missile launch facility?


Does the switchboard in a nuclear missile launch facility run Crowdstrike? I picture it as a high quality analog circuit board that does 1 thing and 1 thing only. No way to run anything else.

Globally networked personal computers were kind of a cultural revolution against the setting you describe. Everyone had their own private compute and compute time, and everyone could share their own opinion. Computers became our personal extensions. This is what IBM, Atari, Commodore, Be, Microsoft and Apple (and later desktop Linux) sold. Now, given this ideology, can a company own my limbs? If not, they can't own my computers.


> What does “our computer” mean when it is not owned by you, but issued to you to perform a task with by your employer?

Well, presuming that:

1. the employee is issued a computer, that they have possession of even if not ownership (i.e. they bring the computer home with them, etc.)

2. and the employee is required to perform creative/intellectual labor activities on this computer — implying that they do things like connecting their online accounts to this computer; installing software on this computer (whether themselves or by asking IT to do it); doing general web-browsing on this computer; etc.

3. and where the extent of their job duties blurs the line between "work" and "not work" (most salaried intellectual-labor jobs are like this) such that the employee basically "lives in" this computer, even when not at work...

4. ...to the point that the employee could reasonably conclude that it'd be silly for them to maintain a separate "personal" computer — and so would potentially sell any such devices (if they owned any), leaving them dependent on this employer-issued computer for all their computing needs...

...then I would argue that, by the same chain of reasoning as in the GP post, employers should not be legally permitted to “issue” employees such devices.

Instead, the employer should either purchase such equipment for the employee, giving it to them permanently as a taxable benefit; or they should require that the employee purchase it themselves, and recompense them for doing so.

Cyberpunk analogy: imagine you are a brain in a vat. Should your employer be able to purchase an arbitrary android body for you; make you use it while at work; and stuff it full of monitoring and DRM? No, that'd be awful.

Same analogy, but with the veil stripped off: imagine you are paraplegic. Should your employer be allowed to issue you an arbitrary specific wheelchair, and require you to use it at work, and then monitor everything you do with it / limit what you can do with it because it’s “theirs”? No, that’d be ridiculous. And humanity already knows that — employers already can't do that, in any country with even a shred of awareness about accessibility devices. The employer — or very much more likely, the employer's insurance provider — just buys the person the chair. And then it's the employee's chair.

And yes, by exactly the same logic, this also means that issuing an employee a company car should be illegal — at least in cases where the employee lives in a non-walkable area, and doesn't already have another car (that they could afford to keep + maintain + insure); and/or where their commute is long enough that they'd do most non-employment-related car-requiring things around work and thus using their company car. Just buy them a car. (Or, if you're worried they might run away with it, then lease-to-own them a car — i.e. where their "equity in the car" is in the form of options that vest over time, right along-side any equity they have in the company itself.)

> Does that also apply to the operator at a switchboard…

Actually, no! Because an operator of a switchboard is not a “user” of the computer that powers the switchboard, in the same sense that a regular person sitting at a workstation is a "user" of the workstation.

The system in this case is a “kiosk computer”, and the operator is performing a prescribed domain-specific function through a limited UX they’re locked into by said system. The operator of a nuclear power plant is akin to a customer ordering food from a fast-food kiosk — just providing slightly more mission-critical inputs. (Or, for a maybe better analogy: they're akin to a transit security officer using one of those scanner kiosk-handhelds to check people's tickets.)

If the "computer" the nuclear-plant operator was operating, exposed a purely electromechanical UX rather than a digital one — switches and knobs and LEDs rather than screens and keyboards[1] — then nothing about the operator's workflow would change. Which means that the operator isn't truly computing with the computer; they're just interacting with an interface that happens to be a computer.

[1] ...which, in fact, "modern" nuclear plants are. The UX for a nuclear power plant control-center has not changed much since the 1960s; the sort of "just make it a touchscreen"-ification that has infected e.g. automotive has thankfully not made its way into these more mission-critical systems yet. (I believe it's all computers under the hood now, but those computers are GPIO-relayed up to panels with lots and lots of analogue controls. Or maybe those panels are USB HID devices these days; I dunno, I'm not a nuclear control-systems engineer.)

Anyway, in the general case, you can recognize these "the operator is just interacting with an interface, not computing on a computer" cases because:

• The machine has separate system administrators who log onto it frequently — less like a workstation, more like a server.

• The machine is never allowed to run anything other than the kiosk app (which might be some kind of custom launcher providing several kiosk apps, but where these are all business-domain specific apps, with none of them being general-purpose "use this device as a computer" apps.)

• The machine is set up to use domain login rather than local login, and keeps no local per-user state; or, more often, the machine is configured to auto-login to an "app user" account (in modern Windows, this would be a Mandatory User Profile) — and then the actual user authentication mechanism is built into the kiosk app itself.

Hopefully, the machine is using an embedded version of the OS, which has had all general-purpose software stripped out of it to remove vulnerability surface.
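To make the "custom launcher" idea above concrete, here's a minimal sketch of what such a shell might look like: the operator can only start one of a few whitelisted, business-specific apps, and there is no path from the menu to a general-purpose desktop. The app names and paths are invented purely for illustration.

    import subprocess

    # The only things the operator can launch. Names/paths are hypothetical.
    KIOSK_APPS = {
        "1": ("Ticket scanner", ["/opt/acme/ticket-scanner"]),
        "2": ("Shift report", ["/opt/acme/shift-report"]),
    }

    def main():
        # The launcher never exits; there is no "drop to desktop" option.
        while True:
            print("Select an application:")
            for key, (label, _) in sorted(KIOSK_APPS.items()):
                print(f"  {key}. {label}")
            choice = input("> ").strip()
            if choice not in KIOSK_APPS:
                print("Unknown option.")
                continue
            label, argv = KIOSK_APPS[choice]
            # Block until the kiosk app exits, then return to the menu.
            subprocess.run(argv)

    if __name__ == "__main__":
        main()

In a real deployment this role is usually played by the OS itself (e.g. Windows assigned access / a locked-down shell replacement) rather than a script, but the shape is the same: a menu of business apps, and nothing else.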


> the employee could reasonably conclude that it'd be silly for them to maintain a separate "personal" computer — and so would potentially sell any such devices

What a bizarre leap of logic. Can FedEx employees reasonably sell their non-uniform clothes? Just because the employer in this scenario didn't 100% lock down the computer (which is a good thing, because the alternative would be incredibly annoying for day-to-day work) doesn't mean the employee can treat it as their own. Even from a privacy perspective, it would be pretty silly. Are you going to use the employer-provided computer to apply to your next job?


People do do it, though. Especially poor people, who might not use their personal computers very often.

Also, many people don't own a separate "personal" computer in the first place. Especially, again, poor people. (I know many people who, if needing to use "a PC" for something, would go to a public library to use the computers there.)

Not every job is a software dev position in the Bay Area, where everyone has enough disposable income to have a pile of old technology lying around. Many jobs for which you might be issued a work laptop still might not pay enough to get you above the poverty line. McDonald's managers are issued work laptops, for instance.

(Also, disregarding economic class for a moment: in the modern day, most people who aren't in tech solve most of their computing problems by owning a smartphone, and so are unlikely to have a full PC at home. But their phone can't do everything, so if they have a work computer they happen to be sat in front of for hours each day — whether one issued to them, or a fixed workstation at work — then they'll default to doing their rare personal "productivity" tasks on that work computer. And yes, this does include updating their CV!)

---

Maybe you can see it more clearly with the case of company cars.

People sometimes don't own any other car (that actually works) until they get issued a company car; so they end up using their company car for everything. (Think especially: tradespeople using their company-logo-branded work box-truck for everything. Where I live, every third vehicle in any parking lot is one of those.)

And people — especially poorer people — also often sell their personal vehicle when they are issued a company car, because this 1. releases them from the need to pay a lease + insurance on that vehicle, and 2. gets them possibly tens of thousands of dollars in a lump sum (that they don't need to immediately reinvest into another car, because they can now rely on the company car.)


The point is that if you do do it, it's on you to understand the limitations of using someone else's property. Just like the difference between rented vs. owned housing.

There are also fairly obvious differences between work-issued computers and all of your other analogies:

1. A car (and presumably the cyberpunk android body) is much more expensive than a computer, so the downside of owning both a personal and a work one is much higher.

2. A chair or a wheelchair doesn't need security monitoring, because it's a chair (I guess you could come up with an incredibly convoluted scenario where it would make sense to put GPS tracking in a wheelchair, but come on).

> just buys the person the chair. And then it's the employee's chair.

It's not because there's a law against loaning chairs, it's because the chair is likely customized for a specific person and can't be reused. Or if you're talking about WFH scenarios, they just don't want to bother with return shipping.


No, it's the difference between owned housing vs renting from a landlord who is also your boss in a company town, where the landlord has a vested interest in e.g. preventing you from using your apartment to also do work for a competitor.

Which is, again, a situation so shitty that we've outlawed it entirely! And then also imposed further regulations on regular, non-employer landlords, about what kinds of conditions they can impose on tenants. (E.g. in most jurisdictions, your landlord can't restrict you from having guests stay the night in your room.)

Tenants' rights are actually a great analogy for what I'm talking about here. A company-issued laptop is very much like an apartment, in that you're "living in it" (literally and figuratively, respectively), and that you therefore should deserve certain rights to autonomous possession/use, privacy, freedom from restriction/compromise in use, etc.

While you don't literally own an apartment you're renting, the law tries to, as much as possible, give tenants the rights of someone who does own that property; and to restrict the set of legal justifications that a landlord can use to punish someone for exercising those (temporary) rights over their property.

IMHO having the equivalent of "tenants' rights" for something like a laptop is silly, because that'd be a lot of additional legal edifice for not-much gain. But, unlike with real-estate rental, it'd actually be quite practical to just make the "tenancy" case of company IT equipment use impossible/illegal — forcing employers to do something else instead — something that doesn't force employees into the sort of legal area that would make "tenants' rights" considerations applicable in the first place.


No, that would be more like sleeping at the office (purely because of employee preferences, not because the employer forces you to or anything like that) and complaining about security cameras.


Tangent — a question you didn't ask, but I'll pretend you did:

> If employers allowed employees to "bring their own devices", and then didn't force said employees to run MDM software on those devices, then how in the world could the employer guarantee the integrity of any line-of-business software the employee must run on the device; impose controls to stop PII + customer-shared data + trade secrets from being leaked outside the domain; and so forth?

My answer to that question: it's safe to say that most people in the modern day are fine with the compromise that your device might be 100% yours most of the time; but, when necessary — when you decide it to be so — 99% yours, 1% someone else's.

For example, anti-cheat software in online games.

The anti-cheat logic in online games is a little nugget of code that runs on a little sub-computer within your computer (Intel SGX or equivalent). This sub-computer acts as a "black box" — it's something the root user of the PC can't introspect or tamper with. However:

• Whenever you're not playing a game, the anti-cheat software isn't loaded. So most of the time, your computer is entirely yours.

• You get to decide when to play an online game, and you are explicitly aware of doing so.

• When you are playing an online game, most of your computer — the CPU's "application cores", and 99% of the RAM — is still 100% under your control. The anti-cheat software isn't actually a rootkit (despite what some people say); it can't affect any app that doesn't explicitly hook into it.

• In a brute-force sense, you still "control" the little sub-computer as well — in that you can force it to stop running whatever it's running whenever you want. SGX and the like aren't like Intel's Management Engine (which really could be used by a state actor to plant a non-removable "ring -3" rootkit on your PC); instead, SGX is more like a TPM, or an FPGA: it's something that's ultimately controlled by the CPU from ring 0, just with a very circumscribed API that doesn't give the CPU the ability to "get in the way" of a workload once the CPU has deployed that workload to it, other than by shutting that workload off.
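To illustrate the shape of the handshake being described (not any vendor's actual API), here's a toy Python sketch of the "black box attests what it's running to the game server" idea. A shared HMAC key stands in for the enclave's sealed signing key; real SGX-style attestation uses asymmetric keys and a vendor attestation service, and the module name and hashes below are made up.

    import hashlib
    import hmac
    import secrets

    # Secret known only to the "black box" and the game server
    # (a stand-in for an enclave's sealed signing key).
    ENCLAVE_KEY = secrets.token_bytes(32)
    EXPECTED = hashlib.sha256(b"anti-cheat module v1.2.3").hexdigest()

    def enclave_quote(nonce, loaded_code):
        # The sub-computer hashes what it is actually running and binds
        # that measurement to the server's fresh nonce.
        measurement = hashlib.sha256(loaded_code).hexdigest()
        tag = hmac.new(ENCLAVE_KEY, nonce + measurement.encode(), hashlib.sha256).hexdigest()
        return measurement, tag

    def server_accepts(nonce, measurement, tag):
        # Right code + fresh nonce + valid key -> let the client into the match.
        expected_tag = hmac.new(ENCLAVE_KEY, nonce + measurement.encode(), hashlib.sha256).hexdigest()
        return measurement == EXPECTED and hmac.compare_digest(tag, expected_tag)

    nonce = secrets.token_bytes(16)
    print(server_accepts(nonce, *enclave_quote(nonce, b"anti-cheat module v1.2.3")))         # True
    print(server_accepts(nonce, *enclave_quote(nonce, b"anti-cheat module v1.2.3 + hacks")))  # False

The point of the design is that nothing running with mere root on the PC can forge the tag without the sealed key; the worst the machine's owner can do is decline to produce a quote, in which case the server simply refuses to let them into the match.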

As much as people like Richard Stallman might freak out at the above design, it really isn't the same thing as your employer having root on your wheelchair. It's more like how someone in a wheelchair knows that if they get on a plane, then they're not allowed to wheel their own wheelchair around on the plane, and a flight attendant will instead be doing that for them.

How does that translate to employer MDM software?

Well, there's no clear translation yet, because we're currently in a paradigm that favors employer-issued devices.

But here's what we could do:

• Modern PCs are powerful enough that anything a corporation wants you to do can be done in a corporation-issued VM that runs on the computer.

• The employer could then require the installation of an integrity-verification extension (essentially "anti-cheat for VMs") that ensures that the VM itself, the hypervisor software that runs it, and the host kernel the hypervisor is running on top of all haven't been tampered with. (If any of them were, then the extension wouldn't be able to sign a remote-attestation packet, and the employer's server in turn wouldn't return a decryption key for the VM, so the VM wouldn't start. A rough sketch of this handshake follows this list.)

• The employer could feel free to MDM the VM guest kernel — but they likely wouldn't need to, as they could instead just lock it down in much-more-severe ways (the sorts of approaches you use to lock down a server! or a kiosk computer!) that would make a general-purpose PC next-to-useless, but which would be fine in the context of a VM running only line-of-business software. (Remember, all your general-purpose "personal computer" software would be running outside the VM. Web browsing? Outside the VM. The VM is just for interacting with Intranet apps, reading secure email, etc.)

(Why yes, I am describing https://en.wikipedia.org/wiki/Multilevel_security.)
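Here is a rough sketch of that "no valid attestation, no decryption key, no VM" flow. Assumptions: an HMAC key stands in for a TPM- or enclave-held attestation key, and every component name and hash is invented for illustration.

    import hashlib
    import hmac
    import secrets

    ATTESTATION_KEY = secrets.token_bytes(32)  # provisioned into the integrity extension
    VM_DISK_KEY = secrets.token_bytes(32)      # held only by the employer's server

    def measure(blob):
        return hashlib.sha256(blob).hexdigest()

    # What the employer expects each layer on the employee's machine to be.
    EXPECTED = {
        "host_kernel": measure(b"host kernel 6.9.1"),
        "hypervisor":  measure(b"corp-hypervisor 2.0"),
        "vm_image":    measure(b"corp-vm-image 2024-07"),
    }

    def build_report(components, nonce):
        # Client side: measure every layer and sign the report with the nonce.
        report = {name: measure(blob) for name, blob in components.items()}
        payload = nonce + "".join(report[k] for k in sorted(report)).encode()
        return report, hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()

    def release_key(report, signature, nonce):
        # Server side: only hand out the VM disk key if the signature checks
        # out and every measured layer matches what was expected.
        payload = nonce + "".join(report[k] for k in sorted(report)).encode()
        expected_sig = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
        if hmac.compare_digest(signature, expected_sig) and report == EXPECTED:
            return VM_DISK_KEY
        return None  # tampered stack: the VM never boots

    nonce = secrets.token_bytes(16)
    clean = {"host_kernel": b"host kernel 6.9.1",
             "hypervisor":  b"corp-hypervisor 2.0",
             "vm_image":    b"corp-vm-image 2024-07"}
    print(release_key(*build_report(clean, nonce), nonce) is not None)    # True: key released
    patched = dict(clean, host_kernel=b"host kernel 6.9.1 (user-modified)")
    print(release_key(*build_report(patched, nonce), nonce) is not None)  # False: key withheld

Everything outside the VM stays untouched; the employer's only leverage is to withhold the key when the measured stack isn't the one they issued.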


> For example, anti-cheat software in online games

> The anti-cheat software isn't actually a rootkit (despite what some people say); it can't affect any app that doesn't explicitly hook into it.

Out of all examples you could have cited, you chose this one.

https://www.theregister.com/2016/09/23/capcom_street_fighter...

https://twitter.com/TheWack0lian/status/779397840762245124

There you go. An anti-cheat rootkit so ineptly coded it serves as literal privilege escalation as a service. Can we stop normalizing this stuff already?

My computer is my computer, and your computer is your computer.

The game company owns their servers, not my computer. If their game runs on my machine, then cheating is my prerogative. It is quite literally an exercise of my computing freedom if I decide to change the game's state to give myself infinite health or see through walls or whatever. It's not their business what software I run on my computer. I can do whatever I want.

It's my machine. I am the god of this domain. The game doesn't get to protect itself from me. It will bend to my will if I so decide. It doesn't have a choice in the matter. Anything that strips me of this divine power should be straight up illegal. I don't care what the consequences are for corporations, they should not get to usurp me. They don't get to create little extraterritorial islands in our domains where they have higher power and control than we do.

I don't try to own their servers and mess with the code running on them. They owe me the exact same respect in return.


> If their game runs on my machine, then cheating is my prerogative.

Sure.

However, due to the nature of how these games work, cheating cannot be prevented serverside only.

So, if you want to play the game, you have to agree to install the anti-cheat because it's the only way to actually stop cheating.

The *only* other alternative is to sell a separate category of gaming machines where users wouldn't have access to install cheats, using something like the TPM to enforce that.


I don't have to agree to a thing. They're the ones who should have to accept our freedom. We're not about to sacrifice our power and freedom for the sake of preventing cheating in video games. Not only are we going to play the games, we're going to impose some of our terms and conditions on these things.


> I don't have to agree to a thing.

Sure, you don't have to agree that the earth isn't flat either. But then, just as here, you'd be entirely wrong.

> We're not about to sacrifice our power and freedom for the sake of preventing cheating in video games

Sure we are, gladly. Maybe not you or me, but most people absolutely.

If you want to play AAA games, that's the compromise. Until they release limited gaming PCs that are basically consoles.

> we're going to impose some of our terms and conditions on these things.

I doubt that, but I wish you all the best. It won't change the fact that the consumer end needs to be locked down to prevent cheating, though.


Yes, that is why the owners of the computers (corps) use these tools - to maintain control over their hardware (and the IP accessible on it). The end user is neither the customer nor the real "user" here.


Oh stop it. It’s not your machine, it’s your employer’s machine. You’re the user of the machine. You’re cargo-culting some ideological take that doesn’t apply here at all.


> It’s not your machine, it’s your employer’s machine.

Agreed. I'm fine with this, as long as the employer also accepts that I will never use a personal device for work, that I will never use a minute of personal time for work, and that my productivity is significantly affected by working on devices and systems provided and configured by the employer. This knife cuts both ways.


If only that were possible. Luckily for my employer, I end up thinking about problems to be solved during my off hours like when I'm sleeping and in the shower. Then again, I also think about non-work life problems sitting at my desk when I'm supposed to be working, so (hopefully) it evens out.


I don't think it's possible either. But the moment my employer forces me to install a surveillance rootkit on the machine I use for work—regardless of who owns the machine—any trust that existed in the relationship is broken. And trust is paramount, even in professional settings.


If you don't already have antivirus on your work machine, you're in an extremely small minority. As a consultant with projects that last about a week, I've experienced the onboarding process of over a hundred orgs first-hand. They almost all hand out a Windows laptop, and every single Windows laptop had an AV on it. It's considered negligent not to have some AV solution in the corporate world. And these days, almost all the fancy AVs live in the kernel.


I don't doubt that to be the case, but I'm happy to not work in corporate environments (anymore...). :)


Setting aside the question of whether these security tools are effective at their stated goal, what does this have to do with trust at all? Does the existence of a bank vault break the trust between the bank and its tellers? What is the mechanism that would prevent your computer from getting infected by a 0-day if only your employer trusted you?


> Does the existence of a bank vault break the trust between the bank and the tellers?

That's a strange analogy, since the vault is meant to safeguard customer assets from the public, not from bank employees. Besides, the vault doesn't make the teller's job more difficult.

> What is the mechanism that would prevent your computer from getting infected by a 0-day if only your employer trusted you?

There isn't one. What my employer does is trust that I take care of their assets and follow good security practices to the best of my abilities. Making me install monitoring software is an explicit admission that they don't trust me to do this, and with that they also break my trust in them.


You mean like AV software is meant to safeguard the computer from malware? I'm sure banks have a lot of annoying security-related processes that make tellers' jobs more difficult.


My experience is that in these workplaces where EDR is enforced on all devices used for work, your hypothetical is true (i.e. you are not expected to work on devices not provided by your employer - on the contrary, that is most likely forbidden).


This is an infantile perspective on the relevant issues. Be better.



