Microsoft Purview: Additional classifiers for Communication Compliance (preview) (pupuweb.com)
325 points by CPAhem on June 2, 2022 | 307 comments




I'm already not wanting to have personal conversations on Teams. My tech-savvy colleagues and the ones who can be convinced are on Signal, where we talk about job offers and relationships. A few others do Instagram, and get to see my art photography. And occasionally I'll bump into someone when we're both in the office and be able to say whatever, not looked over by AI. There's a real chilling effect on getting to know people.


> And occasionally I'll bump into someone when we're both in the office and be able to say whatever not looked over by AI.

At my present workplace, we have cameras with microphones. They have also installed spyware on laptops and desktops to be able to see the screens of employees. They also go through emails and keep a log of all web traffic generated by employees.

Which is one of the reasons I handed in my resignation a few days ago.


In most places where I worked, I signed an explicit consent form stating that all company-provided means of communication are for work purposes only, and may be audited. I suppose it's required by law.

So my rule of thumb for workplace is: expect no privacy.

If you want to use work-provided email, slack, etc to discuss things which you'd be very uncomfortable discussing in your office in the open, especially in the presence of your bosses, don't. Find a different venue.


Why would anyone have personal conversations on a platform which is linked to your work?


Because you might be friendly with coworkers and not always switch platform as the topic transitions.


> not always switch platform as the topic transitions

My point is, you totally should. I am friendly with my coworkers too, if I want to have a non "work-friendly" talk with them, we talk in the kitchen or at the pub. It baffles me that people would use a work provided form of communication and _not_ assume it's auditable in some way.

edit: should clarify that my work is probably more calm than most and would probably not GAF about it regardless.. but it's just good opsec. Never write something down you wouldn't be comfortable having read out to you if it can be traced back to you.


Your original question was about "personal conversations". I don't think all personal conversations fall into the category of things I wouldn't write down. If I remember that I need to ask my coworker about dietary restrictions for going out together on the weekend while we are talking about our db problems, I won't necessarily switch to Signal for that.


Then we misunderstood each other. In that case there is no need to change platforms at all -- no one is ever going to penalize you for talking about dietary restrictions or similar topics. If you want to talk about looking for jobs, how much you hate your wife, or what an asshole your boss is, that should obviously not be done in a way that it could ever be traced to you. Failing to do that makes the situation at least half your fault, what did you expect?


It's more the delusion that you can casually maintain perfect opsec. People committing actual crimes regularly get caught by those slip-ups, of course you too will have slip-ups in your every day life.


You should read the report on Brett Goldstein: he was forced out of government for using Signal.


They also built tools to detect when you're doing that


Unless they're recording my conversation at the pub I do not see how (very illegal where I live, I'm hoping illegal everywhere). Care to link an example?


I'm hoping the comment you're replying to is implying that if you switch to a non-work communication platform while still using corporate assets and infrastructure you can still be tracked, which makes perfect sense.


Is that a recent thing? Or only in the US? Age related? Size of the company? The level of personal communication I've witnessed over such tools is pretty superficial, casual. But perhaps that's just my age, location, or the fact that the last time I worked for a large company, Skype was still new.


Signal and WhatsApp aren't 100% trustworthy though. Why not pick something you can host yourself?


Right, I'm tired of this.

E2EE doesn't mean anything if you have the same entity controlling the server as is controlling the endpoints.

If you control both ends of an E2EE communication and they are closed, then you gain nothing over normal TLS encryption; you still trust the authority. (WhatsApp is obviously closed, and yes, Signal can be considered effectively closed, as their client is not reliably or reproducibly built from public sources and they have hidden their agenda before[0]; it even depends on binary blobs from Google.)

I know your favourite closed/walled messenger platform is basically religion at this point, but for heaven's sake, please understand that unless you're auditing your clients, or you can run trustable third-party clients, end-to-end doesn't mean anything at all.

It's just marketing buzzwords.

[0]: https://www.youtube.com/watch?v=tJoO2uWrX1M&t=880s


If you say something ML thinks is wrong in Teams, you can be fired (at will).

If you say something to your colleague via WhatsApp, the only scenario in which it can be used against you is if you commit an actual crime with reasonable evidence, they subpoena the records, and FB is willing to go on record to the entire world as lying about WhatsApp E2EE, all in the name of putting you behind bars.

(Also, maybe we can imagine that products actually do what they claim to do, and that it is not normal to fear lies and a nefarious agenda behind every offering?)


there were two points made:

1) Signal and WhatsApp are not 100% trustworthy

Maybe the implication is that it will leak info to your employer, but I think this is more of a general statement; one that is likely an attempt to discuss why we still put our conversations into the hands of large companies with potentially unknown motives, and the questionable state of using "end-to-end" where one entity controls the network, access to the network, and both ends of the exchange.

2) Why not use something you can host yourself

to which a reasonable reply is: network effects; I already have Signal/WhatsApp/Telegram and I do not worry about them sending information to my employer.

Unless your employer is Facebook, I think that's a perfectly legitimate rebuttal, but one nobody is making.

In fact, people would rather argue that Signal/WhatsApp is the best privacy platform in the universe due to E2EE!


We're talking in the context of employers reading the plaintext of messages you send, right? Is Signal the same as Teams in this regard?


I don't really know who you're arguing against, because I'm with you on the points you address. Why I care that it's open is so that when they eventually pull a Microsoft- or Facebook-style anti-consumer move, we can fork off and continue on how we see fit. It's about control of our communication.

Using signal is the same as using WhatsApp. Eventually Facebook will buy it.


I'm agreeing with you and I'm frustrated that you got downvoted.


Ah I see. Well thank you for your concern. I'm pretty used to it by now. :)

It's weird how much people fight against their own interests though, eh.


Other people can audit it. That gives you a fair amount more assurance than non-E2E.

In addition to the theoretical benefits of E2E, I get actual noticeable behavior benefits. I've sent links to a family member on Facebook Messenger that it decided to censor and my messages didn't get sent. This happens to others as well[1]. There are reports of similar things happening with SMS[2]. That doesn't happen with WhatsApp or Signal.

[1] https://news.ycombinator.com/item?id=28341737

[2] https://news.ycombinator.com/item?id=29744347


And we have cross-platform, Signal-style E2EE now in the form of OMEMO for XMPP. It's just that no one wants to use it.


This is a pretty classic example of “what’s your threat model”.

WhatsApp/Signal may not be perfectly private, but it’s plenty private enough to hide trivial things like job offers from your employer.


Yeah, this is kind of how we got to the point where Microsoft tattles on you to your employer. Small concessions.


- I don't have enough disposable income and disposable time to self host and then keep up to date some flavour of messaging protocol server.

- Everyone's understanding of this issue is different. It's hard enough to convince technical people to use matrix/element vs signal, vs what ever they already have installed. Non-Technical people will either just ignore you or trust you entirely, I'm not sure which is worse.

- When something goes wrong, I have to fix it myself. Now I'm on call 24/7.

- Even if I know enough to run the infrastructure myself, to compile clients and servers myself, to register domains, etc., I can't understand the source code well enough to identify every possible untrustworthy thing. Even if I could, system security is not just about the code: what is a trusted architecture to run it on?

It just isn't feasible, in any way, by any stretch of the imagination, to self-host a messaging service myself that I want to use with the aim of talking to a wide range of people from all parts of my life, when I just want to chat with my work colleagues and arrange to go for drinks, or talk about their break-up, or some other company, or whatever.


Signal is open source. What's not trustworthy exactly?


1) You need to audit that code, which.. everyone will have to do on both sides of the communication channel.

2) https://signal.org/blog/reproducible-android/

> the Signal Android codebase includes some native shared libraries that we employ for voice calls (WebRTC, etc). At the time this native code was added, there was no Gradle NDK support yet, so the shared libraries aren’t compiled with the project build.

A good answer in my opinion, but it does mean that what you install from the play store is not reproducible and thus can never really be confirmed to be the same as public sources. There are also binary blobs needed for interacting with Google Play.

3) Signal is openly hostile to third-party client implementations: https://github.com/LibreSignal/LibreSignal/issues/37 Meaning they have a near monopoly on all Signal communications through their client, and since it's not reproducible, I hope everyone is building from source.
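
To make point 2 concrete, this is roughly what checking a "reproducible" build boils down to; a minimal sketch of my own (not Signal's tooling), assuming you have the store-downloaded APK and a locally built one: compare per-entry digests while skipping the signing metadata, which legitimately differs between builds.

    import hashlib
    import sys
    import zipfile

    def apk_digests(path):
        """Map every APK entry to its SHA-256 digest, skipping signing metadata."""
        digests = {}
        with zipfile.ZipFile(path) as apk:
            for name in apk.namelist():
                if name.startswith("META-INF/"):  # signatures differ between builds by design
                    continue
                digests[name] = hashlib.sha256(apk.read(name)).hexdigest()
        return digests

    if __name__ == "__main__":
        official, local = apk_digests(sys.argv[1]), apk_digests(sys.argv[2])
        diff = sorted(n for n in official.keys() | local.keys() if official.get(n) != local.get(n))
        print("builds match" if not diff else f"{len(diff)} differing entries, e.g. {diff[:5]}")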


1) Nonsense. If you don't trust other people's code, you're screwed. You put yourself in the position where you have to audit your OS code, your CPU code, and the code of every driver that runs in your system. None of which you did.

2) Isn't WebRTC open source too?

3) Their code, their decisions.


These are extremely unconvincing and rather shallow refutations.

I expect more of people on this forum honestly.

Taking the core of your argument: "Trust".

The point of E2EE is that we don't trust the network. We put all the trust in the client, something we control. Or at the very least we separate our concerns. (Please refer to this lovely interactive Tor diagram by the EFF for what I mean by splitting out concerns: https://www.eff.org/pages/tor-and-https )

Not being able to run your own client is a pretty big problem. At the very least, in that case, you should expect to be able to run on another network. Otherwise that's a lot of trust for one entity, and it's no different from just using TLS with HPKP/CA pinning.

To give a direct refutation to one of your points: "Isn't WebRTC open source too?"

It is, but they're using native libraries which are compiled. Like I said, it's a good argument, but the result is that they don't have reproducible builds.

> Their code, their decisions.

Extremely dismissive, almost to the point of insulting.

It is absolutely not true that they are above criticism because they built something. They've positioned their product as a security product. Thus it will be judged on those merits. There are many pro-signal zealots who will bend over backwards to defend it in all circumstances. It's intellectually dishonest to do so in the face of valid criticisms.

I will shut up when federation is supported, or you can run your own network, or you can bring third party clients.

You need this to be able to trust your client, because the point is to decouple some trust from a single entity.

That's what E2EE is!


> These are extremely unconvincing and rather shallow refutations.

That's not a refutation of my counterarguments at all. It just shows you're frustrated and talked yourself into a corner. We both know you don't audit your OS code, your drivers code, your hardware. All of them can be leaking your secret messages.

> Extremely dismissive, almost to the point of insulting.

Another non-refutation, another frustration, because you have no counterargument.

> It is absolutely not true that they are above criticism

Straw man logical fallacy. I never claimed they were above criticism. Criticize all you want. But expect your arguments to be disassembled.

> You need this to be able to trust your client, because the point is to decouple some trust from a single entity.

Without auditing your OS, your drivers and your hardware it's pointless. Any of them can leak your messages. Yet you're fine with it.


Oh dear, you definitely chose the wrong person to accuse of not auditing their code.

I'm typing this from my OpenBSD laptop, which, I assure you, I have audited extensively; but that's hardly relevant to this topic. I just think it's funny that you would assume this of me. I'm also big on system-transparency[0] and micro systems like Oasis Linux[1], which attempt to limit the places things can hide.

Granted, nothing is perfectly secure.

But, again, besides the point entirely.

Your central thesis is that nothing is safe.

Why, then, should I not just use telegram? Or VK, or WeChat?

We have consensus in the HN community that those chat systems (especially telegram) are inherently insecure. Why?

Don't worry, I'll answer for you: Because they do not support E2EE except when specifically asked to, and because they used their own encryption.

This is enough for the security community to decide that Telegram is a bad product(tm).

I'm not arguing in defense of telegram, I'm just letting you know what happens to "secure messengers" under a microscope.

The same criticism has not been levied to Signal, despite them offering no more protection in real terms than HTTPS would. There are theoretical safety-nets but nothing you can concretely audit.

Your argument that "it's their code they can do what they like" holds as much water as an inverted plate, given the context that they've chosen to live under.

So, instead of attempting to talk me down with an argument from fallacy[2], perhaps you can talk about this point.

[0]: https://www.system-transparency.org/

[1]: https://github.com/oasislinux/oasis

[2]: https://en.wikipedia.org/wiki/Argument_from_fallacy


> which, I assure you, I have audited extensively;

For which I call BS.

Did you audit your OS code, your drivers' code, and your laptop's hardware? We both know you didn't. Why tell such an obvious lie?

If it's magically not a lie, how exactly did you do it and how long did it take?


A belief you hold strongly because you have never enjoyed the beauty of operating system code you can actually read, I guess: https://github.com/openbsd/src

OpenBSD is a lot of code, sure, but far from insurmountable, the drivers are few and quite generalised.

I can’t really say how long it took me to read it because it was over a few years of getting curious and diving in, but it wasn’t much.

I’d say if you were to study the code for 8 hours a day it would probably take about 3-5 weeks.

That said: I’m not claiming that I did a full security audit and found all the bugs: I am stating outright that I have read every line of code in the source tree, and the majority of the code that I run from ports, it’s simple enough that you can do that.

And yes; I still get horrified at a lot of the ports; not everything is perfect.

Exceptions to my curious browsing include Chromium and Firefox, due to sheer complexity (and I have had reason to dive into those: the tweaks file is fun); and I have read the majority of the GCC code too (which is somehow much less complex than the browsers and quite easy to wrap your head around once you’ve read the dragon book).

But the OS, like you claimed, is not a binary blob, at least to me. I compile it myself, with a compiler I understand, and with code I have read and understand; this is not uncommon among OpenBSD users; the OS is literally designed in a way that is easy to read, because being easy to read means security bugs have fewer places to hide. (As per the OpenBSD philosophy.)

All of the above notwithstanding, I’m writing this message from an iPhone so not everything in my life is so rigorously understood; I’m not a purist, just a curious tinkerer, like most Linux enthusiasts used to be before the ecosystem became a bit too complex to understand for any one person.

You could argue my phone can leak my chats, to which I say: your matter of “trust” comes back, and I don’t think I would trust my phone with my life not to leak my secrets (Signal is asking people to trust it with their lives; journalists and dissidents). But I would trust my laptop.


You went from:

> What's not trust worthy exactly?

to:

> Their code, their decisions.

It's okay to be a fanboy! Evangelism is needed for any great product/company/ideology. But on HN you'll typically get called out for disingenuous or bad-faith lines of rhetoric.

The person above gave you a perfectly reasonable answer to your original question of "What about Signal is not trustworthy?". It'd be kind to acknowledge that they at least have a single iota of merit.


> You went from:

>

> > What's not trust worthy exactly?

>

> to:

>

> > Their code, their decisions.

Two separate comments addressing two different points. One doesn't follow from the other. Stop arguing in such a dishonest manner.


Their code, their decisions is why it's bad. If they decide to start tattling to Microsoft too, what are you going to do? If it's open, we can fork it and move on with our lives. Free and open gives you and me the power to control our own communication.


> If they decide to start tattling to Microsoft too, what are you going to do?

That would be obvious in their source code, wouldn't it?

I would stop using them then.


It's not open enough to verify that.


Your employer is not obliged to maintain a communication system so that you "get to know people". If you consider how much these tools cost to maintain, it's completely understandable that companies want to have 100% content control.


So, now we can't use Teams to have the "water-cooler" moments that supervisors claim we need, but really we are having them on Signal or iOS and they just can't measure that. Organizations really, really, really hate transparency.


They don't want you shooting the shit over company communication mediums because that has limited upside and much less limited potential downside.

Remember the famous "will the atom bomb test ignite the atmosphere" gentleman's bet those scientists had? Nobody actually thought it would but they discussed it semi-seriously. Today discussing some fanciful bad outcome like that (be it the mundane failure to deliver a product or something more interesting) is a liability when it's sitting in your company email servers. Even if that bad thing isn't what winds up happening or the people speculating aren't in a position to have accurate info the other side's lawyer or the regulator will try and construe it as proof that the company should have known ahead of time.

Or, more likely, say there's some sexual harassment or adultery kerfuffle between employees. It's way better for the company if none of that happened on company provided communications tools.

From the company's perspective it's avoidable risk to have work communication tools be used for informal BSing between employees. But they can't realistically prevent that so they introduce Skynet in order to make people watch their mouths and move those sensitive conversations elsewhere.


Having employees is a big potential liability. Having a corporation is a big potential liability. Drinking water is a big potential liability. I guess just don't do anything at all and then recalculate your risk metric.


The conversations will happen elsewhere and so will the relationships. Management is locking themselves out from the team leaders, and suddenly those off-site "adult kerfuffles" are exactly the conversations you needed to hear to prevent an exodus.


> The conversations will happen elsewhere

That's entirely intentional. You really don't want internal evidence of something that's going to be construed 10 years down the line as cancel-worthy, or worse, something that politicians/regulators are going to take out of context to attack you with.


Nah. People like to talk and not get fired for saying the wrong keyword.


This made me laugh. Back in the late 1970s, there was suspicion that the Soviet Union had completely tapped the AT&T phone network on the East Coast. I cannot remember the author of the article, but they stated that every American having any telephone call with anyone on the East Coast should toss in a number of different key words to overwhelm the ability of the USSR to gather any useful intelligence, because they would be drowned in data. Then they gave a list of keywords. I wish I had saved it. So my brother and I, when we called each other, would toss the occasional 'enriched uranium', 'satellite imagery', 'battalion', 'missile test', 'weapons research', and other nonsense into our conversation.

I don't know what I found funnier, the idea that some poor fool at a Soviet embassy had to listen to our conversation because a key word hit caused the recording to be saved, or the idea that the author even proposed that the idea would work.


This is nothing new: corporations have scanned instant messages, emails and even recorded phone calls for decades, and will fire you based on that evidence for violations of corporate policy. And will sue you or call the cops if they detect potential crimes.

I’m kind of surprised so many people are shocked by this. I know of one company where dozens of people were fired because their email was scanned for external job interviews and the CIO had a report, which he used to prematurely cut staff when he needed to save budget.

The only difference now is that the tech is smarter and cheaper so that you don’t need to pay as many people to spy on their coworkers.

Your defence against this is to find a job where you’re too valuable for them to do anything. As with any jurisdiction where there is at will employment.


> The only difference now is that the tech is smarter and cheaper so that you don’t need to pay as many people to spy on their coworkers.

Your comment implies this isn't potentially an enormous difference. The difference is between having to pay people to spy on their coworkers, and having computers that do it passively, invisibly, continuously, in real time?


… And completely predictable and inevitable for 50 years, and written about endlessly since the mid 20th century. Short of a Butlerian Jihad level event, the only way to fight back is better countermeasures and better value.

The law won’t help (they want more surveillance). Democracy won’t help (most people want more surveillance on their neighbors). Exploit the system.


> I’m kind of surprised so many people are shocked by this. I know of one company where dozens of people were fired because their email was scanned for external job interviews and the CIO had a report, which he used to prematurely cut staff when he needed to save budget.

On a related note, if you were a Microsoft employee, how comfortable would you be talking with recruiters on LinkedIn?


Was Microsoft employee. Did not worry one bit about talking to a recruiter on LinkedIn. Your immediate management is unable to see or even get that data. You would have to trip an alarm elsewhere, trigger a major investigation, get legal sign-off and then maybe Microsoft would ask LinkedIn to pull that data. Even though MS owns LinkedIn, they are treated as a separate business entity. LinkedIn has its own security team etc.

Even MS's own recruiters will use LinkedIn to contact current MS employees for internal positions.

Tech is a double-sided coin. Things like this have the power to be abused, sometimes easily, but that doesn't mean they always will be.


I’d have no problems with it. If they were going to fire me over that, it’s their loss. And it’s not generally in Microsoft’s culture these days to be that petty.


My worry in that position wouldn't be getting fired, it would be getting the conversation (or, if it's got that far, the offer) spiked because someone in HR thinks they can either keep me in my current role by scaring off anyone I might talk to, or share my salary information so they can low-ball me, thereby stopping me from getting a pay bump.


I don't think you need to work at Microsoft to be nervous about that. I've had recruiters offer to sell me information on who here is communicating with them on LinkedIn, without even being in a management or hiring position myself.


Why do people persist in using work emails for personal things like job interviews?


I remember during the Ashley Madison leaks there were so many work emails. I wouldn’t even use work email for buying a movie ticket much less organizing dates and affairs.

Some people are weird like that. There’s also old people who only have work emails. Lots of different people in the world.


Arguably, if you're having an affair, it's something to keep out of your private email, but your work email may not be as visible to a spouse ...


I guess that’s the reasoning, but sending it to work seems pretty dumb compared to creating a separate, cheater account.

Although sending it to work is probably the most secure from a spouse.


Wait until you hear someone yelling at the helpdesk because their Ashley Madison email was caught by a corporate spam filter..


I think it's just another unfortunate manifestation of the phenomenon that Big Tech loves to exploit --- that people seem to care very little about privacy in general, or perhaps have been guided gently in the direction of doing so by those who stand to profit the most strongly from it.

In your specific example, there could be a slightly more positive reason --- proof that you do actually have a job at where you claim to be working.


> that people seem to care very little about privacy in general, or perhaps have been guided gently in the direction of doing so by those who stand to profit the most strongly from it.

It's not just that they don't care: it's like they see some privacy-invasive thing and automatically use it because it's privacy invasive

Like whenever people not only use chrome, but are logged into their google account 24/7 while using chrome (I'm sorry if the reader does this...). I get it if there's some niche feature that requires it, but most of the people I know who do this aren't doing it for that reason (or any reason at all I guess)


The chrome thing is annoying as hell, as Google automatically logs you into Chrome if you log into basically any Google property.


The difference here is that "find a new job" might be accomplished by LinkedIn...which is also owned by Microsoft.

So Microsoft's cloud ecosystem generally owns your work email, and the site you use to find a job.

Honestly: I don't care what they say (because it'll be "we datamined LinkedIn, but don't worry we did it with only the public APIs and just bypassed rate limiting so technically...to add data to our "employee leaving" filter...) - Microsoft and LinkedIn, specifically, need to be forcibly broken up with this sort of control over the full employee lifecycle.


"The leavers classifier detects messages that explicitly express intent to leave the organization, which is an early signal that may put the organization at risk of malicious or inadvertent data exfiltration upon departure". In other words "how to promote and encourage paranoid behaviors from employers" :(


Have you seen high schools in the US?

Once I discovered that every school-issued machine had a VNC server running on it I assumed that the contents of my screen were being recorded at every moment. Turns out I was half right, as I caught up with the IT guy afterwards and the principal (a paranoid sociopath who shouldn't be anywhere near kids) wanted the ability to catch kids when she thought they were looking at non-school related things.

It's fundamental safety in a society with these sorts of companies to assume: company infra = logged until you die. Once your company has come under a subpoena for information or under some kind of long term discovery, you write emails under the assumption they're going to be in court for everyone and your mother to see.


I am so glad to be living in a country where shenanigans like this are deeply illegal and where violations would see employers/principals facing actual jail time, so nobody does it. Land of the free indeed.


It's extremely revealing that this particular classifier is framed as "prevent data loss" not "intercept skills loss" or "figure out why your employees want to leave and then fix that".


It seems like maybe not the intent, but the practical result is to use the private sector to implement CCP-like social credit scores, isn't it? By doing everything in the private sector they get around all those pesky constitutional protections.


so the rights and freedoms are only protecting citizens from government oppression... if a private company does it, then it's fine cuz corporations are also free people.

they're free people who somehow are getting to oppress and censor individual humans (otherwise the corporation is who is being oppressed), but let's pretend that we can punish them by "taking our dollars elsewhere" such that it's our own fault

IMO, tracing this towards the root, I find along the way the grand system of royalties and other kinds of rent schemes. Nobody cares cuz we prefer the promise (for the majority it is only a promise) that we can come up with something great, make it BIG, and then get to live off rent or other kinds of royalty payments.


Most large companies had "social credit scores" for decades. They're called performance reviews. Nothing new here. You just had unreasonable and naïve expectations. You now know MS Teams is monitored. You are free to seek employment in companies that don't use MS Teams if you dislike this so much.


Submitted title was "Office 365 implementing AI to detect employees colluding, leaving and more". That broke the site guidelines: "Please use the original title, unless it is misleading or linkbait; don't editorialize." - https://news.ycombinator.com/newsguidelines.html

The proper place to include that sort of interpretation is by adding it in a comment in the thread. Then your interpretation is on a level playing field with everyone else's (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...). Also, a comment gives you room to actually substantiate your interpretation.

On the other hand, a thread like this probably wouldn't have gotten attention without the sensational title in the first place, so this kind of submission is a borderline case and at worst a venial sin. (We still change the title once it does make the frontpage though.)


I know it's policy to use original titles, but the editorialization in this case hardly seems sensational. Just look at the linked roadmap tickets:

* https://www.microsoft.com/en-my/microsoft-365/roadmap?filter...

* https://www.microsoft.com/en-my/microsoft-365/roadmap?filter...

* https://www.microsoft.com/en-my/microsoft-365/roadmap?filter...

* https://www.microsoft.com/en-my/microsoft-365/roadmap?filter...

* https://www.microsoft.com/en-my/microsoft-365/roadmap?filter...

* https://www.microsoft.com/en-my/microsoft-365/roadmap?filter...

* https://www.microsoft.com/en-my/microsoft-365/roadmap?filter...

The title "Microsoft Purview: Additional classifiers for Communication Compliance (preview)" sounds like nothing at all. It doesn't seem like exaggerating to say that the reality is literally Big Brother in a corporate context. Seems like your changing the title is just going to have the effect of reducing attention given to something that really needs to be exposed in clear terms.


I agree, but there's a big difference between such a title on /newest and the same title near the top of the front page. In the former case, it's a reason to skip over it; in the latter, it's a reason to dig further, and digging further is what HN is all about.

I'm not saying the current title is the perfect outcome—I'm just not sure what the perfect outcome is. I do think that in this case, the dystopian title adds to the quality of the post (but only once it's on the front page).

It's impossible to cover the general case with a simple rule. Even a paragraph of rules wouldn't be enough—people would discover corner case after corner case and you'd eventually need a book. I think HN's guideline covers the domain as well as any single sentence could; and then we can cover all the exceptions ad hoc, and talk about them in the comments.


It sounds 200% like newspeak and it’s pretty obvious what will be inside the egg.


> I know it's policy to use original titles, but the editorialization in this case hardly seems sensational.

It's interesting that the Hacker News guideline makes no statement about whether a custom headline is sensational or reasonable. It is: "Please use the original title, unless it is misleading or linkbait; don't editorialize." They probably have a slightly different reason for this rule than many people first imagine. And that is reflected in the actual wording of the rule being slightly different from how many people would first phrase it themselves.


Yeah, the point of the HN rule isn’t to prevent drawing attention to the post - it’s to put your comment on it at the same level as everyone else’s.


There's quite a bit of overlap between 'linkbait' and 'sensational', so I'd say that concept is in there.


If you want to make an overarching statement via the post title, the route is to write up your thoughts on the matter, title your post however you want, and then post that.


OK, I will do so in future.

I tried to summarize the article in the title. Will follow the guidelines from now on.


Thanks for the clarification about this sort of catch-22. I had been wondering about this exact scenario, which seems to happen fairly regularly.


[flagged]


It's not a very well-implemented ad block modal. You can just delete the overlay element and continue browsing.


I think if you have an E5 license there is already thoughtcrime functionality built-in. I remember someone demoing this to me in a Teams user group, and no one seemed to think it was creepy at all. In addition to flagging keywords it also used AI to detect undesirable thoughts and emotions, under the guise of anti-harassment and compliance. Unfortunately I can't remember the name of the feature but I think it might be this:

https://docs.microsoft.com/en-us/microsoft-365/compliance/co...

So I think if Microsoft existed in the world of 1984, they would easily be the preferred tech vendor for IngSoc.

Side note, do you think this would also detect the money laundering and bribery going on within Microsoft itself?

https://www.theverge.com/2022/3/25/22995144/microsoft-foreig...

Side-side note, I think the reason why that is allowed to still keep going on given that the SEC knows about it and that there's ample evidence has to do with national security reasons.

It's extremely troubling that given all this corporate authoritarian AI tech they built that Microsoft is still trying to be the voice of reason about the dangers of AI.


> It's extremely troubling that given all this corporate authoritarian AI tech they built that Microsoft is still trying to be the voice of reason about the dangers of AI.

Just speculating, but this phenomenon could be explained either by 1.) diverse internal opinion: the parts of Microsoft responsible for warning against AI are not the same parts pushing authoritarian AI software, or 2.) moat-building/ladder-pulling: Microsoft is warning people of the danger of _other people's_ AI, but of course you can trust _their_ AI, because they're the ones warning you, after all!


Emails aren’t thoughtcrimes. This is nonsense.

Everything in corporate email has always been subject to being read by others; there is no expectation of privacy.

As we’ve seen from countless court cases, they range from boring nothingburgers, to evidence of actual crimes.


I think it's that you can be considered effectively guilty (there are grounds to fire you or take disciplinary action against you) for tripping an ML routine, without further evidence or proof, and that it seems more important to have "clean" corporate communication than to actually act in good faith (as long as the bad stuff happens on off channels, no one cares).

Hopefully it doesn't make it outside of the corporate world though.


It already has left the corporate world in China.

Technology, like advanced weapons, doesn’t solve political problems for long, as the other side eventually gets their hands on it.


They want you to know and realize that you’re being monitored so that you take the “bad” communications to where subpoenas can’t get them.

Be good cogs; don’t leave logs.


There is a difference between an investigation under subpoena, for example, and an automated process that alerts whoever is chosen as overseer to all possible missteps and misdeeds.

One is a very targeted and conscious effort; the other is automated and pervasive, everywhere, all the time.


Building and selling AI software to do this is also a targeted and conscious effort.

My view is this kind of thing is inevitable and pervasive because there’s a lot of internal risks that companies and governments are worried about. The only solution is to be so valuable that it doesn’t matter.


"Targeted" normally implies that you have a specific incident and a specific person suspected of a misdeed. Building an AI for blanket surveillance is the opposite of that: you aren't looking for a specific incident and have no specific person to suspect. You are basically accusing everyone of being a criminal without any evidence of wrongdoing. Most people don't want to spend their whole lives being treated as criminals; anyone who does is free to live in North Korea or Russia.


OK, but on the one hand, one choice is blanket, pre-approved for all time, for everyone, with no time expenditure.

The other is instigated and deliberate, at the official request of legal, and can take a lot of time.

It's very different. It instills a climate of distrust. Everyone is "guilty". In the other scenario, everyone is innocent until a specific and circumscribed "matter" is started.


Right, and this functionality is there to punish you for nothingburgers as much as actual crimes. "Leaver" detection is something that is entirely sane for corporations to do, but it will be abused by the usual suspects in HR to instill fear and exact retribution.


Of course it is. But this is how it has always been, even without this technology. The cost is falling because the demand for it is pervasive.


> Everything in corporate email has always been subject to read by others, there is no expectation of privacy.

Depends where you work? I expect my work emails to be private.


There is no way they will be able to make an AI at this point that will

A) Be accurate

B) Work across multiple contexts

C) Run efficiently on billions of messages

This will just result in many false positives, and unnecessary eavesdropping on employees' personal conversations.

Once it's revealed that an organization is using this, people will quickly move all conversations to another platform, even if policy forbids that, potentially resulting in an even greater security risk.

And as per usual, if Microsoft gets someone fired (e.g. comes in looking for money laundering, finds out the staff member is making fun of their boss), there will be no repercussions.


accuracy isn't a strict requirement though

if you accidentally fire 10% of good people you still have 90% of them left, and if that lets you fire 80% of the staff that are committing thought-crime it's probably a win.
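
For what it's worth, plugging assumed numbers into that tradeoff shows how lopsided the flagged pool gets. The base rate below is my own illustrative guess, not a figure from the article:

    # Base-rate sketch: what fraction of flagged employees are actually "bad"?
    base_rate = 0.01   # assume 1% of staff are genuinely doing something fireable
    tpr = 0.80         # "fire 80% of the staff that are committing thought-crime"
    fpr = 0.10         # "accidentally fire 10% of good people"

    flagged_bad = base_rate * tpr
    flagged_good = (1 - base_rate) * fpr
    precision = flagged_bad / (flagged_bad + flagged_good)

    print(f"Share of flagged people who were actually 'bad': {precision:.1%}")
    # ~7.5%: roughly twelve innocent people flagged for every genuine case.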


Even if it works out as you say, the chilling effect on who's left will be very real.


That already exists in general; this is why CEOs get surrounded by yes-men.

We already filter out people who aren’t smart enough to keep their mouth shut when necessary.


That behavior has negative impacts on corporations though, from Amazon to Disney.

Scaling the problem is likely to increase the damage.


Part of what makes stuff like this surprising is expectations of privacy. For example, if you start a video chat on Hangouts or Zoom, even (or maybe especially) on a work account, you don’t expect that meeting to be recorded or analyzed surreptitiously. I think in many places it would be illegal.

Because of this, one might feel like the same standard applies to other one-on-one and small group communication avenues, but it’s actually completely the opposite.


Reinforces that during interviews, candidates should determine what the company uses for internal communications and choose accordingly.

Anyone using Teams is already a red flag.


What's a good alternative? I feel like it's a matter of time before similar feature is added to Slack and Co.


A few open source options (some with hosted plans)

- Zulip - https://zulip.com

- Mattermost - https://mattermost.com

- Rocket chat - https://rocket.chat

- Matrix - https://matrix.org


Absolutely agree with this.


I honestly first thought this article was satire. It is so unreal to find myself in a world where this is acceptable. What's next? Installing cameras in restrooms to catch offline conversations?


I have zero confidence that this system is smart enough to differentiate between all these things and the legitimate variants thereof (e.g. collusion and cross-team collaboration are basically indistinguishable) that companies actually want people doing or discussing, and which likely outnumber the bad by orders of magnitude.


Yeah and the sad thing is companies that don't know better will flip it on because "wow look at smart Microsoft's latest feature, we better use this!" and then inadvertently fire Sally in HR because her asking people to sign a card for the VP's birthday looked suspiciously like a violation of corporate gift policies.


People in HR don't get fired for that kind of stuff. They have a thin <whatever their color is> line to protect them.


The Thin Company Line


next they'll offer MS AI HR that'll do that for you


Technology like this will sometimes work and many times not, and the false positives and false negatives will cause a lot of harm along the way.


NLP has improved a lot in the last 5 years. I believe that this is now technically possible with the right training data.
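
As a rough gauge of what "technically possible" looks like with commodity tooling, an off-the-shelf zero-shot classifier already produces plausible scores for this kind of intent detection. A toy sketch (not Microsoft's implementation; the message and labels are my own):

    # Toy intent classification with an off-the-shelf NLP pipeline.
    # pip install transformers torch
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    message = "Honestly, I've had enough here. I'm going to start interviewing next month."
    labels = ["intent to leave the organization", "workplace collusion", "routine work chat"]

    result = classifier(message, candidate_labels=labels)
    for label, score in zip(result["labels"], result["scores"]):
        print(f"{label}: {score:.2f}")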


Remember the good old days of Usenet, with signatures that deliberately contained keywords to try and DDoS the NSA's "line eater"?


You can still find them in the email signature of any communication from one Richard M. Stallman!


M-x spook


Yep.

Echelon is one keyword I remember.


This seems to be Office 365 implementing monitoring of official communications of employees and contractors on the office account? I don't think it extends to a personal Office 365 account; at least it didn't seem to.

Why is this exactly newsworthy? Any communication through official channels is the property of the employer anyway. To collude, leave, and do other stuff, use personal channels, maybe.


> Any communication through official channels is the property of the employer anyway.

Why is there always this attitude of "it's a private business, they can do what they want". Why does the fact that they can do something distract from criticism of them doing it? The fact that this tech exists is horrifyingly dystopian on its own merits. But it also has widespread consequences in a country with so many employment monopolies and opportunities for outright wage slavery. Heavy-handed workplace surveillance and heuristics-based crap are becoming increasingly difficult to simply opt-out of.


It's a classic assertion of power, pointedly ignoring any appeal to anything but more power.


> Why is this exactly newsworthy? Any communication through official channels is the property of the employer anyway.

Pretty clear one of the major things they're going for here is detecting "jobsite troublemakers", ie employees who are upset with job conditions/agitating for improvements/discussing salaries/etc, which is given specific legal protection. It is explicitly legal and protected for employees to discuss labor conditions, organizing, or salaries regardless of whether you do it "on company property" or "on company chat". Just because the company owns it doesn't mean you have no legal rights - just like a company can dismiss you for no reason but they can't dismiss you for any reason.

They are wrapping it up with "think of the children" justifications like "employees who are discussing salary might be considering leaving and they might take nefarious action if they do so" but that's the core of the situation here - these are tools to detect and fight against legally-protected activities by employees.

> Workplace collusion: The workplace collusion classifier detects messages referencing secretive actions such as concealing information or covering instances of a private conversation, interaction, or information.

> "The leavers classifier detects messages that explicitly express intent to leave the organization, which is an early signal that may put the organization at risk of malicious or inadvertent data exfiltration upon departure"

Hypothetically, do you think it would be a good idea for Microsoft to build a classifier and provide managers with a list of potentially "religiously devout" employees, e.g. based on correlated work/away periods, language patterns, etc.? Sure, it's a legally protected classification, but there's an elevated risk of extremist activity, which surely presents a business risk, right? So why not?


> Why is this exactly newsworthy? Any communication through official channels is the property of the employer anyway.

Is this sentence meant as descriptive or normative? Because there are definitely jurisdictions where it is not that easy (e.g. the EU).

If it is meant as normative, then I wonder if you also think they "own" all conversations happening on corporate grounds. Should they be allowed to record anywhere on corporate property, and use what they record in any way?


If the entirety of your being is the property of your employer during "clock time", that doesn't seem like employment to me.


Except it's an AI making decisions and flagging people who may then be the recipient of adverse actions.


News flash: there used to be entire departments of humans that did this (and in some companies there still are).

Your corporate comms are monitored and there is no privacy.


This is the part that I don't get.

15 years ago I worked at a place where there was an entire room of people who were hired to do literally nothing but read your internal mail all day. One of the deployment rules was to make sure they had unimpeded access to everything (except for executives, of course). Please don't misinterpret me as suggesting I like the practice.

I can see why people are upset that this technology is being offered, I am too. But I can't see why people are suggesting it's a new low for corporations that have always been doing this.


Not to mention upwards of 75 years of science fiction literature that discussed the inevitability of this.

The only solution that I can see: Exploit the system….


There are engineers who thought it would be a good idea to develop and train these models.


Are there? I think it's much more likely there are engineers that were told to work on this, and thought that working on this would be fun, let them have an ML project, be a learning opportunity, give them something good for their performance packet, give them good experience, be good for visibility in the org, and many things like that.

I don't know why jumping to the most far-reaching evil option is the default in threads like this.


Evil is banal.



I am almost surprised it took this long to get to this point, but I suppose the recent resignation wave made it into a viable product offering. My last MBA class was an HR analytics class that, among other things, dealt with email sentiment analysis and stuff like that. Part of me was thinking the average HR person won't touch this stuff, but if a company just happened to offer something that would do it for them..


I've always had a preference against working with microsoft products but this is getting to the point where I'd find a new gig instead of being subjected to this stuff.


I think at least some of my staff will likely resign because mandatory deep inspection / network monitoring is being forced onto everyone's computers by the IT department. It's probably the only way to stop it from happening at the moment. Unfortunately, the buzzword "zero trust" has been bent towards meaning "spy on everything your employees do".


Ah, another happy Zscaler customer!


you got it


Yeah if they've got "zero trust" for their own teammates, they are in for learning some hard truths about team building.


What is the effect on creative expression and sociability, between co-workers, when they know they're being analyzed by a computer to figure out if they should be fired?


If you don’t feel like turning off your adblocking: https://archive.ph/3XVFT


This is absolutely going to be used against unionizers, which is what's really meant by "colluding". In the US this is going to get a lot of people fired. In other parts of the world, it's going to get them killed. This kind of software is Zyklon B for the 21st century.


Sure, surveillance capitalism is pretty horrifying, but

> This kind of software is Zyklon B for the 21st century

is a bit of an over-the-top comparison


Fun fact: I used to work in this team.

We have come a long way now that we have these advanced classifiers. You would be surprised how low-tech the initial product was; by low-tech I mean devoid of any ML/AI. We went GA at the end of 2019.

Saw a lot of interesting use cases too, e.g. Japanese enterprises wanting to detect cases like suicide or intent to commit suicide; that is why we have multiple types of classifiers.

I worked on the infra side (not ML). That too was “low-tech”, or the more apt term would be “not the latest tech”. Core parts of the app were part of a monolith (think Exchange). Then we were using a really old .NET Framework version for our MVC app. A lot of the storage technologies we used were very MS-specific as well. AFAIK, all of this is still valid today.


How, for the love of god, do you defend Microsoft after this?


B-but they write VS Code! They were supposed to have changed!


That's two strikes against them now.

(I kid, VS Code is great for many, but it's not my cup of tea).


Next step will be to detect potential attempt at unionizing.


They have "workplace collusion" as a category, and even more dystopian shit, like:

EDIT: apparently these 2 are just jokes, sorry for not checking my sources!

`Negative emotions: Expressions of sadness, unhappiness, discontent, anger, rage, anguish, or existential ennui, as these may negatively affect team cohesion.

Joy: Language suggesting hopefulness, optimism, anticipation of a brighter future, faith in humankind and/or in a loving and benevolent creator, as these may imply that the user is thinking about topics other than the best interests of the organization.`

From https://old.reddit.com/r/sysadmin/comments/v3b2mn/microsoft_...


Those ones are a joke. Look at the purported "Roadmap ID" that poster used.


Thank you for pointing it out. FWIW, the "workplace collusion" one is actually real - original source: https://www.microsoft.com/en-us/microsoft-365/roadmap?rtc=1&...


It reads pretty differently though? The specific obligations part seems like it applies to something federal.


After social credit score, you'll have your corporate score too.


Seems to only apply to messages, for now. My understanding is that unless a call on Teams is explicitly recorded, there's no capability for the organization to monitor the content within.

Is this still accurate? Are there any features in the pipeline planning to change this?

Microsoft offering "communications compliance" within the same product is certainly chilling enough as it is. The reality where people lose their job as a result of previously-protected casual [voice] chat doesn't seem so crazy now. All it takes is missing a quietly-introduced feature update by a week before the organization flips the switch and doesn't tell anyone.


I’m sure automatic transcripts of all calls is just a few developments away - it’d be desirable for call centers and I suspect it’ll be available to all eventually.


Given how well voice recognition works nowadays, it's really a matter of implementation cost rather than if.


Not even. Transcription functionality already works for explicitly-recorded meetings in Teams. I'm sure their cloud resource costs to run it are negligible, too. A change in privacy stance is all that's required.

Pissing people off does incur a cost, though. Perhaps you're right.


That link was not happy about my Pi-hole swallowing their ad links, so I could not read it.

I will say, however, that I don't use my personal phone to host any employer apps. It is my phone, not theirs. I pay the service fee.

So conversations I have on my phone are mine. My coworkers all operate the same way.


Sounds like it's time to set up a scheduled batch file that sends a bunch of messages around that would trigger watchdogs like this, as well as the NSA PRISM keywords just for funsies.
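
A minimal sketch of that idea, in the spirit of Emacs' M-x spook; the keyword list is made up and the "send" step is deliberately left as a print:

    # Chaff generator toy: sprinkle watchdog-bait keywords into otherwise boring messages.
    import random

    KEYWORDS = [  # hypothetical list; substitute whatever amuses you
        "enriched uranium", "satellite imagery", "exfiltration", "collusion",
        "missile test", "leaver", "burner phone", "regulatory disclosure",
    ]

    def chaff_message(n=3):
        picks = random.sample(KEYWORDS, n)
        return "FYI, circling back on " + ", ".join(picks) + " before standup."

    if __name__ == "__main__":
        for _ in range(5):
            print(chaff_message())  # hook this up to a scheduler/messenger at your own risk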


It's interesting looking at the way they try to sell this monitoring to the employees as being a positive thing[1]. At least the wider population can experience what it's like to live under DTEX[2].

[1]: https://www.microsoft.com/en-us/microsoft-viva/insights

[2]: https://www.dtexsystems.com


Oh great - AI thought police to make the corporate existence even bleaker.

Could someone head over to MS HQ and slap some sense into whoever thought blessing the world with this is a win?


I hope the people implementing all these policies and technologies are seriously weighing the consequences of their actions. I suspect that they are not.


MS Office Home > Admin > Exchange Admin Center > Mail Flow > Rules > Click the plus sign for New Rule > Create New Rule > Apply this rule if > Subject or Body includes > Specify a word or phrase

How good the AI is depends on the flood of false positives the current system generates. If MS is true to form, getting anything useful comes at great expense.

The #1 thing they search for is notably missing from the list.
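
For context, the existing rule type is logically just substring matching over subject and body, which is where the flood of false positives comes from. A rough sketch of what such a rule amounts to (illustrative only, not the actual Exchange implementation; the watchlist is made up):

    # What a "Subject or Body includes <word or phrase>" rule amounts to, logically.
    WATCHLIST = ["offer", "leaving", "quit"]  # hypothetical phrase list

    def rule_matches(subject: str, body: str) -> list[str]:
        text = f"{subject}\n{body}".lower()
        return [phrase for phrase in WATCHLIST if phrase in text]

    # Entirely benign mail still trips two phrases:
    print(rule_matches("Friday plans",
                       "I'm leaving the office early; the vendor sent over a new pricing offer."))
    # -> ['offer', 'leaving']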


At one of my previous employers, they sold part of the company to an outsourcing enterprise, including employees, and founded a new company to move the remaining employees into.

As part of the company being sold, when I wanted to interview with the new company, my future manager sent me his phone number and advised me not to use Teams for any sensitive conversation.


Why is everything duplicated in this announcement? The list of classifier descriptions effectively appears twice, the first time with the text of the "What you need to do to prepare" (which, btw, says exactly nothing on how to prepare) appended to each item.

What even is this site? It looks like grade A content rehashing from various MS sites...


There are good and bad sides to these features. The only rule of thumb: never conduct non-work-related matters in a workplace or facility provided by the company, no matter how good your performance in the company is or how good your relationship with your superior is.


> Microsoft Purview public preview provides policies protecting products, probably punting people’s privacy.


Point perfectly phrased.


Employees will just start colluding using external tools once the first few suckers get fired.


You’ll then just get fired for using the external tools.

Unless you’re too valuable for them to care.


This is pretty dystopian.


Someone is going to have to square this with executive branch records law. You can’t have a democracy and IngSoc. That’s not how either of them works.


Hmm, I think I have a new reason not to use any Microsoft products in the office. I can even claim an ethics issue with interacting with them now. Unfortunately, the existence of this feature breaks any trust that my management hasn't abused it; the only way to avoid this is by not engaging with Microsoft offerings such as Word or Excel in the office.


We're slowly closing the gap with China, aren't we? What will happen when this technology is applied more widely?


Safe to say anything happening on a work computer or in work software is looked at, or can be looked at, by someone in the company.


What ever happened to hiring a good team of trustworthy employees and setting out to achieve a mission together?


Wish I could read the page, but apparently my ad blocker is too offensive. Well, I'd be fine with supporting the publisher through online ads, but I am really not okay with the tracking those advertisers do. You ditch the tracking and any annoying ads, and I'll ditch the ad blocker. Until then, we'll have to agree to disagree; the Faustian bargain of internet advertising is untenable.


Only when conversations are in English, right? I can still use the local language? :D


The transcripts are available in more and more languages, so perhaps it’s time to learn Navajo.


Worked for WW2…


Has this been reported to EFF?


EFF has done a bit of reporting on this kind of stuff - https://www.eff.org/deeplinks/2020/06/inside-invasive-secret...


I get the 1984 vibe this has.

How should companies defend themselves from insider threats?


This is no way to defend against insider threats. Any real threat will use other means of communication. Meanwhile, this just treats everybody as if they can't be trusted.


I’m sorry, what?

Have you never worked for a bank or financial company? Never had to take a drug test for your programming job?

US federal law and the hundreds of billions of dollars spent on auditing, insider-trading surveillance, cybersecurity, and exfiltration-prevention tools STRONGLY point to a corporate culture that is obsessed with defending against internal threats, because that’s the highest source of risk.


Sure, highest source of risk. What’s the risk that, say, the FBI director is going to run a borderline op where he selectively exfiltrates information to the press? Still an insider or no?


How bout when a president does it? At some point your power/value transcends the system in place.


This is the whole point of culture and society. Mass surveillance didn’t/doesn’t work for the NSA/CIA and it sure isn’t going to work for corporate paymasters either.


Employees get what they need and give what they can...

But seriously, I always found it amusing that once you step into a corporate office you can get food, drinks, and other amenities for free... almost like it's a socialist society. But when said employees step outside, they are the first in line for the capitalist agenda.


One word: outrageous!


Or a classifier?


This is why we started https://skiff.com


The only way this sort of thing changes is with labor organization ie unionization.

The government won’t save you from efforts like this. The government represents the interests of the capital owning class.

The demonization of unions is one of the most successful cases of propaganda in the last century. It’s gone so far that there are people who will die on the hill of Jeff Bezos paying slightly more taxes, because everyone seems to think they’ll be Jeff Bezos one day.


>think they’ll be Jeff Bezos one day.

I see that phrase thrown around a lot. It's a variant of "you're never going to be a billionaire (so you shouldn't be against X)." Why do people assume that you have to think you'll be a billionaire to be against something that would affect billionaires negatively? Is something only wrong if you think you'll find yourself in that position one day?


I think this counterargument is used when people are against something without offering a reason.

For example, I often hear "The richest 1% pay 80% of the taxes" (or whatever the correct values are). The person makes this argument against the idea of raising taxes, but they aren't explaining why it shouldn't be done.

Since they don't offer an explanation, the assumption is they are either already rich or think they'll be rich.


> For example, I often hear "The richest 1% pay 80% of the taxes" (or whatever the correct values are).

Top 1% earns 21% of income, pays ~40% of taxes (has ~34% of wealth): https://www.heritage.org/taxes/commentary/1-chart-how-much-t...


Even regardless of their motives, this statement is begging the question. The only way it is relevant is if that percentage (it's actually ~40%, if we're talking federal income taxes) is enough for them to be paying. So the argument is that they already are paying enough because they're paying enough. It's circular and therefore meaningless.


That’s assuming people can’t be purely altruistic and principled and they must hold every position they do for personal gain.


It would not be altruistic to favor a person with a monopoly on assets holding even more of those assets while other people don't have homes.

Principled makes sense though.


> Why do people assume that you have to think you'll be a billionaire to be against something that would affect billionaires negatively?

Because there is a group who struggles to reconcile what looks like a contradiction - another group who appears to advocate for policies which harm themselves. The quote and its derivatives attempt to explain this apparent contradiction.


I think it’s pretty paternalistic to think that you’re a better judge of what is good or bad for someone than that person themselves. There are plenty of people who are anti union because they’ve rationally concluded they would be worse off.


I'm genuinely curious to read what your edit would have been so that it didn't appear that I was putting myself into one of these groups.


Sorry, I probably shouldn’t have used “you.” I meant generally… “people who make this argument”


The answer is that such policies are more nuanced than just help or harm. Some people weigh effects differently than other people, leading them to believe that for their specific situation, one outweighs the other.


Completely agree. There are also people who would give everything and anything to be ideologically consistent - worse off on every metric, but right with themselves and the way they see the world. It can be very difficult to relate to them due to that very experiential chasm.


Not P, but I agree. One does not have to like a billionaire or even dream about being one to disagree with disproportionate taxation out of other principles or concerns.


It goes the other way around too. There are billionaires in favor of increased taxes on billionaires.


And the IRS allows you to contribute as much as you prefer to the treasury. It's voluntary and no one is compelling you.

So that group of billionaires who think the government can use their money more efficiently than they can in order to advance American society are absolutely free to do so! Go them! They don't need the government to compel them. They can form their own Philgubernatorial group, set their own donation rules and taxation (donation) bands and percentages, and come tax season give it to the feds.


How do you know they don't? Really though, they shouldn't. That's pissing in the ocean on an individual level, and the government doesn't run its budget off rando Treasury donations. They are not just advocating more taxes on themselves but on the very wealthy in general. A systemic change in taxation.

These same people likely also donate a huge amount of money to charities, etc. On an individual basis it's easier to draw a line between funds donated and outcomes.

> They don't need the government to compel them.

Do they? Do you? Why not just make the government a charity, then?


I've never understood this either. It's like they only want it to happen if it's forced on everyone, which is the opposite of the social altruism that they are claiming to advocate for.


I mean, that's the great thing about the rule of law, right? That we agree to do stuff together that we might not individually?

> It's like they only want it to happen if it's forced on everyone

Well, if we take a law to mean "forced on everyone", then that's really the definition of "it to happen".


Yeah, it's not necessarily true. I think Innuendo Studios / Ian Danskin explains this mechanism very, very well: https://www.youtube.com/watch?v=agzNANfNlTs

Many people think Jeff Bezos should exist and have his wealth because he got there by playing the game better than everyone else, and that this game is just the way things are. He earned it. Attempts to change the game will just make everything worse and people won't get what they deserve, and thus these attempts are unethical. Equal societies are an absurd liberal fantasy.

My attempts to advocate that Jeff Bezos shouldn't have the money he does are actually just selfish attempts to cheat at the game and stuff my own pockets with money and get something I haven't earned. The real issue here is a lack of discipline.

Watch the rest of the videos. People who think like this largely can't be argued with.


I watched the whole thing. I think he gets a few critical points wrong, for example, the idea that the economy is a zero sum game (9:10), and someone can only have more if everyone else has less. I could make the whole "increasing the size of the pie" argument, but I'm sure you've heard it.

With this premise, the author doesn't even identify the argument that while members of the hierarchy have relative positions, the wealth creation resulting from the hierarchy ensures everyone's absolute position increases. A side effect of this is how a country can have poor people who are wealthier than other countries' middle classes.

Destroying that hierarchy without a design to replace that progress mechanism means everyone's absolute position would not continue to move primarily up. If conservatism were just "a hierarchy where everyone stays in their absolute positions, but they may move around relatively sometimes," it would be a lot less appealing. The whole point is that it is the most effective driver of overall progress.


My anecdata is that I've literally heard people say that. For example - 'I don't want the rich to be taxed because I might be rich someday.' They don't specifically even mean Jeff Bezos or even a billionaire but assume that 'when' they are rich they will want the tax advantages.


That’s a really good point. It highlights the hegemony of self-interest in contemporary culture. Collectivism has become frail, immobile, even dubious. We are demographics. We demonize one another. We struggle for causes that affect us and ours, and despise people for joining popular social movements. Fatigue turns out to be the limiting factor for compassion, and boy are we tired.

I don’t expect I would become a billionaire (…anymore). I imagine that I would be a benevolent one, but fear the gravitational pull of such wealth would collapse any good intentions. Capital demands such rigor. I would think that if some policy or popular uprising made wealth distribution flatter, the billionaires of the world could exhale. The burden to care becomes much lighter when borne by many hands.


A lot of people will quote John Steinbeck [1]:

> “John Steinbeck once said that socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires.”

There's plenty of circumstantial evidence to back this up. The 2000 election is a good example, although I can't find a good quote for this. Gore famously demonized the "top 1%". An illuminating poll in 2000 revealed that 19% of Americans thought they were in the "top 1%" and another 20% thought they would be someday. So 39% of the population thought of themselves as the "top 1%".

Americans also love the slippery slope fallacy. The idea that you allude to is that people will defend Jeff Bezos against his taxes being raised because the next step is apparently them coming for the working class.

This too is propaganda. B does not necessarily follow from A. But political leaders and plutocrats are happy to use this argument to their own benefit. It's the sort of argument people make when they have no argument.

It's a byproduct of American exceptionalism [2].

[1]: https://www.goodreads.com/quotes/328134-john-steinbeck-once-...

[2]: https://en.wikipedia.org/wiki/American_exceptionalism


The "temporarily embarrassed millionaires" phrase was used by John Steinbeck to describe not the poor, but what we might call today "champagne socialists":

"Except for the field organizers of strikes, who were pretty tough monkeys and devoted, most of the so-called Communists I met were middle-class, middle-aged people playing a game of dreams. I remember a woman in easy circumstances saying to another even more affluent: 'After the revolution even we will have more, won't we, dear?' Then there was another lover of proletarians who used to raise hell with Sunday picknickers on her property. I guess the trouble was that we didn't have any self-admitted proletarians. Everyone was a temporarily embarrassed capitalist. Maybe the Communists so closely questioned by the investigation committees were a danger to America, but the ones I knew — at least they claimed to be Communists — couldn't have disrupted a Sunday-school picnic. Besides they were too busy fighting among themselves." (source: https://en.m.wikiquote.org/wiki/John_Steinbeck)

Ronald Wright somehow turned this into a quip about Gramscian false consciousness, in the great global game of telephone we're all playing with each other's words.


> I see that phrase thrown around a lot. It's a variant of "you're never going to be a billionaire (so you shouldn't be against X)." Why do people assume that you have to think you'll be a billionaire to be against something that would affect billionaires negatively? Is something only wrong if you think you'll find yourself in that position one day?

Obviously cappies (meaning people who support capitalism, who are not necessarily actual capitalists--most aren't) don't walk around believing they personally have a greater than 50% chance of being billionaires. It's hyperbole. That said, they do overestimate their future earning potential while severely underestimating the number of ways in which preexisting social class will block them. This is evidently true; behavior and preferences reveal beliefs, and no one supports capitalism and its extreme inequities unless they harbor a belief--perhaps an underexamined and irrational one--that they'll one day be invited to join the capitalist class, since there's literally nothing to justify the system but "It's good if you're one of them."


> The demonization of unions is one of the most successful cases of propaganda in the last century.

It is possible to see unions as both the source of some forms of abuse and the solution to others.


Why are you saying this? It's what every single corporate pamphlet and forced talk says. No one is saying unions will bring everyone enlightenment and cure cancer.

What people are saying is workers need a say in how the workplace is run and companies spending millions convincing folk otherwise should be forced to stop.


Few people are able to view unions objectively instead of picking a side; a union comes with tradeoffs, not universal goods or evils. There are plenty of examples of unions fixing things and plenty of examples of them making things worse; your opinion of a union shouldn't be based on keeping score but on actually looking at the risks and rewards. Unless an organization is particularly nice, unions make a lot of sense for low-skill, high-turnover jobs in large corporations; unless an organization is particularly bad, unions don't make sense in high-skill professional positions.



Yes I wanted to say something very similar. The idea of having an organization that collectively represents my interests against things I oppose (like this) feels good.

Unfortunately unions will not represent my interests in a huge swath of other areas (meritocracy, politics, etc). So choosing a union just trades one set of shitty things for another. For all but unskilled workers, the benefits are basically an illusion imo


Another case where Americans seem more than willing to ignore the experiences of the rest of the world.

Many workforces in Australia are highly unionised. Unions have been largely effective - even in recent years - at using their collective bargaining power for the good of the worker. Hell, our ruling political party is literally called the Labor party. It has strong union ties and a history of passing pro-worker legislation. Union corruption exists, as it exists in all areas where power can be had, but in Australia’s case you’d truly be throwing the baby out with the bathwater by saying “unions are bad!”. Even in industries like tech, where we’ve never really sought collective bargaining on a large scale, the universal protections ushered in by the union movement benefit all workers here. It sounds like you’ve fallen for the same propaganda as everyone else but you particularly think that you have “smarter” reasons. You don’t.


Anyone who thinks differently than you is "falling for propaganda". Got it.


Already saved from this sort of thing by just being self employed for the past 10 years. I get paid for every hour of work with no spying and just a weekly status update with my clients. It's a simple relationship and I am honestly not sure I could ever go back.


Emacs would never spy on you like that, would it?


Emacs is open source and you can modify it, so even if it did spy on you, you could in theory remove the offending code yourself or wait for someone else to do it and publish the patch. Good luck doing that with Office 365.


M-x psychoanalyze-microsoft-pinheads


> The demonization of unions is one of the most successful cases of propaganda in the last century

Ever notice how unions are somehow all the same entity, and seem to have to answer for things completely different unions in completely different industries did?

Nobody treats corporations this way, even though (if you look at interlocking BoD membership) there's a more reasonable case to be made for collusion in some industries...


> Nobody treats corporations this way...

Oh yes they do. "All corporations are evil exploitive money-grubbing polluting anti-democratic anti-worker..." I've seen it, here on HN, on the regular. I don't recall if I've seen it today, but I see it a lot.

> ... even though (if you look at interlocking BoD membership) there's a more reasonable case to be made for collusion in some industries...

The AFL-CIO looks (or at least looked) like the same thing, but for unions.


> Oh yes they do

You can nutpick to find people saying anything, of course.

Show me someone in a position of power saying that. I think the closest you'll find is someone like AOC, who has gone nowhere near that.

> The AFL-CIO looks (or at least looked) like the same thing, but for unions.

The AFL-CIO has been in decline for several decades. If you want to tar, say, the Amazon efforts with things the AFL-CIO did in the 60s, you're just making my point for me.


> nutpick

I don't know whether that was a typo or deliberate, but it's beautiful. I'm stealing it.

Yes, you absolutely can find a nut who will say anything - even several nuts. Absolutely. But in this case, I think it's a bit stronger than that. I see it too often. It could be just a few loudmouths saying the same thing over and over, but to me it feels more like, say, 5% of HN users actually believe that. True, that's far less than the number who believe "all unions are evil", but it still seems to me to be enough people to be significant.

> If you want to tar, say, the Amazon efforts with things the AFL-CIO did in the 60s, you're just making my point for me.

Well, I didn't want to do that, so don't put that on me. All I wanted to say is that, as corporations can collude (or at least appear to), so can unions, and we have historical examples of it happening - and, unlike corporate collusion, happening formally and in the open.


>The only way this sort of thing changes is with labor organization ie unionization.

>The government won’t save you from efforts like this. The government represents the interests of the capital owning class.

You realize that the power/existence of "labor organization ie unionization" is dependent on the government? Without government protection labor unions don't stand a chance.


In the same way that a completely controlling government and political system could stop literally anything, yes. The basis of collective bargaining does not inherently depend on government support, and the proof is that it predates government support. It depends on governments not penning anti-unionisation legislation. There’s no need to be so snarky.


You realize that historically labor unions predated government laws about labor unions, right?


Sure, before labor unions existed, there were no laws about labor unions - why would there be?

But there were laws allowing freedom of association, so something like unions was allowed by default, in the absence of any other laws.


You're missing the point of the comment you're replying to: labor organization does not require government permission to exist. It existed long before any sort of government quasi-protection.

There are many examples of this, including Solidarity [1] and the Peasants' Revolt of 1381 [2], which followed attempts to freeze wages after the Black Death, when demand for labor suddenly exceeded supply and pushed wages up.

And this isn't even counting the cases where peasant and worker uprisings led to revolutions.

The concept of a general strike is a relatively modern one but an extremely powerful one regardless of any legalities.

[1]: https://en.wikipedia.org/wiki/Solidarity_(Polish_trade_union...

[2]: https://en.wikipedia.org/wiki/Peasants%27_Revolt


I was agreeing with that point, with my comment about "default allowed".


The government stops them through things like outlawing secondary strikes.


Doesn’t some of this stuff add some legal liability to organizations?

Like if a manager learns something and takes action because of it?

Or learning about employee behavior and sentiment and using that information to suppress promotions…

Or being informed of employee misbehavior and not taking action against it…


I shouldn't have to belong to a particular organization (especially a politically active one) in order to have a job in my field.


You shouldn't, nor should organizations push the limits of what people accept before they snap. But here we are.


Anyone else miss Office 97?

You just installed it locally off a disc and it just worked when you needed it. You didn’t even need internet.


Office 97 and Windows XP were something of a high point in personal computing. The internet has enabled entirely new product categories, but it has also badly eroded old ones with the solvent of MRR greed. Merely selling a thing just ain't good enough, especially if it's software. Even offline applications are SaaS now, where the "service" is frequent updates that leave you at the perennial mercy of every company from which you purchase software (and every company with which they do business, recursively). I'm normally pretty sanguine about business models, but when I lay it out like this, I find it quite disturbing.

So I won't think about it.


Two years ago I decided to fire up my Pentium 100 and write a technical plan on it using Office 2000, for my real-life corporate job. It worked magnificently, no fuss, and the plan presentation went fine. Faster on a 100 MHz machine with 16 MB of RAM than whatever monstrosity underlies O365 and Google Docs.


I'll happily use an old Office, just please splice in the "What do you want to do?" search bar so I don't have to hunt through nested menus/ribbons for some obscure formatting option I use once every 6 months.


LibreOffice is actually pretty great; it's not "runs on a 100 MHz Pentium" fast, but it's stable and works well for basic office spreadsheet/word processing/presentation tasks.


It works well until you need to send and receive documents from MS Office organizations. LibreOffice mangles layouts and formatting in DOCX and PPTX files.


Get metric compatible fonts.


The UX is mighty rough. Last I worked heavily on Calc it wasn't terribly compatible with other spreadsheet software like gnumeric, though perhaps that has changed.


Older Office didn't have the god-awful ribbon.


In my view, the ribbon was an incredible UX innovation; I just resent Microsoft for patenting it. A ribbon could improve so much software. Don't forget that cascading menus are an antipattern.

What's not to like? Common functions are one click away, and others are two clicks away. Are you saying lengthy drop-down menus were better? I don't see how.


I haven't used office in a while, but I remember that once a function wasn't in the "home" ribbon, finding it required searching through the other sections. And the division of those sections was super counterintuitive for me. Whereas I could usually hazard a pretty good guess where something would be in the drop-downs.


Well, I used to read a list of options horizontally, then click one and read a list of options vertically. Now, with the ribbon, I must scan a grid of differently sized and shaped objects. That's harder to parse, in my opinion.


Not really.

Office suites were a mistake. Return to text editor.


Imagine a world with something markdown-like instead of Word...


I do not have to imagine such a world since LaTeX exists already in exactly the space you're describing.


I said a world, not a small subset of the world.


Less bullshit on formatting, far better content.


Everything trivially version-controlled.


That's quite the non sequitur.


Except for one: Ashton-Tate Framework. "Emacs for business" gets you within a stone's throw of how flexible and powerful Framework was back in the day. Of course its UI wouldn't fly in today's world, but back then (early 80s) it was a revelation.


Office 2003 was probably the peak, and then it started going downhill with 2007.


You can still block its internet access in the firewall and activate it locally. I'm using the last offline-installable Office, 2019; 365 can't touch my computer.


I just want a simple version of MS Access in the cloud.


Airtable?


What year is this? 1984?


1984 got nothing on 2022


true.


"Just tell me what year you want me to believe we're in"


If you've ever thought your employer isn't monitoring the chat then you're a fool. I'd go as far as to say that if you think there is any form of electronic communication that isn't being monitored on some level, you're also being foolish.


> If you've ever thought your employer isn't monitoring the chat then you're a fool.

Mine doesn't. I know that because I am the 365 admin.


> ... I am the 365 admin.

For now. Remember MS can literally run these tools on your communications and if/when something gets flagged... raise it out-of-band to a senior business person at your company for follow up.

They likely have the contact details for senior business people at your company already. ;)


What makes you think Microsoft would care about this? They provide the tools to make managers happy but they certainly aren't going to start running the tools for you.


They'll do it if it makes them richer.


This is quite an imaginative take.


One approach is to run, or work for, small companies with smart adults who trust each other without surveillance.


Lol, depends what industry you're in. I'm a one-man-band MSP, so I literally set up these catch-all employee tracking systems. Most folks don't realise they exist, and even fewer use the data produced by the systems. Legit 90% of the time it's just kept in case of a gov audit or something going pear-shaped where we need proof it wasn't us.

Trust is all well and good, but trust ain't gonna pass an audit or get you out of trouble if shit hits the fan.


> I literally set up these catch all employee tracking systems.

Does that bother you at all?

Not just about the employees you’re doing this to, but about being part of the system that normalises this kind of surveillance generally?

Is this really the kind of world you want to live in?


Nope. Look, from a private perspective, and up until a few years ago even from a business perspective (I was very idealistic when I started out), I am a Linux/FOSS/privacy advocate through and through.

But in the business realm, you have no privacy whilst you're at work on work devices; the company owns that data, not you. Want to send a message privately about something not work-related? Fine, but use your own device. Man, I spent the first decade of my working career in all forms of laboring, being exposed to OH&S violations of epic proportion which couldn't be prevented or retrospectively acted upon because no data was captured that proved they happened. Think stuff like bosses bypassing fire suppression systems that prevented machine operation on drill rigs punching holes in ground littered with methane gas pockets, just in order to keep the rig running, at risk to all the employees running it.

I'm sick of companies getting away with abuse of customers and employees. Most of this can be prevented, or at least discouraged, via tech-based monitoring. If you want privacy... keep it for your private life.


> But in the business realm, you have no privacy whilst you're at work on work devices; the company owns that data, not you

This is really the problem in a nutshell.

Would you let the company install cameras in the bathroom to film you using the toilet? I am guessing probably not.

So why do you think you "have no privacy whilst at work"? This is a fiction. Privacy is a human right whether you're at work or not.


> But in the business realm, you have no privacy whilst you're at work on work devices,

Only because some people decided that should be so, and other people worked to ensure it happened. You state it like it's an immutable law of the universe, but it's a choice we collectively make, and a policy we enact. Or, a choice we passively allowed others to make for us, and a policy we allowed others to enact upon us.


The great thing about capitalism and the free market is that you can choose not to enter into an agreement with a party if they run contrary to your ethics and morals.


This assumes you know that said party is doing such things; often one doesn't.


Yeah, avoiding the only ISP in my town ain’t happening.

How the free market will I get internet?


Part of the issue, depending on where you live, is that the government made agreements with ISPs to prevent competition in exchange for the ISPs paying to lay wires. If those agreements were not in place, you might have additional choices.


Starlink


This is a fantasy version of capitalism that assumes perfectly symmetrical information. A huge percentage of the people being monitored have no clue it's going on.


Small software companies doing innovative things. Any business area that might need an external audit is a severe red flag in this context. That also typically means it's less fun, IMO.

A software company running a Microsoft-based email/etc system is also a red flag in this context. I mean, why...


Eh, it's in healthcare. Specifically disability support. I get a kick out of building software that makes it easier to provide support for the folks who need it. It makes their lives better and helps them achieve their goals. But it's also a largely taxpayer-funded industry in my country, hence the audits. Which is OK; as a taxpayer myself I'd be pretty gutted if we as a country weren't auditing companies getting our hard-earned tax dollars, especially if they are in a sector like healthcare.


Yeah, makes total sense and I wouldn't want to work there.


All businesses require an audit of some kind. How do you think due diligence works?


That's an extremely strong statement that is obviously not true, except in the most weak and generic form.

How do you think reality works?


Yeah if you want to defraud people, sure. No auditing required. Who's being dishonest here and making false statements? (Can you see the irony???)


That's a false equivalency. You seem like you've never worked with honest people.


You have to be careful is all I’m saying, on both sides (as an investor or entrepreneur). Also, many times an audit also means documenting your systems, reducing bus factor, and decreasing onboarding time for new employees.


What email services would not be a red flag?


Fastmail :).


There's a difference between monitoring and logging, and nobody is reading the chat logs or even paying attention to chat metrics in many workplaces because the value of doing so is dubious given the potential for employee backlash.


It's probably more common that they log it, and trawl it when there's some reason to. Still dystopian, but less work.


It's OK, it's from Microsoft. Nothing in Office 365 works; this won't either.


Reminds me of that old joke: “the first product Microsoft makes that doesn’t suck will be a vacuum cleaner”


I'm sure this is intended as a joke.

Even if it doesn't work right - having it at all is going to result in all sorts of bullshit for employees where this is enabled.

Someone digging through your emails because you happened to mention some vaguely related keywords... yeah, no.


Normal operation of AI often involves exfiltration of data to the vendor, so I take the privacy qualifiers with a grain of salt. This most likely turns your email inbox into something roughly as (non)private as a search engine query. It's the ultimate dark pattern; goodness knows what MS intends to do with this access.


Google Docs is seemingly way better, for now.


Has this been reported to EFF? Not seeing anything on their site https://www.eff.org/



