Hacker News
Hackers used Slack to break into EA Games (vice.com)
365 points by danso on June 11, 2021 | 209 comments



The key bit is "The hackers then requested a multifactor authentication token from EA IT support to gain access to EA's corporate network. The representative said this was successful two times."

So this was primarily a social engineering hack after Slack was used to get access to a trusted messaging channel.


I'm interested in what sort of "multifactor authentication token" this was, and how IT support was able to grant the request.

Are we talking about a physical token like an old-fashioned SecurID OTP keyfob or a Yubikey? Or something custom?

Or are we just talking about a code that real employees get via TOTP or worse SMS?

You'd think that the same week large employers figured out they'd need to FedEx the new guy a laptop (since he can't come to the office), they'd likewise have realised they should make sure they FedEx that laptop to his actual home, not some building site a scammer told them to send it to. And so you'd hope that physical tokens likewise can't just be sent to some random social engineer on the strength of one chat.

Even if you do succeed in getting them to FedEx the token to a building site, retrieving it is a non-trivial extra step. If some teenage "criminal mastermind" gives their parents' home address to get the token delivered, they've also told the cops where to start looking for the "hacker".

Whereas if it's just a code then I can more easily imagine IT support just pastes it into Slack for you.


It was probably a one-time-use bypass code offered by a 2FA provider.


Sounds like it's a simple software token. The hackers needed an account for this, which they got by paying $10 for stolen cookies, lol. So now they can log into some random employee's account, then get an admin-issued 2FA code. Never seen these "bypass" codes before, but that's my best guess.


The stolen cookies imply full access to one of the employees machines.

How is something like this advertised on forums?

Valid auth token for employee at EA Games for sale? Does it list the expiration? Refunds if it isn’t valid at time of sale?


Nah, not the full machine. This is just a Slack token. Someone basically had the employee click on something with XSS, or stole the cookies some other way (there are a bunch of ways to steal cookies out of someone's browser).


Token most likely refers to a software OTP token. That would require having an account, but the same would be true for a hardware token.


A simple Zoom call with your camera on for anything private could have stopped all of this.


Not really; IT generally has no idea what you look like. I suppose if the company provided your ID photo to tech support, that could supplement this, though I've never seen a company do that.


I’ve definitely worked at companies with a formal corporate directory that had your name, phone #, office location, and badge photo. Seems less common in startups though.


Even just having your face recorded (after which, if it turned out you were a hacker, it could go on wanted posters or whatever) might be a significant deterrent.


I assume that with some OS plug-in you could have an AI model alter your face just enough that it can't be traced back to you.


Does it look legit? Visual glitches would raise a red flag.


Having seen some of the filters people show off on TikTok... Yes. Yes they do.


We have a company directory, accessible to all employees, with everyone's photo in it. Putting a photo in your profile is part of HR onboarding on day 1.


>Are we talking about a physical token like an old-fashioned SecurID OTP keyfob or a Yubikey? Or something custom?

I used to have a Citrix fob that gave me an OTP, but then my employer switched to Azure and now it's all text-message based. That was just last year.


Exactly, doesn't sound like this has anything to do with Slack in particular.


The problem probably isn't Slack itself, but EA's policies around Slack. I think it's not nothing that the social engineering happened over Slack. For starters, somehow login cookies to the Slack were stolen and then sold, and those were enough to get into the Slack. And then in 2 separate instances, the hackers were able to convince IT to give them 2 new tokens.

Maybe the IT team/policy is just weak across the board, and they would've handed keys over the phone or through internal email. But it's not impossible for the IT team to have a complacent mindset around Slack.


Yea, I think most users think Slack is a safe space. They haven't been conditioned to be suspicious of messages on Slack like they have with email. It's a pretty ripe attack vector.

Was bored the other week and found a ton of Slack webhook URLs in public GitHub repos. I think it would be a pretty great way to do some phishing: just scrape the URLs and brute-force channel names, sending messages with links to websites you own.


I think it's actually a pretty unsurprising assumption. Slack is a significantly more vulnerable service than a lot of other chat applications because it has a native web client that allows cookie-based authentication. Every chat program in existence uses something extremely similar to maintain user sessions: either authentication credentials or (preferably) an access token written to disk, so users can avoid re-entering passwords every time the program falls out of memory.

I think with Slack the danger is exacerbated by the fact that this value gets stored in a cookie, which is a lot easier to gain access to; and once you have it, it has a much better documented format than whatever MSN Messenger might roll in-house. We're essentially talking about an added layer of security by obscurity when it comes to the format, but actually nabbing a cookie is easier than gaining arbitrary file access.


The modern web doesn't need cookie auth, so let's hope they move away from it. Cookies in general are an idea past their expiration date.


It's the same old same old. Like you correctly identified it's a new place, it looks different. People don't think an attacker could approach them over IM, but they can.

But the problem goes beyond that. Many organizations have disabled sharing executable file formats as attachments over e-mail. Gmail flat-out prevents you from sharing executables and macro-enabled Word documents as attachments, even when they're put into a zip file.

But on Microsoft teams? I sent a zip file with 8 unsigned executables to a colleague a few days ago. No warnings, no messages, no nothing :)


For the general public I can see how that is helpful, since they can't tell whether a foreign executable is trustworthy, and they may execute everything with admin/root privileges.

As for me, I really hate this "feature". I work with IHVs and often have to share private binaries, and it's a chore using xcopy/sfpcopy to their bespoke network path, from where I guess someone then manually copies them over to their local subnet.

We should have a more robust mechanism in place than to outright ban sharing of executable files. Windows Smartscreen and Mac's Gatekeeper method of online checksum/signature verification is sort of interesting.


1000 times no.

You need to build security for the lowest common denominator.


Which is what Defender smartscreen and Gatekeeper are. With their full on setting, they give you a big fat warning on running files from internet.

This is like banning guns and cars because they can kill people.

Also, you do 24 Hours of Lemons?


I don't know if it's true, but a co-worker of mine said that, "in the wild" signed binaries are positively correlated with being malware.

I don't think the signing itself does much.


Perhaps not in itself. The lack of scans is concerning though


Signed by real vendors?


Depends what you mean by that, but probably not. Signed, definitely. The main problem here is that there doesn't seem to be an authoritative list of which vendors count as "real".


The thing is, you need MFA to log into Slack, but having a valid cookie bypasses that.

On top of that, once you're in Slack you can access every public channel, search strings, etc. I can tell you that large companies have a lot of things displayed in Slack (CI/CD pipeline results, credentials in logs, alerts, etc.); you don't even need to talk to someone to gather a lot of info.


I work with a large enterprise company that is actively thinking about this problem within their organization. The long and short of it is that everything in Slack is ephemeral past a month or so. Everything gets wiped on a rolling basis, including all media and messages.

If you want some domain to be documented, it happens outside of Slack. Secrets go in a dedicated secret management resource that requires 2FA for every login with strict timeouts and audits.

For the team I work on, this means piling more crap in Jira and Confluence. If a decision is made over Slack, that decision is then codified in a ticket or in a Confluence document. This also means some people constantly send links to the same confluence pages over and over again since there's no history for someone to search through.

I think overall it's a decent solution if you're diligent managing the tradeoffs. I can't really think of a better way to keep things off of a platform where they shouldn't exist, other than taking the nuclear option like they're doing now (albeit with a generous countdown timer).


I don't see how the policies you described would help at all in this situation. The main use of Slack here was to get the one-time-pass, which allowed them to login to the corporate network.

If they did this in your company, then all they'd have to do is scan through the Slack channels till they found a link to the internal company Jira and Confluence sites, and then they'd have free rein to start mapping out your network and preparing for an attack.

I think an effective mitigation that could be implemented on the Slack side would be to sign the cookies and include the origin IP as part of the cookie. If you get a request with a cookie issued to a different IP, then you invalidate it and have the user login again.

This might be problematic on mobile devices, so maybe another option might be to include a device id and a nonce in the signed token and each time the cookie is used to establish a connection, the device is issued a new signed token with which to establish the next connection. If a user logs out on a device or the same token is used twice, then Slack could immediately invalidate all tokens.


This is a concern of mine on systems I design/maintain.

How do I mitigate a stolen cookie from successfully authenticating someone else?

Do I store user browser-agent/IP and check that on every request?


I think we are living in a post-"IP address" world, to be totally honest.

I can often switch through many IP addresses in an hour (especially when travelling): various WiFi points, 4G, etc. Services will appear incredibly broken these days if they require a new login per IP address.

Obviously you could force all traffic to be routed through a VPN and allowlist it there, but it seems people are moving away from that approach.

To me, the better question is how these cookies get stolen in the first place.


IP based auth is super annoying for legitimate users since it logs them out frequently.


If it's an internal corporate system where all the users sit at assigned machines and have fixed IP addresses, yes you can do stuff like IP address checking.

Otherwise you probably need short-lived cookies that get renewed by the client in the background, with a hard expiry of some reasonable "work day" length such as 8, 12, 16 hrs. Then even if it's stolen, there's a fairly short window of time that it's useful to anyone.


As long as your authentication scheme is based on a bearer token, you can't really prevent it, but binding to IP and setting a short expiry can help mitigate it.

If you want to avoid this, you have to use something in your authentication scheme that can't leave the device/user, so we're talking certificate or other public key crypto based schemes.

TLS mutual authentication is one common tactic for this, although the scenario itself is uncommon.


In my opinion, you don't. Rely on the authentication provider to handle that responsibility. Services like Duo/Okta perform this risk assessment and may opt to trigger an MFA challenge.


I've never wanted to completely hand over authentication to a third-party.

Instead, what I think I'd like is just the risk assessment to be performed by a third party while I'm handling authentication (i.e., a third party that has a broader view of what's happening across multiple services over time). I just send the pieces of information that I'm willing to share as an API call, and they make the best risk assessment they can.

Then I can take that risk assessment result and make a final decision if authentication succeeds or not.


There are risk services out there.

https://sift.com/ is one you call out to that gives you a risk score.

https://datadome.co/ can sit within your cdn layer that does risk assessment.


That's not always an option.


You can downvote all you want. Some projects are sensitive enough to not allow third party authentication (military systems anyone).

Besides, if you're large enough it makes business sense to do it yourself anyway.


If the client device has a TPM or some sort of hardware that can manage the secret you can leverage that. Otherwise, protecting against "attacker has a valid session" is not very easy. Even in the TPM case attackers with code execution on the device can likely bypass it.


Well, since the user was able to gain access with a stolen cookie, these things are possibly true:

1) Slack does not invalidate sessions after a short enough period of time or inactivity (for a cookie to make it to another site, be purchased and used, probably takes some time).

2) Slack does not properly terminate sessions on logout or inactivity (allowing cookies to be reused after logout).

3) Slack is not using any more clever techniques to make cookies useless to attackers.


This, and especially #3.

Having the same cookie used on a different computer should be the reddest flag of them all. This could be identified by the user agent (a different version of the Slack client) and a different IP (the cookie accessed from a different country than where it was originally created).


Session invalidation time is something the Slack admin (that is, EA in this case) configures themselves.

Fun fact, if you want SSO for your Slack, you have to pay extra unless you are already using one of the top tier Slack editions. So either pay up, or have multiple ways of administering your users, with the security implications that causes.


I've rarely seen any enterprise Slack ever configured for session timeouts. Misplaced trust on the employee device combined with convenience over security.


Something important to keep in mind is that Slack has a lot of not so great defaults. Go check your settings - infinite sessions, infinite channel retention, etc.


Aside from that small part about a stolen, long-lived cookie working on an untrusted device.


It could happen in other scenarios, if you generate incoming webhook URLs and don't treat them as secrets.

Once you get your hands on one of those, you've got a fair shot at a phishing attack.


Exactly. It has about as much to do with COVID-19 as with Slack.

In pre-COVID-19 days, IT handling that kind of request would have required it to come in face-to-face. But since everyone's working from home, that's intractable.

The failure point here is that IT should have confirmed the requester's identity via an independent secondary channel, but it appears they either got lazy, or their protocols assumed Slack couldn't be compromised this way, so any request over it was treated as authentic.


> In pre-COVID-19 days, IT handling that kind of request would have required it to come in face-to-face. But since everyone's working from home, that's intractable.

In pre-COVID times it wasn't uncommon for people in my office (which didn't have a dedicated onsite IT person) to ping IT on Slack on behalf of the person sitting next to them to ask for a PW reset.

Face to face doesn't solve anything. If there's a 2000 person company and you can get into the building, chances are you could walk up to the IT desk and say "hey my name is John Doe I'm an engineer here and I locked myself out of X, can I get a reset?" And you'd be given it without any verification


The one time I locked myself out hard at work and had to go "in person" for a password reset, I had to show ID even when the person who was resetting my password knew me by sight. And this was well over a decade ago.

And in all but the smallest companies I've worked for, you needed a card swipe at the door (or an escort) to pass from public to internal areas.

This is just really, really basic physical security stuff.


That might be true at some companies, but has not been my experience at 3 different companies.

> And in all but the smallest companies I've worked for, you needed a card swipe at the door (or an escort) to pass from public to internal areas.

And once you're past the barrier of those internal areas, you're unlikely to be questioned in all but the most strictly controlled places.


In my case, everyone is supposed to be visibly wearing their company ID, on a necklace/lanyard or something like that. However, even if challenged without one, "oh sorry, I left my ID at my desk" is likely to work. Then again, everyone but brand-new employees is at least vaguely familiar to everyone else, so I'm not sure how easily a complete stranger would be able to move around.


Previous company I worked at had a policy that you needed to include a photo of yourself with the current date written with any PW reset request, but of course that doesn't work as well at a 2000 person company as at a 100 person startup where IT knows everyone's face.

Plus that's still vulnerable to googling photos of the employee you're impersonating and photoshopping a piece of paper with the date.


If you’re over 2000 employees, there’s little excuse for not requiring a badge scan to queue for all IT desk interactions. That scan should display name and photo to both desk workers and on a ‘now helping’ screen others can see.

It’s not foolproof, but it’s incredibly difficult to prevent tailgating because people don’t want to start an incident by incorrectly challenging a coworker. I’ve heard of it somehow happening even with card turnstiles, double door scans and required photo confirm by lobby security.


If you walk up to the IT desk, they should check your identity. (At least for security-sensitive things like password resets or locked accounts.)

Most employees have a photo badge, so they could scan that and look up your records in the HR database. If you've lost your badge, they could ask your name and look that up in the HR database, which hopefully has a photo on file.


Most 2,000 person office buildings have security guards and doors/gates/turnstiles that won’t let just anyone waltz in.


Right, but once you get past that area nobody will question it. In the same way that someone who works for your company on slack pings you, you assume they work for your company.


In the companies that I am familiar with the internal space has doors and every single door opens only with a badge


I think more than 50% of people would let you tailgate their badge


A lot of people in this thread are acting like it’s super hard to get through or in. You can “social engineer” your way in.

I did it accidentally once at a big hedge-fund sort of company. I thought I was being escorted by my friend's friend, but it was actually just someone familiar with them/their names, not fully understanding what I was saying, letting me through the ~3 times a badge scan was required. I wasn't paying attention. Only when my friend got scared he would get in trouble did I realize what had happened.


Risk profiles and scale for Slack attacks vs in person soc eng are way different.


I appreciate being able to open the comments and read the essence of the article along with a clarification correcting the messaging in the clickbaity title.

Thank you.


At the companies I've worked for, we were prohibited from ever giving out OTP codes. If you lost your ability to generate one, it meant you needed to submit a request through the ticket system and come into the office to re-provision it. Certainly a request like this should never come over Slack.


That should never have succeeded, because the request and the requestor's identity should be confirmed using other data known and trusted by both parties, and possibly by requiring confirmation from someone up your management chain to boot.


Humans, the unpatchable vulnerability.


Humans are somewhat patchable but I like your point.


Recent related threads:

Hackers breach Electronic Arts, stealing game source code and tools - https://news.ycombinator.com/item?id=27470577 - June 2021 (111 comments)

EA got hacked and games source code leaked including new game Battlefield 2042 - https://news.ycombinator.com/item?id=27468766 - June 2021 (139 comments)

EA hacked and source code stolen - https://news.ycombinator.com/item?id=27462952 - June 2021 (35 comments)


"the process started by purchasing stolen cookies being sold online for $10"

Anyone know the details of how this actually works? Does it mean browser cookies? How do cookies end up being sold in this way?


Slack authentication works like many other web applications and once you've successfully authenticated you get a cookie which can be used to read/send messages on your behalf.

If someone manages to get malware running on a machine where you have logged into Slack, it's fairly trivial to grab your cookie. Something like https://github.com/djhohnstein/SharpChromium is an example of a tool used to pull browser cookies off a compromised host.

I don't know the explicit details of this particular instance, but I imagine the user in question had some kind of malware installed on their phone or computer. ( I keep seeing mention of a browser extension in the comments, and I have seen some working examples of malicious chrome extensions recently that would let you steal cookies once installed.)


PhantomBuster makes you fork over your Slack cookie in order to use it. Could easily be that or something similar.


This would be my guess too. I’d expect common sources to include scraping services (like phantombuster, but also lower-level infrastructure like proxies) and browser extensions.


Wouldn't proxies have to execute a TLS stripping attack to make this work? Those may have been common in the past, but I'd have to imagine it would be hard to make work against the modern Slack website/desktop app


Not if the proxies are for some janky scraping script or .NET application that ignores certificate warnings.


I don’t know for sure, however I presume if you have a compromised browser extension or malware on your device then the cookie can be extracted. And then they’re sold on marketplaces for black hats.


I would guess an EA employee or contractor device had a bot installed that was harvesting data (e.g., cookies). Check out this article on how Genesis Market allows bot herders to monetize this data: https://www.f5.com/labs/articles/threat-intelligence/genesis...


I see browser extensions mentioned. Maybe Tor exit nodes as well? Or a compromised corporate MITM proxy...that would be funny.


Shouldn’t be possible with ssl


In the context of a "MITM proxy" it certainly is.


How would a mitm proxy have a valid certificate for slack.com?


> a compromised corporate MITM proxy

A corporate MITM proxy has its own certificate authority; otherwise it wouldn't be able to monitor traffic like it's expected to do.


Doesn’t certificate pinning break these?


It does, but HPKP is effectively dead, removed from most browsers.


So I could take the cookies from my browser, send them to a friend, and my friend could then save the files and use the cookies? There’s no process to verify it’s being used in the same browser or computer?


This is non-trivial to do without gathering more info about someone's browser than users would be comfortable with (aka fingerprinting). Ideally, a website/app knows nothing about the machine it's running on and is perfectly sandboxed away from everything else.

A private key shouldn't be device dependent. 2FA was the solution here, but was not enforced properly. They were able to social engineer IT into resetting their 2FA token without any proof they were who they said.


The IP is something people aren't comfortable with? It seems quite trivial to include the IP in the cookie/session and verify it's the right one.

Sure it's not perfect in a world where IP are not necessarily static (and a VPN may change it), but having to login again is not that bad, even more so if you are using SSO.


> The IP is something someone isn't comfortable with?

No good. Slack's biggest userbase is corporate, behind corporate VPNs. Many users have the same IP.


Still seems reasonable for preventing an outside attack though.


It can help, for sure. I think the main reason it typically isn't enabled in general is that it's pretty common nowadays for people to switch IPs regularly. It'd be annoying to have to re-login to Slack multiple times per day as you move around with your phone or take your laptop to different places.

(Especially if it's a user who isn't in a corporate environment, where your public IP will probably differ when you connect to different wireless networks. On a corporate network you're more likely to retain the same public IP no matter which room or building you're in.)

If they don't, they should provide an option for corporate administrators to enable IP locking for session cookies.


When I commuted to the office my IP would change multiple times as I crossed from 4G to various WiFis. It would be really annoying to have to log in each time.


This is why pinning user sessions to IPs isn't going to work - in a mobile first world most users have multiple IPs that they might access a service from during the course of a day.

Logging people out also isn't a minor inconvenience, especially for users who haven't figured out password managers yet.


It would be annoying even with a password manager. Most people are going to care a lot more about that multiple-times-a-day inconvenience than about the security it buys. On iPhone it's more than just authorizing the password manager and having credentials auto-filled; you'll have to tap yes, and possibly something else, for Touch ID or Face ID to be activated again.

I know this isn’t actually a major thing. It’s a 20 second process. Still a couple times a day is a couple times a day.


There should be. You can't guarantee it will always work, but it's certainly possible to encode some fingerprinting into the cookie so that if the fingerprinting no longer matches what the client requesting data from the server looks like, the server throws up a red flag and asks for the authentication challenge response to be repeated.

But no, it sounds like Slack doesn't do that, which is a problem.


Security I implemented in the 90's: Encode the client IP address into the cookie.

Also include a timestamp to force re-authentication at some point.

This isn't rocket science.


And now, as I switch between IPv4 and IPv6, connect to my VPN, and hop from WiFi to mobile data, each change requires a login, and I very quickly abandon your app for one that doesn't force multiple authentications throughout the day.


Not only that. I tried the same approach, but one of our clients has some kind of VPN and the user’s IP address would change regularly.


Wouldn't the cookie then be invalidated if you e.g. switched between WiFi and cellular on mobile?


Yes, it would be invalidated. This would not work well on roaming devices.


Good luck with that on mobile.


Devices are much more portable since the 90s and are likely to change networks frequently.


Until CGNAT cripples your concept.


On underground forums you can buy a plethora of compromised accounts from an individual's computer and saved passwords for about this much money.


No idea in this instance, but it's trivial for rogue browser extensions to siphon off cookies


That's the real hack here.


Just ask the girl scouts in your area.


So cookie gets them into Slack. But how were they able to get into the network, spin up a vm and connect to p4? That doesn't happen with a cookie and MFA token.

They either had a username/password combo already, or support also allowed them to reset the compromised user's password (e.g., through a one-time link to set a new password, or perhaps more egregiously, "here is your new password: welcome2EA").

A video call and a few questions could have stopped this. Or a password reset flow that requires some sort of previous information. Would love more details here.


This is where I'm confused as well and yours is the first comment mentioning it.

A more plausible scenario is that IT reset their AD/IDP password because of the lost device and gave them the temp pw over slack + some MFA token.


They said they lost their phone, which means they could have told the IT person they also lost access to email: "My laptop logged me out of email! Our game is already behind schedule; I need to get online now before I get fired."

Remember: a good phish will push the target to make quick and poor decisions, by appealing to various emotions.


These hackers are about to find out that the market value of 780GB of source code is wayyy less than they think.


Yeah, who would buy it? It would be radioactive for any competitor. Even if they were willing to pay for it and launder the funds used, releasing any games that used any EA secret sauce would very likely be detectable. They'd need to engage in a parallel construction of source-code history to show how they arrived at their implementation independently.


It's useful for devs of multiplayer game cheats.


Is there much money to be made in those? Honest question; I don't play games at all.


I would hazard: definitely, given how rampant online cheating is across competitive shooters. People give grief to Blizzard, Steam, etc. for their overzealous methods of detecting cheats (and for possibly increasing the vulnerability of your own machine), but without some way to ban or detect cheats, competitive online gaming would be a dead market. It's a cat-vs-mouse market, and back when I was playing WoW every day there was a huge market for buying bots to harvest in-game resources and fish.



Wait, I couldn't figure out exactly why they were arrested. Is it illegal to sell video-game cheats in China? Was it at the request of the game devs involved? Or maybe money laundering? Even assuming that online cheating is illegal there for some baffling reason, why would the Chinese police care about cheaters on Western video games when it's not even illegal here? That BBC article is a bit frustrating; there's just not a lot of useful info there.


It would be interesting data to compare the "fence value" against the "book value". Often infosec decisionmaking is argued based on value-at-risk, and while everyone knows you don't actually lose much if any of the value if someone leaks the sources, it's still used in some sufficiently brazen exaggerations.


An interesting attack could be executed by an automaton that encrypted the payload such that the victim could trust that if they paid a ransom to some address before $my_fave_cryptocoin hit block #X then the key would never be revealed. If they didn't pay it, the key would be revealed to the attacker.

Pretty funny how important trust is in the existing ransom marketplace. But the current market is about restoring operations by recovering inaccessible data. EA didn't lose anything here but would likely prefer that no one else get access.


It’s pretty unlikely that a hacker would be able to run write/delete operations on a source control server with a regular employee’s credentials. And even if they did, there are probably enough copies floating around that they wouldn’t really need to pay a ransom for it.


You're correct about your claims, but I think you may have misread my post. I was suggesting that EA might pay a ransom to prevent disclosure, not a ransom to restore lost data. Even if the source control server was lost, EA would likely have backups.


“...some men aren't looking for anything logical, like money. They can't be bought, bullied, reasoned, or negotiated with. Some men just want to watch the world burn.” Might be a bit overdramatic, as these folks were not looking for any monetary gain, just to send EA a message.


I'm guessing it's worth a lot more for their street cred than anything financially.


It cost them $10 and 30 minutes of chatting. I'm sure they won't be devastated.


Anyone wondering how "easy" it is to obtain a token may be interested in the setup instructions for slack-term[0]. As for token expiration.. I've been logged into slack-term using the same token for weeks if not months, without needing to reauthenticate. I assume it refreshes.

[0]: https://github.com/erroneousboat/slack-term/wiki#running-sla...

Also free plug for slack-term, I guess. :)

TL;DR: socially engineer an EA employee with a link like https://slack.com/oauth/authorize?client_id=32165416946541.1...

and if they are half asleep they just might auth you.

STL;SDR: EA probably permit employees to authenticate with non-official apps/clients, which they probably won't be permitting anymore.


>A representative for the hackers told Motherboard in an online chat

Isn't it strange how hacking groups seem to behave more and more like legitimate businesses nowadays?


It's organized crime. They've always operated like a business.


Yes, but they buffed up their PR department


I can't decide whether comparing OC to the ruthlessness of business like that is insulting to OC or not. When big business comes after you, you lose everything. When OC comes after you, you at least have a chance to fight back.


Why would you be surprised that the TCS or Cognizant or Accenture customer support rep in Bangalore would care one iota about security?

All they care about is minimizing average-time-to-answer and average-handle-time.


Speaking as a consultant, let me tell you that literally none of my clients over the past three years had a password reset authentication policy. Employee just calls IT, and they reset the pw on the spot with no checks whatsoever.

Keep in mind that pretty much everyone hates passwords, and a lockout constitutes a work stoppage. Adding any additional friction is resisted at every level from the C-suite to the janitor. I try to mitigate it by proposing self-service workflows, and that's popular, but the IT hotline reset is incredibly difficult to get rid of.


What is interesting here is that the hackers actively describe and share how they broke in, rather than EA setting the narrative.

This isn't very common I think.


There may be a bit of a desire on the hackers' part to humiliate EA. This isn't good PR for EA.


The reality is that it's tough to verify somebody's identity in remote work. The office used to be a walled garden, and now people seem to treat chat as a walled garden. We'll probably see more security incidents like this in the future.


Just do a video call with the employee when they request e.g. MFA resets. Have a policy that enforces that. Maybe have them show an ID.


Showing ID would work, but a video call? I've worked remotely for years - why would half the people in IT know my face from some other dude?


Even if the person in the video call doesn’t have an ID, the IT support person could:

- compare the person with an ID on file

- invite the manager or a colleague of the person into the call to confirm

- ask for PII (birthdays, address) that are on file

I assume someone assuming the identity would be more reluctant to show themselves on video. Furthermore, when asked for PII on the spot, they're likely not as prepared.


I guess you could compare the face with the Identity you have on file when the employee joined.


I like to think our policies would prevent this. Like you can’t just get an access token from support. And you have to know your employee network ID to even call support in the first place. And no one can arbitrarily spin up a VM (well, I say no one... no one that I work with, including the guys who always need them). But I’m sure there’s one smooth-talking person out there who could talk themselves into access if they tried and got to the right overworked and underpaid support person.


This article doesn't really talk about it but the Witcher 3 and Cyberpunk 2077 source code (including engine and whatnot) are up on torrents now. I suspect this one will be as well in the coming weeks/months. For those curious and with nothing better to do than to have some +ORC styled martini-wodka and spend an evening reading source code, feel free to google at your own leisure. It shouldn't take too long.


Afaik the Witcher 3 and Witcher 3 RTX archives are passworded and have been around since mid-March without the passwords surfacing, so you can't really browse those. Cyberpunk recently leaked without a password, even though it's included in the passworded dump as well.


There is a marketplace for stolen cookies? Anyone know where it is? Seems interesting.



Reminds me of using the machine email addresses for Zendesk tickets and other helpdesks where you can reply to the ticket via email to join domain-restricted services like slack, notion, etc. So many clever ways to get in without an explicit technical vuln.


Hmm, a lot of online services have smart heuristics to detect suspicious connections, even if you have a cookie. Even something as simple as verifying the geo of the IP and the client type could've been useful here. I'm not sure if Slack has this functionality.
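A minimal sketch of that kind of check, assuming a hypothetical lookup table standing in for a real GeoIP/ASN database:

```python
# Sketch of a login-anomaly check. geo_lookup() is a hypothetical stand-in
# for a real GeoIP/ASN database; the table below is made-up example data.
GEO_DB = {
    "198.51.100.7": ("US", "AS64500"),
    "203.0.113.9": ("RO", "AS64511"),
}

def geo_lookup(ip):
    return GEO_DB.get(ip, ("unknown", "unknown"))

def suspicious(known_ips, new_ip):
    """Flag a session cookie presented from a country/ASN combination
    the user has never logged in from before."""
    seen = {geo_lookup(ip) for ip in known_ips}
    return geo_lookup(new_ip) not in seen
```

A service would then step up to a re-login or 2FA prompt when the flag trips, rather than hard-blocking, to avoid locking out travelers.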


Pretty trivial to route your request through an IP address in the same city as the victim.


Possible, but not exactly trivial to do the same with an IP address in the same city and from the same ASN as the impersonation victim.

There are definitely some mitigations servers can use to reduce the likelihood of success of this kind of attack.

Now I want to go build an auth system. :D


How would mobile networks fare with something like this if you're moving around?


Plenty of webapps already do this. You'll usually get asked to re-login or to provide a 2fa code to proceed. Very annoying when you hop between networks often.


Slack has an audit trail for some accesses, at the least, so EA could have done the GEOIP on their side.


Does using a stolen cookie allow you to bypass account 2FA in Slack? Or did EA's slack not enforce 2FA for all accounts?

I feel like this is an important question because if the former, the security lapse here is on Slack; if the latter, it is on EA.


> A representative for the hackers told Motherboard in an online chat that the process started by purchasing stolen cookies being sold online for $10 and using those to gain access to a Slack channel used by EA.

How much to bet that a browser extension was involved? Man, the whole landscape is the new "Download this shareware EXE file" but with a thin veneer of legitimacy "It's hosted by Google/Mozilla/Microsoft, it can't be bad!".


Reminds me of the good old days when HttpOnly didn't exist and basically every website was vulnerable to XSS somewhere.
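For reference, a minimal Python sketch of a server marking a session cookie HttpOnly (plus Secure and SameSite), so page scripts, and therefore XSS payloads, can't read it via document.cookie:

```python
from http import cookies

c = cookies.SimpleCookie()
c["session"] = "opaque-token"        # example value only
c["session"]["httponly"] = True      # hidden from document.cookie, so XSS can't read it
c["session"]["secure"] = True        # only sent over HTTPS
c["session"]["samesite"] = "Lax"     # not sent on cross-site POSTs
header = c.output()                  # the Set-Cookie header to emit
```

Note that HttpOnly only stops in-page script reads; malware that can read the browser's cookie store on disk (the likely source of the $10 cookies here) bypasses it entirely.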


I have no idea at all, but I would assume that browser extensions are not just an exe with unlimited ability to run arbitrary code, but rather I would think they are sandboxed to have limited scope... right?


They aren't sandboxed at all. Almost all of them request full access to every page you visit. At least an exe might not be able to view and manipulate the DOM inside your browser.


This is semi-correct, but basically leans in the right direction.

A bit more detail: browser extensions have all manner of permissions and authorizations, generally specified in the manifest file. A lack of permissions translates to constraints on the extension (often in the form of the browser refraining from populating one of the `window` child objects in the context of that extension's JS code).

Users downloading extensions can see what the extension is authorized to do. Problem is that much like the security ecosystem for mobile apps, end-users have no idea what any of those permissions mean and lack the technical competency (on average) to determine whether a given permission is high-risk or low-risk (how many end-users even know what CORS or XSRF are, let alone why they matter?).

Plus, some of the manifest permissions are basically "keys to the kingdom"-level open permission that allow arbitrary behavior within the limits of the browser sandbox itself. One of the reasons Google is changing permissions in Manifest v3 (as discussed here: https://news.ycombinator.com/item?id=27441898) is to nerf some of those permissions (in particular, the Manifest V2 scripting permissions allow an extension's behavior to change by changing some script on a remote site, so an extension that was well-behaved when added to the Chrome Web Store can have its backing server changed or compromised later and become an attack vector).


Even with Manifest v3, if you suddenly have access to a browser extension (say, you bought the company making it) with global "inject a known content script that's been vetted by the Web Store and included in the extension bundle" permissions... it's possible to add obfuscated code in that content script that provides RCE on the page to the attacker.

Imagine something like a modified form of jQuery that adds and immediately removes a script tag when provided specific inputs - inputs that would be expected to be controlled by the results of a web-based JSON endpoint that's core to the extension's functionality. Is the web store going to code review an entire minified jQuery? Can an automated system detect it? What if attackers hired someone who won IOCCC? The attackers have the upper hand here and can pull the trigger on any website they choose.

Which is why Google might say Manifest v3 is all about security, but unless attackers get lazy and let their injectors get fingerprinted, it's all just airport-style security theater... that happens to strangle ad blockers...

At any rate, cookies and other methods of XSS will begin to leak more and more, and defense in depth will become the paradigm of the day. Assume your employees' browsers are compromised and plan accordingly.


That's the thing though... 90% of attackers are bad at their job. Even if it traps mostly people who aren't investing IOCCC levels of cleverness into the problem or buying companies for the purpose of getting malicious extensions into the Chrome web store, it makes the system safer for the average user. Certainly safer than the system is if any script kiddie can just use the manifest V2 permissions to turn a clock app into a keylogger.

And if the model is as breakable as you claim, I don't worry about the fate of ad blockers... after all, nothing prevents them from using the exploit you just described, right? Use a modified jQuery to add and remove a script tag to figure out what needs to be blocked?


As far as I can tell, you could, and will still be able to, make ads invisible based on dynamic data, but under Manifest v3 you would not be able to prevent the ads from downloading (and sharing private information, including your FLOC cohorts) based on dynamic heuristics, because Manifest v3 makes it impossible to use anything other than a simple, limited-length blocklist to prevent a network request from being made. See: https://developer.chrome.com/docs/extensions/mv3/intro/mv3-o...

It's a decrease in privacy being sold as an improvement in security and performance. Could they have instead implemented a persistent super-sandbox for network-blocking heuristic code that addresses the performance and security issues, while still enabling dynamic network blocking? Probably! But why would they, when the primary use case is heuristic-based ad blocking?

And I suppose you're right - part of any defense in depth is reducing the number of first-level successes. And it's very possible it was script kiddies who sold the cookies to the actual attackers in the EA case. But there's more going on here than just security.


If the extensions have to request permissions for "full access" to the page, doesn't that mean they're sandboxed?


It doesn't, in practice, if they all ask for it and people are conditioned to always grant it.


I suspect that these hackers are disappointed by how little they earned from stolen EA info, assuming they earned much of anything. This is why hackers target bitcoin exchanges so much, because that is where the money is.


The value might not be in the perps selling stolen IP.

If they can make new game builds, they can trojan them and hand them out as bootlegs. Boom thousands more intrusions.


Modding a game to add malware is easy. The source code wouldn't even be helpful since your malware is completely separate as opposed to having to interact with the game engine.


FIFA 21 is uncracked, the source code could potentially help with that step


How did they use a 10 dollar cookie to get into the slack channel?


The cookie is an authenticated session token for the EA Slack. They put the cookie in place on their machine, and as far as Slack is concerned they are the person the cookie came from.
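The mechanics are roughly this: a hedged sketch of a typical server-side session store, not Slack's actual implementation:

```python
import secrets

sessions = {}  # session id -> username; lives on the server

def login(username: str) -> str:
    """After a successful password/MFA check, mint a random session id.
    The browser stores it as a cookie and sends it with every request."""
    sid = secrets.token_hex(16)
    sessions[sid] = username
    return sid

def whoami(cookie_value: str):
    # The server never re-checks the password here: whoever presents
    # the cookie *is* that user, which is exactly why stolen cookies sell.
    return sessions.get(cookie_value)
```

The id itself is unguessable, so the practical attack is stealing it off the victim's machine rather than brute-forcing it.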


It's one cookie, Michael. What could it cost, $10?


IMHO tricking an employee to provide a login token is a con, not a hack.


It's social engineering, which is definitely a hack/technique used in hacking.


Actually I am with the parent on this one. We call it "social engineering" because it sounds cool, but I think that muddies the waters for laypeople.

When people hear "hackers used Slack to break into EA Games" they think of something from the movies, with people in a dark room sitting around a screen saying "I'm in" in a deep voice. This convinces people that hackers are pervasive, unseen, and wield godlike powers, clearly something they can't comprehend or do anything about, so no changes needed in their day-to-day life.

In comparison, if you were to phrase it as "some employees got conned into giving out their login info," that changes it to "wow, look at these guys, everyone knows you shouldn't give out your password, morons." Then the next time they need to give out their info they may hesitate for a moment, because they don't want to be considered the "moron" in that situation.

I don't think it will have a big effect, but it will help people become more aware that most of these "hackers" are mostly just con artists, instead of letting everyone live with the constant anxiety of godlike hackers being able to wreak havoc at will.


But no employees in this story were conned into giving out their login info.

Instead, hackers acquired enough stolen data to become a passable simulacrum of an employee in the text-based virtual space of Slack, convinced someone they were a coworker, and had that someone hand over the simulated employee's extended credentials.

"Hackers can appear to be someone you think you know" is exactly the amount of paranoia the public should have. Faking an authentic-looking request is the most common form of hack, whether that fake is the right 1's and 0's to an API, "Hey Bob, got a small problem..." over Slack, or "I'm the CEO, and I will have you fired if you don't fix this right now" in voice over the phone.


You really think that any average person, given a login cookie, is equally able to blindly infiltrate a company and successfully exfiltrate its core assets, as long as they can sweet talk a single IT person?


I really think we need more resources in law enforcement to fight this sort of thing. Yes everyone needs better security practices as well.


Anyone have a guess as to how those who sell EA Slack cookies for $10 got them in the first place?


If your employees’ browser cookies are being bought and sold on the open market and tech support is handing out 2FA tokens over chat then which IM software you use is the least of your concerns.


Of course, but on the topic of slack, shouldn't there be more security than a browser cookie required to get access?

I get that in normal scenarios this cookie wouldn't leave the machine, I get that it's very convenient not to have to log in all the time, I get that Slack people know this stuff a lot better than me... but it sure seems like a weakness to have a single point of failure where losing a cookie leaks your Slack channel. Isn't it?


Basically every website on the internet uses cookies to authenticate and verify sessions. At the end of the day, you'll always have a "single point of failure" when you need to restore remote state. Sometimes that's a password, sometimes that's a cookie.


MFA, PIN codes, OS or TPM or browser fingerprinting, cookies signed to your machine, IP address or IP range locks, re-auth over a short but not unreasonable interval, concurrent session lockout... I would think there are a few options besides just throwing your hands up and admitting failure when someone can just pay $10 for a plug-and-play cookie.

I don't know about a giant company like EA, but you could really screw with my company's plans if you spent any time on our Slack.

So now I need to consider that any malware could be (and probably is) looking for Slack cookies to exfil. So, no, I don't really believe there is always going to be just a single point of failure, or that it's an insurmountable problem.


Giving your cookies short timeouts, say 5 minutes, and including the client's IP address in the cookie are both mitigating techniques.
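A sketch of both mitigations together, assuming an HMAC-signed cookie (the secret key, field layout, and 5-minute TTL are all illustrative):

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical; a real key would be random and rotated

def issue_cookie(user: str, client_ip: str, ttl: int = 300) -> str:
    """Bind the session to an IP and a short expiry, then sign it."""
    expires = int(time.time()) + ttl
    payload = f"{user}|{client_ip}|{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def validate_cookie(cookie: str, client_ip: str) -> bool:
    """Reject cookies that are tampered with, expired, or presented
    from a different IP than the one they were issued to."""
    try:
        user, ip, expires, sig = cookie.split("|")
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{user}|{ip}|{expires}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return ip == client_ip and time.time() < int(expires)
```

The signature means an attacker can't forge or alter the expiry/IP fields, though (as the replies note) IP binding trades security against users who hop networks.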


ISPs (especially mobile ones) sometimes rotate IP addresses in as little as 4 hours. Relying on static IP addresses throughout a client session might be more secure but will result in people getting logged out very frequently.

Plus, even with the most strict filtering client IP addresses can always be spoofed.


Encoding the IP as part of the session is not a very common practice, as many people switch their devices between different Wi-Fi networks and mobile data.


Spoofing an IP address is easy. Receiving the packets sent to that address when you don't control that address - not so easy.


MAC address would work for this, no? Wouldn't solve the spoofing but solves the dynamic IP address issue.


MAC addresses are not exposed to web sites, and for good reason. It would be the mother of all supercookies.


Wouldn't a supercookie like that make things more secure though? You could even use it for fingerprinting to help combat things like spam.


MAC address cannot be seen past your local gateway.


MAC addresses aren't sent over the Internet.


Slack is used for long term communication. If I’m walking around town on my phone, I absolutely don’t want slack to get logged out every time I connect to some wifi network.


So you would be signed out of Slack every 5 minutes?


No, the client would need to renew the cookie every 2.5 mins (similar approach to DHCP leases). If the external IP has changed, the cookie should not be renewed. If it's expired already, it should not be renewed.

This won't work, though, as nobody wants to sign in once per day. That's too inconvenient.
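Sketched out, that DHCP-style lease might look like this (the 5-minute lifetime matches the numbers above; everything else is illustrative):

```python
import time

LEASE = 300         # 5-minute cookie lifetime
RENEW_WINDOW = 150  # renew once half the lease has elapsed, DHCP-style

class Session:
    def __init__(self, client_ip):
        self.ip = client_ip
        self.expires = time.time() + LEASE

    def renew(self, current_ip):
        """Called by the client every ~2.5 minutes. Refuses if the
        cookie has already expired or the external IP has changed."""
        now = time.time()
        if now > self.expires or current_ip != self.ip:
            return False
        if self.expires - now <= RENEW_WINDOW:
            self.expires = now + LEASE  # extend the lease
        return True
```

A stolen cookie then goes stale within minutes unless the thief can also present the victim's IP and keep renewing.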


If you are using SSO (to enforce U2F) and password managers (really, basic levels of enterprise auth competence in 2021) then signing in multiple times per day isn't a huge hassle because it's just a click and a tap.


For customers, yes, but I've worked at places where we required employees to re-auth daily (if not more frequently).


I worked at a place where re-auth was required every twelve hours, which was a convenient reminder of when you'd been working way too long.


No, Slack would reauthenticate every 5 minutes. It doesn't mean that the user needs to do anything.


If you closed Slack for more than 5 mins, you’d get signed out. Users would hate it, which is why nobody does this.


So, the alternative should be "don't trust anything in slack because someone can just get your cookie and log in as you"?

Maybe there is a middle ground?


The middle ground is the SSO reauth interval, where companies can decide that for greater security they'll make you reauth every 8 hours, daily, weekly, etc. to balance it against user pain. Companies already do this today.


Who closes slack for more than 5 minutes though? My desktop slack is pretty much running 24/7.


[flagged]


I would assume that same kind of attack vector would work via Teams, Discord etc.


I would not make that assumption if they use proper security practices.

You can make cookies unusable (or harder to use) by anyone but the initial owner in a few different ways. You could include a hardware or IP fingerprint in the cookie that has to be validated server-side, for example. You can time out cookies after an hour. Etc.


What do you use instead?


Nothing, I just don't like slack.


Carrier Pigeons.


Ahh. CPaaS


It uses IPoAC.


smoke signals


I beta test for a musical instrument manufacturer. Getting into their Slack channel felt completely arbitrary. I never entered a password; I just clicked a link in whichever browser I was using. What? Is that normal?? I've never used Slack prior to this.


I believe Slack can send one-time-use login links via email. Given the short timeout and one-time use on these links, I would consider this more secure than a password-based login. Most users just re-use passwords or use simple passwords.
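A rough sketch of how such one-time "magic link" logins are typically implemented (the 15-minute TTL, URL, and storage are hypothetical, not Slack's actual values):

```python
import secrets
import time

MAGIC_TTL = 900  # hypothetical 15-minute validity window
pending = {}     # token -> (email, expiry); lives server-side

def send_magic_link(email: str) -> str:
    """Mint an unguessable single-use token and build the link
    that would be emailed to the user."""
    token = secrets.token_urlsafe(32)
    pending[token] = (email, time.time() + MAGIC_TTL)
    return f"https://example.test/login?token={token}"

def redeem(token: str):
    """Exchange the token for a logged-in identity, exactly once."""
    entry = pending.pop(token, None)  # pop => one-time use
    if entry is None:
        return None
    email, expiry = entry
    if time.time() > expiry:
        return None
    return email
```

The security rests on the token being unguessable, short-lived, and burned on first use, plus on the attacker not controlling the email inbox.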



