A more poignant elegy to the modern landscape of compliance theater I have never seen:
> Security Standards. Okta's ISMP includes adherence to and regular testing of the key controls, systems and procedures of its ISMP to validate that they are properly implemented and effective in addressing the threats and risks identified. Such testing includes:
> a) Internal risk assessments;
> b) ISO 27001, 27002, 27017 and 27018 certifications;
> c) NIST guidance; and
> d) SOC2 Type II (or successor standard) audits annually performed by accredited third-party auditors ("Audit Report").
I don't think storing AWS keys within Slack would comply with any of these standards?
We’ve been monitoring this internally, as customers of an Okta-like service.
I’ve also been closely monitoring the responses from our CTO and VP of Security after someone from our DevOps team posted a link to the Verge article in Slack this morning.
Which brings me to this inquiry: How are your orgs responding to this? We have a dependency on an Okta-like provider, and my first thought when reading this news was “you know, I wonder if we should give our shit a sanity check”. Someone beat me to it and proposed it in Slack, but the idea was turned down by our SecOps team.
I moved over to Azure AD this morning (we only have a few devs and were already using Azure DevOps so this was doable). I requested that Okta cancel our account and let them know the reason was the potential data breach and their CEO's response on Twitter. Okta's response was that we signed an MSA agreement and that cancelling isn't an option, nor is stopping the fees.
Auth0 is run as an isolated subsidiary in its own infrastructure, with the old CEO still overseeing operations.
Due to the massive difference in Okta & Auth0's implementations I don't see that changing anytime soon.
Sounds about right. Here there will be a staff security training symposium that runs everyone through a training course bought in from the lowest bidder and only tangentially related to the issue, followed by a self-congratulatory management meeting, and that will be the whole issue resolved to satisfaction.
And yet Okta is the ultimate in box-ticking technology. They are bought to tick the boxes. So what happens now that the box tickers are not ticking the boxes?
Usually a mass exodus to a similar service with the same guarantees resulting in months of capacity problems as they try and scale out from customer influx.
Mostly picking apart logical inconsistencies in the language and/or re-emphasising the already disclosed info. It does not seem like they were able to produce any hard collateral to contradict anything Okta stated, which probably means it's at least ballpark accurate.
You're not going to be even aware of the majority of the channels. In my experience having ~1.5 channels per employee sounds right across the company. I've started channels for: specific incidents, personal huddle, lunchtime game organisation, limited notifications target, etc. Also we don't know if they count archived channels or not.
Yep, same here. I create many topical channels to discuss some narrow-ish topic and archive them after the matter is concluded. Some of those channels just live a few days or even just hours - others live for several weeks but with several days of silence in-between messages. If I need to look something up I can grab the channel from the archive and won't have messages for unrelated topics in-between.
Channel sprawl is very normal. While my day job has a broad scope in the org, I’m a “member” of over 900 MS Teams. I’d guess that 3% are active and I interact with 0.
At a previous job, a new Slack channel was spun up by automation for every incident, no matter how minor. So yeah, a few thousand doesn't sound that weird to me.
A lot of people want to have full control so rather than create a channel called "Project Phoenix" in a team called "Operations" they'll create a whole new team, and use the general channel.
Much like the OP here I'm in hundreds of teams, and almost all of them do all their talking in #general and they wonder why people never respond to notifications.
The Teams UX and general paradigm is awful. Right now I'm in 3 group chats and two channels (in two Teams) discussing the Okta incident. Huge overlap of them, but not 100% so some people aren't getting all the information.
It's not a bad idea to create channels for incidents. Ever tried to keep an incident timeline in a Jira ticket? It's absolutely horrendous. We used to automatically create a Slack channel whenever a Jira incident ticket was created, and post it into the engineering channel with a message like "SEV1 incident, please join channel #INCIDENT-21". Slack is real nice for posting graphs, links, screenshots, going off on different investigation threads, pinning a status that gets constantly updated, calling out for people to join the channel using @you-hoo and running automations using bots. Jira absolutely sucks for live incident handling.
Of course, you still wouldn't have 8500 channels. That's a lot of incidents.
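For anyone curious, here is a minimal sketch of that kind of incident automation using Slack's Web API. The channel naming, the #engineering announcement channel, and the token handling are illustrative assumptions, not how any particular org actually wires it up:

```python
# Sketch: create a dedicated Slack channel for a new incident ticket and
# announce it in a shared engineering channel. Assumes a bot token with
# channels:manage, chat:write and pins:write scopes; names are placeholders.
import os
from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def open_incident_channel(ticket_key: str, severity: str) -> str:
    # Slack channel names must be lowercase, with no spaces.
    name = f"incident-{ticket_key.lower()}"
    channel = client.conversations_create(name=name)["channel"]

    # Pin a status message that responders keep editing during the incident.
    status = client.chat_postMessage(
        channel=channel["id"],
        text=f"{severity} incident {ticket_key}: investigation starting.",
    )
    client.pins_add(channel=channel["id"], timestamp=status["ts"])

    # Announce in the shared engineering channel so people can join.
    client.chat_postMessage(
        channel="#engineering",  # placeholder; a channel ID also works
        text=f"{severity} incident, please join #{name}",
    )
    return channel["id"]
```

The appeal is exactly what the comment above describes: the pinned status, graphs, screenshots and investigation threads all live in one place, and the channel can be archived when the incident closes.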
Of note: in some of the earlier screenshots you can see they have the EC2 Instances page open in their tabs - that's a bit concerning, why does a support engineer need AWS EC2 access?
If it's just a tab heading, we don't know if they have access or not. You can always open that page as long as you can log in. But it may show "you don't have permissions to display the instances". If you're thinking instead "why does a support engineer need AWS access" - CloudWatch metrics/logs come to mind.
The LAPSUS$ post suggests that they queried the AWS keys out of Slack. So the support engineers just have access to Slack, and Okta engineers were dumb enough to put those keys in Slack.
I'm incredulous that an auth-focused company would do this without someone freaking out. Even my much smaller SaaS companies would react quickly to revoke and rotate these if this happened.
The things that go on at small SaaS companies would horrify many. One SaaS company I worked for had a "god-mode" password hard-coded into the source code. It was visible in plain text in the source, and you could log in as any customer if you knew this master password. Of course, this was not logged anywhere. When employees left, they'd change the password and deploy a new build.
Looking closely at that, I'm suspicious that LAPSUS$ is playing up what were perhaps trivial or ephemeral keys that were non-sensitive... e.g. test/dev/debug instances etc.
The reason I think that is because if they really had keys for production machines it seems very unlikely they wouldn't have used them to produce some more damaging collateral than they've presented.
Frequently, support teams have lab environments. Hopefully, it'd be an account per engineer, but at least isolated from production. If software engineers access these when reviewing issues that have made their way into bug reports, then this is potentially a vector that organizations should be concerned about. Attackers will be happy to make 20+ pivots, so even an isolated AWS account for a support engineer is a nice base.
Can you open the web console with just an access key? My impression was that you could only use that to act through a CLI tool; at least officially, you need to be (or act as) a user with console permissions to use the web console directly?
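For what it's worth, an access key pair is a programmatic credential; here is a minimal sketch of how it gets exercised (key values are placeholders), as opposed to the console sign-in form, which normally wants an IAM username/password or a federation sign-in token:

```python
# Sketch: AWS access keys authenticate API/CLI calls, not the web console
# login form. Key values below are placeholders.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIAXXXXXXXXXXXXXXXX",
    aws_secret_access_key="...",
)

# Cheap, read-only call that tells you which principal the key belongs to
# (and therefore roughly what blast radius a leaked key has).
print(session.client("sts").get_caller_identity())

# What the key can actually do is determined by IAM policy; for example,
# this raises an error unless the key has ec2:DescribeInstances.
print(session.client("ec2").describe_instances())
```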
I remember hearing a while ago that certain Telegram channels would be blocked on iOS devices due to Apple's content policy; that's what this seems to be about. Update: Yeah I can view them on desktop just fine.
> Update: Yeah I can view them on desktop just fine.
Yeah, this is the fabled content moderation block for App Store distributed apps. The Mac App Store version of Telegram also blocks it, but the direct download dmg does not. I think the direct download apps are also updated more frequently, but I could be wrong on this.
I don't understand how they can say "unsuccessful attempt to compromise the account of a customer support engineer", then say "Following the completion of the service provider’s investigation, we received a report from the forensics firm this week. The report highlighted that there was a five-day window of time between January 16-21, 2022, where an attacker had access to a support engineer’s laptop. This is consistent with the screenshots that we became aware of yesterday.", and then there are the screenshots of the attackers looking at Okta's support portal.
It is really clever wording, but it is possible for the statements to be true, while being deliberately misleading. What they initially detected, and what the 3rd-party investigation found, were two different things. Okta initially "detected an unsuccessful attempt" - the successful attempts were not detected initially but the detected event did lead to an investigation.
Now, JUST this week (presumably, in the last 72 hours to explain why disclosures have not already been sent) "we received a report from the forensics firm this week. The report highlighted that there was a five-day window of time between January 16-21, 2022, where an attacker had access to a support engineer’s laptop."
It's misleading because grammatically, what one would usually say in this situation is something like "Okta detected what it believed at the time was an unsuccessful attempt", because the statement's narrative is set in the present - and we now know the attack (not "attempt") was successful. Wording it as they did in their statement obfuscates the events that took place, and certainly reads like a deliberate attempt to downplay the severity of the breach.
I have a bunch of screenshots on my laptop that I take for reasons like attaching to tickets and sharing on Slack. Some would be very sensitive if shared outside the company. If the attacker had physical access to the laptop, that explains it.
If somebody uses my laptop, my Gmail account is not compromised; I'm being dolphined. Of course 5 days is quite a long time, but this is just to clarify what you didn't understand.
If I use your laptop to get access to your Gmail isn't your Gmail account compromised? I might not have access to your Gmail username and passwords (and MFA), but I can read your email, I can send email as you, etc etc. I feel like I have compromised your gmail account.
If I steal your secure token and log into you through a cloned browser session and access your gmail have I compromised your gmail? It feels like it.
Maybe it is just a distinction without a difference.
The access is transient and you retain ownership of the credentials and account, because the credentials were not compromised (just the programs/browsers with pre-existing auth). Though with physical access it's probably only a matter of time before local admin passwords are brute forced and access to keychain/browser saved logins is inevitable?
> If I steal your secure token
It's more like you just logged in using your secure token and then someone stole the device and took advantage of that. I think we can all agree there's a difference between having access to the laptop and access to the account without the laptop.
> I think we can all agree there's a difference between having access to the laptop and access to the account without the laptop.
There is a difference, but that difference is not the one you seem to be implying. An account can be compromised even if it hasn't been fully and permanently taken over. The temporariness of an account being compromised does not mean the account was not compromised.
You can make a distinction and clarify that the account was compromised but that the account credentials were not. However if you extend that to saying that the account was not compromised when attackers did have temporary access, then you are simply lying.
If someone has access to your email account, they more than likely will be able to password reset quite a few accounts (I'd start by searching your inbox for which banks/brokerages you use and take it from there).
"had access to a support engineer’s laptop" is very vague, they could have:
1. some kind of remote access to the support engineer's session on that laptop
2. physical access, no login
3. physical access as a different user
4. physical access, logged in as the support engineer
If I have access to your laptop, logged in as you, and you have Gmail open in a browser, then your Gmail account should be considered compromised. (e.g. I could set up a forwarding address in your Gmail settings, set up a POP/IMAP password, steal your session/remember me cookies, install some dodgy software which makes sure I have remote access to your laptop in the future, etc..).
If someone has access to your laptop with a logged-in Gmail account, they could change your password and log you out of your other devices, effectively gaining total control of your account and locking you out.
How is that lots more detail? Your post is only about whether or not CF's Okta account has been compromised, not about what really happened for all Okta customers.
They give more technical details than the Okta post and probably even the Okta customer contact.
Eg.
> Cloudflare reads the system Okta logs every five minutes and stores these in our SIEM so that if we were to experience an incident such as this one, we can look back further than the 90 days provided in the Okta dashboard. Some event types within Okta that we searched for are: user.account.reset_password, user.mfa.factor.update, system.mfa.factor.deactivate, user.mfa.attempt_bypass, and user.session.impersonation.initiate. It’s unclear from communications we’ve received from Okta so far who we would expect the System Log Actor to be from the compromise of an Okta support employee.
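A rough sketch of what that kind of polling can look like against Okta's public System Log API follows; the org URL, token handling, and the hand-off to a SIEM are assumptions here, while the event types are the ones Cloudflare lists:

```python
# Sketch: poll the Okta System Log API for suspicious event types and hand
# the results to a SIEM / long-term store. Org URL and token are
# placeholders; the event types are those named in the Cloudflare post.
import os
import requests

OKTA_ORG = "https://example.okta.com"  # placeholder org URL
HEADERS = {"Authorization": f"SSWS {os.environ['OKTA_API_TOKEN']}"}

EVENT_TYPES = [
    "user.account.reset_password",
    "user.mfa.factor.update",
    "system.mfa.factor.deactivate",
    "user.mfa.attempt_bypass",
    "user.session.impersonation.initiate",
]

def fetch_events(since_iso8601: str):
    # The System Log API accepts a SCIM-style filter expression on eventType.
    filter_expr = " or ".join(f'eventType eq "{t}"' for t in EVENT_TYPES)
    resp = requests.get(
        f"{OKTA_ORG}/api/v1/logs",
        headers=HEADERS,
        params={"since": since_iso8601, "filter": filter_expr},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # list of log events; ship these to your SIEM

for event in fetch_events("2022-01-01T00:00:00Z"):
    print(event["eventType"], event.get("actor", {}).get("alternateId"))
```

The point of copying these events out every few minutes is retention: as the quote notes, the Okta dashboard only gives you 90 days, so an incident discovered late needs logs stored elsewhere.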
Also a fun exercise in "Show don't tell" when it comes to transparency.
Okta writes "We are deeply committed to transparency" but shows none in their blogpost in contrast to CloudFlare, that doesn't write anything about transparency but displays a lot of it.
A bit like queuing for your internet provider's customer support for 3 hours and hearing "We care about customer satisfaction" every five minutes.
I thought that a CF person (can’t remember if it was Prince or not) said in the main HN thread yesterday, that CF uses their own homegrown SSO internally, and Okta externally, but this blog article seems to imply the opposite…
Anyway, I likely misinterpreted yesterday’s comment..
I believe @eastdakota is saying that Cloudflare has their own homegrown SSO internally, but they do not make that available externally to their customers (it's not a public product that one can buy). I don't think that the tweet was saying that they use Okta externally.
I gotta say, I don't make technical decisions at the "we're using CF" level, but I've been incredibly impressed with their track record over the years. I often evangelize blog post writing and transparency at my company and this is exactly why. What a way to build trust. It's basically free advertising for technical folk by "putting their money where their mouth is."
I'd love to see more of this in the industry. CF is, IMO, industry leading here.
Their blog posts are great, but for a service where reliability really really matters, they've had a few too many global outages.
Especially considering their core proxying service doesn't have any requirement for a globally consistent datastore, there is zero excuse for a global outage.
Sorry. Should have been clearer. There’s a screenshot showing a password reset about to happen for a specific person. We suspended that account. In parallel, we were using our own logging to look for any password reset/MFA change over the last four months and just went ahead and forced a reset for all those users.
> Support engineers are also able to facilitate the resetting of passwords and MFA factors for users, but are unable to obtain those passwords.
Very ambiguous statement, not really fitting in with the whole "deeply committed to transparency" image they are trying to project.
What does "facilitate" really refer to here? If it was just triggering it, they would have said so, presumably. And why is only passwords mentioned as what couldn't be obtained and not the tokens from MFA as well, does that mean they could obtain those tokens?
I wonder how it fits in with the groups own statements that they still have active access. Gonna be interesting to see what Lapsus$ replies to this statement.
In what way would an Okta user be unable to trigger the reset while a support engineer could? If they are unable to access the Okta website where the password reset gets initiated, they are also unable to access the very same Okta website where the new password would be set.
And why use the more general word of "facilitate" when they could have been specific and say "trigger reset password flow" or similar.
It looks like there are some orgs where "forgot password" is restricted and has to go through an internal site admin - or an Okta CSE. There's not a "Reset password" link on cloudflare.okta.com for instance, just a link to contact their internal support.
Regardless, "reset" can mean different things and it definitely seems like they're being cagey here by intentionally using imprecise language.
User's laptop is lost / stolen. User notifies supervisor. Supervisor (with admin authority on the account) notifies okta support and asks that the password be reset.
E.g. - You have Gmail gated behind Okta using MFA, and the lost device is the user's MFA device (e.g. a phone using TOTP).
If the user has no session currently logged in on another device, they won't be able to create a session without their MFA device, and therefore won't be able to access their email.
How likely you are to hit this depends on org settings, like gmail session time, okta session time, mfa requirements, etc.
Ideally - they'd have backup codes somewhere... but most users won't (or they'll do things like store them in their email...)
As an enterprise customer, in the scenario you provided, I’d prefer if the vendor makes this my problem, by forcing me to reset the individual user’s MFA configuration, rather than try to be helpful in such a way that requires support personnel have the ability to do so.
(And if I lock myself out, make it very very expensive for me to prove my identity to recover access)
There are hundreds of call centers where all the workers do all day is fill out password screens. The exact nature about what they do varies by client and is probably subject to NDA.
Password reset is the soft underbelly of most companies.
what I don’t get is, if support can’t do anything but “reset” which doesn’t expose the ability to gain access… how is support helping users? If a user can access their email, then they can reset themselves — surely?
The idea that support can just trigger a reset email makes little sense. Perhaps Okta has some complex mechanisms that I am not aware of, but if this was any system I’ve ever worked on, an employee could take over an account if they so desired.
As much as I’d like to forget, I’ve done a lot of support and much of that was authentication issues. I can certainly imagine in a corporate environment that some contingent of users would prefer to be hand-held through the reset request process, but all of them?
My expectation (and experience of other similar systems) was that Okta would not allow password resets by anyone but the organization administrators. However, that doesn’t appear to match up with what has been disclosed here.
It's easy to find a large segment of enterprise customers where handholding password resets is the only option because of an enterprise policy which is getting misapplied. Further, these support requests are probably unlimited in support contracts raking in significant income for Okta, so it becomes the default for customers not to worry about it. Any time companies are selling a product plus enterprise offerings, it becomes much more opaque as to what the complete product is.
When organizations make the on-prem to cloud jump they frequently are trading off oversight and experience. Many tenured internal teams have been broken apart by these types of migrations because they are ultimately sold to management as cost-saving endeavors. These folks would make your observation about organization admins controlling resets, but they are gone.
> The report highlighted that there was a five-day window of time between January 16-21, 2022, where an attacker had access to a support engineer’s laptop. This is consistent with the screenshots that we became aware of yesterday.
You can take my laptop from me and have access to it without having compromised my Okta account. In that scenario you wouldn't have my Okta password or my MFA - what you would have is transient access to everything I'd already auth'd to until it times out and requests that you auth again.
I wouldn't have realized that unless I'd read far enough to see that Entity A did compromise Account B. Since the author didn't define what an attempt in this context means, I think it would be proper to assume that attempt meant the overall effort to compromise the account.
Ahhh you reckon they are being sneaky with language? So something like:
- unsuccessful login occurs
- successful login + screenshot + various nefarious actions
- (some time later)
- attacker's access locked down
Implying that they detected an unsuccessful login, which doesn't mean that they prevented the unauthorized access that followed - which would make what they said technically correct but kinda useless for us users. That is something I did not originally consider, but wow ... maybe. I hope not. I mainly want answers about what is affected, and whether Auth0 was compromised.
Don't believe for a moment that this misdirection is unintentional. It's one big reason you might have contract workers instead of employees in this role, actually.
> Where that other processor fails to fulfil its data protection obligations, the initial processor shall remain fully liable to the controller for the performance of that other processor’s obligations.
IMHO it's more of an empty statement. They never pin down what end-state is being discussed. So we can't evaluate if "additional steps are needed" to achieve that (unspecified) state.
It could be that the speaker is intentionally equivocating. I.e., knowing that the audience will assume some particular end-state. But if cornered, the speaker can claim (lie) that he implicitly meant a different end state.
>The Okta service has not been breached and remains fully operational. There are no corrective actions that need to be taken by our customers.
despite an overwhelming preponderance of damning evidence from twitter (as well as the hacker themselves) you've somehow managed to find yourselves secure instead?
Christ's whiskers, that's some impressive doublethink. It's also an excellent opportunity to fall on a sword that gives future attackers --hat colour dismissed-- an immediate incentive to simply publish regardless, as you don't appear to be acting with very much good faith. If this sort of an attack is a carrot, you've clearly shown a predilection for the stick.
If there is one thing you want from a 3rd party auth provider, it's trust - this is not the time to play word games.
I'd have far more faith in them if they were transparent about what had happened, what they're doing about it, and how they will make sure it can't happen again.
Instead, they are being weasels - I for one, will not be using their services again, and this behaviour is the reason why.
Here's another example from a couple of years back where they used some really weaselly language to claim they weren't vulnerable to a CVE (spoiler: they were):
Monoculturing the west's authorization/identity stack - Is that something we want to do? Authz/Authn as a service is great, until it isn't.
There are pros and cons both ways. You're centralizing the risk, but also centralizing the effort for quality. Perverse incentives however causes loss of quality: saying you fucked up will lose you customers, and some of that sweet, sweet stock value.
The fact that you can work for these companies without security clearance seems a bit insane. The more they grow and centralize, the more of a national security risk they become. It's not hard to get hired at Okta, and when you're in, you're in. At what point is not solving a security bug at Okta that you've discovered and selling it for monero more lucrative than actually fixing the bug? I realize that not everyone is like me and has strong values that prevent them from doing so, and that scares me. If you're talented, it's super easy to backdoor a system and get around code review.
Maybe I’m just an unimpressed security professional, but I’ve still not seen evidence I’d call a breach. At least not a significant one, if you want to argue semantics.
Workers at organizations get compromised all the time. This doesn’t mean their systems/products are compromised.
I do security (albeit not CISO or compliance-style, but commercial anticheat), and in my opinion, if a support agent's account was used by a third party to view anything about my account without permission - any undisclosed email address or name, their system was compromised and it is a data breach.
IMO, support agents also should not have the ability to view or access a customer's account without some form of confirmation from an existing logged-in admin that support may view the account - a confirmation that is time-limited and automatically resets to opted-out by default.
Yeah, the screenshots they admit are real clearly show Slack, JIRA and AWS being open. What did the attackers see there? Were the customers whose data was viewed notified? How can Okta tell if that data is sensitive or not without talking to their customers?
A competent security response to this would have been "Yes, they compromised one of our support technicians. We've initiated an audit and are sending out e-mails containing all of the actions that support representative performed for each customer to that customer's administrator"
If through compromising those workers outside parties gain access to sensitive systems, and that situation is not promptly detected and corrected, then the system _is_ compromised.
Okta is not just a bunch of software, it's also staff and processes, and the result is a trusted service they provide to customers. If that service is compromised, it doesn't really seem to matter how?
> If that service is compromised, it doesn't really seem to matter how?
I hear what you're saying, but the how really does matter, and will change how customers perceive the issue and make decisions about how to react.
e.g. "databases were open to the Internet and all data has been siphoned" lands quite differently than "a staff member abused their privileges but the scope of abuse was limited to xyz".
If I'm a customer, it tells me a lot about what Okta needs to do next, and how much I should freak out right now. It's still extremely problematic that a staff member (1st or 3rd party) could abuse such privileges, and I immediately have questions about how those privileges were abused and to what actual effect, but it's a fundamentally different problem than other types of breaches.
How it happened doesn't change the fact that they have been breached.
If I was a bank and claimed that I haven't been robbed, an insider just transferred billions of pounds out of the bank and then fled, I think everyone would rightly say "What are you talking about, you have been robbed!"
It doesn't matter if it was done by a guy in a black and white stripey t-shirt, or if it was done by a rogue internal employee, a bank robbery is a bank robbery.
In fact, the ability of an internal staff member to transfer lots of money out of the bank probably signifies a more significant and systemic issue - particularly if I've lost my money and the bank refuses to acknowledge they have been breached/robbed (it was just an internal rogue staff member, not a robbery! our security hasn't been breached!).
A bit of a stretched analogy - but I'm sure everyone gets the point. Security isn't just about technical security - it's the whole process involved in making sure these things don't happen. A bank's 'technical' security might be great, but the bank would still be considered horribly insecure if a staff member can transfer any money out of an account. Equally, an auth service might be 'technically' secure, but the ability of a single rogue staff member to impose a lot of damage suggests more systemic issues.
> It doesn't matter if it was done by a guy in a black and white stripey t-shirt, or if it was done by a rogue internal employee, a bank robbery is a bank robbery.
I have to respectfully disagree.
Yes, the end result may be the same, but even in a bank robbery, the how matters, and will drive different behaviors from everyone involved: the bank, law enforcement, and customers of that bank.
If as a customer, I learn that a guy in a stripey t-shirt held a teller at gunpoint, my conclusion goes something like "that's a terrifying situation for the teller, and I hope they're ok". I'm probably not going to stop using that bank.
If on the other hand, I learn that there are systemic issues with bank security, and internal employees have been embezzling funds somehow, I'm probably going to think hard about whether this is a bank I want to do business with.
> Security isn't just about technical security - it's the whole process involved in making sure these things don't happen.
Yes, and when factors are involved that are out of the bank's control (e.g. a crazy person walks in with a gun), it might be fair to ask why the guy got inside to begin with, but the conclusions you draw about such an incident are far different than the conclusions you'd draw if internal employees were involved.
In case this wasn't clear from my earlier comment, I didn't mean to imply that an internal process issue makes any of this ok. But it does make it different than other types of breaches.
Bottom line: the how still matters, not because one type of problem is ok and the other isn't, but because the actions a customer should take / consider will be different depending on how the breach happened.
Yeah, I think we're on the same page. The primary focus of my original reply was that the "how" really does matter. I agree that this is still a breach regardless.
If you follow the Lapsus$ Telegram, you will see they are claiming they got AWS API keys from the corporate Slack. That might be more dangerous than accessing the support console.
I can see a Slack breach being far more damaging than policy should effectively permit it to be because plenty of people use it to share things they technically should not.
Without proper separation of duties to limit blast radius, it's just as damaging as a software vulnerability. It sounds like that's the real issue here: compromise of a support engineer led to far more access than should have been permissible.
IANAL, but this definitely seems like a breach of the Art. 33 GDPR notification obligation, as it meets the criteria of involving personal data (a list of users was exposed) and the 72-hour window has passed.
> Support engineers do have access to limited data - for example, Jira tickets and lists of users - that were seen in the screenshots. Support engineers are also able to facilitate the resetting of passwords and MFA factors for users, but are unable to obtain those passwords.
This means they could have reset anybody’s credentials and logged in. There would be a record of it if the audit logs are valid, but saying no action is needed may be a stretch.
> This means they could have reset anybody’s credentials and logged in
Does it? It specifically says "but are unable to obtain those passwords," which reads to me like they are able to trigger a password reset email to the user, but are not actually able to set the password themselves.
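For what it's worth, that reading maps onto two different operations in Okta's documented admin API; here is a sketch of the distinction as it appears in the public API - whether Okta's internal support tooling draws the same line is exactly the open question:

```python
# Sketch of "trigger a reset" versus "set a password" in terms of Okta's
# documented admin API. Org URL and token are placeholders; this says
# nothing about what internal support tools can actually do.
import os
import requests

OKTA_ORG = "https://example.okta.com"
HEADERS = {"Authorization": f"SSWS {os.environ['OKTA_API_TOKEN']}"}

def trigger_reset(user_id: str):
    # Sends the user a reset email; the caller never sees the new password.
    return requests.post(
        f"{OKTA_ORG}/api/v1/users/{user_id}/lifecycle/reset_password",
        headers=HEADERS,
        params={"sendEmail": "true"},
        timeout=30,
    )

def set_password(user_id: str, new_password: str):
    # An org admin setting the credential directly via a partial update:
    # far more dangerous, because the caller now knows a working password.
    return requests.post(
        f"{OKTA_ORG}/api/v1/users/{user_id}",
        headers=HEADERS,
        json={"credentials": {"password": {"value": new_password}}},
        timeout=30,
    )
```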
The text is a bit ambiguous (and probably on purpose, I'm sure it passed through multiple layers where multiple lawyers have reviewed it too). Okta says Lapsus$ were unable to "obtain" the passwords, but they didn't say they were unable to set their own passwords (for example). Neither is the MFA tokens mentioned, although they do mention MFA in the text.
What if they have access to some users' emails and then can selectively fire off password reset emails to them? It's probably less likely, but could be a vector.
They wouldn't need to have an Okta CSE account for that, since anyone can go to *.okta.com and fire off a reset password email for any account. (Then again it does look like that's not available for every org. There's not a "reset password" option on cloudflare.okta.com for instance. Why give a CSE the option to do a password reset in those cases, though?)
It still sounds to me like they're saying that they could set the password on an account to anything they wanted, but not retrieve an existing password.
This would be a big red flag to anyone who's observant and realizes that their password had been changed, but plenty of people would probably simply take it in stride and reset their password thinking they'd forgotten it. Either way the horse is out of the barn.
I don't understand all these crazy assumptions being made. A support engineer being able to set a password on any account is unthinkable, there is zero chance this is actually the case. Especially for a security company. It would be the equivalent of giving the doorman a key that opens everyone's apartment. Obviously support can trigger a reset, that you have to complete from your own e-mail account.
I've actually had a landlord give the keys to one of the tenants since they didn't have a doorman. The tenant in question was actually a registered sex offender too, but the judge is friends with the landlord so no harm done. Crazy world.
> A support engineer being able to set a password on any account is unthinkable, there is zero chance this is actually the case
I have no idea what an Okta support engineer can or can't do but I know that a regular account administrator can set an arbitrary password for a given user via the Okta UI.
That's pretty common and wouldn't lead to a compromise. It's a potential DOS vector, yes, but not a compromise. However, I wonder if Okta tech support is also able to change a user's registered email...
If they have access to some users' emails, they wouldn't need access to okta to reset the password and take control of the account. If a hacker has access to your email then you've already been pwnd.
Can't you request a reset without any special access? Many (but not all) applications have a publicly accessible log in page with a "Forgot your password?" link that takes an email as input.
I think the original reason I posed the question is because I used Okta a while back for one org and IIRC there was no password reset available to me; I had to contact one of our IT admins to do it.
It's been a minute since I was an admin in an Okta directory, but don't all resets use a self-service flow? In order to log in to someone's account, I think you need to compromise their email, too.
Super admins can set a temporary password. But they can also change a user's email address, disable password and email change notifications, lock out all other admins... They have complete control of the org.
It's pretty clear there is no super admin breach here. None of the screenshots show an admin interface - the app portal shot clearly shows a distinct lack of the "Admin" button. No user directory shots. There is a single image of a password reset confirmation, but those are also in documentation so it's hardly a smoking gun.
It seems far more likely that a support engineer had their laptop compromised and their limited access was used to try and make a mountain out of a molehill. Not like a ransomware group would have reason to over-exaggerate their access and ability, right?
Besides the whole "we dug through Slack and got your AWS keys", I think the worst-case scenario would have been removing MFA from a compromised account and having access to the e-mail to self-serve the password reset.
That, and maybe going through JIRA and seeing if anything was of any interest?
You can indeed change the email / login on a user's profile as an org admin, but we probably won't know if Okta support agents themselves can also do that.
More than a little concerning that they apparently investigated this in January, and while there's a lot of talk about what could or could not have happened, they don't seem to know what actually happened, and no customers were notified (but now: "[we are] identifying and contacting those customers that may have been impacted").
I saw the tweet a few mins after it was posted (thanks to the original article here on HN). We are a small startup and made the call to treat this like an exercise in failover process and ripped Okta out from our systems in around an hour. It was satisfying to see the process work. Original emails to staff were to wait for Okta to weigh in with their response before moving back to SSO but… not sure how that’s looking now given how they’re handling things so far.
Where does it appear like that? I have tried to follow this but the only thing I have seen shared is info from within Okta. I have seen email addresses of Cloudflare employees in screenshots of Okta, is that what you are referring to?
There were a lot of doomsday predictions in yesterday's thread before any real info had been shared, but it was always the more likely scenario that a support agent contracted through a vendor would have limited read access to their internal systems and wouldn't be able to cause any real damage.
That's just a reality of corporate disclosures, I'm afraid. No one is going to let something like this go to press without a full round of legal and PR editing.
I think there's two ways about it though. Most "good" companies (e.g. Cloudflare) will try to be transparent and proactive without taking on liability.
In this case it reads as though Okta are obfuscating the truth, and that's not good.
Besides the opening it doesn't appear to have moved very much. I wonder if LAPSUS$ have short positions open and are frustrated it's not moving, which is why they're posting responses to Okta and have updated their response with more information (as linked above).
> The Okta service has not been breached and remains fully operational. There are no corrective actions that need to be taken by our customers.
> Support engineers are also able to facilitate the resetting of passwords and multi-factor authentication factors for users, but are unable to obtain those passwords.
Probably not. It's really inconvenient for a large organisation to drop them given how embedded identity gets into everything. It's reputational damage but something worse [0] happened to OneLogin a few years back and they're still around.
I mean, ultimately, it is now up to Lapsus$ to confirm this. If everything they say (and the Cloudflare post, also) is true then I don't think anyone should be worried.
It's an interesting world we live in if the word of an organization that earns a living by stealing data and extorting companies is trusted more than the word of a public company.
I would say it's more about what each entity has at stake; the public company could be read as "an entity which sold something it didn't have the capability to offer and has a lot to lose if the issues are uncovered"
Against
"Someone that what has to lose at this point?"
Both sides have incentive to stretch the facts. But Okta has more accountability since if an Okta customer comes forward and says "We had credentials of several of our users maliciously reset during that time period and have the logs to prove it", then Okta is going to have a hard time of it.
If Okta comes up with proof that Lapsus didn't have the access they said they did, Lapsus is not going to have many "customers" complaining "you didn't crime that other company as much as you said you did"
1. We didn't compromise any laptop? It was a thin client.
2. "Okta detected an unsuccessful attempt to compromise the account of a customer support engineer working for a third-party provider." -
I'm STILL unsure how it's an unsuccessful attempt? Logging in to the superuser portal with the ability to reset the password and MFA of ~95% of clients isn't successful?
4. For a company that supports Zero Trust, Support Engineers seem to have excessive access to Slack? 8.6k channels? (You may want to search AKIA* on your Slack - it's rather a bad security practice to store AWS keys in Slack channels.)
5. Support engineers are also able to facilitate the resetting of passwords and MFA factors for users, but are unable to obtain those passwords. -
Uhm? I hope no one can read passwords, not just support engineers, LOL. Are you implying passwords are stored in plaintext?
6. You claim a laptop was compromised? In that case what suspicious IP addresses do you have available to report?
7. The potential impact to Okta customers is NOT limited, I'm pretty certain resetting passwords and MFA would result in complete compromise of many clients systems.
8. If you are committed to transparency how about you hire a firm such as Mandiant and PUBLISH their report? I'm sure it would be very different to your report :)
21. Security Breach Management.
a) Notification: In the event of a Security Breach, Okta notifies impacted customers of such Security Breach. Okta cooperates with an impacted customer’s reasonable request for information regarding such Security Breach, and Okta provides regular updates on any such Security Breach and the investigative action and corrective action(s) taken. -
But customers only found out today? Why wait this long?
9. Access Controls. Okta has in place policies, procedures, and logical controls that are designed:
b. Controls to ensure that all Okta personnel who are granted access to any Customer Data are based on least-privilege principles;
kkkkkkkkkkkkkkk
1. Security Standards. Okta’s ISMP includes adherence to and regular testing of the key controls, systems and procedures of its ISMP to validate that they are properly implemented and effective in addressing the threats and risks identified. Such testing includes:
a) Internal risk assessments;
b) ISO 27001, 27002, 27017 and 27018 certifications;
c) NIST guidance; and
d) SOC2 Type II (or successor standard) audits annually performed by accredited third-party auditors (“Audit Report”).
I don't think storing AWS keys within Slack would comply with any of these standards?
Posting a key in Slack is obviously not good opsec, but whether this is a big problem depends on whether those keys allowed access to anything sensitive.
They could be keys allowing access to random test data, or public data, etc.
They could have been revoked immediately after they were posted.
It does directly/partially contradict their statement though, i.e.
> The potential impact to Okta customers is limited to the access that support engineers have
Well the implication here is that the support engineer only had some limited set of tasks, but if that Support Engineer also indirectly had access to AWS keys then it suggests that Lapsus could have had broader access than just what the support engineer had direct access to.
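As an aside, the "search AKIA* on your Slack" suggestion from the Lapsus$ post above is easy to approximate. A minimal sketch using Slack's search API and the standard access key ID pattern (the token scope and handling are assumptions):

```python
# Sketch: look for AWS access key IDs (AKIA + 16 uppercase alphanumerics)
# leaked into Slack messages. Requires a user token with the search:read
# scope; it only sees channels that token can see.
import os
import re
from slack_sdk import WebClient

AKIA_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")
client = WebClient(token=os.environ["SLACK_USER_TOKEN"])

def find_leaked_keys():
    hits = []
    # search_messages is paginated; a single page is enough for a sketch.
    result = client.search_messages(query="AKIA", count=100)
    for match in result["messages"]["matches"]:
        for key_id in AKIA_RE.findall(match.get("text", "")):
            hits.append((key_id, match["channel"]["name"], match["permalink"]))
    return hits

for key_id, channel, link in find_leaked_keys():
    print(f"possible AWS key {key_id} in #{channel}: {link}")
```

Anything this turns up should be treated as exposed and rotated, regardless of whether the channel looks "internal only".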
My booking.com account was hacked and someone from China booked a hotel using my account (I'm in the UK). Managed to cancel it for free, changed password, logged out of all sessions, turned on 2FA and removed all cards.
My booking.com account also notified me someone booked a hotel in Brazil, and I went in and cancelled it and reported it. Extra weird because the notification came in through an old email address. That was on Saturday, Mar 19, 10:43 PM EDT. But it could have just been a large attack on Booking.com, unknown if it's Okta-related (and not necessarily a compromise, if I accidentally reused a password)
I've never worked at Okta, but I've worked with several production ticketing systems at big and small companies, and all of them contained critical information about customers and operations. Including credentials.
I see the benefits of Okta. Does anyone have decent experience rolling an in-house single sign-on solution, or similar, who can share a bit about maintaining such a service?
Looks like Lapsus didn't do any damage and thus Okta dismisses the breach as not a breach - just like, at one of my old jobs, a proof-of-breach starting notepad.exe on a compromised machine was dismissed because notepad doesn't do any damage. The security professionals today are the second sellers of snake oil after MBAs. Cue SolarWinds with their "it is ok for binaries' signatures to differ as they get deformed while being forced through the internet pipes" :) Lapsus not being a "state level actor" denied that easy get-out-of-jail-free card to Okta.
>The security professionals today are the second sellers of snake oil
If someone breaks into your house, and stays there for a week, would you say they didn't really break in because they didn't murder you?
>proof-of-breach
Maybe you're referring to a "proof of concept"? A "proof of breach" in the security industry is an incident to be investigated, and if necessary, involve law enforcement. It isn't meddling kids that didn't cause any harm. As a side note, I've never heard anyone use the term "proof of breach" in that context in the industry.
The fact that Okta didn't detect this earlier is concerning in its own right, let's not downplay the fact that the level of access that third party providers have is not a solved issue in the industry. The RCA/post mortem/follow up actions from Okta's side should not be "they got access but didn't do anything with it, we don't need to change anything".
I believe they are in Brazil. Also a month or two back there were claims that a member accidentally doxxed themselves a couple of times, and they were in Spain.
Not saying this is what happened or defending Okta, but these two statements could both be true. Assuming the support engineer had ACCESS that allowed him to do some bad stuff, but they have audit logs to prove his account didn't use that access (except to take screenshots), both statements could be true. It's unlikely but possible.