Hacker News new | past | comments | ask | show | jobs | submit login
British Airways faces record £183M fine for data breach (bbc.com)
232 points by adzicg on July 8, 2019 | hide | past | favorite | 116 comments



> At the time, BA said hackers had carried out a "sophisticated, malicious criminal attack" on its website.

Compromising a single JS resource that was being carelessly loaded on a payment page doesn’t qualify as sophisticated in my mind. It might not be uncommon in the industry, but tools like SRI and CSP stop these attacks dead in their tracks.

I believe we are about one huge attack[0] of this kind away from realising how dire the situation truly is.

As a victim of the earlier Ticketmaster attack I’m curious as to whether the ICO is investigating that too.

0: https://hugotunius.se/2018/11/29/how-to-hack-half-the-web.ht...


So: now what?

This is an absolutely vast fine. There have to be companies around the world that are all of a sudden going to take the security of Javascript seriously, who already have a big web app that pulls in hundreds of scripts compiled from tens of thousands of NPM modules. Is security band-aiding going to be applied, or are we going to have to see a radical re-architecting that acknowledges that every dependency is a liability as well as a benefit?


It would be reasonable to expect an increased interest in security architectures. Unfortunately, security has not been a main focus of business: research, development and actual use of secure systems is mostly limited to academic, financial, defense and intelligence circles.

Current programming languages provide no effective means of applying the principle of least authority to third party dependencies.

Examples: restricting the I/O of an imported library (in many cases down to pure computation), as well as accounting for and limiting its resource use.

People, take a look at (object) capability security: https://en.wikipedia.org/wiki/Capability-based_security

Systems implementing this let you handle permissions like other objects, transferring them by simply including them as a function parameter.

There is no ambient authority, meaning a string of a file path is not sufficient to open a file. The function needs to have access to a capability object, which combines authority (the right to do something, say read a file) with designation (the path to a specific file).

When you create a new process, it starts without any capabilities, in which case it is restricted to pure computation.

Unlike current systems, this passing of permissions allows you to reason about what your code can actually do. It effectively limits its behavior and creates reliable boundaries, because the program can't just construct permissions out of thin air.
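A minimal sketch of the pattern in Python (illustrative only; the class and function names are invented, and Python can't actually enforce this, which is exactly the gap in current languages described above):

```python
# Object-capability style: authority (the right to read) and designation
# (which file) travel together in one object, and a function can only
# read files it was explicitly handed a capability for.
class ReadCapability:
    def __init__(self, path):
        self._path = path  # designation: this specific file

    def read(self):
        # authority: the actual ability to perform the read
        with open(self._path) as f:
            return f.read()

def word_count(cap):
    # No ambient authority here: this function cannot open arbitrary
    # paths, only use the single capability passed to it.
    return len(cap.read().split())
```

In Python this is only a convention (nothing prevents `word_count` from calling `open` itself); a real capability-secure language or OS makes that impossible.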


Java has a SecurityManager[1] which does exactly that. Is there anything similar in other mainstream languages?

[1]: https://docs.oracle.com/javase/8/docs/api/java/lang/Security...


As far as I understand SecurityManager, it is vastly different from capability based security.

In a system based on capabilities you would explicitly have to pass capabilities to any code which needs them (either as arguments to the call, or when loading the library – depending on what is natural). This means that privilege is naturally minimised and transparent.

Compare that to SecurityManager, where you have to define a separate policy and checking process, divorced from the actual calling of functions.

There was some work done to try to define a capability safe subset of Java (Joe-E was the name [0]). But it is sad that this is not the default security behaviour in Java, and thus no libraries use this.

[0]:https://en.wikipedia.org/wiki/Joe-E


Stackless Python allows you to run tasks for a certain number of instructions: https://stackless.readthedocs.io/en/latest/library/stackless...

Unfortunately, the Python VM is impossible to sandbox much further as it is. You have (internal) resource security, but can't limit access to the network, or say a specific file in the filesystem. Mutable globals everywhere.

The E programming language (https://en.wikipedia.org/wiki/E_programming_language) has capability security, but no resource exhaustion protection.

The only systems where both were integrated were a series of 1980s operating systems (GNOSIS, KEYKOS, https://en.wikipedia.org/wiki/KeyKOS) that were designed to allow secure and accountable multi-user timesharing on mainframes. They are predecessors of the formally verified seL4 operating system.

There has never been a similar system on an OS/Processor independent virtual machine level. For fun, I created this little VM: https://esolangs.org/wiki/RarVM


That's a different, but related concern. Yes, deep dependency graphs have a big trust problem, but at the very least in that case you can use CSP and SRI to verify the integrity after the asset has been built. A compromised node module also has the mitigating factor that it propagates slowly as people update, which makes the attack less immediate and gives people an opportunity to review the code as they update.

For remote third party JS that's included on many websites you have an immediate and direct remote code exec vector. For something central like Google Analytics this would be devastating.


Unfortunately, I expect insurance products to pop up that cover data security fines. Unless the fines become truly uninsurable, the cost of insurance is probably just cheaper than actually improving security.


It may well be that companies are less likely to ignore the minimum security demands insurers set as a condition of cover than they are to ignore internal managers asking to spend more money on security.


> but tools like SRI and CSP stop these attacks dead in their tracks.

This isn't... usually true. In the case of this attack, the JavaScript was appended directly to a 'known good' library resource[1]. CSP typically whitelists based on origin, which would make it ineffectual in this case. Also, websites like BA have advertising, and it's nearly impossible to run CSP with ads unless you use the `strict-dynamic` directive, which recursively whitelists any JavaScript loaded by your JavaScript... including this JavaScript[2].

There are other, more uncommon modes for CSP which you might be referring to:

1. Nonce, which provides protection by ensuring each script tag carries a server-generated one-time-use token. This would have had no effect, as the malicious code was appended to a script legitimately loaded by the server.

2. Hash[2], which provides protection by checking the script's content against a predetermined hash. This could possibly have been effective, but in practice rarely is. If the attackers could edit the script, there's no reason to believe that (1) the build wouldn't simply regenerate the hash, or (2) the attackers couldn't update the hash themselves (this is essentially the same situation as SRI). Hash is potentially effective when loading known content that should never change from a third-party CDN, but by the look of the URL (http://www.britishairways.com/cms/global/scripts/lib/moderni...) the resource was not on a CDN.

[1]: https://medium.com/asecuritysite-when-bob-met-alice/the-brit... [2]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Co...


>Also, websites like BA have advertising

Third-party advertising is a strict no-go on ecommerce sites imho. Anybody who does this should face the full 4% fine on the first offense.

I hate, with a passion, sites where I am trying to make a purchase and which still serve me ads.


From the reporting I've read, primarily by RiskIQ, in both the Ticketmaster and BA cases it was not the origin servers that were compromised. In that case SRI would have been effective, insofar as it would have forced the attackers to compromise the origin servers too, at which point it's all game over anyway.

So I would assert that SRI would have stopped both these attacks.


I can't find any hits for "origin" or "CDN" in the original RiskIQ article: https://www.riskiq.com/blog/labs/magecart-british-airways-br...

The RiskIQ article identifies the location of the script as "the main website".

Additionally, I'm not sure how this would solve the problem. If BA went to all the effort of putting their frontend assets on a CDN, why wouldn't their HTML, with the hashes, also be on the CDN? And if the hashes are not generated by the CDN, how does the server generate them without being affected by changes on the CDN? Regardless of outcome, if the process is automated, does a human being need to read a diff every time any file on the CDN changes? Can't an attacker then just wait until the hashes are automatically updated?


Sorry, I should have been clearer here. RiskIQ doesn't distinguish between the CDN and the origin in their write-up; that's an assumption on my part. I believe the CMS that serves the JS file is different from the origin server that serves the checkout pages and other parts of the BA website. From a quick Google it looks like the CMS path on BA is just a file storage server of some kind.

Most likely the origin server had a hardcoded reference to the CMS path (https://www.britishairways.com/cms/global/scripts/lib/modern...), and if that had been written as

    <script type="text/javascript" src="https://www.britishairways.com/cms/global/scripts/lib/modernizr-2.6.2.min.js" integrity="sha384-5XrDTQbmmgpJKmfKW8outDDdYpRCnIf+nxX2nVR10NyWby6pPcujAELgWVmCu2P/"></script>

I speculate the attack would not have been successful. For an extremely static resource like this it wouldn't have been complex to adopt SRI.
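For what it's worth, the integrity value is just a base64-encoded digest of the file body, which is straightforward to generate at build time. A sketch in Python:

```python
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity value ("sha384-<base64 digest>")
    for a script or stylesheet body."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# Recompute this whenever the static asset changes and template it into
# the <script> tag's integrity attribute.
```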


Running ads was totally worth it.


There is absolutely no reason to be loading third party ad scripts on the payment gateway.


How would SRI and CSP help in this case? Wasn’t the compromised script served from their own infrastructure? And if hackers had access to that, why wouldn’t they be able to change the SRI and CSP too?


In this specific case the attackers seem to have compromised a CMS which, for some reason, was used to serve JavaScript libraries. It’s likely that they did not compromise the origin itself, only the CMS. Thus, using SRI on the origin would have prevented, or at least drastically complicated, the attack. As a general pattern, CSP and SRI are very effective protection against CDN compromises and XSS; with them, the attack surface is reduced significantly.



It looks like a CMS that is being repurposed as a CDN. I would bet the path https://www.britishairways.com/cms/ resolves via a gateway to a different set of servers than the main BA site. So for all intents and purposes it’s a CDN


Whilst CSP wouldn't have stopped the modified script from loading, it would have stopped the AJAX call to the attacker-controlled domain, rendering the script impotent. It is very worrying that a bad actor was able to get their malicious script served up by a BA server, even one only acting as a proxy for a CDN. But I suspect the reality is (or at least should be) that this is much easier than changing the headers sent by that server (and hence the CSP), and that access to one does not imply access to the other.


Depending on one's paranoia level, you could have a monitoring script on independent (third-party?) infrastructure that fetches pages and goes through them checking resource hashes. If the SRI hash listed in the origin, or the actual hash of the resource, changes, you get an alert.

If an update was not pushed, then you know something hinky is going on.

Some folks regularly check the signature of their SSL certs, and monitor certificate transparency logs. It just depends on how vigilant one wants to be.
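A minimal sketch of such a monitor in Python (the baseline format and regex-based extraction are assumptions; a real monitor would fetch live pages from independent infrastructure and use a proper HTML parser):

```python
import re

# Matches <script src="..." integrity="..."> tags, capturing both values.
SCRIPT_RE = re.compile(
    r'<script[^>]*\bsrc="([^"]+)"[^>]*\bintegrity="([^"]+)"', re.I)

def check_page(html: str, baseline: dict) -> list:
    """Return (url, found, expected) for every script tag whose SRI
    integrity attribute differs from a baseline captured at deploy
    time. A non-empty result means an un-pushed change: raise an alert."""
    alerts = []
    for url, integrity in SCRIPT_RE.findall(html):
        expected = baseline.get(url)
        if expected is not None and integrity != expected:
            alerts.append((url, integrity, expected))
    return alerts
```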


Exactly it was a crude and unsophisticated attack. I am pleased the ICO looked at the facts, not the spin and recognised the negligence of BA.


It seems the ICO website itself was infected in a similar way last year, with a crypto miner.

https://scotthelme.co.uk/protect-site-from-cryptojacking-csp...


But the most they can fine themselves is 4% of their turnover.


I suspect your comment is at least partly a joke, however the actual maximum fine possible is 4% or €20 million, whichever is higher.


> It might not be uncommon in the industry, but tools like SRI and CSP stop these attacks dead in their tracks.

It depends on what you're using them for.

If you're talking about static resources loaded from something like a CDN -- in other words, a situation where you also control the intended content to be downloaded -- then SRI is potentially useful. But how could that sort of system work if you rely on a payment service to supply the client-side scripting that runs behind the credit card form on your site and turns the user-provided card details into a token of some sort?

For example, if you use a service like Stripe, you're almost certainly loading a script from their servers that they control to run the card details form on your site. Between PCI-DSS rules and the obvious need for Stripe to be able to deploy changes quickly in the event of discovering a vulnerability, you can neither self-host the equivalent script nor provide any useful checksum or similar that is guaranteed to remain correct.

I suppose in theory Stripe could provide an API that you can access from your servers to fetch the current checksum for the current version of a resource like https://js.stripe.com/v3/, and then you could render your link to that script with the integrity attribute included when you serve your page with the card details form, and as long as no-one breaks anything with a cache or the like then that might work. I'm not quite sure what attack vector it would be guarding against though, since any scripts like this will surely already be served over HTTPS and therefore trigger warnings if anyone without Stripe's cert tries to serve them via some sort of MITM attack.
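To be clear, such a checksum API is hypothetical (Stripe doesn't offer one); but the render-time half of the idea might look something like this Python sketch, with the endpoint and response shape invented:

```python
def render_script_tag(src: str, integrity: str) -> str:
    """Render a <script> tag pinned to a server-fetched integrity value.
    crossorigin="anonymous" is required for SRI on cross-origin scripts."""
    return (f'<script src="{src}" integrity="{integrity}" '
            f'crossorigin="anonymous"></script>')

# At page-render time you would fetch the current value from the
# (hypothetical) checksum endpoint, e.g.:
#   integrity = requests.get("https://js.stripe.com/v3/checksum").json()["sri"]
# and cache it only briefly, so a stale cache can't outlive a deploy.
```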

So SRI and the like take care of some aspects of this problem, but given you might be relying on a lot more external services than just (presumably relatively careful) ones like Stripe, it doesn't solve the whole problem.


Similar way that TicketMaster were attacked ... that wasn't fun.


I'm glad to see a solid fine given for a data breach.

I've worked on projects in this sector before, and it's a common story to others - client cuts cost as much as possible, until the risk of an inferior product has grown too high to handle. It's a race to the bottom, and security rarely comes into consideration outside of a basic pen test being mentioned (if it happens).

Still, I'm quite annoyed at the lack of follow-up against what is blatant bullshit from BA. When your business is so heavily reliant on taking payments online, their security procedures should be airtight. I can understand that it's quite a clever hack, but it's security 101 to know what third-party code is doing on your server.

The fine is good, but it would be nice to enforce rules where a company caught in a data breach has to accept liability and not contest the severity.


Not quite sure how BA is getting away with saying there was no fraud because of this. There are many complaints on my Twitter from people who used their cards only with BA and ended up with those cards being used fraudulently.


This amount represents more than their profits for roughly two years. It isn't realistic and is likely a number released to make headlines.

The point of regulation is to change corporate behavior, not drive them out of business.

The stock market does not seem to think the fine will ultimately be anywhere near that high, with a drop in stock price of only 1.5%.

As always, beware of headlines.


I'm not too bothered by the number, but am worried that a company will be able to talk their way out of punishment through misinformation.

You only need to look at the latest Zoom vulnerability, and the Panera Bread data leak, to see how companies will happily tell their users that everything is fine, while doing absolutely nothing to publicly resolve the issues being raised. To make things worse, a lot of people will blindly believe this, or take their side to be contrarian.

As you've rightly said, regulation needs to change corporate behaviour, and while a hefty fine will do that, it's also a minor risk to a firm's ongoing reputation. It's a cost that can be offset elsewhere - more than likely onto the customer. I would rather the fine be halved or even quartered, if it meant that BA had to publicly accept full responsibility, and to push for independent auditing of all of their software.


According to Wikipedia, BA's 2016 net income was 1,473 million pounds.


I think what happened to D-Link should be the standard: they have to undergo 10 years of independent security audits.


The question is, what kind of people do those kinds of companies look for and hire?

Do they take the "just here to tick a check box, I don't care if it works or is true" types?

Or do they take people who will tell them what is wrong (and then actually react)?

More often than not you see something like this:

Me: guys, this encryption is too weak!

Answer: it's good enough for this scenario, in this network.

And they proceed to tick the box... encryption: existent.


I like this idea, although I would also like to see a push towards the improvement of their tech team.

IMO, these issues are rarely down to the people in the trenches, and more to do with a lack of budget or care from management. I would push for independent auditing, with a view towards the auditors ensuring that the team in place is capable of continuing this work.


For those who are wondering what happened: British Airways' website had malicious JavaScript included in some of the files it was using. A compromised third-party library (in this case, Modernizr) was the attack vector. The malware would surreptitiously take sensitive data off the webpage and send it back to the hackers.

Most sites still would have no idea if this were to happen to them today.

That’s why I’ve developed Enchanted Security (https://enchantedsecurity.com/) - a virtual content security policy that tracks the network requests and even blocks malicious ones. It’s like a firewall but running on your users’ browsers. This would’ve prevented what happened to British Airways. Get in touch if you’re interested in learning more.


Interesting approach, but doesn't using your product add another attack vector? Surely Enchanted's CDN will be a rich target for attackers.

Full disclosure: I'm in private beta with a product that periodically loads production sites and compares all executed JS to the last-known-good profile (usually from CI prior to deployment), and raises warnings if anything changes. We don't run in actual users' browsers, so we won't see malicious code as early as Enchanted, but you don't have to trust us.


For your first question, there's an optional hash you can add that locks the code to its known hash. Also, you can self-host everything if you like.

The type of product it sounds like you're running is a scanner, and scanners have a couple of problems. First, there's cloaking: if the malware can detect that it's being run by a scanner (usually pretty easy) then it doesn't activate. Second, the malware comes from code that you already trust, which is a problem if you overlook something because it's trusted. By contrast, Enchanted Security would see it, since it runs in every user's browser all the time.


To be honest, if you can get clients to reliably use SRI, that's probably better protection than either of our products.

Yeah, our product could be classed as a scanner, but there's no (well, little) risk of cloaking: any network request made, or JS loaded, to work out whether we're a real user is enough by itself to raise the alarm. To pull it off, the attacker would have to compromise the host of a trusted resource, determine that we were a scanner from our network request alone, and serve us the original resource. There are a few ways to help ensure that our network requests aren't easily recognised, which is part of our "secret sauce".

Our approach does protect against the case where an attacker gets access to the client company's server and modifies the resources served, which can allow them to work around, or selectively serve, embedded snippets.

The second problem you mention isn't an issue, as we don't trust hosts, only the hash of resources (the same idea as SRI).


Depending on how the code is being served, cloaking is a big problem for scanners... if you think it's no risk, you're probably overlooking the risk. Google has put out a report on it, and there are lots of papers on it and related topics like VM detection in browsers.

You can use SRI for something like a specific version of jQuery, but not for most of the products people are relying on that have more dynamic functionality.

SRI isn't practical for third party resources that are expected to change over time, which is actually a lot of third party resources. For example, a chat script like Intercom will change when someone from marketing makes a change that affects its frontend settings. This may be changing some text or coloring. So you can't pin it with SRI.

In your case, if your product tells you that some marketing script you're using changed hashes, how would you know whether that was a valid change or whether an attacker introduced malware?


> Depending on how the code is being served, cloaking is a big problem for scanners... if you think it's no risk you're probably overlooking the risk. Google put out a report on it and there's lots of papers on it and related topics like VM detection in browsers.

Almost all bot/automated test/scanner detection relies on JS in the browser, which is too late to avoid triggering an alert with our approach. The attacker would have to identify us from our network request to load the compromised resource alone, for which we have (so far) effective mitigations.

> In your case, if your product tells you that some marketing script you're using changed hashes, how would you know whether that was a valid change or whether an attacker introduced malware?

We'll alert the client company about any resource that's changed; you can't trust that any third-party resource change is okay, especially when the reputational and legal cost of a breach can run to tens or hundreds of millions.

Most third party code from reputable companies, including tracking tagging, doesn't change that often, and client security/ops teams tend to be horrified by the ones that do. Each time some third party silently replaces code that they're serving, it introduces new opportunities for bugs, breakages, and attacks.

I really disagree with you about SRI. SRI isn't sufficient to prevent all client-side attacks, but it is very effective, and when enterprise customers demand it, third-party services almost always find a way to support it.


3 things I don't understand about your product (but I like the idea in general of helping people get over the initial cliff of enabling CSP):

1. How does including a bit of JavaScript apply a CSP and block things if the CSP is not sent via HTTP headers? Are you installing a service worker and proxying every request through that so that you can apply the blocking or something? (I have not looked into this area for a while)

2. How is adding another 3rd party resource not expanding the attack surface rather than reducing it?

3. How do you know what is good/bad and should be allowed/denied?

On #3 I run a few hundred forums that have user generated content and whitelist a small number of 3rd party embeds such as YouTube, Google Maps, Strava, etc. I want those to keep working but don't wish anything not in my whitelist to work. But if I expanded the list in future to allowed embedding Twitch... how would your system know that this is a "good" action and to allow it?


You're correct that it's not using CSP headers (which are difficult to use in practice), which is why I call it a virtual content security policy. Yes, it's another resource that expands your attack surface, but when you have sites already running tens of scripts, I think one more is a small attack-surface delta, and there are ways to minimize that risk too.

For knowing what to allow/deny, yes, there are blacklists, but also various more intelligent things as well. One of the problems with CSP headers today is that they're too black and white. With Enchanted Security, for example, you can configure a rule like "allow Google Analytics, but only for my tracking id UA-123456789-1" (hackers have used Google Analytics as a data exfiltration mechanism, sending data back to a hacker-controlled GA account).
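A toy sketch of that kind of rule in Python (this is not Enchanted Security's actual implementation; the hostname and parameter handling are illustrative):

```python
from urllib.parse import urlparse, parse_qs

ALLOWED_GA_TID = "UA-123456789-1"  # example tracking id from above

def allow_request(url: str) -> bool:
    """Allow Google Analytics collect requests only when they carry our
    own tracking id: a finer-grained rule than origin-based CSP, which
    can only say yes or no to the whole GA origin."""
    parts = urlparse(url)
    if parts.hostname != "www.google-analytics.com":
        return True  # not GA traffic; other rules would apply here
    tids = parse_qs(parts.query).get("tid", [])
    return tids == [ALLOWED_GA_TID]
```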


I think this needs some clarification. From my reading of the issue the Modernizr library and its NPM entry were never compromised, instead the version hosted by BA on their website was overwritten by the hackers with one that exfiltrated the sensitive data from the payments pages.

Not only is the version of Modernizr used on the BA website extremely old (2.6.2 was released in 2012), but for hackers to be able to modify hosted scripts demonstrates an extreme lack of care; I wouldn't be entirely surprised if they just hadn't updated the CMS hosting the script.

Some people have been saying that anyone could be caught by this but I do think that the lack of process to allow this sort of thing to happen warrants this sort of fine.



Thanks. That was really interesting. I concluded from the article that as a NoScript/uBlock user, I wouldn’t have been protected from this attack.

In general, I’ve noticed that for many ecommerce sites, I have to enable some third-party script and refresh the page a number of times. Not only do you have to guess which third-party script provides the functionality to complete the transaction, but it’s an iterative process where the loaded third-party scripts have further dependencies on yet more third parties. Even as a technically aware user, I don’t have time to go examining whois records and TLS certificates to attempt to determine if baways.com (or baplc.com or bacdn) are genuine.

In any case, I doubt either of these extensions would protect us from “trusted” JavaScript making a POST request to a third-party site. (BTW, I’m not arguing against the use of these browser extensions, as they do a great job of protecting against other threats.)

I’m glad the ICO fined BA a non-trivial amount because users/customers shouldn’t have to be constantly on the alert, exercising a high degree of vigilance – and staying up-to-date with ever-changing and increasingly complex stacks of web technology.


There's a typo on your page. Where it says "this require reworking" should be "this requires reworking".


Seems very just. BA clearly had no controls to understand the code running in production was the code they had deployed. I hope this serves as a wake up call to other companies who have a blatant negligence for infosec.


> clearly had no controls to understand the code running in production

This applies to everyone who has advertising or third party anything on their page, no?


If they choose to import some code from elsewhere sure. They could run their own advert by hosting "advert.jpg" on their server and wrapping it in an anchor tag.

BA however are a company selling their own product, no need for them to have adverts. If they want to dilute their product with adverts then they take the risk of fines (as well as losing custom)


Why does an airline even need to show advertising on their page in the first place. You'd think they would be most concerned about selling their own product.


Many of them show ads for partners to generate ancillary revenue, which is a large source of profits for airlines.

In addition, if they themselves _advertise_, they'll need to add tags to track attribution, conversions, etc.


Funny how people advertised for decades on tv, in newspapers, and with leaflets through the door, without invasive tracking


Fair enough, but if you're advertising online with any of the major companies today, there's no way to do so effectively without instrumenting at least your landing pages and conversion pages with advertising tagging.


Generally the flip side of service cutting is aggressive monetization


Yep pretty much :)

Actually it's worse than that. Even without 3rd party JS, most major websites these days run on large piles of open source libraries that have never been security reviewed (e.g. code from npm, rubygems, PyPi, NuGet) and attackers are increasingly targeting those underlying libraries for compromise...


> This applies to everyone who has advertising or third party anything on their page, no?

Yes. Which is why you shouldn't run third-party advertising.


Basically, yeah. You can specify a hash for third-party JS and audit it at that version, and you can sandbox JS in iframes, but most people do not do that.


At my company, we pin dependency version numbers for external libraries. The libraries are tested, and we take sample traffic snapshots from browsers, using automation, to see what our users see.

But this approach only takes care of simple, entry-level attacks. A highly targeted attack that lies dormant in a compromised library for years and is engineered to avoid detection (e.g. hiding from certain IPs, or removing its own code when the debug console is open) is impossible to defend against, to my knowledge. How would you defend yourself?


I would run a daily check that the file being served by the website matches, verbatim, the file I deployed. I think this would have been enough in BA's case. Secondly, I would run a daily test in production which verifies that the requests the browser makes match those expected. This would only need doing on the pages which capture payment information. Neither solution is particularly difficult. BA clearly did nothing; hence the negligence and the huge fine.
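The first check is cheap to sketch in Python (the fetch of the live file is stubbed out; in a real job `served` would come from an HTTP request, ideally made from outside your own network):

```python
import hashlib

def digests_match(deployed: bytes, served: bytes) -> bool:
    """Compare the digest of the artifact we deployed against the digest
    of what the site is actually serving. A mismatch means the live file
    was changed outside the deploy pipeline: raise an alert."""
    return (hashlib.sha256(deployed).hexdigest()
            == hashlib.sha256(served).hexdigest())
```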


"The information included names, email addresses, credit card information such as credit card numbers, expiration dates and the three-digit CVV code found on the back of credit cards, although BA has said it did not store CVV numbers."

Is it standard for airlines to handle storing payment card details themselves, and hence have to be PCI certified, instead of delegating to a PSP?


Someone injected JavaScript into their pages which collected this information. But yes, it's standard practice for airlines to store card information (excluding the 3/4-digit code) in the customer's PNR (Passenger Name Record) in the airline's GDS (Global Distribution System). The details are on this page: https://servicehub.amadeus.com/c/portal/view-solution/965353...


Just a quick clarification: in the case of an airline website, the PNR will not be created through the GDS (too expensive); it will be created directly in the PSS (usually managed by the same company).


Think it depends on the airline and whether they allow agent bookings. Last I knew, BA.com created them on 1A but that may have changed with NDC.


For web they create them on 1A, but directly in 1A PSS, not going through 1A GDS (and therefore not paying the GDS fee, just a PSS fee). This is the case for all direct channels (websites or airline call centers) of all airlines


It was stolen via JavaScript injected on the payment page, not from having stored data exfiltrated. This writeup calls it "digital card skimming", which seems to be a good analogy for the attack: https://www.riskiq.com/blog/labs/magecart-british-airways-br...


Given that the airline industry actually runs its own payment card network (UATP, which has been around since 1936 apparently) it does not surprise me at all that airlines do much of their payment card stuff in house.


Statement on the Information Commissioner's website: https://ico.org.uk/about-the-ico/news-and-events/news-and-bl...


The statement shows that the ICO investigated this data breach on behalf of other data protection regulators of other EU states. As an Irish citizen, I wish our Data Protection Commission was as serious about protecting the personal data of EU citizens.


UK's ICO and other data security enforcers are acting. That's good. They're changing companies' calculus about putting resources into infosec. That's even better.

The public and press perceive that "justice is served," so we're tempted to think the problem is solved. I don't think that's helpful. These fines don't address root causes of the problem. They don't make our systems more resilient.

They're drawing a significant amount of money from the system and transferring it to their governments' general accounts. Is that the best use of that money? Should some of that money be used to help address infosec problems? To fund training for citizens, legislators, and governments? To step up law enforcement efforts against cybercreeps? To publicly fund independent security researchers (white-hat hackers) to help detect this stuff and nip it in the bud? To help subsidize the significant expense of comprehensive infosec audits for municipal governments, NGOs, and small firms?

Here in USA, the National Security Agency has, by hoarding zero-day exploits and inadequately protecting them, done major infosec damage to civil institutions worldwide (UK's NHS, the Baltimore city government, you name it). I suspect similar things have happened in other governments. To what extent is it their responsibility to help clean up the mess? Can other governments use their resources to backfill where the US government can't or won't act?

Do governments now join identity thieves as enemies of people doing infosec? That cannot be good. We have to get this right and we can't do it if we're fighting each other rather than the criminals causing the trouble.


>They're drawing a significant amount of money from the system and transferring it to their governments' general accounts. Is that the best use of that money? Should some of that money be used to help address infosec problems?

Well the point of these fines is to say to organisations "put your money into infosec, or when you have a breach it'll probably be your fault and we'll take that money as a punishment".

>Should some of that money be used to help address infosec problems? To fund training for citizens, legislators, and governments? To step up law enforcement efforts against cybercreeps? To publicly fund independent security researchers (white-hat hackers) to help detect this stuff and nip it in the bud? To help subsidize the significant expense of comprehensive infosec audits for municipal governments, ngos, and small firms?

The NCSC[0], GCHQ's defensive side, publishes advice for the public, companies, charities, schools, government departments etc.

[0] https://www.ncsc.gov.uk/


FWIW the NSA has been behind many coordinated 0day disclosure efforts. It’s a balancing act.


Just FTR, note that BA might appeal against this, so it may be subject to revision before it's all over...

“BA has 28 days to appeal. Willie Walsh, chief executive of IAG, said British Airways would be making representations to the ICO. "We intend to take all appropriate steps to defend the airline's position vigorously, including making any necessary appeals," he said.”


This may be the beginning of the end of hiring front end devs in house. Suddenly they are a serious liability... much nicer if you can pass on the fine to a third party!


I'd say it's likely to be a game changer when it comes to skimping on IT in general, whether in-house or outsourced. It's good news.


So this is ~$366 per person whose data was compromised. That seems fairly cheap all things considered.

It's a far sight better than the "credit protection" they normally provide (from our point of view, rather than that of the people who are used to not having any penalties for abusing their customers). Remembering, of course, that the typical cost to companies when they settle with "credit protection" is much lower than the already low $30 individuals would have to pay.
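For what it's worth, the per-person figure roughly checks out if you assume the widely reported count of about 500,000 affected customers:

```javascript
// Hedged arithmetic: 500,000 is the commonly reported estimate of
// affected customers, not an exact number; the fine is £183.39m.
const fine = 183.39e6;
const affected = 500000;
console.log((fine / affected).toFixed(0)); // "367", i.e. roughly £367 per person
```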

I'm also tired of newspapers parroting press releases that say things like "sophisticated, malicious criminal attack". Just like a few years ago every publicly exposed+default password service was compromised by "Nation state attackers", and before then "Advanced Persistent Threats". If you make a claim like this, you should be required to provide the full details of the attack:

- What level of employee account was compromised, and if none was needed, why not? Otherwise, did the targeted employee need the level of access that the attackers used? If not, why did they have it? Simply being a C-level executive does not imply requiring access.

- Did it make use of any software exploits? If it did, were those exploits fixed in the release versions? If those exploits were fixed in released software, why was that out-of-date software being used?

- Is your company using established best practices: 2FA for all accounts, TLS for all networking, service isolation?

- Did the compromise come about due to loading content from a third party? If so, how was that code authenticated (multiple browsers support SRI)? Was that code used to support the site functionality, or was it for tracking or advertising?

This seems like a perfectly reasonable bare minimum if you want to support a claim that the compromise was unavoidable.


Do the regulators take into account whether the firm is actually at fault?

Without considering what happened in this specific scenario, surely there are cases where companies take the utmost care, follow standard security principles and still get hacked; or the issue was not with the company operating the website but rather with, say, a hardware manufacturer?


> Do the regulators take into account whether the firm is actually at fault?

To echo others: yes, a lot. To quote the Information Commissioner:

> "I have no intention of changing the ICO’s proportionate and pragmatic approach after 25 May [the GDPR intro date] ... Hefty fines will be reserved for those organisations that persistently, deliberately or negligently flout the law."

A good overview of the ICO's approach: https://www.pinsentmasons.com/out-law/news/gdpr-uk-watchdog-...

The whole draft policy for how the ICO applies its powers is here. It's a good read, but not short: https://ico.org.uk/media/2258810/ico-draft-regulatory-action...


Yes, the regulators do take all things into consideration. A fine is the final measure.

In the case of BA, they ran third-party scripts on account and payment pages without users' consent, did not remove them even after being alerted to the problem, and then suffered a data breach because of that.


They do in some form. Largely, though, "regulator" action tends to be outcome-based. Relying on "standards" can be difficult. In some cases, standards exist and ignoring them can point to negligence. Conversely, standards don't exist for a lot of things, and when they do, they're not a full solution. That is, it's possible to follow "standard security practices" while still being insecure. If regulators make that a "get-out"... you may as well just have legislation instead of a regulator.

In recent times, regulators and legislators don't understand the problems (maybe no one does) sufficiently to be specific with rules. They demand general things, outcomes (you will not lose data) and general operating principles (you will secure your users' data, have good policies, and enforce them).

Both data protection (eg gdpr) and anti money laundering rules are examples of recent areas that work this way. If a bank's customer has been depositing stolen money, financing terrorism or something... the bank is at risk. Their policies will be examined and circumstances do get taken into account, but the "standards" they're judged against aren't absolute and standards compliance doesn't totally protect them. OTOH, if they don't adhere to their own policies or the policies are bad... it is enough to get them in trouble.

Lawyers, btw, hate this emerging system.

In short, modern "regulator enforcement" is a lot less legible and "letter of the law" oriented than the legal environments we have grown used to.


This is far from being a new development in Europe. Regulators here have never been strict rule interpreters.


Excellent. Now I wish they would pick another big corporation (just pick one) and hand them a similar fine for using a standard GDPR opt-in-by-default popup.

They need to make it clear through action, not just vague wording, that having a default of allowing all tracking is not ok.

Pop-ups should say "hi and welcome to site X. Click the yellow button to enter with tracking/personalization and the blue button to enter without".


At the company I work for, we explain to our customers that we can't enable any tracking cookies, analytics, external chat scripts or other external scripts unless the visitor opts in. I explain that it is against the law and they back down.

Sadly not many people in the industry realize it and they keep doing whatever they were doing for years.
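A minimal sketch of the opt-in gating the parent describes, in plain JavaScript. The category names and URLs are hypothetical; the point is that no optional third-party script loads unless the visitor has explicitly consented:

```javascript
// Hypothetical catalogue of optional third-party scripts.
const OPTIONAL_SCRIPTS = [
  { category: "analytics", src: "https://example.com/analytics.js" },
  { category: "chat",      src: "https://example.com/chat-widget.js" },
];

// Pure decision helper: given saved consent choices, return only the
// script URLs the visitor explicitly opted in to. Default: load nothing.
function scriptsToLoad(consent) {
  return OPTIONAL_SCRIPTS
    .filter((s) => consent[s.category] === true)
    .map((s) => s.src);
}

// In a browser you would then inject only the permitted scripts, e.g.:
//   for (const src of scriptsToLoad(savedConsent)) {
//     const el = document.createElement("script");
//     el.src = src;
//     document.head.appendChild(el);
//   }

console.log(scriptsToLoad({}));                  // []
console.log(scriptsToLoad({ analytics: true })); // ["https://example.com/analytics.js"]
```

Keeping the decision logic as a pure function makes the default-deny behaviour easy to test, independent of any consent-banner UI.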


Websites need to really up their game, especially given the amount of third parties they are using.

I've tried highlighting similar issues in the past: even where there is no active breach, sites are leaking sensitive data to multiple third parties when it's not needed in the first place.

https://dev.to/konarkmodi/watching-them-watching-us-how-webs...

https://news.ycombinator.com/item?id=16516687



Is there a good reason for them not to launch a bug bounty program?

The cost of doing so would be significantly cheaper than any future fines, and would reduce the chances of future breaches.


"amounts to 1.5% of its worldwide turnover in 2017"

I imagine that's a significant sum but I'm struggling to get my head around it. If so then good for the ICO I suppose. I remember reading endless comments a few years back speculating GDPR would never have any bite.


The maximum fine of 4% of parent company IAG's turnover would have been almost €1bn.
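A quick sanity check of these figures, assuming IAG's 2017 revenue was roughly €23bn (the £183.39m fine is reported as 1.5% of BA's own 2017 turnover):

```javascript
// Back-of-the-envelope check; the IAG revenue figure is an assumption.
const fine = 183.39e6;                    // £183.39m proposed ICO fine
const baTurnover = fine / 0.015;          // implied BA 2017 turnover
const iagRevenueEur = 23e9;               // assumed IAG 2017 revenue, ~€23bn
const gdprCapEur = 0.04 * iagRevenueEur;  // GDPR maximum: 4% of worldwide turnover

console.log((baTurnover / 1e9).toFixed(2));  // "12.23" (£bn)
console.log((gdprCapEur / 1e9).toFixed(2));  // "0.92" (€bn), i.e. "almost €1bn"
```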


> I remember reading endless comments a few years back speculating GDPR would never have any bite.

I remember those. And those about how the GDPR will be bankrupting every company ever.


It's good that we see more GDPR fines for non-US companies on the front page of HN, because I've seen a lot of US users claiming that only US companies are targeted (there are other non-US examples, but those did not appear or stay on the first page here on HN).


Yes, there was always that weird nationalist claim that GDPR was just a protectionist measure rather than a sincere attempt to achieve its stated aims.


So who gets the money? The people who had their data stolen?

Who gets the money are the people who create laws. The more crimes committed, the safer their jobs are. The people who had their data stolen are now on a register sold to the insurance industry, and the insurance industry decides they are a greater risk to insure, so the costs to the consumer go up. Strange how crime really drives the economy.


> Who gets the money are the people who create laws. The more crimes committed, the safer their jobs are.

Huh? Of course the jobs of people working at the ICO etc. are slightly safer if more criminal activity happens, but office workers at the ICO do not get that money. It goes to the Treasury and hence, by extension, the British public.

> The people who had their data stolen are now on a register sold to the insurance industry, and the insurance industry decides they are a greater risk to insure, so the costs to the consumer go up. Strange how crime really drives the economy.

A fine punishing criminal activity almost never goes to the actual victim of the crime; the victim is instead compensated in a separate payment. Of course, it would be nice if, in addition to this fine, some kind of blanket compensation mechanism (e.g. €1,000 per datum per person) were established.


The Treasury. Fines are sent to the Consolidated Fund (the government's general account at the Bank of England).


From the article:

    Where does the money go?

    The penalty is divided up between the other European
    data authorities, while the money that comes to the
    ICO goes directly to the Treasury.
So it seems most of it will go to other European data authorities.


Ah thanks, I missed that. It usually all goes to HM Treasury but this is a joint operation between multiple EU supervisory authorities.

"ICO has been investigating this case as lead supervisory authority on behalf of other EU Member State data protection authorities. It has also liaised with other regulators. Under the GDPR ‘one stop shop’ provisions the data protection authorities in the EU whose residents have been affected will also have the chance to comment on the ICO’s findings."

https://ico.org.uk/about-the-ico/news-and-events/news-and-bl...


Yeah, people got a "we're sorry" paragraph and that's about it. They won't get a "we're sorry" paragraph when their bank accounts get wiped, though.


This is a death knell for BA. My friend's father is a high-level manager there, and if he's to be believed they are running on very thin margins.

Mostly due to compensating employees fairly in the '90s and early 2000s. Now they're desperately trying to remove those compensation packages.

Although it could just be cost aversion masquerading as a hard requirement.


£183m is about 10% of the profit BA made in FY 2018 before exceptional items:

> Despite these challenges, our revenues have held up, increasing 5.7 per cent versus last year. ... we achieved an operating profit of £1,952 million before exceptional items and a return on invested capital (RoIC) of 17.3 per cent

source: page 18 of https://www.iairgroup.com/~/media/Files/I/IAG/documents/annu...

I think BA (and their parent IAG) are going to be just fine.


BA has become a low-cost carrier over the last 10-15 years and makes a large profit from it. They're trading on their past glories and failing to innovate.

Last time I traveled BA in first it was shocking: rude crew, dirty seats. FinnAir in Business, on the other hand, is a joy to behold.

I used to hit 2,500 tier points a year on BA metal; I barely make 300 now, just occasional short flights in Europe. But they've doubled down on the 737 MAX, so I guess that will end soon too.


>Last time I traveled BA in first it was shocking: rude crew, dirty seats. FinnAir in Business, on the other hand, is a joy to behold.

It seems like the American and European airlines are all going down the tubes, unless you shell out for business class on certain ones. The Asian airlines are the best ones these days; you don't have to pay extra for a first-class or business-class ticket to get superb service and cleanliness there.


Don't forget, this is the same company (IAG, their parent company actually) that just placed an order for 200 Boeing 737MAX aircraft. I'm going to stay far away from this company and its planes.


Too bad this will all go to the state and not to any of the people who were actually damaged in the breach. :/


The state is made up of the people it represents, so society benefits as a whole.


"The state is made up of the people it represents, so society benefits as a whole."

Almost all of the people that make up the "state" represent only themselves. And I'd wager most of the people that are elected to represent people are the same.


TIL only UK citizens can use British Airways, not really surprised given how dystopian UK is.


Leaving aside the fact that the article says the fine is also divided up between other EU data protection organisations... Equifax lost my data; it is unlikely that the American government will be compensating me from any fines or enforcement action they take (I am not an American).

Conversely, if someone went to jail over this, at a net cost to the UK taxpayer, would the British Government be able to claim expenses from around the world? Has every other government that has citizens using BA chipped in to pay for the ICO?

Fines are a punishment, not compensation. Whilst as a UK taxpayer I am delighted at this tremendous bounty my nation has received, and the thought of it getting into foreign hands disgusts me (/s), I would rather have law-abiding organisations.


> Conversely, if someone went to jail over this, at a net cost to the UK taxpayer, would the British Government be able to claim expenses from around the world? Has every other government that has citizens using BA chipped in to pay for the ICO?

I would think that is enough for at least 3,660 person-years of prison, so I guess a lot of people are going to go to jail for something which was just a mistake. But then again, not surprising with the UK being so dystopian and all. Glad I don't work there.


Not only is this false in the national sense, as the machinery and personnel of the state are not coterminous with those they rule over, but it is also false in this specific sense, as many of the people who were harmed by this breach (such as myself) were not involved or are not permitted to be involved with the regulatory body that is assessing this fine.

Many people who are not British subjects are customers of BA.

The ICO does not represent me, yet somehow they are getting paid for the misuse of my data.

It’s nice work if you can get it.


The fine is not there to repair the damage made to individual customers. The fine is there to punish the Corp for its bad practices, and to scare other Corps into having good data protection practices. It's a punishment and a deterrent.

If you want personal compensation, feel free to sue BA.


From the article:

> Where does the money go?

> The penalty is divided up between the other European data authorities, while the money that comes to the ICO goes directly to the Treasury.

> It is up to individuals to claim money from BA, which provided no information on whether any compensation had been paid.

So you are right.

It would be nice if a portion of the money went back to the ICO, so they could grow and have a chance to focus on smaller fish as well.


> Not only is this false in the national sense, as the machinery and personnel of the state are not coterminous with those they rule over

... I'll bite - explain?

> The ICO does not represent me, yet somehow they are getting paid for the misuse of my data. It’s nice work if you can get it.

The funds aren't going to the ICO specifically, any more than they are specifically paying for my mother's NHS kidney operation.


Fine: The UK Treasury/Government does not represent or benefit me, as I am not a crown subject.


Remind us how the investigation and prosecution of this fine was funded?


They can sue the data controllers to be awarded compensation.

Article 82(1) GDPR:

>Any person who has suffered material or non-material damage as a result of an infringement of this Regulation shall have the right to receive compensation from the controller or processor for the damage suffered.



