Moonpig.com Vulnerability – Exposes customer data (ifc0nfig.com)
256 points by PaulSec on Jan 5, 2015 | 121 comments



I've seen dumber. In my second real job, I was a book editor, but I noticed our web master literally had a file called accounts.js which held a static array of usernames, passwords, and billing information for all of our customers. I told him this was terrible security, and he said, literally, "You'd have to view source to even know accounts.js exists, and our source is pretty hard to read. I'm not worried."

I took all the info to our CEO and got him demoted to server maintenance guy, on the spot, and I took over his job.

He later gloated that my store was much slower than his, since he downloaded our entire database as JS flat files and did absolutely everything client-side except payment processing and order fulfillment. I pointed out that my store didn't require 10 megabytes of download for the first page view, plus I had industry-standard security.

He was in even more trouble a couple of weeks after that, because some Russian hackers pwned our server so badly that we had to drive to the colo and replace it with a new piece of hardware. I've got a dozen stories about this guy; he's a hoot.

Okay, last story, I promise; he's allergic to electronics power supplies, so he was the only employee who got to work from home (where he kept his CPU in a separate room from his keyboard and monitor).


Ha ha. The real WTF is moving him to a job where security is even more critical.


"I took all the info to our CEO and got him demoted to server maintenance guy, on the spot, and I took over his job"

WOW. You are a terrible human being.


> WOW. You are a terrible human being.

Yes, heaven forbid someone qualified run their IT dept. What's he supposed to do? Sit around, idly hoping that someone else notices the incompetence?

I think OP made the right move. To me it sounds like the guy should have been fired rather than demoted.


Really, David? Come on. How many times have you made a mistake? Were you demoted and/or fired for it? Let's not pretend that you, or any of the rest of us, has never fucked up. In my 7 years as an engineer I have seen worse. However, that's no excuse to run to the boss/CEO to get someone demoted and take over their job. Think about their family and kids before you do such a thing.

If you defend such behavior, getting someone demoted and taking over their job, then I seriously think there is a greater problem in the tech community.

Edit: HN is getting fucked up day by day. Any simple disagreement is greeted with downvotes. Carry on.


"you are a terrible human being" is not really a simple disagreement.

"That seems like a rude thing to do" would be.

What you said was a personal attack, and a quite rude one at that.


> What you said was a personal attack, and a quite rude one at that.

That's correct, and no doubt the reason for the downvotes.


The downvotes here have grown way out of control. Simple disagreement with the majority opinion results in massive downvoting.

I've even seen numerous posts that contain nothing but factual information that displeases the audience here be voted down into the gray. The post can be in the flattest, most neutral tone possible, and if it's not what people want to hear, down it goes.

It's discouraging, and it's to a point where I no longer feel a desire to participate in this community. Frankly, I'm finding a number of subreddits to be more inviting and more interesting these days.

I don't really see what can be done about it, if you even agree it's an issue, but I did want to make a point of letting you know about a problem I've seen grow worse over recent months.


Do you have links to examples?


Everyone has made mistakes. But it takes a certain special person to stick their head in the sand when their mistakes are pointed out.

Make your mistake, take responsibility, learn and continue on.


No, I'm really not. This guy was an arrogant ass who ignored me because I was 22 and he was 51 and he "was doing this stuff when I was still pooping my pants". He refused to follow best practices, and he refused to take advice.

I told our CEO what this guy was doing, why it was bad, why nobody else does that, and how it ought to be done instead. I honestly thought our boss would just force him to follow my recommendations. But instead he told me to just re-do it the right way myself. Boss made the best decision for the company.

You could have presented your objections in a more tactful manner, but you didn't, because you're a judgmental asshole.


I'd be willing to bet the customers would disagree. And he did give the guy a chance to change this ways.


I am a former customer of theirs (in the UK) and just contacted CS about this. I'm also looking into contacting the Information Commissioner's Office as this issue is still open and my personal information (and that of the people I send cards to) is still available to anyone who may want it.

I'm pretty sure them ignoring this for a year is illegal as it involves personal information which their privacy policy didn't authorise them to publish. However I'll leave it to the ICO to make that determination.


I've also sent customer services an email demanding an explanation and the closure of my account and deletion of personal data if true and sent an email to the ICO.

In reality I don't hold out much hope but fingers crossed we can get some pressure behind this and force companies to take security seriously, especially when the vulnerability is responsibly reported as this seems to have been originally.


In my other comment, I said the ICO should have been the first place this was reported, rather than putting it on the net for opportunistic bad actors to dump all the customer data on Pastebin.


Please do contact the ICO! Regulation needs people to complain. The ICO doesn't investigate complaints after a three-month (?) delay.


My guess is that the ICO won't fine them very much as the data did not include full credit card numbers. However, they might increase the fine for failings in process, order lots of remedial measures, etc.

They might not even have PCI compliance issues alas.

The management will argue that they knew nothing, although that is becoming less of a defence now.


Doesn't matter: if they're a UK-based company they fall under the EU General Data Protection Regulation and can receive a fine of up to 5% of their worldwide turnover for any loss of personal data, blanked-out credit card numbers or not.

http://en.wikipedia.org/wiki/General_Data_Protection_Regulat...


There are more egregious examples of data protection violation here, and the fines look pretty small:

https://ico.org.uk/action-weve-taken/enforcement/


A cursory read of your own link would have told you that the new Data Protection Regulation is not yet in force and so the figure you quote is incorrect.

The ICO in the UK currently has the ability to fine up to £500k as I understand it.


Social engineering, once you have the last four digits of the credit card number and the billing address, is almost certainly enough to score full credit card numbers (e.g. by using them to reset the password on, say, an Amazon account).


Not to mention that the first few will be in a certain range (or possibly with a certain prefix) depending on the card type. Oh, and the last one is the check digit.

SSNs are worse, though. The last four digits plus your birth date & location might just give the whole thing away.
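For anyone curious about the check digit mentioned above: card numbers use the Luhn algorithm, so the final digit is fully determined by the rest. A minimal sketch:

```python
def luhn_valid(number: str) -> bool:
    """Return True if a digit string passes the Luhn check."""
    digits = [int(d) for d in number]
    # Double every second digit from the right (excluding the check digit),
    # subtracting 9 whenever the doubled value exceeds 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0
```

So an attacker who knows the issuer prefix and the last four digits has even fewer unknown digits to guess than it first appears.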


Do it.

I work in eCommerce, we develop a platform - and this stuff pisses me off no end, as it tarnishes the entire industry, and we'll now be dealing with jumpy clients for a month after this news hits the trade rags.


http://www.conosco.com/case-studies/moonpig-outsourced-it/

>Protection against cyber attacks

Wow...


To be fair to them they were just infrastructure not backend. I'm sure their firewall works perfectly, the trouble is the legitimate traffic that's allowed to do anything it wants!


So they delegated security to a separate team, which only got to put "reinforced firewalls and IPS appliances" around an app which was still missing basic internal security checks. (And it's hard to see how firewalls could do the checks on their own, without access to the app's data stores or duplicating app logic -- either of which makes it no longer a firewall.)

Unfortunately, it's all too easy to get this kind of partial solution from a "security team" that's distinct from (and worse, sometimes hostile to) the team that actually develops the app.


They aren't a full stack security team though and it's not fair to be putting any fault on Conosco; they are enterprise IT consulting and support and that's clear enough from a look on their website.

To be basic, a firewall does stateful inspection of inbound and outbound TCP/IP packets and an IPS guards against vulnerabilities with signatures; neither of which understands the application's logic -- there is nothing in off-the-shelf hardware/software that will prevent a shitty app from giving up the keys to the kingdom.

The firewall might block inbound connections to port 22 and the IPS might detect a SQL injection attack and stop it, but if you have an API that just gives up data you're screwed and that's precisely what happened. A legitimate request for information was made on legitimate ports, using legitimate protocols and as far as the hardware defense is concerned, everything is as expected -- the problem is the application.
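To illustrate the class of flaw (an insecure direct object reference), here's a hypothetical handler sketch -- the names and data are made up, and this is plainly not Moonpig's actual code. The point is that the fix has to live in the application, as an ownership check no firewall or IPS can do for you:

```python
# Toy stand-in for a data store, purely for illustration.
FAKE_DB = {
    42: ["10 Downing St"],
    43: ["221B Baker St"],
}

def get_address_book(authenticated_user_id: int, requested_customer_id: int) -> list:
    # The vulnerable pattern trusts the client-supplied ID outright:
    #     return FAKE_DB[requested_customer_id]
    # A firewall sees a perfectly legitimate HTTPS request either way;
    # only the application can verify that the requester owns the record.
    if requested_customer_id != authenticated_user_id:
        raise PermissionError("customer ID does not match the session")
    return FAKE_DB.get(requested_customer_id, [])
```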


They've already removed it...




Awkward.


To anyone thinking of enumerating the customer IDs to play with this, be very careful as it's illegal in the USA. That is exactly what weev was arrested and convicted for.


> That is exactly what weev was arrested and convicted for.

Please don't spread this misinformation, the USA justice system doesn't work (... like that). Weev was arrested for having a (very, very) loud mouth and pissing off the wrong, powerful people/businesses/corporations.

If he'd enumerated customer IDs for a smaller, lesser-known company such as Moonpig, and reported it to the media like he did, without being all inflammatory and trollish[0] about it (or without having a history of allegedly doing such things in very different contexts), he'd have gotten a slap on the wrist, a fine, or something (if anything), but not been thrown into prison as he was.

Your post makes it seem like Weev was convicted "for" doing something that is illegal in the USA and that the justice system worked "exactly" how it is supposed to, equally as it would apply to anyone.

[0] stating this as a fact of how it happened, not judging him about this, at all


There is more context to his arrest but the actions and evidence supporting his conviction were as I described.


Does anyone know what the legal position in Great Britain is?


Generally it may fall under the Computer Misuse Act and 'unauthorised access to computer material'. Presumably from Moonpig's perspective inputting alternative customer IDs would be considered to be unauthorised access...


Or don't do it in the first place, because it's obviously wrong...


>because it's obviously wrong... //

Are you trying to say it's morally wrong to read data made publicly available through a site's API? I think that's a stretch. Clearly there are very obviously malevolent things you could do with data acquired with such queries, but just iterating on a URL query string seems pretty far from an obvious moral wrong.

Legally questionable, for sure. Morally forthright, doubtful.

The wrong comes in using data nefariously, surely; not in merely observing it.


> Are you trying to say it's morally wrong to read data made publicly available through a site's API? I think that's a stretch. Clearly there are very obviously malevolent things you could do with data acquired with such queries, but just iterating on a URL query string seems pretty far from an obvious moral wrong.

I used to think this. But I changed my opinion and yes, in this particular instance, it's pretty much unambiguously morally wrong.

Why did I change my opinion? Because the previous one was wrong (morally). Ethics isn't rocket science or brain surgery. Well, maybe a little like brain surgery.

I could download that data and IMO, it'd be wrong to do so. I'm not always a good person, even by my own standards of ethics, so I might download that data. I wouldn't use the data maliciously because IMO that'd be even wronger (but by now why are you taking my word for this? I already violated my own ethical code once!). So all in all (if you take my word for it), the consequence of me downloading that data is strictly less bad than some malicious actor doing the same. I'm not really a big fan of Consequentialist Ethics. It's nice in theory (say, Utilitarianism), but in practice people simply have to use a derived code, which is not always as clearly defined. I like to keep my hypocrisies at surface-level.

So I could do it, things would probably turn out right for everyone involved, but I'm not going to kid myself and tell myself it's not wrong to do so in the first place.

(Also, there's the risk where having a copy of the data could mean I could lose control of it, fall into more malicious hands, and that'd be bad. Practical considerations I do not disagree with, but I should not need these to determine whether something is right or wrong)


I don't think there's any ambiguity here. Deliberately downloading personal information—clearly not intended to be released publicly—does not seem to be a defensible action.

We're not talking about downloading a couple of records and alerting someone about it, after all.


>does not seem to be a defensible action //

What harm is there in viewing data? None.

Defended.

Which do you find is indefensible, seeking to consume data or consuming it? Or, does one need to actively seek it and also consume it to cross your threshold of immorality? Or ...


What harm is there in viewing data? None.

Yes there is – you've consumed other people's data without permission.

Would the same apply to physical trespass in your mind? Is there any harm in entering an accidentally unlocked house and snooping around? There's nothing preventing you from doing so...

I'd argue that it's wrong, and equivalent to consuming data which is obviously intended to be private. It's not like there's ambiguity about its status.

Which do you find is indefensible, seeking to consume data or consuming it

Surely you can only consume data if you seek to do so?


>you've consumed other people's data //

Except you don't consume it, you view it. The data remains and is accessible at all times to others. If you don't use it you haven't consumed it in any way.

>Is there any harm in entering an accidentally unlocked house and snooping around? //

There is a lack of equivalence here IMO as personal space, such as in a dwelling place, is quite different from non-dwelling space. The case of viewing data (to me) is like a person walking across your farmland without permission; quite different to finding them in your bedroom. The lack of equivalence between physical and virtual spaces makes this analogy fundamentally flawed.

If it's addressable on the internet then it's not private: If you hide your diary under your bed, that's private. If you hide it under a bush in the park, that's not private.

>Surely you can only consume data if you seek to do so? //

I shouldn't have used "consume", as the data is not consumed but viewed (unless it's used in later actions that "consume" it somehow). That said, you can view data without seeking to view it; you can seek to view data without being able to view it. If in the OP the person had tried altering the account ID and they couldn't view data from their other account would they still be committing an indefensible wrong in your opinion?

Interestingly, I was just on a site called PC Builder that had price data in INR (rupees); switching to USD added a section to the URL, and to see if I could use the site in GBP, I altered that part of the URL ... did I commit a crime in your opinion?


Given the context is scraping, I'd argue enumerating a customer ID is pretty obviously wrong -- if that wasn't obvious enough, the response data is. And to accidentally, unknowingly harvest and store that data is much more of a stretch.


Apparently they hired these guys to help with "protection against cyber attacks"

http://www.conosco.com/case-studies/moonpig-outsourced-it/

Awful...


It's worth pointing out that the case study is from 2007, there's a good chance that this company is no longer involved and likely wasn't involved in building the API for apps and the security on them.


In any case, once this is out, they will have to take the Moonpig case study from their site.


Yup that link is now 404


Their first "solution": Fixed price outsourced IT department


To be fair, the complete security failure outlined in the article is at the app level and not something I'd expect most IT departments to bear responsibility for (unless they were directly consulted about how good of an idea using basic auth with hardcoded credentials is and gave an OK on it).

Of course, I wouldn't be too surprised if the app/API here were also outsourced to a low fixed price development shop.


Surely this is bad enough to warrant criminal prosecution? Not sure if that's even possible in the UK, but it ought to be... Shameful to have sat on that for over a year. Shameful.


If this were the USA it would certainly be bad enough to warrant prosecution of the researcher. I am not familiar with laws in the UK, however. Keep in mind the similarities between this research and weev's research.

This type of blatant insecurity definitely should be punished, and I wish more policy makers both cared and made the effort to understand the terminology behind phrases like "no authentication", "plaintext", etc.


First of all, the company could definitely be sued for negligence in the US. Not sure if they could in the UK.

Second, there are not that many similarities between this research and weev's research. In this case, the researcher created 2 accounts which he had control over, then read data from both of the accounts despite not authenticating to either of them. He did not access any other customer's information (or at least he's suggesting he didn't).

Weev on the other hand scraped private information for over 100,000 customers and shared it with friends and reporters.

Both technically violated the CFAA, but weev's offense is a much greater violation of customer privacy, while this researcher has not violated anyone's privacy.

I still don't think weev should have gotten any jail time, but you're making an unfair comparison.


Personally (and I know this is likely to be an unpopular sentiment on HN) I have very little sympathy for weev.

He knowingly and deliberately attacked a weakness he had found to scrape data, knowing that the access was unauthorized. I disagree that the data was in the public domain (although the Third Circuit disagrees); just because something is accessible to the public doesn't mean it's in the public domain.

Just because he wrote it up as a security researcher doesn't mean he should be immune for his actions - in fact in some ways it makes it worse because he did it knowing that he was unauthorized.

He exposed the vulnerability to the press (so he didn't act in good faith regarding the disclosure) and he did so potentially for monetary gain (he claimed to be a member of a hacker group called “the organization,” making $10 million annually).

I think one part of improving cyber security is prosecuting people who deliberately and maliciously hack into other systems who do so for either monetary gain or fame. I think this is especially the case whereby they don't act in good faith (e.g. providing proper disclosure).


>I think one part of improving cyber security is prosecuting people who deliberately and maliciously hack into other systems who do so for either monetary gain or fame.

This would do nothing except cast a chilling effect over the security community. Everyone would sit on exploits, too afraid of overzealous prosecutors to publish them or even reach out to the affected parties.

Unless, of course, you believe the US justice system to be the paragon of restraint and reasonableness.


No, it would be better if responsible disclosure was codified in the CFAA. That's worthy of a campaign - but weev didn't practice that, so he's a poor figurehead for such a campaign.

Such a protection could provide an equal level of footing with the DMCA (i.e. you aren't liable for accessing a company's computer systems if you provide full disclosure and advance notice, in the same way YouTube isn't liable for hosting copyrighted content if they provide a takedown mechanism).


Doubt it's as unpopular as you think.


Apparently - probably because of a silent majority instead of a vocal minority.


I agree, and feel that the EFF made quite the strategic error in supporting Auernheimer's appeal.


Weev should probably be in jail for several reasons. Just not the specific reason they sent him to jail for. The EFF had to fight because the conviction set a really bad precedent for other research.


The trouble with fighting for human freedom is that one spends most of one’s time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all.

— H. L. Mencken


> "For it is against scoundrels that oppressive laws are first aimed..."

[Citation Needed]


I honestly don't think it is unfair. "Both technically violated the CFAA" is an important sentence.

The legal system is very complicated and sometimes small details make very big differences in cases. I'm not convinced others in the legal system would see this as different.


I don't think that the author violated the CFAA, though: in both cases, he was acting on behalf of users that he had created in the system -- the same requests he would normally make when using those accounts. ("BobAtHome", "BobAtWork" could conceivably be two accounts for Bob.) That seems substantially different than what weev did, which was try to read ${Everyone}'s data.


Moonpig.com is not an application you run on your own computer, though, it's a service operated and hosted by Moonpig. Any tampering with that application in a way that's not intended is a violation of the CFAA.

As you and I have essentially both just said, it's very unlikely there would be any prosecution due to the facts and the researcher's intentions, but I think it is still a technical violation. Paraphrasing, but the first line of the CFAA is "having knowingly accessed a computer without authorization or exceeding authorized access" (that line is explicitly for access that could jeopardize national security, but it goes on to set similar limits for general unauthorized access of any entity).

In this case it is not necessarily unauthorized access of a customer's account, but unauthorized access to a component of Moonpig's system.


That's difficult to argue given the app underlying it knowingly makes these requests.

It's arguable that he could be reverse engineering the API to make a compatible client - I think that should be legal, although IANAL.


The CFAA is a very broad statute, but the US legal system still does focus heavily on intent both for charging and sentencing, as well as deciding whether to charge at all. Even if in theory 2 people are convicted under the exact same law, they could get drastically different sentences based on how the judge perceives the defendant's intent.

In this case there's almost no chance law enforcement would charge the researcher unless Moonpig decided to press charges. And even then, they may decide not to charge due to the facts of the case (though of course they legally can).


    > If this were the USA it would certainly be bad enough to 
    > warrant prosecution of the researcher
Sounds like he didn't access any data he wasn't allowed to, if he read the data of test accounts. Not sure how you'd prosecute this in the UK.

Also you'd need to convince the CPS that it was in the public interest to prosecute, and they're not elected officials who need to appear Tough On Crime, unlike in the US. And even if both of those things happened, you'd then need to convince a magistrate that the case warranted a conviction.

Still, he should have gone to the ICO first and foremost.


In the UK the relevant law AFAIK is the Computer Misuse Act, http://www.legislation.gov.uk/ukpga/1990/18/.

He has authorisation to access the data, and authorisation to access the computers in question. He doesn't, perhaps, have authorisation to use the specific mode of access but that isn't pertinent to the Act as written AFAICT.

The only possible part he falls foul of is Section 3(3) in that his actions might have caused the system to fail, but "recklessly" has a suggestion of him knowing that such deleterious outcomes were likely, and I don't think that's really true. I think his actions as reported are not in breach of this Act.

However, the proposed Section 3A will cover such actions if he [the reporter of the security lapse] believes that the information (see 3A(4)) he published is likely to be used to assist in the commission of an offence.

>"A person is guilty of an offence if he supplies or offers to supply any article believing that it is likely to be used to commit, or to assist in the commission of, an offence under section 1 or 3." (CMA 1990, proposed S.3A(2))

This section is exceptionally broad. Indeed, it appears to outlaw the disclosure of bugs found without malice and without intent. Communicate to Google, say, a program/data that could be used to break into their system and it seems you fall foul of the letter of that section. Chilling indeed.


    > warrant criminal prosecution
Here's a not too dissimilar case:

https://ico.org.uk/action-weve-taken/enforcement/worldview-l...


Disgusting - this should be priority one for them to fix.

I just changed all my details to ones from a fake name/address generator, then emailed moonpig to close my account. I will lose about 80 pence, but nevermind.

I didn't see an option to get rid of my credit card details, so that may still be vulnerable, especially with the NameOnCard field in the api.


I know my mum has a Moonpig account so I'm pissed about this, but I don't recall if I have an account.

Recently, I have mostly been using CFHDocmail. It's 96p for a full colour A5 greeting card of your own design.

(It's also cheaper to use them to send letters than it is for me to buy a stamp. They also do postcards, going as low as 38p delivered. Lots of mailmerge and API stuff available too iirc, but I've never used any of it.)

Edit: They may use windowed envelopes for the cards; when I tested they didn't, but now I've been told they do. I've not sent one to myself since my original testing, and none of the recent recipients have said either way. I'll make a quick one and send it to myself!


Wow. This is actually still wide open. This is really bad.

Fun fact: you don't even have to send the basic auth header; it'll respond just fine without it.
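For contrast, even a minimal server-side check should reject requests that lack credentials entirely. A hedged sketch of what that looks like (hypothetical code, and clearly not what this API was doing):

```python
import base64

def check_basic_auth(headers: dict, expected_user: str, expected_pass: str) -> bool:
    """Return True only if a valid HTTP Basic auth header is present."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Basic "):
        return False  # the reported API apparently skipped even this step
    try:
        decoded = base64.b64decode(auth[len("Basic "):]).decode("utf-8")
    except Exception:
        return False
    user, _, password = decoded.partition(":")
    return user == expected_user and password == expected_pass
```

(And even this is only the first layer: Basic auth with credentials hardcoded into a shipped app, as described in the article, protects nothing.)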


I'm sure the (outsourced) dev team will have a bad day tomorrow. This is just unacceptable. According to the blog post he first made contact in 2013! Bugs happen, but this is just bad design.


My comment from the other thread:

They also make it very difficult to delete your account. Rather than just have a link on the site, you have to contact customer services and they say they'll respond in 24-48 hours.

Not to mention the ways they try to hide the removal of your card details. If you want to remove your card details, do the following:

The easiest way to do this would be to go to the My Account page then click on the ‘Add Moonpig Prepay Credit’ link, click on the Buy link and your saved card details will be shown onscreen. Click on the ‘Remove Card’ option.


Looks like the API is no longer accessible from here. Seems like they have pulled it down.


In the circumstances, that might be a generous explanation for their ID-enumerable, non-rate-limited API going down.


That's good; that's their entire business down, so they are going to have to pay attention.

I wonder who made the decision to take it down. I hope they don't get fired.


Yes, it appears dead now. Open for around 3 hours after the first post of the vuln, I think.


In the address example you can even omit the arguments and it just returns you a large list of addresses. I would expect this to hit the news here in the UK tomorrow!

Judging by their parent company's website they seem to be PCI certified (http://careers.photobox.co.uk/security-officer-moonpig/), which is likely to be revoked after this. Also, given the private information on show, I would expect this breach of the Data Protection Act to mean a large fine for them.

For anyone at risk from this you can't just cancel your account, but you can manually go through and delete quite a bit of data such as address books and they then disappear from the API calls.


Been a while since I read PCI DSS but if the PAN isn't there, does it specify you have to protect that information? Also, if they don't actually have the PAN touch their servers (like, using a BrainTree or Stripe-like solution), PCI compliance is quite minimal. Even PCI DSS 3.0 is trivial to deal with using Stripe (they just insert an iframe so the CC info goes directly to their site).

Of course, yeah, they don't deserve the benefit of the doubt here. Given such a terrible API they probably are a mess inside, too.


Reading that job spec I assumed they handle all the PCI side of things themselves; if they were using Stripe etc. I doubt you'd need such an involved role.

Given the mess it looks like on the front, I would bet PANs are stored in clear text too!


They have 3 other brands: http://photobox.co.uk http://uk.paper-shaker.com https://sticky9.com

Only the last one seems to enforce SSL. I registered a dummy account on Photobox (username/password/email) via their form, which was not using SSL.


Photobox acquired Moonpig in 2011 [1]. In 2010, Photobox got called out for emailing passwords in plaintext[2], and were quick to take to twitter to say "It will never happen again."[3] At that point, it had only been happening for 4 years [4].

Coupled with the tone of the job advert already posted by others [5], it doesn't seem too hard to imagine a corporate culture where security is not a serious concern until things go wrong.

[1] http://www.bbc.co.uk/news/business-14275632

[2] http://www.pcpro.co.uk/news/security/360163/photobox-sorry-a...

[3] https://twitter.com/PhotoBox/status/20719242964

[4] http://blog.dave.org.uk/2006/06/more-password-s.html

[5] http://careers.photobox.co.uk/security-officer-moonpig/

[edited for clarity]


The number of companies that send (and possibly store) plain text passwords is scary. I keep reporting them to http://plaintextoffenders.com/


I was about to ask why anyone would bother sending plain text passwords while storing them encrypted. I then remembered a high-school friend's first (and largely unsupervised) job where IIRC he devised a ridiculous password encryption (not hashing) scheme in PHP (on shared hosting).

Unrelated horror unfolded a couple of years later when for some peculiar reason he had to move the site to a godaddy VPS. An unencrypted customer database sitting at /db.sql, fully accessible to the world. Apache had been configured to show directory indexes and, to take the site offline, /index.php had been removed. I think at the time I even needed to explain the possible consequences. I just remember being told that the database was restoring and it wouldn't take too much longer!

I think any remaining part of me that implicitly trusted interesting websites with personal data died that day.
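For contrast with the "encryption (not hashing)" scheme above, the standard approach is a salted one-way key-derivation hash, so there is no recoverable password to email back or leak. A minimal sketch using Python's standard library:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    """Derive a salted PBKDF2 hash; store (salt, digest), never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

A site built this way physically cannot send you your password back, which is exactly the point.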


Photobox is the parent company, which bought out moonpig, Papershaker and sticky9. Each product is an entirely different codebase and different team working on it (I know because I did some work for Papershaker, part of which was working on a site wide switchover to SSL - which for now you can manually opt into: https://uk.paper-shaker.com/).


It's astonishing that somewhere out in the modern world there's an api that returns personally identifiable information without requiring any sort of authentication.

What I find absurd is that the company hasn't done anything about it. Even if they don't care/know about security, they must at least care about bad PR...

But with all of that in mind, I don't know what the best way is to fight these clueless behemoths. You disclose, and thousands or even millions of people will be compromised. You don't, and those same people could be compromised, but no one will know because the attacker(s) will just continue to siphon information quietly.

They should be waterboarded for making a responsible individual have to choose.

For the record, I approve of this disclosure. Better to know the evil than let it go on unnoticed.
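For anyone wondering what class of bug this was: per the write-up, the API trusted whatever customer ID the client supplied (an insecure direct object reference). The fix is a server-side ownership check; a hypothetical sketch, with invented names and storage, not Moonpig's actual code:

```python
class AuthorizationError(Exception):
    """Raised when a caller requests a record they do not own."""

def get_customer_record(session_customer_id, requested_customer_id, records):
    # Identity must come from the authenticated session, never from the
    # request body or URL alone; otherwise any customer ID can be read
    # simply by incrementing it.
    if session_customer_id != requested_customer_id:
        raise AuthorizationError("callers may only read their own record")
    return records[requested_customer_id]
```

The check is trivial, which is part of what makes a 17-month turnaround so hard to excuse.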


> They should be waterboarded

Except, you know, for the part where that is an inhumane thing to do, even when done to people that are actually guilty of committing terrible crimes.

> It's astonishing that somewhere out in the modern world there's an api that returns personally identifiable information without requiring any sort of authentication.

Hello, have you met the 21st century? It's a freakshow and clusterfuck of planetary proportions. Although even accepting that fact, yes, I suppose that doesn't make it less astonishing. Spoiler alert: things will probably get even more astonishing before it gets less. Fasten your seatbelts, wear a hat, etc.


On top of this clusterfuck, I find it galling that I can't just close my account and have all my details removed. Oh no, you need to fill in a contact form.


Lots of users on Twitter saying to delete your account, but is there any proof that this will exclude your account from the API?


It would probably be more effective to update your account with nonsense details.


Odds on it adds a "deleted" flag to your account record and nothing more...
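And if it is just a soft-delete flag, the data stays only as private as the least careful query. A toy illustration (schema invented for the example):

```python
users = [
    {"id": 1, "email": "alice@example.com", "deleted": False},
    {"id": 2, "email": "bob@example.com", "deleted": True},
]

def find_user_unsafe(user_id):
    # Forgets the flag: a "deleted" account is still served to callers.
    return next((u for u in users if u["id"] == user_id), None)

def find_user(user_id):
    # Every read path must filter on the flag for deletion to mean anything.
    return next((u for u in users if u["id"] == user_id and not u["deleted"]), None)
```

So an unauthenticated API like this one could easily keep serving "deleted" accounts.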


This is irresponsible disclosure. You should have contacted the information commissioners office. They would have used legal powers to force Moonpig to rectify this. There are very steep penalties for not protecting customer data.

Now that you've publicly disclosed this, opportunists (people one level above script kiddies) will probably grab a data dump and compromise every customer.

Dealing with this via legal channels would have ensured a resolution whilst protecting customer data from any opportunistic bad actor.

Shame on you. I can't wait for myself and my wife to get doxxed now. Thanks.

Also, FYI; the whole card number isn't returned because they are probably tokenising the full card number with their payment gateway.... Or at least, I hope.

DOWNVOTING because you don't agree with me? How rude. I believe I'm making a valid point: there are legal channels in place to help with this sort of thing.

EDIT: some people think I do not hold Moonpig responsible for this. I do! I am not blaming the security researcher. What I am saying is that some countries (like the one where Moonpig is incorporated and operates) have agencies that deal with issues like these. Getting these agencies involved before public disclosure is a much nicer way to deal with these sorts of issues.

I'm aware that this exploit may already have been used but that doesn't mean that we should tell everyone about it until it is resolved. Getting the ICO involved may have resolved this issue a long time ago.

My disclosure - I have a friend who works at the ICO and she tells me that these issues usually take them (on average) 2 months to sort out. Companies get very anxious when the ICO contact them.


That's a pretty heavy handed definition of irresponsible disclosure.

The onus of patching security flaws is on the company, not the security researcher. Responsible disclosure is a courteous and respectful form of helping a company fix their vulnerabilities, but it ceases to be responsible if agreeing to keep a vulnerability private enables the company to sweep it under the rug.

Top security talent at Facebook and Google can patch complicated vulnerabilities in a matter of hours, days or weeks. 17 months, even for the most unsophisticated engineering team, is inane. At that point, you could have spent 17 months rewriting the entire codebase from scratch.

What the discloser did here was perfectly reasonable - 90 days is typically considered the upper limit of time for a company to fix a vulnerability. This is typically the time that a vulnerability will be automatically eligible for public disclosure on, say, Hackerone. 17 months? No way.

Also, downvoting is a valid way to express disagreement, see this comment by Paul Graham: https://news.ycombinator.com/item?id=117171


I still don't see why he had to do this?

He had plenty of time to inform the ICO of this issue. He contacted Moonpig, then let them sit on this for a year.

If he wants to be a disclosure hero, he could have at least told the ICO at the same time he told moonpig.

The issue is 100% Moonpig's fault, but he chose to disclose publicly rather than use the legal route set up to deal with these kinds of issues.

The whole responsible disclosure scene needs a reboot and people need educating on the responsible way to deal with these issues. Public disclosure should be a last resort (within reason). Not even contacting the ICO before doing this is shocking to me.


OP here and I agree with you. The ICO genuinely didn't even cross my mind, and in hindsight I probably should have gone via that channel before publicly disclosing. Are there any set procedures to follow for this sort of thing?


Another minor consideration - here in the UK this was posted at 10PM - not exactly a friendly hour. It would have been nice to schedule the post for a time when UK businesses expect to operate. I don't expect they would have thanked you for it in any case, but they would probably have had both a better response time and a better organised response


They had 17 months, and their Twitter account was still posting at 9pm this evening.

If they gave two shits about our data (and it might include mine, it definitely includes my mum's), or if they were capable of a sensible helpful coherent response, they'd have done it 16.5 months ago.


In reply to the child post (of my other comment), because I can't do so directly due to the nesting limit:

>Have you read any of the above?

>It's clear they didn't care, that's why I'm saying the ICO should have been informed. That would force them to give a shit.

I had, at the time of writing my post, read all the comments on this story. I was commenting specifically on the parent's point about what time of day the story of was posted. I don't disagree with you re: ICO.

However I don't think it's fair to characterize the disclosure as irresponsible. The fault lies with the vendor for not patching. The guy who publicized it followed industry-standard practices for responsible disclosure. The vendor is just fucking useless.

I'm unhappy, as I'm sure it'll cause an increase in spam and possibly spearphishing to my mum, which I will subsequently have to deal with. Yey. But that's Moonpig's fault.

Edit: And in response to the response to the response...

>Why are you saying the fault lies with the vendor? Do you think nobody knows that? Do you think that's not obvious? Do you think that's what I was commenting about?

Because it does. No, I think everyone knows that, however it was relevant to the rest of the paragraph. I don't think that was what your child post was about, however I didn't want to make ANOTHER post to voice my opinion.

>There's a difference between reading and comprehension.

I read AND UNDERSTOOD the comments, I was of course referring to your rhetorical question implying that I hadn't even read them. Apparently you didn't comprehend that?


Why are you saying the fault lies with the vendor? Do you think nobody knows that? Do you think that's not obvious? Do you think that's what I was commenting about?

There's a difference between reading and comprehension.


yes, quite clear that they didn't give it the priority it warranted (aka giving a shit) - just wanted to point out that there was a friendlier option timing wise. For my money, I'd have seen this disclosed 11 months ago - it's a serious vulnerability to the extent that I'm glad I've never used moonpig.com - but I'd have seen it disclosed in the UK daytime when the company was awake to be able to shut down its API. There's even an argument to be had that waiting as long as this is a little irresponsible - although that's covered to some extent by following up.

I don't know if it's legal to give advance warning of public disclosure - that could easily become a minefield as it might be interpreted as a threat, and linking it to a request to fix could seem coercive.


Have you read any of the above?

It's clear they didn't care, that's why I'm saying the ICO should have been informed. That would force them to give a shit.



You're getting mad at the wrong person here, full stop. This is gross, inexcusable negligence and incompetence. I'm surprised this guy didn't wait more than a few months, given the severity of this problem.

> whilst protecting customer data from any opportunistic bad actor

Riiiight. Do you honestly think something this basic wouldn't be discovered by criminals soon, if not already?


> You're getting mad at the wrong person here, full stop.

No I'm not. I'm not angry. I realise this is the fault of Moonpig.

>This is gross, inexcusable negligence and incompetence. I'm surprised this guy didn't wait more than a few months, given the severity of this problem.

I agree

>Riiiight. Do you honestly think something this basic wouldn't be discovered by criminals soon, if not already?

We don't know if anyone has already used this. We don't know if anyone ever knew about this. But now everyone knows about it. To be honest, I would not be surprised if someone may have already used this for nefarious purposes, but at this point in time there doesn't seem to be a public dump of data for low-skilled hackers to continue using for years to come.

I still think this should not have been publicly disclosed in this manner. He did not contact the ICO and he left this exploit open for a year because he didn't know the mature way to handle this.


You do know that this is the first time a lot of people who do not live in the UK are hearing of the ICO?


I would say that the period August 2013 to January 2015 is more than "a few months".


My wording was crappy there. I meant I'm surprised he didn't wait just a few months. As in, I'm surprised he didn't get impatient and do this earlier.


While your point is valid, I think you're getting downvoted because you're completely forgetting that the probability of someone malicious finding this vulnerability and exploiting it without disclosing is quite high. Going through legal channels would just mean the API will be live for longer. Lawyers like to take their time.

Instead, the disclosure resulted in the API being shut down within the hour. A much better result IMO.


I don't disagree with you but I still think I have a good point.

He should have gone to the ICO straight away as well as report directly to moonpig then if it wasn't fixed within x amount of time, take next escalation step (which may or may not be public disclosure). Given that it's midnight in the UK now, we're lucky that they acted so quickly (assuming the offline API isn't just scheduled downtime).

Going public had no guarantee of taking the API offline. I guess taking risks like that is easy when it's not your own data that's being compromised...


I'll add my two cents as a non-Brit: I had never heard of the ICO before this thread. Someone please correct me, but the closest thing we have in the States may be contacting the Attorney General?

I say this thinking of the argument the rest of the world makes when the DMCA threat is used against a non-US entity.


The ICO is a bureaucrat with responsibility for enforcing the Data Protection Act. There is a small amount of overlap with the Surveillance Commissioner who oversees all surveillance, especially under RIPA (regulation of investigatory powers act).

The ICO is reasonably good - I don't get any (personal) junk telephone calls or junk mail because of our laws about how companies handle my data. (This seems like a trivial example now I've typed it! But it did mark a clear difference between before and after ICO).

https://ico.org.uk/

The website and reporting is much better than it used to be. ("Please download, print, and complete this MS Word document, then post it to this address")


Moonpig is UK based. He could have looked up how to report a data breach in the UK.

Not sure what the DMCA reference is about. I understand that people use the DMCA on companies that are not US based, therefore it has no power. Still not sure why you mentioned that, though.


Yea, you're right; I thought that some context might be needed after I posted.

They aren't related whatsoever; however, the thought process of being put into the same position as the security researcher in this article is what made the connection for me. Assuming that the author wasn't from the UK (he probably is, but bear with me), as someone from the States I would have assumed that having an email exchange with the company was more than enough, especially if there was a reply on their end.

From my perspective, again knowing nothing about UK law (as much as people in the UK, China, or Fiji may know about US Law), I wouldn't know where to turn after that. Maybe a teaser post, without disclosing everything? If it weren't for the fact that the author stated that he had several two-way conversations with a representative of the company, I would have more sympathy for moonpig.

Speaking of which: How effective is the ICO?


The ICO is pretty well known in the UK though. I'm not from the UK and I know about them. (Mostly because of their role in the whole eu-cookie-law farce)


The guy who found the vulnerability in 2013 could have simply reported it to authorities at the time. If their turnaround was earlier than 2015, it would have worked out better, yes?


I'm guessing he didn't think the company would leave such a huge issue unfixed...


He could have reported this to the Information Commissioner's office in 2013, and then if either the company or the IC failed to do anything, then disclose, at this exact timeline.

Then, at least, the legal system would have also been given a chance to resolve this without full disclosure and potential doxxing.


Probably downvoting because the whole (ir)responsible disclosure discussion has been had, for decades, with all arguments from all sides and repeating it here, again, would be just going through the motions.


Agree with clobec, I think this is irresponsible to disclose this so publicly.

There'll be a lot of collateral damage now.



