Terrible response by the Puffchat guy: https://twitter.com/MikeSuppo (Google Play's dev listing goes to the Puffchat blogspot site, which links to this Twitter account.)
"This is a friendly message to advise that you remove all web based content about Puffchat"
"Please remove within 1 hour."
"Puffchat will be fixed in due course. Every piece of content with the original author's name attached to it after GMT scheduled will only provide evidence that can be used against him."
Edit: Actually, this could just be a publicity stunt. Do something boneheaded like this, get some exposure. Take flak from users that don't necessarily matter, and hope to score a lot more users. If you're not getting the growth you hoped for, what do you have to lose?
No such thing as bad publicity. It gets the brand known "oh, yeah, puffchat, I've heard of that, can't remember why", and if they tell users that it's secure often enough, no amount of evidence that it isn't will make any difference - humans respond more strongly to an authoritative voice than to objective reality.
> @iH8sn0w @NinjaLikesCheez All content, including articles, scripts, reddit posts, tweets, everything. By 11.40pm today (3/3/2014).
Hahaha, that is a pretty hilarious bit of fail, there. I don't think it could really be intentional... it might make him kinda famous (in a probably unwanted way) but it won't net him new users.
I'm not too impressed with the blog's author either. He documents breaking into another website in a previous blog post:
http://faptrackr.org/blog/?p=45
Exactly. A lot of people don't know that you can easily crack an .ipa binary and see things like method names and string constants with about 5 minutes of work. You can do the same with Android .apk files. Seriously, if you're writing security-intensive software, try to crack your own binary and see what information you can get. You'll probably see way more than you thought you would.
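The core of it is just pulling printable runs out of the binary, which the `strings` tool does. A toy sketch in Python (the "binary" blob and the embedded secrets here are made up for illustration):

```python
def extract_strings(data: bytes, min_len: int = 4):
    """Return runs of printable ASCII of at least min_len chars,
    roughly what the Unix `strings` tool does to a binary."""
    out, cur = [], []
    for b in data:
        if 32 <= b < 127:          # printable ASCII range
            cur.append(chr(b))
        else:
            if len(cur) >= min_len:
                out.append("".join(cur))
            cur = []
    if len(cur) >= min_len:
        out.append("".join(cur))
    return out

# A toy "compiled binary": junk bytes with the kind of hardcoded
# API keys and endpoint URLs that sit in plain sight in a real
# .ipa or .apk once you unzip it.
blob = (b"\x00\x8f\xc3" + b"api_key=SECRET123" + b"\x01\x02"
        + b"https://api.example.com/v1" + b"\xff")
print(extract_strings(blob))
# → ['api_key=SECRET123', 'https://api.example.com/v1']
```

If a key or endpoint shows up in output like that, an attacker has it too; nothing shipped to the client can be treated as secret.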
Agreed, but it makes me wonder if we can trust the author's description of his disclosure efforts. Not that that gives a free pass to the app developer(s), of course.
You're right, it is silly, it was a dumb thing to do - but most_unique was actively committing credit card fraud against innocent people to run his site and wasn't going to stop anytime soon.
Aren't burner phones that way because you want to ditch the entire phone to erase any link to you after using it in an incriminating way?
Even if this app was "secure", it wouldn't prevent the need to ditch a phone. LE can subpoena the company, find out which IP:port connected for whatever user/message. Then go to cell company and get records and track the cell.
11 (or is it 12?) months in, Andrew "Weev" Auernheimer is still serving a 3-year sentence (conviction now on appeal) for "hacking" the AT&T iPad signup script to get email addresses out of it ... using a web request and random numbers. In case that's not clear enough, it was published, public data waiting to be requested, with no security restrictions except the numbers to be guessed. I'd say that's the same for any such "private" (hah!) service that uses ID numbers to access data over public channels, wouldn't you?
Ultimate Streisand effect - I have literally never heard of this app that seems geared towards drug users; and yet I learn about it from its incompetence.
How do people release public APIs without THE MOST BASIC OF SECURITY CHECKS. Really? You can add a friend without any checks and even send messages as someone else? Christ.
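The fix for "send messages as someone else" is the most basic check there is: derive the sender from the authenticated session instead of trusting whatever sender ID the client sends. A minimal sketch (the names here are illustrative, not Puffchat's actual API):

```python
# auth token -> user id, as a stand-in for a real session store
SESSIONS = {"token-abc": "alice"}

def send_message(auth_token, claimed_sender, recipient, body):
    """Reject requests where the claimed sender doesn't match the
    identity actually bound to the auth token."""
    actual_sender = SESSIONS.get(auth_token)
    if actual_sender is None:
        raise PermissionError("not authenticated")
    # A broken API trusts `claimed_sender` outright; the fix is to
    # ignore it (or, as here, reject any mismatch).
    if claimed_sender != actual_sender:
        raise PermissionError("cannot send as another user")
    return {"from": actual_sender, "to": recipient, "body": body}

# The spoofing attempt described in the thread:
try:
    send_message("token-abc", "bob", "carol", "hi, it's bob")
except PermissionError as e:
    print(e)  # → cannot send as another user
```

If the server skips that comparison, anyone with any valid token (or, apparently in this case, no token at all) can impersonate any user.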
A) Who funds these guys?
B) How can I get a piece of that seemingly-easy-as-hell-to-get pie?
I triggered executive-level uproar just yesterday by pointing out what should have been obvious security issues in an API we were about to be asked to integrate with. I was not the first technical person to look at the document we were given, and in fact I was the only one to look at it who couldn't actually read it in detail (it was in Chinese, I only speak English, but the identifiers were in English), but nobody else had spotted the problem.
I'm not a roving security consultant, so my sample size is limited, but I have seen little evidence that even basic security awareness is part of the toolkit any substantial number of developers have.
I was a security specialist at a large software company for a couple of years, and I did some developer training.
> I'm not a roving security consultant, so my sample size is limited, but I have seen little evidence that even basic security awareness is part of the toolkit any substantial number of developers have.
Agreed, and I think that's when a (good) CS education makes the difference, by helping you grasp how to design and code for security, which are fundamental concepts that a lot of "junior" developers have no clue about. And then you see the same basic attack vectors cropping up all the time...
I have no CS education. I don't have any degree, or even a high school diploma. Most of those around me have had CS or related degrees, many from quite well-regarded programs, but there has been no apparent correlation to security awareness. To the extent they have an edge, it's in mathematical analyses and algorithm design/implementation[0], which are of limited direct use in most day-to-day things like noticing "this endpoint uses plain HTTP", "this isn't an HMAC, also serial numbers aren't secret keys", or "a 4-digit PIN is not a secure password".
[0] And even then, I've wondered more than once what the hell goes on in CS programs when I've found myself explaining concepts like entropy and the difference between speed and scalability.
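On the "this isn't an HMAC" point: the difference is concrete and small enough to show. A sketch, with made-up values, of a bare hash over a public serial number versus an actual HMAC with a real secret key:

```python
import hashlib
import hmac
import secrets

msg = b"delete_account=1"

# What the broken designs do: hash a guessable identifier with the
# message. A device serial is public, not a secret key, and
# sha256(key || msg) is also open to length-extension attacks.
serial = "SN-000123"
not_a_mac = hashlib.sha256(serial.encode() + msg).hexdigest()

# An HMAC needs an actual secret (here per-user, stored server-side)
# and the HMAC construction itself:
key = secrets.token_bytes(32)
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Verification should use a constant-time comparison, not ==:
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())
print(ok)  # → True
```

None of this is exotic; it's a few lines of standard library in any mainstream language, which is what makes shipping without it so baffling.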
I can definitely see a developer's concerns being brushed aside as a "business decision" on the grounds that growing their userbase or adding new features is more important to the startup's survival than security at that time.
It's actually a pretty damn good line, and I think it's really, fantastically hard to know when your ethical responsibility as an engineer starts to outweigh your obligations as an employee.
"In the interest of responsible disclosure I did try and contact the dev multiple ways, I was either ignored or not replied to and I feel users deserve to know what’s happening with their data."
"In the interest of responsible disclosure I did try and contact the dev multiple ways, I was either ignored or not replied to and I feel users deserve to know what’s happening with their data."
As you can read in the article, he did try to contact the developer.
That aside, though, when the issues are this egregious I'm honestly not sure what the right approach is. With flaws this bad it's hard to imagine that they're even capable of fixing the problems, let alone responding appropriately to the disclosure.