I think what might not be immediately obvious to people outside of the bug bounty scene is that Sam Curry, Brett Buerhaus, Ben Sadeghipour, Samuel Erb, and Tanner Barnes are some of the best bug bounty hunters out there, which is definitely one of the reasons they absolutely pwned Apple here.
I would be genuinely shocked if Apple doesn't end up paying out much more for all the bugs found. Frankly, it would be genuinely concerning if they didn't acknowledge the severity of the bugs and the time invested by this particularly skilled team.
To Sam and the others involved: fantastic job and an amazing write-up. 10/10
"Within the article I'd mentioned that Apple had not yet paid for all of the vulnerabilities. Right after publishing it, they went ahead and paid for 28 more of the issues making the running total $288,500" https://twitter.com/samwcyo/status/1314310787243167744
Even if they were to pay a full $5.5 million ($100k per issue), it seems reasonable given the breadth of the findings and the potential losses prevented. The warehouse access alone could have caused far more damage than that, even if some of the smaller vulnerabilities are clearly not worth that much.
In this instance, yes, but bounty programs need to be sustainable, so parameters are set up front. Folks can choose to participate or not. If they don't like the offering, they can find something else.
Bounty programs are in place so that bad actors are not the only ones on the lookout for bugs. If experts get paid pennies for finding enormous security vulnerabilities, what's stopping them from selling them to actual bad actors for a potentially much greater cut? I can imagine that someone would be willing to pay far more than $5M to gain access to Apple warehouses.
But why would an expert spend any of their valuable time outside of work looking for bugs if they didn't like the terms of the program? That's irrational behavior.
And why would someone who's willing to sell bugs to criminals bother with a site that's already been picked over by bug bounty researchers? The vast majority of companies in operation today have no such program and would likely be much more fruitful.
And lastly how would paying more for bugs prevent someone from also selling it to criminals?
That's not something a company the size of Apple can count on.
> And why would someone who's willing to sell bugs to criminals bother with a site that's already been picked over by bug bounty researchers?
Because it's Apple, it's one of the biggest companies on earth. iPhone jailbreak vulnerabilities alone fetch millions on the black market.
If you know the bug bounty program doesn't pay much you can expect only the trivial things to have been found, and if you're very skilled you know you still have a good chance of finding things to sell.
> And lastly how would paying more for bugs prevent someone from also selling it to criminals?
It would keep more honest people interested in your bug bounty program instead of doing something else.
>Because it's Apple, it's one of the biggest companies on earth.
Yes, and do you think you have a better understanding of the situation than the security and risk management folks that work there? There's absolutely nothing that has been said in this thread that they aren't keenly aware of. There are people in Cupertino that are going to wake up in a few hours, grab some coffee and pore over the threat intel reports from last night. They know who is buying and for how much and have a long detailed analysis of what happened with previous jailbreaks. There is another team of people dedicated to staffing the bounty program, rifling through stacks of reports with a signal to noise ratio that's approaching the Shannon limit, triaging findings, tracking down product and engineering teams to get a quick response so they can get back to the researcher in a timely fashion, handling rejections for out of scope and dupes.
These people are in it up to their eyeballs every day. They live it, breathe it, love it, and they'll move the needle when moving the needle makes sense. Until then, anyone that participates in the bounty program and then cries foul when payouts are in line with the posted max, and not with what could be had on the black market, is going to get zero sympathy from me.
Maybe there's demand for agents/managers for less famous or media-savvy bug hunters; quintupling the payout would easily cover the agent's fee in a case like this.
> They went ahead and paid for 28 more of the issues making the running total $288,500"
That's barely the yearly cost of one generic software engineer at Apple.
I was expecting multiple millions in payment given the severity and quantity of vulnerabilities found. State actors could easily 10x that amount legally through gov contractors.
I’ve interacted with some of them directly when I worked on a bounty program. Definitely some of the best in the business (and actually pleasant to work with).
Apple only paid them $52k? Apple is a trillion dollar company. These hackers saved them easily millions of dollars in expenses.
China or North Korea could easily allocate a much larger team to something like this and disrupt Apple (not for bug bounties). Although, China and North Korea dedicate their resources to financial fraud where there is real money to be had.
Apple is a tightwad joke. If they laid out a scope of work for a professional pen testing company that included pen testing their 17.0.0.0/8 range then that contract would easily have been in the hundreds of thousands.
I’m sure foreign adversaries will take notice now. Apple’s cybersecurity posture has always been very weak. It’s known they don’t dedicate many resources to it.
Gross pay (not including employee benefits and before payroll tax deduction), split among a team of 5 people, unclear if they were working on this one project full time, and amortized over other months with less remuneration. It may not be better amortized pay than a regular software job.
Especially considering that the authors are some of the best bug bounty hunters in the world. $500 an hour is a fairly normal rate for a top security consultant, as far as I'm aware.
But that is the thing... their official Bug Bounty program scope didn't include most of these exploits so any payments/awards would have to be made outside of the traditional system and thus probably take more senior approval/time to make payments. They knew that they would possibly not get paid for them but took the risk anyways. I have a feeling they will end up getting at least a hundred thousand dollars total.
Not to discredit the great work all these people did but not all exploits are created equally. Generally speaking the bug bounty amount is directly correlated to the blast radius of the exploit.
Not just saved Apple, but Apple users too. Wasn’t the “fappening” rooted in hacked iCloud accounts with weak credentials? Imagine what juicy political targets are out there using iPhones syncing with iCloud.
It wasn't that; they did do password cracking. If I recall correctly, iCloud itself had a limit on password attempts through the site, but there was a way to attempt logins through the API that didn't have that limit, which let them target those users for brute-force attacks.
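For readers outside the field, the asymmetry described above (a throttle on the website, none on the API) is the whole bug. Here is a minimal sketch of the kind of per-account throttle that was missing on the API path; names and limits are purely hypothetical, not Apple's actual implementation:

```python
import time
from collections import defaultdict

# Illustrative limits only; real services tune these and add lockout/CAPTCHA.
MAX_ATTEMPTS = 10       # failed attempts allowed per window
WINDOW_SECONDS = 3600   # sliding window length

_attempts = defaultdict(list)  # account -> timestamps of recent attempts

def allow_login_attempt(account, now=None):
    """Return True if this account may attempt a login right now."""
    now = time.time() if now is None else now
    # Keep only attempts still inside the sliding window.
    recent = [t for t in _attempts[account] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_ATTEMPTS:
        _attempts[account] = recent
        return False  # throttled until old attempts age past the window
    recent.append(now)
    _attempts[account] = recent
    return True
```

The point is that the check has to sit on every login path; if one endpoint skips it, an attacker simply scripts that one.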
I think this was speculation at the time, but it later came out that they were phishing attacks which got access to iCloud accounts, and once you have that you have the person's device backups.
You're right iCloud itself wasn't "cracked", per se, but the XSS exploit is (was) incredibly malicious and did not require any social engineering that I can see.
It's safer not to underestimate them. Every country has smart, competent people in it, no matter how poor or how oppressive their government. A cybersecurity / black hat program is much cheaper than nuclear or ballistic missile programs; and mostly just a matter of human resources and education. And North Korea can set up very strong rewards and incentives for those who do well. Put their whole family up in Pyongyang luxury apartments, etc.
They aren't competing with FAANG salaries, except maybe for a few outside experts that they might bring in to kick off a program.
The number of security researchers isn't a function of population size; it is a function of population size * the fraction with the propensity to develop the requisite skill * the fraction who go to work in the profession.
Shockingly, adding millions more starving people who have never seen a computer doesn't get you many more cybersecurity specialists.
It's also one of nine nuclear powers and one of ten to have developed space launch capability. It also scores near the top of the international math olympiad regularly. What makes you certain North Korea hasn't similarly invested in developing security researchers?
Apple's net income is bigger than their GDP, and most of their people are impoverished, malnourished, and poorly educated.
The upper caste from which all their "talent" is drawn is a small fraction of the total population, mostly composed of the descendants of the lower-class peasants and workers who supported the rise of the current regime. What they supported was not the establishment of such a system but rather an inversion of the prior order. It's an impoverished field from which little of value grows. To be clear, there is nothing inferior about North Koreans by nature; it is that the regime wastes the talent that it does get.
Einsteins are in theory as apt to be born into the less privileged classes, but there is only a 50/50 chance of getting enough food to be healthy, let alone a chance at intellectual development.
They just need to send a couple dozen of their brightest talents. You know their engineers and scientists train in Russia and China, right? Which, from my last recollection, invest heavily in offensive cyber warfare.
By your same logic, they would not have any Olympic competitors, let alone medalists.
You get one bright, talented individual by offering a thousand people ample nutrition and educational opportunities; historically with computer tech, you get there by giving people the opportunity to learn and play with it from an early age.
Most nations educate millions to tens of millions in order to get their doctors, scientists, software developers, and leaders. A regime that educates merely thousands, the kids of a new elite drawn primarily from the lowest echelons of society a century ago, is poorly positioned to be the best in any field.
I sorted exploits by date, it made a fun short headline summary of how productive they were. Short answer: very.
I know it’s hard for senior management to want to really commit to bug bounty programs like this because it feels embarrassing and vulnerable, but posts like this should be sent around the boardroom when it comes up for discussion: Apple rented an AMAZING security team here.
Sam, can you disclose what you got paid for all this?
End of the post it says 51k so far. I'd expect the price to go up a LOT more, because otherwise the sane (monetary) advice becomes "report some vulnerabilities to apple, and then keep finding them and sell them to third parties".
Yeah, that is only for 4 vulnerabilities out of 55. And 3 of them were only "High." They still have 10 (!!) more critical vulnerabilities they may receive payment on.
Also, they state in the article: "However, it appears that Apple does payments in batches and will likely pay for more of the issues in the following months."
There are defense contractors that do exactly this. Governments pay more than Apple will ever pay, so if you are in it for the money (and don't care about the ethical repercussions), selling the discovered exploits to governments is the way to go.
It seems like this cooperative approach was very effective. I assume rubber duck debugging and having multiple minds attacking the problem from multiple directions greatly improves the efficiency.
Jesus, that prebaked password on the Jive platform was really bad. Especially as one could ultimately access nearly the entirety of Apple's internal network from that.
Makes me wonder, if these guys could do it, how many Chinese industrial espionage units have?
> Makes me wonder, if these guys could do it, how many Chinese industrial espionage units have?
And Russia, and Iran, and so on... It seems safe to assume someone else out there found at least one of these and got in to the Apple internal network and has been quietly doing their job, whatever it may be.
"Our proof of concept for this report was demonstrating we could read and access Apple’s internal maven repository which contained the source code for what appeared to be hundreds of different applications, iOS, and macOS."
This itself is massive. How many 0-days could emerge from something like that?!
I work at a giant, famous, multi-billion-dollar company where all the internal stuff is full of permission requirements, training requirements, etc. It is absolutely HORRIBLE for productivity here. Every single kind of information you need to be able to work is hidden behind someone's wall. I often lose entire weeks of productivity just trying to find who owns a certain piece of information or knows which permission I need to request in order to read a link. There was a time I literally had to wait a whole month because the person was on vacation and their manager didn't know how to authorize me into the system. All I needed was a binary file they provided.
Even worse: every team thinks the thing they do is absolutely the most important thing in the world, so they hide it even more. They create empires around the information they control and explicitly force you out of it. So instead of just reading their freaking source code or documentation, you have to get permission to open a ticket in their system; then you open it; then one person triages your ticket, another forwards it, another creates an internal Jira about it, a PM prioritizes it, then a dev gathers the information and passes it to the Senior Information Proxy employee, who instructs the intern to finally reply in your ticket. And of course your original message was misunderstood, so the thing they gave you is useless. All you needed was access to the damn thing, but they built an empire around it, and now you have to fight a war of attrition just to get anything done.
To add insult to injury, your account of the state of things gives me no reason to think that their internal systems aren't rife with similar vulnerabilities, so rather like DRM only making life hard for paying customers, I suspect that these measures only make access difficult for honest employees.
The point is, at a certain scale you _are_ unable to secure your perimeter. Are you surprised that a handful of the likely thousands of external-facing applications can be hacked?
Especially when most of your colleagues never have to bother with security, because they think they are safe behind the perimeter, how can you expect a secure perimeter? With so many applications, there is bound to be one with a hole.
The argument is more on the meta-level. Most of the bugs shown here are implementation issues; hundreds of people have their hands in those.
But being able to gain more privileges because you have managed to compromise a service, that is a question of design. And there, only a few people should have a say.
That's certainly one view of things. The other view is taken by the beyondcorp/zero-trust model. But the lesson I take from this article (and my own experience) is that if you allow commercial off-the-shelf and open-source software into your network the end result will always be an insecure mess. If you absolutely must adopt off-the-shelf software the only safe way to do it is to put a proxy in front of it that's completely integrated with your authn/authz systems such that the native protocol of the third-party system is completely hidden and inaccessible.
The Google model is frequently derided on HN as "not invented here" but at least you can say that they aren't getting rooted via some kind of toxic waste like Jive forums.
If I understand correctly, Google’s model is to basically roll their own authentication frontend for any service they run. Now, this is likely better than what some off-the-shelf open source library might be using (which might actually have been fine if you had configured it correctly), and I have nothing against running further authentication before giving access to your things, but calling this the “only safe way” to do something is not really true at all. There are a number of companies that run without this model that do fairly well, and Google endpoints are occasionally hit by researchers. So it’s good on Google that they have a policy up for this, and it mostly seems to work for them, but it’s not the only solution like you’re suggesting.
I think the main lesson is just to not tolerate third-party protocols. Having a uniform RPC interface with integrated authentication, authorization, and delegation makes it much easier to get your security situation under control. If you're out there with your MongoDB password in a secrets vault, you're already in an unsustainable situation.
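A toy sketch of the proxy-in-front idea discussed above, with heavy hedging: real deployments hang this off SSO/mTLS and a policy engine, and every name here is made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: object   # identity established by your own SSO layer, or None
    path: str

# Authorization decided by *your* policy, not by the third-party app's
# (possibly vulnerable) native login. Entries are (user, path) pairs.
ALLOWED = {("alice", "/forum"), ("bob", "/forum/admin")}

def forward_to_backend(req):
    # Stand-in for the off-the-shelf app; its native protocol is only
    # ever reachable through the proxy below, never directly.
    return f"backend handled {req.path}"

def proxy(req):
    # Unauthenticated traffic never touches the backend at all.
    if req.user is None:
        return "401: sign in via SSO"
    if (req.user, req.path) not in ALLOWED:
        return "403: forbidden"
    return forward_to_backend(req)
```

The design point is that the vendor software's own login and protocol quirks stop being part of your attack surface; only the proxy's policy is.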
I had the same thought. Really basic front-end web vulnerabilities right in the HTML response. One can only speculate on the state of Apple's web hosts' IP filtering, rate limiting, DDoS protection, etc.
As an iPhone user (and Mac for work), of late I've often wondered whether Apple is really just all about pretty-looking things, overhyped launches, and marketing campaigns, giving specific (usually trademarked) names to features that have been commonplace on other platforms for years. And a large portion of their user base are also their staunch fans. Do they just glue things together under the hood this way and that?
In my experience, every enterprise I've dealt with does this. There's just something about the whole process that makes everyone worry about security later.
> I had even tried emailing the company who provided the software asking how you were supposed to form these API calls, but they wouldn't respond to my email because I didn't have a subscription to the service.
We talk about the ethical responsibility (and common-sense practicality) of companies cooperating with white-hats who have found vulnerabilities in their systems.
But how does HN feel about this policy as it applies to third-party ISVs contacted for knowledge relevant to a vulnerability exploit?
Should there ideally be a framework in place where the company running the Bug Bounty program puts white-hats in contact with its upstream ISVs’ engineers; or perhaps even treats the white-hat as an employee in terms of eligibility to receive support from the ISV on the Bug Bounty hoster’s tab?
Or, to flip that around, maybe the ISVs themselves should be willing to help the white-hat for ethical/practical reasons as well (for the exploit might, in the end, be as much their problem as it is their customer’s.) But in that case, should there be some sort of best-practice approach to authenticating that J. Random Hacker, who emailed a question to you, is actually a white-hat—e.g. by validating that they’re registered with the bug-bounty program of your client? Or does it not even matter, and you should just answer even a black-hat hacker’s questions about your APIs, since “vulnerability research is vulnerability research and has a long-term result of hardening the ecosystem either way”, and then let the cards fall where they may?
Thanks, great point. At first I was thinking that it was the ISV playing "security by obscurity" but it's probably much more simple than that: labor costs!
If you don't have a way to share documentation for your API with anyone for a cost of about $0.00, then you're signaling that your development process is a bit broken.
It's probably just a case of them emailing support@ without a support contract and not getting very far. I don't think that's very indicative of much, especially for "enterprise software".
Sure, but there's no reason for something like API documentation to require emailing support@.
Let me cite a specific recent example: I was tasked with building an application that integrated document e-signatures. The spec called for Docusign specifically, so I looked at their documentation. What I could find of it was written unclearly and much of it was hidden behind a developer account login. Getting a developer account was "free", as long as my time was worth $0. (You had to fill out a form of some kind, I don't remember the specifics anymore.)
So then I looked at HelloSign, a competitor. Their documentation was public, freely available, and beautiful (https://app.hellosign.com/api/documentation). It included specific examples and walkthroughs. This says things to me like, "we care about the developer experience".
I practically begged the customer to use HelloSign instead. I expected, from experience, that Docusign integration was going to suck, and HelloSign integration would suck a lot less. The customer said, "the spec already says Docusign, so we can't switch".
And the Docusign integration did suck. It was terrible. Lots of it was incomplete. Their vendor library was a godawful mess, built from some automated tool that converts an API into a bad class library. Their support was basically useless even after a contract had been negotiated and signed. The client ended up spending an extra ten grand or so and at least a couple weeks worth of delays just on Docusign-related issues.
This is a pattern that reoccurs often enough that experienced developers use documentation as a proxy for the quality of the service.
Your example is comparing apples to oranges. We're talking about software that doesn't even have "contact us for pricing" on the website because nobody that needs it even asks what it costs.
Developers will not be shopping around, making the decision to buy a ‘global manufacturing suite’ like DELMIA Apriso, whatever that is. If they are, they will have the documentation made available for them.
Am I being hyperbolic, or is this an absolutely enormous compromise of trust in Apple? XSS in iCloud Email allowing for data exfiltration of emails, pictures, videos??? That's absolutely insane. It just goes to show how vulnerable we all are to exploits like this, especially if you're a notable person of interest.
Software is made by people and people are not perfect.
The bigger the project the more moving pieces there are and the more likelihood of flaws.
My experience with bug bounty programs has taught me that if you start one, you need to be serious about it, and when a report comes in that is actually serious, you need to act on it quickly. And it seems Apple is doing that.
What would be more concerning is if they weren't acting on fixing issues quickly. Some take longer, that's to be expected depending on the problem, but what has been reported so far has been fixed in a timely manner.
I agree. The bigger lesson here, imo, is that companies need to take the security of their tools seriously even if they're only used by a handful of people, and not just focus on their front-facing tools. A forum that everyone had access to, but that likely didn't get much traffic, was used to get access to internal networks. That needs to be taken seriously.
Oh please... EVERY SINGLE piece of software has some security issue. Apple is no exception. Assuming they should be perfect is just petty BS and short sighted.
Also, keep in mind that security and privacy, while related, are not the same things.
You can have privacy (i.e. minimal data gathering) and poor security. You can also have poor privacy but amazing security.
Not sure why I'm feeding the troll here but whatever.
Imagine how many thousands of exploits Apple has found and fixed internally, that weren’t found by outside researchers.
Bug bounty programs aren’t a replacement for internal security, and they have the potential to be very expensive compared to paying someone a salary.
Is it an enormous compromise of trust? Dunno. With an average fix time of a single business day, I’m inclined towards “no”: that’s an awfully rapid response for incompetence to deliver.
First, practically nobody uses iCloud Email. I'm honestly surprised it still exists. You can confirm with Google searches the conventional wisdom that iCloud Mail isn't a serious contender among email platforms.
Second, you'd be a little naive if you thought Google Mail has never had XSS vulnerabilities.
People who have mac.com and me.com email addresses (which are now part of iCloud email) are many, and they have the same variation in security posture as any other cloud email users.
July 6 - August 6 - September 6 -- that's 2 months elapsed, not three.
Five people working for 2 months is 10 person-months. Apple paid them just under $52,000, none of which was guaranteed. They had to pay whatever taxes are appropriate for their jurisdictions.
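Spelling out that arithmetic (gross, pre-tax, and using the approximate figures above):

```python
payout = 52_000        # approximate total paid at the time of writing
people = 5
months = 2             # July 6 through September 6

person_months = people * months
print(person_months)              # 10 person-months
print(payout / person_months)     # 5200.0 gross per person-month
print(payout / people)            # 10400.0 gross per person for the whole engagement
```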
The amount of effort put into finding multiple critical and high-severity vulnerabilities in a $1TN+ company, versus a result of $51k (plus taxes) to possibly share between 5 hackers for the 4 qualifying bugs, sounds like Apple took them for a cheap ride through their campus.
Compared to 1 hacker, 1 month, JWT signature check failure = 100k from Apple [0]:
The 4 exploits they got paid for don't seem like the biggest ones though.
I would expect Apple to pay $500k - $1M for this session in the end, and it would be in the best interest of all parties if this happened. Apple would encourage responsible disclosure (and attract more white-hat bug hunters) this way. The amount of vulnerabilities found is a proof by itself that team work does pay off, if the team is strong. Also, this is a drop in the bucket for Apple. It would probably cost them much more to have them on the payroll for the same amount of time.
Where did you come up with that number? $500k is much more than a sitewide external app pentest of comparable scope would cost Apple, by an integer multiple. The bugs here are good, but they're not "bug bounty black swan" good; they're what you'd expect from a sitewide pentest.
I agree Apple got a great deal here (that's the point of bounties, and anyone who thinks they're a bad deal for strong researchers is... right). But I'm always going to point out that HN has weird misconceptions about the economics of this stuff.
That second bug they describe would have allowed them to mess with inventory in a warehouse. They could have easily "disappeared" millions of dollars of products. Some of these other bugs would have required Apple to disclose a PII leak, which could do tens of millions of dollars of damage to their company valuation.
You'll find, if you talk to people that do this work professionally, that bugs where you can tell yourself a story about the millions of dollars you could make are not uncommon, and that the rack rate for generating those bugs doesn't scale with their hypothetical value. I've done multiple projects for FIX gateways at exchanges. Those are fun stories to tell yourself! But those projects weren't even especially lucrative.
A pen test that took 6 months with 10 people would cost at least $2mm, even using an extremely low $200/hr rate. People who are the best in the industry will be significantly higher.
Yes. I'd say "word to the wise", but I think very few people reading this thread buy pentest time in such large blocks: past a month and you start getting into steep discounts.
(This was not several months of full time work, but rather several months of part time work; but I'm stipulating the former condition.)
Your comment got me thinking: Apple probably was already buying large blocks of pentest time, and the comments in the thread make it seem like these were obvious flaws. Is that right? If we assume Apple already had a contracted pentest firm, can you speculate on why they didn't find these flaws?
I don't know what "obvious flaws" means. I know from like a dozen years of consulting experience, and from 10 years of vuln research prior to that, that putting a different set of eyes on a target tends to get you a different set of bugs. Finding vulnerabilities is as much an art as a science, which makes sense when you think about what hunting for software vulnerabilities actually entails. If you could do it deterministically, you'd be saying something big about computer science.
I think we're on firmer ground saying that there are ways of delivering software that foreclose on "obvious bugs". But when we talk about fundamentally changing the way we deliver software --- in secure-by-default development environments, on secure-by-default deployment platforms, with security as a primary functional goal prioritized over time-to-market --- we're actually into real money now, not just another $250k on pentesters.
Yes, because it is worth 180k USD in pentesting services, no more, no less. I mean, you can pay around 360k at London or SV rates, or 180k at European rates, for people with _similar_ skills.
Calc based on 3 months, 5 people, and a 600 USD/md rate.
EDIT, as I can't reply to tpaceck below: no, those 2000 USD/day rates do not exist for projects the size of 300 MD like this one. In general, they do not exist for big projects.
Yes, I agree, you have rates around 1200 in high-cost countries; yet, as I wrote earlier, you can get a similar or the same skill level at 600 USD/md if you're willing to work with people not from HCCs.
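The back-of-envelope above, made explicit (assuming the commenter's figures and ~20 billable days per month):

```python
# Assumptions taken from the comment above, not independent data.
people, months, days_per_month = 5, 3, 20
man_days = people * months * days_per_month

print(man_days)          # 300 man-days (MD), the project size quoted above
print(man_days * 600)    # 180000 -> the ~$180k European-rate figure
print(man_days * 1200)   # 360000 -> the ~$360k London/SV-rate figure
```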
If "md" means "billable day", a $600 billable day is extremely low for this kind of work; that's closer to what people pay for network pentesting. $1500-$2000 is closer to the market (before discount, assuming senior but not principal level delivery).
When I worked as a 'consultant' (glorified contractor) .NET developer, the company charged > 90 EUR / 105 USD per hour for my time. That would make my going rate > 800 USD/day. This is in a country where 50K/year is a decent developer salary.
I do not believe you can find pen testers worth their salt who would cost _less_ than a non-distinctive developer. At least not one who will do more than run some automated report over all your endpoints.
A classic false comparison: the four experienced security researchers working for multiple months covers 55 issues, not "that one issue".
If we're cherry picking a single one, the associated involvement and timeframe drops dramatically, to something much closer to one or two people, tops, over the course of just a few days, tops.
That's something a pentesting team can absolutely achieve for far less than $500,000 over the course of a few days, too.
I’m unsure what your point is? I see dozens of different issues listed in the post, on different endpoints, all of which presumably took time to find. When they said they had a team of multiple people work for months on this, I am unsure why you think they haven’t spent their time as efficiently as “a pentesting team”. Actually, I’ll be stronger: looking through the list of things they discovered, it seems like they were absolutely churning out vulnerabilities for the entire period. A real team would have certainly cost much more than what they’ve currently been paid.
Issue count != time spent. I found about a dozen issues in a day once. And once, it took me three days to find one.
Always found at least a medium severity issue though.
Big engagements were typically a week, max. Usually one day of kickoff / getting “in the zone” for a project, three or so days of intensive testing, then the final day is usually writing reports (ugh, reports) all day.
There are really two options here. One, Apple doesn't currently employ a pen-testing team, which would be nuts; or two, the pen-testing team couldn't find these bugs, or they'd already have been found.
Apple has product security teams, an infra security team that covers a lot of this web attack surface, a large red team, and researchers, and it employs 3rd-party firms to do sitewide tests.
Apple is also huge, and no huge company avoids vulnerabilities; staff as ambitiously as you want, but any disjoint group of competent testers attacking a new target is going to find a disjoint set of bugs.
Or option 3: apple is HUGE, in all respects: physical space, people with access, code base, etc. etc. and they already have plenty of teams in place, but a bug bounty program is a cheap supplemental. In which case paying out more for your bug bounty program than you pay your real teams would be really weird.
In that case, do you think that Apple is incompetent for not stumping up $250k or less for an external pentester to find these bugs? Plus maybe $100k more for an internal PM/point of contact for the pentester? Or do you think Apple handled it fine, the expected cost to the business of their security holes was less than $350k and they could just wait for them to come through the bug bounty program or for internal engineers to find them?
I think everything is complicated, and that it certainly isn't as simple as "Apple should have paid $250k to a pentesting firm to find these bugs", because you could keep paying $250k over and over again and keep finding different bugs of comparable severity.
It's not a question of whether any spot assessment is worth $250k (though: Apple can get a sitewide pentest from experts for substantially less than that). It's a question of whether paying that continuously is worth it, or whether that money can be spent more productively on something else.
For what it's worth, "reputational damage" has always been a kind of rhetorical escape hatch from arguments that have become too mired in facts.
Apple paid with public exposure. Anything Apple is a story of interest, which has a value especially in security circles where half the business is a pure PR exercise.
I’ve spent time in my career with a “big gorilla” employer whose business is very visible within its community. Companies will “pay” a lot to say “We solved FooCorp’s problems with <x>” or “FooCorp bought our <y>”
Lazy buyers assume that their peers have their shit together.
While it's a great marketing and reputation building tool, it's still pretty poor to pay people in exposure; they could have taken each and every one of these exploits to the black market instead and they probably would have earned a lot more money.
Alternatively, the Apriso exploit alone apparently would have allowed them to create fake manufacturing-level employees with fake payroll going to arbitrary bank-account targets; so an unethical attacker probably could have collected an unbounded amount of money just from that (since it likely wouldn’t have been caught until after the first event; and payroll would happen all at once, paying out to as many different accounts as the attacker wished.)
PR is good but it won't keep the lights on. If you want them to return to work for you, pay them in actual currency. Apple's motive should be to encourage skilled hackers to come forward with exploits - i.e. make it worth their time - not drive them into the arms of a competitor.
They may have corrected it. Whilst $52k is cheap for 15 months of labour, that's as of October 4th. So it's not unreasonable for that number to go up significantly over time. It'll be interesting to see what their final total is.
I don't know what Apple would value 15 months of highly skilled security consultants at, but I can't imagine it'd be below $200k, so Apple is likely still getting a good deal even if they pay out a lot more.
Something tells me the real money comes from future consulting contracts and that this PR will more than pay for itself. Just like how everyone on HN agrees writing a book isn't a great use of time besides what it allows you to put on your resume.
Just because Apple got an amazing bargain doesn't mean the payout for them won't be great as well.
One problem is this puts downward pressure on others who demand fair compensation for their labor. Not everyone wants to play a long game of "maybe I'll get paid in the future from the 'experience'".
This is the professional equivalent of having interns do a bunch of real work and throwing them a pizza party.
Unfortunately, it doesn’t matter if other people don’t want to play the long game. This team does, they’re executing it well, and it will boost their careers as a result. Everything was done voluntarily by consenting professionals with the rules of the game outlined up front. Can’t really fault them for that.
People can consent to or do plenty of things that are allowable. That doesn't mean I can't fault the actions or dig deeper into whether or not they have other drawbacks (or even pros). Just because something is allowable doesn't mean it doesn't have other impacts.
But to be clear that doesn't mean I think they (or someone else) should not be allowed to make this choice. The possibility should definitely exist. I just don't think it's a good choice in terms of it being a norm.
This is very fair criticism for standard jobs like a regular software developer.
For a role like this, where the outsized skill of someone who is and needs to be elite should be rewarded with enormously outsized pay, I think this is a good model.
But, I do find it wild that a group as decorated as this already can't even get compensation that is commensurate with their skill and experience without having to rely on intangible future benefits.
This is exactly why they’re writing a blog post about it.
This type of social proof, when executed well, is a boon to one’s career opportunities and credibility for getting future consulting jobs.
If they’re not hired by Apple, they’re going to move to the top of the list for infosec recruiters everywhere. Being able to point to this blog post makes them an easy sell relative to some other person with a generic resume.
A full-time job might not mean 40 hours per week during the pandemic. This is briefly mentioned in the article.
“This was originally meant to be a side project that we'd work on every once in a while, but with all of the extra free time with the pandemic we each ended up putting a few hundred hours into it.”
It is not about $/hour; it is about the knowledge they have gained, which will help Apple protect against bugs that would result in losses and other damage to consumers.
Everybody wins here. It's a bargain for Apple, because their ledgers deal with numbers that require the -illions suffixes, but it's ALSO $10k per person, which even after taxes is still a lot of money on top of their regular salary for anyone with bills to pay.
Good pentester costs typically $2000 per day. Given the amount of work they did, the return feels like a slap in the face. Certainly it won't encourage highly skilled people to hunt for security holes.
Apple hasn't paid them for the largest exploits yet. That $52k will likely blow-up to far more when Apple pays them so their work will end up being much more lucrative. Apple also pays in batches so they'll likely get a few more batches and some of those will be yuuuuuuuge!
These are not equivalent propositions. There is an incredible amount of value in working outside of a big corporation and its management hierarchy. It is a Dog and the Wolf situation. The food is always better under the collar.
Qualifying people for highly paid info security positions is shockingly broken right now. No one who knows what they are doing cares about credentials you can get from a training program or school, but they also complain constantly about how hard it is to find and hire qualified people. The result is: there is a lot of salary out there for people who can figure out how to get it.
Developing exploits that are acknowledged by major targets--even if done freelance or as a hobby--is one of the few ways to gain lines on your resume that everyone in the security field will pay attention to.
It's the whole "you need to volunteer for a year before we'll hire you" hiring method typically seen in low paid positions in the arts, but this time for high paid infosec positions...
The art world might not be a bad comparison. In both security and art, established people with money are looking for new people who have the ability to make an impact.
But the established folks don't know in advance what exactly that will be... if they did, they'd already be paying someone to do it.
As a new person, there's no better way to demonstrate your ability to make an impact than to just do it.
I work at a company that has an infosec division and I don't know how we got so lucky with the people there. They're seriously legit low level kernel type programmers who seem to be able to reverse engineer anything given enough time and are able to seriously reason about what's going on in security. The types of people who speak at and headline at the largest security conferences, etc. Again, no idea how we got so lucky to have a great crew.
I'm not an infosec person myself. But my experience is that upwards of 80% of the ones I interact with who aren't like the people I mentioned above are just hangers on because they like the group or being associated with "infosec" because it sounds cool or something. Maybe it's because you don't need to be an engineer to regurgitate OWASP vulnerabilities and tell people to use password managers, but perhaps that's enough to, after you look around the room of infosec people, feel like you're an "infosec person." To be clear, that stuff is important, but not anywhere close to sufficient. So a lot of applications for our roles come from these people, who just sit on twitter all day and retweet the Taylor Swift security person, but they're totally not technical and have done nothing of note other than write compliance plans.
My hypothesis is that it's all this noise that makes hiring good infosec people difficult. If I'm hiring a kernel programmer or SRE I seem to get much more signal in my applications, but hire someone for security or infosec and there's too much noise from people like above.
Information security is just a super wide field. To pick a couple famous examples: what Google Project Zero does, and what the "Swift on Security" person does, have almost nothing to do with each other.
They both matter, though. Basic blocking and tackling at the IT level is important, especially to large old institutions. Apple is obviously an apex technology company, but they're also a 45 year old public corporation... I'm not surprised they've got some vulnerabilities lurking in their subdomains.
Patrolling DNS and 3rd party corporate applications is not usually what people think is sexy security work, though. Problems avoided are harder to sell than problems discovered or bad guys defeated.
Oh totally, as I mentioned above I am not an infosec person and I hope I didn't imply otherwise (I did mention this specifically above). The above is just my impression from the outside but as someone who talks to and works with a lot of security/RE/infosec people.
That was just a really snarky way of saying that RE people and people who pay attention to OWASP are not comparables. Sorry, I should have just been direct about it.
It is impossible to quantify what is a good use of their time without knowing them. Also, not everyone does things in the pursuit of money. I sell eggs and could easily ask $5 a dozen with the demand I have. Instead I only ask $4, and I have lots of clients I only charge $2 and some I just give eggs to when I have extra. These are people with no money or means. I don’t expect to ever get anything from these people, but every once in a while, oh, my car breaks down, and guess who has the knowledge or tool I need: the guy I have been giving eggs to. I know the world will eat you up and take all you have, but I personally “invest” my time and effort into a few of the things I enjoy even if the reward is low. These researchers now have an excellent start to a resume, which is always a good thing.
Well, after covid started and the stores ran out of a lot of food, I decided to get some chickens again. I have had a maximum of 6 in the past but decided to increase the flock, since 6 birds is pretty much the same effort as 30 birds. I now have 33 in total, and at this point in their lives I get one egg a day from each. They average something like 300+ eggs a year. I have sold enough to buy an automatic egg washer and now mainly worry about selling enough to cover feed costs. I do it because chickens are very therapeutic and I find them relaxing to be around. I have young kids, so they are also learning the value of food and can eat all the eggs they want. So I wouldn’t really call it much of a business; it is more of a hobby from which I reap little reward other than my eggs and the chance to help out a few others near me. I think if I ramped up to a few hundred birds I could make a bit of money, but at the small size it keeps me from getting overwhelmed with too much work and I can just share my harvest with those around me. I have learned that making money is nice, but I also get a great deal of reward from helping others in need.
Bug bounties are not generally considered a good source of income. It's a way to hone your skills, gain experience, develop a bit of industry cachet and get paid a little in the process.
For one they did not only get the money but also the exposure that comes with anything Apple. A lot of people will probably want to hire these researchers.
"To be brief: Apple's infrastructure is massive. They own the entire 17.0.0.0/8 IP range, which includes 25,000 web servers with 10,000 of them under apple.com, another 7,000 unique domains, and to top it all off, their own TLD (dot apple)."
Wow. I would think it's just impossible to secure all that, and that's not even everything.
This is the truth. I've worked in large organizations and it really is impossible organizationally to be fully secure. People come and go. Responsibilities change.
It's interesting that by owning and using that Class A block, Apple are making it easier to scan for their infrastructure. Moving that to IPv6 and releasing the Class A would help them avoid the preliminary scanning that was performed.
There's also something to be said about migrating internal DNS to a subdomain of apple.com that is only visible internally.
Not solutions to security, but making things harder to scan makes it harder to find the vulnerabilities.
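The scale gap the parent comments allude to can be made concrete with Python's standard `ipaddress` module. This is just an illustration: the 17.0.0.0/8 block is the one cited in the write-up, while the IPv6 prefix shown is the documentation prefix standing in for a typical large-organisation allocation, not anything Apple actually holds.

```python
import ipaddress

# Apple's publicly routed IPv4 block, as cited in the write-up.
apple_v4 = ipaddress.ip_network("17.0.0.0/8")
print(apple_v4.num_addresses)  # 16777216 addresses: a fast port scanner covers this in hours

# The IPv6 documentation prefix, standing in for a typical large-org allocation.
# Illustrative only; this is not an Apple assignment.
example_v6 = ipaddress.ip_network("2001:db8::/32")
print(example_v6.num_addresses == 2 ** 96)  # True: exhaustive scanning is hopeless
```

The asymmetry is the whole point: 2^24 hosts is a lunch-break scan, while 2^96 is beyond brute enumeration, which is why IPv6 recon leans on DNS and certificate-transparency data instead of address sweeps.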
Because back in the early days you could get one just by asking and they did?
The internet was just a research project to connect some universities, government sites, and a handful of companies. No one realized where it was going.
By the time it was clear the IPv4 address space would be exhausted it was also clear reclaiming those IP blocks (for which there is no legal basis) would merely temporarily delay the exhaustion - likely by a year or two at best.
Wow Prudential and Ford (if USP is supposed to be UPS, that too) are the odd ducks. At least the others have the internet as a core competency.
My guess as to the answer of “why” is power and leverage. It’s the same as nations claiming physical land. “Maybe we’ll need it, maybe we won’t. But either way, now it’s ours to decide.” Writing that out, do they own those? Can someone take those back?
You can use those IPs for something other than webservers.
But yeah, that's a bit much for one company. I'll give hosting providers a pass on owning a million IPs, because those are for lending out to customers.
7,000 does seem REALLY high, but I can imagine them needing a domain for every possible spelling of Apple. Maybe applesucks as well. appl3, 8ppl3 and so on. Anything close to apple. Same goes for icloud, and anything else. I guess you get to 1k pretty quick just covering typo squatters. They must have a team of people just to manage domain names!
That's probably right. A quick internet search shows up domains like applecoronavirus.com and similar, as well as this court case [1] where they acquired a bunch of ipod related names.
I suspect they are only parking those names after recovering them or buying them preemptively. Domain names are cheap, so why not. I don't think that's any argument for the possession of the /8 though.
I remember Google had ownership of duck.com until recently, so they probably participate in the wholesale acquisition of random domains as well [2].
All parked domains could lead to the same IP. A single web server could distinguish which domain it’s contacted for, using the HTTP headers for example, and serve different content (probably all 301-redirects, but to relevant other websites of Apple).
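That consolidation scheme can be sketched with Python's standard `http.server`. The domain names and redirect targets below are hypothetical examples, not Apple's actual configuration; the point is only that one listener behind many parked A records can dispatch on the Host header.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical parked-domain table; names and targets are illustrative,
# not Apple's actual setup.
REDIRECTS = {
    "applecoronavirus.com": "https://www.apple.com/",
    "ipodmini.com": "https://www.apple.com/ipod/",
}
DEFAULT_TARGET = "https://www.apple.com/"

def target_for(host_header):
    """Choose a redirect target from the HTTP Host header (port stripped)."""
    host = (host_header or "").split(":")[0].lower()
    return REDIRECTS.get(host, DEFAULT_TARGET)

class ParkedDomainHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 301 every request to the relevant canonical site.
        self.send_response(301)
        self.send_header("Location", target_for(self.headers.get("Host")))
        self.end_headers()

# To actually serve:
# HTTPServer(("0.0.0.0", 80), ParkedDomainHandler).serve_forever()
```

In practice you'd put this behind a reverse proxy or CDN rather than a bare `http.server`, but the dispatch-on-Host idea is the same.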
Bug bounties have always been mispriced. Either the damage estimates for a given bug are wildly over-estimated by risk analysts, or the price paid to find them is based on some kind of stupidity-arbitrage play. I think it's the latter.
Consulting firms bill between $1500-$2500/day for senior staff. 2 hackers for 10 days could account for the $50k they got paid. Instead, this crew used 5 hackers for say 45 days, or 225 person-days. Napkin arithmetic suggests that's somewhere between $340k and $560k.
I could say it's consulting firms who are overpriced, as a group of amateurs will do better work for 10%-20% of the cost, but over the years I've found that the difference in the security world is that you hire a small shop to discover the truth about risks, but you pay a big firm to lie about them. That's what costs extra, and given their transparency maybe this work wasn't mispriced at all.
$1500 a day?!?! They are getting ripped off. I had a family member who worked for a large Fortune 500 tech company. He got to take a look at the invoice for consulting on a project: they paid consultants $1500 an HOUR for anything from QA to software engineering.
I had another family member who worked in a big 4 accounting firm. These companies regularly pay in excess of $800 an hour for the most ridiculous consulting. $1500 a day for two people is robbery in the world of consulting.
A bug bounty program is aimed to find individual instances of a security hole in your technical architecture. Like finding a weak spot in a ship's hull, and punching a hole.
A security consulting firm would do more for you. They'd basically be telling you how to make your entire hull stronger. And one of the things they might tell you to do, is start a bug bounty program. And they would also likely put things in place for the real security problem in your org: social engineering. Among other things.
And more than that, spending x dollars on a security consulting firm demonstrates that you did some diligence in securing customer data. And that goes a long way in a courtroom.
Hi hi! Speaking as both a bug bounty vet, and a consulting vet (I run includesecurity.com), here's my .02 on some things you may not have considered given your comment.
1) Sam and the other hackers did not do this as a full time gig, they primarily do this as moonlighting from their full time jobs (you can verify this on LinkedIn)
2) Consultants are often given tight scopes, and these artificial client-driven constraints often prevent consultants from identifying similar findings as Sam and crew found.
3) Bug bounties provide no defined level of assurance. They found an SSRF, but it is a very real possibility that somebody in their crew (or an individual bug hunter) doesn't have experience in that particular topic and Apple would have never been the wiser. In a bug bounty you're at the whim of the crowd's varying skills and interests. You can game this by offering larger bounties, but you can't pre-define a scope or level of assurance.
4) They've gotten paid ~$50k thus far for four bugs, if you read the article they mention they'll very likely be getting paid more. I'd be surprised if their total payout isn't six figures when all is said and done.
5) The rate you state that consulting firms charge for a particular role is correct for the US market, but the level of "seniority" in a senior consultant varies wildly. Many large firms will undeservedly give somebody with two years of experience the title "senior", regardless of actual skillset.
6) You state "a group of amateurs will do better work"; the first point to note is that these five are not amateurs in any way! They're in the top 1% of global bug bounty hackers. Second, it seems like you're defining "better" as "finds more vulnerabilities from a blackbox bug bounty perspective". I find that clients IRL don't define things the same way you've done here.
7) "but over the years I've found that the difference in the security world is that you hire a small shop to discover the truth about risks, but you pay a big firm to lie about them." This I couldn't agree with you more on; it is MIND BOGGLING to me that firms with no ethics, actual standards, or transparency are the top firms in the security assessment/pentesting space. For an industry that purports to hate snake oil security, we sure are comfortable with a ton of snake oil security assessments.
I'd love to see a world where bug bounties and full security assessments can live harmoniously and people don't flip out declaring one or the other service totally useless all the damn time.
Some fair statements, others less so. I've been in the game for a while, and the point I would emphasize is smart hackers don't get paid as well as people who do less difficult work with a lower bar to entry. Black/grey market bug bounties for iOS vulnerabilities in the $1m range reflect the risk profile and value much more accurately. The bundle in this report are worth at least the pro-consulting rate, and are more commensurate with that high watermark. Good on them for doing it, and the prestige payout is great, but advertising those disadvantaged numbers bears comment.
Regarding amateurs: olympic athletes are amateurs. It's a reference to people pursuing something out of interest instead of just as a 9-5 job, even if they happen to do it full time. Amateurs will almost always outperform professionals because the skill distribution among pros has a longer tail, where to even get in the game without pro backing you have to be above average. This was an amateur moonlighting effort that delivered better results than consultants who cost 10x the money.
Bug bounties find most vulns in scope that 80% of hackers would find, which I think is more valuable than an assurance level, because assurance levels are bunk. A security architecture is valuable, provided it's built with an understanding of the threat model of the actual business and gets implemented; but otherwise, I think the security assessment document production business doesn't have a long future.
I once came up with a silly way of hijacking facebook accounts that were registered with @hotmail.com. I told both facebook and microsoft about this, and never got even a thank you. I know there are some people who make a living out of bug bounties, but I felt very discouraged back then (I was still in college) and never bothered to try again.
well, there was a time here in Brazil when MSN was very popular (@hotmail.com), all my friends used it as the default messenger.
Later came Facebook, and people created their accounts using @hotmail.com and started to leave MSN, since Facebook had a messenger. One day I received an email from Microsoft saying that they were disabling MSN (I'm telling this from memory, forgive me if I'm saying anything super wrong).
Fast forward to me being in college and studying a little bit of pentesting. As I recall, I was trying to see how much information I could gather about a person from their facebook page (as a non-friend). If you try to log in using their ID (or username) you can find pieces of their cellphone number and emails. So I tried this with the profile of a girl I had a crush on back in the day and discovered that she'd used MSN as her email.
Eventually I tried to log into her email on MSN and found out it had been disabled for a while. So I tried to recreate the email account with me as the owner and, to my surprise, it worked. I then went back to facebook and recovered "my" password. With the email and password, facebook didn't let me log in because of my location. But I knew where this girl lived, so I found a proxy server [1] and bam, I was in.
Not going to lie, I did look at some of her messages and pictures, but felt very bad after and decided to tell facebook and microsoft about it. This was facebook's response [2]. After a day or two of getting no answers from both companies (before I got the answer from facebook), I told the story to about 2 or 3 tech reporters. They told me they wrote to microsoft asking for a comment, but never got any answer. A week later I tried to recreate another "dead" account on hotmail and I couldn't. Don't remember exactly what they did, but I just couldn't create the email, so I figure they had fixed it.
This is a big win for Apple because in my experience most internal security teams are BU specific and never get to cast a wider net. Most security engineers probably realize that there are many gaping holes around the company, but they never have the time or bandwidth to go broad and find issues in areas outside their BU. Ultimately, this kind of bug hunting, though lucrative for bug bounty people, does not realize true gains for the company, because you are doing bug whack-a-mole all the time instead of trying to fix problems systemically. The joke at a big company I used to work at was that it's easier to pay thousands in bounties than to try to fix systemic issues, because fixing those issues would be more costly. Not saying that this is the right mentality, but leaders try to do cost-benefit analysis, and a bad bug is mostly just a bad PR day without any loss of value to the shareholders.
$34,000 - Multiple eSign environments vulnerable to system memory leaks containing secrets and customer data due to public-facing actuator heapdump, env, and trace
I guess it goes to show: if you are a developer, don't overlook the simple things, like not exposing these endpoints in production (literally a line in a config file), or at least securing them.
And if you are a bug bounty hunter, some of the simplest things can lead to the best ROI. I'm actually surprised something this basic was not already found and reported, but credit goes to their recon efforts for determining where to look.
You need to include a separate actuator module to enable them. IIRC in Spring Boot version 1.5 and older, actuators were enabled and exposed as web endpoints by default. The heapdump endpoint mentioned in the article also required inclusion of the Spring MVC module – which I guess most web apps do include.
In Spring Boot 2.0 and newer, the actuator module only exposes the "info" and "health" web endpoints by default. The default configuration does expose more endpoints via JMX, though. Also, if your project includes the Spring Security module, actuator endpoints are secured by default.
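For reference, a minimal sketch of the Spring Boot 2.x configuration in question, using Spring Boot's documented actuator property names. Treat the exact endpoint list as illustrative; which endpoints an app should expose depends on its deployment:

```properties
# Expose only the benign actuator endpoints over HTTP.
management.endpoints.web.exposure.include=health,info
# Belt-and-braces: explicitly keep the dangerous ones off the web
# (these are the endpoints the eSign finding abused).
management.endpoints.web.exposure.exclude=heapdump,env,trace
# Don't leak component details from the health endpoint.
management.endpoint.health.show-details=never
```

A couple of lines like these in `application.properties` would have closed off the $34,000 class of finding, which is the commenter's point about ROI on simple things.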
I think it is very much related to Apple. There will be organisations that, because of their culture or approach to security, fare better or worse than them.
It's possible, but it requires investment, and it's likely to slow down productivity a little. The default approach in big traditional corps (not implying Apple is traditional) is to leave it up to IT, and maybe hire a Security Officer to signal virtue and assign blame.
It is, in that a lot of people rely on Apple to secure their data, and it's good for them to know they should add an extra layer of encryption for anything critical.
The work they have done here is amazing. Imagine a company like Apple being vulnerable to this extent. That's why when people bring up a new privacy safe/better UX alternative for a sensitive data service I am very skeptical to try them out. Like for email, fastmail or protonmail or Hey.
Data security is hard, I would rather trust someone who has shown good capability there, invests a lot in that and has more to lose. That's why for the foreseeable future, I would rather use Gmail, Google Drive over their alternatives. Also, why I prefer to use Amazon instead of individual storefronts which ask for contact and payment details.
What if Apple was just one company they looked at over those couple of months? It sounds like they had the size, scale, and capability to scan a lot more, and they did.
Is that not the "XML External Entity processing to Blind SSRF on Java Management API" SSRF? That would make sense to match that payment. I really struggle to believe that the $6k is for the Maven access one; that's a billion-dollar vulnerability.
I thought the same thing. I guarantee foreign agents trying to hack into Apple systems are being paid much more than these "nice guys". The incentives here don't seem up to par.
Computers made it into the furthest corners of our lives. They are controlling critical infrastructure or are a front-end for it. So IT security should really be a top priority in almost any software project or product.
The upside is that nobody needs atomic bombs to shutdown a whole country anymore ;-)
"As of now, October 4th, we have received four payments totaling $51,500"
What a joke. That's an hourly rate of $20 (assuming 5 researchers working for 3 months). Just enough to buy a MacBook to do the research in the first place.
Independently of how much they've earned so far from the program, huge props to the folks that worked on this (55 vulnerabilities, 10 of which marked critical!)
Operations like this should be a fixture. Considering that not only individuals and companies but society as a whole increasingly depends on digital infrastructure we should develop mandatory procedures and frameworks for measuring security and improving it. You can't drive a car without a license but you can grab personal/private data of millions of people and give it away to criminals without consequence.
Great write-up! I love these kinds of posts. It's like solving a puzzle, picking a very complicated lock, or being water trying to get into a supposedly water-tight box. This beats any crime novel.
The technical speak was very understandable and none of the technical terms seemed really foreign. Again, good job, Mr. Author.
I really don't understand this whole "whining over how much I got paid for my bug bounty" thing.
1. Nobody is asking you to find exploits in the systems of a company you don't work for. If you want to use your time that way, then fine, but understand that is your own time-management decision. Don't go complaining about how you feel "undervalued".
2. Companies are under no obligation to pay bounties and companies are certainly under no obligation to pay headline grabbing bounties.
1+2 = Stop whining and be grateful for someone paying you a five-figure (or greater) sum for something you didn't have to do.
Frankly, I also think this whole bug-bounty thing is a little bit dangerous. Sooner or later it's going to end with various attempts at blackmail. It strikes me as a very thin line.
(For the avoidance of doubt, I'm speaking in general here. Not about Apple and not about this particular person.)
> Don't go complaining about how you feel "undervalued".
That's your interpretation. When I see that kind of complaint, I don't see people complaining that they feel undervalued; I see people complaining that companies undervalue vulnerabilities, which is kind of a big deal.
I don't see anything in that article that makes it feel like they believe Apple undervalues vulnerabilities either; if anything it sounds pretty positive for Apple. They answer quickly and fix the issues really, really quickly (they said 4 hours for the most critical one).
The amounts may seem low, but they don't complain at all about it and even say that Apple may pay them more afterward. Thus even if you personally believe these amounts are low while reading it, you may come away from the article believing that they will get paid enough afterward.
> 2. Companies are under no obligation to pay bounties and companies are certainly under no obligation to pay headline grabbing bounties.
Sure, they aren't under any obligation, just like Nike is under no obligation to pay more than good wages to make their shoes overseas. Thing is, their clients may be interested in knowing these facts and making a decision in relation to these facts.
They found 55 vulnerabilities... that's 55 instances of negligence in Apple's infrastructure. If you believe that's alright, then fine, but don't complain that people try to make it known so that everyone can make an educated decision.
Personally, that article gives me MORE confidence in Apple.
> Sooner or later its going to end with various attempts at blackmail.
How do bug bounties enable more blackmail? You can blackmail with or without bug bounties. If anything, I believe they reduce blackmail, since you can go through the bug bounty program instead and get paid legally. I don't think many security researchers would advertise that they blackmailed Apple, but they will certainly say they got paid through its bug bounty program.
The whole point of the bounty is to incentivize disclosure to the software/hardware maker instead of using it nefariously or selling it to someone who will. Companies can avoid being "blackmailed" by offering fair prices for bounties. If finding several high severity bugs results in a paltry bounty, there is little incentive to disclose.
Though bug bounties are more than that. At the end of the day they have revealed vulnerabilities that would have impacted users such as us. IMO we should value these people more.
I wonder how much of XSS hunting and similar work is automatable.
Beating markup escaping for example. A program may be able to infer the transformation used by comparing input and rendition, then confirm when JS execution is achieved.
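The approach described above can be illustrated with a minimal, hypothetical Python sketch. The `renderer` function below is a stand-in for a target page that reflects user input (in a real tool this would be an HTTP round trip); for illustration it's assumed to escape angle brackets but forget quotes, a common real-world mistake:

```python
# Toy stand-in for a target page that reflects user input. It is
# assumed (for this sketch) to escape angle brackets but not quotes.
def renderer(user_input: str) -> str:
    return user_input.replace("<", "&lt;").replace(">", "&gt;")

# Probe characters whose round-trip behavior reveals the escaping rules.
PROBES = ["<", ">", '"', "'", "&"]

def infer_transform(render):
    """Map each probe character to how the page renders it back,
    which tells us which characters the escaping transformation touches."""
    return {c: render(c) for c in PROBES}

def find_surviving_payloads(render):
    """Return classic payloads that come back unmodified, i.e.
    candidates for a follow-up check that JS actually executes."""
    payloads = [
        "<script>alert(1)</script>",  # blocked if < and > are escaped
        '" onmouseover="alert(1)',    # survives if quotes pass through
        "javascript:alert(1)",        # survives in href-style contexts
    ]
    return [p for p in payloads if render(p) == p]

mapping = infer_transform(renderer)
survivors = find_surviving_payloads(renderer)
```

Here the inferred mapping shows quotes are untouched, so the attribute-breakout payload survives; a real scanner would then load the page in a headless browser to confirm execution.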
I like this write-up! This could be easily turned into a guide for internal teams on ways to look at your own infrastructure and apps and secure them, as no security team is going to do it all for you.
There goes the myth that your data is safe with Apple. All these vulnerabilities might have already been known to multiple parties before these researchers discovered them.
Many years ago, you'd likely have been told to start by reading the Web Application Hacker's Handbook (WAHH). But that has since been replaced by the online interactive labs, available free of charge from the fine folks at PortSwigger.
I've never hunted bugs, but you could start by picking a target website and inspecting traffic on register/login pages, user input screens, pages that load dynamic content (may lead you to API endpoints with methods to exploit), etc.
Look at the Apple Distinguished Educators case. They went in with an immediate goal: get admin access. So they targeted the account admin pages. Once you know what technologies were used to build a service/site (in this case Jive), you can do a lot of due diligence looking for known vulnerabilities or common attack vectors with that software.
Check out ctf competitions. They have a number of "problems" of varying difficulties. Depending on the competition, the easier ones start out with simple stack buffer overflows, SQL injection, etc., and many competitors will post writeups after the competition.
Ctftime.org is a good place for a list of competitions (note: some are more difficult, in that an inexperienced person won't be able to do the intro problems). I'd recommend checking out picoCTF as a good intro one.
It really goes to show Apple Advertising has no basis in reality. "Security" claims are obviously debunked on a weekly basis if you work in tech.
"Privacy" claims are just as nonsensical as we've seen Apple bend to multiple governments (PRISM). You bet Apple will sell your privacy if the deal is good enough.
That being said, I don't think anything can be secure; we must treat everything as potentially compromised and act accordingly. I diversify my emails/bank/HDD/etc., so if one gets hacked, I haven't lost everything. Edit: Also, those superstars may be known, but you can bet there are experts who would take the money rather than the prestige.
> "Privacy" claims are just as nonsensical as we've seen Apple bend to multiple governments (PRISM)
From everything I have seen, PRISM wasn't about companies cooperating. It was about literally splicing the fiber lines between FAANG-type corporate datacenters and taking that info. Google was famously running dark fiber unencrypted and started encrypting traffic between DCs because of it. It just so happens you split a fiber line into two by using a crystal prism...
Google end-to-end encrypts Android backups. Apple does not end-to-end encrypt iCloud backups (on by default on every iOS device), and it serves as an effective cryptographic backdoor to the end-to-end encryption in iMessage by escrowing the keys (as well as the full message content and attachment history) to Apple each night, using Apple keys, which permits Apple (and by extension the FBI, without a warrant) to read every message sent or received by a device in such a default iCloud backup configuration, without ever touching that device.
They were going to fix this, but Apple Legal killed the project while it was underway. This was done at FBI request, according to Reuters' sources.
PRISM absolutely was about tech companies sharing data with the government.
From the PRISM Wikipedia article[1]:
> The documents identified several technology companies as participants in the PRISM program, including Microsoft in 2007, Yahoo! in 2008, Google in 2009, Facebook in 2009, Paltalk in 2009, YouTube in 2010, AOL in 2011, Skype in 2011 and Apple in 2012.
"Participating" covers a broad range of activity when it comes to this program, and would include things like having a portal to provide the legally mandated info that must be returned upon proper presentation of a warrant. Is it really 'sharing data with the government' if the latter shows up with a properly executed warrant for the data?
PRISM data is obtained without a warrant, even for USians whose data is supposed to be protected by a warrant, because of a special secret interpretation of the FISA Amendments Act (FAA) Section 702.
It's warrantless, and the court that decides whether or not it's legal is itself classified and unaccountable and almost never denies surveillance.
This abuse was cited by Ed Snowden as one of the reasons he came forward. It's a public law, but a secret interpretation by a secret court that cannot be challenged by the people to which it applies.
It's not inaccurate to describe it as a military coup, given that it allows the US intelligence community to surveil everyone in the legislature and judiciary.
Section 702 only allows them to request data from accounts that belong to foreigners outside the US, so no, it doesn't allow the US intelligence community to surveil everyone in the legislature and judiciary.
The slides on that page disagree. PRISM was/is a data collection project. The sources came from other projects, as the diagrams show. Those don't show anything about cooperation, only collection.
The documents don't identify them as "participants." They only say when data from those companies was ingested into PRISM, which is simply a data-integration program between the NSA and the FBI. The FBI's Data Intercept Technology Unit is clearly labeled in the slides.
The government issues a Section 702 order for some account(s) data; the company reviews the request and denies it if the account appears to belong to somebody in the US or an American (neither of which can have their data requested via a Section 702 order), and then sets up a forward to the FBI. PRISM then gets that data from the FBI and parses it into fields for various NSA databases. Again, this is very clearly drawn out in the system diagram slide that Snowden leaked.
You seem to be quite prone to making unfounded hyperbolic pronouncements about Apple in several different threads lately. As the responses show, it might be worth toning down the rhetoric and staying true to proven reality for a while concerning this subject.
Apple is far from perfect, but often in the context of its market peers being much further from perfect. In that context, saying that their considerable security efforts and accomplishments amount to nothing but marketing lies is more than a little uncharitable.
I think that saying Apple is especially bad at security would be wrong. But Apple claiming they are the only ones who can protect users might be going a bit far...
No. The only people who make this claim are Apple critics who put words in Apple's mouth to justify whatever clickbait blog post they're putting out this week to pad their resumes and harvest echo chamber thumbs.
But as we know from politics, if you tell a lie enough times it becomes the truth.
In the article you linked to, I didn't see Apple claiming they alone can protect user's privacy. I read instead that Apple suggested all companies should strive to protect their user's privacy.
Not seeing Apple claiming only they can protect users' privacy. Instead, the article quotes Tim Cook trying to pressure governments to recognize privacy as a fundamental human right.
If Apple does not pay these guys several hundred thousand dollars per person, they have just recruited the world's best hackers to work against them. Pay them, and the situation is reversed. Now we see how smart Apple really is.
World's best hacking team? That's highly unlikely. If they were wise though, they saved a few to sell to CIA/NSA/FBI who may be contacting them soon for said exploits thanks to articles like these. I doubt they showed the best cards in their magic deck.