What Apple is doing here is really smart. An under-appreciated wrinkle is that grey-market sales are valued on continuous access; you get paid over a period of time, and if the bug you sold dies, you stop getting paid. Apple isn't just bidding against the brokers and IC in lump-sum payments, but also encouraging people to submit bugs early, before they're operationally valuable for bad actors.
Isn't Apple's move mostly a PR stunt? Ordinary people will read it as "The iPhone is so secure that Apple is willing to pay $1M to anybody who finds a security vulnerability."
The reality is that they only pay that much for kernel bugs that require no user interaction.
Other bugs triggered by a common action in an app everybody uses, for example opening the stock Mail application, may be enough to compromise almost all iPhones, yet Apple seems to pay "only" $100k for those.
I just scanned through the comments and clicked on some links, so correct me if anything I wrote is inaccurate.
I'd imagine this is to combat marketplaces like Zerodium and the deep web. Traditionally, grey-hat hackers don't always go through bug bounty programs because the pay is awful compared to what you can get through less ethical channels. By flexing that much cash at bug hunters, Apple is potentially now offering even more than what you could get on those markets. The only reason people go underground to sell exploits is the money. Take away that variable and suddenly there's no reason to sell exploits to bad actors; just sell them straight to the source at Apple and get a fat paycheck.
Reputation plays a big part in it on both sides. Most buyers aren't like Zerodium, publicly putting themselves out there as buyers. So there is a certain degree of vouching that happens as someone introduces a buyer to a seller.
So when either party violates the agreement, it reflects poorly on the person who made the introduction, making it harder for them to make those connections in the future. And these introductions matter; most sellers don't want to just sell to anyone. There needs to be some trust that whoever you're selling to will be selling on to friendly governments or whatever. It's not like a Craigslist ad where you sell to just anyone who answers.
So that acts as a deterrent on the buyer side. It'll be harder to get new sellers if you have a poor reputation, or none at all.
On the seller side, you're not going to get too many people willing to vouch for you as you start burning bridges by selling non-working exploits.
And on that, the payment scheme acts as a deterrent, like the great-grandparent said:
> grey-market sales are valued on continuous access; you get paid over a period of time, and if the bug you sold dies, you stop getting paid.
That is, you might get $XX thousand upfront, and then an agreed-upon $XXX thousand based on the exploit surviving XX days.
So trying to scam the buyer will net you a small fraction of the total at best. Oftentimes they'll hold payment until the exploit is confirmed, and contracts are written and signed over these sales too; these aren't under-the-table payments for the most part. Legitimate business transactions.
So, I guess to sum it up: reputation and a demonstrated, or at least vouched-for, track record. There is a lot of trust on both sides.
What's interesting to me about this --- and I've got no firsthand knowledge of the markets --- is that Apple doesn't have to outbid brokers; a broker could offer 50% more than Apple, but that comes with an X% uncertainty penalty. You can sell to Apple and pocket $1MM, or try to structure a deal for $1.5MM and gamble that the bug will survive. I'm betting that's often not a good deal; the lump sum payment is the better option.
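A toy expected-value sketch of that gamble, in Python. Every number here is invented for illustration; real broker terms vary per deal and per bug:

    # Apple's lump sum is certain money; a tranched grey-market deal
    # with a bigger sticker price pays out only while the bug lives.
    apple_lump_sum = 1_000_000  # paid once, no survival risk

    upfront = 300_000   # hypothetical broker payment on delivery
    tranche = 100_000   # hypothetical monthly payment while the bug lives
    p_month = 0.85      # guess: the bug survives any given month

    # Each monthly tranche pays out only if the bug has survived
    # every month up to that point.
    expected_tranches = sum(tranche * p_month**k for k in range(1, 13))
    print(round(upfront + expected_tranches))  # ~786063, vs a certain 1000000

The sticker price here is $1.5MM ($300k plus twelve $100k tranches), but under these made-up survival odds its expected value is under $800k, which is exactly why the certain $1MM can win.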
I completely agree; it is an enticing offer from Apple for the reasons you lay out.
Not all brokers are alike, though. Exploit survival is a gamble, but sensible end-buyers usually don't want to burn the exploits either, so they will use them sparingly. There are some brokers that don't sell exclusively (despite their claims), and they have a reputation for exploits getting burned early.
I have not been involved with any iOS exploits, not really my area of interest, but let's say I was. Would I consider selling to Apple? Yeah, it would be something to consider. I'd consider the market rates too, of course: at $1MM vs $1.5MM, sure, Apple is enticing; at $1MM vs $2MM, maybe not. I'm not sure where I would actually draw the line, but you are right that Apple doesn't need to compete directly with the market rate, just get close enough.
I'm sure there are those that would rather just go for the bigger profits regardless.
Harder than you might think. Who gets to control the server being compromised?
1. The buyer or someone the buyer trusts: then the buyer can log all the network traffic and find the incoming attack traffic and work out the exploit from there.
2. The seller or someone the seller trusts: they can backdoor the software to fake it.
3. Someone they both trust: that would require they have some mutual contacts, which, while possible, I wouldn't count on.
4. A random victim: more plausible, but neither party would want to risk prematurely burning the exploit.
And of course there are a ton of exploits that are not remote (all sorts of local privilege escalations), and partial exploits get sold too. In a multistage exploit, say, just the sandbox escape might be sold, or an exploit that requires a memory leak might be sold without the leak, or just the memory leak itself. Obviously a fully weaponized exploit sells for the most, but there are buyers for the individual stages as well.
> Who gets to control the server being compromised?
I was thinking about phones, not servers.
> then the buyer can log all the network traffic and find the incoming attack traffic and work out the exploit from there.
Is it really that easy? I'm not a security researcher, but I imagine that most exploits aren't just a magic byte sequence you send to the victim -- so I assumed that just a single observation of a successful attack is not enough to understand it easily.
That doesn't change things too much; it does introduce some potential difficulties with intercepting certain types of traffic/input to the phone. The question just becomes who controls the hardware being compromised.
> but I imagine that most exploits aren't just a magic byte sequence you send to the victim
It's not, and it's not like you can just replay those very same bytes, but it's not magic either; it all has a meaning and a purpose. While it's not easy, you can work out plenty from logs. The entire exploit is necessarily there. Details will change, but all the instructions[0] that get injected to run later stages necessarily need to be sent, or at least the instructions to generate/cause them.
It's not an easy skill, but it's not unheard of.
[0] I'm simplifying a bit to avoid getting into various code execution techniques
I don't know anyone who hates Apple that much. This way, you still get the reputation and the thrill. The only people left are malicious actors, like state-sponsored attackers.
I think it transcends a PR stunt if it actually changes the market for vulnerabilities. $1M for kernel hacks and $100k for app interactions is just being price-competitive with the market.
Before this, some people complained that Apple’s bug bounty rewards were too low. Now they’re higher and a different set of people are complaining that they only did it for PR. It seems like a bit of a “damned if you do, damned if you don’t” scenario.
It’s true that Apple pins its rewards to specific outcomes, but I think a lot of bug bounty programs do something like this. For instance, Google’s top bounty for Android ($200k) is only awarded if you can provide an exploit that compromises the trusted execution environment (see https://www.google.com/about/appsecurity/android-rewards/).
I don't think it's a PR stunt. The typical layperson doesn't know what a kernel is, so the difference between a drive-by kernel exploit and an app exploit can't easily be summarized in a way they'd understand. Yes, a layperson will understand the difference after you give them a five-minute primer on operating system theory, but in this age of social media, who still has the attention span to sit through that if their interest isn't in computer science?
So to me, this isn't a PR stunt. It's a necessary "dumbing down" we see all too often. It's no different from journalists digesting and simplifying the content of an advancement in biology or physics.
> Isn't Apple's move mostly a PR stunt? Ordinary people will read it as "The iPhone is so secure that Apple is willing to pay $1M to anybody who finds a security vulnerability."
When I read the article my first reaction was "Only a million?" Considering the importance of a bug like this to Apple's business and the size of their cash hoard, this sounds like they don't actually care that much.
Yeah, right? Considering they could afford to buy Greece[0] at some point, one would assume they would allocate more budget to something that potentially saves their own business.
I know some security companies have higher payouts, but do any manufacturers? As far as I can tell, Google’s highest bounty for an Android exploit is $200k. Apple will pay up to $1.5m if you can hack the kernel in a prerelease version of iOS. Zerodium tops out at $2m.
It has that effect (good PR) as well, but I think tptacek is correct that this will have an effect on the 0day market. If you find a vuln and build an exploit, you get to choose whether to do something risky (criminal liability, others might find the same vuln and take the reward, the bug might not live long enough for you to make more than $1M on the black market, further criminal liability from laundering your ill-gotten gains, ...) or take the easy $1M. Which would you do? It's obvious.
Now, by keeping these 0days off the market, Apple also gets to further burnish their reputation. It's a good play no matter how you look at it.
Of course, first Apple needed to be fairly certain that there aren't tens of thousands of vulns left to patch!
Well, it's true. The Hacker News audience is so particular that it doesn't resemble any specific country's population, let alone the world's (i.e., "people" in the general sense).
That's certainly not to say the HN audience is an elite (it's not), but it is composed of outliers, in the sense of people having unusual jobs and/or unusual interests compared to the average population in any city-sized slice taken pretty much anywhere in the real world.
Another benefit of this is that researchers are no longer incentivised to hold onto bugs. Currently, you need to find a vulnerability just to get onto the device to do further research. If the researcher reported that bug then they would lose that access. Now they are free to report any bugs they find without jeopardising future research activity.
I am not sure. Not everybody is driven by money. I know a few people who turned down insane job offers because, as they said, they are "not interested in money." I can see the "good guys" saying "I would have reported this bug even for free" and the "bad guys" saying "If I hold onto this long enough, I might eventually be able to [put your mad-science plan here]."
I guess on average it will reduce the number of people holding on. But it will not eliminate them.
Yes, it may not eliminate them, but if they hold on too long, someone else may find the same vulnerability and get the reward first. So holding on carries some risk of loss: not just loss of the money, if they really don't care about that, but also the risk of losing the exploit anyway despite holding on.
The change here is that it reduces the "good guys" saying "I would have reported this bug even for free, but if I did I would lose my access to continue researching".
> Apple isn't just bidding against the brokers and IC in lump-sum payments, but also encouraging people to submit bugs early, before they're operationally valuable for bad actors.
Yes, but also bugs "overlap" in multiple different ways; the most obvious is that a "similar" bug in a different code path will, at a company like Apple, Microsoft, or Google, result in a hunt for the same pattern on other code paths, but also the fix for one bug can kill multiple bugs elsewhere. So even though people do sometimes find exactly the same bug --- I'm fond of pointing out that at Matasano, Vitaly McLain found the nginx equivalent of Heartbleed within 2 hours of someone else reporting it --- that doesn't actually have to happen for someone else's work to kill your bug.
You're conflating an exploit for a narrow bug target class with a malware package which likely contains one or more exploits and probably a payload. The act of exploiting requires much more than an exploit alone to have the desired effect.
Also, somewhat famously, both Spectre and Meltdown were discovered independently by multiple teams in the same timeframe, who all coordinated disclosure with the CPU vendors etc. https://meltdownattack.com
But they also pushed up the price on the black market, and if someone on the black market is willing to pay $2M, with $500k up front and the rest over a period of time, someone might take those payments in bitcoin over Apple's $1M, which you will have to pay tax on.
We should also remember that there are tons of people outside the US who are into this: Africa, Asia, Eastern Europe. They don't have to worry about the legality of selling an exploit.
>But they also pushed up the price on the black market
Isn't that the point? Sure, it makes selling on the black market more valuable for the hackers willing to do that, but it also makes purchasing an iPhone exploit less accessible for anybody else. That's a good thing.
Money is not a problem for nation-state actors. Not even $10M will stop any country. Even a motivated African dictator will easily pay $50M if that's what it takes to stay in power.
Yeah but that cost has to be passed on at some point to an end consumer.
This move from Apple makes people like me, working with human rights defenders and journalists, happy.
Why?
Because it drives up the costs for the NSO Groups, Hacking Teams, and Gammas of this world. They either absorb the cost (taking a hit to their revenue/internal capacity) or raise their prices (making it harder for crappier regimes to afford, and reducing the frequency with which high-end exploits get used).
That seems like a pretty valid viewpoint. The higher the cost, the fewer people with access, and the more likely people go with Apple’s offer.
Besides that, earning a one-million-dollar reward for cracking the iOS kernel is probably a nice ticket to a pretty well-paying gig at some security firm.
Right. Being able to claim (with proof) that you collected on a $1,000,000 bounty, from Apple no less, makes you massively more employable no matter where in the world you live.
I agree it’s a smart move, but I don’t know that I agree with the figure. Anyone who claims this bounty could make 500k+ at Apple without question. In some ways it seems like a recruitment exercise.
What is the probability of an Apple developer introducing a hard-to-catch bug and sharing the information with a third party so that they can split the bounty?
I don't know. But I do know that the way to prevent that is to handcuff all the developers with crushing process heaviness. Then they won't introduce anything malicious, because they won't introduce anything at all.
Given the published corporate policy on leaking I think it’s safe to conclude they would be fired, and most likely prosecuted when possible.
“The Cupertino, California-based company said in a lengthy memo posted to its internal blog that it "caught 29 leakers," last year and noted that 12 of those were arrested. "These people not only lose their jobs, they can face extreme difficulty finding employment elsewhere," Apple added.”
I think the probability is very low. Apple pays developers very well, and you're asking them to risk all future salary and their freedom. That’s going to cost a lot. Much more than $1M, I would think.
That would be a scam if bugs on the grey market are paid for over time until fixed.
Then Apple could buy and fix it ASAP so the researcher gets screwed. So yeah, it’s cheaper, but if someone notices, just wait for the backfire!
A guaranteed one-time payment seems like the fair way to go.
Apple salaries aren’t much of a secret, see: levels.fyi.
1M is a lot of money to me, a regular person, but when you consider that top security engineering talent could be making north of 500k in total compensation, 1M suddenly doesn’t seem all that impressive.
It’s a good bet for them to make given the risk. Imagine paying a mere $1M to avoid a public fiasco where all of your users get owned.
This just seems like good business. They could make it 5M, and it would still be worth it to them in the medium to long term.
I'm surprised by how cheap the vulnerabilities market is. A good exploit against a popular product like Chrome selling for $100k or even $1M may sound like a lot, but it's really pennies for any top software firm. And $1M is still a lot for a vulnerability by market prices.
You can do so much damage/return with an exploit that affects >30% of the population. Get 5 of those and the sky is the limit.
> I'm surprised by how cheap the vulnerabilities market is
I think this has a lot to do with government agencies buying any exploit they can get their hands on, and there being basically no market besides that. I don't know if that is illegal in the US, but it seems the government is the only buyer.
I'm wondering if hedge funds would buy something that would allow access to private data. I hear insider trading is not an unusual thing, so a polished series of exploits wrapped up as a tool with a clear interface might be taken seriously.
> if hedge funds would buy something that would allow access to private data
Extremely unlikely. The risk/reward if found out is too lopsided. Conviction for insider trading has you pay a penalty and transform your fund into a family office; Raj Rajaratnam going to prison for a decade is a unique exception, not the rule.
Conviction for insider trading in combination with wire fraud, espionage, and all the other exploit-related charges will send everyone involved to prison for 10-20 years, pretty much guaranteed. What use is a bigger hedge fund if you have that sword of Damocles hanging over you?
Well, for what it’s worth, if you purchase an exploit and use it to hack phones and then trade on the info you steal, I don’t think there would be any insider trading charges involved.
Lots and lots of other charges but if no insider is giving you info then it wouldn’t be insider trading.
What would be 100% legal would be if you bought an exploit and then traded on the release of that exploit. Depending on the severity of the exploit it could move the stock price a bit. And, even though people wouldn’t like it, that’s kind of the point of the market. You get rewarded for helping with information and price discovery.
Some companies put a lot of effort into security (Apple, Google, Facebook, etc.). Usually they have engineering-driven cultures.
The majority of other companies see security as just a cost center that needs to be covered in order to reduce legal liability.
The second kind of company does not have a bug bounty program because they know they have too many holes and prefer not to attract too much interest.
For those companies, which may have huge capitalization and profits, paying $1M for a vulnerability is not practical.
I expect that, in general, all companies (including those in the first group) detect compromised accounts and services from time to time, but unless they have to disclose because the law demands it, they prefer to avoid the bad PR and potential lawsuits.
But while I expect the first kind of company to do a root-cause analysis and improve their systems, companies in the second group usually just clean up the detected compromises and avoid looking too deep, either because they do not have the skills or because they are afraid of finding things they would have to disclose and be liable for.
It depends on how an exploit is monetized; the devil is in the terms: exclusivity, duration, scope, and level of access. A non-exclusively-licensed exploit that can be sold 50x at $50k/year is bank ($2.5M/yr). If I were to spend 6-9 months developing a good exploit, I wouldn't give it to Apple if money were the primary and sole motivation. However, it could make sense to blog about it, turn it in to Apple, and leverage such a discovery into outside angel funding for a startup... that is, if Apple doesn't require onerous NDAs. If the terms from Apple weren't favorable (they're likely to be terrible), then reselling it makes sense if you were really broke; going the nonprofit security-disclosure route at least parlays it into cred.
I don’t know this market well, but I do know what a 3% dip in Apple’s stock price means, so it seems rather obvious that Apple’s incentive to know of vulnerabilities prior to their sale in an alternative market is worth a lot more than 1M.
Valuing something in terms of its short-term impact on stock doesn't make any sense. An iOS vulnerability is worth a lot, but that the stock price dipped 3% is more of a sign of the market being fickle and reacting to any bad/good news about the company on a given day than 3% of Apple's worth being lost in any meaningful sense.
Following this line of thinking leads to some pretty absurd conclusions, like 7% of Tesla's value being predicated on Elon Musk not smoking a joint[1].
>>I'm surprised by how cheap the vulnerabilities market is.
I presume that it is a legal minefield; selling 0days and extortion are first cousins, at least. If you could hold an auction, no doubt an Arab country would pay $50M for one... but they buy them from companies that sell "software" and services.
Agreed. People in the top-rated comments of this thread seem to think $1M is a lot of money, when a vulnerability could cost Apple 10-100x the bounty in lifetime customer value.
This is silly. Apple is going to have bugs almost no matter what they do, as will Google and Microsoft.
It makes sense to invest way over the top if you can kill bug classes outright --- and Apple does this, too. For example, people that were doing DMA hardware attacks against macOS a couple years ago are now on Apple's payroll, designing hardware to defend against those attacks. That's a meaningful serious investment in defense. Rewriting their kernel in a memory safe language would be another example (one they haven't done yet).
Massively outbidding the current spot price for a bug doesn't accomplish anything like that. Think about who they're really bidding against. They can drive the price of bugs way up, and they are doing that gradually, but there will still be a price and people selling them.
The important thing they're doing on this is making unlocked devices available for researchers, and lowering the bar for research for people who would never sell to brokers.
So the question is, assuming you have a valid exploit, could you not convince Apple to pay more than $1m?
If you found an unfound gaping crater of an exploit somewhere -- how much would that be worth to them? Likely a lot more than $1m. I'm sure you could negotiate that number up, a lot.
Well, how would you otherwise monetize the exploit? You could sell it on the black market, but I doubt you'd be able to make $1M off of it. Then again, I don't know the black market, so maybe you could?
But bug bounties like this are competing with the shadier markets. I suspect they found out the shady companies are offering more than their existing bug bounty program did.
Quite simply if you have your name on apple.com that says they paid you $1m under this program you just walk into any VC office in SF or Tel Aviv and say "how much have you got?"
As Warren Buffett says, there is plenty of money to be made in the centre.
The problem is that technically finding the exploit at all is in violation of the CFAA. Once you approach Apple with the exploit, you're then skating on their goodwill. Trying to play hardball probably won't go well for you.
If one million for hacking or accessing an iPhone were more than the real-world value, there wouldn't exist businesses dedicated to unlocking protected iPhones or hacking them; but they do, and they make deals with countries while earning millions.
I'm absolutely sure that Saudi Arabia would pay much more than that for such an exploit, and I don't need experience in the field to know it; it's common sense.
Bounties on security vulns create difficult incentive dynamics, though. At these levels you start running the risk of an insider subtly introducing a vulnerability and sharing it with a secret acquaintance.
I would hope there is sufficient discouragement for that sort of behavior - that's definitely breaking at least two federal laws.
Maybe I'm naive, but I would think that any programmer skilled and privileged enough to be able to insert a vulnerability into a mainline product, get it past code review, and make it look like an innocent mistake during the inevitable root-cause analysis would not only know better, but would also value their career enough that whatever payout they got from the bounty wouldn't be worth jeopardizing all they've worked for.
And anyway, how's that conversation supposed to play out? "Sorry about that bug boss, yeah could've happened to anyone. Anyway, about my retirement in two weeks..."
Another scenario: having access to the code puts you in a much better position to test for and find vulnerabilities in the first place. Instead of disclosing them internally, which may just be expected of you and get you little extra, you leak them to a trusted external partner for a share of the bounty.
I'd like to propose a new form of Betteridge's Law that states any time a monetary figure is given in reference to a large company the top comment will always be a form of "That's nothing. It should be at least 10x that number!"
I don’t know the law of which you speak, but the risk of not catching an ugly vulnerability after posturing themselves as a secure platform all over the media could easily cost them a lot more than 1M. That’s why their bounty is actually a lowball. It’s not that they’re a big company, it’s that they have a lot to lose. Does that make sense?
They're offering about 100x more than is common in these programs. Despite that, folks like yourself are whinging about the company that pays the most, instead of the ones who pay the least.
>Forbes also revealed on Monday that Apple was to give bug bounty participants "developer devices" - iPhones that let hackers dive further into iOS. They can, for instance, pause the processor to look at what's happening with data in memory. Krstić confirmed the iOS Security Research Device program would be by application only. It will arrive next year.
I wonder how they're going to manage this. I could easily see some less than ethical researchers applying for this program and selling all the 0 days they find to the usual suspects rather than informing Apple.
Isn't the idea of a bug bounty at this scale that the monetary reward (especially combined with the lowered legal risk, but also when considered in isolation) is higher from reporting it to the vendor than from selling it on the black market? I.e., presumably Apple has done its research, and one million dollars is more than they believe you'd get selling a zero day to somebody else.
I don't work in the security field nor am I a business number cruncher, but that was the gist I had of what these programs achieved.
Edit: see Despegar's reply, I should have RTFA! However worth pointing out that there would be some incentive for researchers to go to Apple instead of a third party, which might tip the scales in their favour.
>Previously, a company called Zerodium was vocal about how much it will pay researchers, before handing them to its unknown government customers. In January, the secretive company announced it was offering $2 million for a remote hack of an iPhone.
So that's already more than what Apple offers. I tend to think they'll always be outbid.
I don't know many people here who believe Zerodium's price list, and while I can't speak to Zerodium's payment terms, the norm appears to be tranched payments, apparently often over a year; selling the same bug on the grey market for "more" money (whatever it is brokers actually pay) is a gamble that the bug you've sold isn't going to die.
With those terms, they can buy a bug, report it to Apple, collect the $1m, and be off the hook to pay out the remaining payments. It seems to me this makes it much riskier to go to the black market than people here realize.
Yes, because that is exactly the sort of behavior that a business would engage in. Screwing over their suppliers and demonstrating that they offer no value whatsoever.
A nice pivot for patent companies would be somehow generating the connections and reputation to participate as brokers in this super-insular and highly technical marketplace, and then burn all that work down to fuck over an individual researcher for pocket change?
A "remote hack" generally relies on a stack of exploits. From what I recall when the last jailbreaks came about, that stack was at least 5 or so exploits deep.
So $1M/exploit is priced significantly ahead of the $2M/hack.
Interestingly, this also means that an entire exploitable stack now becomes worth a lot more, while any given exploit is worth a lot less. And any stack of exploits becomes much more brittle, as a patch to a single one of its N exploits can knock out the use of the whole stack, as the sketch below illustrates.
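A toy model of that brittleness (the per-exploit survival probability is invented, and the links are assumed to fail independently):

    # If each link in an N-deep exploit chain independently survives a
    # patch cycle with probability p, the whole chain survives with p**N.
    p_link = 0.9  # guess: one exploit survives a given patch cycle
    for n in (1, 3, 5):
        print(n, round(p_link ** n, 2))
    # 1 0.9
    # 3 0.73
    # 5 0.59

Under these made-up odds, a 5-deep chain dies about 40% of the time even though each individual link is fairly durable, and any single patch kills the whole thing.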
Bug bounties merely needed market pressure for the bounties to rise.
Corporations had been unilaterally deciding what the payment for a reported bug would be. They were constantly undervaluing bugs and wasting everyone's time. People would repeat the same rationale: "low liability and clean money are more valuable than dirty money you need to launder."
Yeah, but not that much more valuable.
So now the bounties are reaching their market price.
Are there any laws you bump into selling 0-days? Honest question. I know there are laws about the actual hacking part, but is it illegal to sell the payload? Obviously this is ethically dirty money; I was just curious whether it's actually dirty money in a criminal sense.
If you sell a bug to someone you know is going to break the law with it, you're getting close to the line for liability. People who sell bugs to the western IC are, as I understand it, virtually always selling to broker firms designated by governments which offer a veneer of plausible deniability; selling to a well-known broker is probably not legally all that risky.
As has happened, disappointingly, in the past: there aren't any actual laws offering safe harbor for ethical hacking; companies just tend not to prosecute responsible disclosure. If your disclosure required you to break interstate commerce laws, run afoul of the CFAA[1], or even just violate a TOS (or if they can convincingly argue that the discovery behind your disclosure might have), then you can be prosecuted.
Now, people who prosecute white hats who practice responsible disclosure are technically known as "assholes", but the end result is that there are a ton of laws even innocent computer usage breaks, so your liability for any damage to end users usually doesn't need to be considered; there's enough book to throw at people already.
This has nothing to do with hacking, so none of that applies. You're conflating this with attacking someone else's computer or network.
You don't need safe harbor because analyzing your own property is not a crime. Neither is telling people what you found. Also, please stop using the term responsible disclosure!
What if you sell it to the repair industry? There are plenty of people who want to break open Apple's walled garden, and those are not necessarily what most would consider "malicious" or "unethical".
Unfortunately the intersection of "security researcher" and "right-to-repair advocate" is probably tiny, but that would be something I'd love to see: someone finds a crack that enables a lot of third-party-repair scenarios and sells it to that industry instead of to the black-hat criminals or even back to Apple itself.
The third-party repair/aftermarket industry is huge and would love to break through the proprietariness of the ecosystem. In that light, $1M seems rather small...
There's going to be an equilibrium here dictated by how much Apple itself is willing to pay for bugs. If Apple pays more then presumably more bugs will get reported which makes the remaining unreported bugs much more valuable since they'll be relatively rare.
Depends who’s buying, I imagine. Not sure about everyone else, but I always picture the entities buying on the black market as individual people for some reason.
When you consider it could be the likes of the three-digit shoe inspectors over there in the US, it could be a fair chunk of change.
Who are these individuals that will pay millions of dollars for an exploit? Cyber criminals rely almost exclusively on dead bugs, frequently using exploits from the Metasploit framework. That gives them sufficient access to a broad range of victims so they can generate revenue through volume.
0days are used against hardened targets. Think “Iranian nuclear facilities” rather than “grandma’s PC”
Yeah. They don’t exist. People don’t think about things rationally, they see phrases like “black market” and assume it is some sort of criminal transaction with armed guards and meetings at midnight and shit.
It is nothing like that. It is the same as freelance development work, except there are very few customers and the developer writes something that may or may not have any value.
Another thing to keep in mind is that these devices already exist (used by Apple internally), just on the black market. A lot of these development-fused devices are stolen from the factory and sold to black-hat/grey-hat hackers (and companies) already.
I think one of the intentions here is to lessen the demand for black market dev-fused phones, which is already a huge problem for Apple. This is similar to the idea of officially allowing Linux on the PlayStation — give hackers no legitimate reason to pwn your console
I thought that the Linux-on-PlayStation was actually a tax dodge? Something like it let them claim the devices were not just games consoles and so their import duties could be reduced in some regions.
You'd think someone would have dug up a citation for this, but Wikipedia says (for the PS2):
>Some incorrectly speculate it was used as an attempt to help classify the PS2 as a computer to achieve tax exempt status from certain EU taxes that apply to game consoles and not computers (It was the Yabasic included with EU units that was intended to do that).[citation needed]
and Linux on the PS3 was used as a marketing point.
I forget who, but someone made a comment about this a long time ago that stuck with me. We often say, "They'll just sell all these 0days on the black market," but honestly that's not a literal market, and you have to make a lot of compromises, not only to your own integrity, but also to your safety and your ability to stay out of jail, if you do something like that.
I wish I remembered the specifics of the comment, but selling a 0day on the black market is not something a casual person can easily do, and even if someone figures out how, there's a lot that can go wrong, with many of those outcomes leading to jailtime.
It's vastly superior to participate in a bug bounty program legitimately, from a risk standpoint, especially if you're standing to make $1M. 0days are (and I'm not an expert on this) not generally going for enough more to justify all that extra risk.
Selling bugs and exploits is legal just about everywhere, so there is really no risk. Plus, law enforcement and intelligence agencies, or their contractors, are the ones buying on the “black market”. Nobody has to worry about jail for knowing about a mistake that Apple made in their code and telling someone else.
> I could easily see some less than ethical researchers applying for this program and selling all the 0 days they find to the usual suspects rather than informing Apple.
By vetting applications, presumably. I would imagine it's mostly professors in well known universities and corporations closely affiliated with Apple getting access.
But maybe it depends on how you define “hacker” and what you call “random”. I’m saying that folks in the jailbreaking scene are some of the primary targets for this. It wouldn’t be worth launching if the plan was to exclude them. Some are already part of Apple’s bounty program.
“Apple Calls In Rock Star iPhone And Mac Hackers For Secret Bug Bounty Bash”
That alone suggests that they are already acquainted with every one of the people they'll accept for the program? They might have a "loose" vetting process, but I doubt they have a lack of one.
The point is Apple is likely to aim only for researchers for this program, as the hackers could just resell most 0days, letting Apple know about a small fraction to maintain their reputation. It would make sense for Apple not to allow those hackers access to the program for this reason.
In this context, "security researchers" is just the preferred job title that white-hat hackers put on their business cards. The ones who go for bug bounties are generally not paid by universities or large corporations to do that work.
Some are already in the “vetted” bounty program. That doesn’t necessarily mean they will sell Apple the bugs though.
How much do they make selling to China?
Aside from the Chinese part of the jb scene, it’s kind of disheartening to know that the rest of them are selling that capability to a hostile adversary (if that’s true). I’m surprised the Five Eyes aren’t offering enough to keep it out of China’s hands.
> I wonder how they're going to manage this. I could easily see some less than ethical researchers applying for this program and selling all the 0 days they find to the usual suspects rather than informing Apple.
My guess is that they're not going to let just any rando h4xx0r into the program. They'll take on well-known security researchers and academics who have something to lose by leaking zero days.
For the naive guy who does not know how the trade of exploits really works and keeps hearing about "black markets": do you care to explain in realistic terms how these things work?
Where do the trades happen? Is it the exploiter putting out something like "kernel exploit for iOS xx.x"? Or does the exploiter bid on requests from people offering money? How can the seeker of exploits be sure that the exploit works? How do the parties keep their anonymity? And how is the money exchanged? Cryptocurrencies? And how did it work before crypto?
Are governments bidding, but also looking to convict the exploiter so they get the exploit for free? I remember watching one or two movies where the hacker was caught and sentenced to jail. The government stepped in and said: "You can either be in jail for n years or work with the good guys for n/2 years." Is that just science fiction?
It is not illegal to sell that type of software. It is not a black market, it is a grey market.
There is no way you will ever hear authentic answers to your questions. The only time anyone tried to explain it, the resulting article backfired on the interviewee. (Disclaimer: it was me.)
Governments do not buy from developers. The paperwork would be insane. They buy from businesses like Raytheon. How Raytheon gets them is opaque. But they do employ hundreds of exploit developers. Read the r/netsec job postings and notice how many require having a TS clearance. Every interesting job that says “work on vulnerability discovery and exploit development” requires TS.
Governments generally speaking do not cheat on business deals where they want to continue having access to that market. It is like stiffing the company that sells you replacement parts for your government vehicles. You save money now, but in the future your planes can’t fly and no one will do business with you.
All of this I explained during the interview, but the objective of the article was not what I assumed it would be, which was to address the dynamics of how the market works. I was naive to think that, but in my defense I was genuinely shocked that people were unaware of the market (it has existed forever). Literally everyone who is an infosec rockstar has been involved with exploit sales [0]. Many still are, because it allows them to work on what they enjoy — bugs and exploits — and remunerates them for their expertise. They get paid a living wage to do what they want, like any freelance developer. They are just smart enough to keep their mouths shut.
I haven’t been involved with the market for almost a decade now, but you’ll still hear people saying shit like “how does it feel to sell weapons to dictators??” (Even on here there are a number of such comments.) I can truthfully answer that I have no idea. I only ever sold software to western governments who had a hard-on for terrorists.
I’m still angry about it, but I have no one but myself to blame. You can’t unfuck the goat. C’est la vie. People want sensational stories about evil people, they don’t want stories about the dynamics of a grey market software industry. No one will ever speak about it again (lessons learned analysis! Protip, don’t be the lesson others learn from).
The market has changed massively over the years. It is nothing like the one I was involved in back then. However, as I said, no one will ever discuss it again. They saw what happened and they won’t speak in public about it.
What was, is, and will continue to be, the legitimate sale of vulnerabilities is now closed forever.
As a thought experiment, think of this. Let’s take it for granted that the IC counterterrorist units and the legal authorities hunting child abusers are acting in good faith. That is, not every single person at NSA is desperate to see what you are doing on the Internet (literally, you are noise obscuring their signal). There are people who are going after child sex abusers; do you want them to have the capability to exploit a web browser, or do you want web browsers to be safe tools for child abusers? This is not hypothetical [1].
There cannot be a discussion about a market where there is so much hysteria about fringe cases of abuse. Rather than trying to find ways of mitigating against abuse, the reaction has been to advocate for prohibition. Prohibition does not work, it simply drives reputable operators out of the market.
The conversation about vulnerability sales has been as even handed and rational as the conversation about marijuana in the 50s. Instead of marijuana madness you get “the FBI can hack your computer!!” ...I guess the upside is that at least this time the topic is not a proxy for racism [edit: I retract that statement. Pretty much every rationalization about banning vulnerability sales talks about African or Arabian buyers.]
And again, I have said too much. Try to explain something, get called a baby killer. I’ll bet there will be accusations of enabling dictators to spy on civil rights activists. To preempt the “you don’t know what happens after you sell it!” I say simply this — the point of having a middleman to handle the transaction is to ensure that you sell to the right end users. Exploit developers don’t want to sell to dictators, they find someone who can get them access to a market where their work will be used ethically. That can’t be said for all, of course. The jailbreak community in particular is essentially a vendor to the Chinese government.
But there you go. That's the most you’ll hear about it from someone who actually knows what they’re talking about.
[edit: haha, see? It was brought up before I even posted a response! [2] There is no accurate information. Literally every single paper on the topic cites newspaper articles rather than academic research. This is actually unique; it is the outlier case. Mara did a review of the literature and found that the majority of citations were to articles, far in excess of other topics.]
[0] https://www.econinfosec.org/archive/weis2007/papers/29.pdf [PDF] — a paper from Charlie Miller talking about how difficult it was for him to sell exploits without a trusted third party to act as an impartial party to the sale. That TTP is called an “exploit broker” because that sounds far scarier than “trusted third party.” Incidentally, this is the environment I was operating in, and it was clear that no one involved in security considered it abnormal.
[1]: https://www.wired.com/2014/01/tormail/ ... look at the framing of the article. It is not “FBI screws up their operation and mistakenly collects data that is irrelevant to their investigation.” It is “if you used this secure email provider [hosted on the same infrastructure as a massive child sex abuse web site] the FBI has your inbox!!!!”
That goat laid you golden eggs though. It takes me over 15 years to earn a $1m paycheck, and I wouldn't mind dealing with some people moaning at me for it. People always find something to complain about anyway, so I wouldn't be too concerned about it.
> The conversation about vulnerability sales has been as even handed and rational as the conversation about marijuana in the 50s.
Sure, you're right, but this is true for many new things, you're just smack in the middle of this discussion. It's good to have these discussions though, because it makes people aware that otherwise weren't. Comparable hysteria is currently happening with 'company X is listening to your conversations' and 'self driving cars might kill you to save a baby'. People in the field have been discussing the ethics of these things for a long time, but now it's becoming a public discussion. This happens when things grow.
Personally, I prefer the black/grey zero day market over back doors being proposed by some. Those would be permanent vulnerabilities, while zero days are (usually) temporary and probably only used while absolutely necessary, opposed to just eavesdropping on anybody. I also see the need, because the internet gives bad people too many places to hide.
So, from me, thanks for your services, you probably helped keep us safe from bad actors.
Yeah, that’s part of the hatchet job. I said I was projecting sales of $1M over the year. At 15% commission that would be $150k. You can make a lot more money than that, I’m sure. Also, don’t predict your sales funnel in February when you have no historical data to compare it with. I was off by about $900k.
So yeah, that $15k golden egg. ¯\_(ツ)_/¯
At the time I did not know about phrases like “off the record” or that you could have corrections made to articles that had false information. Had I known, I would have gone OTR at the beginning, although I should never have spoken in the first place. And I should have made them correct the inaccuracies.
Yes. It is less than any salary I could make at a company if I could get a job. But that article was a career limiting move. C’est la vie. Never talk to reporters unless you have a specific reason to. Definitely don’t talk to reporters who burn their sources.
I understand that he is writing a book now, and I doubt many insiders will speak to him. I don’t expect it to be very accurate.
I had a lot of problems because of that article. Not just the death threats and such, but ... do not ever become interesting to states. They have a lot of resources.
Damn, this is an awesome breakdown of the industry, and it's hilarious to me that, lo and behold, someone suggests the Greenberg article and the grugq himself turns up to settle the score.
I can't think my way around your point about prohibition, though. I think someone saying "selling exploits is bad" is also someone who would say "the government shouldn't be monitoring us, pedophile or not," and that's part of why they don't think exploits should be sold to governments. Could be wrong.
But we all generally seem to feel that the government shouldn't be given back doors into our devices that only they get to use, yeah? So instead the alternative is an endless arms race as Chrome or whoever tries to out-engineer the FBI? Why not just give them the backdoor at that point? (I.e., why not just support them having the backdoor; I'm not implying those in your wheelhouse have the power to legislate or anything.)
Think of it like GMO. There are two sides with legitimate concerns. But only one side can speak publicly.
As for backdoored Chrome, what is to prevent China using a modified version of Firefox that removes the backdoor? It would blind NSA to collection on the Chinese target.
There is no way you can use backdoors against hard targets. Hard targets are why they need 0day. It is an arms race because it is a conflict between states.
Whatever fears people have about 0day being used against them are, as I’ve said before, like worrying about ninjas rather than cardiovascular disease. One is something you have no control over, but almost no exposure to as a risk. The other requires regular work to stay safe.
Years ago I wrote “free security advice” and the basic concept is still relevant. I should update it now, though. Android 9 is a much harder target than 4.4 was. I would actually rate Android as safer than iOS, because all of these ridiculous articles about million-dollar payouts have driven most developers towards iOS, and iOS is a monoculture.
A hardened Android device (disclaimer, I’m making one for retail sale) is safer than a stock iOS.
Literally everything in the media is complete garbage. No one who knows how things work would ever discuss them again.
Your argument is limited to technical and political science concepts, and by limiting itself so, is correct. It is inapplicable to the real world.
Governments have used zero days. Most famously, to unlock an iPhone belonging to a terrorist (whose house was ransacked by the news media). Less famously, one botched a legal case against a pedophile (amazingly, it would be possible to find and arrest nearly all pedophiles on Tor by burning half a million dollars in zero days). But the government didn’t want to release the zero day in the Playpen case, and Mozilla got involved.
But Freedom Hosting’s zero day was discovered while it was being used. I think the government still uses zero days, but parallel constructs the evidence from them. This is policy making by mismanagement.
On face value, the government is involved in abhorrently irrational decision making. The government cannot be considered responsible enough to have zero days, but that’s an argument that will lead nowhere.
> Your argument is limited to technical and political science concepts, and by limiting itself so, is correct. It is inapplicable to the real world.
I read your post, but still have no idea why it's inapplicable in the real world. Could you explain that again? I think it's a very interesting discussion, so I'd like to actually understand your point.
>Farook destroyed his personal phone. The FBI wants access to his work phone. UPDATE: FBI locked themselves out of the iCloud account after it was seized.
>FBI already has huge amounts of data from the telco and Apple. This is almost certainly enough to rule out clear connection with any other terrorists.
>FBI is playing politics, very cynically and very adroitly.
I can’t clarify, I can only make disparaging statements saying the government hires incompetent people who can’t notice corruption in front of them, as a side result they mishandle everything and their bosses must work overtime not only to hide corruption from the public but to spin their employees incompetence as reasons to reduce the public’s civil liberties.
> A hardened Android device (disclaimer, I’m making one for retail sale) is safer than a stock iOS.
?
Is Android (and in general any open source system) safer than iOS (a closed and highly customized system)?
The idea I heard over and over is that an open source system is more secure because the code is scrutinized by anyone who wants to.
But with monthly security updates and how quickly a vulnerability can be exploited, that no longer seems to be the case.
The main reason is the window between when a vulnerability is patched in the source code and when the patch is deployed. When a commit that fixes a vulnerability lands in the Android codebase, anybody who knows what they are looking at can notice it, and likely build/distribute an exploit before the patch is actually pushed to all users. On a closed-source system, an attacker can still reverse engineer the changes in an update, but fewer people have the skills to do it, and it is not straightforward to figure out which changes in the code are security patches.
Considering the timing, and that the Android security bulletin shows EoP and even RCE vulnerabilities being patched almost every month, a Google phone will, on average, spend two weeks of every month vulnerable to a "known" vulnerability.
For all the others the situation is dramatically worse. Samsung is at best a month behind the security update schedule. A Samsung user will have a phone that is always behind the latest vulnerabilities patched and visible in the Android codebase.
Some of these vulnerabilities can be exploited quickly and at scale, since everybody has an LTE internet connection and reads news in a browser.
When I wrote it, Android devices never got patched (hence the advice to switch to a FOSS ROM that would be updated, rather than a frozen-in-time factory ROM).
Security involves a lot more than just access to the source code. That is simply a factor in the ease of some techniques for vulnerability discovery. Back then, Android had poor process isolation, significant problems with its sandbox, lax SELinux configurations, and insecure software architecture (e.g. not using “least privilege”).
For a regular user, a stock iOS device is safer than an Android device because there is very little iOS malware in the wild. For a user at risk, then they are safer using a secured device, which by default means modified Android.
Security is not a generic “thing”. It is a continuous process that provides countermeasures against threats by mitigating risks.
If you want a device that is safe by default, will always be patched, and is not vulnerable to indiscriminate exploitation or malware embedded in apps — use iOS.
You can achieve that with a Google Android device too (starting with about v8 or so). Of course, you still have to be vigilant against malware-laden apps.
> A hardened Android device (disclaimer, I’m making one for retail sale)
Any more information on this? I'm more than a little depressed by the current options in phones - I don't relish the idea of moving to iOS - but at the same time I'm a bit worried about the direction Android is taking...
What kind of market/price range are you aiming for?
I think (though I can't be sure), what they're trying to say is that it's still legitimate, but it's opaque, because nobody wants to talk about it. Because of that, it seems, to outsiders, like it's an evil black market, even though many people involved in it, believe that they're doing the right thing.
Sorry, to clarify I was referring to the voice of industry insiders. I mean that no one who knows is willing to speak about it.
There is so much bullshit about the “highly lucrative black market” it is staggering. The market is not big. There is significant risk which gets factored into the payment structure, so the payments are lower than people imagine.
The market is not very liquid. If you have a Chrome capability for sale but your client already has a Chrome capability, they won’t buy it. If their capability dies, then they’ll want yours, but by then yours might be dead as well. Gross oversimplification, but that is generally how things work. The demand is very specific, the supply is very limited, and the product is very fragile (particularly time sensitive.) It is lucrative like making a startup is lucrative. You invest a lot of time and resources and sometimes, with luck, you win big, but the odds are not in your favor for a million dollar payday.
Most articles treat it like some sort of open market drugs bazaar. It is nothing like that at all. It is more like a handcrafted goods faire with a few wealthy customers looking for exactly the thing they need. Only they won’t tell you what they need, they simply want to see what is on display. Lots of window shoppers, as it were.
The product has an unknown shelf life.
The customer cannot tell you what they need; they will only look at what you have and possibly choose something.
The developer needs to provide enough information about the capability for the customer to make an informed decision, while avoiding details sufficient for it to be reproduced from the ad copy.
Part of what a broker does is translate between two parties who don’t speak the same language. The customer needs a Tor Browser Bundle capability. The developer has written a UAF RCE for Firefox that relies on JIT spraying for reliability. Someone has to translate from exploit-dev speak into IC language.
For the IC, that TBB capability is a replaceable part in a larger program that enables them to achieve their mission objectives. For the exploit dev, that bug is a labor of love that they spent months working on. They have completely different views on the value of the capability. One side sees it as a component they need for a machine they want to use. The other side sees it as weeks of frustration and pain invested into a unique masterpiece.
They have different expectations, don’t speak the same language, and don’t trust each other. Things have changed a lot from when I was involved. It’s all very fascinating but, as I said, no one who knows about it will discuss it.
I’m being stupid and talking about it, again. But hopefully this will clear up some of the stupid myths about the vulnerability market.
For example, all those “wow, a way to read someone’s private messages on Facebook? That’s got to be worth millions!!” No, it is not. If a legitimate client wants to read someone’s messages on Facebook, they get a warrant. There is no ROI for cyber criminals, and whatever it might be worth to North Korea, the risks associated with that sale are not worth it. That bug is worth whatever Facebook says it is worth. Dropping the 0day would make for some news, but mostly negative news. So the only rational way for a security researcher to make money from a Facebook bug is through the bug bounty system. (I’m not addressing cyber criminals discovering such a bug, because that is not relevant to the issue of vulnerability sales.)
Based on their recent acquisition, it seems like Azimuth built something of a working, de-risked business model relative to the uncertainty of the broker days, no?
I appreciate your honesty and taking responsibility. The time I spent in security research had me putting blame pretty far away from middle-people: (a) the users and buyers who almost exclusively go with insecure crap, even if secure ones are highly-usable and/or free; (b) the developers who do nothing to make their software secure. On (b), some vulnerabilities could've been prevented with push-button tools like AFL that they just don't bother to run. Fish in a Barrel LLC makes that point more comically. Those groups driving the vulnerabilities would have to get their shit together before folks selling them become truly bad to me. Do have two points to address, though.
"Literally every single paper on the topic cites newspaper articles rather than academic research."
You mention that everyone is doing it with no citations of academic sources. I'd be interested in reading any recent research you believe is high quality and represents the current market. That other paper was dated 2007. I figure there's been some changes.
"Let’s take it for granted that the IC counter terrorist units and the legal authorities hunting for child abusers are acting in good faith. "
We can't take that for granted. OK, so the prior precedent I pushed Schneier et al. to use in the media was J. Edgar Hoover. He used blackmail, initially on a small number of politicians in control of his budget and power, to massively increase that budget and power. The Feds committed all kinds of civil rights abuses. His reign lasted a long time, with his power growing, and he accomplished it all through surveillance using ancient methods that required actual people listening in on calls and such. The Feds and the IC keep doing power grabs even though some or all of that was supposedly stopped; we'll never know, since the FBI kept its budget and power.
I predicted that, post-9/11, they'd do a power grab as a USAP. If it's a USAP, then only a few in Congress can oversee the program and therefore only a few need to be controlled. Sure enough, Snowden leaks confirmed they did that for nation-wide surveillance, Congress kept doing nothing more than they usually do (didn't even read reports per GAO), gave them retroactive immunity for abuses, the warrants weren't for specific individuals ("targeting criteria"), they shared data with all kinds of non-terrorism-related agencies, at least one (DEA) regularly arrested folks after lying about sources, and they're steadily expanding that. Again, with criminal immunity for whatever secret things they're doing.
So, no, we can't consider them to be acting in good faith. They've constantly lied to Americans and Congress about programs that are used to put people in jail for all sorts of stuff. There's no telling what they'll do if we give them too much power. That's why some of us advocated warrants for information or for specific acts of surveillance. One can also hold people in contempt for not giving up keys. It needs to be targeted, with evidence behind what they're doing.
And, yes, some horrible people will get away with crimes, like they do with our other civil rights. You'd have to be non-stop spying on every person anywhere near a child, 24/7, to achieve the goal of preventing that. Yet we don't do that, because we as a society made a trade-off. This is another one. And this isn't hypothetical: the FBI is corrupt enough to pay people to recruit and then bust terrorists, while Presidents and Congress usually take what amount to bribes from companies to get elected. We should always treat them as a threat that acts in its own self-interest, which might differ greatly from ours.
> I appreciate your honesty and taking responsibility.
It is a fatal character flaw I have. When people want to know about something I try to help them.
> You mention that everyone is doing it with no citations of academic sources. I'd be interested in reading any recent research you believe is high quality
There is nothing that I am aware of that discusses the current market. The RAND paper is closest.
> acting in good faith.
I should not have used a blanket statement. My point is that there are people in IC who are legitimately going after terrorists and child abusers. They have a legitimate need for capabilities that enable them to do that.
I am not saying that the IC is a benign and wonderful government organ. I am saying that within the IC there are people who are actually hunting terrorists and pedophiles. I didn't want to explain all of that because it is obvious that it is true. Hence, "let's take it for granted". Rather than discussing the history of the IC, I wanted to explain that there are legitimate uses for 0day, and that is the issue being discussed.
The rest is not relevant to explaining how the vulnerability market operates. (Well, how it did in 2011.) When someone asks "how do shares work?" you don't start off by talking about boom and bust markets and macroeconomics. Same thing here. "How does the market work?" is not a question about the IC. It is about how the market works. If you're talking about the vulnerability market you talk about the vulnerability market. You have to assume that there are legitimate players who are acting in good faith.
This entire post is why I abridged it to "let's assume good faith."
Leaving it out because it was mostly irrelevant makes sense. There's definitely folks doing good with these capabilities. I'm a big fan of their work and grateful for their sacrifices. And thanks for the RAND link. I'll check it out later.
Speaking about exploits in general, at least the old method was to go to cracking forums and announce that you had the crack available. Usually you would then get into discussions via IM and finally broker a price.
It used to be done via payment services like PayPal, but I imagine Bitcoin plays a large part in the modern world.
I'm talking about game exploits. I spoke to some people still in that scene, and they say they typically still use PayPal for the protection. According to them, exploits for modern games typically go for $10 to $100 USD, although they burn through tonnes of Steam accounts in the development process, which generally means profits are not great. They say it's more for the technical challenge of it.
Yup. About $20k USD that I had withdrawn. It was a payment to a developer.
If you look closely you can see all the paper towels the photog stuffed into the bag. (He also tried to steal one of the stacks, lol. It was his bag, and when he gave me the money back it was short. He accidentally “missed” one of the stacks.)
OT, but if you have showdead on, you can see beeschlenker's weird comment. It seems that he hears "things" in his own security cameras, and also thinks he is in a "Truman show" setup. Just a heads up if someone in the area could possibly help him.
Can you contact gofundme. He probably needs help and some of these larger companies have ways of informing local authorities if someone is at risk of harming themselves or others.
Yeah, there is nothing in those videos. Absolutely nothing; even looking at the waveforms, it's just white noise, clicking, and him(?) talking. Trump, Obama, and Dorsey aren't in any of these videos, and if they were, why in the hell are they full videos and not just the parts with them talking?
What a waste of time. The dude needs mental help; there isn't anything there. If there is, it's unconvincing after going through 20 of these.
However, the few of him getting evicted were real.
This is great. Every company should be responsible for paying market price for security vulnerabilities in their own products. If you make something that carries significant market value, you should be paying the security tax in the form of a security team or bug bounties.
Does anyone know if these types of bug bounties are negotiable?
As several people have mentioned, hackers can sell to the highest bidder and having proof that you have an exploit is probably sufficient, but what if Apple was willing to pay as much as the highest bidder?
This might also convince people who have already sold bugs to reach out to Apple.
It probably costs them a fraction of the PR spend or risk of data breach/user exposure etc.
The press release specifies that the quoted $1,000,000 figures are minimum payments for the given category of bug, so the actual amount paid could be anywhere upward from there.
How feasible would it be to find bugs in the iPhone kernel’s network stack? I imagine this is pretty battle-tested stuff, but it would tick all the boxes for remote and no interaction.
Edit: Since it's XNU, and it's open-source, and it's been around for a really long time, this seems unlikely. But if something was found in here, for instance, everything would be practically compromised: https://github.com/apple/darwin-xnu/blob/master/bsd/netinet/...
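For a sense of the bug class at stake (a toy sketch of my own, not real XNU code), network-stack memory corruption classically comes from trusting a length field that arrives in the packet itself:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical packet format, invented purely for illustration. */
    struct pkt_header {
        uint8_t type;
        uint8_t payload_len;   /* attacker-controlled */
    };

    static void handle_packet(const uint8_t *pkt, size_t pkt_len) {
        uint8_t payload[32];

        if (pkt_len < sizeof(struct pkt_header))
            return;
        const struct pkt_header *hdr = (const struct pkt_header *)pkt;

        /* BUG: payload_len is never checked against sizeof payload
           (or against pkt_len), so a crafted packet smashes the buffer. */
        memcpy(payload, pkt + sizeof *hdr, hdr->payload_len);
        (void)payload;         /* ...would process payload here... */
    }

    int main(void) {
        uint8_t evil[300] = { 0x01, 0xFF };  /* claims a 255-byte payload */
        handle_packet(evil, sizeof evil);    /* overflows payload[32] */
        return 0;
    }

Anything like that, reachable from a received packet with no user action, is exactly what would tick the remote, no-interaction boxes.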
I think it's a cheap offer. If you are a professional who is well introduced in the business of selling 0days, you can charge even a single customer the same amount for a gem like that. Indeed, there are private companies offering even more (https://www.securityweek.com/zerodium-offers-2-million-ios-h... )! OK, companies involved in this business have some "safeguard" clauses in case the hole is discovered too soon (see for example the Hacking Team e-mails), but you can sell this kind of vulnerability to practically anyone. So the offer, IMHO, is a public relations move.
This is fantastic news from Apple! I use an iPhone and I can sleep a little better at night knowing that someone would now have to risk burning a $1Million exploit if they wanted to hack me! I’m not worth near that much yet so I’m probably not worth spending $1M on.
I remember the days of jailbreakme.com, when you could just visit that website, your iDevice would be rooted, and Cydia would be installed. You could install all sorts of tweaks, mods, and apps through Cydia. I remember installing a Pandora tweak that gave unlimited skips, gave it a black theme, and removed ads (because I was a poor student), and I got freaked out, because if tweaks could modify apps like that, then they could probably phish banking passwords. Anyone else remember those days?
This is an area that's always been fascinating to me, but that I've never dived into. I'm not overly interested in this particular program, just exploits in general and perhaps examples of how and why they work. Anyone have any resources that they've found useful?
There's a ton of info out there in various websites and blogs. I like the RPISEC Modern Binary Exploitation class as a great introduction. The lectures and materials (and a VM!) are on github: https://github.com/RPISEC/MBE
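To give a flavor of where material like that starts (a toy example of mine, not taken from their course), the canonical first exercise is a stack overflow where unchecked input overwrites a neighboring variable:

    #include <stdio.h>
    #include <string.h>

    /* Compile with mitigations off to observe the effect, e.g.
       gcc -fno-stack-protector -O0 demo.c
       Whether `authorized` actually sits where the overflow reaches
       depends on the compiler's stack layout. */
    int main(int argc, char **argv) {
        int authorized = 0;
        char name[16];

        if (argc < 2)
            return 1;
        strcpy(name, argv[1]);   /* no bounds check: >15 chars spill over */

        if (authorized)          /* can be flipped by a long argument */
            puts("access granted (via overflow)");
        else
            printf("hello, %s\n", name);
        return 0;
    }

Courses like MBE then walk from that up through shellcode, ASLR/NX bypasses, heap corruption, and so on.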
I think Apple could do better and has the resources to do so. Why not incentivize enough that the people who profit from this kind of business don't even have to ponder the decision?
Given I know "stuff" but compared to anybody who really "knows" I am a noob in exploit engineering. I just know basic inner workings of a computer and a little reverse engineering.
Would it even make sense for me to try? It seems like the probability I find something, especially without user interaction, seems so far off that it would be hard for me to find a constant motivation.
Imagine one year where I dedicate two days a week learning, understanding and trying. Do I would have any chance at all to find something worth the $1M?
> Imagine one year where I dedicate two days a week to learning, understanding, and trying. Would I have any chance at all of finding something worth the $1M?
Yes...but probably not the way you're thinking.
Most issues are discovered these days through fuzzing first. So there is always a chance your fuzzer will find an issue worth $1M; it's much less likely that you'll realize its worth, or be able to demonstrate and weaponize the exploit to prove it.
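To make "fuzzing" concrete, here's the crudest possible version (illustrative only; parse() is a made-up stand-in for the code under test, and real fuzzers like AFL or libFuzzer are coverage-guided and vastly more effective):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Stand-in for the target; a crash in here is a lead worth triaging. */
    static void parse(const uint8_t *buf, size_t len) {
        (void)buf; (void)len;
    }

    int main(void) {
        const uint8_t seed[] = "GET /index.html HTTP/1.1\r\n\r\n";
        uint8_t buf[sizeof seed];

        srand((unsigned)time(NULL));
        for (long i = 0; i < 1000000; i++) {
            memcpy(buf, seed, sizeof seed);
            /* mutate a few random bytes of a known-valid input */
            for (int j = 0; j < 4; j++)
                buf[rand() % sizeof buf] = (uint8_t)rand();
            parse(buf, sizeof buf);  /* run under ASan; watch for crashes */
        }
        return 0;
    }

The gap between "my fuzzer crashed something" and "this is worth $1M" is exactly the skill discussed below.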
Let's rephrase the question a little bit, though:
Instead of "Would I have any chance at all of finding something worth the $1M?", let's ask "Would I have any chance of learning this level of exploit development?"
Two days a week, rounding to 50 weeks a year to give you a bit of a break, comes to 100 days of effort.
So, in 100 days, would you have any chance of reaching the level of being able to at least write an iOS exploit, ignoring the discovery aspect? Unfortunately, the answer is still no.
But, you would make some serious progress!
A modern iOS zero-click exploit isn't just one issue. In a worst-case scenario (okay, there are worse than this, but this is a poor case) you might need all of the following:
- Memory Leak + Entry Point service exploit
- Sandbox Escape to low priv user
- Privilege escalation to higher priv user
- Kernel memory leak + Kernel exploit to finally get root privs
Even for someone with experience, going from fuzz result to exploit can take months. So in 100 days of spread-out effort you won't be doing all of that, but you might begin to approach that first stage: a memory leak and an exploit in a user-land service.
I only say "might" because 100 days is a really short time given how technical your knowledge of this stuff needs to be, but I'd like to think that with some real determination, the foundations of modern software exploitation are approachable in 100 days.
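Structurally (a stub sketch, where every function is a hypothetical placeholder for months of work, not a real technique), the chain above composes like this:

    #include <stdbool.h>
    #include <stdio.h>

    static bool exploit_entry_service(void) { return false; } /* leak + entry point */
    static bool escape_sandbox(void)        { return false; } /* to low-priv user   */
    static bool escalate_privileges(void)   { return false; } /* to high-priv user  */
    static bool exploit_kernel(void)        { return false; } /* kernel leak + root */

    int main(void) {
        if (!exploit_entry_service()) { puts("chain died at stage 1"); return 1; }
        if (!escape_sandbox())        { puts("chain died at stage 2"); return 1; }
        if (!escalate_privileges())   { puts("chain died at stage 3"); return 1; }
        if (!exploit_kernel())        { puts("chain died at stage 4"); return 1; }
        puts("full chain: root, zero clicks");
        return 0;
    }

The structure also implies that the chain's overall reliability is the product of each stage's reliability, which is why even "mostly reliable" stages make for flaky chains.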
As for whether it would make sense for you to even try: the best time to start was 20 years ago; it's been getting steadily more difficult. The longer you wait, the higher the barrier to entry gets.
Given the "kernel" requirement coupled with the design of these devices in general, any real non-interactive RCE will be claimed to not be "in the kernel"... it was a Qualcomm or ARM binary blob not the kernel!, It was the Baseband firmware not the kernel! It was libXYZ not the kernel! etc.
If Natalie Silvanovich finds a vulnerability that meets Apple's high-payout bounty criteria, they will pay her; nobody is going to mess with Silvanovich, least of all Apple ProdSec, who I have to assume exists in a relatively constant state of trying to recruit her out of P0 (good luck, ivan).
To be clear, she doesn't want the money; she's paid by Google. But I believe they've said the companies can give the money to charities, so now we'll see if Apple pays out her prize to charity. She may not have found a $1M exploit, but a lot of those 10 she found are pretty serious.
> Another $500,000 will be given to those who can find a “network attack requiring no user interaction.”
which I believe many of her vulnerabilities are definitely eligible for. I read that article from https://news.ycombinator.com/item?id=20639999 yesterday, and this was her second paragraph:
> Vulnerabilities are considered ‘remote’ when the attacker does not require any physical or network proximity to the target to be able to use the vulnerability. Remote vulnerabilities are described as ‘fully remote’, ‘interaction-less’ or ‘zero click’ when they do not require any physical interaction from the target to be exploited, and work in real time. I focused on the attack surfaces of the iPhone that can be reached remotely, do not require any user interaction and immediately process input. [0]
The full $1 million is for that level of fully remote attack, but against the kernel. I'd have to look up to see if any of the code she found vulnerabilities in are part of the iOS kernel.
She has a list at https://twitter.com/natashenka/status/1155940732084973568 (recall that "remote, interaction-less" means "do not require any physical interaction from the target to be exploited, and work in real time", according to the Project Zero blog post).
Edit: As the posters below said, those aren't kernel bugs. Thanks for the correction!
Didn't Google just publish an article where their researchers pwned the iPhone ten different ways? Granted, those bugs are all patched now, but who's to say they were the last ones?
I think that's why the amount is so high. If you sell it for less to bad guys, someone else might find the same exploit (or a connected one) and swoop in and claim the big amount from Apple, and you lose out.
These unethical places would of course never consider reporting the bug to Apple to get a million dollar rebate on their purchase price... that would be unethical.
Yes, but immediately patched by Apple. It seems to me that in the future you simply won't be able to own a rooted iPhone, because Apple has put a $1M bounty on making that impossible.
(If you don't see where this is going: after a while all the security holes will be patched, and thus no more rooted iPhones.)
Not true: false PR. They do this when there is a mass shooting, to show their phones are secure. I've contacted Apple many times about hacks. No response. iPhones are hackable; I can show you.
Design. Architecture. Usage model. Let me summarize in a short point. Have you ever had the misfortune of having to use it? It died out and never caught on for good reasons. It certainly played a role in turning lots of OLPCs into less useful devices.
Oh, you can just obtain a signed iOS image with law enforcement features enabled.
I assume all law enforcement features are enabled independent of the investigation, as some of them may be solely for investigating crimes involving non-consensual photography.
This isn't nearly enough money to stop North Korea, Israel, Russia, US, UK, France etc. Pretty sure a zero-day would be 10-100x more valuable to them than this $1 million reward. (Why is this even controversial?)
Cool... Go negotiate with one of those entities if you have a death wish. I'd rather hand it over to Apple, make a name for myself, and get clean cash than deal with the underbelly of the modern world.
No, but it would be enough money for someone unaffiliated with those states who would otherwise consider selling it to them (through a third-party or whatever) instead.
"The full $1 million will go to researchers who can find a hack of the kernel—the core of iOS—with zero clicks required by the iPhone owner. Another $500,000 will be given to those who can find a “network attack requiring no user interaction."
Ehhh, what's the point of this exactly? Apple only considers bugs significant if the penetration can happen without help from the end user? That's not how this works in the real world, Apple.
Nothing wrong with this in itself. A zero-interaction attack (like being able to get root on everyone who so much as walks past your WiFi hotspot with their phone switched on, even if it's in their pocket, unused) IS more valuable than one that requires user interaction, and it's quite right for the payouts to reflect that.
"Another $500,000 will be given to those who can find a "network attack requiring no user interaction.""
The implication of this conditional reward is that interactive use presents more/easier attack opportunities than non-interactive use. To clarify terminology, it is arguable that "non-interactive" can be a synonym for "automated" in this context.
Further, we might argue that canonical examples of "interactive" use are clicks, drags, taps or swipes. In other words, the prevailing "UI" for many users and the one promoted by many developers.
Now, if you agree these are fair statements, then it is also arguable that, from the user's perspective, it could be useful to engage in non-interactive/automated use, not only for reasons of efficiency or convenience but also for reasons of "security".
Finally, given these propositions, the question I ask is why website and app users are continually faced with "terms and conditions" that seek to prohibit non-interactive use. Interactive use benefits those running a website or app server in at least one obvious and significant way: more interaction means more data to collect. But if we accept the implication of this bug bounty it also means greater risk to the user.
Regulators need to protect the user's right to use her computer, including a "smartphone", in a non-interactive manner. This right is constantly under attack (no pun intended) by those who are in the business of collecting user data. Interactive use can result in less data privacy and more/easier attack opportunities.
This seems backwards.... they are offering more money for attacks that don't require user interaction because they are HARDER, not easier, to accomplish.
Exactly. I seem to remember an app a while back that was billed as a heartbeat reader and it would have you repeatedly press and release the fingerprint sensor. After a delay it would flash some sort of in-app purchase authorization. Pwnage that relies on some sort of user interaction is worth 50% less, and rightly so.
Oops. I apologise. Indeed I framed that statement backwards.
If non-interactive use makes attacks "harder" then terms and conditions should not seek to prohibit non-interactive use.
That is the argument in one sentence.
Another phrasing is that if interactive use makes attacks easier then users should not be discouraged or prohibited from engaging in non-interactive use.