> Maybe if they gave it 150%, they could see Norvig's reasoning. It may take more than that, though -- maybe exponentially more.
I hope you'll permit me explicitly to single out your mocking invocation of my bête noire. I think that most non-technical authors just confuse 'exponential' with 'super-linear' (if they think even that quantitatively) … but I sometimes worry that even the somewhat more technically minded think that 'exponential' just means 'has an exponent', and so think that quadratic growth is exponential, y'know, because there's an exponent of 2.
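To make the distinction concrete (a throwaway sketch, not from the thread): quadratic growth has a fixed exponent on the variable, while exponential growth has the variable in the exponent, and the gap between the two blows up very quickly.

    /* Quadratic (n^2) vs. exponential (2^n): "has an exponent" is not the
       same thing as "exponential"; the variable has to be in the exponent. */
    #include <stdio.h>

    int main(void)
    {
        for (int n = 1; n <= 26; n += 5)
            printf("n=%2d  n^2=%4ld  2^n=%9ld\n",
                   n, (long)n * n, 1L << n);
        return 0;
    }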
I have the same feeling; that's why I decided to filter out all that noise for myself and created this blog: https://rubberduck.so
I read the posts every day, filter out the merely "amusing" topics, and keep the ones I actually come back to read again later. It is published on Mondays and updated every day with the articles from the day before: https://news.ycombinator.com/front
Sure it can, but not until the population exceeds 16 billion.
Same as with Norvig: the penetration percentage will never double again, but the number of users can still keep going up. As the song says, "population keeps on breeding..."
Or if you redefine the technology. That's the way it usually happens: "Android Gingerbread has 1% market share. 2%. 4%. 8%. 16%, better introduce Android KitKat. 32%. 64%, but look KitKat is now at 4% and climbing exponentially! Gingerbread is now deprecated, KitKat is on a majority of devices, time to introduce Lollipop."
Come to think of it, this applies to a lot of Google's (and Microsoft's, and Apple's, and most tech companies') product strategy.
> Less familiar are the more pessimistic laws, such as Proebsting's law, which states that compiler research has led to a doubling in computing power every 18 years.
If that were true it would actually be quite extraordinary, but in fact it's still hard to beat C and Fortran.
So if you ran benchmarks compiled with the best C compiler from 2004 against benchmarks compiled with the current best C compiler, both on 2004-era hardware, you'd see a factor-of-2 performance gain? That's possible, I suppose, but I doubt it.
I have seen that kind of thing happen, yeah. I used to use dumb Fibonacci as an easy microbenchmark for getting a rough idea of language implementation efficiency:
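(Something like this minimal C version; the exact snippet isn't preserved here, so treat it as a reconstruction from the description:)

    /* Naive doubly-recursive Fibonacci: deliberately dumb, so the runtime is
       dominated by function calls, arithmetic, comparisons, and branches. */
    #include <stdio.h>

    static long fib(long n)
    {
        if (n < 2)
            return n;
        return fib(n - 1) + fib(n - 2);
    }

    int main(void)
    {
        /* 40 is arbitrary; pick whatever takes a second or two on your machine. */
        printf("%ld\n", fib(40));
        return 0;
    }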
This gives a crude idea of the performance of some basic functionality: arithmetic, (recursive) function calls, conditionals, comparison. But on recent versions of GCC it totally stopped working because GCC unrolls the recursive loop several levels deep, doing constant propagation through the near-leaves, yielding more than an order of magnitude speedup. It still prints the same number, but it's no longer a useful microbenchmark; its speed is just determined by how deeply the unrolling happens.
It's unusual to see such big improvements on real programs, and more recent research has shown that Proebsting's flippant "law" was too optimistic.
Current compiler optimisations are written with current hardware in mind, and I doubt that older optimisations would become pessimisations on newer hardware, so I'd instead compare the best C compiler from 2004 against the current best C compiler on today's hardware.
I attended a talk at the Royal Geographical Society where someone explained that, if current trends continued for fifty years, the super rich would own X00% of the planet. And I never understood it. It's like, yeah, OK, if your model of wealth is that there are literally 100 gold bars somewhere, then yes, that would be a contradiction. But firstly, lots of things are S-curves, not exponentials, and secondly, we can just change what we measure. It looks to me like this comment is talking about something like this article:
OK. Well, the US is a few hundred million people in a world of 6-7 billion. So yes, doubling would have been impossible. But it happened. According to some source that I just googled[2], there are 6 billion smartphones right now. So this schmuck thought that computers were hitting the wall coming up to 150 million. That's an order of magnitude of wrongness, and I bet you the average person in the US today has multiple computers more powerful than a 1999 computer: one in their phone, one in their iPad, one in their laptop, one in their fridge, one in their coffee machine, one in the doorbell, one in their robot hoover, one in their thermostat. I mean... it's a mad lack of imagination.
It seems to be intended just as a common-sense reminder that fast growth has to eventually slow/stop due to market saturation.
It's not strictly true, though, since the market itself can grow, so your sales could still double or more from a level that represented 50% of the market at some point in the past.
That Proebsting's law link is of course dead, and redirects to the main page of Microsoft Research. In my experience, it's the natural state of links to Microsoft Research pages. What's up with that?
Can't answer your question, but here's the law (I was curious myself):
> I claim the following simple experiment supports this depressing claim. Run your favorite set of benchmarks with your favorite state-of-the-art optimizing compiler. Run the benchmarks both with and without optimizations enabled. The ratio of those numbers represents the entirety of the contribution of compiler optimizations to speeding up those benchmarks. Let's assume that this ratio is about 4X for typical real-world applications, and let's further assume that compiler optimization work has been going on for about 36 years. These assumptions lead to the conclusion that compiler optimization advances double computing power every 18 years. QED.
> This means that while hardware computing horsepower increases at roughly 60%/year, compiler optimizations contribute only 4%. Basically, compiler optimization work makes only marginal contributions.
> Perhaps this means Programming Language Research should be concentrating on something other than optimizations. Perhaps programmer productivity is a more fruitful arena.
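For anyone who wants to check the quoted arithmetic, here's a quick back-of-the-envelope sketch (the 4x, 36-year, and 60%/year figures are taken straight from the quote; everything else is just compounding):

    /* Proebsting's arithmetic: a 4x total speedup spread over 36 years of
       compiler work is about 4% per year, i.e. a doubling roughly every
       18 years, versus roughly 60% per year for hardware. Link with -lm. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double compiler_rate     = pow(4.0, 1.0 / 36.0) - 1.0;            /* ~0.039 */
        double compiler_doubling = log(2.0) / log(1.0 + compiler_rate);   /* ~18 years */
        double hardware_rate     = 0.60;                                  /* from the quote */
        double hardware_doubling = log(2.0) / log(1.0 + hardware_rate);   /* ~1.5 years */

        printf("compilers: %.1f%%/year, doubling every %.0f years\n",
               100.0 * compiler_rate, compiler_doubling);
        printf("hardware:  %.0f%%/year, doubling every %.1f years\n",
               100.0 * hardware_rate, hardware_doubling);
        return 0;
    }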
Compiler optimizations can actually improve developer productivity, because they let developers write clean but naively inefficient code that the optimizer rewrites into near-optimal form. For example, in Rust, iterators are a very convenient and clear interface that is generally zero-cost (sometimes even more efficient) compared to a manual loop implementation; without optimization, though, they would be many times slower.
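For a rough C analogue of the same idea (a hypothetical sketch, not the Rust iterator machinery being described): the readable version below leans on a tiny accessor that an optimizing build inlines away, so it typically compiles to about the same code as the hand-written loop, while an unoptimized build pays a function call per element.

    /* "Clean" version: a tiny accessor keeps the loop readable; at -O2 it is
       inlined away, at -O0 it costs a call per element. */
    #include <stddef.h>

    static double get(const double *a, size_t i) { return a[i]; }

    double sum_clean(const double *a, size_t n)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += get(a, i);
        return s;
    }

    /* Hand-written version: what you'd write if you didn't trust the optimizer. */
    double sum_manual(const double *a, size_t n)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }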
A lot of the time, code compiled with no optimisations (-O0) is unusable. In video, for example, some software compiled without optimisations won't push frames on time and will just keep dropping frames. There was a post a couple of days ago about this being a problem in the games industry: a game compiled without optimisations is unplayable, while higher optimisation levels are hard to inspect in a debugger, thanks to the myth of "zero-cost abstractions" in C++. To turn it on its head a bit: when a compiler isn't fast enough (read: not enough work was put into the performance of the compiler itself, mostly at the design level rather than the microoptimisation level), the feedback loop gets so long that developers stop testing hypotheses and instead try to do as much as possible in their heads, without verifying, just to avoid the cost of recompiling the project. Another instance: when a photo-editing application can't quickly give me a preview of the photo I'm editing, I'm going to test fewer possible edits and probably get a worse photo as a result. And with websites, if an action doesn't happen within a couple of seconds of me clicking, I often assume the website doesn't work and just close it, even though I know there are a lot of crappy websites out there that are just this slow. Doesn't matter. The waiting usually isn't worth my time and frustration.
Microsoft seems to do a massive restructuring of their website every few years and they break all links in the process. Raymond’s blog has suffered this a few times.
(2002), or maybe (2001) or (2000) or (1999): The Wayback Machine's earliest archive of this page is from June 2002: https://web.archive.org/web/20020603071812/https://norvig.co... and the page itself mentions July 1999, so this page is from some time in 1999–2002.
Nonsense. This observation is unworthy of a genius like Norvig, and anyway it's not even generally true: it's all a matter of perspective, and the associated revenue model (purchase vs. subscription model). Whether the glass is half-full or empty depends entirely on perspective: if I'm looking at this as a seller of a device (e.g. smartphone/PC/laptop/tablet), then maybe I only think of once-off purchases. But if I'm Microsoft (software suite/subscription) or Adobe or Netflix or Apple iTunes, then high penetration of my target market is great: it gives me recurring sales/subscriptions (/users on a social network, to serve ads to). If I'm an independent app developer, I love that Android has high penetration, or else that iOS has a market segment of users with a high propensity to spend on both apps and IAP; but whatever I do, in the 2020s I don't target Microsoft Phone/ Nokia/ Blackberry/ PalmOS (RIP). Maybe HarmonyOS. (Also, high penetration and market share have a tertiary effect of squashing potential competition by siphoning off revenues that might go to competitors. Anyone remember last.fm [0]? Remember how Microsoft destroyed RealNetworks's business model [1] by giving away streaming-media server software for free? "According to some accounts, in 2000 more than 85% of streaming content on the Internet was in the Real format.")
We will see the rebuttal of Norvig's Law when Netflix launches its ad-supported tiers. Or we already saw it during 2020-2021/Covid, when Amazon aggressively pushed its discounted Prime to fixed-/low-income EBT/Medicaid/other government-assistance recipients (at least in the US) [2,3].
With all due respect to Norvig (and if you've read his AI book or ever seen him speak in person, he's undilutedly brilliant, and also humble), he should get out there and try to sell a subscription-based device/service. Lemonade-Stand-for-web3.0, if you will... "customer acquisition" is not a dirty phrase.
> To be clear, it all depends on what you count. If you're counting units sold, you can double your count by selling everyone 1 unit, then 2, then 4, etc. (In Finland I understand that cell phone usage is above 1 per capita, but still growing.) If you're counting the total number of households that own the product, you can double your count by doubling the population, or by convincing everyone to divorce and become two households. But if you're counting percentage of people (or households), there's just no more doubling after you pass 50%.
I’m running a subscription-based service, but I’ve stalled at 57% market penetration. Can you give me some advice on how I can double my market penetration from this point?
Remember, what I’m looking for is 114% market penetration. Any help you can provide will be gratefully appreciated.
You missed my point entirely with your sarcasm. Partner companies selling apps on your service don't care that you only have 43% of the market left to capture; that's entirely your business problem, not theirs. However, they very much do like that you already have 57% market share; from their perspective, that's good, not bad. That's precisely why I wrote _"it's all a matter of perspective, and the associated revenue model (purchase vs. subscription model). Whether the glass is half-full or empty depends entirely on perspective"_. Understand now?
When Palm and then Blackberry died as platforms, vendors simply moved to a new platform and ported/rewrote.