A while ago there was an article about creating a "wasteland" of unprofitability around your core business so as to become the monopolist and extract the highest margins out of the industry.
The position of Intel/Nvidia is then quite simple: they will open source any model, dataset, toolkit, library, etc. that makes use of their hardware. Training new AI will become simpler and simpler, and they will extract high margins from selling the hardware.
What about Google instead? They have the data and the engineering knowledge to make complex AI work, but it seems quite unlikely that they will be able to drive the price of AI hardware down. Moreover, they are charging quite a lot for the use of their custom TPUs.
From this analysis it seems like Google is bound to fail in the long term in the AI race. Am I wrong? Why?
Google can compete for the most influential/promising ML researchers with crazy salaries, and can stay at the forefront both in building the best models and in immediately putting them into their cloud offering for others to use. And their DNA revolves around ML, whereas Intel's and NVidia's doesn't. Looking at the past 5 years at Intel, they still don't get it; NVidia seems much more qualified and sensed the opportunity right away in a developer-friendly way, but just doesn't have the muscle Google does, and their ML business model is right now under threat of quick commodification - I'd be more worried about them than about Google, to be honest. Google's only threat is that they will self-implode due to their inner culture and arrogance, but that's a long shot and they manage it better than Sun did.
The question is really whether those high salaries are achieving long-term payoffs. For certain things like image and audio processing, they've made some remarkable strides. However, many of those strides aren't remarkable to people outside the HN/AI community. For example, typing "dog" in Google Photos and having it find pictures of your dog is great. However, when it misses a few and people notice, they wonder why it's so bad - almost as if Google should correctly identify 100% of them. Someone needs to sink the money into doing the grunt work of getting close to 100%, but the last few percent of improvement cost a lot of money. And productizing that for a user base of hundreds of millions of people is not cheap on the infrastructure side, either.
I think one of the hiring strategies is to keep those very talented people from being outside doing damage to them - better to have them inside, even if they're inefficient/underutilized. They just need to be better than their competition, and frankly, looking this week at what Amazon is offering, they can continue in that mode for quite some time.
The general public is more likely to notice improvements in apps with immediate feedback, like what Snapchat/Messenger are doing.
There are already models surpassing human performance on many tasks; maybe the inference costs just aren't economical yet?
I remember people ascribing this hiring strategy to Microsoft (back in the day): hiring up all the researchers to ensure there never was another Microsoft.
NVIDIA's developer-friendliness is mostly in the rearview mirror at this point after they went on a Jihad to destroy vendors selling GeForce-based Compute workstations and jacked up their prices to discourage anyone else from trying to do so. What was once their "Democratization of AI" agenda has become their AI Appliance business. That said, they still have a lot of momentum for now.
I think that's how they are preparing for the imminent commodification of their Deep Learning space - by milking the cow as much as they can before they are forced to compete on price. Once TF/Torch/MXNet can reliably run on AMD/Intel hardware without changing a line of code and with comparable performance, they will feel both consumer and data center pressure and their high margin business might evaporate quickly. Still, with CUDA/cuDNN they were the pioneers and the only relevant party so they deserve some ice cream for that.
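To make the "without changing a line of code" point concrete, here's a minimal sketch of my own (not from the comment above; the model and layer sizes are arbitrary placeholders): framework-level code is already written against an abstract device rather than a vendor, and AMD's ROCm builds of PyTorch expose themselves through the same "cuda" device type, so the same script runs unmodified on either vendor's GPUs or on plain CPU.

    # Hedged sketch, not from the thread: model and sizes are arbitrary.
    # PyTorch code written against torch.device is backend-agnostic; ROCm
    # builds for AMD GPUs show up through the same "cuda" device type,
    # so this exact script runs on NVIDIA, AMD, or CPU without edits.
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
    x = torch.randn(32, 128, device=device)
    print(model(x).shape, "on", device)  # same call whichever vendor backs `device`

Once that kind of portability holds for real workloads with comparable performance, the vendor lock-in that props up the margins largely disappears.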
NVIDIA was amazing. They drove a bulldozer through the walled garden of HPC Welfare Queens (ask any grad student from the 00s trying to get supercomputer access what GPUs meant to them) and set the stage for the current AI boom.
IMO they deserve an entire fulfillment center of ice cream for that alone.
>The position of Intel/Nvidia is then quite simple: they will open source any model, dataset, toolkit, library, etc. that makes use of their hardware. Training new AI will become simpler and simpler, and they will extract high margins from selling the hardware.
Yet strangely, Nvidia isn't open sourcing stuff. Nvidia's cuDNN, for example, isn't open source. There are restrictions around distributing even the binaries, which seems strange to me. You'd think they'd want to make it really easy to use their products and put every competitive advantage up front.
You're presuming that Google's biggest win is by selling hardware (access). Ignoring cloud, think about Google's broader mission -- making information useful and accessible. Google today sees staying at the forefront of machine learning as being a necessary component of doing that well. And from that perspective, it doesn't matter whose hardware it runs on.
The problem for Google is that they have a lot of data but very few use cases that are big enough for all that data.
Driverless cars are one, but driverless car services will be slow to take hold and I'm not sure it's winner-take-all. To me there's not much difference between "is it safe" and "is it the safest". And with car buying cycles measured in years and regulations slow to catch up, they might not leap far enough ahead. And it's not like Netflix and Amazon's services, where it's cheap enough that you just use both.
There may not be a lot of difference between "very safe" and "most safe", but if they have enough of an advantage to get to "safe enough" before anyone else, that's a big deal.
I don't see Google competing on style or fashion, etc. And if you have some subscription car service, you need to compete on price. It's when a service is free (pay with data) or there's a huge training component (think Office) that you don't get the market forces which drive down prices and profitability.
> Intel is taking a risk because what they release for free can also run on ARM and AMD
Not all of it. The Intel C compiler, for example, only produces fully optimized code on Intel platforms (and there was a whole lawsuit about this a while back that led to them having to put a disclaimer that it wasn't guaranteed to have optimal performance on all platforms, or something like that).
The point is that hardware companies are the ones that will extract most of the value from AI.
Google, meanwhile, will be a "mere" user or a swappable layer (in the data center space) - a little bit like the producers of mice or monitors during the Wintel era.
I may be completely wrong, but this is the line of reasoning.
The value is in the businesses consuming AI. If I can lay off 10 Level 1/2 helpdesk techs, I can increase my profits by $500k. If I can nuke accounting or HR tasks, I can eliminate even bigger costs.
If by fail we mean "doesn't make any money", I think one has to look at Sun and Java as an example of a great technology whose creator couldn't make a substantial amount of money from it.
I'm mostly curious about how their NER and parser compare against what I've implemented for https://spaCy.io . I've tried the architectures they're using, and I've found they need very wide (and therefore slow) hidden layers to get competitive accuracy.
I'm sure they have some evaluations, right? I mean you can't really develop these things without running experiments...
Well, that's what makes me curious! They have a class that runs the BIST parser (by Kiperwasser and Goldberg) using spaCy to pre-process for tokenization, sentence boundary detection, and POS tags. Now, spaCy by default uses the parser to add sentence boundaries... so you've already got a parse right there.
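To illustrate that point, here's a minimal sketch of my own (not their class; the "en_core_web_sm" model name and the example sentence are just placeholders for the standard small English pipeline): asking spaCy for sentence boundaries already runs its dependency parser, so the "pre-processing" step hands you a full parse as a side effect.

    # Hedged example, not from the thread: standard spaCy usage showing that
    # sentence boundaries come from the dependency parser by default.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("The parser sets sentence boundaries. It also labels every dependency.")

    for sent in doc.sents:  # boundaries were decided by the parser
        for token in sent:
            # head/dep_ come from the dependency parse spaCy has already computed
            print(token.text, token.pos_, token.dep_, "<-", token.head.text)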
So, is their BIST parser more accurate than spaCy's parser? I'm getting similar (but slightly higher) WSJ accuracies to what Eli and Yoav reported in their paper, so I'd be surprised if they're doing so much better?