Also one of the people who built the modern AI world: from the data centers to the bulk-processing software to the team structures to TensorFlow, to some of the most cost-effective chips in the field, to many of the early blockbuster results.
They certainly fumbled by not investing in massive LLM scaling early enough, but Jeff Dean has been preparing for this day since his graduate work on neural networks in the 90s.
Nonetheless, I can empathize with GP: I wish the talk had focused more on the future and a lot less on Google's history and marketing. Yes, we get it, Google used to lead in this space and still does in some narrow niches, but recounting the glory days is not how you win mindshare. Show me the demos; make me excited about your vision.
I would've been impressed by this 2 years ago. I think it's gotten to the point where real, valuable AI is in the hands of the everyday consumer, so we start judging the models for ourselves. Having seen Google continually get crushed over the past year, a bunch of benchmarks just fail to impress. In particular in this case, they're comparing their latest model to GPT-4, which hasn't changed that much in almost a year.
Not only that, in some cases they're comparing apples to oranges as well, undermining their credibility further — e.g., chain-of-thought vs. non-CoT results. I don't even know why they're doing that; it seems like their results would be impressive enough without it.
Have we reached a point where lecturers need to post “hot takes” on Twitter as a prerequisite to keeping people’s attention?