It's much less about citations; the basic empirical observation that these models are SOTA for many NLP+vision tasks, crucially, supports asking for a _lot_ of funding: $$$$ for hardware, plus $$$ skimmed off to keep a big lab of people running for a decent amount of time.
Less cynically... they have to do it to go after the low-hanging fruit around the current local maximum. I'd expect similar writing in grant proposals at other big AI labs too.
Yes, exactly, the Foundation Models paper is basically a grant proposal for a research program. Still, many object to coining a new (hyperbolic) term for an established field for their own gain.
Personally, I will stick to the established 'large-scale pre-trained models' instead of 'foundation models' in my publications.