Having their names on something so public is definitely an incentive for prestige and academic promotion.
Shilling for OpenAI & co is also not a bad way to get funding support.
I’m not accusing any of the non-affiliated academics listed of doing this, but let’s not pretend there aren’t potentially perverse incentives influencing academics’ decisions, both with respect to this specific letter and in general.
To help assuage (healthy) skepticism, it would be nice to see disclosure statements from these academics; at first glance, many appear to have conflicts.
It’s unequivocal that academics may have conflicts (in general); that’s why disclosures are required for publications.
I’m not uncovering anything; several of the academic signatories list affiliations with OpenAI, Google, Anthropic, Stability, MILA and Vector, which constitutes a financial conflict.
Note that having a conflict does not make someone a shill, but in academia it should be disclosed. To allay some concerns, a standard disclosure form would be helpful (e.g., do you receive funding support from, or have a financial interest in, a corporation pursuing AI commercialization?).
I'm not really interested in doing a research project on the signatories to investigate your claim, and talking about things like this without specifics seems of dubious usefulness, so I don't think there's anything more to discuss.
Several of the names at the top list a corporate affiliation.
If you want specific names with obvious conflicts (chosen at a glance): Geoffrey Hinton, Ilya Sutskever, Ian Goodfellow, Shane Legg, Samuel Bowman and Roger Grosse are representative examples, based on their self-disclosed affiliations (no research required).