I have to assume, in Google’s case anyway, that they are indexing according to ‘what the user sees’. Not to do so would mean lower-quality search results. Users don’t care how the page ‘happens’.
I make this assumption because most of us don’t actually know how Google works, acting as a user’s proxy is in Google’s interest, and it is certainly within their technological capability. (Let me emphasize assume, again.)
This would require that the crawler actually renders sites, indexes the resulting DOM, and clicks what can be clicked. A headless Chrome, perhaps.
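To make that concrete, here is a minimal sketch of what such a rendering crawler could look like, driving headless Chrome through Puppeteer. This is purely an illustration of the idea (render, index the resulting DOM, discover links after JavaScript runs), not a description of how Googlebot actually works; the URL and the choice of what to extract are assumptions.

```typescript
// Sketch of a rendering crawler: load a page in headless Chrome, wait for
// JavaScript to run, then work with the rendered DOM rather than the raw
// HTML payload. Illustrative only -- not how Googlebot actually operates.
import puppeteer from 'puppeteer';

async function crawl(url: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Wait for network activity to settle so client-side frameworks
  // have a chance to populate the DOM before we look at it.
  await page.goto(url, { waitUntil: 'networkidle0' });

  // 'What the user sees': the fully rendered markup and visible text.
  const renderedHtml = await page.content();
  const visibleText = await page.evaluate(() => document.body.innerText);

  // A real crawler would also follow links that only exist after rendering.
  const links = await page.evaluate(() =>
    Array.from(
      document.querySelectorAll('a[href]'),
      a => (a as HTMLAnchorElement).href
    )
  );

  console.log(renderedHtml.length, visibleText.length, links.length);
  await browser.close();
}

// Hypothetical usage; any public URL would do.
crawl('https://example.com').catch(console.error);
```

Clicking ‘what can be clicked’ would be an extension of the same idea (e.g. invoking click handlers and re-reading the DOM), which is considerably harder to do at scale.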
"Use a text browser such as Lynx to examine your site, because most search engine spiders see your site much as Lynx would. If fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site."
Not sure what you are trying to say by merely copying and pasting text about search engine spiders that may have trouble crawling your site. Maybe you could explain?
With regard to Google, however, they are explicit in their support for JavaScript-rendered sites.