Funny - I woke up this morning and found this Majora's Mask Zelda tribute https://youtu.be/vbMQfaG6lo8, then watched the behind-the-scenes and was surprised it was CGI composited over on-location forest footage that ends up looking rendered. Sometimes reality looks so good it looks fake, so this was an interesting spin.
I was part of Cornell's Program of Computer Graphics lab for a few years. The lab still has the original physical Cornell Box model sitting around. That thing was wild to look at; even in real life, it doesn't look real because the walls are so close to perfectly diffuse, and it looks exactly like all of the rendered versions.
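Those near-perfectly diffuse walls are the textbook case of Lambertian reflectance, which is one of the easiest things for a renderer to reproduce exactly - outgoing radiance is just albedo/π times the cosine of the light angle, independent of the view direction. A minimal sketch (the function name and plain-tuple vector convention are my own, not from any renderer):

```python
import math

def lambert_diffuse(albedo, normal, light_dir, light_intensity):
    """Perfectly diffuse (Lambertian) shading for one light.

    Outgoing radiance = albedo/pi * intensity * max(0, n.l).
    Note it doesn't depend on the view direction at all, which is
    part of why the real box and the renders look interchangeable.
    `normal` and `light_dir` are assumed unit-length (x, y, z) tuples.
    """
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(a / math.pi * light_intensity * ndotl for a in albedo)
```

With a head-on light and an albedo of π per channel this returns (1.0, 1.0, 1.0), i.e. the cosine term and the 1/π normalization cancel.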
The grain doesn't quite line up across the surfaces and the edges are unnatural. Quite a good simulation of someone 2D texture-mapping a block of wood, rather than the more expensive volumetric mapping.
And lens flares and chromatic aberration - lens manufacturers go to great lengths to remove these as much as they can, but games and VFX re-create them.
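For what it's worth, the game-side fake for chromatic aberration is often just a post-process that samples the colour channels at slightly different offsets. A toy CPU version in Python (the function name is made up, and a real shader would scale the offset with distance from the lens centre rather than shifting uniformly):

```python
def chromatic_aberration(image, shift=1):
    """Fake lateral chromatic aberration by sampling the red and blue
    channels at opposite horizontal offsets, clamped at the image edge.

    `image` is a list of rows; each pixel is an (r, g, b) tuple.
    """
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            r = image[y][min(max(x - shift, 0), w - 1)][0]  # red sampled from the left
            g = image[y][x][1]                              # green stays put
            b = image[y][min(max(x + shift, 0), w - 1)][2]  # blue sampled from the right
            row.append((r, g, b))
        out.append(row)
    return out
```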
Too-perfect stability makes your brain realize it is fake. It's the same reason music producers make their digital instruments play very slightly out of time.
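A DAW's "humanize" feature is essentially this - nudging each note onset by a few milliseconds of random jitter so the sequence stops being machine-perfect. A rough sketch (the function and parameter names are hypothetical, not any particular DAW's API):

```python
import random

def humanize(note_times_ms, jitter_ms=8.0, seed=None):
    """Offset each note onset by a uniform random amount in
    [-jitter_ms, +jitter_ms]. A fixed seed makes the 'performance'
    reproducible.
    """
    rng = random.Random(seed)
    return [t + rng.uniform(-jitter_ms, jitter_ms) for t in note_times_ms]
```

A jitter of a few milliseconds is below what most listeners consciously notice, but it's enough to break the too-perfect grid.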
From a software and algorithm perspective, yes - although now that we have measured materials and lights (i.e. in physical units that match what they are in the real world), it's pretty trivial to set up CG scenes that match simple materials like the ones in this example.
What's still a not-quite-solved problem (at least efficiency-wise) is displacement / LEAN / bump micro-surface detail (linked to the microfacet response), and stuff like scattering and absorption (for skin and hair in particular) - we can approximate these very well, but still not quite 100%.
I've noticed hardware design getting flatter over the last couple of years (ThinkPads, MacBooks, even IKEA furniture). I guess it has something to do with the change in software UI language.
I think it may also be that initially, skeuomorphism was used to educate the general public in the intuitions of the UI, but once that intuition became widespread, the UI could shrink into a more abstract and minimal form of the previously skeuomorphic one while retaining its properties, placements, etc. That abstraction then allows interaction paradigms that cut across what is physically possible, or lets you add more to the now-abstract UI with less noise. Technology is changing of course, but humans are also changing collectively, learning to communicate with it. There's historical precedent: at some point typing wasn't a thing, then some jobs involved typing, and now typing is a widespread skill that enables a new level of interaction once it can be assumed on the part of the user.