
Between the video decoder and the screen is the display server (e.g. Xorg or GNOME Shell), which is untrusted.

This wasn't my understanding. If the decoding happens in hardware, I wouldn't have expected the decoded video to be passed back to the display server and then sent to the GPU again on its way out to the screen.

My understanding was that some kind of compositing goes on in hardware, where the display server tells the GPU to display the decoded output within a given region of the screen, but the server itself never sees the actual pixels.
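
To illustrate, here's a minimal sketch of that model using the Linux KMS overlay-plane call drmModeSetPlane(): the caller hands the hardware only an opaque framebuffer ID and a destination rectangle, never the decoded pixels. The plane, CRTC, and framebuffer IDs below are hypothetical placeholders; a real compositor discovers them at runtime via drmModeGetPlaneResources() and drmModeGetResources().

    /* Sketch: ask the display hardware to scan out a buffer on an
     * overlay plane at given screen coordinates. The decoded frames
     * could live in protected memory the CPU can't map; the caller
     * never touches their contents. Build with -ldrm. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) {
            perror("open /dev/dri/card0");
            return 1;
        }

        uint32_t plane_id = 31; /* hypothetical overlay plane ID */
        uint32_t crtc_id  = 41; /* hypothetical active CRTC ID */
        uint32_t fb_id    = 51; /* hypothetical fb wrapping the decoded frame */

        /* Place the video at (100, 100), sized 1280x720, on screen.
         * Source coordinates are 16.16 fixed point per the KMS API. */
        int ret = drmModeSetPlane(fd, plane_id, crtc_id, fb_id, 0 /* flags */,
                                  100, 100, 1280, 720,
                                  0, 0, 1280 << 16, 720 << 16);
        if (ret)
            fprintf(stderr, "drmModeSetPlane failed: %d\n", ret);

        drmClose(fd);
        return 0;
    }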

Here is the libva protected-content API documentation, which seems to support this: http://intel.github.io/libva/group__api__prot.html
