All the (uncompressed) video protocols still work like in the good old days of CRTs: you have the notion of horizontal and vertical blanking. The image is sent top to bottom, left to right (well, progressive video is, but let's not get into interlacing).
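To make the blanking overhead concrete, here's a rough sketch of how the link's pixel clock follows from the timings. The 1080p60 numbers below are the standard CTA-861 values (280 pixels of horizontal and 45 lines of vertical blanking):

```python
# Sketch: the pixel clock a display link has to sustain covers the
# blanking intervals too, not just the visible pixels.

def pixel_clock_hz(h_active, h_blank, v_active, v_blank, refresh_hz):
    """Total pixels per frame (visible + blanking) times the refresh rate."""
    h_total = h_active + h_blank   # pixels per scanline, incl. horizontal blanking
    v_total = v_active + v_blank   # scanlines per frame, incl. vertical blanking
    return h_total * v_total * refresh_hz

# 1920x1080 @ 60 Hz with standard CTA-861 blanking
clock = pixel_clock_hz(1920, 280, 1080, 45, 60)
print(clock / 1e6, "MHz")  # -> 148.5 MHz
```

So roughly 10% of the link time is spent in blanking, which is exactly the slack that got repurposed for audio and metadata.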
Actually, in many protocols you have to keep the blanking periods because people have crammed tons of things into them over the years (audio, subtitles, timecode, remote control protocols, various metadata...).
Having random access to the pixels would be beneficial if you only wanted to update specific portions of the screen, but then you have to know which parts of the screen need updating (either by diffing each frame against the previous one, or by being tightly coupled with the drawing code and knowing which parts have been redrawn). However, with very dynamic content like movies or video games you'll often end up with the entire screen changing at once, so you have to be able to support that case anyway, and the additional complexity of optimizing for the simpler cases is probably not worth it.
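A minimal sketch of the "diff each frame against the previous one" approach: compute the bounding box of changed pixels and only that rectangle would need to be resent. Frames here are plain 2D lists of pixel values; a real implementation would work on tiles and hardware buffers, and this naive scan is itself a cost you pay on every frame:

```python
# Find the dirty rectangle between two frames by brute-force comparison.

def dirty_rect(prev, curr):
    """Return the (x0, y0, x1, y1) bounding box of changed pixels, or None."""
    xs, ys = [], []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if p != c:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # nothing changed: the update can be skipped entirely
    return (min(xs), min(ys), max(xs), max(ys))

a = [[0] * 8 for _ in range(8)]
b = [row[:] for row in a]
b[2][3] = 1   # two changed pixels...
b[5][6] = 1
print(dirty_rect(a, b))  # -> (3, 2, 6, 5)
```

Note that with full-screen motion the bounding box degenerates to the whole frame, which is the point made above: you still need the bandwidth for the worst case.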
What features would that add? The only one I can think of is saving energy on static content, but you can mostly achieve that by varying the refresh rate, and most of the energy used by the display goes to the backlight anyway.
The reason the CRT required the data "serially" was the electron beam that refreshed the phosphor.