From the PDF - "One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are "search" and "learning".
The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done."
Working on: real-time conversations in rich video streaming. I've built a rich video composition, mixing, and streaming studio (http://www.thecheerlabs.com), and am now working on bringing in real-time conversations that can be mixed live for streaming/recording.
VS Code, which is based on Electron, is really fast even with a large codebase and many open tabs. Its Monaco engine (https://microsoft.github.io/monaco-editor/) uses a custom virtual code processor optimized for surgically updating the underlying DOM. It also uses WebGL + canvas rendering to show the minimap of the file.
A similar approach (a custom virtual processor) is used by Google Docs/Sheets.
Canvas rendering may be the last resort when nothing worked.
As far as I know, VSCode/Monaco does use viewport virtualization
> Canvas rendering may be the last resort when nothing worked
For the minimap, yes. But AFAIK it's not really a go-to solution for text rendering: text wouldn't look crisp enough, and text layout is a science in itself.
Basically you can get away with a debounced, cached canvas version of the fully rendered DOM for the minimap, but you cannot use a huge DOM representing the full source for the actual editor.
Docs afaik implements an expensive custom text rendering engine, similar to Flutter.
Monaco doesn't.
Take it with a huge grain of salt, I haven't researched this really and generally am not very familiar with the Monaco or VsCode source. I'm on mobile, so not inspecting a Monaco instance either.
The Monaco repository seems to contain some files only in minified form and refers back to the VS Code repo.
Skimming through the interfaces there, it definitely seems to have hints for viewport virtualization.
Apart from that, WebWorkers seem to be used heavily to move the language server logic out of the main thread (completely different topic).
What I wanted to say is that "surgical DOM updates" might be good, but DOM _size_ is the main issue for rendering.
Sure, heavy-handed DOM updates have an effect too (it's the same as rendering a new large DOM tree).
But keeping DOM elements stable instead of replacing large subtrees has no real alternative anyway, no matter how optimized browser rendering and parsing ever become, because of focus states, for example.
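The "DOM size is the main issue" point is exactly what viewport virtualization addresses: only the lines intersecting the scroll viewport (plus a small overscan buffer) ever get DOM nodes. A hypothetical sketch of the math (names are illustrative, not Monaco's internals):

```typescript
// Compute which line indices should be materialized as DOM rows for
// the current scroll position. Everything outside this range is
// represented by spacer height, not DOM nodes.
function visibleLineRange(
  scrollTop: number,
  viewportHeight: number,
  lineHeight: number,
  totalLines: number,
  overscan = 5 // extra rows above/below to smooth fast scrolling
): { first: number; last: number } {
  const first = Math.max(0, Math.floor(scrollTop / lineHeight) - overscan);
  const last = Math.min(
    totalLines - 1,
    Math.ceil((scrollTop + viewportHeight) / lineHeight) + overscan
  );
  return { first, last };
}

// A 100k-line file scrolled to line 2000 in a 900px viewport with
// 18px lines only ever materializes a few dozen DOM rows.
const range = visibleLineRange(36000, 900, 18, 100000);
console.log(range.first, range.last); // → 1995 2055
```

This is why the editor stays fast regardless of file size: rendering cost tracks viewport height, not document length.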
Also worth noting that querying layout via JS is similarly expensive (not related to Svelte either).
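The layout-query cost usually bites as "layout thrashing": interleaving reads (offsetHeight and friends) with writes forces the browser to recalculate layout on every read. The standard mitigation is batching all reads before all writes per frame. An illustrative scheduler sketch (real code would flush via requestAnimationFrame; the queues here are the whole idea):

```typescript
type Task = () => void;
const reads: Task[] = [];
const writes: Task[] = [];

// Queue a layout read (e.g. measuring offsetHeight).
function measure(fn: Task): void { reads.push(fn); }
// Queue a DOM mutation (e.g. setting style.height).
function mutate(fn: Task): void { writes.push(fn); }

function flushFrame(): void {
  // All reads run first, then all writes: at most one layout
  // recalculation per frame instead of one per read/write pair.
  while (reads.length) reads.shift()!();
  while (writes.length) writes.shift()!();
}

// Usage: call order doesn't matter; flushFrame() reorders the work.
const log: string[] = [];
mutate(() => log.push("write height"));
measure(() => log.push("read height"));
flushFrame();
console.log(log); // → ["read height", "write height"]
```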
Back to your comment:
React might be less performant than Svelte, but a React "render" is not as expensive as a browser rendering the changed DOM.
And Svelte's main differentiation is that it doesn't need a runtime in the browser and instead compiles directly to DOM-API code.
The difference is not in the number of updates (React, Vue, etc. are "surgical" there too); it's in how the required DOM API calls are computed.
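Either way, the framework's output is a minimal set of DOM API calls. A toy sketch of computing such a patch between two arrays of rendered lines (not React's or Monaco's actual algorithm, which is keyed and far subtler; this only illustrates that one edit should yield one DOM write, not a re-render):

```typescript
type Patch =
  | { op: "setText"; index: number; text: string }
  | { op: "insert"; index: number; text: string }
  | { op: "remove"; index: number };

// Diff two line arrays into the smallest obvious set of operations:
// in-place text updates for shared indices, inserts/removes for the tail.
function diffLines(oldLines: string[], newLines: string[]): Patch[] {
  const patches: Patch[] = [];
  const common = Math.min(oldLines.length, newLines.length);
  for (let i = 0; i < common; i++) {
    if (oldLines[i] !== newLines[i]) {
      patches.push({ op: "setText", index: i, text: newLines[i] });
    }
  }
  for (let i = common; i < newLines.length; i++) {
    patches.push({ op: "insert", index: i, text: newLines[i] });
  }
  for (let i = oldLines.length - 1; i >= common; i--) {
    patches.push({ op: "remove", index: i });
  }
  return patches;
}

// Editing one line in a 3-line buffer yields a single DOM write.
console.log(diffLines(["a", "b", "c"], ["a", "B", "c"]));
// → [{ op: "setText", index: 1, text: "B" }]
```

React computes something like this by diffing virtual trees at runtime; Svelte's compiler emits the equivalent targeted calls directly, so no diff runs in the browser.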