I know [RethinkDB][1] used to do this with their SQL-like ReQL language, but I looked around a bit and can't find much else about it - and I would have thought it would be more common.
I'm more interested in queries with joins, and in doing this efficiently, rather than just tracking updates to modified tables and re-running the entire query.
If we think about modern frontends using SQL-based backends, essentially every time we render, it's ultimately the result of a tree of SQL queries (queries depending on the results of other queries) running in the backend. Our frontend app state is just a tree of materialized views of our database which depend on each other. We've got a bunch of state management libraries that deal with trees, but they don't fit so well with relational/graph-like data.
I came across a Postgres proposal for [Incremental View Maintenance][2], which computes a diff against an existing query result in order to update a materialized view. Oracle also has [`FAST REFRESH`][3] for materialized views.
I guess it's relatively easy to do until you start needing joins or traversing graphs/hierarchies - which is maybe why it's avoided.
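To illustrate the easy case: a single-table aggregate view can be maintained purely from row deltas, with no re-run of the query and no lookups into other tables. A minimal sketch (the table shape and `status` column are invented for illustration):

```python
# Hypothetical sketch: keeping a view like
#   SELECT status, COUNT(*) FROM tickets GROUP BY status
# up to date from insert/delete events alone. No other data is needed,
# which is what makes the join-free case "relatively easy".
from collections import Counter

class CountView:
    def __init__(self):
        self.counts = Counter()

    def on_insert(self, row):
        self.counts[row["status"]] += 1

    def on_delete(self, row):
        self.counts[row["status"]] -= 1
        if self.counts[row["status"]] == 0:
            del self.counts[row["status"]]

view = CountView()
view.on_insert({"id": 1, "status": "open"})
view.on_insert({"id": 2, "status": "open"})
view.on_insert({"id": 3, "status": "done"})
view.on_delete({"id": 1, "status": "open"})
print(dict(view.counts))   # {'open': 1, 'done': 1}
```

The moment the view involves a join, a delta to one table can no longer be folded in without consulting the current state of the other tables - which is where it gets hard.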
EDIT: [Materialize][4] looks interesting in this space: "Execute streaming SQL Joins" - but it's more focused on event streams than on general-purpose DML/OLTP.
[1]: https://github.com/rethinkdb/rethinkdb_rebirth
[2]: https://wiki.postgresql.org/wiki/Incremental_View_Maintenance
[3]: https://docs.oracle.com/database/121/DWHSG/refresh.htm#DWHSG8361
[4]: https://materialize.com/
The fun part of this problem is that it's really inverted from most traditional database literature out there. Usually the problem is: given a query, find the matching rows. To make updates efficient you need to do this the other way around: given a row (that has changed), find all the matching queries.
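One way to picture the inversion is to index the subscribed queries by their predicates, so that a changed row becomes an index lookup over queries rather than a scan over every subscription. This is a made-up sketch (the `Query` class, equality-only predicates, and field names are all assumptions, not how Firestore actually does it):

```python
# Hypothetical sketch of the inverted matching problem: route a changed
# row to the subscribed queries it affects, via an index keyed on each
# query's equality predicate.
from collections import defaultdict

class Query:
    def __init__(self, qid, field, value):
        self.qid = qid          # subscription id
        self.field = field      # equality predicate: row[field] == value
        self.value = value

class QueryIndex:
    """Maps (field, value) -> set of query ids, so finding the queries
    matched by a changed row is a lookup, not a scan of all queries."""
    def __init__(self):
        self.by_predicate = defaultdict(set)

    def subscribe(self, q):
        self.by_predicate[(q.field, q.value)].add(q.qid)

    def matching_queries(self, row):
        hits = set()
        for field, value in row.items():
            hits |= self.by_predicate.get((field, value), set())
        return hits

idx = QueryIndex()
idx.subscribe(Query("q1", "status", "open"))
idx.subscribe(Query("q2", "owner", "alice"))

# A changed row is routed only to the queries whose predicates it meets.
print(idx.matching_queries({"status": "open", "owner": "bob"}))   # {'q1'}
```

Real systems also have to handle range predicates, multi-clause queries, and rows *leaving* a result set, but the inversion of the index is the core idea.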
With joins (or any data-dependent query, where you can't tell whether a row is in the result set without looking at other data) you need to keep the query results materialized; otherwise you don't have enough information without going back to disk or keeping everything in memory, which isn't feasible in most cases.
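The standard way to see why joins force materialization is the delta rule for a join view: for V = A ⋈ B, the change is ΔV = (ΔA ⋈ B) ∪ (A ⋈ ΔB) ∪ (ΔA ⋈ ΔB), so updating V from a change to A still requires the current contents of B from somewhere. A toy nested-loop sketch under assumed table shapes (insert-only; deletes work the same way with negative multiplicities):

```python
# Sketch of the delta rule for incrementally maintaining a join view.
# Note the lookups into the *existing* state of both inputs - this is
# the data that has to live somewhere materialized.
def delta_join(a_old, b_old, delta_a, delta_b, key_a, key_b):
    """Rows the view V = A JOIN B gains, given inserted rows in A and B."""
    def join(left, right):
        return [{**l, **r}
                for l in left
                for r in right
                if l[key_a] == r[key_b]]
    return join(delta_a, b_old) + join(a_old, delta_b) + join(delta_a, delta_b)

users     = [{"uid": 1, "name": "ann"}]
orders    = [{"uid": 1, "item": "book"}]
new_user  = [{"uid": 2, "name": "bob"}]
new_order = [{"uid": 2, "item": "pen"}]

# Only the delta is computed, never the full join - but it can't be
# computed from the deltas alone.
print(delta_join(users, orders, new_user, new_order, "uid", "uid"))
# [{'uid': 2, 'name': 'bob', 'item': 'pen'}]
```

In a real engine the nested loops would be index lookups, but the shape of the problem is the same: each delta must be joined against the other side's current state.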
Source: I worked on Google Cloud Firestore from its launch until 2020 and was the one responsible for the current implementation and data structures for how changes get broadcast to subscribed queries.