w.r.t "useless": Joins against memcached data, and expiry of keys via trigger. Different, yes, useless -- well, up to you. Not written for no reason, though.
Expiring keys via triggers could perhaps be useful.
But in most cases it's not worth querying memcached from the database (via SELECTs). The reason is that it's much faster to query memcached directly, without even opening a database connection.
All these caching layers make applications complex, because cache invalidation is a hard problem.
MySQL is taking the right steps here: it does the caching itself via the InnoDB buffer pool, and it makes the interface faster by replacing SQL queries with the fast memcached API.
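For context, the InnoDB memcached plugin maps memcached keys onto rows of a designated InnoDB table, so a get/set goes through the memcached protocol straight to the buffer pool instead of through SQL parsing and planning. Roughly speaking (the table and key below are hypothetical, just a sketch of the idea), a "get user:42" against the plugin is answered much like this lookup would be, minus the SQL layer:

    -- Hypothetical key/value table of the kind the plugin would be mapped onto.
    CREATE TABLE kv_store (
      k VARCHAR(250) NOT NULL PRIMARY KEY,
      v VARBINARY(1024)
    ) ENGINE=InnoDB;

    -- Through the plugin, "get user:42" is served from the InnoDB buffer pool
    -- much like this primary-key lookup, but without parsing or optimizing any SQL.
    SELECT v FROM kv_store WHERE k = 'user:42';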
> The reason is that it's much faster to query memcached directly, without even opening a database connection.
Sometimes you need to augment data in a SQL database with memcached information and then compute a predicate over the combination, and it can be faster to do that in your SQL executor than in Ruby, say. I'm not suggesting you'd always use this access path, but it can be handy, much as dblink is.
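A rough sketch of what I mean, assuming something like the pgmemcache extension, which exposes memcached lookups as SQL functions (the table, key scheme, and exact function signature here are my assumptions):

    -- Filter orders by a session flag that lives only in memcached, letting the
    -- SQL executor evaluate the combined predicate instead of pulling rows into
    -- Ruby and filtering there.
    SELECT o.id, o.total
    FROM   orders AS o
    WHERE  o.created_at > now() - interval '1 hour'
    AND    memcache_get('session_active:' || o.user_id) = '1';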
> MySQL is taking the right steps here: it does the caching itself via the InnoDB buffer pool, and it makes the interface faster by replacing SQL queries with the fast memcached API.
I'd really like to see some benchmarks on that. If one has a protocol-level prepared plan in Postgres (not an exotic thing; some drivers even create them transparently), one just has to send "Bind" and "Execute" messages to call functions.
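For illustration, here is the SQL-level analogue of that fast path (table and query are made up); a driver holding a protocol-level prepared plan does the same thing, sending Parse once up front and then only Bind/Execute per call:

    -- Parse and plan once...
    PREPARE get_user (int) AS
      SELECT name, email FROM users WHERE id = $1;

    -- ...then each call skips parsing (and, once a generic plan is in use,
    -- planning as well).
    EXECUTE get_user(42);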
If there are major advantages to be had wherever any disk I/O is involved at all, my suspicion is that they come from a caching strategy better suited to the memcached workload. Most SQL implementations already have fast-path mechanisms, such as prepared statements (which, as I understand it, are relatively slow in MySQL due to some vagary of its protocol), so on intuitive grounds alone I'm hard pressed to believe that parsing (and not planning, if one uses prepared statements) is the principal culprit for poor performance.