He's saying they're uniquely capable of storing both dense and sparse data efficiently, which means they can store literally anything, hence the "universal" storage engine.
Not making a value judgement btw; I'm not very at home in the field, so I wouldn't know if any other database is capable of this, nor whether storing dense and sparse information in the same database is even something anyone wants or needs.
Most every (analytic) RDBMS can model sparse arrays. A sparse array is modeled by defining a clustered index on the table's "array" dimensions and a uniqueness constraint on that clustered index. This works well with columnar storage because the data needs to have (and is assumed to naturally have) a total sort order on the dimensions. E.g. Vertica, ClickHouse, BigQuery... all allow you to do this. TileDB allows for efficient range queries through an R-tree-like index on the specified dimensions.
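To make the TileDB side of that concrete, here is a minimal sketch using the tiledb Python package (the array name, dimension names, and attribute are just illustrative): a 2-D sparse array indexed on its dimensions, with a range query against those dimensions.

```python
import numpy as np
import tiledb

URI = "sparse_2d_example"  # hypothetical array location

# Two integer dimensions play the role of the "clustered index" columns.
dom = tiledb.Domain(
    tiledb.Dim(name="row", domain=(0, 999), tile=100, dtype=np.int64),
    tiledb.Dim(name="col", domain=(0, 999), tile=100, dtype=np.int64),
)
schema = tiledb.ArraySchema(
    domain=dom,
    sparse=True,  # only non-empty cells are materialized
    attrs=[tiledb.Attr(name="value", dtype=np.float64)],
)
tiledb.Array.create(URI, schema)

# Write a handful of non-empty cells at explicit coordinates.
with tiledb.open(URI, mode="w") as A:
    A[[1, 5, 7], [2, 3, 9]] = {"value": np.array([1.0, 2.0, 3.0])}

# Range query on the dimensions; tiles are pruned via the index over
# the coordinate bounding boxes, so only relevant data is read.
with tiledb.open(URI, mode="r") as A:
    res = A[0:6, 0:4]  # returns matching coordinates and attribute values
    print(res["row"], res["col"], res["value"])
```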
Most real-world data, though, is messy, and defining a uniqueness constraint upfront (at ingestion) is often limiting, so for practical use cases this gets relaxed to a multi-set rather than a sparse-array model for storage, with uniqueness imposed in some way after the fact (if required).
I agree that in many sparse-data use cases, uniqueness of the dimensions can't be guaranteed, or you might not want to enforce it. With the recent TileDB 2.0 release we introduced support for duplicates in sparse arrays, which adds support for multi-sets [1].
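A minimal sketch of what that looks like, assuming TileDB-Py 2.0 or later (the array name, dimension, and attribute here are made up for illustration):

```python
import numpy as np
import tiledb

URI = "multiset_example"  # hypothetical array location

# One dimension acts as the key; allows_duplicates=True relaxes the
# uniqueness constraint so repeated coordinates are kept (a multi-set).
dom = tiledb.Domain(
    tiledb.Dim(name="key", domain=(0, 1_000_000), tile=10_000, dtype=np.int64)
)
schema = tiledb.ArraySchema(
    domain=dom,
    sparse=True,
    allows_duplicates=True,
    attrs=[tiledb.Attr(name="reading", dtype=np.float64)],
)
tiledb.Array.create(URI, schema)

with tiledb.open(URI, mode="w") as A:
    # Two cells share coordinate 42; with duplicates allowed, both are
    # stored rather than one being rejected or overwriting the other.
    A[[42, 42, 7]] = {"reading": np.array([1.5, 2.5, 3.5])}

with tiledb.open(URI, mode="r") as A:
    res = A[:]
    print(res["key"], res["reading"])
```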
> dense and sparse multi-dimensional arrays
Ok, this I get. Sounds very interesting. But I'm not sure how we make the jump to this:
> The foundational invention is the TileDB universal storage engine
I still don't get what the invention is, or what makes it any more universal than the alternatives.