There are a few projects that use ES/Lucene as a backend/datastore once the feature engineering is done, but I haven't seen models operating on the native indexes directly; maybe the format is too different from one-hot encoding (even after turning off stemming, stopword removal, and other info-losing steps).
They process the Lucene index and create an embedded representation of it. Then you can search over that representation for "semantic" matches.
Last time I checked, about a year ago, the embedded collection of documents was kept in memory and the search was implemented as a linear scan, so I suspect it would be slow on very large collections of documents.
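For anyone curious what that linear scan looks like: it's just one cosine-similarity dot product per document, then a sort. A minimal sketch in numpy (the function and toy vectors here are illustrative, not taken from any particular project):

```python
import numpy as np

def linear_scan(query_vec, doc_vecs, top_k=3):
    """Brute-force nearest-neighbor search: cosine similarity against
    every document vector, then sort. O(n) work per query, which is
    why it gets slow on very large collections."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q                       # one dot product per document
    top = np.argsort(-sims)[:top_k]    # indices of the best matches
    return [(int(i), float(sims[i])) for i in top]

# toy collection: 4 documents embedded in 3 dimensions
docs = np.array([[1.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
print(linear_scan(np.array([1.0, 0.0, 0.0]), docs))
# → [(0, 1.0), (1, 0.993...), (2, 0.0)]
```

Approximate-nearest-neighbor indexes exist precisely to avoid this full scan, but the in-memory version above is what a first implementation typically looks like.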
Are there any word embedding tools which take a Lucene/Solr/ES index as input and output a synonyms file which can be used to improve search recall?
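I don't know of an off-the-shelf tool for exactly this, but the core step is simple once you have term vectors (e.g. from word2vec trained on text exported from the index): for each term, take its nearest neighbors above a cosine threshold and write them out in Solr's comma-separated synonyms format. A hedged sketch with made-up toy vectors (the function name, threshold, and vectors are all assumptions for illustration):

```python
import numpy as np

def synonyms_file_lines(term_vectors, threshold=0.7, top_k=2):
    """Given term -> embedding vector (trained elsewhere, e.g. word2vec
    on text dumped from the index), emit Solr-style synonym lines like
    'term,neighbor1,neighbor2' for neighbors above a cosine threshold."""
    terms = list(term_vectors)
    mat = np.array([term_vectors[t] for t in terms], dtype=float)
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    lines = []
    for i, term in enumerate(terms):
        sims = mat @ mat[i]                          # cosine vs. all terms
        order = [j for j in np.argsort(-sims) if j != i]
        near = [terms[j] for j in order[:top_k] if sims[j] >= threshold]
        if near:
            lines.append(",".join([term] + near))
    return lines

# toy vectors: "car" and "auto" point the same way, "banana" doesn't
vecs = {"car":    [0.9, 0.1],
        "auto":   [0.8, 0.2],
        "banana": [0.0, 1.0]}
for line in synonyms_file_lines(vecs):
    print(line)
# → car,auto
# → auto,car
```

The resulting lines can go straight into a synonyms.txt consumed by Solr's synonym filter; the usual caveat is that embedding neighbors include antonyms and co-hyponyms, so a human pass over the file helps before it hits production recall.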