The OpenLibrary dataset has ~28M records and takes up 6.8GB on disk and 14.3GB in RAM when indexed in Typesense. Each node has 4 vCPUs. It took me ~3 hours to index these 28M records, and it could probably have been done in ~1.5 hrs; most of the indexing time was due to cross-region latency between the 3 geo-distributed nodes in Oregon, Mumbai and Frankfurt.
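For anyone curious what a bulk import like this looks like, here is a minimal sketch using the Typesense Python client. The hostname, API key, collection schema, field names, batch size and file name are all assumptions for illustration, not the actual script used for the numbers above.

```python
import typesense

# Hypothetical cluster details; replace with your own nodes and API key.
client = typesense.Client({
    'nodes': [{'host': 'node-1.example.com', 'port': 443, 'protocol': 'https'}],
    'api_key': 'YOUR_API_KEY',
    'connection_timeout_seconds': 10,
})

# Assumed schema for OpenLibrary book records; adjust to the actual dump.
client.collections.create({
    'name': 'books',
    'fields': [
        {'name': 'title', 'type': 'string'},
        {'name': 'authors', 'type': 'string[]', 'facet': True},
        {'name': 'publish_year', 'type': 'int32', 'facet': True},
    ],
    'default_sorting_field': 'publish_year',
})

def import_in_batches(path, batch_size=10000):
    # Stream the JSONL dump in batches so the whole multi-GB file
    # never has to sit in memory at once.
    batch = []
    with open(path) as f:
        for line in f:
            batch.append(line.strip())
            if len(batch) >= batch_size:
                client.collections['books'].documents.import_(
                    '\n'.join(batch), {'action': 'create'})
                batch = []
    if batch:
        client.collections['books'].documents.import_(
            '\n'.join(batch), {'action': 'create'})

import_in_batches('openlibrary_books.jsonl')
```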
I'm going to try the free tier of Typesense right now; it's perfect for my current use case of site search.
How large is the dataset for the books? How large do the nodes need to be?