That was going to be the next stage. It would have been a more complicated solution with additional challenges, but a lot of the code could have been reused, since it would mostly have been a matter of substituting the database-lookup portion.
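To make the "substituting the lookup portion" point concrete, here's a minimal sketch (hypothetical names, not our actual code): if retrieval sits behind one small interface, swapping a heading-keyed lookup for a vector store doesn't touch the rest of the pipeline.

```python
from typing import Protocol


class Retriever(Protocol):
    def retrieve(self, question: str) -> list[str]:
        """Return the text chunks to put in front of the model."""
        ...


class HeadingLookupRetriever:
    """Roughly what we did: sections stored under their ToC headings."""

    def __init__(self, sections: dict[str, str]) -> None:
        self.sections = sections

    def retrieve(self, question: str) -> list[str]:
        # Naive heading match; the real routing logic would live here.
        q = question.lower()
        return [
            f"{heading}\n{body}"
            for heading, body in self.sections.items()
            if any(word in q for word in heading.lower().split())
        ]


class VectorStoreRetriever:
    """The planned next stage: same interface, different lookup."""

    def retrieve(self, question: str) -> list[str]:
        raise NotImplementedError("embed the question, query the store")


def build_prompt(question: str, retriever: Retriever) -> str:
    # Everything downstream of retrieval stays the same either way.
    context = "\n\n".join(retriever.retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"
```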
The vector-store solution also had unknowns. It wouldn't have sufficed to fetch a few relevant sentences in arbitrary order: to formulate a correct answer we had to fetch every relevant piece of information in appropriately sized chunks, some of which spanned multiple lines and some of which needed their section heading for context. Sometimes a detail mentioned in a different section of the text changed the outcome entirely. Going down the vector-database route would have taken longer, involved additional learning, and it's not clear it would have reduced the input size.

I still think starting with a regular database was the right call: all the information mapped neatly under the headings in our table of contents, and each section was relatively short. Every section was under 500 words, and most were only about 100 to 200.
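For anyone weighing the same trade-off, the chunking requirement looks roughly like this (a rough sketch with hypothetical names; the 200-word limit is just illustrative, taken from our typical section size). The key point is that each chunk has to carry its section heading so it still makes sense when retrieved in arbitrary order alongside chunks from other sections.

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    heading: str
    text: str

    def as_passage(self) -> str:
        # Prepend the heading so the chunk is self-explanatory when it
        # comes back out of order, mixed with chunks from other sections.
        return f"{self.heading}\n{self.text}"


def chunk_section(heading: str, body: str, max_words: int = 200) -> list[Chunk]:
    """Split one section into heading-tagged chunks of at most max_words."""
    words = body.split()
    return [
        Chunk(heading, " ".join(words[i : i + max_words]))
        for i in range(0, len(words), max_words)
    ]
```

With sections as short as ours, most sections end up as a single chunk anyway, which is part of why the heading-keyed database was the simpler starting point.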