If you know a book’s worth reading, going ahead and reading it works well. But for a lot of books/talks there’s competition for time - eg my bookshelf has 20 half-read books (and that’s after triaging out the ones that aren’t worthy of my time) - so any tooling that helps me better determine where to invest tens or hundreds of hours of my time is a win.
Regarding accuracy, I think we’re at a tipping point where ease of use and accuracy are starting to make it worth the effort. For example, Bard seems to know about YouTube videos (just a couple of months ago you’d have to download the video -> convert audio to text -> feed that into an LLM). So the combination of greater accuracy and much greater ease of use makes it worth considering.
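For context, the old workflow was roughly this (a minimal sketch; the specific tools - yt-dlp, Whisper, an OpenAI model - are my assumptions about how you'd wire it up, not anything Bard does internally):

```python
# Rough sketch of the old manual pipeline. Tool and model choices
# (yt-dlp, openai-whisper, the OpenAI API) are assumptions for
# illustration only.
import subprocess
import whisper             # pip install openai-whisper
from openai import OpenAI  # pip install openai

URL = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder video

# 1. Download just the audio track.
subprocess.run(
    ["yt-dlp", "-x", "--audio-format", "mp3", "-o", "talk.%(ext)s", URL],
    check=True,
)

# 2. Transcribe the audio to text.
model = whisper.load_model("base")
transcript = model.transcribe("talk.mp3")["text"]

# 3. Feed the transcript into an LLM for a summary.
client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Summarize the key points of this talk:\n\n" + transcript,
    }],
)
print(response.choices[0].message.content)
```

Three tools, two intermediate artifacts, and a transcription step that could take longer than watching the video. Collapsing all of that into a single query is what I mean by the ease-of-use tipping point.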
> If you know a book’s worth reading, going ahead and reading it works well. But for a lot of books/talks there’s competition for time - eg my bookshelf has 20 half-read books (and that’s after triaging out the ones that aren’t worthy of my time) - so any tooling that helps me better determine where to invest tens or hundreds of hours of my time is a win.
Is it that hard to determine that a book is worth reading, when worth is measured from your perspective? It's usually pretty easy, at least for technical books. Fiction is another story, but that's life. Having some unknown stochastic system give me a decision based on some unknown statistical data is not something I'm particularly interested in. I'm interested in my own stochastic system and decision-making. Trying to automate life away is a fool's errand.
> Is it that hard to determine that a book is worth reading
I'm a huge believer in doing plenty of research about what to read. The simple rationale: it takes a tiny amount of time to learn about a book relative to the time it takes to read it. Even when I get a sense a book is bad, I still tend to spend at least a couple of hours with it before making the tough call not to read further (I made that call literally 5 minutes ago on one that had already wasted a good few hours of my life). I'm not saying AI summaries solve this problem entirely, but they're one additional avenue for consultation that might take only a minute or two and potentially save hours. It might improve my hit rate from - I dunno - 70% to 80%. Same idea for videos/articles/other media.
I get where you're coming from and definitely vet books in similar ways depending on the subject, but I also feel like this process is pretty limited, and it appeals to some sort of objective third party that just doesn't exist. If you really want to know or have an opinion on a work/theory/book, at the end of the day you have to engage with it yourself on some level.
In graduate school, for example, it was painfully obvious that most people didn't actually read a book and come to their own conclusions, but rather read summaries from people they already agreed with and worked backwards from there, especially on more theoretical matters.
I feel like in the long term this just leads to a person superficially knowing a lot about a wide variety of topics but never going deep and gaining real understanding of any of them - it's less "knowing" and more the feeling of knowing.
Again, I'm not saying this in an accusatory way - I totally engage in this behavior too, and I think everyone does to some degree - but I just feel that the older I get, the less valuable this sort of information is. It's great for broad context and certain situations, I suppose, but in a lot of areas where I consider myself an expert, I would probably strongly disagree with the summaries given, and they tend to miss finer details or qualifying points that are addressed with proper context.
I think the more you outsource "what is worth my time" the less you're actually getting an answer about what's worth YOUR time. The more you rule out the possibility of surprise up front, the less well-informed your assumption about worth can possibly be.
There are FAR too many dimensions - word choice, sentence style, allusion, etc. - that resist effective summarization.
LLM accuracy is so bad, especially in summarization, that I now have to fact-check Google search results because they've been repeatedly wrong about things like the hours restaurants are open.
There's a huge difference between summarizing a stable document that was part of the training data or the prompt, and knowing ephemeral facts like restaurant hours.
Technically true. But if you're offering it to imply that the GP bears responsibility for knowing which documents were in the training data and which weren't, I have to quibble with you.
Knowing its shortcomings should be the responsibility of the search app, which is currently designed to give screen real estate to the wrong kind of summary for an ephemeral fact. Otherwise, users will start to lose trust.
IMHO, the good old method of skimming the table of contents and reading the preface and perhaps the first couple of chapters is going to be a much higher-fidelity indicator of whether a book is worth your time than an AI-generated summary.