Seems like a nice project. However, I am currently getting a PR_CONNECT_RESET_ERROR in Firefox. Bringing it to your attention.



Interesting! I don't recall seeing this before, and the app otherwise held up to the HN hug of death, which was unexpected given that nothing is optimised.

Can you recall the steps that led to the error?


I was not able to access it until yesterday night, when I last checked. I was very interested in implementing this project myself but did not have the SQL skills to go about it, so I was naturally curious. It was working for the brief period when I accessed it, but it seems to be down for me again.

It is an amazing site for sure. I am very happy that one can just search for any phrase and it comes back with results in a matter of seconds! Allowing fuzzier results is a very useful feature, I must say.

The site could use a few improvements on the CSS side, as you may have already noticed (the search boxes get smaller while typing, including in the dark theme; the results table could be cleaned up; etc.). Also, the "Discovery" section was not displaying the correct results even though the query ran for some time (I searched "days of our lives" in place of "call me Ishmael"). With the same parameters, the search engine shows results anyway, so it's not a big deal.

Please release the pg_dump some day. It would be extremely useful.


Thanks for the details! I tried your search string but could not replicate the exact bug (either the app not working at all or your error message). What is your setup and connection speed? Please feel free to email me instead at contact@ if you would prefer to preserve your privacy.

"Days of our lives" returns fine on the Search tab and times out (as I assumed it would, with such common words) on the Discovery tab because the broader plainto_tsquery is returning too many rows to rank in time.

...which made me think, Discovery ranks by random(), so I do not need to calculate ts_rank... time to push a quick fix!
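
In other words, the quick fix is roughly the difference between these two (same illustrative names as above, not the exact query):

    -- Before: ts_rank is evaluated for every matching row even though
    -- the final order is random, which is the expensive part.
    SELECT book_id, paragraph,
           ts_rank(tsv, plainto_tsquery('english', 'days of our lives')) AS rank
      FROM paragraphs
     WHERE tsv @@ plainto_tsquery('english', 'days of our lives')
     ORDER BY random()
     LIMIT 20;

    -- After: filter with @@ alone and order by random(); the per-row
    -- ranking work disappears.
    SELECT book_id, paragraph
      FROM paragraphs
     WHERE tsv @@ plainto_tsquery('english', 'days of our lives')
     ORDER BY random()
     LIMIT 20;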

Regarding pg_dump, I originally wanted to do this, but at 60GB a pop the bandwidth would be quite expensive if the thing got popular at all; I recommend you head to the repository [1] and build it locally instead.

[1] https://github.com/cordb/gutensearch



