
Can you explain the image and what it's trying to show? That the number of text links should be greater than the number of image links to well-known sites? I'm asking because I do something similar on my site, solveforall.com, but to a much lesser degree, of course.



I'm going to hack on this and see how it works, but at first glance it looks cool.

The image (really confusingly) illustrates the search-flow model:

Browser ==> Google

Google ==> results

results ==> Hacker News or another aggregator

aggregator ==> your favorite sub-community (because Google's discovery sucks)

the subreddit helps you find the links you want

in those links you find the information

==============================

That is what a manual crawl feels like, and it's how many people use the web.


Ah, I get it now. Thanks for the explanation. This is the problem with the "deep web" not being indexable by crawlers. I've started looking at how to make this easier for users. Basically, I've got some ways to detect which sites should be searched in response to a query, based on the categories of the words in the query and the topics each site covers. Then I need to do a real-time search of that site (which might require running JS), extract the links, and present them back to the user. It might be slow, but it's easier for the user to wait for the cloud to do it than to click through the links on multiple sites themselves.
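Here's a minimal sketch of what I mean by the categorize-then-search-then-extract loop. Everything specific here is made up for illustration: the CATEGORY_SITES URL templates, the PROGRAMMING_WORDS keyword list, and the keyword-overlap categorizer are all placeholders, and the plain HTTP fetch only sees static HTML; a site that renders its results with JS would need a headless browser instead.

    import urllib.parse
    import requests  # third-party; pip install requests
    from html.parser import HTMLParser

    # Hypothetical mapping from a query category to the search-URL
    # templates of sites worth querying for it.
    CATEGORY_SITES = {
        "programming": ["https://hn.algolia.com/?q={q}"],
        "general": ["https://duckduckgo.com/html/?q={q}"],
    }

    # Toy word list used to categorize a query; a real system would use
    # something richer than keyword matching.
    PROGRAMMING_WORDS = {"python", "compiler", "api", "bug", "regex"}

    def categorize(query):
        words = set(query.lower().split())
        return "programming" if words & PROGRAMMING_WORDS else "general"

    class LinkExtractor(HTMLParser):
        """Collects absolute hrefs from anchor tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value and value.startswith("http"):
                        self.links.append(value)

    def search(query):
        category = categorize(query)
        links = []
        for template in CATEGORY_SITES[category]:
            url = template.format(q=urllib.parse.quote(query))
            # Plain HTTP fetch: only static HTML, no JS execution.
            html = requests.get(url, timeout=10).text
            extractor = LinkExtractor()
            extractor.feed(html)
            links.extend(extractor.links)
        return links

    if __name__ == "__main__":
        print(search("python regex bug")[:10])

In practice the slow part is the per-site fetch, so the requests would be fanned out in parallel and results streamed back to the user as they arrive.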

Thanks for checking out my site. If you have any questions or run into any trouble, please let me know at help@solveforall.com. And maybe we can explore brainstorming/collaborating, since you've clearly put a lot of thought into search.



