
I suspect this project was pushed by Google, to make importing wiki data into their knowledge graph more convenient for them.



Google already has its own knowledge graph that is much bigger than the Wikipedia graph, and it already scrapes every Wikipedia page daily, so it doesn't need a Wikipedia API.


For hot topics, a search engine wants to scrape every minute, not daily; now Wikimedia will provide them with such a feed.
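For what it's worth, Wikimedia already exposes a real-time change feed via EventStreams (Server-Sent Events). A minimal Python sketch of consuming it follows; the SSE parsing is simplified (it assumes each event's JSON payload fits on a single "data:" line, which holds for this stream in practice):

    import json
    import requests

    # Wikimedia's public recent-changes stream (Server-Sent Events)
    URL = "https://stream.wikimedia.org/v2/stream/recentchange"

    with requests.get(URL, stream=True,
                      headers={"Accept": "text/event-stream"}) as resp:
        for line in resp.iter_lines():
            # Simplified SSE parsing: only look at "data:" lines,
            # each of which carries one JSON-encoded change event.
            if line.startswith(b"data: "):
                change = json.loads(line[len(b"data: "):])
                # Filter to edits on English Wikipedia
                if change.get("wiki") == "enwiki" and change.get("type") == "edit":
                    print(change["timestamp"], change["title"])

The wiki, type, timestamp, and title fields are part of the recentchange event schema; the stream covers all Wikimedia wikis, so consumers filter client-side.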

Also, they currently need a team of engineers to maintain an infobox extractor; now that work will be done by Wikimedia.
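For illustration, here is a minimal sketch of the kind of infobox extraction involved, using the MediaWiki Action API plus the mwparserfromhell library (the page title is just an example):

    import requests
    import mwparserfromhell

    # Fetch the raw wikitext of a page via the MediaWiki Action API
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "parse", "page": "Python (programming language)",
                "prop": "wikitext", "format": "json", "formatversion": 2},
    )
    wikitext = resp.json()["parse"]["wikitext"]

    # Infoboxes are templates whose names start with "Infobox"
    for tpl in mwparserfromhell.parse(wikitext).filter_templates():
        if str(tpl.name).strip().lower().startswith("infobox"):
            for param in tpl.params:
                print(str(param.name).strip(), "=", str(param.value).strip())

Even this simple version hints at why maintaining an extractor takes engineering effort: infobox templates vary across pages and languages, and field values are often nested templates rather than plain strings.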


Wikipedia is a large input into their knowledge graph.



