Ask HN: How do you get notified about newest research papers in your field?
406 points by warriorkitty on Aug 5, 2016 | 135 comments



I wrote http://www.arxiv-sanity.com/ (code is open source on GitHub: https://github.com/karpathy/arxiv-sanity-preserver) as a side project intended to mitigate the problem of finding the newest relevant work in an area (among many other related problems, such as finding similar papers or seeing what others are reading). It sees a steady few hundred users every day and has a few thousand accounts. It's designed around modular views of lists of arXiv papers, each view supporting a use case. I'm always eager to hear feedback on how people use the site, what could be improved, or what other use cases could be added.


Andrej, thank you very much for making this site. I use it every day.

A problem: I think one of the most important things missing from arXiv.org is comments. People just come, read, and then take their discussions somewhere else, fragmented all around the net. Arxiv-Sanity already filters just the ML articles and does personalized feeds; maybe it could also be a place of discussion. I know it potentially leads to other complications (like moderation), but I really think readers would benefit from reviews, questions and answers.

The current ML related discussion sites (blogs, /r/machinelearning, G+, Twitter, StackExchange and YC) are often mixed with lots of noise. I'd like to read what researchers think.

Another suggestion: add links to code repositories, where available. Maybe some of your trusted users could be empowered with the right to add such links, if it's too much work for a single person. If interesting discussions are reported elsewhere on the internet, they could also be added to the article, to make them easier to find.


A simple-ish way of subsidizing some of that effort is to just make a subreddit for arxiv submissions and link to the comments section from arxiv-sanity for a given paper. You still don't tie into other communities, but if someone has something to say about a particular paper it provides a straightforward mechanism (until the submission is archived after six months, at which point it can't be voted or commented on any further). You only need a couple of moderators and some strict rules (an AutoModerator rule to only allow submissions from the arxiv-sanity user, etc.).


If you want links to code repositories for each of the papers, there is already a project: http://www.gitxiv.com/. It also has a comments section. Maybe the two maintainers could work together to integrate the projects. I actually subscribe to the GitXiv mailing lists as well, since they send a list of top articles under particular categories.


Thanks! The option to contribute additional links would be a great feature.

As to discussions about papers, there are plans in motion (semi-related to arxiv-sanity) to do that well and correctly, not just from me alone. I think we'll see a big delta here over the coming months.


What about a Gitter equivalent for each paper, with logs? That's one way I envision papers and conversational threads getting related. Each paper would be a different channel. Maybe there are topic channels too.


For me, getting alerted when there are new papers that cite papers relevant to my current research topic would be ideal. Google Scholar has alerts on authors and search queries, but for me they don't have enough recall.

It's much easier to tell when a paper is relevant for me if it happens to cite 3 of the commonly used datasets for my particular task.

btw, I use arxiv-sanity, it's pretty great, thanks a lot!


Thanks! Email alerts are one of the top requested features; it's definitely on my shortlist as the next feature to incorporate.

Another feature I'd like to add is the ability to follow people, but I'm worried about the exact implementation, since the current assumed contract is that your library is private.

One more feature I of course hear about often is comments, but I'm afraid of the site devolving into YouTube comments. I think comments have to be done very carefully and would require significantly higher code complexity to incorporate moderation tools, etc. Tricky and non-trivial, not just implementation-wise but design-wise, incentive-wise, etc.


Consider having a minimum number of characters or words for comments. It's basically the opposite of Twitter, and would result in people having to actually put some effort into their comments. Also, I've found that even on YouTube, if the comments are moderated, the degenerates stop showing up.


Draft help from your readers. There have to be a few who want to contribute.


I also use a homemade code to keep up with new papers.

I feed in a .bib file with papers I like and use a Naive Bayes classifier to find papers I might like in news feeds (Science, Nature, PNAS, etc.).

It works pretty well. As a bonus, you can post high-ranked papers to Slack, or use papers sent by other people to repopulate the .bib file.

Always welcoming suggestions: https://github.com/pfdamasceno/shakespeare
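
A minimal sketch of that classifier idea (not the actual shakespeare code), assuming sklearn and that you've already extracted title/abstract strings: liked papers from the .bib file as the positive class, and a random sample of unsaved feed entries as the negative class.

    # Hypothetical data; in practice, parse the .bib file and the journal feeds.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    liked = ["A liked paper title and abstract", "Another liked paper"]
    background = ["An unrelated paper", "Another unrelated paper"]

    model = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
    model.fit(liked + background, [1] * len(liked) + [0] * len(background))

    # Rank today's feed entries by predicted "like" probability.
    new_entries = ["A candidate paper from today's feeds"]
    for text, p in zip(new_entries, model.predict_proba(new_entries)[:, 1]):
        print(f"{p:.2f}  {text}")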


Not exactly the same thing, but http://www.gitxiv.com/ is pretty cool for pairing papers with source.


Wouldn't you miss a lot of important publications when just checking arxiv?


In AI, all the important publications are posted on arxiv first. If a paper is sent to a journal before arxiv, it is clear the authors believe their paper is not significant enough to alert the community.

The publishing culture from the life sciences is toxic and will be avoided by the AI community.


Thanks. This is an interesting approach to get what's needed.


it is very helpful


(1) I manually check the proceedings of the important conferences in my subfield when they come out.

(2) I check my field's arXiv every other day or so.

(3) Google Scholar alerts me to papers that it thinks will interest me, based on my own papers, and it's very useful. Most of what it shows me is in fact interesting to me, and it sometimes catches papers from obscure venues that I wouldn't see otherwise. The problem is that you need to have papers published for this to work, and also, it's only good for stuff close to your own work, not so much for expanding horizons - (1), (2) and Google Scholar search are better for that.


Yep, this is what I do, except that security papers don't make it to arXiv, so I also keep an eye on Twitter (I follow a bunch of academic security people) and a couple of subreddits (/r/ReverseEngineering, /r/REMath, and /r/systems). It's not ideal, but it works out okay.

None of them is a substitute for a proper related-work search when I'm writing up a paper, though; this is just to keep current on the trends and interests of the community.


What does arXiv provide that your conference proceedings / association doesn't?

For example, I usually log in to the ACM site and go to my SIGs and see what's new there. I've never thought about visiting arXiv.


It provides basically a subset (maybe around a third) of the same interesting papers that I see in conference proceedings, but it provides them earlier (typically 1 month to 1 year earlier). This can be important as my field (NLP) is quite fast-paced.

Just to give a concrete example, this paper (which was a relevant read for me) was published in TACL in July this year but had been available on arXiv since February: https://arxiv.org/abs/1602.01595


In physics, arXiv is where it's at --- conference proceedings are usually not very relevant, and people usually put them on arXiv as well.


I'd be willing to bet the GP is in CS; in CS, conferences are where it's at.


Conferences certainly dominate journals, but most people publish their work on arxiv first, anyway.


I'm in CS (at the intersection of PL/compilers/HPC), and I've never heard of anyone in my field doing that. In fact, the only papers I've read on arxiv have been ones linked on HN.


I'm in a similar intersection (hi!), and same goes for me. I want to change that, though. I have started publishing tech reports (I work in an industry research lab) whenever I submit a paper for review. I'm tired of work being stuck in endless review cycles, not public and not referenceable. Were I still in academia, I would submit to arxiv, and I have even recommended this to grad students.


Virtually every ML paper is posted on arxiv before conferences.


At least for theoretical CS and cryptography, Crypto ePrint and arXiv often have more detailed full versions of the paper. These are often invaluable for understanding proofs and other important details.


>The problem is that you need to have papers published for this to work

The one place where one could actually use a "Follow" button for other people...there isn't one. Classic.


This is a great list, also there are sometimes mailing lists dedicated to a particular topic or field. It helps to have more eyes on the net.


I like to follow The morning paper by Adrian Colyer. He writes a summary of an influential CS paper each day and sends it out on his e-mail list.

https://blog.acolyer.org/


Write one influential paper. Then all the later papers in the same sub-subfield probably cite your paper. Go to Google Scholar and check the latest citations to your paper.

Ok, it doesn't need to be your paper. Just find a paper that was so influential that others working on the same problem probably will cite it, and monitor the new citations.


I came in to say exactly this. Google Scholar alerts are incredibly useful.


So, that's close to how I operate (basically bibliography-surfing), though with one handicap: what do you use to track citations?

Particularly something that's generally open.

The best tool I've got ready access to is Google Scholar. There are citation indices I can get access to by going on-site to a specific facility, but that's pretty limiting when the rest of my work can be done from (and the bulk of my materials are in) my office.

(And yes, I'm aware that having to go to where the indices are is how it Used to Be Done, and in fact, I Did That. Technology has moved on.)


Huh, that seems so obvious in retrospect. This is basically how I've grown into jazz. I find someone I like, find out who they played with, who those folks played with, and so on.


Just FYI, you should know about SHARE. It's an effort to create a free, open dataset of research activity across the research lifecycle. You can read more at

http://share-research.org

So, if you want to see a reddit for research, better news feeds, etc., it is the SHARE dataset that can provide that data. SHARE won't build all those things--we want to facilitate others in doing so. You can contribute at

https://github.com/CenterForOpenScience/share

The tooling is all free open source, and we're just finishing up work on v2. You can see an example search page http://osf.io/share, currently using v1. Some more info on the problem and our approach....

What is SHARE doing?

SHARE is harvesting, (legally) scraping, and accepting data to aggregate into a free, open dataset. This is metadata about activity across the research lifecycle: publications and citations, funding information, data, materials, etc. We are using both automatic and manual, crowd-sourced curation interfaces to clean and enhance what is usually highly variable and inconsistent data. This dataset will facilitate metascience (science of science) and innovation in technology that currently can't take place because the data does not exist. To help foster the use of this data, SHARE is creating example interfaces (e.g., search, curation, dashboards) to demonstrate how this data can be used.
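
To make the curation problem concrete, here's an illustrative sketch (not SHARE's actual pipeline) of the kind of normalization needed when harvested records use inconsistent field names; the raw field names shown are hypothetical.

    from datetime import datetime

    def normalize(record: dict, source: str) -> dict:
        """Map one provider's raw record onto a common schema."""
        title = record.get("title") or record.get("dc:title") or ""
        authors = record.get("authors") or record.get("dc:creator") or []
        if isinstance(authors, str):  # some providers ship "A; B; C" strings
            authors = [a.strip() for a in authors.split(";")]
        date = None
        raw_date = record.get("date") or record.get("dc:date")
        if raw_date:
            for fmt in ("%Y-%m-%d", "%d %b %Y", "%Y"):  # providers disagree on formats
                try:
                    date = datetime.strptime(raw_date, fmt).date().isoformat()
                    break
                except ValueError:
                    pass
        return {"title": title.strip(), "contributors": authors,
                "date": date, "source": source}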

Why is SHARE doing it?

The metadata that SHARE is interested in is typically locked behind paywalls, licensing fees, restrictive terms of service and licenses, or a lack of APIs. This is the metadata that powers sites like Google Scholar, Web of Science, and Scopus--literature search and discovery tools that are critical to the research process but that are incredibly closed (and often incredibly expensive to access). This means that innovation is exclusive to major publishers or groups like Google but is otherwise stifled for everyone else. We don't see theses, dissertations, or startups proposing novel algorithms or interfaces for search and discovery because the barrier of entry in acquiring the data is too high.


Hi. This looks really interesting. Unfortunately the results page after a search freezes the stock browser on my LG G3.

I've also read the front page, the about page, and your post several times, and I'm not exactly clear on what you provide. I thought I'd do some searches to see if the product made sense. A search for a field I'm interested in, arthritis, yielded zero results. Okay, so... no medical research? A search for "reddit" yielded results, and mentions of "providers". I'm not clear on what providers are... is reddit a provider, or the research papers, or the publishers, or the researchers...?

I'll read more later when I'm not on mobile, maybe it will be clearer.

I'm starting a project related to analysing published research, so this is a field I'm very interested in. I hope SHARE can help in some way, and I'll definitely be keeping tabs on your work. Thanks for posting.


Are there any plans to provide an API or any kind of database dump to allow building other services based on the aggregated data?


I know this question is probably a little off topic for this post but I'm very eager to get some kind of answer.

What should I be reading? I'm a computer science student and I want to go into a "Software Engineering" line of work. Are there any places to read up on related topics? I have yet to find something that covers my direct field of choice. Is anyone in academia writing about software?

I also like NLP and other interesting areas. Basically, all practical software and its applications are things that interest me.


ICSE [1] and FSE [2] are the top software engineering research conferences. Skimming the titles/abstracts of their papers each year doesn't take long.

Also, they generally have industry or "in practice" tracks that have postmortems from the big software companies in case you want something more applied.

[1] http://2016.icse.cs.txstate.edu/

[2] http://www.cs.ucdavis.edu/fse2016/


A good way to do that is to skim titles on DBLP. E.g. http://dblp.uni-trier.de/db/conf/sigsoft/fse2015.html.
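
DBLP also has a public JSON search API, so the skimming can be scripted. A sketch, assuming the endpoint and response layout documented at the time of writing:

    import requests

    resp = requests.get(
        "https://dblp.org/search/publ/api",
        params={"q": "empirical software engineering", "format": "json", "h": 50},
    )
    resp.raise_for_status()
    # Response layout: result -> hits -> hit[] -> info{title, year, venue}
    for hit in resp.json()["result"]["hits"].get("hit", []):
        info = hit["info"]
        print(f'{info.get("year", "????")}  {info.get("title", "")}')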


I'll suggest a minority position: If you feel the need to keep up at the bleeding edge of your field, your work is probably replaceable, i.e., if you didn't do it then someone else would do it a year later.

Instead, read more review papers and seminal papers in your field.


If everyone thought like that, there'd be no bleeding edge. :)


Ok, but where do I find those for my field? I'd just like engaging material to read that will provide insights into how I can do my job better than I currently can.


There are a lot of papers on sentiment analysis, if I recall correctly. I would look into the literature on parsing and statistical analysis; a lot of big-data stuff is related to that, and there are a lot of books on big data. It's a very popular field to hire people for as well; a lot of big companies want people to massage their data into giving them useful avenues for money-making.


You might want to check out the Papers We Love repo at https://github.com/papers-we-love/papers-we-love. That's my go-to resource.


Tossing out a contrarian view: I'm finding there's a tremendous amount of good information and publishing that's old. Keeping up with the cutting-edge can be interesting, but you have to do a lot of the filtering yourself.

Finding out how to identify the relevant older work in your field, finding it, reading it, and seeing for yourself how it's aged, been correctly -- or quite often incorrectly -- presented and interpreted, and what stray gems are hidden within it can be highly interesting.

I've been focusing on economics as well as several other related fields. The classic story is that Pareto optimisation lay buried for most of three decades before being rediscovered in the 1920s (I think I've got the dates and timespans roughly right). The irony of economics itself having an inefficient and lossy information propagation system, and a notoriously poor grip on its own history, is not minor.

The Internet Archive, Sci-Hub, and various archives across the Web (some quite highly ideological in their foundation, though the content included is often quite good) are among my most utilised tools.

Libraries as well -- ILL can deliver virtually anything to you in a few days, weeks at the outside. It's quite possible to scan a 500+ page book in an hour for transfer to a tablet -- either I'm getting stronger or technology's improving, as I can carry 1,500 books with one hand.


I made a simple service for myself (http://paperfeed.io) which is a feed of all the new papers in journals I care about. I can "star" papers for reading later. Works extremely well for my habits.

You're welcome to try it (not sure if the signup workflow still works; let me know). I'll be happy to hear your feedback.

Edit: you can upvote papers, and they'll float to the top just like on HN.


This might be off topic, but would you mind sharing how you wrote the website, and whether there's any tutorial you can recommend? I want to design something extremely similar for a different application, but I don't have much knowledge of web development (I'm more experienced in programming for numerical and data analysis). I figure this might be a good project to get my feet wet. Thanks!


Not the OP, but this looks like an nginx-powered API (which may be coming through a reverse proxy) that returns JSON, with Bootstrap 3 + KnockoutJS on the client side to render it all. That doesn't answer your questions about the OP's thought and design processes, but maybe it'll give you something to read up on.


Exactly. The API is written in Go (because I wanted to learn it), with a Postgres database behind it and a background Go process that scans the journals for updates. I recently rewrote the client in React as a learning exercise, but haven't made the switch yet.

If I restarted from scratch I would do the server side in Python, because there are just a lot more good libraries available.
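
For the curious, a minimal Python equivalent of that background scanner (not the actual paperfeed.io code, which is Go + Postgres), using feedparser and sqlite3; the feed list is just an example.

    import sqlite3
    import feedparser

    FEEDS = ["http://export.arxiv.org/rss/cs.CL"]  # journal RSS URLs go here

    db = sqlite3.connect("papers.db")
    db.execute("CREATE TABLE IF NOT EXISTS papers (id TEXT PRIMARY KEY, title TEXT, link TEXT)")

    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            uid = entry.get("id") or entry.get("link")
            # INSERT OR IGNORE gives de-duplication across repeated scans.
            cur = db.execute("INSERT OR IGNORE INTO papers VALUES (?, ?, ?)",
                             (uid, entry.get("title", ""), entry.get("link", "")))
            if cur.rowcount:
                print("new:", entry.get("title", ""))
    db.commit()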


Hmm, I've been thinking about learning Go or Rust, since they've been all over the front page of Hacker News lately. Is it worth it, or should I stick with Python?


It's always worth learning something new. I don't mean to sound like a dick when I say that, but it's true; build yourself a little API in Go with the help of https://gobyexample.com and see if it's right for you. We really cannot tell you if it will be worth your while for your particular project. Structure your application in a manner such that if you decide to throw it out and replace it with C# tomorrow, your client won't know the difference.

In my case, I really enjoy Go, but certainly not all the time. It has its place. You may find either that it's the best thing ever, or that you cannot stand how it does X, and Python does it so much better. Some comparisons are objective, but the things that make or break it for you may be subjective.


Thanks! That at least gives me a direction to start in!


Great! How did you manage the different feeds? I did something similar for my field, but it's a nightmare, since some journals violate the RSS spec or dump metadata into the feed (my shameless plug is http://sciboards.com).


I have slightly different parsers for each family of journals. I use the DOI to get the metadata where possible. Then I reformat to show title, authors and journal consistently. I also create a direct link to the PDF where possible because I prefer to get at the paper with a single click.
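
The DOI-to-metadata step can be done generically with content negotiation against doi.org, which returns CSL-JSON. A sketch (the example DOI is arbitrary):

    import requests

    def doi_metadata(doi: str) -> dict:
        resp = requests.get(f"https://doi.org/{doi}",
                            headers={"Accept": "application/vnd.citationstyles.csl+json"})
        resp.raise_for_status()
        return resp.json()

    meta = doi_metadata("10.1038/nature14539")
    authors = ", ".join(f'{a.get("given", "")} {a.get("family", "")}'
                        for a in meta.get("author", []))
    print(meta.get("title"), "|", authors, "|", meta.get("container-title"))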


I actually just manually check arXiv every morning for the new submissions in my field. It's like getting in the habit of browsing reddit, except with a lot fewer cute animal pictures (maybe because I'm not in biology).


arXiv has email search alerts. I subscribe to a few topics; they are well-formatted plain-text digests.

I also have a few ScienceDirect search alerts set up, which come in once every few weeks, typically with 1-5 papers.

And Google Scholar, if you use it and you are logged in with an account, learns from your search history and suggests new papers for you to read. It's relatively good.


In case someone here hasn't seen it: http://www.arxiv-sanity.com/ (Machine learning topic specific)


I don't. If I'm working on something and need (or want) the latest cutting edge algorithms then I search for papers in that area as I need it. Otherwise, there's simply too much stuff going on to try reading through everything, or even a filtered down subset. Only a very small portion of it will be remotely relevant to my work or my interests.

If there's a fundamental new result in basic CS or something like that, I figure I'll hear about it on HN or another news site.

I can imagine it's different for people actively working on new research, though.


For programming language research, 1) the RSS feed of http://lambda-the-ultimate.org/ (Lambda the Ultimate), and 2) my old-school paper subscription to ACM SIGPLAN, which includes printed proceedings for most of the relevant ACM conferences (POPL, PLDI, OOPSLA etc.)


I manually check conference proceedings when released:

OSDI, SOSP, FAST, EuroSys, APSys, NSDI, SIGCOMM, ATC, ISMM, PLDI, VLDB

These days, accepted papers in specialized conferences are actually on mixed topics; you'll see security and file systems in SOSP, for example.


In addition to the important conferences proceedings, it's common for researchers to work in a very narrow subfield where everybody knows everybody. They keep seeing each other at various events where they discuss their ongoing work.


Surprising that feed.ly hasn't been mentioned. It's like gmail for feeds, and it has all the arxiv categories prepopulated. My workflow is as follows: (i) check feedly every day, see ~20-30 new articles, (ii) skim all the abstracts in 5-10 minutes, (iii) mark 0-2 to read later in the day, (iv) mark rest as read, and repeat.


I.e., it's an RSS / Atom reader.

Yes, this is precisely the sort of application RSS is excellent for.


Just knocked this out after reading this question (using an open source tool developed as a Show HN project called https://www.hellobox.co ):

http://www.ivoryturret.com/

I hope it catches on.

Others have tried and didn't get enough traffic for it to take off, but since low levels of hosting are free, I can just keep it out there for a long time.


http://www.arxiv-sanity.com helps sort through arXiv papers and get recommendations.


There should be something like reddit for academic papers. With upvotes and what not. But I guess it takes people longer to read a paper than to read reddit content.


It's a neat idea, but I would want identity verification - only upvotes from people well-versed in the field should "count", precisely so it doesn't become Reddit. Which means you would have a chicken-and-egg problem when the service got started and few experts were on it yet.


That's actually something we are working on. We are working with verified researchers in the field (industry and academic) to help surface good papers and foster an open discussion.


There are websites that try to do that (e.g. SciRate), but the problem is that people don't participate much. The problem is not the website, but convincing academics to comment and vote (academics are typically starved for time, and reading a paper and writing a good comment is not easy...).

It would be nice if someone solved the problem and managed to create a working one, though.


What about collecting tweets from verified researchers? It could help with getting to critical mass. You could even consider papers tweeted by researchers who are followed by your known researchers, and so on, with the right weighting (something resembling PageRank).

With the right weighting this could really boost the size and quality of your dataset.
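
A hedged sketch of that weighting idea: PageRank over a researcher follow graph, then score each paper by the rank mass of the researchers who tweeted it. The graph and tweet data below are made up, and fetching them from Twitter is elided.

    import networkx as nx

    follows = [("alice", "bob"), ("carol", "bob"), ("bob", "dana")]  # follower -> followed
    G = nx.DiGraph(follows)
    rank = nx.pagerank(G)  # researchers followed by well-ranked researchers rank higher

    tweeted = {"paper-A": ["bob", "alice"], "paper-B": ["carol"]}
    scores = {paper: sum(rank.get(u, 0.0) for u in users)
              for paper, users in tweeted.items()}
    for paper, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{s:.3f}  {paper}")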


It would be interesting to leverage Twitter.

Right now we are working on helping the community surface information and working with verified researchers to build their "curated" lists for different topics.


Can I have a beta when it's ready? I've been thinking about this for a while and I like to think that I could provide useful input.


Awesome, yea. I'll send you an email.


Also very much something I'd be interested in, if invites are available. Thanks.


Perhaps you could get an ordering based only on the upvotes from your friends, and maybe friends of friends with a lesser weight. Maybe also include upvotes from strangers whose past voting pattern is similar to yours. Maybe one can construct some PageRank-esque structure, whose votes should be weighted heavily and whose not, in the light of your own voting history.
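
For concreteness, a toy sketch of such a scheme with made-up data: friends count fully, friends-of-friends at half weight, and strangers by cosine similarity of past voting history.

    import numpy as np

    history = {  # user -> vote vector over some fixed set of earlier papers
        "me":       np.array([1, 0, 1, 1, 0]),
        "friend":   np.array([1, 0, 1, 0, 0]),
        "stranger": np.array([1, 0, 1, 1, 1]),
    }
    tier = {"friend": 1.0, "fof": 0.5}  # social-distance weights

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    def weight(user):
        # Friends and friends-of-friends get fixed weights; strangers are
        # weighted by how similarly they voted in the past.
        return tier.get(user, cosine(history["me"], history[user]))

    upvoters = ["friend", "stranger"]
    print(f"weighted score: {sum(weight(u) for u in upvoters):.2f}")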


> based only on the upvotes from your friends

That would create an echo chamber. You need to know about research that challenges your assumptions.


https://scirate.com/

It's something like that for papers on the arXiv.


I think this is what academia.edu is trying to be. It seems to be a mix of reddit and linkedin for academics.


Pretty sure this is Reddit: just make a subreddit for the topic and then start feeding it.


It's not.

Reddit doesn't allow subreddits to limit who can moderate posts or comments except by taking the subreddit private and limiting the membership.

It's actually a bit of a major pain, particularly for smaller public subreddits.

Reddit's moderation system in general is just hugely problematic. It kind of works, but it really doesn't, and has received very little love.

The first question for any such system should be "what is your goal?". Reddit serves popularity relatively well. Accuracy, relevance, information: rather less so.

Some non-brief thoughts on that from a few years back:

https://www.reddit.com/r/dredmorbius/comments/28jfk4/content...


Great writeup. Too many of these web "platforms" (and I use the word loosely, with air quotes) don't support the level of compositional delegation that could/would enable what you're looking for without having to make your own platform.

The only thing I can suggest, without understanding your needs on more than a superficial level, would be to create bots with admin access that attach "flair" denoting rank, which the bots use to move stories around, etc. Network effects and availability still make sites such as Reddit very attractive.

I have always wanted a multidimensional discussion, so that joke posts and memes automatically diverge from the current hyperplane of the discussion.


Thanks, and largely agreed.

What Reddit does offer, and works fairly well, is moderation tools and teams sufficient to scale out pretty well.

A bigger problem is that conversation simply doesn't scale well, something old-timers have been realising for a while. I've got a Dave Winer quote somewhere to that effect, and was rereading Shirky's "A Group Is Its Own Worst Enemy", which suggests what I'm increasingly concluding: with the right people, anywhere from 2-3 up to maybe 50-100 people can actually discuss something. More than that and it's broadcast, or a large number of commingled side conversations.

I'm coming to appreciate Wordpress and blogging platforms' capabilities, and sheer size. There's a ton of blogged content out there; it's mostly that finding and commenting on it is challenging.

Another element that's lacking is filtering tools, in which I think randomness and/or community ought to play a larger role -- filtering content up through smaller groups.

Also both implicit measures and known trusted quality "roots" (vetters / editors).



In the Reddit model, it would be nice to have sub-subreddits, where a splinter group can discuss a facet. For example, given a Redis subreddit, there could be a Lua-Redis sub-subreddit with a smaller audience, whose best posts bubbled up to the parent. I find that a smaller but more active community is preferable to a larger, anonymous, passive one. People are quicker to help each other out, share without feeling stupid, and don't blend into the background, keeping snark and insult to a minimum.

As you mention, it is broadcast vs discussion.


That's a big element of it.

I'm trying an experiment (and am way behind schedule) at /r/MKaTS and /r/MKaTH along these lines. There's a private and a public subreddit, one for more closed discussion, one for more open. The idea is to build these out.

Using flair, you can get something like the related-subtopic discussion. See /r/dredmorbius (a solo bloggy effort) or any of the big subs with flaired discussion (/r/AskHistorians or /r/AskScience) for examples -- you can look at the full sub, or dive into a specific flair's topics.

A significant problem with Reddit is that establishing these structures is difficult. Setting up post flair -- the names, the styles, the sidebar search, etc. -- is a major PITA. FSM help you should you want to revise the scheme later.

And you're still stuck with the problem that it's not possible to filter out a flair to report only posts above some arbitrary cutoff (you can sort by "best" or "top"), not that the moderation system gives you any particularly good mechanism for doing that in the first place.

Reddit (as with many discussion systems) is a bit too focused on the now and not sufficiently on the good. I'm particularly annoyed that it's not possible to revisit old posts for discussion (the six month comment freeze), a feature of G+ which actually turned out to be really useful.

There's also the whole Notifications dynamic which ... simply doesn't work well. Yes, you see if someone's mentioned your name, specifically, but you can't get a general notification of discussion on a post (unless you've specifically subscribed to it, and that only for 48 hours). That's utterly unworkable for larger discussions, but works well for small ones.


In the bio/health/bio-info areas: a key option is to create alerts with http://www.ncbi.nlm.nih.gov/pubmed
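
The same searches can also be polled programmatically via NCBI's E-utilities. A sketch (the search term and time window are just examples):

    import requests

    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": "rheumatoid arthritis",
                "reldate": 7, "datetype": "pdat",  # published in the last 7 days
                "retmax": 50, "retmode": "json"},
    )
    resp.raise_for_status()
    ids = resp.json()["esearchresult"]["idlist"]
    print(len(ids), "new PMIDs:", ids[:5])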


Yes, and Google Scholar alerts are also useful and pick up slightly different things. Good to have both.


I've been using http://www.sparrho.com throughout my PhD (in Biochemistry), and I was so impressed with its recommendation engine that I joined their team last year. We've been making a lot of changes to the Sparrho platform lately, including adding a pinboard feature to help lab groups and journal clubs coordinate their reading and keep their comments in a single place. Our database is updated hourly with papers from 45,000+ sources across all scientific and engineering fields, including arXiv. Most of our users set up Sparrho email alerts to replace journal eTOCs/newsletters, RSS feeds and Google Scholar alerts. I'd love to hear what you think! Free sign up here: http://www.sparrho.com


Take a look at academia.edu. It's basically a social network for academia. Researchers can post their papers and follow other people's work.


Yes. I have an account there. I saw recently, either in their newsletter or on their site, that they say some X0 million people (researchers) are using it.


Some people have already mentioned these but so far I'm using:

Karpathy's http://www.arxiv-sanity.com/library, and I subscribe to arXiv email lists.

Semantic Scholar (no notifications) is good for manually finding things

Google Scholar notifies you when your papers get citations... unfortunately, they don't have a way for you to get notified if the paper is not yours. So I made a few fake accounts that add papers to the library as if they are the author, and then I set up forwarding to my email. (I really wish they would just expand the citation-notification feature to your library, and not just your papers, but whatever.)


As a software developer, my effectiveness doesn't depend on up-to-the-minute knowledge of what's happening in my field. It's more useful to pursue a deeper understanding of the fundamentals.


Agree, and will further expound.

There are some obvious exceptions on the cutting edge of technology (VR etc) but developers in my position care more about reliably making reliable software that earns (or saves) money. To this end, it's usually better to apply techniques and technologies that are already somewhat mature. I think this is more typical.

This doesn't mean I'm stuck on Java 2, but it means I don't read the papers on Paxos and Dynamo and such (instead I read the Hacker News article on the release of Apache Cassandra and build distributed software on top of relatively early beta versions - and occasionally the business deals with costs, like migrating from Thrift to CQL, but the risk was worth it).


My university subscribes to Engineering Village (https://www.engineeringvillage.com), which collates 3 major paper databases (Compendex, Inspec, NTIS). I set up a weekly alert for a variety of keywords that I'm interested in. It's not perfect - I do a bunch of searching on my own - but it at least lets me know of major papers so they don't slip under my radar.


Sparrho [1] is a new startup tackling this problem, built by early-career scientists to help solve the issue of scaling/distributing the knowledge that builds up in experienced academics about where and how to find papers.

They index a whole bunch of sites and repos to provide a recommendation engine tailored to you and your field.

[1] https://www.sparrho.com


In addition to the other excellent mentions here, I get weekly ToC alerts from several pertinent journals.

I scan the emails during my weekly meeting.


Hello, cofounder of a company that makes a product to help stay up to date with the latest academic research here!

I help build a product called BrowZine [1]. It's focused on researchers at an institution - academic, private, and medical especially - who want to easily track the latest research papers in their favorite journals.

If you have login credentials at one of our institutions, please log in and try it out! We think it's a great way to discover what journals your school/hospital/organization subscribes to; My Bookshelf lets you save favorite journals for later and keeps track of new articles as they are published.

If you don't have login credentials at a supported school, you can try out the Open Access library with just OA content.

Give it a try - we have a great team trying our best to make it easy to stay up to date with your journal reading! Love to hear your thoughts.

[1] http://browzine.com


This doesn't necessarily fall under the "newest" category but I wrote a twitter bot (https://twitter.com/loveapaper) that tweets random papers from the Papers We Love (http://paperswelove.org/) repository as a simple way to find new (to me) papers that might be interesting. Do check out PWL though, it's a great community with chapters from all around the world that meet up to discuss and learn more about academic computer science papers.


Almost all journals have an RSS feed. I just subscribe to a dozen or so major journals. Add a web feed reader and you can skim through them easily, or save the more interesting ones for later.


Academic journals have RSS feeds these days?


If you're in the biomedical domain, you can use: http://pubmed-watcher.org/ (shameless plug, I wrote it)


For PubMed searches, PubMed itself already offers email and RSS notifications for the newest entries. Does your site offer anything special?


I'm using this service to get notifications from a few pages without RSS: https://urlooker.com


A lot of scientists nowadays use Twitter to share, as do some prestigious journals. So if you know who the go-to people are, follow them and the journals.


In my field (Computer Vision / Machine Learning), the newest research papers usually get onto arXiv before being accepted at any conference. So I try to keep up with arXiv's RSS feed for this field.

Furthermore, I follow other people interested in this field on Twitter/Google+/Facebook, some of whom are researchers in this field.

Moreover, when a major conference's program is released, I try to look into the proceedings.


Can anyone recommend good science blog aggregators? Places I can go to find blogs that reference research papers. I know about http://www.scienceseeker.org/ and http://researchblogging.org/ but I wonder if there are more?


Like spystath mentioned, all journals have one or more RSS feeds, so I use RSS feeds with my web application https://www.feedsapi.org/ to receive curated alerts in realtime (many of our users have this as a use case as well).

You can also use the RSS feeds with a service like IFTTT or Zapier to set up an alert system.


For crypto papers, I wrote a twitter bot to track all updates on the IACR ePrint archive: https://twitter.com/IACRePrint

I basically just check my twitter account daily (also follow many great researchers who have twitter accounts :))
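
Not the author's actual bot, but a hedged sketch of the same idea: poll the ePrint feed and tweet anything new. The feed URL is an assumption, and the credentials are placeholders (classic tweepy v1.1 flow).

    import feedparser
    import tweepy

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    seen = set()  # a real bot would persist this between runs
    for entry in feedparser.parse("https://eprint.iacr.org/rss/rss.xml").entries:
        if entry.link not in seen:
            seen.add(entry.link)
            api.update_status(f"{entry.title[:250]} {entry.link}")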


I manually go to https://eprint.iacr.org/eprint-bin/search.pl?last=7&title=1 on Friday evenings and read anything of interest over the weekend.



Here's another resource, from MIT Technology Review:

https://www.technologyreview.com/contributor/emerging-techno...


Honestly, I read hacker news for the noteworthy stuff. Otherwise, I ask people who are savvy in the domain what papers I should check out - a lot of the smarter people I've worked with are raving about new architectural approaches etc.


* Shameless plug *: Our users track research papers with custom RSS feeds for Google Scholar, ResearchGate, Academia.edu etc. using our tool at https://feedity.com


In computer science, there are a few big conferences in each specific CS discipline; I usually attend those conferences or look at their programs. Computer science is a unique field in that papers are funneled through conferences.


You can also get arXiv submissions in a topic as an RSS feed and subscribe to that.
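
Relatedly, arXiv also has a documented Atom API if you'd rather script it than use a feed reader; a standard-library-only sketch:

    import urllib.request
    import xml.etree.ElementTree as ET

    url = ("http://export.arxiv.org/api/query?search_query=cat:cs.CV"
           "&sortBy=submittedDate&sortOrder=descending&max_results=20")
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)

    ns = {"atom": "http://www.w3.org/2005/Atom"}
    for entry in tree.getroot().findall("atom:entry", ns):
        print(entry.findtext("atom:title", default="", namespaces=ns).strip())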


As a semi-casual follower of NLP, the HN front page more or less takes care of the big news for me.


Google Alerts. Just insert the journal or topic names that you're most interested in. It does an incredible job not only with research papers but also with informing you pre-publishing, which has advantages.


Google Scholar Alerts, back when I was still doing academic research (optical communication).

Also, the more experienced researchers all seemed to have many connections to other researchers, through which news propagated.


A lot of groups have a journal club / article aggregator. Try to start one with your colleagues if there is none. Google Scholar alerts are also a good option if your field has nice keywords.


I have set up several Google Scholar alerts for articles. It works extremely well. I also follow everyone I can in my field on Twitter. My field is evolutionary biology.


I use Google Scholar alerts for people whose research I want to follow. You will get an email with links whenever the people you follow publish something.


Making friends with the professors in your field at the best local university, and keeping an open line of communication with them can be helpful.


In my field, most cutting-edge papers show up in the monthly digest from www.optimization-online.org, a pre-print site for optimization papers.


I also find it a shame to restrict article sources to arXiv. It would be awesome if your tool allowed saving articles from Sci-Hub into one's library. http://www.ibtimes.co.uk/sci-hub-offline-elsevier-gets-yet-a... I think scientific research should benefit all of humanity.


NBER working papers series is great for economics papers. Most go on to be published in top journals.


To emphasize: NBER working papers are not peer reviewed, yet are sometimes referenced by the media.


Feeds from journals I follow (mostly Bio-related things) and some specific alerts from NCBI.


Primarily RSS feeds - arXiv alone releases several papers each day worth at least a glance.


Rsearch.ca is a tool I made for keeping up to date with custom topics


IACR updates on Twitter, links from peers, Google Scholar alerts.


I subscribe to the arXiv rss feeds of hep-lat and nucl-th.


You can setup email alerts directly with arXiv.


Somewhat relevant to a post earlier this week, I use RSS to subscribe to various blogs / sites / alerts etc... - the problem is that it is indeed reactive and not 'organic': https://news.ycombinator.com/item?id=12196131

http://feedly.com/smcleod/blogs

That's a link to the various sites, blogs, and updates that I subscribe to. Phoronix and Ars are both a bit noisy, but other than them, the rest I take good care to keep up with.

I personally think it's fantastic that RSS has made such a comeback (some would say it never actually went away); it's such a simple, useful tool that's easy to integrate with just about anything.

----

Another interesting discussion I enjoy having is finding out how people read / digest / discover feeds. tl;dr: I use Feedly to manage my RSS subscriptions and keep all my devices in sync, but instead of using Feedly's own client, I use an app called Reeder as the client / reader itself. I can see myself dropping back to a single app / service, which would likely be Feedly, but for me Reeder is just a lot cleaner and faster. Having said that, I could be a bit stuck in my comfort zone with it, so I'm open to change if it ever causes me an issue (which it hasn't).

----

I use a combo of two tools:

1. Feedly - https://feedly.com - RSS feed subscription management.

Features:

- Keyword alerts

- Browser plugins to subscribe to (current) url

- Notation and highlighting support (a bit like Evernote)

- Search and filtering across large numbers of feeds / content

- IFTTT, Zapier, Buffer and Hootsuite integration

- Built in save / share functionality (that I only use when I'm on the website)

- Backup feeds to Dropbox

- Very fast, despite the fact that I'm in Australia - which often impacts the performance of apps / sites that tend to be hosted on AWS in the US, as the latency is so high.

- Article de-duplication is currently being developed I believe, so I'm looking forward to that!

- Easy manual import, export and backup (no vendor lock-in is important to me)

- Public sharing of your Feedly feeds (we're getting very meta here!)

2. Reeder - http://reederapp.com

A (really) beautiful and fast iOS / macOS client.

- The client apps aren't cheap, but damn, they're good quality; I much prefer them over the standard Feedly apps

- Obviously supports Feedly as a backend, but there are many other source services you can use alongside each other

- I save articles using Reeder's clip to Evernote functionality... a lot

- Sensible default keyboard shortcuts (or at least for me they felt natural; YMMV of course)

- Good customisable 'share with' options

- Looks pleasant to me

- Easy manual import and export, just like Feedly

----

- Now can someone come up with a good bookmarking addon / workflow for me? :)

Edit: Formatting - god I wish HN just used markdown


Regarding your last comment,

> - Now can someone come up with a good bookmarking addon / workflow for me? :)

Unless I've missed something, I'm puzzled why social bookmarking has never taken off or achieved critical mass. Once upon a time there was deli.cio.us (or however they punctuated it!), but when that went through a bunch of churn, I think it felt like it got semi-abandoned - I stopped using it _ages_ ago anyway.

What is more incredible to me is that Linked Data is based on URLs, so you'd think that social bookmarking would have evolved out of something in that space at some point, but to the best of my knowledge it hasn't.

Perhaps it's the organisational, classification, taxonomy/folksonomy[1], tagging conundrum that is holding this space back, I really don't know.

[1] https://en.wikipedia.org/wiki/Folksonomy


NCBI alerts. Done.



