
We know how to deal with this, and have for years: a bot which instruments and invokes humans, learning about both content and individuals. Few humans are needed each time, and those need not be experts, if used well. 20 people is much more than enough. A candy machine, in an undergraduate lounge, can grade CS 101 exams.[1] Ah, but with discussion support - from Usenet to reddit (and on HN too) - incentives do not align with need. Decades pass, and little changes. Perhaps as ML and crowdsourcing and AR mature? Civilization may someday be thought worth the candle. Someday?

[1] http://represent.berkeley.edu/umati/

Edit: tl;dr: Future bot: "I have a submission. It has a topic, a shape, and other metrics. It's from a submitter, with a history. Perhaps it has comments, also with metrics, from people also with histories. I have people available, currently reading HN, who all have histories. That's a lot of data - I can do statistics. Who might best reduce my optimization function uncertainty? I choose consults, draw the submission to their attention, and ask them questions. I iterate and converge." Versus drooling bot: "Uhh, a down vote click. Might be an expert, might be eternal September... duh, don't know, don't care. points--. Duh, done."
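
To make the tl;dr concrete, here is a toy version of that loop in Python. Everything in it - the two-state quality model, the Reader class, the reliability numbers - is invented for illustration; this is a sketch of the idea, not anyone's actual machinery:

    import math, random
    from dataclasses import dataclass

    @dataclass
    class Reader:
        name: str
        reliability: float  # P(vote agrees with true quality), from history

    def entropy(p):
        """Binary entropy of P(submission is good)."""
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def update(p, upvote, r):
        """Bayes update of P(good) after one vote from a reader of reliability r."""
        like_good = r if upvote else 1 - r
        like_bad = (1 - r) if upvote else r
        return like_good * p / (like_good * p + like_bad * (1 - p))

    def expected_entropy(p, r):
        """Expected posterior uncertainty if we consult this reader."""
        p_up = r * p + (1 - r) * (1 - p)  # marginal chance of an upvote
        return (p_up * entropy(update(p, True, r))
                + (1 - p_up) * entropy(update(p, False, r)))

    def converge(readers, truly_good, p=0.5, stop=0.2):
        """Pick the most informative reader, ask, update, repeat."""
        pool = list(readers)
        while pool and entropy(p) > stop:
            best = min(pool, key=lambda rd: expected_entropy(p, rd.reliability))
            pool.remove(best)
            agrees = random.random() < best.reliability  # simulate the consult
            vote = agrees if truly_good else not agrees
            p = update(p, vote, best.reliability)
        return p

    readers = [Reader(f"user{i}", random.uniform(0.55, 0.9)) for i in range(20)]
    print(f"P(good) after consults: {converge(readers, truly_good=True):.2f}")

The point isn't the particular model - it's that the bot chooses whom to ask based on what it already knows, instead of treating every click identically.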

Hmm, two downvotes. My first. So comments would be especially welcome.

Context: The parent observed that with a small number of people up/down voting, the result was noisy. I observed that the numbers were sufficient if the system used more of the information available to it, and that the failure to do so is a long-standing problem.

Details, in reverse order:

"Civilization": Does anyone not think the state of decision and discussion support tech is a critical bottleneck in engineering, business, or governance?

"AR": A principal difficulty is always integrating support tech with existing process. AR opens this up greatly. At a minimum, think Slack bots for in-person conversations.

"crowdsourcing": Or human computation, or social computing - creating hybrid human-computer processes, where the computer system better understands the domain and the humans involved, and better utilizes those humans, than a traditional system does.

"ML": A common way to better understand a domain. As ML, human computation, textual analysis, etc., all mature, the cost of and barriers to using them to build better discussion systems decline.

"Usenet": Usenet discussion support tooling plateaued years before it declined. Years of having a problem, and of it not being addressed. Of "did person X ever finish their research/implementation of tool Y?".

"decades": That was the mid-1990s, two decades ago.

"little changes": Discussion support systems remain an "active" area of research - for a very low-activity and low-quality value of "active".

I'm unclear on what else could be controversial here.

For anyone who hasn't read the paper, it's fun - it was a best paper at SIGCHI that year, and the web page has a video. A key idea is that redundant use of a small number of less-skilled humans (undergraduates grading exam questions) can, if intelligently combined, give performance comparable to that of an expert human (a graduate-student grader). Similar results have since been shown in other domains, such as combining "is this relevant research?" judgments from cheaper, less-specialized doctors with those from more-expensive specialists. On HN, it's not possible to have a full-time staff of highly skilled editors. But it is technically plausible to use the existing human participants to achieve a similar effect. That we, as a field, are not trying very hard reflects on incentives.
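
To illustrate the combining trick (and only that - the accuracies and grades below are invented, and this is the standard log-odds weighted-majority scheme, not necessarily the paper's exact method):

    import math

    def combine(votes, accuracy):
        """Weighted-majority answer to a yes/no grading question.
        votes:    grader name -> yes/no judgment
        accuracy: grader name -> estimated P(grader is correct)
        """
        score = 0.0
        for grader, vote in votes.items():
            a = accuracy[grader]
            weight = math.log(a / (1 - a))  # log-odds: better graders count more
            score += weight if vote else -weight
        return score > 0

    # Three undergrads at 70% accuracy out-vote a dissenter at 60%:
    votes = {"u1": True, "u2": True, "u3": True, "u4": False}
    accuracy = {"u1": 0.7, "u2": 0.7, "u3": 0.7, "u4": 0.6}
    print(combine(votes, accuracy))  # True

With enough redundancy and decent accuracy estimates, the combined answer beats any single cheap grader - that's the whole game.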
