Hmm, two downvotes. My first. So comments would be especially welcome.
Context: The parent observed that with a small number of people up/down voting, the result was noisy. I observed that the numbers were sufficient if the system used more of the information available to it, and that the failure to do so is a long-standing problem.
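As a sketch of what "more of the information" could mean: even with a handful of votes, a confidence bound on the upvote fraction is more informative than a raw count or ratio. A minimal illustration in Python, using a Wilson lower bound; the function and numbers are mine, not how HN actually ranks:

    import math

    def wilson_lower_bound(upvotes, downvotes, z=1.96):
        """Lower bound of the Wilson score interval for the true upvote
        fraction, given a small, noisy sample (95% confidence by default)."""
        n = upvotes + downvotes
        if n == 0:
            return 0.0
        p = upvotes / n
        centre = p + z * z / (2 * n)
        spread = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
        return (centre - spread) / (1 + z * z / n)

    # A raw upvote ratio calls 2-0 (100%) "better" than 40-20 (67%);
    # the lower bound reflects how little two votes actually tell us.
    print(wilson_lower_bound(2, 0))    # ~0.34
    print(wilson_lower_bound(40, 20))  # ~0.54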
Details, in reverse order:
"Civilization": Does anyone not think the state of decision and discussion support tech is a critical bottleneck in engineering, business, or governance? "AR": A principle difficulty is always integrating support tech with existing process. AR opens this up greatly. At a minimum, think slack bots for in-person conversations. "crowdsourcing": or human computation, or social computing, is creating hybrid human-computer processes, where the computer system better understands the domain, the humans involved, and better utilizes the humans, than does a traditional systems. "ML": a common way to better understand a domain. As ML, human computation, textual analysis, etc, all mature, the cost and barrier to utilizing them in creating better discussion systems declines. "Usenet": Usenet discussion support tooling plateaued years before it declined. Years of having a problem, and it not being addressed. Of "did person X ever finishing their research/implementation of tool Y". "decades": that was mid-1990's, two decades ago. "little changes": Discussion support systems remains an "active" area of research - for a very low-activity and low-quality value of "active". I'm unclear on what else could be controversial here.
For anyone who hasn't read the paper, it's fun - it was a best paper at SIGCHI that year, and the web page has a video. A key idea is that redundant use of a small number of less-skilled humans (undergraduates grading exam questions) can, if intelligently combined, give performance comparable to an expert human (a graduate student grader). Similar results have since been shown in other domains, such as combining "is this relevant research?" judgments from cheaper, less-specialized doctors with more-expensive, specialized ones. On HN, it's not possible to have a full-time staff of highly skilled editors. But it is technically plausible to use the existing human participants to achieve a similar effect. That we, as a field, are not trying very hard reflects on incentives.
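The combining step is simple enough to sketch. The paper's actual scheme is more sophisticated, but here is a minimal, hand-rolled version of "redundant less-skilled judgments, intelligently combined" (the function, scores, and weights are illustrative, not taken from the paper):

    from statistics import median

    def combined_grade(scores, reliabilities=None):
        """Combine several non-expert scores for one exam answer.
        Without reliability estimates, take the median (robust to one
        careless grader); with them, take a reliability-weighted mean."""
        if reliabilities is None:
            return median(scores)
        total = sum(reliabilities)
        return sum(s * r for s, r in zip(scores, reliabilities)) / total

    # Three undergraduates grade the same answer out of 10; the weighted
    # mean leans on graders who have historically agreed with the expert.
    print(combined_grade([6, 7, 9]))                                 # 7
    print(combined_grade([6, 7, 9], reliabilities=[0.9, 0.8, 0.3]))  # ~6.85

The HN analogue would be weighting votes and flags by each account's track record rather than counting them all equally - which is exactly "using more of the information available".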