That's a great concept. I've always wondered why "best X for the money" (like the columns on tomshardware) weren't just automated, given the availability of crowdsourced benchmarks (http://www.userbenchmark.com).
I wish I could specify other specific priorities, like video conversion, compression, or data management, so it would trade off between CPU, SSD speed, SSD storage space, and RAM in slightly different ways (though maybe that level of optimization is overkill).
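The automated ranking itself seems doable with little more than a score-per-dollar sort. A minimal sketch of what I mean, where the part names, benchmark scores, and prices are all made up:

```python
# Hypothetical sketch: rank parts by crowdsourced benchmark score per
# dollar. Part names, scores, and prices are made up for illustration.
scores = {"GTX 1060": 10054, "RX 580": 9876, "GTX 1070": 13212}
prices = {"GTX 1060": 249.0, "RX 580": 229.0, "GTX 1070": 379.0}

def value_ranking(scores, prices):
    """Sort parts by benchmark score per dollar, best value first."""
    return sorted(scores, key=lambda part: scores[part] / prices[part],
                  reverse=True)

print(value_ranking(scores, prices))
# ['RX 580', 'GTX 1060', 'GTX 1070']
```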
> I wish I could specify other specific priorities
I tend to think that's overkill :) But one can toggle between a gamer setup and a mainly cpu-heavy app focus; it's in the advanced menu under the recommend button. Switching between gpu- and cpu-heavy builds catches a lot of those different use cases.
One problem with those is that they are often based on artificial benchmarks, which most of the time is not what you want. I use benchmark results from professional publications instead. The problem then moves to ordering, but that's manageable with enough data.
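To illustrate why I think the ordering is manageable: normalize each publication's scores to its own best result, then average across publications. A rough sketch, with invented publication names and numbers:

```python
# Rough sketch: merge per-publication benchmark results into one ordering.
# Normalize each publication's scores to its own best result, then average,
# so differing test suites and units don't skew the ranking.
# Publication names and numbers are invented for illustration.
results = {
    "pub_a": {"GTX 1070": 98, "GTX 1060": 74, "RX 580": 71},
    "pub_b": {"GTX 1070": 143, "GTX 1060": 104, "RX 580": 101},
}

def aggregate(results):
    merged = {}
    for scores in results.values():
        top = max(scores.values())
        for part, score in scores.items():
            merged.setdefault(part, []).append(score / top)
    return sorted(merged, key=lambda p: sum(merged[p]) / len(merged[p]),
                  reverse=True)

print(aggregate(results))  # ['GTX 1070', 'GTX 1060', 'RX 580']
```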
> > I wish I could specify other specific priorities
> I tend to think that's overkill :)
Yeah, I think you're right. It's like that old Knuth quote: "Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs."
> I use benchmark results from professional publications instead.
Yeah, I generally do that too, I just thought userbenchmarks would be easier to pull via an API and more comprehensive.
Though thinking about it now, I'm not sure why I distrust UB so much. In a sense I'm just choosing different biases. Even in the best case, a reliable reviewer who personally buys each part at a retailer to avoid "reviewer binning," I still risk a sampling error due to the small numbers (typically n = 1) tested. The bell curves on two different CPUs might overlap considerably. So buying a marginally better rated processor might only give you better performance in 51% of purchases.
No professional reviewer could afford the cost or time to individually buy and test a statistically significant number of devices to determine their variance in operation.
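To put a rough number on that overlap: if per-unit performance is roughly normal, the chance that a random sample of part A beats a random sample of part B depends only on the mean gap relative to the combined spread. A quick check with made-up numbers:

```python
# Back-of-envelope: two CPUs whose unit-to-unit performance overlaps.
# All numbers are made up. NormalDist is in the stdlib (Python 3.8+).
from statistics import NormalDist

mu_a, mu_b = 101.0, 100.0  # part A is rated marginally better
sigma = 5.0                # same unit-to-unit spread for both parts

# A - B is normal with mean mu_a - mu_b and stdev sqrt(2) * sigma,
# so P(a random A beats a random B) = P(A - B > 0):
p = 1 - NormalDist(mu_a - mu_b, (2 * sigma**2) ** 0.5).cdf(0)
print(f"{p:.0%}")  # ~56%; tighter gaps or wider spreads push it toward 51%
```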
On the other hand, the errors in judgment that might creep in from small-sample variance are almost certainly washed out completely by the leaps between generations, right? So trustworthy reviewers are probably "good enough," and worrying about that issue is another form of unnecessary optimization, as much overkill as my first idea.
I'm open to that possibility.
Though we might go further down this line... Maybe the performance exaggerations due to conformance to published benchmarks aren't much of an issue either. As long as we're talking about ranking cards for real-world usage, the odds that a card will perform noticeably better at benchmarks but noticeably worse at tasks I care about are... well, nonzero, but small. Smaller still if we upgrade our threshold from "noticeably" to "significantly" or "substantially."
I mean, it's your own call where you pull your data from, because no data is perfect, and we probably pull from the same places. But you've really got me thinking everything is needless optimization, and now I can't stop. :)
I wanted to write "the problem is less the unprofessionalism of the benchmarkers than the kind of benchmarks they are running; not being real games, they won't accurately reflect performance in them." Ofc, that is the bias I work with: gaming performance is the most important factor, it's the default profile. But actually, the average benchmark position of gpus is pretty good. It looks like they were very careful in selecting synthetic benchmarks that reflect gaming performance accurately. Did that change?
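Btw, that "do the synthetics track real games" question is easy to sanity-check once you have both orderings, e.g. with a rank correlation. A sketch with invented scores:

```python
# Sanity check: does a synthetic gpu ranking track measured game
# performance? A rank correlation of 1.0 means the orderings agree.
# All numbers are invented for illustration.
from scipy.stats import spearmanr

synthetic = [100, 86, 71, 64, 52]  # synthetic benchmark scores per gpu
in_game = [98, 88, 74, 60, 55]     # avg fps for the same gpus, same order

rho, _ = spearmanr(synthetic, in_game)
print(rho)  # 1.0 here: the synthetic ordering matches the in-game one
```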
I now looked at cpus again... yeah, there it would not work, that is too far off from what I see in gaming performance. But cpus are hard, with single- and multi-threaded performance differing so much and having different effects in each game you look at.
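If one did want a cpu ordering anyway, the usual compromise is a per-profile weighted blend of the two. A toy version, where the cpu names, scores, and the 0.7 weight are guesses rather than calibrated data:

```python
# Toy per-profile cpu score: blend single- and multi-threaded results,
# normalizing each axis first so the larger multi-thread scale doesn't
# dominate. Names, scores, and the 0.7 weight are uncalibrated guesses.
cpus = {"i5-8400": (182, 960), "R7 1700": (158, 1390)}  # (single, multi)

top_single = max(s for s, _ in cpus.values())
top_multi = max(m for _, m in cpus.values())

def profile_score(single, multi, single_weight=0.7):
    """Gaming profile leans on single-thread; lower the weight for apps."""
    return (single_weight * single / top_single
            + (1 - single_weight) * multi / top_multi)

for name, (single, multi) in cpus.items():
    print(name, round(profile_score(single, multi), 3))
```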
So imho one could only use the gpu ranking, but the gpus are not hard to rank anyway :)
But now I wonder whether that would not be a good data source for SSD and HDD performance. Maybe even for RAM...
Thanks for your thoughts, you made me take a second look at something I had discarded earlier!