There is a similar organization in the USA: Consumer Reports. It used to be a magazine; now I believe it's just a website. Entirely funded by subscriptions - not advertising or other sponsorship - they tackle entire categories of consumer goods in the USA, rigorously testing and ranking competing products across many metrics.
If they give your product good marks, you are not allowed to mention it in your marketing (Not sure how they enforce it; maybe they stop reviewing your stuff).
I worked at a company that regularly got top marks from them, and our Marketing folks would have fits, because they couldn't mention it.
Sounds great, but Consumer Reports definitely has a checkered past. Read about the Suzuki Samurai debacle, in which they methodically manipulated their tests (strictly for that vehicle, and not any of the others in its class that they were simultaneously evaluating), with the goal of destroying the vehicle's reputation.
I am less confident in either your contention or the position you are attacking than I am in the idea that this particular website is going to lie to me about it.
Why does anyone need to stand up for Rush? It was a great band and not really controversial, plus they disbanded years ago because the drummer died. They don't exist any more, just like Led Zeppelin no longer really exists (also because the drummer died, coincidentally).
That's not really Rush, just like the surviving Led Zeppelin players playing a concert with Jason Bonham isn't really LZ. The guest drummer isn't an official band member in either case, just a guest musician. The band itself just doesn't exist; now it's "the surviving members of band X". Pink Floyd is the same way; they disbanded after their keyboardist died.
> Led Zeppelin should reunite with the deceased drummer’s son playing drums
That's kind of how it worked when Zep was one of the recipients of the Kennedy Center Honors a few years back, with Ann and Nancy Wilson + Jason Bonham, son of John, + others doing Stairway to Heaven.
The Rock and Roll Hall of Fame is a complete joke, and has nothing to do with real Rock and Roll music. They regularly induct musicians and performers who have absolutely nothing to do with Rock. It's best to just ignore the whole institution.
So the website also publishes right wing propaganda. Does that invalidate the claims in the article he linked? Did you read the article before formulating your opinion?
This is basic critical thinking. You don't have to be as shallow as many readers of that website probably are; you're just choosing to be.
If somebody writes you a long piece featuring many pages of text and references, quoting lawsuits and making numerous reasonable-seeming inferences throughout...
The opportunity they have to lie to somebody who is unfamiliar with the subject is immense. Falsehood could be hiding in any of a thousand places, and it could easily require you to hire a team of experts for weeks to find it and conclusively debunk it, line by line. It may well require decompiling what is functionally or literally the source code behind the piece to dismiss one's suspicions. "Basic critical thinking" is not trivial against a determined adversary.
Whether to take the claims within at face value depends on your purpose and on what you know of the writer. In this case, it is very easy to become quite familiar with the motivation and ethics of the writer in under a minute by clicking around the website, and to come to the reasonable conclusion that this is a place that generally attempts to deceive its readers to secure material gain for its patrons & movement, and that there are likely deliberate lies somewhere in the body of the piece. You don't even need to read the body.
In general, if someone cites a wacky propaganda website as evidence for something, I'm going to assume that they're doing this because there's no proper evidence. I suppose occasionally this isn't the case, and they've just made a bizarre choice on what to link, but if it walks like a duck and quacks like a duck...
It did not get little media attention. Indeed, the lawsuit got rather a lot.
> AIM has submitted an amicus brief in the case, arguing that Suzuki should be allowed to present its evidence to a jury. It is hard to understand how any judge could honestly rule that the evidence in this case does not prove that the defendant knew that its claim that the Samurai “rolled over easily” was false.
The source definitely has a dog in this hunt.
Who are we to believe, a partisan in the lawsuit, or the trial judge? And why?
It's very likely that AIM's goal is to present the best facts in their argument, and ignore or minimize other factors. Or as CU put it (quoting https://www.theautochannel.com/news/press/date/19970422/pres... ): "First it was the cigarette makers, now it's an automobile manufacturer. Different industries, same desperate tactics. Throw up a smoke screen, hurl ludicrous charges, and falsely claim (despite overwhelming evidence to the contrary) that their product is safe -- all to avoid liability for defective and dangerous products." ...
> "We welcome and invite NHTSA to evaluate our honesty and integrity.
Courts have done so and found unanimously that our methodology was beyond
reproach. For example, the U.S. Court of Appeals in New York -- one of the
nation's most highly respected courts -- has said that our work 'exemplifies
the very highest order of responsible journalism.'"" ...
> "On the other hand, Dr. Pittle said, in a decision that the U.S. Supreme
Court refused to review, a Federal Court of Appeals stated that Suzuki and its
attorneys "engaged in an unrelenting campaign to obfuscate the truth."
> "The truth that was revealed despite Suzuki's cover-up is that Suzuki
knew -- prior to first selling the car in the U.S. -- that the vehicle had a
'rollover problem' and that General Motors refused to sell the car because its
evaluation demonstrated the danger of rollover," Dr. Pittle said.
Do you really expect HN readers to act like trial court judges, decide which of these two partisans is correct, and dig through decades-old material to offer a point-by-point rebuttal?
If the evidence is so clear-cut, why did Suzuki and CU end up with a rather mundane settlement?
I hadn't heard about this, thanks for the link. I did a little digging and it doesn't appear to be quite as clear-cut as that article says. For instance, check out https://www.theautochannel.com/news/press/date/19970422/pres... (a CR press release), where they mention internal Suzuki documents acknowledging the rollover issue.
I've read the text of the lawsuit. As you mention, this is a press release, so I'm highly skeptical of it. Video documentation of CR's manipulation of the tests is on YouTube. It's wild stuff: https://www.youtube.com/watch?v=q2Bv9WL3vpY
Best guess? CR fabricated their test results because they couldn't figure out how to replicate the real-life problems on their test course. Which is to say, both sides are in the wrong.
Every review site "fabricates their test results"; that's the whole point. You create a test that documents your assessment.
The ruling on appeal makes the argument better than I could:
> [The] first theory is that CU knew it was probably lying because its employees tried to make the Samurai flip and were happy when they succeeded. The second is that CU purposely avoided the truth by failing to address a potential source of experimental error. Neither of these theories withstands serious scrutiny.
The opinion ends up concluding that the entire reason for changing the test setup, along with a description of the changes, was present in the article.
Not if you were trying to decide between the Samurai and the other vehicles CR was "evaluating" (a Wrangler and the Bronco II). Internally, CR's testers praised the Samurai as having the best handling of the bunch, but their editor made sure the public never heard this.
What are you talking about? The original review literally mentioned: "Under the touch of our drivers, all four utility vehicles got through the course at 52 mph or better. The Suzuki Samurai was actually more maneuverable than the others, since it's so much smaller and lighter"
Tell me again how it was some cover-up. They literally published your argument along with the review.
They might be ethical, but they don't "live" with appliances to really figure out what they're dealing with, either. I used Consumer Reports' recommendation to buy a full set of appliances for a new house about 18 years ago. I bought "GE Gold" washer, dryer, fridge, stove, microwave, and dishwasher.
Within 2 years, every single one had failures. For instance, the oven's convection fan failed in a month. The washer AND dryer completely failed within 3 years. I bought refurbished units from a local guy, and when I told him what I had, he didn't even want to take them to flip.
I, too, resigned myself to the fact that, unless you pay for commercial-grade appliances, it's all crap, and you may as well just buy the cheapest thing at Lowe's, and replace it when it fails. The industry deserves all the loss of trust they have earned.
I've had great success with them: my vacuum is going strong after 13+ years, and my washer and dryer are still great after 9 years. But I'm still harboring negative feelings about their car reviews. They marked the 2013 Ford Edge as a great vehicle with minimal flaws, but when they reviewed the 2014 Ford Edge they found an array of problems and lowered the score. While I was researching and shopping for my Ford Edge, I found the 2013 and 2014 were essentially the same car. I think they reinforced the weld points for the anchor points to support 60 lbs instead of 45 lbs, but that was the extent of the changes: small and incremental. Yet CR faulted the 2014 for excess road noise, stiff suspension, and reliability. I test drove both model years and they performed and sounded exactly the same; major parts, including the suspension, were interchangeable as well. We went with the 2013 to save money before the CR reviews for 2014 came out. It made me realize their reviews are not consistent, especially when the Tesla Model S went from having top marks to suddenly being scored very low. These are not cars that drastically changed between years, so buyer beware, YMMV.
FWIW, I do love my Ford Edge and it is still the daily driver for our household
You do need to distinguish between their reviews (which are done by CR internal people and reflect their values and judgement) and the ratings, which are done by surveying CR subscribers who own the products.
I'm a member of CR, but they obviously have their biases and blind spots.
Sometimes they start with a premise they want to prove instead of just providing a straight review of the products. They may do this by selecting the criteria (key performance metrics), for example.
Sometimes, they just don't competently evaluate the products because they fail to take into account real world consumer needs.
The quality of reviews in CR these days doesn't hold a candle to what CR used to provide, but I remain a member because even a weak signal is better than the other random stuff out there like Amazon reviews.
They actually do allow you to mention it in marketing; I've worked with them for a consumer product. What you can't do is use it in "paid" marketing.
Emails, social media posts, website landing pages, and in-store/retail collateral are all fully acceptable. You can also pay additional fees to them for additional materials to use in communications.
Another similar site is https://www.rtings.com/. They do very scientific, thorough quality tests. I've only used them to buy monitors so far, but it looks like they're starting to branch out from tech -- they have new categories for blenders and vacuums.
Hopefully they don't go down the same path as the Wirecutter. It started out great, small, and independent, then slowly started watering down reviews as it branched into more and more areas. It's now owned by the NYTimes, and the quality of the reviews is much more hit-and-miss.
rtings is trying to push a subscription now. I was looking up wireless mice latency, and after looking at 5 mice, I had used up all my free views of "advanced metrics" like click latency.
rtings does good work; I'd pay a flat fee for it. But a subscription for a website I check every 2-3 years when I'm upgrading some tech? I'll just clear my cookies.
> I was looking up wireless mice latency, and after looking at 5 mice, I had used up all my free views of "advanced metrics" like click latency.
If you're specifically looking at mice, you might want to check out RocketJumpNinja. The reviews are pretty biased towards suitability for FPS, but they are quite in depth, and the reviewer is pretty knowledgeable.
Their conclusions are somewhat suspect sometimes though… with weird qualifications for "best" that often barely affect actual functioning. Like heavily weighting quietness over power.
Are you referring specifically to their car reviews? Those seem to regularly attract criticism for not being more like traditional car enthusiast-oriented reviews.
I think that's largely due to car enthusiasts having insufficient self-awareness about the degree to which their priorities differ from those of mainstream consumers. PC gamers and PC building enthusiasts are also frustratingly prone to this kind of thing. (I spent several years reviewing PC hardware for a living, which included constantly fielding comments from readers who seemed genuinely unable to understand how there could be a market for low-end components.)
CR's core failing is, on the surface, their greatest strength: they refuse to have any "special" contact with any manufacturers. Unfortunately, this also means that they don't ask, or listen, when their test procedures are nonsensical.
Last year, I was looking to upgrade my desktop. Found a review that was bemoaning a motherboard because it only had two M.2 slots. How many consumers use two, let alone would benefit from a third?
I have the opposite complaint about recent motherboards: only one PCIe slot (x16) is connected directly to the CPU (not via the chipset). A PCIe 5.0 x16 slot is overkill even for the average consumer, because 4.0 x8 is still enough for a modern gaming GPU, and even if that becomes insufficient, 5.0 x8 should be enough for the foreseeable future. The lack of high-bandwidth dedicated PCIe slots for anything other than the GPU makes the PC less expandable, e.g. for video capture, another GPU (not for SLI), 10GbE, an HBA, etc...
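For a rough sense of the bandwidth numbers behind that claim, here's a back-of-the-envelope sketch; it assumes per-lane transfer rates of 8/16/32 GT/s for PCIe 3.0/4.0/5.0 and 128b/130b line encoding, and ignores other protocol overhead, so real-world figures are somewhat lower:

```python
# Rough PCIe usable-bandwidth estimate per link (128b/130b encoding only;
# packet/protocol overhead is ignored, so treat these as upper bounds).
GT_PER_S = {3: 8, 4: 16, 5: 32}  # raw per-lane transfer rate by generation

def usable_gb_per_s(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for a PCIe link of `gen` x `lanes`."""
    return GT_PER_S[gen] * (128 / 130) / 8 * lanes

for gen, lanes in [(4, 8), (4, 16), (5, 8), (5, 16)]:
    print(f"PCIe {gen}.0 x{lanes:<2}: ~{usable_gb_per_s(gen, lanes):.1f} GB/s")
# PCIe 4.0 x8 : ~15.8 GB/s    PCIe 5.0 x8 : ~31.5 GB/s
# PCIe 4.0 x16: ~31.5 GB/s    PCIe 5.0 x16: ~63.0 GB/s
```

That ~16 GB/s for 4.0 x8 is the gist of the "still enough for a modern gaming GPU" point above, and it shows how much headroom a 5.0 x16 slot leaves unused for a typical single-GPU build.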
I hate it too. The allocation of PCIe lanes is garbage unless you spend $500 on a motherboard. We should have had the lanes divided better once we hit 4.0 speeds. At 5.0 speeds, it's absurd that 16 full-speed lanes would go to a single slot except for very specialist scenarios.
Part of the problem is that it's quite difficult to get a PCIe 5.0 signal to travel further than the first slot while keeping the motherboard price reasonable by consumer standards.
I suspect that AMD & Intel push the motherboard manufacturers in that direction. You can certainly get more high-speed PCIe slots if you get one of the higher-end product lines (Threadripper, Xeon, etc.)
Yeah, sadly that's how it is now. PCIe x8/x8 or x8/x4/x4 bifurcation from the CPU used to be common on standard-priced boards like the Z170X-UD3, but it now requires a mid-to-high-priced board. More flexible slots (dynamically switched by a PLX chip) have always been reserved for high-end boards.
Since the only thing consumers use full-size PCIe slots for now is a single GPU, extra M.2 slots aren't that big of an ask. One M.2 keyed for an interchangeable wifi card, one for NVMe storage, and then an extra one if you want to upgrade to more storage later. If you only have one keyed for storage, you're SoL when you buy a bigger one later, and it's very inconvenient to move data over.
PC motherboard marketing and reviews usually don't count the WiFi card slot when tallying up the number of M.2 slots (since approximately zero motherboards are sold with an empty WiFi-type M.2 slot), so the complaint was most likely about not having more than two storage-type M.2 slots.
It's not a big ask, but it's also quite unimportant. Adapter cards to put one M.2 card into a PCIe slot are in the $5 range, and better adapters will do four.
At least they're open about how they weighed their values. And they give you enough information to draw your own conclusions from your own values. Some people want to be told what's "best", and they've found a way to make everyone (minus one) happy.
I had a problem with them where their rating methodology for carpet cleaners was not adequate, so the cleaner with the strongest cleaning capability was something like 10th on the list instead of first.
I've seen some reviews of things I know a fair amount about, and often the testing and rating rubrics are... bewildering and unsophisticated. I love the idea of Consumer Reports a lot more than the actual thing itself.