I dislike that this elevates and canonizes design decisions out of context.
However, this is how we train machine learning systems, and I enjoy it as social or artistic commentary in that way. How would you like to be trained this way, and what do you learn from being trained this way?
It implies that all knowledge is a convention we adopt, plus the ability to match or reproduce it.
---
I can do what the task asks, almost entirely (score TBD), but it reminds me of surveys with rigged questions. It's technically correct: technically, a majority of people answered the questions in a predictable way, and "predictable" will be argued to mean "correct".
As a sibling comment mentions, the issue is overuse of the default style. Another pattern would be to use a secondary outlined style, making Skip as prominent as the default action.
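A minimal sketch of that pattern, assuming a React/TypeScript UI (the component, labels, and colors are my own placeholders, not from the quiz): both actions share the same footprint, and only the fill differs, so neither choice is visually buried.

```tsx
import React from "react";

// Shared footprint: same padding, radius, and font size for both actions.
const base: React.CSSProperties = {
  padding: "8px 20px",
  borderRadius: 6,
  fontSize: 14,
  cursor: "pointer",
};

// Hypothetical component: the primary action is filled, while Skip is a
// secondary outlined button with equal size and prominence.
export function ConsentActions(props: { onAccept: () => void; onSkip: () => void }) {
  return (
    <div style={{ display: "flex", gap: 12 }}>
      <button
        onClick={props.onAccept}
        style={{ ...base, background: "#1a73e8", border: "1px solid #1a73e8", color: "#fff" }}
      >
        Accept
      </button>
      <button
        onClick={props.onSkip}
        style={{ ...base, background: "transparent", border: "1px solid #1a73e8", color: "#1a73e8" }}
      >
        Skip
      </button>
    </div>
  );
}
```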
As others mentioned, there were dark patterns presented as "more correct". There were also several stylistic choices that are entirely subjective, a major one being border radius.
Yeah, rounded corners look nice, but so do clean rectangular edges. The test is missing context; in that sense it is pretty subjective.
Some of the alignment issues were fairly objective; many of them were not, e.g. some of the vertical alignment ones. Of course you want consistent alignment, but you simply can't look at two images and figure out which is "correct". That's a design decision.
Resizing the image of the guitar to fill the space was "correct" over showing the whole image. In the context of someone selling something, seeing the whole image before clicking through would be important, versus potentially having the subject of the photo cropped out entirely.
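For what it's worth, this is essentially the cover-versus-contain trade-off. A minimal sketch in React/TypeScript (the component and prop names are hypothetical, not from the quiz):

```tsx
import React from "react";

// Hypothetical listing thumbnail: the quiz's "correct" answer crops the photo to
// fill the tile ("cover"); a marketplace context may prefer showing the whole
// photo ("contain"), accepting letterbox bars so the subject is never cropped out.
export function ListingThumb(props: { src: string; showWholeImage?: boolean }) {
  return (
    <img
      src={props.src}
      alt="Listing photo"
      style={{
        width: 240,
        height: 180,
        objectFit: props.showWholeImage ? "contain" : "cover",
        background: "#f2f2f2", // visible letterboxing when the whole image is shown
      }}
    />
  );
}
```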
There were several where the text in the two versions appeared to be different, but resolved to the same during the compare step.
Well characterized. You've summarized the clearer-cut examples of smuggling in a meaning of "correct" by presenting everything in the same context, from "consistent alignment" to beknighted stylistic choices and on to manufactured consent.
Perhaps these claims are the result of A/B testing and measuring differential engagement. In which case: cool! We lowly internet addicts can guess or determine what is most effective from design rules, aesthetic preferences, and Stockholm syndrome (kidding, a little).
YES! I came here to say that. I picked the option where both buttons are blue. Ugly? Yes. Respectful to me as a user? Very much yes. Making a dark pattern look good is not helping the user.
Insisting that the "last seen 2h ago" online indicator must be green and nothing else, or requiring border radius on text boxes, etc., is not objective in any way.
The icon consistency check is subjectively scored, as the "correct" version is undeniably bad design.
The solid camera icon is off-center (it looks smaller and mangled), but the hollow versions of the mic and camera effectively communicate whether they are "on" and are sized correctly.
Some of these are quite subtle. I've come around: I appreciate being faced with lots of side-by-side examples, and having the number of good decisions that design benefits from enumerated.
---
On the rigged-survey comparison above: searching for fallacious leading questions in surveys, I found https://www.cagw.org/thewastewatcher/fallacy-surveys-and-stu... which examines misleading net neutrality surveys cited in Congress, a mix of topics likely to draw discussion.