It's also a logically trivial statement. There is no threshold for "best"; food quality is a matter of degree. So you can cite any date at all and justifiably say the food is "best" before that date.
I would not be able to tell if these slider toggles were saying "yes" or "no" without them going from gray to green. How do colorblind people deal with them?
That the whole world adopted these abominations, when there was absolutely nothing wrong with checkboxes, is immensely frustrating.
Does green (usually these are blue) mean yes? Every time I see one of these I realize I don't understand them, and I'm not even sure they are consistent in behavior. I usually rely on a context cue to infer what they mean. The dumbest part is that there is plenty of room to just write "on" and "off" right on them.
I think it is no coincidence that these are the preferred control for all manner of dark-pattern settings screens.
Most of us cope just fine, so long as the contrast between the states is clearly different. My colour blindness is anomalous trichromacy, with a perception difference for red-green. Because the examples shared use a bright blue for on and black for off, I can see them just fine. However, I'd suggest making the switch's off-state background the same as the main background in this case; aesthetically, it looks better.
PDQ Bach is funny to young people and musical neophytes, and even funnier the more erudite you are. Every time I read or hear another PDQ score, book, or record, I find jokes that I wouldn't have gotten even two weeks before.
I've heard this expression many times before, but it still doesn't make any sense to me.
I vastly prefer the FreeBSD-based setup I've used for the last 20+ years to the corporate hell that is Windows.
Not much has changed over the years - I still use rc.conf and pf.conf to configure my system. Things are still where I expect them to be.
XFCE is still the same, Thunderbird hasn't changed much, and Vim is exactly how it was. And the initial learning curve wasn't even that steep.
Meanwhile, Windows just gets worse and worse with each passing year.
So in that case, I am paying both with my wasted time and with money (though someone else's).
Interesting contrast. No question that Windows wastes a lot of one's time. Probably most people guess that Linux will waste more of their time, because they may not be able to learn easily enough what you've learned about how to configure it and keep it running smoothly. They may suspect that even if they put in a little time, they will nonetheless make fatal mistakes or be left with a system they must replace with Windows after all. So their perception or misperception of the time cost they will pay causes them to choose Windows, and maybe a lot of them are wrong in the long term.
Anything subjective that doesn't fix an identifiable execution problem must be explicitly labeled as a suggestion. You don't get first choice over the code because you are asked to be the reviewer. You are there to (1) catch mistakes and (2) teach the other coder if they appear to not know something useful that you do know. If you have a preference about a simple stylistic matter that is not covered by your style guidelines, either put it in the guidelines, or hold your tongue.
The only exception that I would tag on to that is documentation. "I can't understand this documentation/comment" deserves more attention than I think most places give it.
It doesn't directly relate to execution today, but it might later on when someone misunderstands the docs or just gives up on them and tries to hack it together blind.
IMO you should never provide feedback that can be implemented as an automated check. If you don't like deeply nested control flow, then you should catch that with static analysis. If you need code coverage, you should require it for merging. Implement your check and provide a new PR to fix your nitpicks, or shut up. The goal is to put 100% of the focus on correctness.
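The nesting example above really is cheap to automate. Here is a minimal sketch using Python's standard-library `ast` module; the set of statement types counted and the threshold of 3 are arbitrary illustrative choices, not any standard:

```python
import ast

# Arbitrary example threshold -- tune to taste for your codebase.
MAX_DEPTH = 3
NESTING = (ast.If, ast.For, ast.While, ast.With, ast.Try)

def max_nesting(node, depth=0):
    """Return the deepest level of nested control-flow statements."""
    here = depth + 1 if isinstance(node, NESTING) else depth
    return max([here] + [max_nesting(child, here)
                         for child in ast.iter_child_nodes(node)])

def check_source(source):
    """True if the source stays within the nesting limit."""
    return max_nesting(ast.parse(source)) <= MAX_DEPTH
```

Hooked into CI, a failure here blocks the merge, so the reviewer never has to raise the point by hand.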
This seems precisely backward to me. Programmers are humans, not input/output machines, and there's definitely a role for encouraging a certain standard of judgment that doesn't require tooling to enforce. To argue otherwise seems akin to arguing that any bad behavior is fine that isn't explicitly banned in paragraph 5 subsection D. Tooling is expensive, especially for smaller teams, and should be saved for phenomena that hit the cost/benefit calculations squarely.
I agree that sometimes the cost/benefit analysis doesn't justify adding tools, but I would argue that in many of those cases the benefit of your nitpicking isn't worth it either, so just don't say anything, and we can focus on the important parts of the code. Of course people will sometimes do really silly things that couldn't realistically be checked automatically, so I would hope the less important stuff gets put aside in favor of the real issues.
On that note, what good code coverage tools are out there? GitHub and Gerrit, as well as (egads) ReviewBoard, don't seem to have native support for this in the review. It's unfortunate, since it seems super useful to have that be up front and visible.
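Absent native review-tool support, one common workaround is to gate the merge in CI instead. A sketch of such a CI step, assuming a Python project using pytest with the pytest-cov plugin (the package name and the 80% threshold are made up for illustration):

```shell
# Hypothetical CI gate: the run exits non-zero (blocking the merge)
# when total coverage falls below the chosen threshold.
pytest --cov=mypackage --cov-fail-under=80
```

It isn't inline in the diff view, but it does make coverage impossible to ignore.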
Doesn't seem hard to me? You could easily run a spell checker on identifiers and comments. It will produce a lot of false positives, but that can be solved by making the changes optional or using an allow list.
Note that the important part is a good way to mark all those false positives, one that is simple and doesn't go too far. Just because I want a bad spelling in one place doesn't mean I want it everywhere. As a result, I'm going to predict that your tool either requires too much boilerplate to suppress all the false positives, or lets through a lot of things that shouldn't pass. But that might just be that I don't have good ideas: if you create a good tool for this, I'm willing to be proven wrong.
Does it understand that when talking about HTTP headers I write "referer", but when talking about JavaScript I have to use "referrer"? What is wrong in one place can be right elsewhere. Terminology, spelling, and accepted abbreviations depend very much on context. And sometimes even a wrong spelling is right because it's in some standard ...
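For what it's worth, the context objection can be addressed by scoping the allow list, for example keying exceptions by file path so that accepting "referer" in the HTTP code doesn't accept it everywhere. A rough sketch; the dictionary, paths, and function names here are all invented for illustration:

```python
import re

# Tiny stand-in for a real word list.
DICTIONARY = {"parse", "header", "request", "count", "referrer"}

# Exceptions keyed by (path, word), so a deliberate misspelling in
# one file does not silence the check across the whole codebase.
ALLOW = {("http/headers.py", "referer")}

def split_identifier(name):
    """Split camelCase and snake_case identifiers into lowercase words."""
    parts = re.split(r"_|(?<=[a-z])(?=[A-Z])", name)
    return [p.lower() for p in parts if p]

def check(path, identifiers):
    """Yield (path, word) pairs that are neither known nor allowed."""
    for ident in identifiers:
        for word in split_identifier(ident):
            if word not in DICTIONARY and (path, word) not in ALLOW:
                yield path, word
```

Whether the suppression boilerplate stays tolerable in practice is exactly the open question raised above.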
Voice assistants do have a lot of potential due to the fantastic ergonomics. "Voice command line" is used as a pejorative here but it's true and a good thing. The original visionaries were right to pursue this idea.
To me, where it's gone wrong is that, like many other things, the A team conceives it but the B team is responsible for its development. The A team asks "what will users want to say to it?" But the B team says "well, if they say 'blah', how do we know that they mean 'blah'?" Mediocrity creeps in.
As an example, here is a typical dialog between me and my Alexa.