Not true; UX designers are typically responsible for advocating for a robust, intuitive experience for users. The fact that kernel updates don’t have a user interface doesn’t make them exempt from asking the simple question: how will this affect users? And the subsequent question: is there a chance that deploying this eviscerates the user experience?
Granted, a company that isn’t focused on the user experience as much as it is on other things might not prioritise this as much in the first place.
Good idea, generalising: it’s sampling mappings from arbitrary domains to a binary range (the message appears, the message does not appear). This gets more complicated when your range isn’t binary, but the underlying intuition of taking an informed statistical approach is a necessary evil if tests are to keep pace with a program as its complexity grows.
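A minimal sketch of that intuition in stdlib Python: sample inputs from an arbitrary domain (random strings here) and check a binary property over each one. The system under test (`shows_warning`) and the length limit are hypothetical stand-ins, not anything from the thread.

```python
import random
import string

# Hypothetical system under test: a warning message appears
# whenever the input exceeds some limit.
MAX_LEN = 32

def shows_warning(text: str) -> bool:
    return len(text) > MAX_LEN

def random_input(rng: random.Random) -> str:
    # Sample from an "arbitrary domain": printable strings of varying length.
    n = rng.randint(0, 100)
    return "".join(rng.choice(string.printable) for _ in range(n))

def property_holds(text: str) -> bool:
    # Binary range: the message either appears or it does not,
    # and it should appear exactly when the input is too long.
    return shows_warning(text) == (len(text) > MAX_LEN)

rng = random.Random(0)
samples = [random_input(rng) for _ in range(1000)]
assert all(property_holds(s) for s in samples)
```

Libraries like Hypothesis or a fuzzer do the same thing with far smarter sampling (shrinking failing cases, biasing toward edge values), but the core loop really is just this: draw from the domain, evaluate the binary predicate, repeat.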
Just like many other industries. I recently learned about inspectors who sample cacao beans at the warehouse for the many ways they can go wrong before being processed.
As someone who really appreciates strong examples to motivate learning generalised approaches, I think a variation of this article could be great motivation for introducing people to parameter-based testing, or (closely related) fuzzing.
For one, to ensure we're building our civilization for millennia rather than continuing the short-sighted thinking that has given us global warming and other problems. The profit motive solves for incentives, but it isn't tuned for the long term or for the betterment of the community as a whole.
The irony here is that there are many domains using statistical methods that successfully bound the complexity and failure modes of those methods. A lot of people struggle with statistics, but in domains where the glove fits, I think AI will slot in all across the stack really nicely.
Ehhh it’s a spectrum. First you innovate, then you commercialise. Even Google took a few years to successfully monetise and they weren’t the first mover in web search. LLMs have been around for, what, coming up on three years? Probably two to four more years to see results.
Apples to oranges. Card networks have more than three stakeholders to work with per transaction, apply fees indiscriminately from milk to digital music subscriptions, and operate at truly staggering volumes.
Yet Canada has Interac and India has UPI, which handle digital transactions nationally while charging even less. In Europe the networks are much more heavily regulated and thus charge even less than in the States.
So if within their own industry Visa and Mastercard overcharge like crazy, especially when they can, what makes you so sure that they’re a good counter example for a completely different industry? Especially when competitors seem to be charging a very similar rate for a similar service?
That being said, it is a fair point when you consider that retailers and other similar product middlemen tend to charge 1-3%, but it’s important to also consider that Apple positions itself as a luxury product and brand, where a 30% markup isn’t actually out of line.
Exactly right. I’m not against competitor analysis here. But let’s at least compare against a basket of structurally similar offerings instead of cherry-picking companies whose rake happens to be an order of magnitude lower in percentage terms.
My point, which hasn’t really been addressed, is that “more” isn’t a meaningful term when comparing card networks and software app stores, because the markets are structurally different.