Nope, they already rolled back the checkbox within 72 hours of the court order. This was added this week, a few days after the original checkbox was removed.
Also, the value of the checkbox is "yes-it-is", which would break the API anyway if that were the reason.
Can't do with docker or can't do as easily with docker as you can with HAOS? My understanding has been that everything can be done by just adding new containers or files, and it's worked for me thus far.
HAOS uses docker to containerize everything, so it can’t be that difficult, and it really is not. Docker has a --device flag for exactly this purpose, and udev makes it easy enough to assign stable names.
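As a sketch (the device path and image name here are illustrative; adjust both for your own dongle and addon):

```
docker run --device=/dev/ttyUSB0:/dev/ttyUSB0 koenkk/zigbee2mqtt
```

The flag maps the host's dev node into the container at the same path, permissions included, which is all the "passthrough" there is.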
What do you mean by “HAOS on docker”? HAOS is a standalone, complete Linux system with its own fully managed kernel; it’s not meant to be containerized. It does use docker internally itself, though, and “pass through” works transparently.
If you’re talking about running Home Assistant in a docker container, sure, you’re more on your own; but since official Home Assistant in HAOS must run in docker, none of this is terribly difficult to configure.
The dongles are usually exposed as tty devices and I’ve been running zigbee2mqtt and Zwavejs addons in docker containers for years with no issue.
HAOS takes care of stable naming (based on default udev rules) out of the box.
Unlike system virtualization, there isn’t really anything that needs passing through; it’s a naming and permissions issue. The container just needs an appropriately permissioned dev node, ideally with a stable name. If you are using official addons it is effectively zero-config, and if you’re not, ensuring the container sees the right dev node is nothing but straightforward container configuration.
As someone else mentioned it may be as simple as:
devices:
  - /dev/ttyUSB0:/dev/ttyUSB0
But you can just as easily use the /dev/serial tree for stable names; those come out of the box from the default udev rules. You can always write your own rules too. I’ve done it, and it’s not hard.
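For illustration, a custom rule might look like this (the vendor/product IDs and symlink name are placeholders; check your own device with `lsusb` or `udevadm info`):

```
# /etc/udev/rules.d/99-zigbee.rules -- hypothetical IDs and name
SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", SYMLINK+="zigbee0"
```

After a reload (`udevadm control --reload && udevadm trigger`), the container config can point at /dev/zigbee0 instead of a numbered ttyUSB node that may change across reboots.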
Efficient markets enforce efficiency. The trouble is that purely efficient markets are very much a spherical cow—they're useful for modeling reality in some simple simulations, but they miss an awful lot of detail and that can lead to very bad conclusions if you take them too seriously.
Exactly, which totally undoes OP's argument. At the high end the manufacturer can have its cake and eat it too. By pricing the TVs in the high income price bracket and ensuring there are no ad-free versions on the market (easy to do because most consumers don't choose devices based on ads or not, even if they find the ads irritating after the purchase), they get both the profit of selling a luxury item and the recurring ad revenue for selling ads that they can confidently tell advertisers will be seen by affluent people.
OP's argument assumes an efficient market, and over-the-air updates ensure this market cannot be one.
Your model is oversimplified in a way that downplays the value of ads to the manufacturer. They don't reach out to sponsors and get a $50 static offer per TV in deals. They do some math and figure out that they can make at least $X per customer on average over the lifetime of the TV by selling ad slots dynamically.
The subtle difference here is that because the sponsorships can be updated live across TVs that have already been sold, the actual value of each TV sale can be made to go up after the date of purchase by updating software and/or changing the ad deals.
So the manufacturer isn't pricing the TVs at a discount precisely equal to the ad revenue they receive per TV, they have to price the TVs based on a complicated formula that includes both a rough estimate of the minimum value of ad deals and customer willingness to pay (keeping in mind that customers are choosing their willingness to pay based on a landscape that has no ad-free models!). And what's more, the manufacturer is free to alter the deal after the sale is made to try to make a larger profit per-TV than was originally priced in.
You make it sound like it's a reasonable outcome of an efficient market, but the current situation—where one party can and does alter the deal retroactively and unilaterally—does not create an efficient market!
Wow. I disagree with your position on the merits and read through your replies to see if someone had provided an effective counterargument, but you really touched a nerve: it's straight ad hominem snark all the way down. So I guess I'll give honest engagement a shot.
> I'm not sure about the benefits compared to hard science or medical science. ... we're talking about a very small number of researchers here. The average person of whichever race is only affected in the sense that his tax dollars are spent more effectively.
If I'm understanding you, this is your main point: social sciences have a weaker return on investment than medical sciences (and presumably some others?). Here's a counterargument.
There are some fields that study universal facts about biology or physics. It doesn't matter where you are in the world, these will largely yield similar results that can be applied anywhere. There's a small amount of value to replicating research done in one population on a different population, but humans are broadly similar enough that it's not strictly necessary.
On the other hand, there are fields where the location of the research absolutely does matter. This is true of the social sciences. Conclusions drawn about the functioning of one human culture are not broadly translatable to other cultures.
This means that even if the net return on investment for medicine is higher (and it probably is, precisely because it translates to more people), it's actually more valuable for small countries to pay for their own social sciences than their own medical research. They can always take advantage of what others are learning about biology, but if they don't research the way that New Zealand works then no one will.
In theory, what you say sounds great; unfortunately, in practice, many fields have drifted far from objectivity, and “peer review” often looks more like “peer support”.
This is baby-with-the-bathwater logic. Because there are flaws in the system (which are flaws in all of academia, not limited to the social sciences) we should defund the programs entirely and give up on whole fields of endeavor.
It depends on how you define dangerous—some of the most dangerous men in history have been dangerous in part because of how insecure they are. That's how you get men like Putin, putting on an extremely self-conscious show of masculinity while slaughtering hundreds of thousands.
Obviously not every schoolyard bully gets an army and a navy to play with, but they're all dangerous in their own ways, and that danger often stems from a self-conscious desire to project power.
Some Western leaders who killed hundreds of thousands, including Bush II, Obama, etc., did not have to put on a masculinity display. The web of lies woven by traditional media and vested interests gave good enough cover.
Putin's propaganda features these absurd strongman displays of masculinity because it works, it's a very well known and ancient exploit in the human mind and societies as a whole.
I'm sure Putin would prefer to do the evil things he does without having to do the whole song and dance of propaganda.
It's also two hours that would have been completely avoided if the author were familiar enough with Node to know to pin the version and not try to install 4 years of updates in one shot.
Most people here saying that X, Y, or Z ecosystem "compiles and runs" fine after 4 years mean the time it takes to resume an old project in a language they're very familiar with, running the same dependency versions. That's not the time it takes to version-bump a project in a language you don't know well, without first getting it running on the old version.
I can open my 4-year-old Node projects and run them just fine, but that's because I use the tools that the ecosystem provides for ensuring that I can do so (nvm, .nvmrc, engines field in package.json).
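For instance (version numbers illustrative): a .nvmrc containing just the line `18.19.0` lets `nvm install` and `nvm use` pick the right runtime automatically, and an engines field in package.json makes the constraint explicit to anyone installing the project:

```
{
  "engines": {
    "node": ">=18 <19"
  }
}
```

Either one is enough to tell future-you which Node the project was last known to work on.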
The author didn't update all dependencies, they just tried running it on a newer version of Node itself. That is definitely a use case included when most people talk about an ecosystem compiling and running fine after several years.
In some ecosystems, yes, backwards compatibility is a given, but not in most. Python versions behave much the same way as Node: you have to make sure you're using the same Python version as last time in order to install the same dependency versions. Java has been better in recent years, but Java 8->9 can take several hours to get working on even a small project.
50% of Java developers are still regularly working in Java 8 [0], which points to the same solution the author could have reached immediately: when starting up an old project after 4 years, use the same version you ran last time; don't try to update before you even have the thing running.
> C, C++
Not my experience, but maybe it depends on your operating system and distro? In my experience sorting through the C libs and versions that you need to install on your system to build a new project can easily take a few hours.