They are helpful precisely because they are unclear.
There are cases where a bit of ambiguity is useful, or where you don't know exactly what to say.
Face to face this can be solved with expressions, gestures, etc. But in text form it's more difficult, and emojis can help a bit.
Except their usage is often too unclear for the recipient. Does that praying emoji after a request mean they are praying for me to fulfil it? Or are they just saying namaste? Or do they need that request fulfilled because they depend heavily on it?
If someone tags my message with a clown emoji, did that message bring them extreme joy, or should I go find them and talk face to face?
Some emoji are helpful by adding non-verbal context. Some aren't.
Hey, there's another one that my social circle has settled on a clear meaning for. If any of my friends use it, what they mean is that they're admitting to being foolish about something.
That's an interpretation that would never have occurred to me. Perhaps there's a cultural difference here? I don't see people in real life use that gesture to mean "thanks", so the thought wouldn't have crossed my mind in emoji form.
I'd take it as "I'm praying for you", which can be meant in both a positive and negative way.
It's a clear import from American culture here, and while I can infer the meaning from context, I doubt everyone would. Especially considering the emoji is "Folded Hands", not "Praying", and some sets have a... very strange picture for it: https://emojipedia.org/google/android-5.0/folded-hands
PS: holy shit, there is a WikiHow article on its meaning and how to use it. While there is a lot of stuff on WH, the mere presence of that article adds to my position.
I actually have a couple of those Superfest glasses - they were in the house I bought in 2021 (which was full of junk and generally a bit messy). But I have no idea how they got there, as the house is in western Germany.
I'm just getting started with home automation and have a couple of ESP32s running Tasmota.
How do they compare? The site explains how I can migrate, but not why or under which circumstances I should...
Tasmota is firmware you can configure on-device[0] while ESPHome is a YAML-driven construction kit for compiling firmware specific to a device's configuration. Every change to the YAML is a compile-and-flash cycle.
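For a sense of the workflow, here's a minimal sketch of an ESPHome config (node name, board, and pins are all made up for illustration):

    # example.yaml: a hypothetical minimal ESPHome node
    esphome:
      name: example-node

    esp32:
      board: esp32dev

    wifi:
      ssid: "MyWiFi"
      password: "hunter2"

    api:  # connect to Home Assistant

    # a button on GPIO0 that toggles a relay on GPIO2, expressed as
    # a trigger (on_press) plus an action (switch.toggle)
    binary_sensor:
      - platform: gpio
        pin: GPIO0
        name: "Button"
        on_press:
          then:
            - switch.toggle: relay

    switch:
      - platform: gpio
        pin: GPIO2
        id: relay
        name: "Relay"

Every change to this file is another `esphome run example.yaml`, i.e. a compile-and-flash cycle.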
Tasmota is only for Espressif platforms. ESPHome has expanded to support BK72xx, RTL87xx, and the Pico W, but good luck figuring out what's actually implemented on those platforms.
ESPHome supports more sensors/peripherals. Some ESPHome Components[1] reduce what would otherwise be a multi-sensor, multi-peripheral task to a few lines of basic YAML (check out the different Cover components).
Tasmota on ESP32 has an embedded scripting engine with REPL (Berry). ESPHome is... complicated[2]. Triggers, Actions, and Conditions can accomplish very simple automations in pure YAML. For more complicated tasks, you'll be writing C/C++ code.
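To give a flavor of Berry, here's a hedged sketch (assuming your relay is mapped to Tasmota's `Power` command); you can paste it straight into the Berry console:

    # toggle the relay every 5 seconds via Tasmota's command API
    def toggle()
      tasmota.cmd("Power TOGGLE")
      tasmota.set_timer(5000, toggle)   # re-arm the timer
    end
    tasmota.set_timer(5000, toggle)

The ESPHome equivalent would be an `interval:` block calling `switch.toggle` in YAML; anything much beyond that quickly lands you in a C++ lambda.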
ESPHome releases frequently. If you're using it with Home Assistant, it will constantly nag you to update ESPHome and all of your ESPHome devices. Tasmota releases every few months. Tasmota suggests not upgrading a device unless you have a particular need[3].
[0] Pre-compiled Tasmota binaries work for most purposes, but there are situations where you might need to compile your own to support less common features or devices.
In most cases you can just use more fine-grained exports.
E.g. export /home/user1 to 10.0.0.1 and /home/user2 to 10.0.0.2 instead of /home to 10.0.0.0/24, etc.
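Concretely, in /etc/exports (paths and addresses as in the example above, options illustrative):

    # /etc/exports: per-user exports instead of a blanket /home export
    /home/user1  10.0.0.1(rw,sync,no_subtree_check)
    /home/user2  10.0.0.2(rw,sync,no_subtree_check)

then `exportfs -ra` to apply.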
It is, considering what you get for it, and it's not low six figures; most likely seven. The JetMoE team released their training cost estimate: it took them $100k to train what's effectively a 2.2B model on 1.25T tokens. Scale that to the still-tiny Mistral 7B, which is 3x larger and was trained on roughly 4x more data, and you get a figure closer to $1.7M. These are the absolute smallest production-viable LLMs.
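As a rough back-of-envelope, using the common rule of thumb that training compute (and thus cost) scales with active parameters times training tokens; the dollar figures are the estimates quoted above, not measured numbers:

    # cost ~ active_params * tokens, all else being equal
    jetmoe_cost = 100_000   # USD: ~2.2B active params, 1.25T tokens
    param_ratio = 3         # Mistral 7B vs ~2.2B active
    data_ratio  = 4         # ~4x the training tokens
    print(jetmoe_cost * param_ratio * data_ratio)  # ~$1.2M, same ballpark as $1.7M

The gap between ~$1.2M and $1.7M is plausibly hardware efficiency, restarts, and overhead; the scaling is what matters.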
For something like Mixtral 8x22B with 40B active params you'd be looking at the $10M range, and if something gets screwed up during training you can be left with a dud and nothing to show for it, like Llama-2-33B. It's like buying millions worth of lootboxes and hoping something good drops.
This is over-trivializing it a bit, but there isn't much more inherent complexity in training an 8B or larger model, just more money, more compute, more data, more time. Overall, the principles are similar.
Nope, visit ANY old German village that wasn't bombed during WW2.
Most of the old houses still stand, and every single one is prettier than what is being built today.
An average pre-WW2 house in a German village may be prettier than an average house from the 1950s, but certainly not prettier than an average house being built today. Your perception is presumably clouded by the touristy "villages", which in fact have been at least locally important towns at some point in the past.
Are you saying medieval Germany was richer than present-day Germany or the USA? That's of course not true.
And it's also not clear why you call it survivorship bias: these villages were probably above the median in their day, but they're not some singular structure like the Pyramids, unrepresentative of the era's and region's typical construction. It was just how houses were built there and then.
VW's strong point is manufacturing. They have car factories all over the world and actually know how to run them efficiently.
Software, on the other hand, is their weak spot, with no change in sight, and it might break their back.