Hacker News new | past | comments | ask | show | jobs | submit | more smith7018's comments login

The Supercharger program is arguably Tesla's most valuable asset, though. Arguing that it's "underperforming" and therefore deserved a mass firing is unbelievably shortsighted. Without the Supercharger network (which was on its way to becoming a US monopoly), there isn't much differentiation between Tesla and other EVs beyond brand recognition and largely controversial design decisions.


It's still an asset, and it's still becoming a monopoly. Nothing has really changed except the team.


He'll most likely re-hire the core engineers (maybe one per team) at higher salaries to maintain context. That's what he did at X.


Well, it's technically a system-level launcher which means it has more permissions and access than a standard app.


We’ll find out in discovery if a lawsuit around this ever happens, I suppose


I'd guess he's right. No one outside of Apple knows more about the inner-workings of the M series than Hector. For those that don't know, he's the main contributor and lead of Asahi Linux.


I use Twitter daily and the site is a shell of its former self. It's slow, prone to bugs, and filled with bots; the number of real users has cratered; user reports go nowhere because there's no support team; the ads are now bot accounts posting crap like "Today is a good day, be sure to make it advantageous". There are no new features besides projects that were already in flight pre-Musk, and they've actually removed a lot of features (Circles, block lists, etc.). He took an otherwise functioning social media service and forced it into maintenance mode. He also fired all of the people who kept the user base alive, so now it's flooded with bots (which he presumably likes, since he can boast about engagement being up). So yes, it's still around, but it's dying, and the skeleton crew he has left can't do anything.

In other words, he destroyed it.


The bot plague is atrocious. There are tons of "keyword watcher" bots: write "onlyfans" and you'll get ~5-10 spambots in under half a minute, and for anything involving popular politicians or political events (Russia, Ukraine, Israel, Palestine, Covid) you'll get Russian fake newspaper clones. On top of that come the "human bots": write the name of infamous German youtuber "Drachenlord" and you'll get that vile hater bunch, and it's just the same.


Honest question, why on earth are you on a fascist social media platform?


Honest answer: it's where my friend group's group chat lives. I miss a lot of conversation if I'm not in it. To be fair, I've blocked 235k+ Blue accounts so my experience is actually a lot better than most users.


According to the README, it runs natively on ARM, but it looks more like a program than an OS. I'm sure it could be updated to run on Android or iOS if the GUI code were rewritten with their respective frameworks, but making a bootable OS seems more difficult. The author wrote an article last year about making it bootable using a barebones x86 kernel and QEMU, so that approach could probably be repurposed for ARM devices. [1]

https://pmig96.wordpress.com/2023/02/24/pumpkinos-busybox-an...


I dunno; I have an extreme command of my platform's framework, and I'd guess that 85% of the time I've asked GPT-4 for help has been a waste of time. It's been most helpful for writing regexes, but beyond that it hallucinates correct-sounding methods all the time, which leads to _a lot_ of wasted debugging time before I eventually get to the right answer by Googling what it meant or by manually rewriting large portions of what it was trying to do.

It's funny how a year ago I was really excited for how AI can help my everyday coding while fearful that it would replace me. Now I'm not really sure either will happen in the short term.


OP is right that it's not necessarily a _scam_; it's more that it's deceptive advertising. You're also right that they should just show the total cost of your stay rather than the nightly rate pre-fees. It's wild to me that Airbnb hasn't done this, because it's one of the worst parts of their service and it has pushed me (and others I've talked to) back to hotels.


If you switch to the Australian Airbnb site, you can see an all-inclusive price, because they're required by law to show one there.


Agreed but this isn't the same as an open source library; it costs A LOT of money to constantly train these models. That money has to come from somewhere, unfortunately.


Yeah. The amount of compute required is pretty high. I wonder, is there enough distributed compute available to bootstrap a truly open model through a system like seti@home or folding@home?


The compute exists, but we'd need some conceptual breakthroughs to make DNN training over high-latency internet links make sense.


Distributing the training data also opens up attack vectors. Poisoning or biasing the dataset distributed to each computer needs to be guarded against... but I don't think that's actually possible in a distributed model (in principle?). If the compute is happening off-server, then trust is required (which is not {efficiently} enforceable?).


Trust is kind of a solved problem in distributed computing. The different "@Home" projects and Bitcoin handle this by requiring multiple validations of each block of work for exactly this reason.
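
The "@Home"-style redundancy scheme can be sketched in a few lines: issue the same work unit to several independent workers and accept a result only when a strict majority agrees. All names here are illustrative, not any real project's API.

```python
from collections import Counter

def validate_block(results):
    """Accept a work unit's result only if a strict majority of
    independent workers returned the same answer; otherwise the
    unit must be reissued. A toy sketch of redundancy-based trust."""
    best, votes = Counter(results).most_common(1)[0]
    return best if votes > len(results) // 2 else None

# Three replicas, one faulty worker: the honest majority wins.
validate_block(["abc123", "abc123", "deadbeef"])  # -> "abc123"
# Split vote: nobody is trusted, so the unit gets reissued.
validate_block(["abc123", "deadbeef"])            # -> None
```

This only works when results are bit-exact across workers; floating-point training updates generally aren't, which is exactly the objection raised downthread.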


How do you verify the work of training without redoing the exact same work? (That's the neat part: you don't.)

Bitcoin solves trust because each new block depends on the previous blocks. With training there is no such verification (prompt/answer pairs don't depend at all on other prompt/answer pairs; if they did, we wouldn't need to do the work of training the model in the first place).

You can rely on multiplying the work and ignoring gross variations (as you suggest), but that adds a lot of compute overhead and is still susceptible to bad actors (though much more resistant).

There is no solid/good solution, AFAIK, for distributed training of an AI (Open Assistant, I think, is working on open training data?). If there is, I'll sign up.
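
The "multiply the work and ignore gross variations" approach could be sketched like this: replicate each gradient computation across a few workers, drop replicas far from the coordinate-wise median, and average the survivors. The tolerance and the median heuristic are illustrative assumptions, not a hardened scheme.

```python
import numpy as np

def aggregate_replicas(grads, tol=1e-3):
    """grads: gradient vectors for the same work unit, computed by
    independent workers. Keep only replicas close to the
    coordinate-wise median, then average the survivors."""
    grads = np.stack(grads)
    median = np.median(grads, axis=0)
    dists = np.linalg.norm(grads - median, axis=1)
    keep = dists <= tol * (1 + np.linalg.norm(median))
    if not keep.any():
        return None  # no agreement at all: reissue the unit
    return grads[keep].mean(axis=0)

honest = np.array([0.5, -1.0, 2.0])
replicas = [honest + 1e-6, honest - 1e-6,
            np.array([100.0, 100.0, 100.0])]  # one poisoned replica
agg = aggregate_replicas(replicas)  # poisoned replica gets dropped
```

Note the costs: every unit is computed several times, and a colluding majority on any single unit still wins, which matches the "more resistant, not immune" caveat.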


There has been some interesting work when it comes to distributed training. For example DiLoCo (https://arxiv.org/abs/2311.08105). I also know that Bittensor and nousresearch collaborated on some kind of competitive distributed model frankensteining-training thingy that seems to be going well. https://bittensor.org/bittensor-and-nous-research/

Of course it gets harder as models get larger, but distributed training doesn't seem totally infeasible. With MoE transformer models, for example, separate slices of the model could perhaps be trained asynchronously and then combined with some retraining. You could have minimal regular communication (say, the mean and variance of each layer) plus a new loss term dependent on those statistics to keep each contributor's "expertise" distinct.
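
One low-communication recipe in that spirit is local SGD, roughly the idea behind DiLoCo (heavily caricatured here; the toy quadratic loss, step counts, and plain averaging are my assumptions, not the paper's algorithm): each worker takes many cheap local steps, and workers only rarely synchronize by averaging.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([3.0, -2.0])  # true optimum of a toy quadratic loss

def local_steps(w, steps=50, lr=0.1):
    """One worker: many local gradient steps on its own data shard
    (the noise stands in for shard-to-shard variation) before any
    communication happens."""
    for _ in range(steps):
        grad = (w - target) + rng.normal(scale=0.01, size=w.shape)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(5):                            # rare sync rounds
    replicas = [local_steps(w_global.copy()) for _ in range(4)]
    w_global = np.mean(replicas, axis=0)      # the only communication

# After a few rounds the averaged model sits close to `target`.
```

The point is the communication pattern: one parameter exchange per sync round instead of per gradient step, which is what makes high-latency internet links tolerable.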


Forward-Forward looked promising, but then Hinton got the AI-Doomer heebie-jeebies and bailed. Perhaps someone picks up the concept and runs with it - I'd love to myself but I don't have the skillz to build stuff at that depth, yet.


Then don't use ChatGPT. There are hundreds of other models and ways for you to use an LLM without OpenAI's injected prompt.

> I find the idea of corporations and SV tech bros trying to define my values repulsive.

They're not. They're reflecting _their_ values in their product. You're not entitled to use their product with _your_ values.

