Um, because people really want to be done burning petroleum in their cars. A hybrid is just an old gas car that gets improved gas mileage. It’s not a real fix.
I am interpreting it correctly. Donald Trump is the repeated choice of the GOP and its voters. He won twice. They are choosing him because they want exactly what he is, who he chooses to cooperate with, and who he picks for the Supreme Court.
He is not some kind of outlier, either. There is a whole network of media, influencers, and think tanks doing the same to varying degrees. The same kind of rhetoric has been around for years.
So is the idea that you would run a normal compiler once in your production pipeline to check syntax, and then re-run it with ts-blank-space to generate the final bundle?
Clearly you can’t eliminate the syntax checking, correct?
Yes, and the TypeScript ASTs generated while checking the code can be reused when running ts-blank-space to generate the JavaScript. Generating the ASTs only once is a significant performance win, as that work takes much longer than the type-stripping itself.
People constantly get locked out of cars with recessed door handles. I’m struggling to understand how the door handle has anything to do with a door lock.
While it's not super-clear what happened in this instance, in the _previous_ incident mentioned in the article, the car 'locked' due to a power failure; the locks presumably didn't magically engage with the battery's last gasp, but you couldn't open the doors because the car doesn't have real door handles.
Many cars do, in any case, have manual/mechanical locks; you can unlock them with the key. (Or, er, they _did_, anyway. Not much of a car person; maybe this is less common with modern cars?)
My wife's 2022 Mercedes has a mechanical lock on at least the driver's door.
I discovered the process when I was driving around with our dog and needed to run into a store briefly. The car has some quirks; if you take the key fob out while it's running, the car starts an annoying chime inside. It can be turned off, though. So I left the car running, put the window down, got out, turned the chime off, locked the doors (WHICH CANNOT BE DONE WITH THE FOB WHEN THE CAR IS RUNNING!), and used the auto-full-close feature of the window.
Then I discovered that the car also stops responding to the fob in this situation! I had left my phone in the car, so I ended up calling my family on my Apple Watch (which was doable but annoying because... the car is still on, which means it keeps defaulting to Bluetooth connection) and getting them to look up the Mercedes help line and text it to me so I could call it.
Dog was fine, if a little freaked out because she could see me but not get to me.
Anyway, yeah, investigate your car's method for accessing and using the physical key well before you need it. Even if it's not your car. I don't think my wife would have figured out my convoluted method for getting the car into that state, but I know she would have been confused as hell at the results. And then called me to ask how to fix the problem.
Isn’t this shape just the inevitable outcome of the more sane distribution from 50 years ago if you simply add time + compound interest? Wouldn’t it be more surprising if wealth didn’t grow faster based on how much wealth you already have?
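A toy calculation (my own illustrative numbers, not from the comment) shows what compounding alone does and doesn't do: at the same return rate, the *ratio* between two fortunes never changes, but the absolute gap explodes; give the larger fortune even a slightly better rate and the ratio itself compounds too.

```typescript
// Two households 50 years ago, one with 10x the wealth of the other.
const years = 50;
const small0 = 50_000;
const large0 = 500_000;

// Same 7% annual return for both: the ratio stays exactly 10x,
// but the absolute gap grows ~29-fold along with everything else.
const sameRate = (w: number) => w * Math.pow(1.07, years);
console.log(sameRate(large0) / sameRate(small0)); // 10 — shape preserved

// A modest rate advantage for the larger fortune (7% vs 5%)
// makes the ratio itself grow: 10 * (1.07/1.05)^50 ≈ 25.7x.
const unequal = (large0 * Math.pow(1.07, years)) / (small0 * Math.pow(1.05, years));
console.log(unequal);
```

So pure compound interest preserves the distribution's shape; the concentration in the chart requires the rich to earn systematically higher returns (or save more of their income), which is a separate empirical question.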
No, because the child will be able to apply what they've learnt in novel ways, and experiment/explore (what-if or hands-on) to fill in the gaps. The LLM is heavily limited to what they learnt due to the architectural limitations in going beyond that.
> The LLM is heavily limited to what they learnt due to the architectural limitations in going beyond that.
I'm not sure how much of that is an architectural limit vs. an implementation limit.
Certainly they have difficulty significantly improving their quality due to the limitations of the architecture, but I have not heard anything to suggest the architecture has difficulty expanding in breadth of tasks — it just has to actually be trained on them, not merely be limited to frozen weights used for inference.
(Unless you count in-context learning, which is cool, but I think you meant something with more persistence than that).
I don’t know if you’re correct. I don’t think you know that our brains are that different. We too need to train ourselves on massive amounts of data. I feel like the kinds of reasoning and understanding I’ve seen ChatGPT do are soooo far beyond just processing language.
When I talk to 8B models, it's often painfully clear that they are operating mostly (entirely?) on the level of language. They often say things that make no sense except from a "word association" perspective.
With bigger (400B models) that's not so clear.
It would be silly to say that a fruit fly has the same thoughts as me, only a million times smaller quantitatively.
I imagine the same thing is true (genuine qualitative leaps) in the 8B -> 400B direction.
We do represent much of our cognition in language.
Sometimes I feel like LLMs might be “dancing skeletons” - pulleys & wires giving motion to the bones of cognition.