That actually is an approach. Some teachers make you read the lesson before class, others give you homework on the lesson before lecturing on it, and some even quiz you on it on top of that before allowing you to ask questions. I personally feel that trying to learn the material before class helped me learn it better than coming into class blind.
There is only one correct way to calculate 5/2+3. The order is PEMDAS[0]. You divide before adding. Maybe you are thinking that 5/(2+3) is the same as 5/2+3, which is not the case. Improper math syntax doesn’t mean there are two potential answers, but rather that the person who wrote it did so improperly.
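To make the precedence concrete, here is what Python (which follows the same convention) gives for the two readings:

```python
# Division binds more tightly than addition, so 5/2+3 means (5/2) + 3.
print(5 / 2 + 3)    # 5.5
# Parentheses are needed to force the addition first.
print(5 / (2 + 3))  # 1.0
```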
So we agree that there is more than one way to interpret 5/2+3 (a correct and an incorrect way) and therefore that the GP statement below is wrong.
“Which is a question that can be interpreted in only one way. And done only one way.”
The question for calculators is then the same as the question for LLMs: can you trust the calculator? How do you know if it’s correct when you never learned the “correct” way and you’re just blindly believing the tool?
>>How do you know if it’s correct when you never learned the “correct” way and you’re just blindly believing the tool?
This is just splitting hairs. People who use calculators interpret it in only one way. You are making a different and broader argument that words/symbols can have various meanings, hence anything can be interpreted in many ways.
While these are fun arguments to make, they are not relevant to the practical use of calculators or LLMs.
> So we agree that there is more than one way to interpret 5/2+3 (a correct and an incorrect way) and therefore that the GP statement below is wrong.
No. There being "more than one way" to interpret implies the meaning is ambiguous. It's not.
There's not just one incorrect way to interpret that math statement; there are infinitely many. For example, you could interpret it as a poem about cats.
Maybe the user means the difference between a simple calculator that applies everything as you type it in and one that can figure out the correct order. We used the simpler ones in school when I was young. The new fancy ones were quite something after that :)
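For illustration, here's a rough sketch of that difference. The expression 3 + 5 / 2 is my own example (5/2+3 happens to give the same answer either way), and `simple_calculator` is a made-up name:

```python
def simple_calculator(tokens):
    """Apply each operation as soon as it's 'typed', left to right,
    the way the old simple calculators did."""
    result = tokens[0]
    for i in range(1, len(tokens), 2):
        op, operand = tokens[i], tokens[i + 1]
        if op == "+":
            result += operand
        elif op == "-":
            result -= operand
        elif op == "*":
            result *= operand
        elif op == "/":
            result /= operand
    return result

# Typing "3 + 5 / 2" key by key:
print(simple_calculator([3, "+", 5, "/", 2]))  # 4.0 -- evaluated as (3 + 5) / 2
# A calculator (or language) that respects precedence:
print(3 + 5 / 2)                               # 5.5 -- evaluated as 3 + (5 / 2)
```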
2 Software Engineering roles | Full Time | Hybrid (1 day) or Remote | El Segundo or anywhere in USA | US Only (Clearance)
I have 2 open roles on my engineering team supporting the Space Force as a contractor at ManTech. One is for a cloud/infrastructure/data engineer to manage our cloud enclave(s) and one is for a sysadmin for tools like Jira/Confluence/Foundry. Our team (~20) is 75% distributed across the USA fully remote and 25% in the office one day a week. Both roles require 7+ years of experience and a Secret or Top Secret/SCI security clearance (not required to start). $110K-$185K depending on experience and location.
I’m trying to fix the job posting to correctly reflect some of the info above, so if you see a discrepancy, go with this post.
I don’t own the role you listed. I’m only a manager, not a recruiter, so I can’t help you with other opportunities. Feel free to apply to my role, the one you listed, or any other role, and you should get connected to the right people for it!
There’s an option in FSD to have it not pass on the right, and another to change the speed difference it will wait for before passing (for example, only pass when the car in front is [5] mph slower than my set speed). These are separate options from chill mode, and they could also fall short of Comma, but I thought I’d share in case you didn’t know they were there.
It was an idea before Homecoming, and even before this 2015 paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4547388/ But that aspect of Homecoming’s scripts was probably inspired by the general discussion of possible treatments over the years.
PTSD applies to more than just veterans. They’re one of the most prominently discussed classes of PTSD sufferers, but victims of crime, accidents, war, or terrorism can also suffer from it.
Fun game! I thought you programmed it wrong till I realized that in your version of liar’s dice 1s are wild. Also, I’ve never played the variant where you guess the total number of dots, and we always have the loser go first in the next round. Thanks for sharing!
Any concern here of the nasal spray crossing the blood brain barrier in a way that the EpiPen doesn’t? I found a study from 2007 with semi-mixed results: https://pubmed.ncbi.nlm.nih.gov/17472409/
> Of these, only two studies in rats were able to provide results that can be seen as an indication for direct transport from the nose to the CNS. No pharmacokinetic evidence could be found to support a claim that nasal administration of drugs in humans will result in an enhanced delivery to their target sites in the brain compared with intravenous administration of the same drug under similar dosage conditions.
2024 Q1: “we recorded one crash for every 7.63 million miles driven in which drivers were using Autopilot technology. For drivers who were not using Autopilot technology, we recorded one crash for every 955,000 miles driven. By comparison, the most recent data available from NHTSA and FHWA (from 2022) shows that in the United States, there was an automobile crash approximately every 670,000 miles.” [0]
And for further info that includes a comparison to Waymo see [1].
Doesn’t seem right to call something that is 10x safer “terrible”, but maybe you have different metrics you are looking at.
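For what it’s worth, here’s where the roughly-10x number comes from, just plugging in the figures quoted above (a quick sanity check, not accounting for any differences in driving conditions):

```python
# Miles per crash, from the quoted Tesla 2024 Q1 report.
autopilot = 7_630_000     # with Autopilot engaged
no_autopilot = 955_000    # Tesla drivers not using Autopilot
us_average = 670_000      # US overall (NHTSA/FHWA, 2022)

print(round(autopilot / no_autopilot, 1))  # 8.0  -- vs the rest of the Tesla fleet
print(round(autopilot / us_average, 1))    # 11.4 -- vs the US average
```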
> Doesn’t seem right to call something that is 10x safer “terrible”, but maybe you have different metrics you are looking at.
Or maybe your “metrics” are just slightly dishonest?
Presumably you can only use FSD under conditions where accidents are already significantly less likely to happen (per mile driven), like freeways, etc.?
Which makes this an apples to oranges comparison.
Turning “FSD” off just before an accident is likely to happen, without necessarily giving the driver enough time to react, would also be very helpful (although I’m not sure how those cases get recorded, specifically).
> Doesn’t seem right to call something that is 10x safer “terrible”, but maybe you have different metrics you are looking at.
Quick question: are there roads or conditions Autopilot doesn't do well on, so can't or won't be activated?
I don't see Autopilot working well in a bitter New England winter storm at night. Or a torrential Florida downpour.
That's why those metrics suck. And Tesla knows it. They compare "all drivers, all conditions" to the "idealized, best-case subset of miles driven by Autopilot".