I think professional software devs won't get much value from Copilot.
OpenAI's demo from a few months back showed it as a sort of bridge converting natural language instructions into API calls, e.g. turning "make all the headings bold" into calls to a Word-document API.
For direct emissions: stop burning stuff. Do you call it fuel? Stop burning it, or at least burn as little of it as possible.
EVs and heat pumps address two of the biggest personal direct-emission sources: transportation and heating living spaces.
Beyond that, if you own a roof, get solar panels. They're a modern miracle, converting around 20% of incident sunlight into electricity, whereas photosynthesis in crop plants manages about 1% to 2%.
That's the thing. The software shouldn't have to "learn" fundamental basic real world shit like that. We KNOW that things don't just flicker in and out of existence. The system should be built on that premise from the start.
Oh, but that's basically what every GPS app does - one moment you're slowly crawling along the highway, the next you're placed on the bridge above it.
Most GPS software is smart enough not to flicker between the highway and the bridge repeatedly, though. It's not like this category of problem is wholly new. The car needing to switch lanes twice a second like in one of those videos is fairly unlikely, so if the ML is spitting that out, there probably ought to be a layer on top interpreting the ML output that can commit to one choice. Picking the wrong lane is less wrong than being indecisive about it.
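As a minimal sketch of what that layer could look like (the class name and the hold_frames threshold are made up for illustration, assuming the model emits a per-frame lane label): commit to one lane, and only switch after the model has disagreed consistently for several consecutive frames.

```python
class LaneDebouncer:
    """Commit to one lane hypothesis; switch only after the ML output
    has disagreed consistently for several consecutive frames."""

    def __init__(self, hold_frames=5):
        self.hold_frames = hold_frames  # frames of persistent disagreement needed
        self.committed = None           # lane we are currently acting on
        self.candidate = None           # disagreeing lane being considered
        self.candidate_count = 0

    def update(self, ml_lane):
        if self.committed is None:
            self.committed = ml_lane
        elif ml_lane == self.committed:
            # Model agrees with the committed lane; drop any pending switch.
            self.candidate = None
            self.candidate_count = 0
        elif ml_lane == self.candidate:
            self.candidate_count += 1
            if self.candidate_count >= self.hold_frames:
                # Persistent disagreement, not a flicker: actually switch.
                self.committed = ml_lane
                self.candidate = None
                self.candidate_count = 0
        else:
            # A new disagreeing hypothesis; start counting from one.
            self.candidate = ml_lane
            self.candidate_count = 1
        return self.committed
```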
I don't know about "most GPS software", but I have a Garmin that does exactly that, and Google Maps does the same, so in my experience it's 100%. Now, if such well-established software products haven't figured out such basic stuff, either we're wrong to call it "basic" (and it's actually a difficult problem) or the people responsible just don't bother. The major difference, though, is that GPS navigation can get away with frequent recalculations; realtime obstacle avoidance much less so.
> there probably ought to be a layer on top of that that's interpreting the ML output that can take the decision to pick one
It needs to be an input, not just a filter on output. A heuristic that doesn't account for the previously observed state is insufficient because prior existence information changes what gets detected. And it needs to lean heavily toward "if we saw something there before, the safest course of action is to assume that it's still there until strongly indicated otherwise".
It can be a filter on the output and account for previously observed state. E.g. one trivial (and probably pretty bad) solution in that direction would be to boost the probability of small state changes as opposed to large changes.
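A rough sketch of that direction (the function name, the stickiness value, and the uniform spread over other states are all illustrative assumptions, not anyone's real pipeline): fold the previous frame's belief back in as a transition prior before trusting the new per-frame output, which amounts to a single discrete Bayes-filter update.

```python
import numpy as np

def filter_with_transition_prior(ml_probs, prev_probs, stickiness=0.8):
    """Re-weight the model's per-frame state probabilities with a prior
    that favors staying in the previously believed state.

    ml_probs:   P(state | current frame) from the model, shape (n,)
    prev_probs: filtered belief from the previous frame, shape (n,)
    stickiness: how strongly we expect the world not to change per frame
    """
    ml_probs = np.asarray(ml_probs, dtype=float)
    prev_probs = np.asarray(prev_probs, dtype=float)
    n = len(ml_probs)
    # Transition matrix: high probability of staying put, the remainder
    # spread evenly over the other states (so big jumps are penalized).
    transition = np.full((n, n), (1.0 - stickiness) / max(n - 1, 1))
    np.fill_diagonal(transition, stickiness)
    predicted = transition.T @ prev_probs   # belief before this frame's evidence
    posterior = ml_probs * predicted        # combine with the model's output
    return posterior / posterior.sum()
```

With stickiness near 1, a one-frame blip in the model's output barely moves the belief, while sustained evidence still wins after a few frames.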
> if we saw something there before, the safest course of action is to assume that it's still there until strongly indicated otherwise
Normally I'd agree with you, but driving a car is a bit of a special case, since in many cases stomping on the brakes is way worse than doing nothing. Take the video of the Tesla seeing spurious traffic lights[0], for instance: should it really do an emergency stop in the middle of traffic in that case?
All the machine learning can do is give you probabilities about the state of the world around the car. The software then needs to pick some state to act on from those probabilities, in whatever way is least likely to endanger anyone. There isn't a simple correct answer; all the variables need to be weighed, and currently Teslas seem to ignore some pretty important ones while driving.
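One standard way to frame that weighing (the cost numbers below are entirely made up to show the asymmetry, not calibrated to anything real): pick the action with the lowest expected cost, so that phantom braking and missed obstacles are penalized very differently.

```python
# Hypothetical costs: wrongly braking in traffic vs. wrongly ignoring
# a real obstacle are weighted very differently.
COST = {
    ("brake", "obstacle"): 0.0,     # correct: stopped for a real obstacle
    ("brake", "clear"): 10.0,       # phantom braking in the middle of traffic
    ("ignore", "obstacle"): 1000.0, # worst case: hit something real
    ("ignore", "clear"): 0.0,       # correct: nothing was there
}

def pick_action(p_obstacle):
    """Choose the action with the lowest expected cost, given the
    model's probability that an obstacle is actually present."""
    def expected_cost(action):
        return (p_obstacle * COST[(action, "obstacle")]
                + (1.0 - p_obstacle) * COST[(action, "clear")])
    return min(("brake", "ignore"), key=expected_cost)

# With these made-up numbers, braking wins once P(obstacle) > ~1%.
print(pick_action(0.02))  # -> "brake"
```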
First, jumping and flickering aren't the same thing. In my experience, Google Maps is somewhat unwilling to rapidly reposition you when two paths are equally probable.
Second, the cost of deciding that you're in the wrong lane is very different from the cost of deciding that there isn't an obstacle in front of you. The worst thing that happens when a navigator gets your position wrong is you miss a turn and spend a few extra minutes in the car. The worst thing that happens in the other case is someone dies instantly in a cloud of blood.
IIRC this is the stated reason they're dropping radar: the camera network maintains object persistence, but the radar disagrees with it so frequently that the system doesn't know which input to trust, which causes the flickering. Apparently, by throwing out the radar signals they get smoother object recognition.
I don't think radar can sufficiently differentiate stationary obstacles from the background - hence the long-standing habit of smashing into stationary objects.
Yeah, thanks. I looked it up and got those numbers from my own calculations, but I wasn't sure that was the right way to calculate it, since I'm not familiar with the subject.
What is interesting is that, if you watch the views of the engines during ascent, there are flames around the base of the engine(s) (only one? somewhat hard to tell) starting at T+00:14 and continuing off and on until around first engine cutoff (T+02:10).
Wow, I wonder how much damage there will be to the launch complex - some of those chunks of debris are enormous. Hard to imagine that nothing important was hit by falling parts.
But that doesn't explain his Ledger wallet! I'll keep saying it... those seed words were on paper, hidden from all sight, without anyone knowing they existed... for years.
Then, on February 24th, both wallets get cleaned out at around the same time. Why sit on the seed words for years?
I feel semver combines two separate ideas into the meaning of v1:
By labeling a release as v1 you're:
- communicating that the package is stable and will be maintained
- communicating that breaking changes will now increment the major version (saying nothing about stability, backported bug fixes, or ongoing maintenance)
I wish they weren't combined. Incrementing major versions frequently on early-stage projects would make them work better with package managers, but without creating an expectation of maintenance from the community.
Not incrementing major versions makes it hard for package managers to make sensible decisions.
A consequence of the complexity is that some devs stay below v1 forever because it's easier.
Backwards-incompatible changes pile up in the v0.x series.
It’s then really difficult for consumers to install two versions of the same package, which can happen with diamond dependencies.
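To make the package-manager point concrete: under the npm-style caret rule, every 0.x minor release is treated as its own breaking series, so nothing below v1 ever auto-resolves across a minor bump. A toy version of that rule (the function is illustrative, not any package manager's actual code):

```python
def caret_upper_bound(version):
    """Exclusive upper bound of an npm-style caret range ^major.minor.patch.

    At >=1.0.0 the whole major series counts as compatible; below 1.0.0
    only the minor (or even patch) series does, so v0.x packages never
    get automatic upgrades across what might be breaking changes.
    """
    major, minor, patch = version
    if major > 0:
        return (major + 1, 0, 0)  # ^1.2.3 allows anything < 2.0.0
    if minor > 0:
        return (0, minor + 1, 0)  # ^0.2.3 allows anything < 0.3.0
    return (0, 0, patch + 1)      # ^0.0.3 allows only 0.0.3 itself

print(caret_upper_bound((1, 2, 3)))  # (2, 0, 0): minor/patch bumps resolve automatically
print(caret_upper_bound((0, 2, 3)))  # (0, 3, 0): every 0.x minor is a wall
```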
If the data on-chain becomes valuable enough, an ecosystem of open-source, auditable programs operating on it seems powerful to me.
And yeah, tx fees are face-meltingly expensive right now, but scalability tech is making gradual progress.