
I suspect that blockchains won’t need to reference external data to be useful.

If the data on chain becomes valuable enough, an ecosystem of open source, auditable programs operating on it seems powerful to me.

And yeah tx fees are facemeltingly expensive right now, but scalability tech is making gradual progress.


I think professional software devs won’t get much value from copilot.

OpenAI’s demo from a few months back showed it as a sort of bridge to convert natural language instructions into APIs calls. Eg converting “make all the headings bold” to calls to a word doc api.


It can still do that. Write a small comment before writing CSS and see it go!


Most of the comments here are opinions on what other people should do to stop climate change.

For those that have read up on it: what are the highest-impact things the average Hacker News reader can do over the next few years?


For direct emissions: Stop burning stuff. Do you call it fuel? Stop burning it. Or at least burn it as little as possible.

EVs and heat pumps address two of the biggest personal direct emissions sources: transportation and heating living spaces.

Beyond that, if you own a roof, get solar panels. They're a modern miracle, capturing about 20% of the Sun's energy, while photosynthesis in crop plants manages only about 1% to 2%.

Grow your own food, if you have a garden.


Get involved in making carbon-free energy sources better and more widely deployed, and in making current energy-using devices more efficient?

Any improvement of this kind by an individual that gets deployed at scale eclipses any lifestyle changes a single individual can make.


Change policy.

This isn't a problem solved by individualism.


The flickering is concerning. Kind of suggests their NN hasn’t learned basic object persistence yet.


That's the thing. The software shouldn't have to "learn" fundamental basic real world shit like that. We KNOW that things don't just flicker in and out of existence. The system should be built on that premise from the start.


Oh, but that's basically what every piece of GPS software does: one moment you're slowly trailing along the highway, the next you're placed on the bridge above.


Most GPS software is smart enough to not flicker between the highway and the bridge repeatedly though. It's not like this category of problems is a wholly new thing. The car needing to switch lanes twice a second like in one of those videos is fairly unlikely, so if the ML is spitting out that it should, there probably ought to be a layer on top of that that's interpreting the ML output that can take the decision to pick one. Picking the wrong lane is less wrong than being indecisive about it.


I don't know about "most GPS software", but I have a Garmin that does that, and Google Maps does the same, so in my experience it's 100%. Now, if such well-established software products haven't figured out such basic stuff, either we're wrong to call it "basic" (and it's a difficult problem) or the people responsible just don't bother. The major difference, though, is that while GPS navigation can get by with frequent recalculations, realtime obstacle avoidance cannot.


> there probably ought to be a layer on top of that that's interpreting the ML output that can take the decision to pick one

It needs to be an input, not just a filter on output. A heuristic that doesn't account for the previously observed state is insufficient because prior existence information changes what gets detected. And it needs to lean heavily toward "if we saw something there before, the safest course of action is to assume that it's still there until strongly indicated otherwise".


It can be a filter on the output and account for previously observed state. E.g. one trivial (and probably pretty bad) solution in that direction would be to boost the probability of small state changes as opposed to large changes.
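A minimal sketch of that "boost small state changes" idea (all names here are hypothetical, and it assumes the detector emits per-lane probabilities each frame): add a stickiness bonus to the previously chosen state, so the output only switches when the detector is clearly confident.

```python
def pick_lane(probs, prev_lane, stickiness=0.2):
    """Choose a lane from detector probabilities, biased toward the
    previously chosen lane so the output doesn't flicker.

    probs: dict mapping lane id -> detector probability
    prev_lane: lane chosen on the previous frame (or None)
    stickiness: score bonus added to the previous lane
    """
    scores = dict(probs)
    if prev_lane in scores:
        scores[prev_lane] += stickiness
    return max(scores, key=scores.get)

# The raw detector flickers between lanes 1 and 2, but the filtered
# output stays on lane 1 until lane 2 becomes clearly more likely.
frames = [{1: 0.55, 2: 0.45}, {1: 0.45, 2: 0.55}, {1: 0.2, 2: 0.8}]
lane = None
out = []
for probs in frames:
    lane = pick_lane(probs, lane)
    out.append(lane)
print(out)  # [1, 1, 2]
```

This is hysteresis applied after the model, which is exactly why it's "probably pretty bad" as stated above: a fixed bonus trades responsiveness for stability and has to be tuned per situation.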

> if we saw something there before, the safest course of action is to assume that it's still there until strongly indicated otherwise

Normally I'd agree with you, but driving a car is a bit of a special case, since in many cases stomping on the brakes is way worse than doing nothing. Take the video of the Tesla seeing spurious traffic lights [0], for instance: should it really do an emergency stop in the middle of traffic in that case?

All the machine learning can do is give you probabilities about the state of the world around the car. The software needs to pick from those probabilities some state to act on, in such a way as to be least likely to endanger anyone. There isn't a simple correct answer; all the variables need to be weighed. Currently Teslas seem to ignore some pretty important variables while driving.

[0] https://twitter.com/sascha_p/status/1400173874285744129


First, jumping and flickering aren't the same thing. In my experience, Google Maps is somewhat unwilling to rapidly reposition you when two paths are equally probable.

Second, the cost of deciding that you're in the wrong lane is very different from the cost of deciding that there isn't an obstacle in front of you. The worst thing that happens when a navigator gets your position wrong is you miss a turn and spend a few extra minutes in the car. The worst thing that happens in the other case is someone dies instantly in a cloud of blood.


IIRC this is the stated reason they're dropping radar: the camera network maintains object persistence, but the radar disagrees with it very frequently, causing the flickering, because the system doesn't know which input to trust. Apparently by throwing out radar signals they get smoother object recognition.


> Apparently by throwing out radar signals they get smoother object recognition...

...and driving straight into barriers.

> the system doesn't know which input to trust

Well, clearly their camera network is untrustworthy, so...


I don't think radar can sufficiently differentiate between stationary obstacles and background - hence the long habit of smashing into stationary objects.


The clips linked are on the v9 beta, which is vision-only. Object flickering and other issues remain unabated.


An iPhone 12 weighs 164g [1], so, through the magic of the metric system, it needs to displace a minimum of 164ml of water to float.

The iPhone has an external volume of 14.67cm * 7.15cm * 0.74cm = 77.62cm^3 = 77.62ml. So it would need to be about 2x the volume to float.

[1] https://www.apple.com/iphone-12/specs/
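The arithmetic above can be checked directly (a quick sketch using only the figures from the comment; 1 g of water occupies 1 mL, so a 164 g phone must displace at least 164 mL to float):

```python
# iPhone 12 figures from the comment above.
mass_g = 164.0                            # mass in grams
length, width, depth = 14.67, 7.15, 0.74  # external dimensions in cm

volume_ml = length * width * depth        # 1 cm^3 == 1 mL
print(round(volume_ml, 2))                # 77.62 mL displaced when submerged

# Ratio of required displacement (164 mL) to actual volume:
print(round(mass_g / volume_ml, 2))       # ~2.11, i.e. about 2x the volume
```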


Yeah, thanks. I looked those numbers up and did the calculation, but I wasn't sure it was the right way to calculate it, since I'm not familiar with this.


Time to roll back this innovation and release the new iPhone with 2x the volume so it'd float.


Meet the new iPhone Air...


Lifeproof used to make an add-on life jacket that fit around their cases.

https://duckduckgo.com/?t=ffab&q=lifeproof+case+life+jacket&...


But then it would be heavier, and need to be even bigger than 2x.


Might be an idea for a magsafe attachment: a floater :)


It exploded mid air just after landing burn started.

In this video you can hear engines start then an immediate explosion https://mobile.twitter.com/TheFavoritist/status/137689513012...

It's possible the burn started late, right above the ground, but I reckon we would have seen a larger explosion on the cameras in that case.


What is interesting is that, if you watch the views of the engines during ascent, there are flames around the base of the engine(s) (only one? it's somewhat hard to tell) starting at T+00:14 and continuing off and on until around the first engine cutoff (T+02:10).

https://youtu.be/4qS5Vhz8VJo


Well, one of the yucca plants got its top sliced off, and the NASASpaceflight camera got showered with dirt. :)


Wow, I wonder how much damage there will be to the launch complex - some of those chunks of debris are enormous. Hard to imagine that nothing important was hit by falling parts.


Plus there was very little horizontal velocity in the parts raining down; I think you're right.


> - The passphrase for the Trust Wallet is saved as a screenshot on his iPhone.

Could the photos have been uploaded to iCloud and compromised from there? Or accessed from another device?


The photo for his Trust Wallet? Maybe.

But that doesn't explain his Ledger wallet! I'll keep saying it: those seed words were on paper, hidden from all sight, without anyone even knowing they existed... for years.

Then, on February 24th, both wallets get cleaned out at around the same time. Why sit on the seed words for years?


Keep the python brand and fix these things in a new backwards incompatible python release. Call it Python 4.

I’m sure it’ll only take a couple of years for everyone to migrate


I feel semver combines two separate ideas into the meaning of v1. By labeling a package as v1 you're:

- communicating the package is stable and will be maintained

- communicating that breaking changes will now increment the major version (saying nothing about stability, backported bug fixes, or any maintenance)

I wish they were not combined. Incrementing major versions frequently on early-stage projects would make them work better with package managers, but without creating an expectation of maintenance from the community.

Not incrementing major versions makes it hard for package managers to make sensible decisions.
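As a rough illustration of why the major version matters to package managers (a sketch of npm-style caret-range semantics, not any particular tool's actual code): for a 0.x package the "compatible" range collapses to a single minor series, so every 0.x minor bump acts like a breaking release.

```python
def caret_range(version):
    """Return the (inclusive lower, exclusive upper) bounds an
    npm-style caret constraint implies for 'major.minor.patch'.

    Simplified: real npm also special-cases 0.0.x versions,
    which only match the exact patch release.
    """
    major, minor, patch = (int(p) for p in version.split("."))
    if major > 0:
        # ^1.2.3 means >=1.2.3, <2.0.0: anything before the next major.
        return (major, minor, patch), (major + 1, 0, 0)
    # ^0.2.3 means >=0.2.3, <0.3.0: only the same minor series.
    return (0, minor, patch), (0, minor + 1, 0)

print(caret_range("1.2.3"))  # ((1, 2, 3), (2, 0, 0))
print(caret_range("0.2.3"))  # ((0, 2, 3), (0, 3, 0))
```

So a project that never reaches v1 gives the resolver no way to tell "safe upgrade" from "breaking change", which is the sensible-decision problem described above.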


A consequence of the complexity is that some devs stay below v2 forever because it's easier, and backwards-incompatible changes pile up in 0.x releases. It's then really difficult for consumers to install two versions of the same package, which they may need with diamond dependencies.


Do you have an example of such package? I've used Go modules for years and never encountered that.


Someone cited Kubernetes above.


