
The word that I have seen in similar contexts is 'trusted', which I like and would have preferred -- the block has extra privileges and isn't machine verified. Some people tend to give 'trusted' an opposite reading when they first come across it, though.


The problem with that word is it doesn't say who is doing the trusting, which is the crucial point. In fact, "trusted" can be used to describe both safe code and unsafe code. In the safe code, the programmer is trusting the compiler. In unsafe code, the compiler is trusting the programmer. Both code environments are "trusted," but the trust is being given to different parties.
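
To make the direction of trust concrete, here is a minimal Rust sketch (my own toy example, not anything from the thread):

    fn main() {
        let v = vec![1, 2, 3];

        // Safe code: the programmer trusts the compiler. The bounds
        // check is inserted and verified for us.
        let a = v[1];

        // Unsafe code: the compiler trusts the programmer.
        // get_unchecked skips the bounds check; we vouch, unverified,
        // that the index is in range.
        let b = unsafe { *v.get_unchecked(1) };

        assert_eq!(a, b);
    }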


Oh yeah, 'trusted' would definitely give the opposite of the intended meaning! I had to read your comment twice just to get why you were calling it that.


This reminds me of Freeman Dyson's research on British bombers during WWII -- the data showed experienced crews didn't fare better than novices, but they didn't figure out why until after the war.

> Bomber Command told the crews that their chances of survival would increase with experience, and the crews believed it. They were told, After you have got through the first few operations, things will get better. This idea was important for morale at a time when the fraction of crews surviving to the end of a 30-operation tour was only about 25 percent. I subdivided the experienced and inexperienced crews on each operation and did the analysis, and again, the result was clear. Experience did not reduce loss rates. The cause of losses, whatever it was, killed novice and expert crews impartially. This result contradicted the official dogma, and the Command never accepted it. I blame the ORS, and I blame myself in particular, for not taking this result seriously enough. The evidence showed that the main cause of losses was an attack that gave experienced crews no chance either to escape or to defend themselves. If we had taken the evidence more seriously, we might have discovered Schräge Musik in time to respond with effective countermeasures.

...

> the German pilots were highly skilled, and they hardly ever got shot down. They carried a firing system called Schräge Musik, or “crooked music,” which allowed them to fly underneath a bomber and fire guns upward at a 60-degree angle. The fighter could see the bomber clearly silhouetted against the night sky, while the bomber could not see the fighter. This system efficiently destroyed thousands of bombers, and we did not even know that it existed. This was the greatest failure of the ORS. We learned about Schräge Musik too late to do anything to counter it.

https://www.technologyreview.com/s/406789/a-failure-of-intel...

The wingsuit jumpers consider the likely dangers to be complacency among experts on easy flights, experts pushing the envelope, or novices jumping unprepared. But these factors are contradictory enough that I'm left wondering if there is a hidden risk that affects novices and experts alike.


Fascinating.

In a comment elsewhere on this page capncrunch suggests downdrafts[1]. Seems like the kind of thing that could randomly strike novices and experts alike.

1. https://news.ycombinator.com/item?id=15114380


That's a great story.

My first instinct for where to look for things that would affect experts and beginners alike is manufacturing defects or poor suit design.

I'd also say weather, but that seems more likely to be highly variable and mitigated by skill (i.e. experts know when conditions are poor).


I would look no further than the basic ingredients of the sport: speed, gravity, and zero margin for error. Over a long enough run of events you are destined for the morgue. This is not a sport, it is suicide in disguise.


One more thing: it's turned into a small industry. I am certain some of the work in it is considered to be partly art (I am thinking of the analogy to the sailing industry in the early years of yachting). I wouldn't rule out looking at "advances" in suit design, where the new engineering has an unforeseen consequence: something the advanced guys would jump on board with, but we don't yet have enough data to see the failures.


Does experience widen the margin of error? Do "incidents" become less fatal? Like in Russian roulette, it's possible that experience doesn't improve your odds once something bad happens.

Skydiving is a little different: if a chute fails to open, your ability to remain calm, deal with it, and deploy the backup makes a difference. In a lot of extreme activities I could see experience making a huge impact during incidents; wingsuits might not be that forgiving.


It could just be the non-linearity inherent in flight. Or perhaps being in proximity to the surface causes the Reynolds number to be more variable, meaning 10 experts could do the same maneuver and have different outcomes.
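
For reference, the Reynolds number is Re = rho * v * L / mu, so it moves with air density and speed. A back-of-the-envelope sketch in Rust (the density, speed, and length figures below are my own ballpark assumptions, not wingsuit data):

    // Re = rho * v * L / mu (dimensionless).
    fn reynolds(rho: f64, v: f64, l: f64, mu: f64) -> f64 {
        rho * v * l / mu
    }

    fn main() {
        let mu = 1.8e-5; // dynamic viscosity of air, Pa*s
        let l = 1.0;     // characteristic length, m (assumed)
        for (rho, v) in [(1.2, 40.0), (1.1, 50.0)] {
            println!("rho={} v={} -> Re = {:.1e}", rho, v, reynolds(rho, v, l, mu));
        }
    }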


As a Firefox on Linux user I checked one of those sites that tries to estimate how many bits each public aspect of your setup reveals about you. It turned out available fonts were by far the most identifying aspect of my setup.


The only surefire way to avoid fingerprinting is to disable JavaScript, extensions, cookies, etc. https://browserleaks.com has a pretty good breakdown of the different techniques sites can use. There's another JS technique that probes the hardware to fingerprint a browser too.

http://yinzhicao.org/TrackingFree/crossbrowsertracking_NDSS1...

Use Tor Browser even if you're not using Tor, if you're looking for better privacy. It's modified to mitigate fingerprinting as much as possible. Facebook is just bad; avoid it at all costs if you value privacy. And it's not just Facebook. Sites like Facebook, Google, etc. also use several third-party "advertising" (i.e. data gathering) companies to gather data, build profiles on users, and share that data with each other. Even on your regular-use browser I would highly recommend uBlock Origin and Privacy Badger.

https://github.com/gorhill/uBlock

https://www.eff.org/privacybadger


But with such a unique browsing situation you're basically identifiable on that basis alone. Your best bet would be to have your browser present itself as a common browser on a common platform, and block tracking and ads.


The user agent is still the top culprit (16 bits of identifying information), followed by browser plugins (12 bits), WebGL (12 bits), canvas (9 bits), language (if not English or Chinese), and then fonts at 5 bits.

The total is around 20 bits (due to overlaps).

YMMV.
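
For anyone wondering where those numbers come from: a trait shared by a fraction p of users contributes -log2(p) bits. A quick sketch in Rust (the example fractions are made up to show the arithmetic, not measured data):

    // Surprisal: a trait shared by a fraction p of users reveals
    // -log2(p) bits of identifying information.
    fn bits(p: f64) -> f64 {
        -p.log2()
    }

    fn main() {
        // Made-up example fractions.
        println!("1 in 65536 share your UA -> {:.0} bits", bits(1.0 / 65536.0));
        println!("1 in 32 share your fonts -> {:.0} bits", bits(1.0 / 32.0));
        // Traits are correlated, so the combined total comes out to
        // less than the sum of the parts.
    }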


Could you share a link to this site?



One thing I don't really like about that site is that it gives browsers worse scores for not unblocking third parties which promise to honor Do Not Track. Surely you're safer when you don't trust anyone than when you trust that third parties which claim to honor DNT actually do. It kind of reeks of pushing an agenda, which would have been okay (it's the EFF, after all) if the tool didn't claim to score your browser on how well it protects you from tracking.


That's a great blog post. The `linux_literal` section is particularly interesting re: mmap.


> reasonable to conflate panics with unsafety if one is only used to "crashing" in the context of C.

I suppose it's possible that's what generates most instances of this question, but I had the same question when I was first introduced to Rust and it came from a different root.

I had been reading about Erlang, and from that context I expected "safety" to be about avoiding system downtime (e.g. gracefully handling failed asserts) as much as anything.
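
For anyone else coming from that angle: in Rust, "safety" means memory safety, and a panic is the safe outcome, not the unsafe one. A minimal sketch of my own:

    fn main() {
        let v = vec![1, 2, 3];

        // Safe Rust: an out-of-bounds index panics (a controlled
        // crash); it never reads arbitrary memory. That's "safe" in
        // Rust's sense, even though it's downtime in the Erlang sense.
        let result = std::panic::catch_unwind(|| v[10]);
        assert!(result.is_err()); // the panic unwound; no UB occurred

        // The graceful-handling style is also available in safe Rust,
        // without panicking at all:
        match v.get(10) {
            Some(x) => println!("got {}", x),
            None => println!("index 10 is out of bounds; handled"),
        }
    }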


Pass the release flag when you build loc: `cargo build --release`.

I see ~5 seconds for loc and ~34 seconds for loccount to count a freshly cloned linux repository.

edit: whoops, that was user time, which sums across all cores; both tools spin all the cores. loc takes ~0.7 secs and loccount ~6.6 secs real time.


I'm not seeing any changes in my results. I'm testing on an old laptop; I guess that's why it's so slow.

My rustc version is 1.15.0 and go 1.8.


If you are running the program with `cargo run`, that takes the same `--release` flag. Could be the machine difference, though.


I'm not. Are you testing with go 1.8 btw?


Go 1.7.4 and Rust 1.14. Interesting discrepancy in the relative speeds of the tools we are each seeing.


> In this specific example, I saw that building the project required mucking with my rust version/setup and decided that the cost of that was too high for me to proceed.

Agreed, it would be nice if you could compile it with your package manager's provided version of Rust.

Sibling comments mentioned rustup; with it this is the entirety of the mucking required [as described by alacritty's readme, and confirmed by compiling it myself]:

    rustup override set $(cat rustc-version)
The override is local to the project directory you set it in. This will also download that version of Rust if necessary.


I have a hunch that learning resources get a significant bump from people using upvotes as bookmarks.

I think the ML results and demo posts are mostly getting upvotes because the results are fascinating and the posts are often excellently written. Image manipulation in particular lends itself towards posts that are appealing both to people casually skimming articles and to people looking for some technical depth.


Notable for code archaeology: the docstrings weren't in the previous version[1].

(He also switched from manually implementing counting behavior with a dict to a Counter, added the P function which replaces an inline dict lookup, similarly added the candidates function, changed the somewhat awkward known_edits2 function to just edits2, and reordered some things.)

[1] http://web.archive.org/web/20160408180602/http://norvig.com/...

edit: small edits


Of course, the risk from a habit of flippantly walking across minefields is significantly greater than the risk from a willingness to do it to save a life.

It seems likely to me that the respondents deviated from the question as strictly asked using, at least in part, that reasoning.

