Hacker News new | past | comments | ask | show | jobs | submit | mining's comments login

Searching "how to fix a leaky faucet" on Google turned up the page from "This Old House" immediately (top 3 results, top 2 were wikiHow and a YouTube video that seemed OK at a glance).

I'm not sure why my personal results are often so much better than posts like this one whenever I do the experiment - maybe it's based on location?


Do you use an ad blocker? I can confirm the results: plumber ads that extend below the fold, followed by one useful article from Home Depot, then a useless "people also ask" blob of links, some videos (likely useful, faucets aren't complex), and another useless "people also" blob.

I am in the US, if that matters.


I searched on mobile, so no ad blocker. There were no ads for that query, and all of the content up to maybe the 5th result was an acceptable answer to the query (i.e. on a site that wasn't plastered with ads).

If I search "plumber" the first 3 results are ads.

I'm in Australia.


I would probably argue that either the first algorithm is incorrect (because searched can be larger than INT_MAX), or the complexity of the second algorithm is bounded both by O(searched) (or O(sqrt(searched)), if implemented with more vigour) and by O(1) (because the value of 'searched' is bounded by a constant).


I explicitly wrote

> to find the square root of an int:

so the value can't exceed INT_MAX :)

Overall, I know the two algorithms aren't perfect, but they're a simple minimal example to show the difference between complexity and runtime.
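The original algorithms aren't reproduced in the thread, but a hypothetical sketch (in Go; the function name and approach are my own, not the blog's) of a linear-search integer square root shows why both complexity readings are defensible:

```go
package main

import "fmt"

// isqrtLinear finds the integer square root by linear search.
// It takes O(sqrt(n)) iterations -- but since n is an int and
// therefore bounded by a constant (INT_MAX), one can also argue
// the whole thing is O(1). That's the ambiguity being debated.
func isqrtLinear(n int) int {
	r := 0
	for (r+1)*(r+1) <= n {
		r++
	}
	return r
}

func main() {
	fmt.Println(isqrtLinear(17)) // 4
	fmt.Println(isqrtLinear(16)) // 4
}
```

Note that `(r+1)*(r+1)` would itself overflow near INT_MAX, which is the kind of imperfection the comment concedes.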


> I doubt they decided independently on a back to office date in the same ~1 month timeframe

Given the reason for office closures was covid, I'd guess that companies in the same geographical area would make similar decisions about when it was appropriate to return to office work. It would be more surprising to me if large companies with similar decision-making throughout the pandemic, in the same geographical area, had drastically different timelines.


Almost all of that blog post was complete bullshit. Recursive neural net? I don't understand why that post was written.


You can find the difference between recurrent and recursive neural nets on wikipedia. No need to edit your post then flag someone when they point out how lazy you are in understanding a simple rant post, pointing out how lazy people are in understanding the "AI" fads.


Saying 99/100 doctors think something medical is true isn't an appeal to authority... Doctors /are/ an authority on medical matters. Programmers aren't an authority on economic matters.


That is exactly an appeal to authority and a logical fallacy. It doesn't mean it isn't a helpful shortcut in decision making, but it is logically incorrect. There was a point in time when enough scientists believed that the world was flat. Regardless of their expert belief and number, the world was not flat, nor was it the centre of the universe. Facts remain regardless of what 99/100 people believe.


There are infinitely many infinitely thin lines that fit in that band and cover the countries.


Ahh yes, I missed the subtlety that Bulgaria and Slovakia overlapped


I don't know. That seemed like a fairly simple exploration of the problem and an easy library function to solve it. Would it be much simpler in any other language?


It's a simple problem with a simple fix, from a very simple benchmark - but it's already reaching the limits of how much tuning Go offers. If even such a basic benchmark needed one GC tuning knob, that seems suggestive that maybe a larger and more complex program really would need all the knobs that Java offers, the ones that Go claims are unnecessary.

Or maybe not - maybe that one knob really can solve all the problems. But I really was surprised that anyone would ever need to actively configure a GC for such a simple use case, so I've learnt something at least.


If you need the machinery Java has, you're already fucked. To a first approximation, no one knows how to tune a JVM. Whole books have been written multiple times on the subject, and everywhere I've been that used large-scale Java applications, tuning failures are still at or near the top of the list of most common production issues.

We don't appear to be much better off than we were with C. I guess the server melting is probably preferable to an exploitable smashed stack, but prevention of both issues appears to be to hire geniuses who don't make mistakes while doing enormously mistake-prone work.


> We don't appear to be much better off than we were with C. I guess the server melting is probably preferable to an exploitable smashed stack, but prevention of both issues appears to be to hire geniuses who don't make mistakes while doing enormously mistake-prone work.

"Everyone gets correctness, you need geniuses to get maximum performance" is a huge improvement over the reverse.


Except I see tons of outages caused by things like VM and application server tuning. Software that isn't working isn't correct or performing well.


Not running at all, while not ideal, is a lot better than silent data corruption.


"more complex program really would need all the knobs that Java offers, the ones that Go claims are unnecessary." - but that is really a move in the wrong direction... it's meta-programming with env variables, allocation sizes, command lines, and JDK versions. Sure, Go will probably eventually have all that, to support more tuning, but at that point, it feels like the war for "simple, predictable, low-overhead GC" has been lost, yet again.


> Sure, Go will probably eventually have all that, to support more tuning, but at that point, it feels like the war for "simple, predictable, low-overhead GC" has been lost, yet again.

Well, it seems we don't have a simple GC that performs well in all cases, and really why would we expect to? So choosing simplicity means leaving performance on the table, at least for now, and while there are languages that do that (Python) it doesn't seem like a good fit for Go.


> it seems we don't have a simple GC that performs well in all cases

Exactly, and after 20+ years, it may be that it's not possible to have such a GC.


I think most pathological cases can be reduced to similarly simple examples. I'm sure these exist for other GCs as well.


Yes, the take-away here is not that Go is particularly bad, but that GCs are problematic. Notably, GC != automatic memory management.


That's not a reasonable conclusion to draw. You would need to preallocate to maximize performance regardless of GC in this case, and in other cases the performance and determinism gaps between GC languages and non-GC languages are almost entirely due to tertiary factors. For example, GC languages are more likely to prioritize friendliness over raw performance in other areas--language features, compiler optimizations, standard library design, etc. Non-GC languages really only need to appeal to the high-performance and high-determinism crowds.
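To illustrate the preallocation point (a sketch in Go; the size is arbitrary): reserving capacity up front avoids the repeated grow-and-copy churn that stresses any allocator, garbage-collected or not.

```go
package main

import "fmt"

func main() {
	const n = 1000

	// Without a capacity hint, append reallocates and copies the
	// backing array repeatedly as the slice grows, generating garbage.
	// With the hint, there is a single allocation up front.
	s := make([]int, 0, n)
	for i := 0; i < n; i++ {
		s = append(s, i)
	}
	fmt.Println(len(s), cap(s)) // 1000 1000
}
```

The same discipline applies in C++ (`reserve`) or Rust (`with_capacity`); it isn't a GC-specific cost.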


> Notably, GC != automatic memory management.

Actually, GC == automatic memory management, just not automatic performant memory management


Simpler would be "just work like I expect".


I think the expectation the Go lot have been working towards is the expectation that individual pauses are very short. Contrast to a Java server GC, aiming towards overall efficiency. (Tell me if I'm wrong, I'm no expert.)


Yes, Go optimizes for low latency and Java for high throughput.


Depends on the Java GC being used.


Granted, but the stock GC is optimized for throughput.


The stock GC on OpenJDK, there are other JVMs to choose from.


As another commenter posted elsewhere, GC is all about tradeoffs, and for a given set of GC tradeoffs there will always be pathological cases. Another GC might work out of the box for this workload, but perform very badly for a workload at which Go's GC excels. As others have already mentioned, the fact that he was so easily able to tune the GC is a testament to Go's simplicity.

There are lots of good criticisms of Go to be had, but its runtime is pretty remarkable, in my opinion.


What about the use cases where what you expect is not what I expect? GC is, as a sibling comment says, about compromises around memory safety.


Doesn't work with GCs.

As another comment in this thread explained, GCs have CPU and Memory tradeoffs and you can easily make a GC that would work excellent in the OP benchmark but would suffer severely under other workloads.


If that 95% assumes that people undergo background checks, my assumption is that number might decrease.


I'm not certain that this is a crime where the perps are logically weighing the pros and cons.


The 95% figure is for sexual crimes in the UK in general.


> I'm guessing pretty junior with this kind of hubris

If you check their profile, their site suggests they have ~15 years of experience.


That's just management experience.

I've been programming professionally for almost thirty years at this point.

That's admittedly long enough to get some peculiar ideas about programming, but there's a certain kind of person that thinks "static typing" is a panacea for programming mistakes -- or even that there's a single solution for anything at all...


I wouldn't trust anyone who thinks a solution is a panacea in the software field, but having type annotations demonstrably leads to not just more correct code but also code that can be understood better by its surrounding tooling, thereby enabling automatic refactorings, auto completion, smart browsing, optimizations, etc...

Static typing is not perfect but it's better in all respects than dynamic typing.


> I wouldn't trust anyone who thinks a solution is a panacea in the software field

... unless of course that solution is static typing.

> Static typing is not perfect but it's better in all respects than dynamic typing.

Except performance, of course: A dynamically-typed language is at the top of the STAC-M3 benchmarks for time series data processing.

Oh and defect count: Qmail has the lowest defect count of any source-available software in the last twenty years and it's written in C.

So I guess static typing is better unless you want correct code that runs quickly, which unfortunately is important to me.


Can you point to these STAC-M3 results and maybe at least mention the language? I wasn't able to find it.

Static languages are universally recognized and demonstrated as faster than dynamically typed languages, even if you claim to have found one rare exception. This is not just an opinion, it's scientific and objective fact.

But please share with us that code that runs more quickly on a dynamic language than a static language for you, I am genuinely curious (and equally curious to find out if you just made this up).


> Can you point to these Stac-M3 results and maybe at least mention that language? I wasn't able to find it.

That makes sense, since your background isn't in high performance computing and you can't use google:

https://stacresearch.com/m3

The first item on the list is written in an interpreted language.

> This is not just an opinion, it's scientific and objective fact.

I've just demonstrated two counterexamples, so it's clearly not "scientific and objective fact". Indeed I've never met anyone who even thought Rust would outperform an experienced C programmer in programmer speed, program runtime, and low program size.


>The first item on the list is written in an interpreted language.

Could I see the code? Because I'm pretty sure you can't write fast numeric code without knowing which type you will get on input. That's why Fortran still rocks and we do not have anything beyond Fortran, C and C++ in the field of computation.


> Because I'm pretty sure you can't write fast numeric code without knowing which type you will get on input. That's why Fortran still rocks and we do not have anything beyond Fortran, C and C++ in the field of computation.

You're not far off.

The first trick is that even though `x` can have any type, `x[4]` and `x[5]` must have the same type. This kind of array, whilst uncommon in Python, is extremely common in array languages. Array languages tend to have a lot of vector operators (and so are very competitive users of the AVS512 instruction set), which tend to be very fast. Programmers of array languages also tend to avoid things like loops and branches -- indeed you might enjoy http://nsl.com/ if you want to expand your mind a bit on that point.

The second trick is that an interpreter can be made very small. If you can get your entire program and the interpreter into L1 then you do not stall the CPU while your program fetches various parts of itself from memory. This is a trick languages like Fortran and C and C++ don't miss per-se, a very experienced programmer can usually identify the hotspots and optimise the hotpath for these things, but it is very time consuming to do this when your entire program is the hotpath!
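A rough analogy in Go (not the array language under discussion; the functions and values are mine): once the element type is known per-array rather than per-element, the inner loop can be as tight as in any static language, whereas boxed elements force a type check on every iteration.

```go
package main

import "fmt"

// sumTyped: the element type is known once, for the whole slice,
// so the loop compiles to a tight add over contiguous memory.
func sumTyped(xs []float64) float64 {
	var t float64
	for _, x := range xs {
		t += x
	}
	return t
}

// sumBoxed: each element carries its own type and must be
// checked/unboxed per iteration -- the per-element cost that
// homogeneous arrays let dynamic array languages avoid.
func sumBoxed(xs []interface{}) float64 {
	var t float64
	for _, x := range xs {
		t += x.(float64)
	}
	return t
}

func main() {
	fmt.Println(sumTyped([]float64{1, 2, 3}))
	fmt.Println(sumBoxed([]interface{}{1.0, 2.0, 3.0}))
}
```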


That should have been AVX512. I blame autocomplete for the typo and myself for missing it.


That's the first link I found and it says nothing about a benchmark, what's being benchmarked, what languages were used and what the timings were.

Do you actually have anything to show to back up your claim? Anything at all?


> That's the first link I found and it says nothing about a benchmark

Except the first paragraph:

> The STAC-M3 Benchmark suite is the industry standard for testing solutions that enable high-speed analytics on time series data, such as tick-by-tick market data (aka "tick database" stacks).

I'm not going to bother any further with such an obvious troll.

Good luck dude. Wish you the best.


So you still can't name the language that won these benchmarks and which other languages it ran against?

Because they're not on the page, so I can only conclude you made this up.


Depends how you look at it. Dev time is a finite resource, and spending it on supporting a weird combo of user agent and browser seems likely to lead to less user utility.


Weird combo being Firefox with its default user agent...


Well, mobile Firefox which is much less common.


Or Desktop Firefox (~43% of German desktop market) in Google Inbox (during the launch), the new YouTube redesign, Hangouts, Allo, Earth...

Half of Google products either don’t work at all, or work far worse, on one of the largest browsers out there.

