Searching "how to fix a leaky faucet" on Google turned up the page from "This Old House" immediately (top 3 results, top 2 were wikiHow and a YouTube video that seemed OK at a glance).
I'm not sure why my personal results are often so much better than posts like this one whenever I do the experiment - maybe it's based on location?
Do you use an ad blocker? I can confirm the results: plumber ads that extend below the fold, followed by one useful article from Home Depot, then a useless "people also ask" blob of links, some videos (likely useful, faucets aren't complex), and another useless "people also" blob.
I searched on mobile, so no ad blocker. There were no ads for that query, and all of the content up to maybe the 5th result was an acceptable answer to the query (i.e. on a site that wasn't plastered with ads).
If I search "plumber" the first 3 results are ads.
I would probably argue that either the first algorithm is incorrect (because 'searched' can be larger than INT_MAX), or that the complexity of the second algorithm is bounded both by O(searched) (or O(sqrt(searched)), if implemented with more vigour) and by O(1) (because the value of 'searched' is bounded by a constant).
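Without the original code in front of us, here is a guess at the shape of the "second algorithm" (a trial-division-style search is assumed; the function name is hypothetical). Using an unsigned 64-bit type and the comparison `i <= n/i` (rather than `i*i <= n`) sidesteps exactly the INT_MAX-style overflow the first algorithm is accused of:

```go
package main

import "fmt"

// isPrime is a hypothetical stand-in for the sqrt-bounded variant:
// the loop runs O(sqrt(n)) times, and since n is bounded by the
// maximum value of its fixed-width type, it is formally O(1) too.
func isPrime(n uint64) bool {
	if n < 2 {
		return false
	}
	// i <= n/i avoids the overflow that i*i <= n would risk
	// for n near the top of the uint64 range.
	for i := uint64(2); i <= n/i; i++ {
		if n%i == 0 {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(isPrime(97)) // small prime
	fmt.Println(isPrime(91)) // composite: 7 * 13
}
```

The overflow-safe loop condition is the whole point here: the naive `i*i <= n` test is the kind of bug that makes the "first algorithm" incorrect for large inputs.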
> I doubt they decided independently on a back to office date in the same ~1 month timeframe
Given the reason for office closures was covid, I'd guess that companies in the same geographical area would make similar decisions about when it was appropriate to return to office work. It would be more surprising to me if large companies with similar decision-making throughout the pandemic, in the same geographical area, had drastically different timelines.
You can find the difference between recurrent and recursive neural nets on Wikipedia. No need to edit your post and then flag someone when they point out how lazy your reading was of a simple rant post -- one pointing out how lazy people are in understanding the "AI" fads.
Saying 99/100 doctors think something medical is true isn't an appeal to authority... Doctors /are/ an authority on medical matters. Programmers aren't an authority on economic matters.
That is exactly an appeal to authority, and a logical fallacy. It doesn't mean it isn't a helpful shortcut in decision making, but it is logically incorrect. There was a point in time when enough scientists believed that the world was flat. Regardless of their expert belief and number, the world was not flat, nor was it the centre of the universe. Facts remain regardless of what 99/100 people believe.
I don't know. That seemed like a fairly simple exploration of the problem and an easy library function to solve it. Would it be much simpler in any other language?
It's a simple problem with a simple fix, from a very simple benchmark - but it's already reaching the limits of how much tuning Go offers. If even such a basic benchmark needed one GC tuning knob, that seems suggestive that maybe a larger and more complex program really would need all the knobs that Java offers, the ones that Go claims are unnecessary.
Or maybe not - maybe that one knob really can solve all the problems. But I really was surprised that anyone would ever need to actively configure a GC for such a simple use case, so I've learnt something at least.
If you need the machinery Java has, you're already fucked. To a first approximation, no one knows how to tune a JVM. Whole books have been written multiple times on the subject, and everywhere I've been that used large-scale Java applications, tuning failures are still at or near the top of the list of most common production issues.
We don't appear to be much better off than we were with C. I guess the server melting is probably preferable to an exploitable smashed stack, but prevention of both issues appears to be to hire geniuses who don't make mistakes while doing enormously mistake-prone work.
> We don't appear to be much better off than we were with C. I guess the server melting is probably preferable to an exploitable smashed stack, but prevention of both issues appears to be to hire geniuses who don't make mistakes while doing enormously mistake-prone work.
"Everyone gets correctness, you need geniuses to get maximum performance" is a huge improvement over the reverse.
"more complex program really would need all the knobs that Java offers, the ones that Go claims are unnecessary." - but that is really a move in the wrong direction... it's metaprogramming with env variables, allocation sizes, command lines, and JDK versions. Sure, Go will probably eventually have all that, to support more tuning, but at that point, it feels like the war for "simple, predictable, low-overhead GC" has been lost, yet again.
> Sure, Go will probably eventually have all that, to support more tuning, but at that point, it feels like the war for "simple, predictable, low-overhead GC" has been lost, yet again.
Well, it seems we don't have a simple GC that performs well in all cases, and really why would we expect to? So choosing simplicity means leaving performance on the table, at least for now, and while there are languages that do that (Python) it doesn't seem like a good fit for Go.
> "it seems we don't have a simple GC that performs well in all cases"
Exactly, and after 20+ years, it may be that it's not possible to have such a GC.
That's not a reasonable conclusion to draw. You would need to preallocate to maximize performance regardless of GC in this case, and in other cases, the performance and determinism gaps between GC languages and non-GC languages are almost entirely due to tertiary factors. For example, GC languages are more likely to prioritize friendliness over raw performance in other areas--language features, compiler optimizations, standard library design, etc. Non-GC languages really only need to appeal to the high-performance and high-determinism crowds.
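To make the preallocation point concrete (a generic Go sketch, not the OP's actual benchmark): reserving capacity up front replaces a trail of grow-and-copy allocations with a single backing array, which is less work for any allocator, GC or not.

```go
package main

import "fmt"

// fill builds a slice of n ints with a single up-front allocation.
// Appending into pre-reserved capacity never reallocates, so the
// GC sees one long-lived backing array instead of many dead ones
// left behind by repeated growth.
func fill(n int) []int {
	out := make([]int, 0, n) // one allocation, capacity n
	for i := 0; i < n; i++ {
		out = append(out, i)
	}
	return out
}

func main() {
	s := fill(1000)
	fmt.Println(len(s), cap(s)) // 1000 1000: append never grew the slice
}
```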
I think the expectation the Go lot have been working towards is that individual pauses are very short. Contrast that with a Java server GC, which aims for overall efficiency. (Tell me if I'm wrong, I'm no expert.)
As another commenter posted elsewhere, GC is all about tradeoffs, and for a given set of GC tradeoffs there will always be pathological cases. Another GC might work out of the box for this workload, but perform very badly on a workload at which Go's GC excels. As others have already mentioned, the fact that he was able to tune the GC so easily is a testament to Go's simplicity.
There are lots of good criticisms of Go to be had, but its runtime is pretty remarkable, in my opinion.
As another comment in this thread explained, GCs trade CPU against memory, and you could easily make a GC that would work excellently on the OP's benchmark but would suffer severely under other workloads.
I've been programming professionally for almost thirty years at this point.
That's admittedly long enough to get some peculiar ideas about programming, but there's a certain kind of person that thinks "static typing" is a panacea for programming mistakes -- or even that there's a single solution for anything at all...
I wouldn't trust anyone who thinks a solution is a panacea in the software field, but having type annotations demonstrably leads to not just more correct code but also code that can be understood better by its surrounding tooling, thereby enabling automatic refactorings, auto completion, smart browsing, optimizations, etc...
Static typing is not perfect but it's better in all respects than dynamic typing.
Can you point to these STAC-M3 results, and maybe at least mention the language? I wasn't able to find it.
Statically typed languages are universally recognized and demonstrated to be faster than dynamically typed languages, even if you claim to have found one rare exception. This is not just an opinion, it's scientific and objective fact.
But please share with us the code that runs more quickly in a dynamic language than in a static language for you; I am genuinely curious (and equally curious to find out if you just made this up).
The first item on the list is written in an interpreted language.
> This is not just an opinion, it's scientific and objective fact.
I've just demonstrated two counterexamples, so it's clearly not "scientific and objective fact". Indeed, I've never met anyone who even thought Rust would outperform an experienced C programmer on programmer speed, program runtime, and program size.
>The first item on the list is written in an interpreted language.
Could I see the code? Because I'm pretty sure you can't write fast numeric code without knowing which type you will get on input. That's why Fortran still rocks and we do not have anything beyond Fortran, C and C++ in the field of computation.
> Because I'm pretty sure you can't write fast numeric code without knowing which type you will get on input. That's why Fortran still rocks and we do not have anything beyond Fortran, C and C++ in the field of computation.
You're not far off.
The first trick is that even though `x` can have any type, `x[4]` and `x[5]` must have the same type. This kind of array, whilst uncommon in Python, is extremely common in array languages. Array languages tend to have a lot of vector operators (and are therefore very competitive users of the AVX-512 instruction set), which tend to be very fast. Programmers of array languages also tend to avoid things like loops and branches -- indeed, you might enjoy http://nsl.com/ if you want to expand your mind a bit on that point.
The second trick is that an interpreter can be made very small. If you can get your entire program and the interpreter into L1, then you do not stall the CPU while your program fetches various parts of itself from memory. This isn't a trick languages like Fortran, C, and C++ miss per se; a very experienced programmer can usually identify the hotspots and optimise the hot path for these things, but it is very time consuming to do this when your entire program is the hot path!
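The first trick can be illustrated outside an array language too (a Go sketch for contrast, not from the thread): when every element is statically known to be the same unboxed type, the loop body is a plain numeric add a compiler can unroll or vectorize, whereas a heterogeneous container pays a dynamic type check per element -- the cost that homogeneous array representations let dynamic languages avoid.

```go
package main

import "fmt"

// sumTyped: element type known at compile time; the loop body is a
// bare float64 add, the kind of code vector units love.
func sumTyped(xs []float64) float64 {
	var s float64
	for _, x := range xs {
		s += x
	}
	return s
}

// sumBoxed: each element could be anything, so every iteration pays
// for a dynamic type assertion before it can add -- the overhead a
// homogeneous array representation is designed to eliminate.
func sumBoxed(xs []interface{}) float64 {
	var s float64
	for _, x := range xs {
		s += x.(float64)
	}
	return s
}

func main() {
	fmt.Println(sumTyped([]float64{1, 2, 3}))           // 6
	fmt.Println(sumBoxed([]interface{}{1.0, 2.0, 3.0})) // 6
}
```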
> That's the first link I found and it says nothing about a benchmark
Except the first paragraph:
The STAC-M3 Benchmark suite is the industry standard for testing solutions that enable high-speed analytics on time series data, such as tick-by-tick market data (aka "tick database" stacks).
I'm not going to bother any further with such an obvious troll.
Depends how you look at it. Dev time is a finite resource, and spending it on supporting a weird combo of user agent and browser seems likely to lead to less user utility.