leapis's comments | Hacker News

> Why on earth would you bet your money on some random tool you don't even understand? ... I built a tool for people who knew what harmonic patterns were.

The tool is for drawing "technical analysis indicators", one of the most convoluted ways to ascribe meaning to a random process and something that will only ever be true in the self-fulfilling sense. I don't think it's a surprise that some users are willing to blindly trust the tool, when all users of it are blindly trusting concepts that are built on sand.

Although I'm sure the author is burnt out from the experience now, I'd be interested in hearing how their next side project goes: is the experience more enjoyable when you're dealing with a user base that self-selects differently? Or do all users suck equally, just in different ways?


At least half of the interactions presented as terrible are, I feel, actually quite normal and potentially even pleasant. If you don't actually enjoy talking about your product with 'beginners' or even just normal people, then maybe reconsider the customer support role?

For me this reads as 'I don't enjoy doing voluntary customer support' rather than 'my customers suck'.


I see only a single sentence - "others had very basic questions, answers to which were given in the description of each script" - that might be referring to situations where people were seeking either clarification (including cases where the answer was in the documentation, but not obviously so) or advice on how to use the tool more effectively. (I exclude bald requests for 'hot tips' or source code from those categories.)

For all I know, the author might have both received and responded substantively (with more than RTFM) to many such requests, but has not mentioned them here because they were not part of the problem.


Agreed - I come from a Java/C++ shop where we tried to tackle this dichotomy with interop, but it ended up causing more problems than it solved. A lot of the work that Java has done with modern garbage collectors is impressive, but even they admit (indirectly, via Valhalla) that no/low-alloc code has its place.


> no/low-alloc code has its place

... Which is pretty large in the embedded firmware field. However, that's not systems programming but system (singular) programming.


I mostly agree with this, but I've been a big fan of having primitive types in config. Most of the time, if I have something I want to configure, it's one of the following (or a map/list-based structure made up of them):

- scalar value

- feature toggle

- URI/enum option/human readable display text

Having float/long/boolean is trivial to validate in the config language itself, and if they're useful and simple enough, isn't it nice to be able to validate your config as early as possible?
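
As a purely hypothetical sketch of what that early validation buys you in Java (the record and field names below are made up, and this assumes Jackson databind with its built-in record support):

    import com.fasterxml.jackson.databind.ObjectMapper;

    // Illustrative config shape: a scalar, a feature toggle, and a number.
    record AppConfig(long timeoutMillis, boolean retriesEnabled, double sampleRate) {}

    class ConfigLoader {
        static AppConfig load(String json) throws Exception {
            // With typed fields, a bad value (e.g. "timeoutMillis": "fast")
            // fails right here at load time, not deep inside whatever code
            // eventually reads it.
            return new ObjectMapper().readValue(json, AppConfig.class);
        }
    }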


It's nice, but it comes at a cost. For example, every user of TOML forever will have to put strings in quotes. Why? Because having other types creates ambiguity, which is resolved by this one simple trick. But if you don't quote them, then you have "the Norway problem", like in YAML.
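
For the curious, here's a rough sketch of the Norway problem from Java, using SnakeYAML (which follows YAML 1.1 resolution rules; exact behaviour may vary by parser and version):

    import org.yaml.snakeyaml.Yaml;
    import java.util.Map;

    public class NorwayProblem {
        public static void main(String[] args) {
            Yaml yaml = new Yaml();

            // Unquoted: under YAML 1.1 rules, NO resolves to the boolean false.
            Map<String, Object> unquoted = yaml.load("country: NO");
            System.out.println(unquoted.get("country"));    // prints: false

            // Quoted: unambiguously a string.
            Map<String, Object> quoted = yaml.load("country: \"NO\"");
            System.out.println(quoted.get("country"));      // prints: NO
        }
    }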


Nullability is a huge issue in Java, but annotation-based nullability frameworks are both effective and pervasive in the ecosystem (and almost mandatory, IMO).

I'm really excited about https://jspecify.dev/, which is an effort by Google, Meta, Microsoft, etc to standardize annotations, starting with @Nullable.
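
Roughly what that looks like in code (a sketch only - annotation names as published on jspecify.dev, and the actual checking is done by external tools such as NullAway or IDE inspections, not by javac):

    import org.jspecify.annotations.NullMarked;
    import org.jspecify.annotations.Nullable;

    // @NullMarked makes non-null the default for everything in this scope,
    // so nullability only has to be spelled out where null is actually allowed.
    @NullMarked
    class UserDirectory {

        // May legitimately have no answer, hence @Nullable.
        @Nullable String findEmail(String userId) {
            return userId.isEmpty() ? null : userId + "@example.com";
        }

        int emailLength(String userId) {
            String email = findEmail(userId);
            // A JSpecify-aware checker flags a bare email.length() here,
            // because findEmail() is declared @Nullable.
            return email == null ? 0 : email.length();
        }
    }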


This can never be as effective as changing the default reference type to be not nullable, which would break backwards compatibility, so you can never really relax.

I know Kotlin is basically supposed to be that; it has a lot of other stuff though, and I haven't used it much.


That's basically what C# has done. But it's implemented as a warning, which can be upgraded to an error. I think it might even be an error by default in new projects now.


Holy shit, how didn't I know they'd taken it this far? This is great! https://learn.microsoft.com/en-us/dotnet/csharp/nullable-ref...

They actually fixed the billion dollar mistake...


They didn't. A proper fix would require getting rid of null altogether in favor of ADTs or something similar. I work with C# daily and nulls can still slip through the cracks, although it's definitely better than @NotNull and friends.

I haven't worked with Kotlin in a while, but IIRC their non-nullable references actually do include runtime checks, so you cannot simply assign null to a non-nullable and have it pass at runtime like you can (easily) do in C#.


They won't change the default reference type to non-null. It might take a few years, but you can see their planned syntax here: https://openjdk.org/jeps/401


I hope they succeed. So many people have tried.


Not having nulls is easy.

Persuading Java devs not to use nulls is hard.


I was just writing about nullability annotations!

https://news.ycombinator.com/item?id=37534184


Nullable annotations don't work well with generics, or at least not with those tools I use.


Depends - one of the hardest parts of the 11-to-20 upgrade for us was that the CMS GC was removed.

If you run a bunch of different microservices with distinct allocation profiles, all with high allocation pressure and performance constraints, and you've accomplished this w/ the help of a very fine-tuned CMS setup, migrating that over to G1/ZGC is non-trivial.
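
For a rough idea of what that migration involves (the flags below are real HotSpot options, but the values and the jar name are placeholders, not recommendations):

    # Old hand-tuned CMS setup (CMS was removed entirely in JDK 14):
    java -XX:+UseConcMarkSweepGC \
         -XX:+UseCMSInitiatingOccupancyOnly \
         -XX:CMSInitiatingOccupancyFraction=70 \
         -jar service.jar

    # G1 starting point - most CMS knobs have no direct equivalent, so tuning
    # largely restarts from a pause-time goal:
    java -XX:+UseG1GC -XX:MaxGCPauseMillis=50 -jar service.jar

    # Or generational ZGC on JDK 21+:
    java -XX:+UseZGC -XX:+ZGenerational -jar service.jar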


Java sort of sucks for microservices (microservices suck on their own), as it has a relatively high bootstrap cost.

A high allocation rate feels weird with microservices - I suppose that depends a lot on the coding style. G1GC is meant for generally large setups, with several cores at least. E.g. the default of 2048 regions on a 2GB heap means 1MB regions, so allocations over half a region (512KB) count as humongous and require special care.


If you use GraalVM Native Image, or one of the frameworks built on it, then the bootstrap cost disappears:

https://quarkus.io
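
Roughly what that looks like in practice (commands as described in the GraalVM and Quarkus docs; the jar path is a placeholder):

    # Plain GraalVM: ahead-of-time compile a jar into a native executable.
    native-image -jar target/my-service.jar

    # Quarkus wraps the same step behind a build flag:
    ./mvnw package -Dnative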


GraalVM is great - it made our Spring REST API app go from 10+ seconds to 0.5 seconds on startup (not to mention the lower mem and cpu requirements).

Except… when we try to build it with Jenkins on a Kubernetes cluster, this happens: https://github.com/oracle/graal/issues/7182


I can't help but think that if you're teetering on a knife's edge, only holding on thanks to hyper-tuned GC params, you should take a step back and consider getting out of that predicament.


Is that what happened at your company?


Yup. We've clearly benefitted - G1 and generational ZGC have large advantages over CMS - but it's a lot of experimentation and trial-and-error to get there, whereas other deprecations/removals are usually easier to resolve.


Isn't the default config correct for pretty much all workloads, unless you have very special requirements? Like, at most one should just change the target max pause time in case of G1, depending on whether they prefer better throughput at the price of worse latency, or the reverse.


The re


Very well explained, that made it click for me, thanks!


Even simpler: every time you sample random() you get the same number of bits of entropy. Sampling once and then cubing spreads that same entropy over the interval. Sampling three times spreads 3x the entropy into the reachable area. Thus you know this guy is doing no net favor to the world.


There might be a market in HFT/trading firms (market makers). They're struggling to recruit right now (the industry is doing very well, so headcount growth is high), and they're usually pretty small, so joining w/ friends means working in closer proximity than you might get in big tech.

Their small class sizes and heavy new-grad hiring also mean that they're usually more flexible with recruiting, and some would probably welcome a new pipeline for talented new grads like this.


My experience is that HFT/trading firms are almost exclusively looking for seasoned professionals like low latency engineering specialists. There's typically the expectation of providing value and filling expertise gaps almost right off the bat. Fresh grads are incapable of doing that.


Is this a theoretical difference, or a difference in terms of an actual implementation? I have no knowledge of linear systems algorithms, but as far as I was aware, many w.h.p. algorithms are correct with probability 1 - (1/n)^c (with c effectively being a hyperparameter), which would seem like quite a strong result in itself.


“Of the 18 crossovers I talked to, 16 said yes.”

I’m just as surprised to see the number be this high. I can’t help but wonder if this might be due to survivorship bias: crossovers are, after all, self-selecting. Perhaps not all engineers would say the same, if forced to swap roles.

Regardless, I’m excited to hear more from these crossovers. As a (soon-to-be) recent graduate, I think there tends to be a great difference in perception between CS students and Engineering students, particularly at institutions without rigorous CS programs. Many tend to view CS students in the same way that some CS students view bootcamp students - people motivated by an “easy path to success”. I’ve found this opinion is rarely held of engineering students, given the reputation of engineering programs. I think it’s very useful to profile the motivators, mindsets, and general attitudes between the disciplines, if only to see that perhaps the difference isn’t as large as one might assume.


The answer is a clear yes in my case, but I work on commercial numerical design and analysis software. The interleaving of computers and engineering involved here makes it hard to argue for an absolute "no". But computational software is a corner case, I know. (sigh)

We certainly view our pure CS people as experts in what they do -- and the things I see them do are, e.g., build out cloud based versions of our product, add new GUI features, automate build and test systems (until management tries to replace them with us... a dubious proposition at best). That's just the stuff I see day to day, since it's most closely tied with us on the numerical side. Why wouldn't that be engineering? I could speculate but I don't care to. We are all working to make the product the customers want.


MechE here, so my thoughts may be biased. It’s high for two reasons. First, the pay is typically much better (upward of 20%-100% (yes, 100) better), and second, the “crossover” barrier is much lower compared to other engineering mobility. For instance, I would argue a MechE becoming a SWE is easier than a SWE becoming a MechE, since companies are open to SWEs not having a specific degree in it.


I wonder if there might be a technological barrier. My degree is in physics, so I depend on mobility for any hope of employment. ;-)

It was easy to learn programming. One reason was that the tools were always relatively cheap (even when they cost money), and the cost to learn by trial and error was negligible. At least this is true for the basics. Becoming a good programmer who can be an asset to a large project is outside my wheelhouse, though I'm in the process of learning.

Today, for mechanical design, you at least need access to SolidWorks, and the part of learning that comes from experiencing failure is costly and time consuming. Surely 3d printing is changing that equation, but not overnight.

But you can test the waters of programming without asking anybody for permission, and if you discover that you hate it, then you can just bury it. And many do. Programming is hard for most people, for reasons that I don't think we understand.

Now, programming and mechanical design by themselves are not engineering, but if someone wants to get into a new skill through the back door, they are similar. A person with SolidWorks skills can be useful as a designer without being a full blown engineer.


How do you end up with people like this? From my limited experience (I'm a college senior joining an HFT firm shortly, so I've recently been in several quant finance SWE interview loops), firms seem to vastly downplay the financial aspects of the job for software engineers. On top of that, firms don't expect or encourage financial backgrounds for engineers (at least new grads) - the expectation is that whatever limited financial background we'll need for the work will be given to us when it becomes necessary.

Is this because it's easier (obviously) to teach a quant engineering than it is to teach an engineer quant finance? Or rather because it's expected now that traders will become the bridge between researcher models and implementation, and engineers will simply provide the underlying infrastructure to power these implementations?

