
I am a huge fan of the checking/budgeting features of Wealthfront. Does anyone know of a good competitor in case I have to jump ship?

For those who don't use it: it lets you create categories and automatically deposit money into them at some interval. Then you can easily transfer money between categories.

I use this to put some money away for expected expenses like taxes and a new car, and also to budget for fun, so I know what I can actually afford.
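To make the model concrete, here's a toy sketch in Python (entirely my own illustration; the names and behavior are hypothetical, not Wealthfront's actual design): named buckets, scheduled deposits, and transfers between buckets.

```python
from dataclasses import dataclass, field

# Hypothetical model of category budgeting: scheduled deposits into
# named buckets, plus transfers between them. Not Wealthfront's API.
@dataclass
class Budget:
    categories: dict = field(default_factory=dict)

    def deposit(self, category: str, amount: float) -> None:
        """Run on a schedule (e.g. each paycheck) to fund a category."""
        self.categories[category] = self.categories.get(category, 0.0) + amount

    def transfer(self, src: str, dst: str, amount: float) -> None:
        """Move money between categories."""
        if self.categories.get(src, 0.0) < amount:
            raise ValueError("insufficient funds in source category")
        self.categories[src] -= amount
        self.deposit(dst, amount)

b = Budget()
b.deposit("taxes", 500)    # e.g. every pay period
b.deposit("new car", 200)
b.transfer("new car", "fun", 50)
print(b.categories)  # {'taxes': 500.0, 'new car': 150.0, 'fun': 50.0}
```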


> We are getting to the point where lectures should be mandated to be recorded and put into the public domain

Does that really make sense? I thought MIT was a private institution. The research is often publicly funded but I don't believe that's the case for the tuition.


> What exactly is the UNIX philosophy?

I believe it is the idea of writing small tools focused on doing one thing well, with reusability and composition in mind, as opposed to writing larger, complicated tools that do multiple things.

https://en.wikipedia.org/wiki/Unix_philosophy
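As a toy illustration (my own, not from that page): the shell pipeline `grep error app.log | sort` chains two single-purpose tools, and the same composition style might look like this in Python:

```python
# Small single-purpose "filters" composed the way shell tools are
# composed with pipes. The data and function names are illustrative.
lines = ["error: disk full", "ok: started", "error: timeout"]

def grep(pattern, lines):
    """Keep only lines containing `pattern` (one job, like grep)."""
    return (l for l in lines if pattern in l)

def sort_lines(lines):
    """Order the lines (one job, like sort)."""
    return sorted(lines)

# Compose the pieces, the way a shell user chains `grep ... | sort`.
for line in sort_lines(grep("error", lines)):
    print(line)
```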


A cargo-cult philosophy, never adopted by commercial UNIX clones and adored by UNIX FOSS, where the man page of each GNU tool, describing its available set of command-line arguments, reads like a kitchen sink.


I believe you're in agreement.

The original comment said

> It's not until you reach a level where everyone around you has a mastery of the fundamentals, that the meta comes into play.

and you said

> At any given MMR bracket, everyone around you has a similar mastery of the fundamentals so the meta is relevant.

which refutes the original comment.

> However, put someone with the fundamentals of a 6k+ MMR player into a 3 or even 4k match and they'll demolish everyone else even with a disadvantage relative to the metagame.

I don't think anyone disagrees with this; the issue I (and the person you're responding to) have is with the idea, as you say, that you should ignore understanding the meta until you're at 6k MMR.

Both are important and you should develop both at the same time if you want to improve.


Perhaps the best use of the meta, as noted in the article, is that it helps guide you to the fundamentals worth learning, filtering down which ones you actually need to study.

Zeus is never picked? Great! No need to learn that kit. Or perhaps digging into why Zeus isn't picked can help inform why another character is more useful, especially in conjunction with others on a team.


> This other Googler likely already has a competing offer— that’s how he got promoted last year.

Source? I'm very skeptical this is true. I am not a huge fan of Google's promotion process but it isn't that bad.


I have two friends who told me they were trying to get competing offers so that they could use them in their promo packets. One was on the ads team, an extremely sharp guy, trying to make Staff level in his 3rd year there.

Joe Beda, one of the creators of Kubernetes, had a giant competing offer from Facebook. He turned it down to stay at Google, got some more freedom, and now we have Kubernetes (!).

If you look on Blind, there is plenty of discussion of using counter offers to force the promo process to work.


I worked at Google for nearly a decade, as a manager for most of it, and I strongly believe there's a misunderstanding here. Google does negotiate the pay of high-performing L5+ engineers against outside offers, but promos are simply off the table. Google is really conservative about diluting its leveling bar and would much rather give well-above-band counter offers than a promotion.

Moreover, promo decisions are made by a committee of engineers who don't work with you -- they have no incentive to even care about your outside offer.


I've literally never seen mention of this on Blind. I see jokes about leaving and then returning at L+1, but never "forcing promo" via a competing offer.


I know someone who did it, but that was a few years ago during the peak of talent competition with Facebook. And the person was clearly qualified for the promo anyway.


It's unlikely the promo was because of the competing offer. During that period of competition with Facebook, though, Google did give generous retention grants.

The promo process is fairly opaque, so there's no way someone would know whether the committee or manager took a competing offer into account or whether they "forced it".

It seems more likely they were qualified and the timing was coincidental with the competing offer.


> Joe Beda, one of the creators of Kubernetes, had a giant competing offering from Facebook. Turned it down to stay at Google, got some more freedom, and now we have Kubernetes (!).

Is this public knowledge? Otherwise, I think it is uncool to talk about other people's situations on a public forum without their approval.


It's just an example, and not very far-fetched. Ultimately, the reason folks get formally promoted is to retain them.


> Aside from Uber drivers, SV types, and wannabe road warrior squinters, nobody uses maps for their daily activities. People know where they're going and they go there without consulting technology. That's why we have traffic jams.

I might know where I'm going, but I don't always know how to get there, so I use Google Maps all the time. I don't use it for my daily commute, but if I'm going to a friend's or something I'll usually use it.

When I ride in my friends' cars, we also use it all the time. Often we're going to a restaurant or some other location we don't usually go to.

For exploration, my friends pretty much just use Yelp or Google Search. I sometimes, but rarely, use Google Maps for this because I find the reviews are usually much lower quality and Google Maps is too slow (I have an old Pixel 2).


> Lives at stake don't change anything here. The question is whether self-driving cars, even with the errors, are safer for people than regular drivers on average. If so, then absolutely yes everyone should bet their lives and their families'.

Maybe that makes sense logically, but from an ethical perspective I'd argue it's much more complicated than that (e.g. the trolley problem).

In the current system if a human is at fault, they take the blame for the accident. If we decide to move to self driving cars that we know are far from perfect but statistically better than humans, who do we blame when an accident inevitably happens? Do we blame the manufacturer even though their system is operating within the limits they've advertised?

Or do we just say well, it's better than it used to be and it's no one's fault? When the systems become significantly better than humans, I can see this perhaps being a reasonable argument, but if it's just slightly better, I'm not sure people will be convinced.


I'm voting for the "less dead people" option. Mostly because I'm a selfish person: I've been in automobile accidents caused by lapses in human attention, and I want it to be less likely that I'll die in a car crash.


But it's not just about quantity. It's also _different_ people who will die. That radically alters things from an ethical perspective.


Yep. Medical professionals have been aware of this dilemma for millennia: many people die from an ailment if no treatment is attempted, but bad approaches to treatment can kill people who would have survived otherwise. And setting 'better average accident rates' as the threshold for self-driving vehicle software developers to be immune from the consequences of their errors is like setting 'better than witch doctors' as the threshold for making doctors immune from claims of malpractice.

"Move fast, break different things" is not the answer.


What if it's a very much better average accident rate? This isn't black-and-white.


No, it certainly isn't black and white. Indeed, 'much better' is hard to even define when human drivers cover an enormous number of miles per accident, miles driven are heterogeneous in terms of risk, and there isn't necessarily a universally accepted classification of accident severity, or of whether drivers should be excluded from the sample as being 'at fault' to an unacceptable degree.

Plus, the AV software isn't staying the same forever: every release introduces new potential edge-case bugs, and any new edge-case bug which produces a fatality every hundred million miles makes that software release more lethal than human drivers, even if it's better at not denting cars whilst parking and always observes speed limits in between. I don't think every new release is getting enough billions of miles of driving with safety drivers to reassure us there's no statistically significant risk of new edge-case bugs, though.

And in context, we still punish surgeons for causing fatalities through gross negligence even though overall they are many orders of magnitude better at performing surgery than the average human.


Sophistry. 'Much better' can be very clear, in terms of death or injury, or property damage, or insurance claims, or half a dozen reasonable measures.

Sure, it takes miles to determine what's better. Once automated driving is happening in millions (instead of hundreds) of cars on the road, it will take only days to measure.


I mean, the 'half a dozen reasonable measures' are a problem, not a solution, when they're not all saying the same thing. And sure, it only takes days to discover that the latest version of the software actually isn't safer than the average human. Days, plus a lot of unnecessary deaths, plus the likelihood the fix will cause other unnecessary deaths instead [maybe more, maybe less]. It's frankly sociopathic to dismiss the possibility this might be a problem as sophistry.


Straw man? There are many phases to testing a new piece of software, short of deploying everything to the field indiscriminately.

Some of us believe (perhaps wrongly, but there it is) that the human error rate will be trivially easy to improve upon. That's not sociopathic. It would be unhelpful to dismiss this innovation (self-driving cars) because of FUD.


Some of us believe, based on the evidence, that the human fatal error rate is as low as 3 per billion miles driven in many countries, and that some people actually are better-than-average drivers. It might be trivially easy to improve upon the human ability not to dent cars whilst parking, or to observe speed limits, but you're going to struggle to argue that improving on the fatal error rate is trivially easy for AI, or that the insurance cost of the dents matters more than the lives anyway.

People who actually want these initiatives to succeed are going to have to do better than sneering dismissal in response to people pointing out the obvious facts: complex software seldom runs for a billion hours without bugs, and successfully overfitting to simulation data in a testing process doesn't mean a new iteration of software will handle novelty it hasn't been designed for less fatally than humans over the billions of real-world miles we need to be sure.
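To put a rough number on that last point (my own back-of-the-envelope, using only the ~3-per-billion-miles figure above): under a simple Poisson model, here is how many fatality-free miles a single software release would need to log before we could say, at 95% confidence, that it is no worse than the human baseline.

```python
import math

# Assumed human baseline from the comment above: ~3 fatalities per
# billion miles. The Poisson model and confidence level are my choices.
human_rate = 3 / 1e9   # fatalities per mile
confidence = 0.95

# With zero fatalities observed over m miles, we can reject
# "rate >= human_rate" once exp(-human_rate * m) <= 1 - confidence.
miles_needed = math.log(1 / (1 - confidence)) / human_rate
print(f"~{miles_needed / 1e9:.1f} billion fatality-free miles")  # ~1.0
```

And that's per release: by this argument, each new version would in principle need its own billion or so miles to re-establish the same confidence.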


People CAN drive well. But understand: in my rural state, the highway department has signs over the road showing fatalities for the year. It averages one a day. I don't think the cancer patients in the hospital die that frequently.

So you can name-call all you like and disparage dialog because you disagree or whatever. But I don't think a billion miles between accidents is anywhere close to what I see every day.

FUD isn't a position; it's got no place in this public-safety discussion.


I vote for that option as well.

So far, I have been killed exactly zero times in car crashes. All the empirical evidence tells me that there's no need to surrender control to a computer.

If I die in a crash, perhaps I'll change my mind...


Do we gain something from placing blame? Who do we blame for people who die from natural disasters? Freak occurrences?

Are deaths where blame can be placed preferable to deaths where it cannot? By what factor? Should we try to exchange one of the latter for two of the former?


> The thing is, I see no way to have full self driving without AGI.

Why? AGI seems like a significantly harder problem than self-driving cars (itself admittedly a hard problem).

What I personally think will happen is that we'll meet somewhere in the middle: we can redesign roads and cars to make the self-driving problem easier.


Because of all the edge cases, which humans can handle a good amount of the time since we have a lot more intelligence than any computer.

But I agree with you that if we meet in the middle, that could actually work well.


I work for Google, opinions are my own.

In theory, breaking up monopolies increases competition, which allows better companies and products to spring up that would otherwise be crushed by anti-competitive practices. If you agree with that theory, then the act of breaking up monopolies actually makes the US more competitive.


> AppAmaGooBookSoft ignores Netflix, which is important for salary calibration since they are cash-only. And FAANG ignores MS.

Why does it matter if it's cash-only? The stock is equivalent to cash unless you care that much about the fluctuations.

At Google we have Autosale, so any time I get a stock grant it's automatically sold and deposited into my bank account as cash.

At Microsoft, I think they go even further and give you an amount of stock based on the cash value at the time you receive it, so you don't even have to worry about price fluctuations. (At least that is my understanding.)

I believe the reason FAANG ignores MS is that MS salaries are lower, but that is only based on my personal experience.


FAANG was coined based on stock performance, and MS wasn't a hot stock at the time.

It happened that, because their employees are paid heavily in stock, the acronym became synonymous with high-paying jobs.


Cash is not equivalent to stock. Stock has a higher expected value, since unvested stock is effectively invested in the market while unvested cash is not.

Hell, a stock offer is something like 30% more valuable than cash over 4 years, though obviously riskier.

This is also why a lot of start-up offers actually blow away FAANG comp (a way higher expected rate of return), again in expectation.
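A quick back-of-the-envelope check of that ~30% figure (all numbers here are my own assumptions, not from the thread: a $400k grant vesting evenly over 4 years, with the stock appreciating ~10%/yr):

```python
# Compare the nominal value of cash installments vs. stock tranches at
# their vest dates. Grant size, vest schedule, and return are assumed.
grant = 400_000
annual_return = 0.10  # assumed stock appreciation per year
years = 4

# Each tranche vests at the end of year k, having appreciated k years
# since the grant date; the matching cash installment is flat.
vested_value = sum(
    (grant / years) * (1 + annual_return) ** k
    for k in range(1, years + 1)
)
print(f"cash:  ${grant:,.0f}")
print(f"stock: ${vested_value:,.0f} ({vested_value / grant - 1:.0%} more)")
# -> stock: $510,510 (28% more), in the ballpark of the ~30% claim
```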

