traviswingo's comments | Hacker News

This literally made me lol. Not sure if it’s true. It might be true. But come on!


Lol


This reply cannot be overstated. Those who strongly advocate for highly leveraged equity positions and use real estate to justify them have either yet to experience a true market decline, or are simply really green to investing.

If I could leverage 4:1 on the total market index using a fixed 30-year loan without the ability to force a sale, I would in a heartbeat. Unfortunately, that’s just not how it works.

And anything claiming to be the solution to that (like a leveraged ETF such as UPRO), suffers from volatility decay that causes it to underperform or eventually go to zero in horizontal markets (e.g. lost decades).
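The decay is easy to see with a toy simulation: take a hypothetical underlying index that alternates up and down so it ends flat (a "horizontal market"), and a fund that resets 3x leverage daily. The numbers here are made up purely for illustration, not modeled on UPRO:

```python
# Hypothetical illustration of volatility decay. The underlying index
# alternates +5% and then a move that exactly cancels it, so it ends flat.
# The leveraged fund applies 3x to EACH DAY'S return (daily reset),
# not 3x to the overall return.
index = 1.0
lev3 = 1.0
for day in range(252):  # roughly one trading year
    r = 0.05 if day % 2 == 0 else -0.05 / 1.05  # up 5%, then back to flat
    index *= 1 + r
    lev3 *= 1 + 3 * r

print(round(index, 4))  # stays ~1.0: the underlying went nowhere
print(round(lev3, 4))   # well below 1.0 despite a flat underlying
```

Each up/down cycle multiplies the leveraged fund by (1.15)(1 − 0.15/1.05) ≈ 0.986, so it bleeds value every cycle even though the underlying is unchanged.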


I refuse to pay for this. I’m willing to wager it will be free within a reasonable amount of time.


Relevant XKCD: https://xkcd.com/605/


Now do TVs


I picked up a car from a mechanic the other day and we got to riffing about my own Tesla. He admitted to having friends that worked at Tesla on the manufacturing line. In his own words, these guys are “complete idiots,” and “do shrooms before assembling cars.”

I want to take his words with a grain of salt, but…I kinda believe it. Obviously hearsay means nothing, though.


I mean, I don’t doubt that some people working the assembly lines are getting a little messed up before their shifts.

Our classic Mini from Australia has its own production anomalies; you kinda just chalk it up to the workers being a little drunk during its construction.


Not sure if you’re discrediting the parent comment or pointing out the reality of it.

But yes this is something only someone with a high income would say. And yes, it’s true.

Beyond a certain income, the wrong job can be soul sucking and depressing. And it takes achieving that level of income to fully appreciate that reality.


It's been described as another way to hit rock bottom, because you realize that even after "making it" it doesn't make you happy. So NOW what do you do?


This sounds an awful lot like analysis paralysis to me. My recommendation: just launch. You probably won’t run into any of the problems you’re worried about and, if you do, you can just patch them up.

As you launch more and spend more time dealing with users the default things to do will become second nature, and you’ll find yourself using the built in tools from AWS, DigitalOcean, CloudFlare, etc. rather than rolling them yourself.

But seriously, just launch. There’s a really good chance you won’t have any problems.


Please don't do "just launch" if you accept any user accounts or PII =/ You're responsible for their data and security too, and should at least exercise some minimum security... doesn't have to be the most secure site in the world but soooome bare effort would be appreciated.
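For anyone wondering what that "bare effort" might look like, here's a minimal sketch of one piece of it: never storing passwords in plain text. This uses only Python's standard library (`hashlib.scrypt`); the function names are my own for illustration, not from any particular framework:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A fresh random salt per user defeats precomputed rainbow tables.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking info via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

Store only the salt and digest, never the password itself. This is one of the cheapest things you can do before launch, and it's the difference between a breach leaking hashes vs. leaking every user's reused password.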


I'm actually with traviswingo. Just launch. Chances are, no one will care about your website for quite a while. Unless you're building a product with a lot of hype around it, there's likely going to be a huge gap between launching and seeing any traffic at all. This gives you plenty of time to implement some of the great recommendations given here. But don't delay the launch for it.


There are a million bots scanning all of IPv4 space every minute looking for automated exploits. You don't need someone dedicated looking to get into trouble.


Please don't listen to this advice, this is precisely how services get pwned.


I’ve had people look at me like I’m crazy when I say that the A.I. that will be a threat to humanity will be of Chinese origin. It’s this exact “succeed, at any cost” mentality that cements my belief.


"Eschew flamebait. Avoid generic tangents."

https://news.ycombinator.com/newsguidelines.html


strange comment for this particular video. not really sure what a rapid disassembly of a rocket has to do with AI.

i don't disagree though, maybe for differing reasons


The boosters should have their own flight termination system, which should have fired if they were going to fall anywhere near populated land. Everyone in the US has to have one, even when launching from Florida over the ocean, and each independent flight component (like a rocket booster) has to have its own.

Allowing a rocket to fall on land like that shows extreme contempt for public safety.


I think what OP might be saying is that the attitude that it's OK to just yeet a rocket booster at a populated area might also not yield much safety when applied to Artificial General Intelligence.


Exactly. The Chinese won't be worried about 'alignment' to prevent bias against minorities. That will be a feature.

Success is all that matters.


[flagged]


It's complicated, isn't it.

Yes, as a guest or visitor I am granted a lot of deference, especially in someone's home. Chinese people are very friendly.

But China is modernizing and progressing very rapidly, and much of that has a human cost. So I think it's still correct, as OP was implying, that China will not be as worried about controlling AI. As many fear, China will use AI for surveillance and control.


I imagine you're correct if we're talking about the elite enjoying more legal privileges than us mere commoners. Both countries do have that in common.


Let’s ask the Uyghurs about those legal privileges.


The Uyghurs are a particularly interesting case. Like several Chinese minority groups, they are entitled to extra points on the 高考.

Unlike other minority groups, it appears to be relatively common for universities to discount their score by that amount.


Extra points on some stupid exam are supposed to make up for genocide/extrajudicial internment?


China makes Saudi Arabia look like a paragon of human rights.


SpaceX's flight termination system malfunctioned last year as well, though it was out to sea at that point.


That's the entire reason why they launch from the coast, so that if something goes wrong the debris or booster will fall into the ocean.


What if the termination system was needed because the rocket went out of control early on?


You probably still have a >50% chance that it's going to come down over water, since that's the direction it was initially moving. Yes, if flight termination fails and by chance it vectors backwards towards land somehow, without just spinning like out-of-control rockets normally do, then it may fly far enough to wind up on someone else's property. But that's why launch sites are located at a reasonable distance from inhabited areas. The point is that it's very different to take all of those precautions vs. just flying your rockets over inhabited areas as a matter of course and not giving a fuck if they land on someone.


A flight termination system isn’t relevant when you’re deliberately launching over land. The FTS doesn’t disintegrate the thing.


In general I agree with the point you are making, but if you will indulge my "well actually..."

From the Range Commanders Council Range Safety Group's 2010 summary of FTS requirements:

2. produce a small number of pieces, all of which are unstable and impact within a small footprint;

And another requirement relevant to this particular video, where the spent booster clearly had uncombusted carcinogenic hypergolic fuel when it hit the ground:

3. control disposition of hazardous materials (burning propellant, toxic materials, radioactive materials, ordnance, etc.);


> It’s this exact “succeed, at any cost” mentality that cements my belief.

I think that's an ideology thing that crosses borders; it's a problem on both sides of the Pacific.


In the context of this video, you're just wrong.

Other countries seem to be very careful with their reentry plans.


The analogy falls apart with AI, though.

- American AI “safety” practices have little or nothing in common with best practices in other fields of engineering, to the extent they can be said to exist at all (most AI safety work focuses on making sure the AI doesn’t say anything that might offend someone).

- When a rocket blows up, people die but we learn from the mistake. When an AI seriously “threatens humanity”, humanity dies and we very possibly don’t get a second chance.


A rocket booster falling on your head is a real threat.

AI taking over the world is, as yet, an imagined threat.

When AI proves it will take over the world and China builds theirs without the "don't take over the world" code mandated by every other country, let's talk more.


> let's talk more.

Unfortunately if that happens, we won’t be able to!


I don't feel like the parent was talking "in the context of this video," since they mentioned AI, so I figured we'd left that context behind in this comment tree. Seemingly, I was wrong.


I didn’t have time to read the entire article, so forgive me if I just restate what it already said.

But, isn’t “being bored” leading to enhanced productivity deeply rooted in neuroscience, particularly with regards to dopamine and the reward centers of the brain?

Dopamine is incredibly powerful when understood properly. It can both be the reason you can’t seem to start, and also the thing that allows you to accomplish your goals. Your brain uses dopamine to get you to do more stuff that’s “good for you.”

The problem is, everything in our lives today is engineered to release dopamine, so we get “rewarded” for doing nothing.

By being bored, you narrow your options down to either doing nothing or doing the thing you’ve been putting off. It doesn’t take long before that thing starts looking appealing, and you get rewarded for doing it.

This is essentially what people experience when they do a “dopamine fast,” which isn’t really a fast from dopamine at all; it’s more of a fast from stuff that isn’t productive. It seems to work because we really do appear to have a dopamine limit within a short time period: once it's reached, things that used to be enjoyable simply aren’t anymore.

Same goes for creativity, motivation, etc. Just stepping away from all the stimulation brings those baseline levels back down and allows you to get excited about doing stuff again, even if it’s not insanely fun.


That’s simply because confirmation bias heavily influences perspective. If I wanted something to be true, I wouldn’t be hard-pressed to find research, or conduct my own, that makes a solid argument for it.



