
I’m not sure an “objectively great” feature exists, because “great” is such a vague and subjective term.

I think it’s more productive to discuss it in terms of the use cases and who they benefit.


I get it, he’s trying to do everything in software (with the existing cameras), since software is “cheap” and can do anything in theory.

Unfortunately they keep showing how they’re in over their head with some of these features (like FSD). But it’s fine because their customers agree to keep beta testing while they figure things out on production cars.


>But it’s fine because their customers agree to keep beta testing while they figure things out on production cars.

This is the bottom line right here. It's the same with many other products too. It's not really wrong to run a business this way, because SO many customers apparently like it and continue to support these companies. Other examples are Microsoft and the ads it's baking into Windows, and Netflix and its price increases combined with feature degradation (like cracking down on password sharing in families).


> This is the bottom line right here. It's the same with many other products too.

This is how I feel about some released "MVPs". "I gotta show something even if it's crap. If it doesn't work, we'll fix it along the way."

Anytime someone mentions they're releasing an MVP, I make a note of it to revisit later when they're done with alpha/beta testing. Incomplete released products kind of bug me, and they seem to be more common these days, but I guess that may be a "me" thing. I'm glad my clients are tiny to small businesses and want things to work properly before they take over.


They’ve actually had 48V systems in production cars for several years now. Tesla is late to the game on that one.

https://en.m.wikipedia.org/wiki/48-volt_electrical_system


Just like “this is the year of the Linux Desktop”?

I’d love to have an open non-profit alternative to Google’s productivity suite, but realistically it’s very hard to host those things outside a traditional for profit business. Building the open source software is the easy part, and has been done already. The hard parts are things like email deliverability, and getting people to pay for what’s typically a free ad-supported service.


Linux is not nonprofit, it’s no-profit. There’s a difference. I would gladly pay for a Linux operating system. The problem has been that they’re giving away a great product for free. Open source developers could get paid under a nonprofit model, but not under the current no-profit model.



It seems like they’ve gotten to the “holy grail” of deployment, where in theory developers don’t have to worry about infrastructure at all.

I’ve seen many teams go for simple/leaky abstractions on top of Kubernetes to provide a similar solution, which is tempting because it’s easy and flexible. The problem is that all your devs then need to be trained in all the complexities of Kubernetes deployments anyway. Hopefully Uber abstracted away Kubernetes and Mesos enough to be worthwhile, and they have a great infra team to support the devs.


It's not clear to me that being completely unaware of your infrastructure is a good thing. I don't think it's too much to ask an engineer to understand k8s and think about where their service will live, even if it's a CI system that actually deploys. Furthermore, many layers of abstraction, especially in-house abstractions, just mean you have more code to maintain, another system for people to learn, and existing knowledge that you can't leverage anymore.


There is a wide spectrum of infrastructure (and platforms, frameworks, etc.), from “allows applications to do just about anything, though it may be very complex” to “severely constrains applications but greatly simplifies doing things within those constraints.” To be clear, by “just about anything” I am not talking about whether some business logic is expressible, but whether you can e.g. use eBPF and cgroups, use some esoteric network protocol, run a stateful service that pulls from a queue, issue any network call to anything on the Internet, etc.

If you are developing application software like Uber, 99.99% of the time you really do not need to be doing anything “fancy” or “exotic” in your service. Your service receives data, does some stuff with it (connects to a db or issues calls to other services), and returns data. If you let that 0.01% of things dictate where your internal platform falls on that spectrum, you will make things much more complicated and difficult for the other 99.99%. That is where leaky abstractions and bugs come from, both from the platform trying to be more general than it needs to be and from pushing poorly understood boilerplate tasks (like manually configuring auth, certificates, and TLS for each service) onto infrastructure users.

Being unaware of infrastructure (of course not completely unaware, but essentially not needing to actively consider it while doing things) is actually the ideal state, provided that lack of awareness is because “it just works so well it doesn’t need to be considered”. It lets people get shit done without pushing configuration and leaky abstractions onto them.

I’ll give you one example of something that does an excellent job of this: Linux. Application memory in Linux requires some very complex work under the hood, but it has decent default configurations, with only a couple of commonly changed parameters that most applications never need to touch, and a very simple API for applications to interface with. It’s similar with the send/receive syscalls and the use of files for I/O ranging from remote networking to IPC to local disk. These are wonderful APIs and abstractions that simplify very hard problems. The problem with in-house abstractions isn’t that they attempt abstraction, but that sometimes they just don’t do a good job, or churn through abstractions faster than they can stabilize.
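
To make the “files for I/O” point concrete, here’s a runnable Python sketch (Linux/macOS; the file path is arbitrary) showing the same descriptor-level read/write interface working across a disk file, a pipe, and a socket:

  import os
  import socket

  # "Everything is a file descriptor": the same read/write calls
  # work for disk files, pipes, and sockets alike.
  fd = os.open("/tmp/demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
  os.write(fd, b"hello disk\n")
  os.close(fd)

  r, w = os.pipe()                # an IPC channel: just two descriptors
  os.write(w, b"hello pipe\n")
  print(os.read(r, 64))
  os.close(r); os.close(w)

  a, b = socket.socketpair()      # a local "network" connection
  a.sendall(b"hello socket\n")
  print(os.read(b.fileno(), 64))  # read a socket with the plain fd API
  a.close(); b.close()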


Well put. 99% of companies don't need to introduce such complexity for their relatively trivial use cases (though well-intentioned but misguided engineers will try to invent it anyway).


Part of my point is that the goal with such a system is usually to require less infra work/knowledge from your devs, but it backfires if you don’t invest enough in your abstraction.

The implicit goal of these abstractions is really to centralize knowledge and best practices around the underlying tech. Kubernetes itself is trying to free developers from understanding server management, but you could argue that for the vast majority of organizations it’s not worth using directly vs. just teaching your devs how to manage VMs.

I don’t think you’re ever going to stop the addition of more and more layers of abstraction, so the best we can hope for is that they’re done well. Otherwise you may as well go back to writing raw Ethernet frames in assembly on bare metal.


> Part of my point is that the goal with such a system is usually to require less infra work/knowledge from your devs, but it backfires if you don’t invest enough in your abstraction.

I disagree that the solution is to simply build more. Often the best thing to do is accept that devs will need to know a little infra, and work with that assumption.

> The implicit goal of these abstractions is really to centralize knowledge and best practices around the underlying tech.

I agree with that.

> Kubernetes itself is trying to free developers from understanding server management, but you could argue that for the vast majority of organizations it’s not worth using directly vs. just teaching your devs how to manage VMs.

The difference is that spinning up a VM and setting it up to have all the features you would want from k8s would be too much to ask of a dev. You would probably just end up re-creating k8s.

> I don’t think you’re ever going to stop the addition of more and more layers of abstraction, so the best we can hope for is that they’re done well. Otherwise you may as well go back to writing raw Ethernet frames in assembly on bare metal.

The problem is that abstractions are not free, and most of the time they aren't done well. Once in a while you'll get one that reduces (hides) complexity and becomes an industry standard, making it a no-brainer to adopt, but most of your in-house abstractions are just going to make your life worse.


I think the biggest “win” with abstractions is that they make it easier for infra teams to update the underlying concretions (is that a word? the concrete version of the abstraction) without having to dig deep into the codebase.

E.g. with Kubernetes, if you have the actual manifests defined by every team, it is a pain to do any sort of k8s update. With a simple abstraction where teams only define the things they are interested in configuring (e.g. Helm values), that task gets a lot simpler.
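
As a minimal sketch of that pattern in Python (not Helm itself; the key names are illustrative, not any real chart’s schema): the platform owns a full set of defaults, teams supply only the values they care about, and the two are deep-merged before anything hits the cluster.

  def deep_merge(defaults: dict, overrides: dict) -> dict:
      # Recursively overlay team-supplied values onto platform defaults.
      out = dict(defaults)
      for key, val in overrides.items():
          if isinstance(val, dict) and isinstance(out.get(key), dict):
              out[key] = deep_merge(out[key], val)
          else:
              out[key] = val
      return out

  platform_defaults = {
      "replicas": 2,
      "resources": {"cpu": "500m", "memory": "512Mi"},
      "probes": {"liveness": "/healthz", "readiness": "/ready"},
  }
  team_values = {"replicas": 5, "resources": {"memory": "1Gi"}}

  print(deep_merge(platform_defaults, team_values))
  # {'replicas': 5, 'resources': {'cpu': '500m', 'memory': '1Gi'},
  #  'probes': {'liveness': '/healthz', 'readiness': '/ready'}}

The infra team can then change the defaults (or the manifests generated from them) in one place without touching every team’s config.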


All it takes is one microservice hanging on a gRPC request, server hardware that stops doing some fundamental thing correctly, or some weird network quirk that 10x’s latency to half the switch ports in a rack, and you end up with insane, sophisticated cascading failures.

Because engineers don’t have to understand infra, their systems often span geographies and failure domains in unanticipated, undetectable ways. In my opinion the only antidote is a thorough understanding of your stack down to the metal it’s running on.
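
On that first failure mode (a hanging gRPC request), one standard containment is a per-call deadline, so a stuck dependency fails fast instead of stalling every caller up the chain. A hedged sketch in Python: the address and the generated inventory_pb2* modules are hypothetical, but the timeout argument and deadline handling are real gRPC Python features.

  import grpc
  import inventory_pb2              # hypothetical generated modules
  import inventory_pb2_grpc

  channel = grpc.insecure_channel("inventory.internal:50051")
  stub = inventory_pb2_grpc.InventoryStub(channel)

  try:
      # Fail after 2 s instead of hanging forever and cascading upstream.
      reply = stub.GetItem(inventory_pb2.ItemRequest(id="42"), timeout=2.0)
  except grpc.RpcError as err:
      if err.code() == grpc.StatusCode.DEADLINE_EXCEEDED:
          reply = None  # degrade gracefully / feed a circuit breaker
      else:
          raise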


A single engineer can’t understand everything at scale.

Even at a 100-person startup I worked for, where I designed the infrastructure and best practices and wrote the initial proof-of-concept code for about 15 microservices, it got to the point where I couldn’t understand everything and had to hire people to separate out the responsibilities.

We sold access to microservices to large health care organizations for their websites and mobile apps. We aggregated publicly available data on providers, like licenses, education, etc.

Our scaling stood up as we added clients that could increase demand by 20% overnight, and when a little worldwide pandemic happened in 2020 and caused our traffic to spike.


None of the layers of abstraction are perfect. You have to deal with the whole mess all the way down.

We've had individual EC2 instances go bad where I currently work, with Amazon acknowledging a hardware problem after a ticket is raised. The reality is, quickly resolving the issue means detecting it and moving off of the physical machine.

Naturally our tooling has no convenient way to do that, because we have layers of things trying to pretend physical machines don't matter.


No, the answer is keeping all of your VMs stateless and just using autoscaling with the appropriate health checks, even if you just have a min/max of 1.


Describe a health check that can detect any possible hardware problem.

The error rate on the machines was higher in both cases, but many requests still succeeded. Amazon certainly didn't detect an issue right away either.


Is there no way you could record metrics to CloudWatch (even custom metrics populated via the CloudWatch Logs agent) and, over a certain threshold of errors, bring another instance up and kill the existing one? If you can detect sporadic errors, there must be some method to automate it.

I’m assuming this isn’t a web server; if it is, it’s even simpler.
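
A hedged sketch of that idea with boto3 (the namespace, instance ID, and threshold wiring are illustrative, and the alarm evaluation itself is elided): the app publishes an error-rate metric, and on a sustained breach the instance is marked unhealthy so the autoscaling group replaces it.

  import boto3

  cloudwatch = boto3.client("cloudwatch")
  autoscaling = boto3.client("autoscaling")

  # 1. The application (or a log-processing agent) publishes its error rate.
  cloudwatch.put_metric_data(
      Namespace="MyApp",  # hypothetical namespace
      MetricData=[{
          "MetricName": "ErrorRate",
          "Dimensions": [{"Name": "InstanceId",
                          "Value": "i-0123456789abcdef0"}],
          "Value": 7.0,   # 7% of requests failing
          "Unit": "Percent",
      }],
  )

  # 2. When the alarm fires, mark the instance unhealthy: the ASG
  #    terminates it and launches a replacement.
  autoscaling.set_instance_health(
      InstanceId="i-0123456789abcdef0",
      HealthStatus="Unhealthy",
      ShouldRespectGracePeriod=False,
  )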


A statistical rule moves you into the realm of deciding what rate of false positives and false negatives you'll tolerate. Based on data from exactly two incidents in this case, which is obviously a bit fraught.
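
To make the tradeoff concrete, here's a toy Python sketch of such a rule (the window size and threshold are illustrative): alarm when the error rate over a sliding window exceeds a threshold. Lowering the threshold catches real incidents sooner (fewer false negatives) but fires on noise more often (more false positives).

  from collections import deque

  WINDOW, THRESHOLD = 1000, 0.05  # illustrative values to be tuned

  recent = deque(maxlen=WINDOW)

  def record(success: bool) -> bool:
      """Record one request outcome; return True if the alarm fires."""
      recent.append(success)
      if len(recent) < WINDOW:
          return False  # not enough data to judge yet
      error_rate = 1 - sum(recent) / len(recent)
      return error_rate > THRESHOLD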


  > abstractions on top Kubernetes
  > abstracted away Kubernetes
I am beginning to think it's not such a bad thing to live and work in a third-world country far away from SV-induced hype cycles. This is genuinely painful to read.


But lots of people here talk positively about services like Heroku or Fly, where you just push the code somewhere and it runs, without you having to know a lot about the infrastructure.

Not every software development problem is a big-scale problem, and once you identify such a case you can start optimization work taking all the low-level details into account. In reality most scalability problems revolve around databases, caches, concurrency, and locks, and you probably aren't going to tackle many of those in your average stateless service.


Kubernetes works great for larger projects when combined with ArgoCD or similar.

They all use GitOps, which means all infra deployments and changes are tracked and can easily be rolled back on any issue. And the complexity is nothing compared to having to manage your own cloud resources using Terraform etc., which used to be the case.

And these days every developer needs to be on board with DevOps, so there are no real old-school infra teams supporting anyone.


The other "leak" in these abstractions that arises from physical limits is performance, especially when it comes to IO.

This is a major problem for databases and ultimately makes database "portability"/fault tolerance tricky since they work best with direct-attach storage that's inherently bound to a single physical machine.


Not to mention there are all sorts of other limits you can hit at scale just on the compute layer itself (e.g. max PIDs, file descriptors, etc.).

I don’t know if we can truly abstract away the underlying system. The best we can do is give a best-effort approximation that works in most cases, but explicitly call out the limits when they are reached.

I suspect this is just the underlying physical limitation of finite compute resources bubbling up.
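
In that spirit, a runnable Python sketch (Linux; the pid_max read is Linux-only) that surfaces two of those limits explicitly instead of letting them show up as mysterious failures at scale:

  import resource

  # Per-process file descriptor limit (the classic "too many open files").
  soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
  print(f"open file descriptors: soft={soft}, hard={hard}")

  # System-wide PID limit.
  with open("/proc/sys/kernel/pid_max") as f:
      print(f"max pids: {f.read().strip()}")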


I think you’re confusing Progressive Web App (PWA) with Progressive Enhancement. A PWA is basically a web app (typically an SPA) that behaves like a native app, as described in the MDN page they reference. Loading progressively such that the page is still useful without JS is Progressive Enhancement.


It’s conflating “progressive” with “progress”. It’s like conflating functional programming with whether the program functions.


The post glossed over how exactly they detected session hijacking. They mentioned “This detection looks for suspicious sessions appearing without an authentication event that are consistent with session hijacking”, but authentication obviously happened at some point; otherwise the session wouldn’t exist. I’m guessing this is a complicated way of saying the IP changed since login.

Of course the easiest solution is you shouldn’t voluntarily share HAR files for an active session.


How they detected it can be found at the bottom of the blog post:

> *Indicators of Compromise*

> ...

> Okta activity for a user without any clear indication that the user authenticated (e.g. a user.session.start event for that user from a similar geographic area)
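
A hedged sketch of that indicator in Python (not Okta's actual implementation; the event fields are illustrative): flag session activity that has no matching user.session.start event, or whose login came from a different geographic area.

  def find_suspicious_events(events):
      """events: dicts with 'type', 'session_id', and 'geo' keys,
      in chronological order."""
      login_geo = {}  # session_id -> geo of the authentication event
      suspicious = []
      for ev in events:
          if ev["type"] == "user.session.start":
              login_geo[ev["session_id"]] = ev["geo"]
          elif login_geo.get(ev["session_id"]) != ev["geo"]:
              # Activity with no recorded login, or from a different
              # geography than the login: consistent with a stolen session.
              suspicious.append(ev)
      return suspicious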


It’s weird to complain about supporting AEG by claiming they’re somehow the 2nd “monopoly” after Live Nation. You can only have one monopoly in a given industry; otherwise they’re both just close competitors.

Having a close second competitor is still a step in the right direction, even if you disagree with their business practices.


Fine. I used the wrong term. You can't argue anything else I wrote outside of the word "monopoly" though, sooooo...


That’s not true. Two or more companies can share a market via market splitting, and the result is still effectively a monopoly.


That would be a duopoly, not a monopoly.


Perhaps. It still sucks for the same reasons.


It sounds like this builds on top of Ethernet to provide a higher-performance alternative to UDP/TCP, with some sort of hardware acceleration.

I may be in over my head since I’m not an HPC/datacenter expert, but I’m not sure I understand how you’d use this on the software side. Is anyone aware of specific examples (beyond the vague “HPC/AI”)?

edit: as another comment mentioned, the diagram shows it sits on top of UDP/IP, so it’s mostly an alternative to TCP.


From a software perspective you probably wouldn't see Falcon at all. You'd use, say, the RDMA verbs API and under the hood it's accelerated by Falcon.

