
it's hip, they use hip tech and hired hip folks, so you know it's the place to be ;)


what's up with the status page?


There's a global status page, and then there's a local update for people with instances on an affected host --- past some threshold of hosts, the probability of having an issue on some random host gets pretty high just because math. The local status thing happened for people with instances on that machine.
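
To put rough numbers on "just because math" (the incident rate and fleet size below are made up for illustration, not our real figures):

  # odds that at least one host in the fleet has an issue on a given day
  p = 0.001   # assumed per-host daily incident probability (made up)
  n = 3000    # assumed number of hosts (made up)
  print(1 - (1 - p) ** n)   # ~0.95, i.e. near-certain somewhere in the fleet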

Ordinarily, a single-host incident takes a couple minutes to resolve, and, ordinarily, when it's resolved, everything that was running on the host pops right back up. This single-host outage wasn't ordinary. Somehow, a containerd boltdb got corrupted, and it took something like 12 hours for a member of our team (themselves a containerd maintainer) to do some kind of unholy surgery on that database to bring the machine back online.

The runbook we have for handling and communicating single-host outages wasn't tuned for this kind of extended outage. It will be now. Probably we'll just paint the global status page when a single-host outage crosses some kind of time threshold.


thanks for clearing that up


Status pages are usually for marketing purposes.

Why would anyone want to become a new customer if all they see is a jumble of green, yellow and red?

Green status pages attract business.


thanks for sharing your opinion, but I was looking for a reply from someone inside fly.io


DO, OVH, and Hetzner are more stable because they don't use buzzwords?


I guess what OP is getting at is that these providers stick to the battle-tested, proven bedrock and don't pitch anything like "run your app where your users are". I find that interesting, because that too can be done with any cloud that has a datacenter in the region where you happen to have users.

So this "closer to your users" voodoo is a little beyond me.


The 'where each user is' is implicit; the expectation is that you're some kind of global SaaS and you want low latency wherever your users are.

Sure, you can do that with any cloud (or several) that has datacenters in a suitable spread of regions, but I suppose the point (or claimed point, selling point, if you like) is that coordinating that yourself is more difficult or more expensive. Fly says 'give us one container spec and tell us which regions to run it in', not 'here are machines/VMs in whichever regions you want, figure it out'. It's an abstraction on top of the 'battle-tested, proven bedrock' providers in a sense, except that I believe they run their own metal (to keep costs down, presumably).
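
Roughly, the workflow they're selling looks like this (commands from memory and possibly out of date; check fly's docs before relying on them):

  # one container spec, run close to users in several regions
  fly launch                               # writes fly.toml with the app spec
  fly deploy                               # builds and ships the container
  fly scale count 3 --region ams,syd,iad   # instances in three regions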


Some workloads are surely latency-sensitive, but my possibly flawed opinion is that many of those transactional CRUD systems don't need to be that close to the edge.

I mean, chat or e-commerce, yes, the edge and all.

But for a ticketing system, an invoicing solution or the like, a few hundred milliseconds aren't that big a deal; compliance and regulations matter more.


monitoring means you might get called in on your night out

who wants that?


Every web dev worth their salt knows what they signed up for.

For the unique privilege of being able to build machines out of thin air, I will accept the occasional weekend page


> a throat to choke

yikes


That's a common phrase, not to be taken literally.

It just means a single person (at the vendor) you can complain to, or raise an issue with.


yea, it's just another one to add to a list of expressions that are unnecessarily aggressive, and for which there are better alternatives


Been in the industry a long time. It’s not a common phrase. It’s weirdly violent. At most “someone to yell at”. A throat to choke? What the fuck.


not a native speaker, but have been reading and writing english long enough to pick up the meaning immediately

anyway, had the same thought when typing my sibling comment, felt so disgusted reading that


in order to reduce hallucinations one can use other tricks, chain-of-thought and reflection being two popular ones


In this case you are tricking ChatGPT into outputting information it can't know.


models are still having trouble reasoning – current tricks still depend on feeding back an intermediate answer with a different prompt, but it's still an LLM, and most of the time the same LLM
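
a minimal sketch of that feed-back loop, with call_llm as a hypothetical stand-in for whatever model API you use (not a real library call):

  def call_llm(prompt):
    # hypothetical stand-in for your actual model API call
    ...

  question = "How many r's are in 'strawberry'?"

  # chain-of-thought: ask the model to show its working
  draft = call_llm(f"{question}\nLet's think step by step.")

  # reflection: feed the intermediate answer back with a different prompt --
  # note it's the same LLM grading its own homework
  final = call_llm(
    f"Question: {question}\n"
    f"Draft answer: {draft}\n"
    "Check the draft for mistakes and give a corrected answer."
  )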


Yeah, it's a large language model not a large reasoning model. I don't really have any faith in what seems to me like attempts to make LLMs act like LRMs.

I think it's always going to be a bit shit; no matter how large they make them, it's always going to be an obvious faker, always going to confabulate and lie, etc.

As long as we are using a technology that merely fakes intelligence (using statistics to generate the most likely next token based on training data is not intelligence), it is going to be obvious that it's fake. This whole bubble is going to burst when people start realizing how overrated it is.


that would be copilot, not chatgpt


parent gave five examples of issues they found while using a taxi in the manner you described

you could have just believed them instead of being antagonistic about it; the antagonism is especially unnecessary given that the topic has a history that supports the experiences the parent poster listed


it's funny the person is upset about some taxi driver cheating them, but okay with paying tribute to some app developer middleman with every trip


The wise man solemnly bowed his head and said “there’s actually zero difference between free exchange of currency for service and fraud.”


if you're going to cheat a man do it honestly


  import random

  import flask

  app = flask.Flask(__name__)  # Flask() needs the import name

  @app.route('/')
  def random_redir():
    # re-reads urls.txt on every request (fine for a demo)
    with open('urls.txt') as f:
      urls = [line.strip() for line in f if line.strip()]
    return flask.redirect(random.choice(urls))


Prolly don't want to read the file on every request though
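
something like this, reusing the flask app from the snippet above (untested sketch):

  # load the list once at startup instead of on every request
  with open('urls.txt') as f:
    URLS = [line.strip() for line in f if line.strip()]

  @app.route('/')
  def random_redir():
    return flask.redirect(random.choice(URLS))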


congratulations, you've optimised a snippet that never ran – it was typed into the post form from memory


Premature optimization may sometimes be inadvisable, but in this case the optimized version uses half the memory, is two orders of magnitude faster, and doesn't let your users DoS you by triggering huge file reads on each request.


> uses half the memory

this is false

> two orders of magnitude faster

incorrect

> doesn't let your users DOS you by triggering huge file reads on each request.

I understood what you were trying to do on your first reply – "Prolly don't want to read the file on every request though" – no need to repeat yourself.

please remember:

1. it's pseudo code, the optimisation is superfluous

2. it never ran, the optimisation is useless


Optimization is never useless. Even code that doesn't run is best written in a resource-effective manner.


unless the intent of the code is to demonstrate some implementation where the optimisations would be distracting

