Hacker News new | past | comments | ask | show | jobs | submit | belthesar's comments login

Rather than avoid tools that work well, I would encourage you to adopt solutions that solve your use cases. For instance, if you aren't getting notifications that a backup is running, completing or failing, then all you've set up is a backup job, and not built a BDR process. If you're looking for a tool to solve your entire BDR plan, then you're looking at a commercial solution that bakes in automating restore testing, so on and so forth.

Not considering all the aspects of a BDR process is what leads to this problem. Not the tool.


That's not a benefit to me if I can't control how someone gets access to my vehicle, dealership or not. If I want a dealership to be able to assist me, I should have to authorize that dealership, and have the power to revoke its access at any time. Same for the car manufacturer. Ideally it would involve some combination of factors, including a cryptographic secret in the car and a secret I control. Transfer of ownership should require both the car's secret and my secret to hand access to those features to the new owner.

If this sounds like an asinine level of requirements just for me to feel okay with this feature set, consider that I'd require the same level of controls for any incredibly expensive, potentially dangerous liability in my care that has some sort of remote backdoor access via a cloud. All of this "value add" ends up being an expense and a liability to me at the end of the day.


I'm a huge fan of SOPS, especially since it can integrate with numerous crypto providers, from `age` for a fully offline crypto source to HashiCorp Vault and the big cloud secret/crypto providers.

I wanted a tool that let me store secrets safely without tossing them in plain-text env files, so I wrote one called `sops-run`. It manages YAML manifests that store your environment variables, keyed by the name of the binary you're running, and applies those variables only to the context of the app being launched. I never did tidy it up into an installable Python package, so it can't be easily installed with pipx yet (I keep putting off finishing all of that; pull requests welcome ;-) ), but I like it better than simply using direnv or equivalents, since it doesn't load the environment variables into the shell context — though it could probably be combined with direnv to hot-load shell aliases for the commands you want to run.

https://github.com/belthesar/sops-run
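For anyone curious about the general shape of the approach, here's a minimal sketch (the manifest path and function names are my own hypothetical choices, not the actual implementation — see the repo for that): decrypt a per-binary manifest with `sops --decrypt`, merge it into a copy of the environment, and hand the result only to the child process, so nothing leaks into the parent shell.

```python
import json
import os
import subprocess


def merged_env(base: dict, secrets: dict) -> dict:
    # Secrets win on conflict; only the child process ever sees them.
    return {**base, **secrets}


def run_with_secrets(binary: str, *args: str) -> int:
    # Hypothetical manifest layout: one sops-encrypted file per binary name.
    manifest = os.path.expanduser(f"~/.config/sops-run/{binary}.json")
    decrypted = subprocess.run(
        ["sops", "--decrypt", manifest],
        capture_output=True, text=True, check=True,
    ).stdout
    env = merged_env(dict(os.environ), json.loads(decrypted))
    return subprocess.run([binary, *args], env=env).returncode
```

The key design point is that the decrypted values live only in the spawned process's environment, unlike direnv-style tools that export them into your shell.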


"Better" is definitely the wrong word, but the gist is sound with the right framing. A better tool often lets you work more safely, and that's what was being conveyed. Applying that framing to one of your examples: a faster car doesn't make you a better driver, but a car with more safety features makes your driving experience safer than one with fewer.


Interesting! This appears to be an alternative to the BeeWare Project and its suite of tools. I'd be interested in looking at them head-to-head. https://beeware.org/project/overview/


I know a lot of folks are talking about what they do, or what dot file managers they use, but there's something to be said for building a workflow that works for you. Pretty clever setup!


> what they do, or what dot file managers they use, but there's something to be said for building a workflow that works for you

Like consuming time. If there's a tool like chezmoi that does what you need, you should use it, so that you don't have to spend time maintaining something bespoke; that time could be better spent on other things.


Stupid argument. With this logic, nothing new is ever explored, nothing is learned, no insights are gained or shared. This logic even questions the need for chezmoi: stow existed for decades before chezmoi. Do you think chezmoi sprang into existence with all its features? There's a high chance it was someone's toy project because they didn't want to use stow.


stow's approach of using symlinks is extremely limiting. For example, it means you can't have templates (for small machine-to-machine differences and secrets) or encrypted files. chezmoi does have a symlink mode, like stow, but using symlinks has multiple downsides: https://www.chezmoi.io/user-guide/frequently-asked-questions...

chezmoi was actually inspired by Puppet, not stow.

Source: me, I'm the author of chezmoi.


I'll be honest, I think you've got an uphill battle convincing the many of us who are concerned about surprise billing that this is exactly the thing we actually want, vs. what we feel we want. This position parrots what Vercel's leadership said when someone got a massive surprise bill and the story made the rounds both here and elsewhere.

To be super honest, you might be right too. You might go down a huge engineering effort to build this only for 5% of your customers to ever engage with it. I think the real question is what percentage of your customers will feel better knowing that they have the choice to set those limits, and how much that comfort will actually improve their trust in Fly and cause them to choose it over another cloud provider.

It may end up being a lot like a feature we had in a platform I used to support. It had a just-in-time analytics pipeline that at one point required tens of thousands of dollars in compute, storage, and network hardware alone to function. Based on our analytics, it was barely used compared to the rest of our app, which made the mountain of resources and fairly frequent support attention it needed feel silly in comparison, so I advocated to sunset it. Product assured me that, regardless of how silly it might be to continue supporting this feature, it was a dealmaker, and losing it would be a dealbreaker.

So yeah, y'all might be right in that the majority of your customers don't actually want it. But maybe what they do need is to know that it's there, ready for them if they ever need to engage with it.


> You might go down a huge engineering effort to build this

This is an overlooked issue: billing caps are hard to implement and will likely incur losses for the cloud company that implements them.

Take an object storage service as an example. Imagine Company X has a hard cap at US$1000, but some bug makes their software upload millions of files to a bucket and rack up their bill. Since objects are charged in GB-months, they won't reach the cap until some time later that month. Then, when they do, what does termination of service mean? Does the cloud provider destroy every last resource associated with the account the second the hard cap is reached? If they don't, and they still have to store those files somewhere in their infra, then they start taking a loss while Company X just says "oops, sorry".

That's what tptacek is talking about: you want to NOT destroy the customers' resources because they can quickly figure out that something went wrong and then adjust while still maintaining service. But the longer you keep the resources the more you're paying out of pocket as a cloud provider. If you can't bill the overages to the customer, which a hard cap would imply, then you're at a loss. Reclaiming every resource associated to an account the moment a cap is reached is an extreme measure no one wants.

A hard cap then becomes only a "soft" cap, a mere suggestion, and cloud providers would then say "you hit the cap, but we had to keep your resources on the books for 12 hours, so here are the supplemental overage charges". Which would probably lead to just as many charge disputes as we have today.
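To make the GB-month problem concrete, here's a toy calculation (the rate and figures are illustrative assumptions, not any provider's actual pricing):

```python
PRICE_PER_GB_MONTH = 0.023  # assumed S3-standard-like rate, illustrative only


def accrued_cost(gb_stored: float, days_held: float) -> float:
    # Storage bills accrue continuously, pro-rated over a 30-day month.
    return gb_stored * PRICE_PER_GB_MONTH * (days_held / 30)


def days_until_cap(gb_stored: float, cap_usd: float) -> float:
    # How long a runaway bucket can sit before a hard cap trips.
    return cap_usd / (gb_stored * PRICE_PER_GB_MONTH) * 30


# A buggy job that uploads 500 TB (500,000 GB) trips a $1000 cap in
# under 3 days -- and the provider is still holding all that data the
# moment it trips, accruing cost it can no longer bill for.
```

The point of the sketch: the cap triggers on *accrued* cost, but the underlying resource keeps costing money after the trigger, which is exactly the provider's dilemma described above.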


$1000/mo in S3 Glacier Deep Archive storage buys you a petabyte (a million gigabytes). It's hard to imagine such a small customer uploading a petabyte without noticing, and part of what happens when you hit the cap could be moving things from normal object storage to Glacier.
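A quick back-of-envelope check of that figure, assuming Glacier Deep Archive's published rate of roughly $0.00099 per GB-month (rates change, so treat this as illustrative):

```python
DEEP_ARCHIVE_PER_GB_MONTH = 0.00099  # assumed USD per GB-month

# How many GB does a $1000/month budget cover at that rate?
gb_per_1000_usd = 1000 / DEEP_ARCHIVE_PER_GB_MONTH
# roughly 1,010,101 GB -- just over a petabyte for $1000/month
```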


If you turn off servers and shut off bandwidth you get rid of the vast majority of expenses.

Storage fees are a lot less risk, but if you want to cap those then you should cap the number of gigabytes directly. That prevents the overage issues you describe.


I've been meaning to give this a shot myself. I do like netboot.xyz's maintained catalog for convenience, but when I self-hosted it (to get the caching-proxy benefits), I felt that cache cleanup was a little clunky (basically, go into the cache dir and delete the ISO/image). For the few times I need to install an OS on something, it'd be totally fine to grab the image and stage it on the PXE server myself.


Netboot.xyz allows you to self-host an instance of it, which can cache ISOs. I was doing it for a time, and thought it neat, but not worth the hassle to keep running.


Ironically, this is something I think Pandora solved quite well with its recommendation engine. By creating a station around a particular vibe, even if five playlists all started with the same seed song, weighting other songs up and down on each station would curate a different listening experience by discovering how those songs are similar. Where Pandora was limited (at least the last time I used the service) was that the pre-seeding process is a bit arduous and opaque. I'm not sure how you make that easy to interact with, as going a layer beneath to the "why" of a recommendation and letting folks influence the graph at that layer sounds like a daunting UX challenge.

