It's not uncommon in configuration management. Ansible has ansible-vault, which encrypts secrets that you then commit. When you need to use them, you decrypt them and run your ansible commands.
It suffers the same problem as any other secrets management in git. If the decryption key leaks, even if your repo hasn't, you have to rotate every secret in case the repo is ever leaked in the future.
Even if Ansible has it, that doesn't mean people should put secrets in git repos. It just means a lot of Ansible users wanted it - and from my POV users don't want correct features, they want what they feel they need.
The git repo or config files should have references or secret names that get filled in on the machine where the scripts are running. Ideally secrets should never, ever be transmitted, even encrypted.
That people are lazy and don't want to do a proper setup is their problem.
Nothing that needs to be encrypted belongs in a git repo, because secrets and encrypted material are not meant to be shared or dispersed, whereas git's main purpose is to share and distribute code.
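To make the reference-and-resolve idea concrete, here's a minimal Python sketch (the `secret://` prefix and the use of environment variables are just my assumptions for illustration, not any particular tool's convention): the repo only ever contains names, and the values are filled in on the machine running the scripts.

```python
import os

# Committed to the repo: only references to secrets, never the values.
CONFIG = {
    "db_host": "db.internal.example",
    "db_password": "secret://DB_PASSWORD",   # a reference, resolved at runtime
    "api_token": "secret://API_TOKEN",
}

def resolve(value: str) -> str:
    """Resolve 'secret://NAME' references from the local environment.

    The environment is the stand-in here for whatever local secret store
    the machine actually uses (env vars, a file outside the repo, an agent).
    """
    prefix = "secret://"
    if not value.startswith(prefix):
        return value
    name = value[len(prefix):]
    try:
        return os.environ[name]
    except KeyError:
        raise RuntimeError(f"secret {name!r} not provided on this machine") from None

if __name__ == "__main__":
    settings = {k: resolve(v) for k, v in CONFIG.items()}
    # Never print the resolved secrets; just show which keys were filled in.
    print({k: ("***" if CONFIG[k].startswith("secret://") else v) for k, v in settings.items()})
```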
Modern AEDs have voice guidance telling you what to do, so you can follow the instructions as you go.
Also, you should call the emergency number in your region and (at least in Australia) they'll transfer you to someone who can coach you through using the defib and performing CPR until professional help arrives.
Don't let that stop anyone from getting their CPR training up to date though. The more experience you have, the better equipped you'll be if you need to use it.
I see AEDs at work. If I have a heart attack, I have no confidence in my team being able to use it. I've seen how they handle requirements and documentation in stories.
> Now we just need Oracle to decide not to want to maintain the Virtualbox kernel modules
If I remember correctly, you can already select KVM as the backend in VirtualBox. And on Windows you can use Hyper-V as your backend. Not sure about macOS land.
As far as I can tell, VirtualBox supports KVM and Hyper-V personalities for its paravirtualization, presumably to be able to reuse virtio/hyperv guest drivers. The host side still seems to require their custom kernel module.
Yep. That said, VirtualBox still has some advantages, mostly related not to the actual hypervisor but to the UI and other emulation details, so I use this patch to run VirtualBox on top of KVM on my machine.
From my PoV, it's mainly just missing support for more networking options. It's said that it isn't tested much on AMD, but I'm using it on multiple different AMD boxen with no issues.
On Apple Silicon, you are forced to use the hypervisor built into macOS. Both VirtualBox and VMware use that hypervisor and do not ship their original backends for macOS on Apple Silicon.
So this is great if you're just looking to deduplicate read-only files. Less so if you intend to write to them: write to one and they're both updated.
Anyway. Offline/lazy dedup (not in the ZFS dedup sense) is something that could be done in userspace, at the file level, on any filesystem that supports reflinks. When a tool like rdfind finds a duplicate, instead of replacing it with a hardlink, create a copy of the file with `copy_file_range(2)` and let the filesystem create a reflink to it. Now you've got the space savings, and they're two separate files, so if one is written to, the other remains the same.
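As a rough sketch of what that could look like (assuming Linux and Python 3.8+ for `os.copy_file_range`; file names, error handling and metadata preservation are all simplified): on a reflink-capable filesystem the kernel is free to satisfy the copy by sharing extents instead of moving bytes.

```python
import os

def replace_with_reflink_copy(original: str, duplicate: str) -> None:
    """Replace `duplicate` with a (potentially reflinked) copy of `original`.

    On filesystems that support reflinks, copy_file_range() may be satisfied
    by sharing extents, so the two paths end up as independent files that
    share blocks until one of them is written to.
    """
    tmp = duplicate + ".dedup-tmp"          # hypothetical temp name
    size = os.stat(original).st_size
    src = os.open(original, os.O_RDONLY)
    dst = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        remaining = size
        while remaining > 0:
            copied = os.copy_file_range(src, dst, remaining)
            if copied == 0:
                break
            remaining -= copied
    finally:
        os.close(src)
        os.close(dst)
    os.replace(tmp, duplicate)              # atomically swap the copy into place
```

Note that copy_file_range() only permits reflinking, it doesn't guarantee it, which is where the dedupe ioctl mentioned further down comes in.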
How would this work if I have snapshots? Wouldn't the version of the file I just replaced still be in use there? But maybe I also need to store the copy again if I make another snapshot, because the “original” file isn't part of the snapshot? So now I'm effectively storing more, not less?
AFAIK, yes. Blocks are reference counted, so if the duplicate file is in a snapshot then the blocks would be referenced by the snapshot and hence not be eligible for deallocation. Only once the reference count falls to zero would the block be freed.
This is par for the course with ZFS though. If you delete a non-duplicated file you don't get the space back until any snapshots referencing the file are deleted.
Yes, I know that snapshots incur a cost. But I'm wondering whether the act of deduplicating has now actually created an extra copy instead of saving one.
copy_file_range already works on zfs, but it doesn't guarantee anything interesting.
Basically all modern dedupe tools use FIDEDUPERANGE, which is meant to tell the FS which ranges should be sharing data, and let it take care of the rest.
(BTRFS, bcachefs, etc. support this ioctl, and ZFS will soon too.)
Unlike copy_file_range, it is meant for exactly this use case, and will tell you how many bytes were dedup'd, etc.
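For the curious, here's roughly what driving that ioctl looks like from Python. The constant and struct layout are taken from <linux/fs.h> (FIDEDUPERANGE is _IOWR(0x94, 54, struct file_dedupe_range)); treat this as an illustrative sketch rather than production code — real tools like duperemove do the same thing in C.

```python
import fcntl
import os
import struct

FIDEDUPERANGE = 0xC0189436       # _IOWR(0x94, 54, struct file_dedupe_range)
FILE_DEDUPE_RANGE_SAME = 0

def dedupe_range(src_path: str, dst_path: str, length: int, offset: int = 0) -> int:
    """Ask the filesystem to share `length` bytes between two files.

    Returns the number of bytes deduplicated. The kernel verifies the two
    ranges are identical; if they differ, a non-zero status comes back.
    """
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(dst_path, os.O_RDWR)
    try:
        # struct file_dedupe_range header + one file_dedupe_range_info entry
        header = struct.pack("=QQHHI", offset, length, 1, 0, 0)
        info = struct.pack("=qQQiI", dst, offset, 0, 0, 0)
        buf = bytearray(header + info)
        fcntl.ioctl(src, FIDEDUPERANGE, buf)
        _, _, bytes_deduped, status, _ = struct.unpack_from("=qQQiI", buf, len(header))
        if status != FILE_DEDUPE_RANGE_SAME:
            raise OSError(f"dedupe failed or ranges differ (status={status})")
        return bytes_deduped
    finally:
        os.close(src)
        os.close(dst)
```

The nice part is that the kernel checks the ranges are byte-for-byte identical before sharing them, so a buggy duplicate finder can't silently corrupt your data.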
I'm substantially more productive at home. Not for any single reason, but as a result of small things coming together. For example:
More sleep. I can set my alarm 15 minutes before I start work instead of an hour and a half before, so I'm more refreshed.
Commuting is mentally draining.
I get sick less. I spend less time packed like a sardine in a tin can, and more sleep probably helps too.
Fewer distractions. There's just me in my home office; at work there are 3 other people right next to me and a dozen within earshot.
I get home stuff done during work breaks. When I step away from my desk at work, I do so because I need a break from what I'm doing, not a break from everything. But there's nothing else to do at work, so I sit and do nothing. At home I:
- unload the dishwasher
- walk to the shops to buy items for dinner
- sit in the park
And I find doing those things more refreshing than sitting in the break room staring into space, or walking through the city amid the noise of cars everywhere.
So when I step back to my desk at home, I'm more refreshed and ready to get back into it.
This also means that when I finish work for the day in the office, it's another hour or so before I get home and then do chores. Whereas at home, I finish work and can go for a walk in the park because I've already done my chores.
So I'm happier and less stressed, which leads to less fatigue and burnout. So I'm ready to go again the next day.
Driving? Well, you have to be in at 8am, so that thunderstorm, blizzard, or morning twilight? Yup, you have to drive through it. And the same on the way back.
Catching a train? Is it on time? Will you get a seat, or be standing for 30+ minutes? Will your connection arrive? If it's cancelled, what's the alternate route home if the line is closed?
Of course, your millionaire company owner has an apartment a short walk from the centre-of-the-city office.
I was young and only had a desktop, so all my data was there.
So I purchased a 300GB external USB drive to use for periodic backups. It was all manually copying files across with no real schedule, but it was fine for the time and life was good.
Over time my data grew and the 300GB drive wasn't large enough to store it all. For a while some of it wasn't backed up (I was young, with much less disposable income).
Eventually I purchased a 500GB drive.
But what I didn't know is my desktop drive was dying. Bits were flipping, a lot of them.
So when I did my first backup with the new drive, I copied all my data off my desktop along with the corruption.
It was months before I realised a huge amount of my files were corrupted. By that point I'd wiped the old backup drive to give to my Mum to do her own backups. My data was long gone.
Once I discovered ZFS I jumped on it. It was exactly the thing that would have prevented this, because I could have detected the corruption when I purchased the new backup drive and did the initial backup to it.
(I made up the drive sizes because I can't remember, but the ratios will be about right).
There's something disturbing about the idea of silent data loss; it totally undermines the peace of mind of having backups. ZFS is good, but you can also just run rsync periodically with the --checksum and --dry-run flags and check the output for diffs.
It happens all the time. Have a plan, perform fire drills. It's a lot of time and money, but there's no feeling quite like unfucking yourself by getting your lost, fragile data back.
The challenge with silent data loss is your backups will eventually not have the data either - it will just be gone, silently.
After having that happen a few times (pre-ZFS), I started running periodic find | md5sum > log.txt type jobs and keeping archives.
It's caught more than a few problems over the years, and it allows manual double-checking even when using things like ZFS. In particular, some tools/settings just aren't sane to use to copy large data sets, and I only discovered that when… some of it didn't make it to its destination.
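Here's a small Python version of that kind of job, in case it's useful (paths and the manifest filename are placeholders; I've used SHA-256 instead of md5, but the idea is the same: hash everything, keep the manifest, diff it against the previous run).

```python
import hashlib
import json
import sys
from pathlib import Path

def build_manifest(root: str) -> dict:
    """Hash every regular file under `root` (SHA-256 here instead of md5)."""
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and not path.is_symlink():
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            manifest[str(path.relative_to(root))] = h.hexdigest()
    return manifest

if __name__ == "__main__":
    # Usage: python checksums.py /data manifest.json
    root, manifest_file = sys.argv[1], sys.argv[2]
    new = build_manifest(root)
    old_path = Path(manifest_file)
    if old_path.exists():
        old = json.loads(old_path.read_text())
        # Files whose content changed without being deleted or renamed deserve a look.
        for name in sorted(set(old) & set(new)):
            if old[name] != new[name]:
                print(f"CHANGED: {name}")
    old_path.write_text(json.dumps(new, indent=2, sort_keys=True))
```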
The opposite may happen. Batteries follow a learning curve where the price drops by a roughly fixed percentage with every doubling of cumulative production volume. High demand means a larger industry, which means more scale benefits and cheaper prices.
Solar panels are a useful comparison. Demand has been growing exponentially, but this has actually pushed prices down, because that demand drives the learning curve.
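To put rough numbers on the learning-curve claim: under Wright's law, cost falls by a fixed fraction with each doubling of cumulative production. The ~20% learning rate and starting price below are illustrative assumptions (figures in that ballpark are commonly quoted for solar and lithium-ion), not numbers from this thread.

```python
# Wright's law: cost falls by a fixed fraction each time cumulative production doubles.
# The 20% learning rate and $1000/kWh starting point are assumptions for illustration.
learning_rate = 0.20
cost = 1000.0   # $/kWh at some reference cumulative volume (made up)

for doubling in range(1, 11):
    cost *= (1 - learning_rate)
    print(f"after {doubling:2d} doublings of cumulative volume: ${cost:7.2f}/kWh")

# Ten doublings (1024x the cumulative volume) at a 20% learning rate give
# roughly an 89% cost decline - the same order of magnitude as the drops
# observed for solar PV and lithium-ion batteries over recent decades.
```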
Or maybe the prices are being pushed down by the Chinese government? I remember reading some news about how European governments are upset over this.
In general, I wonder how much we really know when we talk about cause and effect in economics. We see prices going down. We attribute it to something. But without experiments, how can we be sure?
We see it in other goods, like lightbulbs. Economies of scale are well understood. The learning curve for solar/batteries has been stable for almost 50 years.
A small part of the price declines is due to subsidies from the Chinese government but I don't believe it's the best explanation for the 97% price drop.
But if you can answer the question "how much would batteries go up in price next year if China removed subsidies", I'd genuinely like to know the answer, because it is important.
One risk is a war between the US and China, regardless of the subsidy question. So we should get good at domestic alternatives like compressed air and pumped hydro, and work towards our own manufacturing of batteries.