To be honest, with the Azure DevOps platform, web app instances, containers and deployment slots, it's almost bulletproof once you spend the time to get it initially configured. Same with AWS, I imagine.
How much time did it take you to initially configure it? From your comment and others, I'm getting the feeling that if you're on just one cloud and don't have a complex architecture, then it's pretty easy to do all these tasks, right?
And even so, how much time would developers be spending on these tasks weekly?
Deciding to sell this on the darknet is a life-changing decision, white hat to black hat overnight, and I imagine not really something most would contemplate. Payment would be in BTC, probably from an already compromised address, so there are loads of factors. Probably an easy and quick 2 BTC, though.
I feel like for a pro-level security person, 2 BTC is not worth the stress of knowing for the next five-odd years that your whole career can be taken down at any time; security people absolutely cannot get jobs if they have a criminal record.
This is an easy and obvious exploit, so an attacker would need to extract the data from all sources ASAP. Honestly, that's a high risk of getting caught and ending up in jail for a measly 2 BTC. Not worth it for anyone in the US or even Europe.
> Not worth it for anyone in the US or even Europe.
Lots of crimes are not "worth it", and yet criminals commit them anyway, because criminals (like most humans) are not perfectly rational.
There are routinely reports of people trying to rob a gas station with a loaded gun - a $200 haul if everything goes perfectly. It doesn't, and now they're facing 10 years in jail...
Yeah, Sam Bankman-Fried probably had the skills, connections and trust to coast through life. But instead he chose to commit the most egregious financial fraud.
I would strongly argue that almost all crime is irrational - consider game-theory strategies like tit-for-tat and so on.
You mean buying a handgun to give as a gift to your dad, vs buying a handgun for someone who wouldn't pass a background check and might use it for bad things (straw purchase).
The newer generation of smart meters operates on its own network, so it's critical, separate infrastructure and this won't happen again in the future. Also, you have to be pretty set on not getting a smart meter, as it excludes you from a lot of competitive energy deals, especially ones with a cheaper night rate.
Without knowing the problem or context, I'll take this on the basis of a "coding problem" where I'm tasked with creating a piece of software/code/web app to meet the needs. I take things a bit differently to most: I find the best way (for me) is to start tackling the problem with the tools I have, and then when I get stuck, see what I can do to fix it. I don't believe any problem I face has no solution; it's just a question of how long it takes and how much it costs.
The only certainty in tech is that there will always be some tool or software package that tackles an already-solved problem with a new approach; only time will tell if it will be successful. Reflecting on the past can shed light on why some things remain unchanged decades later, and why others have fallen behind. But I think the real answer to your question is that, whilst having a coffee/tea/break or whatever, it can be nice to read an older article, or a new article on old software, to remind yourself of the "old days" or see how you would solve a past problem today. It's a bit like classic cars: some people like to buy the car they had, or couldn't afford, when they were a teenager. Equally, now as a tech worker (hopefully earning good money) you can indulge and purchase a SNES/C64/Apple IIGS or whatever.
That's a great read; many people get burned out. I've spoken to a few people who left the industry to do something else; most of them came back, a few never did. The grass isn't always greener. Some people can work through a burnout and come out the other side; others simply need a few months off to reignite the flame. It can just be a case of working in the wrong environment - maybe switch to a smaller team? A few friends of mine ditched the startup hamster wheel and now work for an agency producing "standard" websites/CMSes, and they enjoy their lives so much more.
I have been using chroot since around 2004, when I found it during my first Gentoo install. What Docker did for me was make it so much easier to deploy various environments, given how much exposure it has. It took a good amount of effort to make chroot-style isolation both "cool" and widely available to both sysadmins and developers. Sadly, what it has also done is make users less aware of the core Linux/Unix principles that allow us to have cool software that just does it all for us.
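For illustration, a minimal sketch of that underlying primitive (the /srv/jail path is hypothetical, the filesystem there has to be pre-populated, and chroot requires root):

    import os

    # Minimal sketch of the chroot primitive that container tooling builds on.
    # /srv/jail is a hypothetical, pre-populated root filesystem
    # (e.g. built with debootstrap); this must run as root.
    jail_path = "/srv/jail"

    os.chroot(jail_path)  # confine this process's filesystem view to the jail
    os.chdir("/")         # step inside the new root

    # From here on, the process can only see files under /srv/jail.
    print(os.listdir("/"))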
As someone who has used FreeBSD exclusively on bare-metal corporate production servers for the last 10 years, I still wouldn't use it on desktops/workstations. It's built as a server OS, and in my opinion it does that very well. I have tried a few times over the years to use it as a desktop OS, but I spent more time configuring my machine than I did actually using it. I can install Ubuntu and 30 minutes later be using it for work with no issues.
If you run your own servers, especially ones where you physically maintain the hardware, FreeBSD is a very reliable choice, and its upgrade paths are fantastic. I now run some services in Azure and use Docker, but all my "on-prem" stuff is still FreeBSD-powered.
OpenBSD is often considered the desktop BSD. You install it and it either works or it doesn't. The only real hang-up tends to be configuring the sound if you want to do something non-default, but once you get it configured, that configuration is solid and lasts forever.
Yeah, I have tried OpenBSD a few times, but sadly I just don't have enough time to invest in my OS to get the most out of it. I use my machines until they fail, then replace them with a new one/part; a fresh install of Ubuntu and they are ready for work in 30 minutes. I used NixOS for a short period for the same reason, but even that became more of a headache than it was worth.
So for some reason I am only just discovering Borg (I've been an rsync.net customer since 2014). Currently all my data is GPG-encrypted on the client side: database dumps, files and server logs. I have a custom script that encrypts all the files, then rsyncs them to rsync.net; if I need them, I rsync them back to a local machine and decrypt with GPG. A drawn-out process that seems quite antiquated, especially as I need to access these backups maybe once per year. From an initial (quick) reading, it would seem Borg basically makes this whole process easier and arguably even more secure?
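For comparison, a rough sketch of what that workflow collapses to with Borg (the repository URL, paths and archive names below are hypothetical; Borg handles the client-side encryption and deduplication itself):

    import subprocess

    # Hypothetical Borg repository on rsync.net; adjust user/host/paths.
    REPO = "ssh://user@user.rsync.net/./backups"

    # One-time setup: create the repository with client-side encryption.
    subprocess.run(["borg", "init", "--encryption=repokey", REPO], check=True)

    # Each backup run: encrypt, deduplicate and upload in a single step.
    # Borg expands {now} to a timestamp for the archive name.
    subprocess.run(
        ["borg", "create", "--stats", f"{REPO}::docs-{{now}}", "/home/user/docs"],
        check=True,
    )

    # Restore: extract a named archive, already decrypted, on the local machine.
    subprocess.run(["borg", "extract", f"{REPO}::docs-2024-01-01T12:00:00"], check=True)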
I'm sorry, but if you cannot see the benefits of running programs inside a container, then you have been reading the wrong articles. I have been using FreeBSD jails for years, and now, on Linux deployments, Docker as well.
Let's say you have a server that runs a mail server, a database server, a web server (proxy) and an application server.
If they all run without containers and one service gets compromised and a root exploit is found, that's it: game over.
If you have a service that starts eating up memory, then with proper configuration it can't overload the host. Basically, each jail/container can only see itself, and exploits cannot affect the host system or other jails/containers (when configured correctly).
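As a sketch of the resource-limiting side on Docker (using the docker-py SDK; the image name and limits are just examples):

    import docker

    # Talk to the local Docker daemon.
    client = docker.from_env()

    # Run a hypothetical service with a hard memory cap and a CPU budget:
    # if the process inside runs away, it is killed or throttled rather
    # than being allowed to starve the host or the other containers.
    container = client.containers.run(
        "myorg/appserver:latest",   # hypothetical image
        detach=True,
        mem_limit="512m",           # hard memory cap
        nano_cpus=1_000_000_000,    # roughly one CPU's worth of time
    )
    print(container.id)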
It also allows for easy expansion: when one jailed/containerised service gets too large for the server, you can move it to another server easily and quickly.
It also allows for speedy deployment. With Docker you can bundle everything on your laptop, create an image, then ship it straight to an external host like EC2 or Google Cloud (for example); with the addition of pre-built containers for Django/Rails/Postgres/MySQL etc., it creates a ready, working environment for developers who might not be too hot on configuring systems. The "shipping" ability of Docker is a by-product of the container, which is another added benefit. There are loads more features than I have stated here; this is just a very brief summary.
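The build-and-ship step itself is similarly small; a sketch with the same docker-py SDK (the image tag and registry are hypothetical):

    import docker

    client = docker.from_env()

    # Build an image from the Dockerfile in the current directory
    # ("myorg/appserver" is a hypothetical repository/tag).
    image, build_logs = client.images.build(path=".", tag="myorg/appserver:1.0")

    # Push it to a registry; the remote host (EC2, Google Cloud, ...) then
    # pulls and runs the exact same image that was tested on the laptop.
    for line in client.images.push("myorg/appserver", tag="1.0", stream=True, decode=True):
        print(line)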
> If they all run without containers and one service gets compromised and a root exploit is found, that's it: game over.
To be fair, if a kernel-level root exploit is found, it's probably also game over for containers. It's possible to have root exploits that cannot escape containers due to UID virtualization or whatever, but typically(?) root exploits are based on being able to mess with kernel memory, in which case escaping a container should also be possible.
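As a quick illustration of the UID-virtualization point, a minimal sketch (Linux-only) that inspects how a process's user namespace maps UIDs; inside a user-namespaced container, container root maps to an unprivileged host UID:

    # Inspect the user-namespace UID mapping of the current process.
    # Format per line: <uid inside ns> <uid outside ns> <range length>.
    # Outside any user namespace this is the identity map "0 0 4294967295";
    # in a user-namespaced container, uid 0 maps to an unprivileged host uid.
    with open("/proc/self/uid_map") as f:
        for line in f:
            inside, outside, count = line.split()
            print(f"container uid {inside} -> host uid {outside} (range {count})")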
> if one service gets compromised and a root exploit is found, that's it: game over.
For root exploits, isn't Docker toast as well? I haven't followed Docker in much detail, but does Docker actually promise that commands run as root will be contained?
There is a certain level of isolation for root even inside containers, but with a kernel privilege-escalation exploit you would most probably achieve "real" root even from inside a container.