The problem is real and it's not the only one in this space.
I was surprised to see so many words and so little substance. It feels like Troy didn't really try with this one.
1. Ask for not one but two additional hires, right now. It will take a long time before the request is approved, HR does their thing, and you (or someone else) actually hire people to help you.
2. Document, as fast as you can, all of the assets in your environment. Don't waste time looking for dedicated inventory software; plain text or a spreadsheet will do for a start (a minimal sketch follows this list).
3. Identify the most important assets (servers, network devices, etc.) and work out your priorities for each: for example, availability for your frontend servers; confidentiality and integrity for the server storing PII. Based on this, you'll understand your biggest risks. Would it be a business disaster if your website were down for a few hours? Likely not. What if the PII is compromised, you get huge fines, and you end up in the news? More likely.
4. Find out who has access to the most important assets, and cut off anyone who doesn't really need it.
5. Establish some sort of monitoring for the most important assets.
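For points 2 and 3, a plain CSV really is enough to start with. Here's a minimal Python sketch of what that first inventory might look like; the file name, the example assets, and the "priority" column (which of confidentiality, integrity, or availability matters most) are illustrative assumptions, not a standard:

```python
# Minimal asset inventory sketch: a plain CSV is enough to start with.
# The file name (assets.csv), columns, and example rows are assumptions.
import csv

ASSETS = [
    # name, type, owner, top priority (availability / integrity / confidentiality), notes
    {"name": "www-frontend-01", "type": "server", "owner": "ops", "priority": "availability", "notes": "public website"},
    {"name": "crm-db-01", "type": "server", "owner": "sales IT", "priority": "confidentiality", "notes": "stores customer PII"},
    {"name": "core-switch-01", "type": "network", "owner": "ops", "priority": "availability", "notes": "single point of failure"},
]

with open("assets.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "type", "owner", "priority", "notes"])
    writer.writeheader()
    writer.writerows(ASSETS)
```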
Also, backups are kind of thankless - no one ever puts backup maintenance on your schedule, but if a machine goes down, everyone expects you to have a backup from three weeks ago and to be able to retrieve all the data from it. You also have to poll people about what needs to be backed up. You need to do test recoveries. Some backups should be offsite, and the policy around that should be communicated to managers.
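To make "do test recoveries" concrete, here's a minimal sketch of a scripted test restore. It assumes a tar.gz-style backup and made-up paths, so treat it as an illustration rather than a recipe:

```python
# Minimal test-restore sketch: extract last night's backup into a scratch
# directory and spot-check that expected files came back intact.
# The backup path, the tar-based format, and the file list are assumptions.
import hashlib
import tarfile
import tempfile
from pathlib import Path

BACKUP = Path("/backups/fileserver-latest.tar.gz")
EXPECTED = ["etc/fstab", "srv/data/customers.db"]  # files that must exist in a good backup

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

with tempfile.TemporaryDirectory() as scratch:
    with tarfile.open(BACKUP) as tar:
        tar.extractall(scratch)          # test restore into a scratch area
    for rel in EXPECTED:
        restored = Path(scratch) / rel
        if not restored.exists():
            raise SystemExit(f"RESTORE FAILED: {rel} missing from {BACKUP}")
        print(rel, sha256(restored))     # record checksums so drift is visible over time

print("test restore OK")
```

Run something like this on a schedule and the "backup from three weeks ago" question answers itself, because you'll know the day a restore stops working.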
I agree with the rest of this, but I'm not sure about 4. If you're new to a system, you don't have a good sense of who "really needs" access, and in particular you don't have a good sense of all the terrible, awful, dangerous things that are nonetheless delivering business value.
Maybe person X built system A, and is now working on system B, but remembers a lot of details - keeping their access to system A can help you recover in an emergency. Maybe someone wrote a monitoring job (or worse, a deployment job) that runs as themselves. Maybe someone has read-only access to a system and is using it to ask occasional good questions to the team that actually runs the system. Maybe team C has a formal API but it doesn't work well so person Y informally got access to C's database and it's making system D able to run at all, and it would take a few months of effort to fix the API that's supposed to be there between C and D.
And, more frustratingly but no less impactfully - maybe there's a senior person who likes still having access to system E, and making them mad will cause political problems for you and impose burdens when you try to make any other changes, which gets in the way of your ability to solve all the other problems you need to solve.
Your priority in the short term is to keep the business running, not to improve things. Insider threats are real, but they are much rarer than all the other reasons your business could get in trouble. If you don't have a good understanding of your tech debt and why the tech debt exists and how quickly it can be paid off, focus on that first.
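In that spirit, a first pass on point 4 can be a report rather than a purge: list who currently has access to the sensitive things and review it with the people involved before revoking anything. A minimal sketch for local Unix groups follows; the group names are assumptions, and a real environment will also have LDAP/AD, cloud IAM, database grants, and so on:

```python
# Minimal "who has access" report: list members of privileged local groups
# so access can be reviewed with the team before anything is revoked.
# The group names are assumptions; swap in whatever matters in your environment.
import grp

PRIVILEGED_GROUPS = ["sudo", "wheel", "docker", "backup-admins"]

for name in PRIVILEGED_GROUPS:
    try:
        members = grp.getgrnam(name).gr_mem  # note: gr_mem lists secondary members only
    except KeyError:
        continue  # group doesn't exist on this host
    print(f"{name}: {', '.join(members) or '(no members)'}")
```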
A possible corollary to 2: Don't forget to document incidents as you go. You'll learn a lot in these situations and you may find resources that are handy / work for you along the way.
Keep a text file, or OneNote or whatever, open while working an incident. Don't overthink it, don't over-complicate it.
I use InkDrop for these kinds of things. It’s dead simple, and syncs everywhere. I don’t care about formatting and love that I can use simple markdown. It’s perfect to jot some notes down on a phone and open them later on a computer — more importantly, it’s not tied to a certain OS and it’s cheap.
> Establish some sort of monitoring for the most important assets.
You need this to know if your web sites, servers and services are running, and to be emailed or contacted when not. I am used to Nagios. I understand there is a Nagios fork, Icinga, which I know little about. Some people use Graphite, which I also know little about.
You can also have it check for disks filling up, high CPU load, inodes running out, heavy I/O, etc.
Also, you can have it send alerts by email and so on, or pay a company like Amelia to monitor a dashboard and page/call people at night if a server is down, or even to attempt some simple scripted remediation.
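If a proper tool isn't in place yet, even a cron-driven script beats nothing. Here's a minimal sketch along those lines, checking one URL and root-disk usage and emailing when something is wrong; the URL, threshold, and SMTP details are placeholders, and this is a stopgap, not a substitute for Nagios/Icinga:

```python
# Minimal stand-in until a real monitoring tool is in place:
# check a URL and local disk usage, and email an alert if something is wrong.
# The URL, threshold, and SMTP settings are placeholders, not recommendations.
import shutil
import smtplib
import urllib.request
from email.message import EmailMessage

URL = "https://www.example.com/healthz"
DISK_ALERT_PCT = 90
SMTP_HOST = "mail.example.com"
ALERT_TO = "oncall@example.com"

problems = []

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        if resp.status != 200:
            problems.append(f"{URL} returned HTTP {resp.status}")
except Exception as exc:
    problems.append(f"{URL} unreachable: {exc}")

usage = shutil.disk_usage("/")
used_pct = usage.used / usage.total * 100
if used_pct > DISK_ALERT_PCT:
    problems.append(f"/ is {used_pct:.0f}% full")

if problems:
    msg = EmailMessage()
    msg["Subject"] = "ALERT: " + "; ".join(problems)
    msg["From"] = "monitor@example.com"
    msg["To"] = ALERT_TO
    msg.set_content("\n".join(problems))
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)
```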
6. Get certified. Sysadmin positions are often in the hot seat, the first to take the beating when the shit hits the fan (if that's what happened to your predecessor, it will happen to you), and a certification often helps you land a new job more than "assumed past experience" will when it's time to pass the HR sniff test.
7. Eventually, in 5-7 years, become a consultant, either at a consultancy or freelance.
As a financial auditor I've no idea how we would pick this up. It wouldn't hit the books of the company at all. We would research board members and check their other directorships but that only covers executives and their formal relations. It would be more likely to be picked up by the auditors of the company offering the bribe (what's this payment for?).
Good infosec teams keep an inventory of all the software used in the org. If they see that the org already pays a vendor for software that does X, a question should be raised: why do we need another one that does the same thing?
Also, each new vendor or software provider needs to get a "security clearance" after the infosec team checks their security posture.
These kinds of practices would probably surface shady intent like this.
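For what it's worth, the "we already pay for software that does X" check is easy to automate once that inventory exists. A minimal sketch, assuming a hypothetical software.csv with vendor/product/category columns:

```python
# Minimal sketch of the "why do we pay twice for X?" check described above:
# group a software inventory by category and flag categories with several vendors.
# The CSV layout (software.csv with vendor/product/category columns) is an assumption.
import csv
from collections import defaultdict

vendors_by_category = defaultdict(set)

with open("software.csv", newline="") as f:
    for row in csv.DictReader(f):
        vendors_by_category[row["category"]].add(row["vendor"])

for category, vendors in sorted(vendors_by_category.items()):
    if len(vendors) > 1:
        print(f"{category}: multiple vendors -> {', '.join(sorted(vendors))}")
```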
LOL. I stopped reading right there.