
>SOC2 is the best-known infosec certification

LOL. I stopped reading right there.


But it is? Which one is the best-known, in your opinion?


The problem is real, and it's not the only one in this space. I was surprised to see so many words and so little substance. It feels like Troy didn't really try with this one.


I use Hugo. Lots of free themes here: https://themes.gohugo.io/

You can write your content in Markdown, keep everything in a repo, and push only the generated static files to whatever hosting you prefer.


Here's what to do in the first few weeks:

1. Request not one but two people for hire, right now. It will take a long time for the request to be approved, for HR to do their thing, and for you (or someone else) to hire people to help you.

2. Document, as fast as you can, all of the assets in your environment. Don't waste time looking for software; plain text or a spreadsheet will do for a start.

3. Identify the most important assets (servers, network devices, etc.) and try to understand your priorities for each. For example, availability for your frontend servers; confidentiality and integrity for the server storing PII. Based on this, you'll understand what your biggest risks are. Would it be a business disaster if your website is down for a few hours? Likely not. What if the PII is compromised and you get huge fines and end up in the news? More likely.

4. Find out who has access to the most important assets, and cut off anyone who doesn't really need it.

5. Establish some sort of monitoring for the most important assets.


6. For any critical asset, verify you can restore it and get it working offsite with zero access to the current network.


7. Have a backup strategy.

7.5 Test your backup strategy every now and then (nothing is more painful than data loss and dysfunctional backups). A minimal restore-test sketch follows at the end of this list.

8. "All users lie, there is no exception". Sad but true, get used to it.

9. Automate your infrastructure. If something goes down, just recreate it with Chef/Ansible/Terraform/etc.

[edit: added no.9]
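
For 7.5, a minimal restore-test sketch in Python (the backup path and the expected file list are hypothetical; adapt them to your environment):

    # Restore the latest backup into a scratch dir and verify expected files.
    import tarfile
    import tempfile
    from pathlib import Path

    BACKUP = Path("/backups/latest.tar.gz")           # hypothetical location
    EXPECTED = ["etc/passwd", "var/lib/db/dump.sql"]   # files you expect inside

    with tempfile.TemporaryDirectory() as tmp:
        with tarfile.open(BACKUP) as tar:
            tar.extractall(tmp)
        missing = [f for f in EXPECTED if not (Path(tmp) / f).exists()]
        if missing:
            raise SystemExit(f"Restore test FAILED, missing: {missing}")
    print("Restore test passed")

Run it from cron and alert on a non-zero exit; a backup you've never restored is only a hope, not a backup.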


10. Rotate all the passwords immediately, since the other guy has left the organization, and put a password policy in place. (A small password-generation sketch follows after this list.)

11. Understand the reason the other guy was fired and make sure you are not the one left holding the bag.

12. Have an issue tracking system in place; you will thank yourself later.

13. Have a change management policy in place.

14. Before you make any change, fully understand the consequences and have a tested plan to get back to a known good state.
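
For 10, a quick Python sketch for generating replacement passwords during the rotation (the length, alphabet, and account names are illustrative assumptions, not a policy recommendation):

    # Generate one long random password per account being rotated.
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation
    LENGTH = 24  # example policy: long random passwords, kept in a manager

    def new_password(length=LENGTH):
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    for account in ["root@db01", "admin@fw01"]:  # hypothetical accounts
        print(account, new_password())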


I would add 15: start keeping a physical log book to document changes.

Depending on what you have to look after, you might want more than one book.


Also, backups are kind of thankless - no one ever puts backup maintenance on your schedule, but if a machine goes down, everyone expects you to have a backup from three weeks ago and that they can retrieve all the data from it. You also have to poll people about what needs to be backed up. You need to do test recoveries. Some backups should be offsite, and the policy around that should be communicated to managers.


I agree with the rest of this, but I'm not sure about 4. If you're new to a system, you don't have a good sense of who "really needs" access, and in particular you don't have a good sense of all the terrible, awful, dangerous things that are nonetheless delivering business value.

Maybe person X built system A, and is now working on system B, but remembers a lot of details - keeping their access to system A can help you recover in an emergency. Maybe someone wrote a monitoring job (or worse, a deployment job) that runs as themselves. Maybe someone has read-only access to a system and is using it to ask occasional good questions to the team that actually runs the system. Maybe team C has a formal API but it doesn't work well so person Y informally got access to C's database and it's making system D able to run at all, and it would take a few months of effort to fix the API that's supposed to be there between C and D.

And, more frustratingly but no less impactful - maybe there's a senior person who likes still having access to system E, and making them mad will cause political problems for you and impose burdens when you try to make any other changes, which gets in the way of your ability to solve all the other problems that you need to solve.

Your priority in the short term is to keep the business running, not to improve things. Insider threats are real, but they are much rarer than all the other reasons your business could get in trouble. If you don't have a good understanding of your tech debt and why the tech debt exists and how quickly it can be paid off, focus on that first.


A possible corollary to 2: Don't forget to document incidents as you go. You'll learn a lot in these situations and you may find resources that are handy / work for you along the way.

Keep a text file, or OneNote or whatever, open while working an incident. Don't overthink it, don't over-complicate it.


I use InkDrop for these kinds of things. It’s dead simple, and syncs everywhere. I don’t care about formatting and love that I can use simple markdown. It’s perfect to jot some notes down on a phone and open them later on a computer — more importantly, it’s not tied to a certain OS and it’s cheap.


> Establish some sort of monitoring for the most important assets.

You need this to know if your web sites, servers and services are running, and to be emailed or contacted when not. I am used to Nagios. I understand there is a Nagios fork, Icinga, which I know little about. Some people use Graphite, which I also know little about.

You can also have it check for disks filling up, high CPU load, inodes running out, heavy I/O, etc.

Also, you can send alerts by email etc., or pay a company like Amelia to monitor a dashboard and page/call people at night if a server is down, or even to attempt some simple scripted remediation.
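
If you want to roll a simple check yourself: Nagios-style plugins are just programs that print a status line and exit 0 (OK), 1 (WARNING), or 2 (CRITICAL). A rough disk-usage check in Python, with illustrative thresholds:

    # Nagios-style disk check: the exit code signals OK/WARNING/CRITICAL.
    import shutil
    import sys

    WARN, CRIT = 80, 90  # percent used; example thresholds
    path = sys.argv[1] if len(sys.argv) > 1 else "/"

    usage = shutil.disk_usage(path)
    pct = usage.used / usage.total * 100
    if pct >= CRIT:
        print(f"CRITICAL - {path} at {pct:.0f}% used")
        sys.exit(2)
    if pct >= WARN:
        print(f"WARNING - {path} at {pct:.0f}% used")
        sys.exit(1)
    print(f"OK - {path} at {pct:.0f}% used")
    sys.exit(0)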


6. Get certified. Sysadmin positions are often in the hot seat, the first to take the beating when the shit hits the fan (if that's what happened to your predecessor, it will happen to you), and a certification often helps in landing a new job more than "assumed past experience" will when it's time to pass the HR sniff test.

7. Eventually, in 5-7 years, become a consultant, either in a consultancy or freelance.


The only thing I wonder about here is how it's possible that the infosec/GRC team didn't notice this.


Or the Accounts / CFO / Audit departments.

This sort of fraud is quite common in white collar crime.


As a financial auditor I've no idea how we would pick this up. It wouldn't hit the books of the company at all. We would research board members and check their other directorships but that only covers executives and their formal relations. It would be more likely to be picked up by the auditors of the company offering the bribe (what's this payment for?).


I was thinking of the internal controls that audit contracts, not the external auditors.


How do you notice this? If the man is the ultimate arbiter of what does or does not get purchased, you don’t know why, right?


A sensible company has internal audits and checks and balances.


I mean, what do you do if you notice this?


By following the basics of ISO27K1, for example:

Good infosec teams keep an inventory of all the software used in the org. If they see that the org already pays a vendor for software doing X, a question should be raised: why do we need another one doing the same thing?

Also, each new vendor or software provider needs to get a "security clearance" after the infosec team checks their state of security.

These kinds of practices would probably uncover the shady intent.
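
As a toy illustration of that inventory check (the entries are invented), flagging vendors that overlap on the same capability takes only a few lines:

    # Group the software inventory by capability and flag overlaps.
    from collections import defaultdict

    inventory = [
        ("Vendor A", "endpoint protection"),
        ("Vendor B", "endpoint protection"),  # overlap worth questioning
        ("Vendor C", "ticketing"),
    ]

    by_capability = defaultdict(list)
    for vendor, capability in inventory:
        by_capability[capability].append(vendor)

    for capability, vendors in by_capability.items():
        if len(vendors) > 1:
            print(f"Paying {len(vendors)} vendors for {capability}: {vendors}")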


Back when I worked for BT, an anonymous letter from "a friend" to the internal security team was one option.



>The learning never stops :)

Totally. I wrote a book on it, and I still keep learning :)

>Is this how SnipMate, UltiSnip etc work under the hood?

No idea, I don't use these. I'm generally trying to use as few plugins as possible.


They could have blocked the user in Iran. It makes no sense to block the organization's account.


OFAC sanctions are transitive.


Thank you, I mostly agree :)

And finally I have a good name for those people I discuss with on a regular basis - the "checklist experts" :D


More and more companies are getting compromised by "highly sophisticated" actors.

And somehow, these companies immediately "know" that it was a nation-state actor. Same case here. There are only claims, but no facts, no evidence.

And lately, it's always "the Russians" :) I'm just noticing the pattern.

Lastly, what I really can't understand is this: how in the world are these "highly sophisticated attackers" so bad at covering their tracks? :)

