
I agree.

But in order to be the "person who would say no", the industry needs to recognize that your opinion and expertise matter. It could be a cultural shift, or a gatekeeping-style shift where we protect the title "Engineer", like they do in some professions in some countries.

But given the current state, you can't on one hand blame the developers, and on the other hand treat them like spoiled kids who make too much money and whom AI can replace anyway. It doesn't work this way. A structural engineer bears the responsibility because he has the authority, and respect for his knowledge, to refuse to sign off on a broken design. This is not the case in software engineering.


> treat them like spoiled kids

To what extent is this because we act like spoiled kids? I really do mean "we" here; I probably have acted like that sometimes. I wonder if we, the post-microcomputer generations, are messed up to some extent because we started programming as a fun distraction from the work we were supposed to be doing, rather than learning programming as a serious job from the beginning like our predecessors who learned on mainframes or minicomputers in college.


Right. This is my point exactly. It’s hard to have it both ways. We can’t both carry the burden of responsibility for society, and expect truck drivers to learn programming in a 12-week bootcamp. It’s hard to expect programmers to have rigor in our work if we hire programmers who are self-taught. And that’s a bitter pill to swallow, because lots of self-taught software engineers are really good.

But we don’t need to boil the ocean for things to improve. You personally can still decide you don’t want to make software that harms society. You personally can push back against your company if they want you to sell your users data. Nobody really knows how much they should respect your opinions and skills, so they’ll try things on. If you don’t respect yourself, nobody else will either.


It was a hyperbolic statement related to the fact that if you want developers to take full responsibility, like surgeons, then you need to trust their authority.

I haven’t heard of a surgeon who said “this operation will take as much time as it needs”, only to have the hospital manager pressure him to “finish it in 8 hours, and don’t use too many syringes”.


A structural engineer can refuse to sign off on a design that is not 100% fail-proof. If he is pressured into approving such a design, he can ask the person applying the pressure to put their own signature on it and bear the risk.

Now try to push back on your manager’s request to “cut this long deploy process just once because this big client wants it fast”.


Hey, OP here.

The premise of the post was a response to the ridiculous claim that when something goes bad, we need to blame the engineer(s) who pressed the button.

I tried, through a rant, to demonstrate that there are other people to blame: from politicians who are incompetent at what they do, to CEOs who get compensated for taking the risk, to managers who cut corners, etc.

The culmination of the post is that if you want to blame someone, you might as well blame any of the involved parties. But instead, if we want to prevent such issues in the future, we need to understand that the entire process is broken, rather than throwing individuals under the bus.

I hope this clarifies it a bit


The only bone I’d pick with the article is blaming regulations. The regulations in question rarely say anything particularly boneheaded. Blanket compliance culture interprets those regulations in boneheaded ways. Because to do it any other way would be much more expensive.

My point was: why does a display at a check-in counter need to run EDR software to begin with? Why can't it run a locked-down, slimmed version of Windows on an isolated network that has very low potential to get malware on it?

I know why, because there is, probably, a regulation that says that if you run an airline, you need to have malware protection on all machines. I bet some IT guy even tried to question the need to run EDR on a non-mission-critical machine, but he was stopped by a wall of "it is what it is".


"I know why, because there is, probably, a regulation"

Instead of assuming a regulation and writing a blog about it, do the research and find out. To quote the irreplaceable Benny Hill, "You mustn't assume, because it will make an ass out of you and me."

Also, and more importantly, why default to regulation and not to airline directors pursuing ill-advised modernization strategies pushed by M$?


I've done the research. It’s called first-hand experience. I was the guy making the argument that the controls we already have in place obviate the need for EDR everywhere, but I was told it doesn’t matter, gotta check the box.

Awesome. Is the box there because of government regulation or someone in corporate deciding it's necessary?

The corporation and consultants mostly, judging from my experience. If asked about the precise law or regulation, they just wave their hands.

In my case it was a PCI DSS (Payment Card Industry Data Security Standard) audit.

The thing is, you read regulations, and they pretty much always tell you to do something, but it’s always heavily principle based. Companies are left with extraordinary leeway as to how these regulations are actually implemented.

You’re right, which I also used in my argument, but I was shot down by our own people, because their success metrics were based on passing the audit with the least amount of fuss.

We kept our other controls, we just added EDR as well, because just having it appeased the auditors. If you try to explain your other controls to an auditor, it could turn a part of the audit from five minutes into multiple days.

We don’t use CrowdStrike, but this was years ago.


FWIW, I don't think that's by Benny Hill - https://quoteinvestigator.com/2021/02/08/assume/

As someone who works in this space, I can tell you: it's because big companies buy Cyber Security Insurance, and the insurance forms have a checkbox along the lines of "do you run Endpoint Security Software on all devices connected to your network", and if you check the box you save millions of dollars on the insurance (no exaggeration here). Similarly, if you sell software services to enterprises, the buyers send out similar due diligence forms which require you as a vendor to attest that you run Endpoint Security Software on all devices, or else you won't make the sale. This propagates down the whole supply chain, with the instigator being cyber security insurance costs, regulation, or simply perceived competence, depending on the situation.

So it's not necessarily government regulation per se, but a combination of things:

1. It's much safer (in terms of personal liability) for the decision makers at large companies to follow "standard industry practices" (however ridiculous they are). For example, no one will get fired outside of CrowdStrike for this incident, precisely because everyone was affected. "How could we have foreseen this when no one else did?"

2. The Cyber Security Insurance provider may not cover this kind of incident, given there was no breach, so as far as they are concerned, installing something like CrowdStrike is always profitable.

3. The insurance provider has no way to effectively evaluate the security posture of the enterprise they are insuring, so they rely on basic indicators such as this checkbox, which completely eliminates any nuance and leads to worse outcomes (but not for the insurance provider!)

4. "Bad checkboxes" propagate down the supply chain the same way that "good checkboxes" do (eg. there are generally sections on these due diligence questionnaires about modern slavery regulation, and that's something you really want to propagate down the supply chain!)

Overall I would say the main cause of this issue is simply "big organisation problems". At a certain scale it seems to become impossible for everyone within the organisation to communicate effectively and to make correct, nuanced decisions. This leads to the people at the top seeing these huge (and potentially real) risks to the business because of their lack of information. The person ultimately in charge of security can't scale to understand every piece of software, and so ends up having to make organisation-wide decisions with next to no information. The entire thing is a house of cards that no one can let fall down because it's simply too big to fail.

Making these large organisations work effectively is a very hard problem, but I think any solution must involve mechanisms that allow parts of the business to fail without taking everything down: allowing more decisions to be taken locally, but also the responsibilities and repercussions of those decisions to be felt locally.


Yes, "cyber insurance" is a common driver behind these awful security and system decisions. For example, my company requires password changes every 90-days even though NIST recommends against that. But hey, we're meeting insurance requirements!

Because isolating the display and every machine to which it is necessarily connected obstructs monitoring and greatly increases the cost and delay of fixes if something should go wrong.

Also, I doubt any slimmed-down version of Windows is sufficiently malware-proof without added EPS.


Low potential is not no potential, and most everyone is looking for swiss-cheese defense when it comes to these devices.

In the case of a display at a check-in counter:

- The display needs to be on a network, because it needs to collect information from elsewhere to display it.

- It's on a network, so it needs to be kept updated, because a compromised host elsewhere on the same network will be able to compromise it, and anyway the display vendor won't support you if your product is nine versions behind current.

- Since it needs updates for various components, it almost certainly needs some amount of outbound internet access, and it's also vulnerable to supply-chain attacks from those updates.

- Since it is on a network, and has internet access, it needs to be running some kind of EDR or feed for a SIEM, because it is compromisable and the last thing you want is an unmonitored compromised host on your internal network talking back to C2.

Anything that can be used for lateral movement will be used for lateral movement, and if we can get logs from it we want logs from it. A cross-platform EDR solution is perfect for these scenarios.


Agreed. Re:

"- It's on a network, so it needs to be kept updated, because a compromised host elsewhere on the same network will be able to compromise it"

the suggested solution was "an isolated network".

The problem there is the operator would have to use SD cards to update the adverts... :)


Hey, OP here.

This blew up a little. Thanks everyone for replying. Don't want to repeat myself in replying to everyone individually, so I'll add a comment.

Many people suggest keeping them separate for the sake of a better signal-to-noise ratio. I get it, but I also feel like the comparison to YouTube is unfair. YouTube is a different platform, and you generally don't watch individual content pieces from a creator, but rather subscribe to a creator as a whole. I agree that in the case of YouTube, it's harder to keep a broad channel. I have subscribed to people who provided content on software engineering, tolerated it when they pivoted towards indie-hacking, and unsubscribed when their content turned into anti-government/establishment rants or personal stories of how they overcame adversity, over and over again.

But in the case of blogs, you can subscribe to a particular category. Say you are interested in software engineering content; then you can just subscribe to that category via RSS and get only the relevant content.

I tried looking at other blogs, one of them being Patrick McKenzie's (patio11)[0], and I can't say that his blog is super niched down. Some of the content is very valuable to me, while other content is not interesting.

Another thing is that I think most people misunderstood the "general life" thing. I don't intend, and never planned, to post about the food I ate or how I prepared lasagna on Saturday. I was aiming more towards self-improvement content, for example tips on time management. Where would you put that kind of content? In a software engineering blog? In an entrepreneurial blog? It is relevant to both disciplines, yet it does not fit either niche if you plan to niche down.

One suggestion that I liked from the comments is to treat the content as a DB. So essentially, I can keep 4 blogs: the one under my own-name domain will be the master, and the only one I update. I can design the URL scheme in such a way that individual blog domains point to a particular category, so my software engineering blog will be https://xyz.com, but the domain will point to http://me.com/blog/category/software-engineering. This might simplify things, while still keeping them separate.

Thanks everyone for the comments, I appreciate it.

Edit: after giving it some more thought, I came to the conclusion that my problem is unnecessary complication in the way I manage my blogs. And as a good software engineer, I will probably redo my statically generated blog: unify everything into one big repo that uses common building blocks, where each piece of content is assigned to one blog, under its own domain, and all of them are built from the same repo.

This will ensure that I can reuse common building blocks like OG image generators, JSON-LD tags, and parts of the theme, while minimizing the maintenance to just a single repo.
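For what it's worth, here is a minimal sketch (in Python, not my actual setup) of what "one repo, many domains" could look like: a hypothetical content/ directory where each Markdown file's front matter names its target blog, and a build step that routes every post to the output directory of its own domain. The directory names, the "site:" key, and the SITES mapping are all placeholders for illustration; a real build would also render the shared theme, OG images, and JSON-LD tags mentioned above.

    import pathlib
    import shutil

    # Hypothetical mapping: content category -> output directory for that domain.
    SITES = {
        "software-engineering": "dist/xyz.com",
        "entrepreneurship": "dist/biz.example",
        "self-improvement": "dist/growth.example",
        "personal": "dist/me.example",  # the "master" blog under my own name
    }

    def site_of(post: pathlib.Path) -> str:
        """Read a 'site:' key from a simple '---'-delimited front matter block."""
        lines = post.read_text(encoding="utf-8").splitlines()
        if lines and lines[0].strip() == "---":
            for line in lines[1:]:
                if line.strip() == "---":
                    break
                if line.startswith("site:"):
                    return line.split(":", 1)[1].strip()
        return "personal"  # anything unlabeled goes to the master blog

    def build() -> None:
        for post in pathlib.Path("content").glob("*.md"):
            out_dir = pathlib.Path(SITES.get(site_of(post), SITES["personal"]))
            out_dir.mkdir(parents=True, exist_ok=True)
            # A real build would render Markdown with the shared theme, OG image
            # generator, and JSON-LD helpers here; copying is just a stand-in.
            shutil.copy(post, out_dir / post.name)

    if __name__ == "__main__":
        build()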

[0] https://www.kalzumeus.com/archive/


Hey Xena!

Funny thing, I actually checked your blog before posting this question, to see how other people do it. I'm not a frequent reader of your blog, but I was under the impression that yours is mostly consistent in terms of content. It still feels to me that you are sticking to the broad topic of software engineering, but I haven't read all the content, so I can't tell for sure.

Thanks for your comment!


It's mostly software as of late, but there's bits like this: https://xeiaso.net/blog/2024/the-layoff/

As someone who has been actively pursuing entrepreneurship for the past year, while doing it on and off pretty much from the beginning of my career (about 15 years), as well as following other solopreneurs/indie-hackers on social networks, I move closer and closer to the realization that it's largely luck-based. Let me explain.

Yes, it's humble to say that it's all luck, and I think it's egocentric to claim, once you succeed, that you know how entrepreneurship works, let alone to build frameworks/coaching programs around it.

However, based on my observations, as well as my experience, luck plays a major role in success in entrepreneurship/business/work. You could get lucky because your manager likes your personality, or you could be a hard-working asshole that no one loves and/or respects. The same goes for other aspects. You could get lucky because YouTube/Twitter decided to pick up your content and show it to millions of people, thus earning you customers/followers/a brand.

I do think that the only formula for success is when preparation meets luck. As cliché as it sounds, if you put in the work but get unlucky, you won't succeed. If luck finds you, but you have no product or your videos suck, you won't succeed. It's only when you work constantly, improve your craft, and continue to stay *active* that, when luck finds you, you might succeed.

And this is what I am going to teach my kids, and this is the answer I'll provide to a random person on the streets.


I struggled for 10 years to find a SaaS product that worked, and launched 20 projects (and there was no AI back then), way before the "indie hackers" movement.

Now our company is doing a few million a year, highly profitable, with no VC or external investment.

Was it luck that I decided to try over and over? I might have gotten lucky, but I persisted for so long while many others gave up. For me, "making your own luck" fits very well. Working hard gives you opportunities, and you learn in the process, so on the next try your chances are better. My last projects got more and more profitable as I learned how to identify what people want.


Sure, I don't deny the importance of hard work. But then again, how many people worked just as hard as you, or maybe even harder, and it took them 20 years to get to 1M? 30 years? How many never made it despite working hard?

I feel like we humans have some repulsion towards "getting lucky". It's seen as discrediting our hard work, and yet I think luck plays a much bigger role. And it's spread across the board, starting from the type of community you are born into, the education you get, the access to connections you and/or your family have, the money, etc.

It doesn't mean that it's only luck. As I said, you can get lucky a thousand times, but if you didn't prepare for it, the luck won't do you any good. This is why consistency is important.


I use B2, because it's only for encrypted incremental backups from restic.
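For context, a minimal sketch of that kind of setup, assuming restic is installed and using placeholder bucket names and credentials (not my real values): restic encrypts everything client-side and only uploads changed chunks, so B2 just stores opaque blobs.

    import os
    import subprocess

    # Placeholders; real values should come from a password manager or env vars,
    # never from source code.
    env = dict(os.environ,
               B2_ACCOUNT_ID="<b2-key-id>",
               B2_ACCOUNT_KEY="<b2-application-key>",
               RESTIC_PASSWORD="<repository-password>")
    repo = "b2:my-backup-bucket:laptop"

    # One-time: create the encrypted repository (fails harmlessly if it exists).
    subprocess.run(["restic", "-r", repo, "init"], env=env, check=False)

    # Recurring: incremental backup; restic uploads only new/changed chunks.
    subprocess.run(["restic", "-r", repo, "backup",
                    os.path.expanduser("~/Documents")], env=env, check=True)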


Yeah, I get what you are saying. But I guess I'm living in a "survival" mindset, and realize that at any moment I might have to move, meaning I'll be able to take only the important things with me. A NAS is usually not important, nor convenient, to take.

However, as I was writing this, I thought that I could use a mini PC with an additional HDD or two (for RAID redundancy), and if for whatever reason I have to move, I can just take the hard drives with me.

As for Google/iCloud: iCloud makes it easier to take your data out; it's literally a folder on the machine. On top of that, Google ruins the metadata of photos (it took me a few days to fully merge it back with the help of some script off GitHub).
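To illustrate the kind of fixing involved (this is a rough sketch, not the actual GitHub script I used): Google Takeout typically puts the original capture time in a JSON sidecar next to each photo, under photoTakenTime.timestamp. Assuming that usual Takeout layout, restoring at least the file timestamps can look like this; real scripts also rewrite the EXIF data itself.

    import json
    import os
    import pathlib

    TAKEOUT = pathlib.Path("Takeout/Google Photos")  # assumed export location

    for sidecar in TAKEOUT.rglob("*.json"):
        photo = sidecar.with_suffix("")  # e.g. IMG_1234.jpg.json -> IMG_1234.jpg
        if not photo.exists():
            continue  # album metadata files have no matching photo
        meta = json.loads(sidecar.read_text(encoding="utf-8"))
        ts = meta.get("photoTakenTime", {}).get("timestamp")
        if ts:
            # Restore access/modification times from the sidecar's epoch seconds.
            os.utime(photo, (int(ts), int(ts)))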


Some stuff is in folders, but not all of it. If I bring up the Files app on my phone, there isn’t a folder for Photos. That is a separate service, which likely has a DB (or several) sitting in the cloud somewhere. I think if I want to get my photos out in an efficient way, I’d need to use Photos on the Mac to download everything, then open the photo library package and find the Originals folder… or export everything through the UI. Other things, like bookmarks, app settings, etc., are all hidden away somewhere.

The metadata is an issue. If you are using Apple products, it makes much more sense to use iCloud rather than Google.

I picked up a little micro PC off Amazon a while ago to mess around with. There was space in it for 2 drives. If you had to move, I’d take the whole thing, not just the drives. The whole package is pretty small (it can fit in a jacket pocket). You’re not going to save much space by just taking the drives, you’ll just complicate things for yourself down the line. You may not see a piece of hardware as important, but it’s the data you’re taking.

