Hacker News | srer's comments

> But fundamentally no one should ever be trying to merge code that hasn’t been unit tested. If they are, that is a huge problem because it shows arrogance, ignorance, willingness to kick-the-can-down-the-road, etc.

Here you are asserting that unit testing is fundamental, and that not believing this is arrogance and ignorance.

I'd suggest your view that your way is "the" way, is an ironic display of arrogance, and perhaps ignorance.

And this, I think, is the core of much of the anti-TDD sentiment. It's not that we think TDD and unit tests are without their positives; it's that we don't like being told this is the one true way to write software, and that if we don't do it your way we are engaging in poor engineering.


Consider that it is not an unheard-of experience to join the workforce, use a preconfigured MacBook with precisely one SSD, and spend one's career building on managed platforms like AWS Lambda.

In such a world, perhaps those who remember there's a world of flexibility and power within the OS could be seen as wielding some supernatural power.


> The language has memory-safe concurrency (except for maps, which is weird)...

My understanding is you should operate the other way around. Things aren't safe for concurrent mutation unless it's explicitly documented as safe.

> So you can have parallelism without worrying too much about race conditions or leaks.

You might not worry, but I find these the two easiest classes of Go bug to find when entering new codebases ;).

Still, I agree Go is easier to get a web service up and running with.


I work somewhere going through a similar process for much the same reason.

And to be frank, I'm wondering if how it ends is in me looking for a new job.

I know there are good reasons for the business to go through this process, and maybe it really is important for the business's future. But as an individual it sucks: my main motivator, achieving good outcomes for users, has been derailed by a mass of procedure instigated in the belief that some auditor I've never met will be impressed.

Now, with all this new procedure piled on, all of it very new and thus immature and untried, I feel my energy to innovate, build new things, and generally drive change sapping away.

So the cost to your company isn't just the direct hours and the auditor's fees! There's also a harder-to-quantify cost, as everything and everyone bends to work the way some external entity thinks they should. This loss of autonomy and spirit is, I think, potentially far more expensive than the direct costs.

Of course, maybe we're just going about the whole process poorly. I like to think if you were to do it well maybe it's not so bad. But, to do that I think you'd need abnormally talented staff involved, who are well versed in the topic, enthusiastic and empathetic. Most people involved in these sorts of processes in my experience aren't like that, and I can't say I blame them.


For what it's worth, as someone who helped quarterback the SOC 2 process, it's a place that's ripe for personal innovation. We had automated scripts take our quarterly screenshots, learned and made full use of AWS Config to check and enforce compliance, worked hard to automate patching, and started tracking all audit tasks in Jira Service Desk. We ended up spending about two engineer-days a quarter on audit tasks. There's no escaping the annual review, but we found that if you spend some time streamlining, it goes pretty smoothly.


There are tons of people automating it already (Vanta, Secureframe, etc.)


> For what it's worth, as someone who helped quarterback the SOC2 process, it's a place that's ripe for personal innovation

I wonder if the parent comment was talking more about how something like committing code, once done entirely on your own, becomes a process that can be delayed purely for the sake of compliance.

Something that was purely self contained and blocked by nothing is now blocked by at least someone else reviewing what you've done. But it's not just reviewing the code. To get something reviewable also means writing a detailed description of the work, documenting any and all processes involved and then maybe answering follow up questions if they come up. If you happen to change something then it needs to be re-tested, etc..

This is how something goes from being done and self-tested in 2 hours to taking 3+ days. It can be a motivation killer at a personal level, and it delays goals, features, and basically everything at the company level.

I'm all for documenting work and following best practices like code reviews, well written tickets, etc.. but there are certain things where sometimes having all of that isn't an option because you don't have a team built around what you're doing. For example I'm in a platform / SRE / "devops engineer" type of role at a place and I end up shipping quite a lot of infrastructure code to production without review. I use my best judgment here. If it's something I can test in a test environment and I have high confidence I'll do it. If there's high stakes to the change I'll ask someone on the dev team to screenshare with me while I explain everything since most of them aren't deep in the woods with Terraform, Kubernetes, etc., but a 2nd set of eyes is still very helpful and valuable.

The problem is that not every change needs that level of review; it becomes extremely wasteful. If I add a Kubernetes annotation to a deployment, that's something I can do on my own in 10 seconds and push to production. But if it needs a review, then it needs a Jira ticket documenting why, a PR, doing the work, finding a reviewer, explaining to a developer without Kubernetes knowledge what a deployment is, then what an annotation is, then finishing up by explaining what Kustomize does and how it works, along with Argo CD, because the state of the pod after Argo CD deploys it is the verification step to make sure it works. All of that has to be done over a screenshare, because an application developer won't have these tools installed or know how to use them.

This ends up taking potentially longer than a day all-in. Maybe 15 minutes for the ticket + PR, but it could take 4-5 hours before someone is ready to review it, then another 30 minutes for the screenshare; suddenly it's end of day. And for SOC 2 compliance I'm pretty sure you also need to deploy to a pre-prod environment as part of the workflow, so the next day you spin up a temporary Kubernetes cluster in a proper test environment in the cloud, deploy a test application to it, apply your PR, verify it, kill your test cluster, and then deploy to prod, which also involves someone else approving the PR being merged to your infra's main branch.

Now imagine this type of scenario coming up at least 5 or 10 times a week. You would be in a constant state of being blocked and delayed. Without SOC 2 compliance I would have just pushed the annotation to production (main branch) directly in 10 seconds and moved onto the next thing.


You're absolutely right.

Most organizations do a completely hopeless job of implementing compliance sensibly. Those responsible for compliance tend to choose a solution that's scalable for them with zero regard for the inefficiencies they're introducing elsewhere in the organization.

Solving compliance sensibly provides organizations with a substantial, long-term competitive advantage. Nobody working in compliance seems to care.


10 hours isn't too bad when dealing with a giant organization. Often problems cut across teams and services, and troubleshooting then liaising and ultimately getting a remediation action through (which might involve producing, testing and releasing a patch) all takes up time. Sometimes things blow out to weeks!

Personally, my last AWS Support ticket was pertaining to Lambdas and I got a very good answer. I was impressed.

It's important, I think, to appreciate that working in support is difficult: every single day brings a customer with their own urgent problem, and when urgency is the norm, nothing is urgent. And heart? It can be soul-sucking work.

In my observation, support takes the brunt of the rest of the org's shortcomings: bad releases, deprecated features, etc. drive customers towards you in unfortunate circumstances. Sometimes there's a whole waterfall of shit raining down on you, and it ain't your fault, and there's nothing you can do or could have done.

And to add insult to injury, you're normally at the bottom of the org pecking order.

As I say, difficult work. I salute all those who do it!


We have SLAs which pay well but incur fines. Since we pick the hosting partners, we can't write an outage off as an 'act of god' when, as in this case, it happens because of them. 10 hours is a bizarre amount of time and would be very costly for many reasons.


10 hours is too much time. Time is money, and we deal in time-sensitive business. If people are not able to process transactions, how are their users able to do their daily activities? Money will definitely have been lost once the finance team analyses the situation.


You're not paying for the support you're expecting. Venting on HN instead of upgrading to the Business tier during the however-many hours your systems have been down was an oversight on your part. You could face the same suspension issue with any cloud provider. A painful, but necessary, lesson it seems.


I do the opposite. For C, Python and Go alike, I prepared before joining, building a good understanding of the language and stdlib by reading books and working through the exercises.

It worked very well for me: on joining, my knowledge of the languages and their stdlibs was respectable, and I could hit the ground running and get patches accepted easily. In each language's case I could also fix weirdness in their code where their own knowledge had gaps. That impressed people and made a pretty good first impression.

It's probably not universally true, but in my observation learning on the job much more commonly ends up resulting in sizable gaps in language knowledge.

If you've got a role working in $LANG, I don't see how upskilling in $LANG before you join is a trap. It's usually knowledge with lasting value that offers returns across jobs.

I've never worked with C++; I hear hints that it's too big to actually learn and that one must instead learn a subset. Maybe that's true, and the article applies more there.


Agreed. Personally I'm always grateful when someone joins and brings along a good up-to-date idea of how our particular languages/frameworks are used and taught by the rest of the world. It's so easy for a big internal codebase to devolve over time into a bunch of weird patterns, and forcing some newcomer/outsider perspectives onto it is a great corrective force.


A single server is much faster than most people think, too!

In the microservice or serverless arrangements I've seen, data is scattered across the cloud.

It's common for the dominant factor in performance to be data locality. Usually talk of data locality is about avoiding trips to RAM, or worse, disk, but in our "modern" distributed cloud things, finding a bit of data frequently involves a trip over the network. What in the monolith world was invoking a method on an account object has become making an HTTP POST to the accounts microservice.

What might have been a microsecond-scale operation in the single-server world might become hundreds of milliseconds in the distributed cloud world. And while you can't horizontally scale a single server, a 1000x head start in performance might delay scaling issues for a very long time.

A most excellent paper related to this topic that I think should be mandatory reading before allowing anyone an AWS account is http://www.frankmcsherry.org/assets/COST.pdf :)


When you put in even half the effort to set things up properly, a single server can handle a lot of load and traffic, and get a lot of things done.

If you know some details of the services you're going to host on that hardware, the things you can do while saving a lot of resources are considered black magic by many people who only deploy microservices to K8s.

...and you don't need VMs, containers, K8s or anything else.


What are those details?


It generally boils down to three things: 1) how many resources a service needs to run well, 2) how much the service will consume if left unchecked, and 3) how performant you want your service to be.

After understanding these parameters, you can limit the resources of your application by running it under a cgroup. The service then can't surpass the limits you've put on it, and the cgroup will apply pressure as it nears them.

Also, sharing resources is good. Instead of running 10 web server containers, you can most of the time host all of the sites under a single server with virtual hosts. This allows good resource sharing and doing more with fewer processes.

At the extreme end, I'm running a home server (DNS server, torrent client, syncing client, a small HTTP server and an FTP server, with some other services) on a 512 MB Orange Pi Zero. The thing works well and never locks up; it has plenty of free RAM, and none of the services are choking.


I agree but at the same time: Inter-process communication is also faster when a process is allowed to write to or read from another process's memory. Doesn't make it a good idea, though.


The way I deploy them doesn't mean "compromise one, compromise all". For example, I generally leave SELinux intact. So they're properly isolated.


Yeah, nowadays you could just scale-up and still have a lot of leeway. A high-end server (w/ redundancy maybe) is more than enough for 95% of all common startup use cases.

Distributed computing only makes sense when you're starting to deal with millions of daily users.


Yes. HN works well with one dual-socket server (2 × 4).

A new server could have 4 × 64.

Also, to distribute the load you can use an ‘A’ DNS record per server:

https://blog.uidrafter.com/freebsd-jails-network-setup


To be fair, you can run a microservices stack on a single server and it will be very fast, especially if you use gRPC instead of plain HTTP.


Would grpc make a huge difference if requests took a second each?


gRPC, being a binary RPC protocol over HTTP/2, avoids much of the overhead of typical JSON-over-HTTP APIs and achieves low latency as well.


Leetcode is programming.

It's not sprint planning, 1-on-1s, backlog grooming, Jira tickets, Slack support, gathering requirements, making estimates, negotiating features with management, responding to alerts, or the million other things you get sucked into as a nominal "software engineer".

But it's definitely programming.

It's a specific style of working, not entirely unlike TDD, with a specific theme of problems which aligns well with the content of undergraduate algorithms and data structures courses.

Is there a degree of pattern matching and rote memorization? Most certainly!

And does pattern matching and rote memorization play a significant part in real-world programming? Definitely! If there's not as much memorization happening, it's because the art of copy-pasting from Stack Overflow supplants the need.

And when you wish to learn the content of a university course, what's one rather effective technique? Do exercises, a lot of them, and when you're stuck, look at how other people solved them.

That's all leetcode is.

Perhaps others think an undergrad course in algorithms and data structures is useless material, but I do not. I think it makes you a better programmer.


I think a better way to phrase what the parent is saying is that Leetcode is not software engineering. You can be awful at solving leetcodes and still be a great software engineer.

Now, do I think algorithms and data structures make people better programmers? Absolutely. But I feel most people work in an area like web development, where their usefulness is severely reduced because you're working at such a high level of abstraction.


I agree. Memorizing algorithms to answer leetcode questions has its benefits, but it's extremely rare that a front-end web developer would need those skills. If they did, it would mean the backend engineers aren't using the right approach to organize and serve data to the client.


The point is that if you know the basic algorithms and can apply them, you don't need to memorize the answers to leetcode. You can (gasp) come up with them.


Perhaps "leetcode isn't programming" is hyperbolic, and philosophically arguable, but what's 100% clear is leetcode uses a different skillset from real-world programming, and if you want to get better at one, doing the other won't help.


Sprint planning, backlog grooming, etc. are not the examples I would use to support the opinion that leetcode is not real programming. (I definitely agree that some knowledge of algorithms and data structures can be very beneficial.)

My examples would be activities like building complex systems, maintaining and extending them, making them modular and flexible enough, doing performance analysis, identifying risks, addressing technical debt, considering and implementing different architecture patterns, creating efficient CI/CD pipelines, and automating routine work.


You are wrong; it's not. It's simply memorizing optimal solutions to obscure programming challenges. The goal is to reproduce the optimal solution within a few minutes, and nobody can do that unless they've memorized it. The reason leetcode has become popular is that that Gayle woman lobbied herself into a position of influence at Google and then wrote a book so she could profit from a "problem" tech companies think they have.


As a general rule in dev work, trying to make evidence-based decisions is fairly difficult. There's just not much evidence around yet that makes it obvious what the best choice might be in your particular situation.

And at the end of the day you have to contend with being in a work environment where politics and personalities rule, not science (or engineering).

That said, I do wish more devs would take an interest in the available quality literature. Unfortunately, at work I'm far more likely to run into an Uncle Bob recommendation than a recommendation from the ACM Digital Library.


I largely agree with you, that rather than introducing Rust into your workplace it's easier to change workplaces.

I currently work in a Go shop. Why Go? Did it win some business-value-delivering contest? I don't think so; it's just that the initial team lead liked Go the most of the languages they knew, and made everyone else learn it.

If you read my comment history you'll see I think Go is a mediocre choice. But, it's here to stay in this company and it doesn't matter what new language comes along it won't have the inertia or acceptance of the status quo language. The expected improvement has to be large enough to overcome the associated costs to sell the business case.

This idea of the status quo being somewhat arbitrary, and difficult to change, applies to more than just languages. It applies to everything else, your architecture, your work processes, your culture. A blueprint is set early on by some founding members, and perpetuated potentially forever.

In this world, rather than trying to change a company, you change employers. Innovation is change, and to achieve it the individual changes, leaving their company to remain the same.

Personally, I seek to improve in what I do, to innovate. And I think what we do can be done better and that the tools and systems in-place play a part in that. But will I introduce them at my workplace?

Perhaps not, I think I'll find a new job, and both my old and next company can pay the rather high costs of someone leaving and someone joining. This makes me a little sad, since I like my company and leaving would be a significant amount of institutional knowledge walking out the door.


> I currently work in a Go shop. Why Go? Did it win some business value delivering contest? I don't think so, it's just the initial team lead liked Go the most of the languages they knew and made everyone else learn it.

By this logic, the language would keep changing whenever new people join and desire a different one.

> This makes me a little sad, since I like my company and leaving would be a significant amount of institutional knowledge walking out the door.

You shouldn't be. If the company you are leaving is any good, they will have documented the important stuff; if not, they'll get what they deserve. Better to join a place where you are part, or lead, of the initial team, because with anything less you'd simply be following someone else's language choice, or, even if the language is one you prefer, their product design decisions.


> By this logic either language keep changing when new people join and desire some different language.

Changing is a big effort. Unless a company grows huge, they're probably stuck with their original choice of language for the whole lifecycle.

