
>Coding at scale is about managing complexity.

I would extend this one level higher to say managing complexity is about managing risk. Risk is usually what we really care about.

From the article:

>any one person's opinions about another person's opinions about "clean code" are necessarily highly subjective.

At some point CS as a profession has to find the right balance of art and science. There's room for both. Codifying certain standards is the domain of professions (in the truest sense of the word) and not art.

Software often likens itself to traditional engineering disciplines. Those traditional engineering disciplines manage risk through codified standards built through industry consensus. Somebody may build a pressure system that doesn't conform to standards. They don't get to say "well your idea of 'good' is just an opinion so it's subjective". By "professional" standards they have built something outside the acceptable risk envelope and, if it's a regulated engineering domain, they can't use it.

This isn't to say a coder would have to follow rigid rules constantly, or that the field needs a regulatory body, but that the practice of deviating from standardized best practices should be communicated in terms of the risk rather than claiming it's just subjective.




A lot of "best practices" in engineering were established empirically, after root cause analysis of failures and successes. Software is more or less evolving along the same path (structured programming, OOP, higher-than-assembly languages, version control, documented ISAs).

Go back to earlier machines and each one had its own assembly language and instruction set. Nobody would ever go back to that era.

OOP was pitched as a one-size-fits-all solution to all problems, and as a checklist of items that would turn a cheap offshored programmer into a real software engineer thanks to design patterns and abstractions dictated by a "Software Architect". We all know that to be false, and bordering on snake oil, but it still had some good ideas. Having a class encapsulate complexity and define interfaces is neat. It forces you to think in terms of abstractions and helps readability.

> This isn't to say a coder would have to follow rigid rules constantly, or that the field needs a regulatory body, but that the practice of deviating from standardized best practices should be communicated in terms of the risk rather than claiming it's just subjective.

As more and more years pass, I'm less and less against a regulatory body. It would help with getting rid of snake oil salesmen in the industry and limit offshoring to barely qualified coders. And it would simplify hiring too, by having a known certification that tells you someone at least meets a certain bar.


Software is to alchemy what software engineering is to chemistry. Software engineering hasn't been invented yet. You need a systematizing scientific revolution (Kuhn style) before you can or should create a regulatory body to enforce it. Otherwise you're just enforcing your particular brand of alchemy.


Well said. In the 1990s, in the aerospace software domain, it was once referred to as an era of “cave drawings”.


> OOP was pitched as a one-size-fits-all solution to all problems, and as a checklist of items that would turn a cheap offshored programmer into a real software engineer.

Not initially. Eventually, everything that reaches a certain minimal level of popularity in software development gets pitched by snake-oil salesmen to enterprise management as a solution to that problem, including things developed specifically to deal with the problem of other solutions being cargo-culted and repackaged that way, whether it's a programming paradigm or a development methodology or metamethodology.


>having a known certification that tells you someone at least meets a certain bar.

This was tried a few years back by creating a Professional Engineer licensure for software, but it went away due to lack of demand. It could make sense to artificially create a demand by the government requiring it for, say, safety-critical software, but I have a feeling companies wouldn't want this of their own accord because that license gives the employee a bit more bargaining power. It also creates a large risk for the SWEs due to the lack of codified standards and the inherent difficulty of software testing. It's not like a mechanical engineer who can confidently claim a system is safe because it was built to ASME standards.


> It could make sense to artificially create a demand by the government requiring it for, say, safety-critical software, but I have a feeling companies wouldn't want this of their own accord because that license gives the employee a bit more bargaining power.

For any software purchase above a certain amount, the government should be forced to have someone with some kind of license sign off on the request. So many projects have doubled or tripled in price after it was discovered the initial spec didn't make any sense.


I think that at this point, for the software made/maintained for the government, they should just hire and train software devs themselves.

From what I've seen, with a few exceptions, government software development always ends up with a bunch of subcontractors delivering bad software on purpose, because that's how they ensure repeat business. E.g., the reason the Open Data movement didn't achieve much, and why most public systems are barely integrated with each other, is that every vendor does its best to prevent that from happening.

It's a scam, but like other government procurement scams, it obeys the letter of the law, so nobody goes to jail for this.


The development of mass transit (train lines) has a similar issue when comparing the United States to Western Europe, Korea, Japan, Taiwan, Singapore, or Hong Kong. In the US, as much as possible is sub-contracted. In the others, a bit less is, and there is more engineering expertise on the gov't payroll. There is a transit blogger who writes about this extensively... but his name eludes me. (Does anyone know it?)

Regarding contractors vs in-house software engineering talent, I have seen (in the media) that the UK gov't (including the NHS) has hired more and more talent to develop software in-house. No idea if UK folks think they are doing a good job, but it is a worthy experiment (versus all contractors).


>should just hire and train software devs themselves

There are lots of people who advocate this, but it's hard to bring to fruition. One large hurdle is the legacy costs, particularly because it's so hard to fire underperforming government employees. Another issue is that government salaries tend not to be very competitive by software industry standards, so you'll only get the best candidates if they happen to be intrinsically motivated by the mission. Third, software is almost always an enabling function that is often competing for resources with core functions. For example, if you run a government hospital and you can hire one person, you're much more likely to prefer a healthcare worker over a software developer. One last, and maybe unfair, point is that the security of government positions tends to breed complacency. This often creates a lack of incentive to improve systems, which results in a lot of legacy systems hobbling along past their usefulness.

I don't think subcontractors build bad systems on purpose, but rather they build systems to bad requirements. A lot of times you have non-software people acting as program managers who are completely fine with software being a black box. They don't particularly care about software as much as their domain of expertise and are unlikely to spend much time creating good software requirements. What I do think occurs is that contractors will deliberately underbid on bad requirements, knowing they will make their profits on change orders. IMO, many of the cost overruns can be avoided by having well-written requirement specs.


Do you mean sign as in qualify that the software is "good"?

In general, they already have people who are supposed to be responsible for those estimates and decisions (project managers, contracting officers etc.) but whether or not they're actually held accountable is another matter. Having a license "might" ensure some modicum of domain expertise to prevent what you talk about but I have my doubts


> Do you mean sign as in qualify that the software is "good"?

We're not there yet. Just someone to review the final spec and see if it makes any sense at all.

The canonical example is the Canadian Phoenix payroll system. The spec described payroll rules that didn't make any sense. The project tripled in cost because they had to rewrite it almost completely.

> In general, they already have people who are supposed to be responsible for those estimates and decisions (project managers, contracting officers etc.) but whether or not they're actually held accountable is another matter.

For other projects, they must have an engineer's signature or else nothing gets built. So someone does a final sanity check on the work of the project managers, contracting officers, and humanities-diploma bureaucrats. For software, none of that is required, despite the final bill often being as expensive as a bridge.

> Having a license "might" ensure some modicum of domain expertise to prevent what you talk about but I have my doubts

Can't be worse than none at all.


Annoyingly, the government already sorta does this: many federal jobs, as well as the patent bar, require an ABET-accredited degree.

The catch is that many prominent CS programs don’t care about ABET: DeVry is certified, but CMU and Stanford are not, so it’s not clear to me that this really captures “top talent.”


I suspect this is because HR and probably even the hiring managers cannot distinguish between the quality of curriculums. One of the problems with CS is the wide variance in programs...some require calculus through differential equations and some don't require any calculus whatsoever. So it's easier to just require an ABET degree. Something similar occurs with Engineering Technology degrees, even if they are ABET accredited.

To your point, it unfortunately and ironically locks out many CS majors for computer science positions.


> I suspect this is because HR and probably even the hiring managers cannot distinguish between the quality of curriculums.

Part of the reason for that is they likely haven't even been exposed to graduates of good computer science curriculums.


In what sense do you think they haven't been exposed? As in, they've never seen their resumes? Or they've never worked with them?

I think it's a misalignment of incentives in most cases. HR seems to care very little once someone is past the hiring gate. So they would have to spend the time to understand the curriculum distinctions, probably change their grading processes, etc. It's just much easier for them to apply a lazy heuristic like "must have an ABET accredited degree" because they really don't have to deal much with the consequences months and years after the hire. In some cases, they even overrule the hiring manager's initial selection.


>the practice of deviating from standardized best practices should be communicated in terms of the risk rather than claiming it's just subjective.

The problem I see with this is that programming could be described as a kind of general problem solving. Other engineering disciplines standardize methods that are far more specific, e.g. how to tighten screws.

It's hard to come up with specific rules for general problems though. Algorithms are just solution descriptions in a language the computer and your colleagues can understand.

When we look at specific domains, e.g. finance and accounting software, we see industry standards have already emerged, like dealing with fixed point numbers instead of floating point to make calculation errors predictable.
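
To make that concrete, here's a minimal Python sketch (my own illustration, not any particular codified standard) of why decimal/fixed-point arithmetic makes calculation errors predictable where binary floating point doesn't:

    from decimal import Decimal

    # Binary floating point cannot represent 0.1 or 0.2 exactly,
    # so sub-cent errors creep into currency math.
    print(0.1 + 0.2)                          # 0.30000000000000004

    # Decimal arithmetic (the usual stand-in for fixed point in Python)
    # stays exact for these values, so rounding behaviour is predictable.
    print(Decimal("0.10") + Decimal("0.20"))  # 0.30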

If we now start codifying general software engineering, I'm worried we will just codify subjective opinions about general problem solving. And that will stop any kind of improvement.

Instead we have to accept that our discipline is different from the others, and more of a design or craft discipline.


>kind of general problem solving

Could you elaborate on this distinction? At the superficial level, "general problem solving" is exactly how I describe engineering in general. The example of tightening screws is just a specific example of a fastening problem. In that context, codified standards are an industry consensus on how to solve a specific problem. Most people wrenching on their cars are not following ASME torque guidelines but somebody building a spacecraft should be. It helps define the distinction of a professional build for a specific system. Fastening is the "general problem"; fastening certain materials for certain components in certain environments is the specific problem that the standards uniquely address.

For software, there are quantifiable measures. As an example, some sorting algorithms are objectively faster than others. For those systems where it matters in terms of risk, it probably shouldn't be left up to the subjective eye of an individual programmer, just like a spacecraft shouldn't rely on a technician's subjective opinion that a bolt is "meh, tight enough."
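
As a rough illustration of "objectively faster" (my own sketch, not anything from a standard), the difference is something you can measure rather than argue about:

    import random
    import timeit

    def bubble_sort(xs):
        # Deliberately naive O(n^2) sort, used here only as a baseline.
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    data = [random.random() for _ in range(2000)]
    print(timeit.timeit(lambda: bubble_sort(data), number=3))  # O(n^2) baseline
    print(timeit.timeit(lambda: sorted(data), number=3))       # built-in Timsort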

>I'm worried we will just codify subjective opinions about general problem solving.

Ironically, this is the same attitude in many circles of traditional engineering. People who don't want to adhere to industry standards have their own subjective ideas about how to solve the problem. Standards aren't always right, but they create a starting point to 1) identify a risk and 2) find an acceptable way to mitigate it.

>Instead we have to accept that our discipline is different from the others

I strongly disagree with this and I've seen this sentiment used (along with "it's just software") to justify all kinds of bad design choices.


>For software, there are quantifiable measures. As an example, some sorting algorithms are objectively faster than others. For those systems where it matters in terms of risk, it probably shouldn't be left up to the subjective eye of an individual programmer, just like a spacecraft shouldn't rely on a technician's subjective opinion that a bolt is "meh, tight enough."

Then you start having discussions about every algorithm being used on collections of 10 or 100 elements, where it doesn't really matter to the problem being solved. Instead, the language's built-in sort functionality will probably do here and increase readability, because you know what's meant.

Profiling and replacing the algorithms that matter is much more efficient than looking at each usage.
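
For example (a hedged sketch; handle_request is just a stand-in name), that "profile first, then replace the hot spot" workflow looks roughly like this in Python:

    import cProfile
    import random

    def handle_request(records):
        # Readable default: the built-in sort is fine for small collections.
        return sorted(records)

    data = [random.random() for _ in range(100)]

    # Profile the real workload; only swap in a specialised algorithm
    # where the profiler shows sorting actually dominates the runtime.
    cProfile.run("handle_request(data)")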

Which again brings us back to the general vs specific issue. In general this won't matter, but if you're in a real-time embedded system you will need algorithms that don't allocate and have known worst-case execution times. But here again, at least for the systems that matter, we have specific rules.


>Profiling and replacing the algorithms that matter is much more efficient than looking at each usage.

I think this speaks to my point. If you are deciding which algorithms suffice, you are creating standards to be followed just as with other engineering disciplines.

>Then you start having discussions about every algorithm being used on collections of 10 or 100 elements, where it doesn't really matter to the problem being solved

If you're claiming it didn't matter for the specific problem, then you're essentially saying it's not risk-based. The problem here is that you will tend to over-constrain design alternatives regardless of whether it decreases risk or not. My experience is that people will strongly resist this strategy, as it gets interpreted as mindlessly draconian.

FWIW, examining specific use cases is exactly what's done in critical applications (software as well as other domains). Hazard analysis, fault-tree analysis, and failure modes and effects analysis are all tools to examine specific use cases in a risk-specific context.

>But here again, at least for the systems that matter, we have specific rules.

I think we're making the same point. Standards do exactly this. That's why in other disciplines there are required standards for some use cases and not others (see my previous comment contrasting aerospace with less risky applications).


> At some point CS as a profession has to find the right balance of art and science.

That seems like such a hard problem. Why not tackle a simpler one?


I didn’t downvote but I’ll weigh in on why I disagree.

The glib answer is “because it’s worth it.” As software interfaces with more and more of our lives, managing the risks becomes increasingly important.

Imagine if I transported you back 150 years to when the industrial revolution and steam power were just starting to take hold. At that time there were no consensus standards about what makes a mechanical system “good”; it was much more art than science. The number of mishaps and the reliability reflected this. However, as our knowledge grew, we not only learned what latent risks were posed by, say, a boiler in your home, but we also began to define what is an acceptable design risk. There's still art involved, but the science we learned (and continue to learn) provides the guardrails. The Wild West of design practice is no longer acceptable due to the risk it incurs.


I imagine that's part of why different programming languages exist -- i.e., you have slightly fewer footguns with Java than with C++.

The problem is, the nature of writing software intrinsically requires a balance of art and science no matter what language it is. That is because solving business problems is a blend of art and science.

It's a noble aim to try to avoid solving unnecessarily hard problems, but when it comes to the customer, a certain amount of it is incompressible. So you can't avoid it.



