
It's so unfortunate that this effort is still alive. The ACM canceled its involvement for excellent reasons which are worth reading: https://web.archive.org/web/20000815071233/http://www.acm.or...

It's probably also worth reading Dijkstra's assessment of the "software engineering" field (roughly coextensive with what the SWEBOK attempts to cover) from EWD1036, 36 years ago.

> Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot.".

https://www.cs.utexas.edu/~EWD/ewd10xx/EWD1036.PDF

The ACM's criticisms, however, are much harsher and much more closely focused on the ill-conceived SWEBOK project.

The IEEE's continued involvement calls the IEEE's own credibility and integrity into question—as do its continued opposition to open-access publishing and its recent history of publishing embarrassingly incompetent technical misinformation in IEEE Spectrum (cf., e.g., https://news.ycombinator.com/item?id=41593788, though there are many other examples). What is going on at IEEE?




Wanted to call out the specific requirements for what the ACM wanted out of their participation in creating a core body of knowledge (from the linked reasoning):

    * It must reflect actual achievable good practice that ensures quality consistent with the stated interest; it is not that following such practices are guaranteed to produce perfect software systems, but rather that doing so can provide reasonably intuitive expectations of quality.
    * It must delineate roles among the participants in a software project.
    * It must identify the differential expertise of specialties within software engineering.
    * It must command the respect of the community.
    * It must embrace change in each and every dimension of its definition; that is, it must be associated with a robust process for ensuring that it is continually updated to account for the rapid change both in knowledge in software engineering and also in the underlying technologies.
It then details exactly how SWEBOK fails to meet those (which all still seem to be relevant) and comes to the following scathing conclusion:

    Overall, it is clear that the SWEBOK effort is structurally unable to satisfy any substantial set of the requirements we identified for bodies of knowledge in software engineering, independent of its specific content.

I haven't read the SWEBOK but some spot checking and a review of the ToC seem to indicate they have not meaningfully taken that criticism into account.


The ACM's insistence on clearly delineated roles and specialties seems so bizarre and misguided. Having defined roles necessarily implies some rigidity in allowed process and organizational structure, which seems out of scope for an engineering body of knowledge.

If you insist on defined roles then you end up with something like Scaled Agile Framework (SAFe) or Large Scale Scrum (LeSS). Which aren't necessarily bad methodologies if you're running a huge enterprise with a complex product portfolio and need to get productive work out of mediocre resources. But not good approaches for other types of organizations. The SWEBOK, for better or worse, largely steers clear of those issues.


This is actually a general problem with the SWEBOK: it isn't an engineering body of knowledge. It's a set of management practices like the ones you describe. It doesn't steer clear of those issues, at all.


Only a small fraction of the SWEBOK covers management practices and it doesn't dictate any particular methodology. Competent engineers might not need to do management but they have to understand at least the basics of the management context in which they operate.


I agree with your second sentence, but your first sentence is pretty profoundly incorrect. Each of its 413 pages is divided into two columns. I generated a random sample of 10 page numbers associated with column numbers as follows:

    >>> import random
    >>> r = random.SystemRandom()
    >>> [(r.randrange(1, 414), r.randrange(1, 3)) for i in range(10)]
    [(299, 1), (164, 2), (292, 1), (246, 2), (205, 2), (113, 1), (167, 2), (393, 2), (16, 1), (129, 2)]
Page 299/413 column 1 contains: part of a confused description of mathematical optimization in the sense of finding infima, incorrectly conflating it with space-time tradeoffs in software, which is at least a software engineering topic; and the beginning of a section about "multiple-attribute decision making", which is almost entirely about the kinds of decision-making done by corporate management. Though software design is given lip service, if you dig into the two particular "design" approaches they mention, they turn out to be about corporate management again, with concerns such as brainstorming sessions, identifying business cost drivers, staff headcount, presenting ideas to committees, etc. Conclusion: project management, not software engineering.

Page 164/413 (6-12) column 2 is about corporate operations (for which telemetry can be used), corporate operational risk management, and automating operational tasks to improve corporate efficiency. Conclusion: project management, not software engineering.

Page 292/413 (15-3) column 1 is about software engineering economics, specifically proposals and cash flow. Project management, not software engineering.

Page 246/413 (11-12) is a table summarizing chapter 11, which contains both project management and software engineering elements. I'm going to eliminate this point from the sample as being too much work to summarize fully and too hard to avoid interpretation bias.

Page 205/413 (9-13) column 2 is about software engineering management issues such as the difficulty of estimation, the project risks posed by the rate of change of the underlying technology, metrics for managing software, and what software organizational engineering managers should know. Project management, not software engineering.

Page 113/413 (4-14) column 1 is about what a platform standard is, TDD, and DevOps. Mostly software engineering, not project management.

Page 167/413 (6-15) is another summary table page similar to page 246, so I'm eliminating it too.

Page 393/413 (A-5) column 2 is about the SWEBOK itself and the documents it draws from and contains no information about either project management or software engineering.

Page 16/413 (xv) is part of the table of contents, so I'm eliminating it as well.

Page 129/413 (5-12) column 2 is about random testing (software engineering), "evidence-based software engineering" (an utterly vapid section which contains no information about software engineering, project management, or anything else, as far as I can tell), and test cases that force exceptions to happen (software engineering). Conclusion: software engineering, not project management.

So of the seven non-eliminated randomly sampled half-pages in the document, four are about project management, two are about software engineering, and the seventh is just about the SWEBOK. I guess my declaration that it's just a set of management practices was incorrect. It's only mostly a set of management practices. But it's certainly not just a small fraction.


Your analysis doesn't support your claim. Just to point out one basic flaw, real engineering always has to account for financial realities including cash flow as a constraint or optimization parameter.

I don't think you even understand what software engineering is. If we want to limit the discussion to just software development as a craft and take out the engineering aspects then you might have a point, but that's not what the SWEBOK is about.

And in fairness, most real world software projects can produce good enough results without applying real engineering practices. If you're just building yet another CRUD web app then rigorous engineering is hardly required or even economically justified.


While I agree that "real engineering always has to account for financial realities including cash flow as a constraint or optimization parameter" and that, as I said, "Competent engineers (...) have to understand at least the basics of the management context in which they operate," that's no justification for replacing real engineering with project management in the curriculum, which is what the SWEBOK is attempting to do—as my analysis conclusively shows!

Contrast, for example, MIT's required courses for a degree in mechanical engineering (https://catalog.mit.edu/degree-charts/mechanical-engineering...): 13 required core subjects of which zero are project-management stuff; one course chosen from a menu of four of which one is "The Product Engineering Process" and another "Engineering Systems Design"; and two electives chosen from a menu of 22, of which three are project-management stuff. The core subjects are Mechanics and Materials (I and II), Dynamics and Control (I and II), Thermal-Fluids Engineering (I and II), Design and Manufacturing (I and II), Numerical Computation for Mechanical Engineers, Mechanical Engineering Tools, Measurement and Instrumentation, Differential Equations, and your undergraduate thesis.

Berkeley's equivalent is https://me.berkeley.edu/wp-content/uploads/2022/03/ME-Flowch..., with math courses, chemistry courses, physics courses, and engineering courses such as ENGIN 7 (Introduction to Computer Programming for Scientists and Engineers), ENGIN 26 (Three-Dimensional Modeling for Design), ENGIN 29 (Manufacturing and Design Communication, which might sound like a project management course but is actually about things like manufacturing process tolerances and dimensioning), MEC ENG 40 (Thermodynamics), and MEC ENG 132 (Dynamic Systems and Feedback). Again, as far as I can tell, there's virtually no project-management material in here. Project management stuff doesn't constitute one tenth of the curriculum, much less two thirds of it.

The software equivalent of Thermal-Fluids Engineering II, Differential Equations, or Thermodynamics is not, I'm sorry, proposals and cash flow, nor is it multiple-attribute decision making, nor is it corporate operational risk management.

The same holds true of chemical engineering (https://catalog.mit.edu/degree-charts/chemical-engineering-c...) or electrical engineering (https://catalog.mit.edu/degree-charts/electrical-engineering...) or basically any other engineering field except "systems engineering". In all of these courses you spend basically all of your time studying the thing your engineering is nominally focused on and the science you use, such as chemical reactions, thermodynamics, fluid mechanics, separation processes, algorithms, electric circuits, and the theory of dynamical systems, and very little time on HR, accounting, and project management.

That's because HR, accounting, and project management aren't real engineering, much as the SWEBOK tries to pretend they are.

Real engineering is a craft based on science, navigating tradeoffs to solve problems despite great intellectual difficulty, and that's just as true of software—even yet another CRUD web app—as of gears, hydraulic cylinders, electric circuits, or chemical plants.

See https://news.ycombinator.com/item?id=41918787 for my thoughts on what a real-engineering curriculum about software would include.


On a tangent here, but...

> The ACM canceled its involvement for excellent reasons which are worth reading: https://web.archive.org/web/20000815071233/http://www.acm.or...

This jumped out at me from the first para there:

" ... also stating its opposition to licensing software engineers, on the grounds that licensing is premature ... "

I wonder what ACM's current thinking on licensing software engineers is almost 25 years further on?


As much as I like Dijkstra and this particular article of his (it is an assigned reading in my "Software Engineering" class), developing any of the large-scale software we have today starting from formal methods is just a fantasy.

I understand the importance of learning formal methods (discrete math, logic, algorithms, etc.), but they are not nearly enough to help someone get started with a software project and succeed at it.

So, if not "software engineering", then what should we teach to a student who is going to be thrown into the software world as it exists in its current form?


Formal methods have advanced enough now that any competent software engineer should know that they are at least an option. It's obviously not practical or necessary to apply them everywhere, but in any sufficiently large piece of software there are likely a few modules where applying formal methods would allow for faster, higher-quality delivery at lower cost. Making those trade-offs and selecting appropriate approaches is the fundamental essence of engineering as a profession.


I mostly agree, although I differ on your last point.


Maybe if developing large-scale software starting with the formal methods we have today is just a fantasy—and that's plausible—we shouldn't be trying to formalize "software engineering". Imagine trying to formalize medicine before Pasteur, motor engineering before Carnot, mechanical engineering before Reuleaux, or structural engineering before Galileo. Today, we do have relevant bodies of formal knowledge that are enough to help someone get started with a project in those areas and succeed at it. As you say, that knowledge doesn't exist yet for software.

So what would you teach an architect in 01530 or a mechanical engineer in 01850? In addition to the relatively sparse formal knowledge that did exist, you'd make them study designs that are known to have worked, you'd apprentice them to currently successful master architects or mechanical engineers, and you'd arrange for their apprenticeship to give them experience doing the things people already do know how to do.


Since we’re talking Dijkstra, perhaps “structured programming” is a starting place.


> The ACM canceled its involvement for excellent reasons which are worth reading

Interesting, thanks for the hint; the paper is from 2000 though, and it seems it would need an update; I just checked e.g. the "roles" point and there seem to have been significant changes since then. I also think ACM has rather different goals than IEEE.

> It's probably also worth reading Dijkstra's assessment of the "software engineering" field

Well, was there anything or anyone that Dijkstra didn't rant about ;-)


Any suggestion for a handbook or compendium that you consider to be a worthy alternative?


The thing here is, this reads like a prissy textbook that no one can really disagree with but that still doesn't grip reality. More HR handbook than blood-red manual.

For example, project management. The book covers this but in the usual wrong-headed way, imagining there are executives with clear-eyed Vision who lay down directives.

This is of course not how most projects in most companies are started. It's a mess - reality impinges on the organisation, and pain and loss and frustration result in people making fixes and adjustments. Some tactical fixes are put in place, covered by "business as usual"; usually more than one enthusiastic manager thinks their solution will be the best, and a mixture of politics and pragmatism results in a competition to be the one project that will solve the problem and get the blessed budget. By the time there is an official project plan, two implementations already exist and enough lessons have been learnt that the problem is easily solved - but with sufficient funding all that will be abandoned and rebuilt from scratch, at such a furious pace to meet unrealistic expectations that corners will be cut, leading …

That manual needs to be written.


You know that you could be speaking about mining operations or building highways in your post rather than software and everything would apply the same?

I really don't see the argument against the book here in your comment.


There are three absolutely key differences here.

The first is that, if you get a four-year college degree in mining or civil engineering, you will not spend much of those four years studying management practices; you will spend that time studying geology, the mechanical properties of rocks and soil, hydrology (how water flows underground), and existing designs that are known to work well. You probably will not build a mine or a highway, but you will design many of them, and your designs will be evaluated by people who have built mines and highways.

The second is related to why you will not build a mine or highway in those four years: those are inherently large projects that require a lot of capital, a lot of people, and at least months and often decades. Mining companies don't have to worry about getting outcompeted by someone digging a mine in their basement; even for-profit toll highway operators similarly don't have to worry about some midnight engineer beating them to market with a hobby highway he built on the weekends. Consequently, it never happens that the company has built two highways already by the time there is an official project plan, and I am reliably informed that it doesn't happen much with mines either.

The third is that the value produced by mining operations and highways is relatively predictable, as measured by revenue, even if profits are not guaranteed to exist at all. I don't want to overstate this; it's common for mineral commodity prices and traffic patterns to vary by factors of three or more by the time you are in production. By contrast, much software is a winner-take-all hits-driven business, like Hollywood movies. There's generally no way that adding an extra offramp to a highway or an extra excavator to a mine will increase revenue by two orders of magnitude, while that kind of thing is commonplace in software. That means that you win at building highways and mining largely by controlling costs, which is a matter of decreasing variance, while you win at software by "hitting the high notes", which is a matter of increasing variance.

So trying to run a software project like a coal mine or a highway construction project is a recipe for failure.


And as a side note, this is why LLMs are such a huge sugar rush for large companies. The performance of LLMs is directly correlated to capital investment (in building the model and having millions of GPUs to process requests).

Software rarely has a system that someone cannot undercut from their bedroom. LLMs are one such system (whereas computer vision was all about clever edge-finding algorithms, LLMs are brute force, for the moment).

Imagine being able to turn to your investors and say "the laws of physics mean I can take your money, and some open-source nerd absolutely cannot ruin us all next month"


That's an interesting thought, yeah. But it also limits the possible return on that capital, I think.


You seem to have quite a bit of lived experience with that particular version of project management. Why not write it yourself?


Although any random bathroom-wall graffiti is better than the SWEBOK, I don't know what to recommend that's actually good. Part of the problem is that people still suck at programming.

“How to report bugs effectively” <https://www.chiark.greenend.org.uk/~sgtatham/bugs.html> is probably the highest-bang-for-buck reading on software engineering.

Not having read it, I hear The Pragmatic Programmer is pretty good. Code Complete was pretty great at the time. The Practice of Programming covers most of the same material but is much more compact and higher in quality; The C Programming Language, by one of the same authors, also teaches significant things. The Architecture of Open-Source Applications series isn't a handbook, but offers some pretty good ideas: https://aosabook.org/en/

Here are some key topics such a handbook or compendium ought to cover:

- How to think logically. This is crucial not only for debugging but also for formulating problems in such a way that you can program them into a computer. Programming problems that are small enough to fit into a programming interview can usually be solved, though badly, simply by rephrasing them in predicate logic (with some math, but usually not much) and mechanically transforming that into structured control flow. Real-world programming problems usually can't, but do have numerous such subproblems. I don't know how to teach this, but that's just my own incompetence at teaching. (A small worked example of the logic-to-loops translation appears after this list.)

- Debugging. You'll spend a lot of your time debugging, and there's more to debugging than just thinking logically. You also need to formulate good hypotheses (out of the whole set of logically possible ones) and run controlled experiments to validate them. There's a whole panoply of techniques available here, including testing, logging, input record and replay, delta debugging, stack trace analysis, breakpoint debuggers, metrics anomaly detection, and membrane interposition with things like strace.

- Testing. Though I mentioned this as a debugging technique, testing has a lot more applications than just debugging. Automated tests are crucial for finding and diagnosing bugs, and can also be used for design, performance profiling, and interface documentation. Manual tests are also crucial for finding and diagnosing bugs, and can also tell you about usability and reliability. There are a lot of techniques to learn here too, including unit testing, fuzzing, property-based testing, various kinds of test doubles (including mock objects), etc. (A minimal property-based check is sketched after this list.)

- Version tracking. Git is a huge improvement over CVS, but CVS is a huge improvement over Jupyter notebooks. Version control facilitates delta debugging, of course, but also protects against accidental typo insertion, overwriting new code with old code, losing your source code without backups, not being able to tell what your coworkers did, etc. And GitLab, Gitea, GitHub, etc., are useful in lots of ways.

- Reproducibility more generally. Debugging irreproducible problems is much more difficult, and source-code version tracking is only the start. It's very helpful to be able to reproduce your deployment environment(s), whether with Docker or with something else. When you can reproduce computational results, you can cache them safely, which is important for optimization.

- Stack Overflow. It's pretty common that you can find solutions to your problems easily on Stack Overflow and similar fora; twin pitfalls are blindly copying and pasting code from it without understanding it, and failing to take advantage of it even when it would greatly accelerate your progress.

- ChatGPT. We're still figuring out how to use large language models. Some promising approaches seem to be asking ChatGPT what some code does, how to use an unfamiliar API to accomplish some task that requires several calls, or how to implement an unfamiliar algorithm; and using ChatGPT as a simulated user for user testing. This has twin pitfalls similar to Stack Overflow. Asking it to write production-quality code for you tends to waste more time debugging its many carefully concealed bugs than it would take you to just write the code, but sometimes it may come up with a fresh approach you wouldn't have thought of.

- Using documentation in general. It's common for novice programmers to use poor-quality sites like w3schools instead of authoritative sites like python.org or MDN, and to be unfamiliar with the text of the standards they're nominally programming to. It's as if they think that any website that ranks well on Google is trustworthy! I've often found it very helpful to be able to look up the official definitions of things, and often official documentation has better ways to do things than outdated third-party answers. Writing documentation is actually a key part of this skill.

- Databases. There are a lot of times when storing your data in a transactional SQL database will save you an enormous amount of development effort, for several reasons: normalization makes invalid states unrepresentable; SQL, though verbose, can commonly express things in a fairly readable line or two that would take a page or more of nested loops, and many ORMs are about as good as SQL for many queries; transactions greatly simplify concurrency; and often it's easier to horizontally scale a SQL database than simpler alternatives. Not every application benefits from SQL, but applications that suffer from not using it are commonplace. Lacking data normalization, they suffer many easily avoidable bugs, and using procedural code where they could use SQL, they suffer not only more bugs but also difficulty in understanding and modification. (A short SQL-versus-loops sketch appears after this list.)

- Algorithms and data structures. SQL doesn't solve all your data storage and querying problems. As Zachary Vance said, "Usually you should do everything the simplest possible way, and if that fails, by brute force." But sometimes that doesn't work either. Writing a ray tracer, a Sudoku solver, a maze generator, or an NPC pathfinding algorithm doesn't get especially easier when you add SQL to the equation, and brute force will get you only so far. The study of algorithms can convert impossible programming problems into easy programming problems, and I think it may also be helpful for learning to think logically. The pitfall here is that it's easy to confuse the study of existing data structures and algorithms with software engineering as a whole.

- Design. It's always easy to add functionality to a small program, but hard to add functionality to a large program. But the order of growth of this difficulty depends on something we call "design". Well-designed large software can't be as easy to add functionality to as small software, but it can be much, much easier than poorly-designed large software. This, more than manpower or anything else, is what ultimately limits the functionality of software. It has more to do with how the pieces of the software are connected together than with how each one of them is written. Ultimately it has a profound impact on how each one of them is written. This is kind of a self-similar or fractal concern, applying at every level of composition that's bigger than a statement, and it's easy to have good high-level design and bad low-level design or vice versa. The best design is simple, but simplicity is not sufficient. Hierarchical decomposition is a central feature of good designs, but a hierarchical design is not necessarily a good design.

- Optimization. Sometimes the simplest possible way is too slow, and faster software is always better. So sometimes it's worthwhile to spend effort making software faster, though never actually optimal. Picking a better algorithm is generally the highest-impact thing you can do here when you can, but once you've done that, there are still a lot of other things you can do to make your software faster, at many different levels of composition.

- Code reviews. Two people can build software much more than twice as fast as one person. One of the reasons is that many bugs that are subtle to their author and hard to find by testing are obvious to someone else. Another is that often they can improve each other's designs.

- Regular expressions. Leaving aside the merits of understanding the automata-theory background, regular expressions are, like SQL, in the category of things that can reduce a complicated page of code to a simple line of code, even if the most common syntax isn't very readable. (A one-pattern example appears after this list.)

- Compilers, interpreters, and domain-specific languages. Regular expressions are a domain-specific language, and it's very common to have a problem domain that could be similarly simplified if you had a good domain-specific language for it, but you don't. Writing a compiler or interpreter for such a domain-specific language is one of the most powerful techniques for improving your system's design. Often you can use a so-called "embedded domain-specific language" that's really just a library for whatever language you're already using; this has advantages and disadvantages.

- Free-software licensing. If it works, using code somebody else wrote is very, very often faster than writing the code yourself. Unfortunately we have to concern ourselves with copyright law here; free-software licensing is what makes it legal to use other people's code most of the time, but you need to understand what the common licenses permit and how they can and cannot be combined.

- Specific software recommendations. There are certain pieces of software that are so commonly useful that you should just know about them, though this information has a shorter shelf life and is somewhat more domain-specific than the stuff above. But the handbook should list the currently popular libraries and analogous tools applicable to building software.
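
To make the predicate-logic point in the "How to think logically" item concrete, here's a small worked example of my own (not from any handbook): the interview-style question "does any pair of distinct elements in a list sum to zero?" can be phrased as "there exist i < j such that a[i] + a[j] == 0" and then transcribed almost mechanically into structured control flow, with each existential quantifier becoming a loop and the predicate becoming the test in the loop body.

    # Logical form: exists i < j such that a[i] + a[j] == 0.
    def has_zero_sum_pair(a):
        for i in range(len(a)):               # "there exists an i..."
            for j in range(i + 1, len(a)):    # "...and a j > i..."
                if a[i] + a[j] == 0:          # "...such that the predicate holds"
                    return True
        return False

    assert has_zero_sum_pair([3, -1, 4, 1])   # -1 + 1 == 0
    assert not has_zero_sum_pair([2, 7, 5])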
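
On the property-based testing mentioned in the "Testing" item, here's a minimal sketch using only the standard library; a real framework such as Hypothesis adds automatic input-generation strategies and shrinking of failing cases, but the underlying idea is just this (my_sort is a stand-in for whatever function you're actually testing):

    import random
    from collections import Counter

    def my_sort(xs):
        # Stand-in for the code under test.
        return sorted(xs)

    # Properties: the output is ordered, and it is a permutation of the input.
    for _ in range(1000):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        ys = my_sort(xs)
        assert all(a <= b for a, b in zip(ys, ys[1:]))
        assert Counter(ys) == Counter(xs)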
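
For the "readable line or two" claim in the "Databases" item, a minimal sqlite3 sketch (the table and data are invented for illustration): a single GROUP BY statement replaces the dictionary bookkeeping and nested loops you'd otherwise write by hand.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("alice", 30.0), ("bob", 5.0), ("alice", 12.5)])

    # One declarative statement instead of a page of procedural accumulation:
    query = "SELECT customer, SUM(amount) FROM orders GROUP BY customer ORDER BY customer"
    for customer, total in conn.execute(query):
        print(customer, total)   # alice 42.5, then bob 5.0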
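
And for the "Regular expressions" item, one small example of the page-to-line compression (the log format here is made up for the illustration): a single declarative pattern replaces a hand-written scanner full of index arithmetic.

    import re

    # Timestamp, level, and message from lines like
    # "2024-10-21 14:03:07 ERROR disk full".
    LOG_LINE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (\w+) (.*)$")

    m = LOG_LINE.match("2024-10-21 14:03:07 ERROR disk full")
    if m:
        timestamp, level, message = m.groups()
        print(level, message)   # ERROR disk full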


There are some people (such as the SWEBOK team) who would claim that software engineering shouldn't concern itself much with considerations like my list above. Quoting its chapter 16:

> Software engineers must understand and internalize the differences between their role and that of a computer programmer. A typical programmer converts a given algorithm into a set of computer instructions, compiles the code, creates links with relevant libraries, binds†, loads the program into the desired system, executes the program, and generates output.

> On the other hand, a software engineer studies the requirements, architects and designs major system blocks, and identifies optimal algorithms, communication mechanisms, performance criteria, test and acceptance plans, maintenance methodologies, engineering processes and methods appropriate to the applications and so on.

The division of labor proposed here has in fact been tried; it was commonplace 50 or 60 years ago.‡ It turns out that to do a good job at the second of these roles, you need to be good at the stuff I described above; you can't delegate it to a "typical programmer" who just implements the algorithms she's given. To do either of these roles well, you need to be doing the other one too. So the companies that used that division of labor have been driven out of most markets.

More generally, I question the SWEBOK's attempt to make software engineering so different from other engineering professions, by focusing on project-management knowledge to the virtual exclusion of software knowledge; the comparison is in https://news.ycombinator.com/item?id=41918011.

______

† "Binds" is an obsolete synonym for "links with relevant libraries", but the authors of the SWEBOK were too incompetent to know this. Some nincompoop on the committee apparently also replaced the correct "links with relevant libraries" with the typographical error "creates links with relevant libraries".

‡ As a minor point, in the form described, it implies that there are no end users, only programmers, which was true at the time.


I wrote:

> Code Complete was pretty great at the time.

Unfortunately it seems that Steve McConnell has signed the IEEE's garbage fire of a document. Maybe if you decide to read Code Complete, stick with the first edition.


Very interesting. Particularly their notion (paraphrasing) that SWEBOK attempts to record generally recognised knowledge in software engineering while excluding knowledge about more specific subdomains of software.

That over-deference towards general knowledge coupled with some sort of tie to a similar Australian effort probably explains why the software engineering degree I began in Australia felt like a total waste of time. I remember SWEBOK being mentioned frequently. I can't say I've gotten terribly much value out of that learning in my career.


I am guessing that you didn't get value out of it because you didn't work in avionics, medicine, defense, etc.? Those industries where a software fault is unacceptable and the software has to work for decades.

In some industries like avionics and medical instruments, the programmer might be personally held responsible for any loss of life/injury if it could be proven.

Having read Software Engineering and Formal Methods 25 years ago, I can say that the IEEE leans heavily towards treating SE as a profession.

It is not going to appeal to the crowd of enterprise developers doing Python, JavaScript, web development, etc.


> In some industries like avionics and medical instruments, the programmer might be personally held responsible for any loss of life/injury if it could be proven.

If you aren't a PE, it's hard to hold you personally responsible, even in avionics, unless they can show something close to willful, deliberate misbehavior in the development or testing of a system. Just being a bad programmer won't be enough to hold you responsible.


If your software kills someone (by mistake), personal guilt is a punishment one never completes.


The SWEBOK will not reduce the number or severity of software faults; it probably increases both.


>What is going on at IEEE?

The IEEE has been a worn-out, irrelevant relic of the past for at least 2 decades now.


> software engineering has accepted as its charter "How to program if you cannot.".

Is that supposed to be a negative? Isn't that the point of any profession? Like are any of these analogs negative?:

Medicine has accepted as its charter "How to cure disease if you cannot."

Accounting has accepted as its charter "How to track money if you cannot."

Flight school has accepted as its charter "How to fly if you cannot."


Yes, because those would describe, respectively, faith healing, spending money whenever you happen to have bills in your pocket, and levitation through Transcendental Meditation, rather than what we currently call "medicine", "accounting", and "flight schools".

"Software engineering" as currently practiced, and as promoted by the SWEBOK, is an attempt to use management practices to compensate for lacking the requisite technical knowledge to write working software. Analogs in other fields include the Great Leap Forward in agriculture and steelmaking, the Roman Inquisition in astronomy, dowsing in petroleum exploration, Project Huemul in nuclear energy, and in some cases your example of faith healing in medicine.


I really don't think he means "cannot" in the sense of "presently don't know how," but more categorically--along the lines of chiropractic being the profession for those who cannot cure the way an MD can. I think it's an indictment of hackery.



