I like this book as a means to present the ways people describe how software is developed. However, as I read I'm itching to annotate it with notes on "what really happens". I've worked in this business for a few decades now, in various locations and various sizes of companies, including a few that are household names. I have NEVER seen software built on the ground in exactly the ways one sees documented (documented anywhere, not just in this book).
A couple of things I don't see mentioned (apologies if they're in there, I haven't read every word):
1. Process does (and should) vary tremendously depending on factors including the organization size, organization maturity, market maturity, experience level of the people involved, budget, etc. The book seems to suggest that there's a one-process-to-rule-them-all.
2. Often there are significant unknowns about a project: unknown technologies, unknown market needs, unknown requirements. Being able to accommodate the unknowns, which can mean not expending effort trying to know something unknowable, is important. The book, I think, gives an impression of quite confident, smooth progress toward project completion that I personally have never observed.
Also: The value of Rubber Chickens is not mentioned...
Not so much for debugging, but the cat is often an Oracle. I remember sometimes when I started reading through a section in the documentation (probably the dfdd), the cat would come take a nap on the document (yes, I printed it; yes, I'd know if it had changed since, thanks to organizational inertia). I took it as a sign to stop.
Bear lives on a bookshelf at eye-level. Explain bugs to the bear, keeping eye contact, and you will be enlightened. Bear also does architecture design consulting.
Yes, it does mention these things. Perhaps not early enough? The "really happens"/disasters are towards the end of Part I, and there are a few project cartoons giving hints.
There's so much to cover, I started out with the formal stuff first. It seemed like a good order, but perhaps it is a bit off-putting at the beginning of the book.
I'll argue that, as a book for students, it works better going from formality to informality/real examples/exceptions. You need a base against which to compare ideas before explaining where a model differs in practice. That makes it easier to understand, and therefore to study.
Why are "really happens" and "disasters" associated together? I assume by "really happens" you mean the ways reality departs from what's in the book -- but when process varies over time, adapts to different personnel, business circumstances, tech capabilities, etc., instead of remaining rigid, like ironically rigid adherence to Agile despite variation in the above-mentioned characteristics, that is what leads to disaster.
I'd see strict adherence to any sort of one-size-fits-all recipe for workflow management as a greater sign of disaster than healthy compromise, accommodation for individual working styles, and respect for the human need for autonomy and self-direction.
Right. I'm not talking about bad outcomes in the grandparent. Far from it. I'm more interested in the fact that project participants will for example fabricate a fake manifestation of a process (fake[1] project plans, fake task lists, fake design documents) for the benefit of external folks asking them to "follow process". Meanwhile they get on with building the product "The Way we Actually Do it[2]", successfully. This phenomenon can be confusing because outsiders might conclude that the process led to success when in fact it was the presence of people who knew how to build software successfully that led to success and the perception of process was a mirage.
[1] By fake I don't mean fraudulent but rather something that has as its only purpose to meet the needs of the process enforcement. For example a requirements document that nobody reads after it has been written, or a design document written before the manner in which the problem will be solved is well understood (and is therefore never read). Project plans that somehow turn out to have underestimated effort by 3X...
> This phenomenon can be confusing because outsiders might conclude that the process led to success when in fact it was the presence of people who knew how to build software successfully that led to success and the perception of process was a mirage.
See, I think this is actually intentional. The "process" itself exists to give management a surface area of metrics they can datamine for their own political needs, more or less independent from the actual success of the project or team.
I liken it to creating Dutch books. With each layer of management you go up, the pressure to create Dutch books that provide you with blame insurance and protect your bonus -- crucially, diversified across all different business outcomes -- gets more intense, and the people who pull this off get compensated hugely and can often make leaps into C-level executive teams.
Lower level managers might not consciously know about this, and on one hand may actually think there's some truly redeeming reason to care about the frivolous, irrelevant metrics (e.g. Agile burndown). But the higher up you go, the more openly and unabashedly people operate -- they know the metrics are bullshit; they know that you know that they know the metrics are bullshit ... they don't care ... they are just getting on with their own business of political games to diversify away bonus risk and arrange for metrics-based blame stories for scapegoats.
It's not that it's confusing where the credit lies. It's that people know the credit does not lie with the formalized process at all but that they need that excuse in order to politically manage how they get their share of the credit.
Unfortunately, I think despite the OP's strong efforts to present the topic in a useful, taxonomic way, the sad thing is that treating formalized process with this degree of veneration at all simply perpetuates the political system that needs inexperienced developers to roll over and play dead to the system.
> This phenomenon can be confusing because outsiders might conclude that the process led to success when in fact it was the presence of people who knew how to build software successfully that led to success and the perception of process was a mirage.
Yes, that was just an off-the-cuff remark, not completely accurate.
The book does take the time to provide alternates at many points. It should explain "what really happens," but it first does that for waterfall, then spiral, and agile, etc. If you are reading the waterfall part, for example, your "really happens" is going to be different if you are currently working under Agile. To avoid info-overload I've tried not to explain everything at once, but of course that means valid points have to wait until later.
+1 million for pointing out that process/workflow can and should vary according to many factors.
Watching an organization stress in vain to assert a one-size-fits-all approach, such as letter-of-the-law Agile, is unpleasant, especially once the political games set in. Even worse to be on the team and actively shunted from being able to do your job because of all the boilerplate, parochial adherence to cargo-cult process.
It does mention choosing the "right tool/process for the job" about five times, but I will take the idea to first class status under the intro chapter.
I think the point is that people should not start life by constraining their choice to some menu "Agile or Waterfall or ..." -- once you do that you're already dead.
Instead, do whatever works organically for your situation. Don't frame it as picking whichever established, brand-name process (or, worse, some shiny new thing) -- that automatically casts the decision as if it has to be limited to a choice between a few specific (and all bad) options.
To be clear, there's a difference between saying "do whatever works organically based on the human properties of your team and your situation" vs. "do whichever one of these 2 or 3 big box things you can maybe lobby for."
It's like saying, "Here's a first course on how democracy works ... you've got Republicans and Democrats so pick one of them that suits you better." It teaches you to think that democracy (analogue to the ideal of a successful dev process) is intrinsically delimited into certain categories like Republic, Democrat, etc. (analogue to Agile, Waterfall, ...).
I think it's better that young impressionable minds are more open and not made to think in such constrained, delimited ways right from the start (even if real life will beat it into them later on).
Hmm, doing "whatever works" is fine when you know what the "what" is. The early chapters talk about requirements, design, quality, etc, which is what I'd believe the student needs to understand. Before the discussion happened here, the first mention of waterfall or agile was in chapter 7.
Sometimes you need to spec requirements out carefully. Other times you need to dive in, prototype, make mistakes, and revise. Sometimes you need to research design tradeoffs for a month before beginning a single item of implementation, and absolutely none of that research can fit into anything like a "sprint" structure. Other times you need to neatly divide up a known or perpetual task into points-based subtasks for tracking, planning, and estimation.
All these things vary all the time. When I say "whatever works" what I mean is no one knows what will work for your team. No textbook will know that. It might consist of some composition of management patterns from some books, but often it will not, and you need to develop an eye for deviating from that. There is no recipe. There is no cluster of things that all the good companies had in common -- they all did (and still do) things very differently. Amazon treats people like shit and hammers on them -- and delivers good products. Google forsakes some income opportunities to selectively uphold their "don't be evil" mantra. Apple puts usability ahead of everything. Microsoft puts enterprise saleability ahead of everything.
All these different value systems, management systems, workflow monitoring processes, etc., etc., have places and times when they'll work. If you're stuck thinking in terms of rigid structures, you won't see it, and it truly can make a big difference.
Every job I've ever left, it has been because the process and experience of being managed in that role led to burnout, even after talking openly with managers about what wasn't working. I'm (perhaps foolishly) a very loyal guy. I'd love nothing more than to stay at a company for a long time and get in a groove, even if I was leaving money on the table by doing so. But I can't seem to find any organization that values human-affirming management concepts over and above ritualistic adherence to bureaucratic process.
Anyway, I just see a lot of "the road to hell is paved with good intentions" in this. If you are not already familiar, you may enjoy reading Moral Mazes, because I think it's impossible to approach the task of software management without first approaching the generic task of human management and how it fits into bureaucratic machines.
"In every job I've ever left from it has been because the process and experience of being managed in that role had led to burnout even after talking openly with managers about what wasn't working. I'm (perhaps foolishly) a very loyal guy. I'd love nothing more to stay at a company for a long time and get in a groove, even if I was leaving money on the table by doing so. But I can't seem to find any organization that values human-affirming management concepts over and above ritualistic adherence to bureaucratic process."
1000x this!!!
I do exactly the same thing. For example, I'm going to leave soon because the development process is so crippled that it just makes me want to cry. I tried to improve it and talked to management, but nothing changed. I started to feel a void inside, and I realised that there was nothing I could do except leave.
Author here. Writing this book was one of the hardest things I've ever done, and there is seemingly no end to the small issues I've faced (both writing and technical). Would appreciate some feedback to make it as good as it can be. AMA, thanks.
The separation of design and construction into phases is a hangover from civil engineering. It has the baked-in assumption that the design phase is relatively cheap, short, and somewhat unpredictable, while the construction phase is expensive, long, and predictable. The root problem is the assumption that specifications can be validated for correctness, like a blueprint for a bridge can. Nothing could be further from the truth. This is a persistent myth in software development.
Yes, it is often (mostly?) true, but not always. In the book I've tried to not give the idea there is only one way to do things. Even so, a few have commented that they've got that impression.
Also, from a practical standpoint, both the design and construction chapters are huge, so combining them doesn't seem to be a good idea. Perhaps I could add your warning though, if you don't mind me quoting you.
I upvoted your comment even though (as a generalization) I disagree that these chapters should even exist as such. If we were truly honest about the civil engineering metaphor as it applies to software, the design of a system is indeed the source code. The construction phase is done by the compiler. So your construction chapter should be about compilers and interpreters. Your design chapter should be about the explication of the stakeholder's intent in the form of human-readable code: the source code itself. What do you think?
Hmm, you've blown my mind. I've used traditional metaphors to describe and organize, such as the SDLC and topics like construction as defined by books like Code Complete. In other words, about/around these topics, not strictly limited to their concrete form or process.
I suppose it isn't the only way to look at things, just common.
Though we are approaching the philosophical realm at this point… reminds me of the section of Philosophy class where you learn to question if you can even trust your own senses. I'm not sure, however this is a book for beginners, and I sit on the shoulders of those that came before. Not sure I'm qualified to reimagine software engineering from the ground up as you describe. If you write that book, I'd read it!
Thanks, don't do it! Unless writing comes to you easily? The most depressing thing is "finishing" the book and then having to spend a month hacking css and build scripts to work around kindle bugs and the fascists at the ibooks store.
In my world (enterprise software), the design phase is usually the most expensive phase, as it tends to be staffed with expensive architect/designer/technical lead-level folks.
Except a design that doesn't change after a certain arbitrary phase of a project simply guarantees your best outcome will be to deliver what the customer said they wanted once upon a time.
> The root problem is the assumption that specifications can be validated for correctness, like a blueprint for a bridge can.
We can validate specifications for correctness with automated theorem proving! The least we can do is model-checking. At least for the hard algorithms and core system interactions. We've had this ability for decades.
The problem with software development is that we get all hand-wavy about "architecture," and "design." I try not to physically groan when someone starts sketching out a box diagram. They're fancy pictures, sure, and useful at a conceptual level... but the reality is that the software will look nothing like that diagram. And there's no way to check that the diagram is even correct! Useless!
It doesn't have to be painfully slow, burdensome, and expensive to employ these tools either. Contrary to popular belief it doesn't take a team of highly trained PhD physicists to build a formal mathematics for modelling computations these days. There are plenty of open-source tools for using languages like TLA+ that work great besides! Ask Amazon -- they published a paper about it.
Drawing a picture is a perfectly valid and legitimate start. However I think there are many problems where a picture is not sufficient and it's necessary to have good, reliable mathematics to sketch out our designs and prove properties of them.
A sketch is still useful in the absence of anything else. But we don't build bridges and skyscrapers that way, so why is it good enough for our data-centres and applications?
I don't think lives have to be on the line in order for software to be considered harmful if not built correctly. There are plenty of problems where having a thoroughly checked design will save you plenty of headaches and afford you many more opportunities.
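To make that concrete, here's a toy sketch of what a model checker actually does under the hood -- exhaustively exploring every reachable state of a small system and checking an invariant in each one. It's in Python rather than TLA+, and the two-process lock protocol is made up purely for illustration:

```python
# Toy explicit-state model checker: explores every reachable state of a tiny
# two-process lock protocol and checks a mutual-exclusion invariant.
# A scaled-down stand-in for what a real checker like TLC does.
from collections import deque

def transitions(state):
    """Yield every state reachable in one atomic step."""
    pcs, lock = state
    for i in (0, 1):
        if pcs[i] == "idle":
            yield (pcs[:i] + ("want",) + pcs[i+1:], lock)
        elif pcs[i] == "want" and lock == 0:        # atomic test-and-set
            yield (pcs[:i] + ("crit",) + pcs[i+1:], 1)
        elif pcs[i] == "crit":
            yield (pcs[:i] + ("idle",) + pcs[i+1:], 0)

def mutual_exclusion(state):
    pcs, _ = state
    return not (pcs[0] == "crit" and pcs[1] == "crit")

def check(initial, invariant):
    """Breadth-first search over the full state space."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state                            # counterexample found
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                                     # invariant holds everywhere

if __name__ == "__main__":
    bad = check((("idle", "idle"), 0), mutual_exclusion)
    print("counterexample:", bad) if bad else print("mutual exclusion holds")
```

Real tools add far better state-space reduction, temporal-logic properties, and fairness handling, but the mental model really is that simple.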
> However I think there are many problems where a picture is not sufficient and it's necessary to have good, reliable mathematics to sketch out our designs and prove properties of them.
It's necessary to have a formal language to describe this design. Such a formal language needs to have one and only one interpretation. We should be able to take a document written with that formal language and verify that it meets our assumptions. We should also be able to verify that the final implementation is isomorphic to the design.
Luckily such tools are ubiquitous. You can use whatever programming language you like to specify your design. If the programming language is poorly defined, you must also decide what build system and runtime environments you will support ahead of time.
You can document your assumptions using the same programming language. This is made easier if you use a "test framework", but if needed you can also roll your own. You can validate your design by running the tests against your design. Normally it is best to break up your design into "units" that make it more clear how to validate your assumptions. You can then add some extra validation that the composition of "units" transitively validates your assumptions.
You can verify that your final product adheres to that design because there will be a 1:1 mapping from the design to the final product (i.e. the final product will be implemented using the same code as your design). There are wonderful tools that will tell you the "coverage" of code used in the final product that has been validated against the assumptions.
Finally, you can even specify your requirements using the same formal language and verify that the design meets the requirements. Normally you do this by validating that the "integration" of "units" meets your assumptions. This should not be done without individually validating the assumptions on the "units", though, because the "integration" can lead to exponentially growing alternatives. Normally it is infeasible to validate each alternative.
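A minimal sketch of what I mean, for the record: the "design" is just ordinary code and the "assumptions" about a unit are written down as tests. The function and its assumptions here are invented for illustration:

```python
# The "design" is ordinary source code; the "assumptions" are executable tests.
import unittest

def allocate(items, workers):
    """Design: round-robin items across workers."""
    buckets = [[] for _ in range(workers)]
    for i, item in enumerate(items):
        buckets[i % workers].append(item)
    return buckets

class AllocateAssumptions(unittest.TestCase):
    def test_every_item_lands_somewhere(self):
        buckets = allocate(list(range(10)), 3)
        self.assertEqual(sorted(sum(buckets, [])), list(range(10)))

    def test_load_is_balanced_within_one(self):
        sizes = [len(b) for b in allocate(list(range(10)), 3)]
        self.assertLessEqual(max(sizes) - min(sizes), 1)

if __name__ == "__main__":
    unittest.main()
```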
Yes, this reply is tongue in cheek, but I am not ignorant of formal specification methods. They have their place. That place is currently not in a professional software development shop. We have better methods at the moment. Possibly formal specifications methods will improve to the point where we can reasonably use them, but we aren't there yet.
> Possibly formal specifications methods will improve to the point where we can reasonably use them, but we aren't there yet.
I disagree. Amazon has had great success employing TLA+ in finding bugs, testing design changes, and chasing aggressive optimizations [0].
Perhaps it is because there are myths that are still floating around regarding formal methods that still make developers cringe when they hear mention of them [1].
Nonetheless, I couldn't find reference to it in the book... did I miss it?
And besides... unit tests, I'm sure you are aware, aren't good enough alone. They can only increase your confidence in an implementation but they prove nothing.
If we want to start calling ourselves engineers I think we better start looking to how the other engineering disciplines operate. I don't think it will be long before an insurance company comes along and finds actuaries capable of monitoring risk in software development projects and liability becomes a necessary concern.
I had to think about it for a bit, but you are right. My statement was waaay too general. Especially for communication protocols, formal specifications are useful now. I still think we have a long way to go before we will be using tools like this every day. The main issue is whether or not it is easier to reason about the correctness of something by inspecting its design or its implementation. We can never prove that the design is correct, only that it is isomorphic to the requirements and the implementation. If there is only one document, the implementation, then the proof of isomorphism is trivial. That was my point. I personally believe that user-facing behaviour is easier and clearer to express with tests (either integration or unit), and I really don't expect that to change in the near future.
Anyway, I regret the tone of my previous message, which mostly made me look foolish, and thank you for your kind response.
I look for the critical bottlenecks in the application where the cost of failure or mistakes is fairly high. This is where I feel I will get the most "bang for the buck" so to speak.
You don't have to specify the entire application to get the benefit of high level specifications. Even specifying the protocol between two communicating channels can bring benefits.
There's literally millions in commercial and academic activity around it. Maybe tens of millions. It's not dead yet despite me wishing it never had life to begin with haha.
I've seen plenty more work on UML as academics & commercial vendors are still all over it. I couldn't find an example for "Processing" because they picked the stupidest name possible: a word so overused I'm getting results from the food industry, compilers, IRS, and computers all at once.
UML specs them at an abstract level. You build them in code or HDL's. Few formal methods can do both. Generally, you prove properties in the abstract in one and correspondence with the other. Code-level proofs only came online in the past 5-10 years or so, in a subfield 40 years old.
So, UML would let you specify data and behavior then confirm properties about them, catch inconsistencies in requirements, or aid integrations. SysML is used for this in industry, with verification results in academia even for UML as I showed. So, it's reality rather than theory even if you or I think better methods exist. I'll take a combo of Z, B, CSP, and/or Statecharts over UML any day. Coq and HOL if I was specialist enough.
Not saying you're wrong: at the end of the day, the degree to which any of these methods (UML, TLA+, whatever) formally prove correctness is the degree to which they gain the features of a Turing-complete programming language. Which comes back to the notion that design is the source code itself and represents the majority of the work. The compiler or interpreter is the "construction" phase of the project.
Formal validation is perfect for when you know exactly what problems you're solving. Unless a business is offering a very specific solution like that, then it won't help them. If you're writing security software or low-level system controls or something. But if your customers aren't actually engineers themselves, formal validation is unnecessary and inefficient.
>There are plenty of open-source tools for using languages like TLA+ that work great besides!
Does your team know TLA+? Is it an efficient use of their time to learn it? Are a bunch of TLA+ beginners going to crank out properly written software?
They're good even for exploration if you're using formal specs. Reason is good ones knock out errors due to English ambiguity and inconsistencies. Z, Navy's SCR, B method, and Statecharts were all used for these purposes with success. Executable specifications did even better for the exploration aspect.
Current research is taking it further with code generation from specs like AADL, UML, and especially SCADE/Esterel.
> Formal validation are perfect for when you know exactly what problems you're solving.
Or you're at the point where formal validation is a requirement.
There's nothing stopping one from using any of these tools as design tools. I'm working on a problem to check whether items delivered by a single producer on a FIFO queue with N workers can guarantee all work items will eventually be attended to. I could just write the code for that but I'll never prove the system works that way. The best one can do is gain confidence that for the prescribed scenarios it will work. You can get good coverage and use all of the tools we know to release something others may decide to use... but then you'll be fixing your errors after the fact when they are discovered and reported by your users. Or in the case of a system as large as AWS you may find that obscurity is no longer a comforting buffer... 1 in 100000 becomes a frequent occurrence.
update: There's nothing preventing you from only using formal methods on the critical parts of your system where a high degree of reliability is useful. One does not need to formally verify everything to gain the benefits.
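For a feel of what even a crude, bounded version of that check looks like, here's a brute-force sketch in Python. The model is heavily simplified (workers are interchangeable, so a single "some worker dequeues" step stands in for all N of them), the names are made up, and unlike a real model checker it says nothing about fairness or the unbounded case -- it only shows the shape of the exercise:

```python
# Bounded brute-force check: enumerate every interleaving of "producer
# enqueues" and "some worker dequeues" for a few items, and confirm that
# every run ends with all items attended to.
ITEMS = 3

def steps(state):
    produced, queue, done = state
    if produced < ITEMS:                        # producer enqueues next item
        yield (produced + 1, queue + (produced,), done)
    if queue:                                   # some worker dequeues the head
        yield (produced, queue[1:], done | {queue[0]})

def explore(state):
    successors = list(steps(state))
    if not successors:                          # terminal: nothing left to do
        _, queue, done = state
        assert not queue and done == set(range(ITEMS)), state
        return
    for nxt in successors:
        explore(nxt)

if __name__ == "__main__":
    explore((0, (), frozenset()))
    print("every interleaving ends with all items attended to")
```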
> But if your customers aren't actually engineers themselves, formal validation is unnecessary and inefficient.
I think it depends on the difficulty of the problems you address with your software. If you're just sorting some lists or making system calls you might not want to bother. However if you want to guarantee consensus in the face of delays and partitions you'll need more than code to make any strong claims of efficacy. And if the public interest relies on your system I think it's necessary to serve them in the best capacity using the state of the art tools.
> Does your team know TLA+? Is it an efficient use of their time to learn it?
No. They could pick it up in a couple of weeks if necessary. It is an efficient use of their time: fewer bugs at scale means less downtime and less time spent chasing errors that slipped through.
> Are a bunch of TLA+ beginners going to crank out properly written software?
Are a bunch of beginner programmers going to crank out properly written software? There are different levels of experience and skill on any team. With training and diligence even beginners can learn to adopt the skill and ability to recognize when and where to use high level specifications and how to abstract systems into mathematical models they can test and prove.
Until then I suppose we have to live in a world where security breaches are common and the recall rates on cars will continue to increase as unreliable software continues to cause failures, cost lives, etc.
> I could just write the code for that but I'll never prove the system works that way.
Or you could just prove from first principles, like we all learn in formal algorithm design theory.
The question is not whether proofs are valuable, the question is whether translating a system into TLA and using that proof checker is more reliable and saves you time over just attempting a proof directly.
Even if your TLA proof checks out, it may be a false positive because it doesn't accurately reflect your production code.
By all means write your proof whichever way you see fit.
What TLA+ and other languages give you is an automated theorem prover or model checker that usually is built on some form of temporal logic and predicate calculus. This is especially good at modelling multiple communicating processes. Taking a logical approach to discrete maths has a few benefits here.
> Even if your TLA proof checks out, it may be a false positive because it doesn't accurately reflect your production code.
This is something one needs to be concerned with when writing high-level, abstract specifications. There's no way yet that I know of where we can synthesize the program from the specification although I am aware of research in that regard. However we can still gain the benefit of well-defined invariants and pre/post-conditions that we can use in testing our implementation. For now your implementation will be separate from your spec but you gain insights from the spec you would not otherwise if you had only written code.
update: as to whether it is an efficient use of time to use a model checker or automated theorem prover... well, I think the reason for their invention was specifically to handle the tedious task of proofs in formal maths. Some operations are tedious and mechanical, and computers are better suited to the task than humans.
One area I find interesting is languages with dependent type systems like Agda and Idris. It seems like we're not too far away from being able to model and prove our specifications directly in the type system alongside the program that implements it.
Only have skimmed it so far, but it looks great. Will look more soon. Off the bat though, I was looking at it on Amazon and the 'look inside' seems to scrunch up some of the text. Not sure if this is something an author can do anything about, but wanted to make you aware. I'm viewing this on Windows 10, Chrome 51, normal zoom level, and with minimal plugins. Screenshot: http://imgur.com/uXuELxv
Yes, I've seen this :-( thanks. Also Amazon currently doesn't handle the .svg images correctly; I have no solutions unfortunately, despite lots of googling. Does anyone have any tips?
Btw, besides the .svg problems, the kindle version looks fine otherwise on my iPhone, it is not squished, if that helps any.
First off great job, it looks like you put a bunch of hard work into this. One area for improvement is the images. Currently some of the more information dense ones are hard to read at small sizes and clicking on them does nothing. Perhaps expand them to full size on a click?
I had to back this out because of a conflict with the filesystem layout and strict epubcheck by itunes. :( Sorry. I did expand the dense image in the introduction to 100% width, which hopefully helps.
Wow - nice job! I just skimmed it, but it looks great - will read more later. With all the superficial, error-ridden, and rambling books being published these days (packt, etc.), it's a pleasure to see something information-dense, accurate, and also high-quality.
Yes, the version on iTunes is an epub. But they are not happy with the links to Amazon so I will have to remove them and put it back up. That will take a day or two, sorry.
A pdf of the full book I'm not sure about yet, should I be concerned about copying? I'm new to authoring.
DISCLAIMER: the following is my personal belief, and I don't have more than a few anecdotes to back it up.
You shouldn't be concerned about copying no matter the format. Almost anyone who gets a copy of your book illegally is someone who would not have read your book at all if he couldn't have gotten it for free. And with these people, you are better off if they do get it; they may share it with someone who will become a customer, or they themselves may become customers in the future, when they learn that they like what you do and/or their purchasing habits change.
Gumroad has a feature called PDF stamping [1] that puts the buyer's email address on the PDF. It's not perfect security, anyone who wants to ignore your copyright probably will, but it might deter thoughtless sharing.
Also consider a team/company license priced at some multiple (15x?) that will allow a manager or lead to buy your book for their company/team without having to worry about violating copyrights or managing buying the book for every employee.
Yes, I did see leanpub and liked it, but they didn't support sphinx, which is what I knew at the time. Didn't know that they took a complete .epub; will put that on my list.
I'm from Indonesia. A lot of countries are excluded from getting paid contents on iBook (e.g. India, Malaysia, Singapore, Hong Kong, Korea, UAE, etc): https://support.apple.com/en-us/HT204411
Yes Calibre can read .epub, and there is a decent extension for Firefox I've seen.
However, I'm not yet making the epub available. The book still needs a lot of work. When it gets closer to completion, I will probably make standalone files available.
Yes, it tries to be useful for all types of software. Iterative/incremental is covered in chapter 7 under "gradual development." CI/CD under Construction and Quality.
Also, to help understand the new, it helps to know the "old." At least that's what I thought. A number of people are mentioning it, so perhaps I shouldn't have ordered it that way.
Thanks for writing and sharing this. It's not something I dived into deeply so feel free to ignore my opinion. But the main thing I feel this book lacks is a "show don't tell" mentality. I haven't read the content close enough to judge how insightful the technical nuts and bolts are. But one thing I learned only after working in software dev is the human aspect behind the pace and rhythm and success of projects. It's not just code and processes, but why such processes were implemented in the first place.
I'm violating my own principle, so I'll give an example: the book Enterprise Rails opens with a chapter titled "The Tale of Twitter". Here's an excerpt:
> Because Twitter was the largest, most public Rails site around, its stumbles were watched carefully, and the steps Twitter took to alleviate its scalability issues were thoroughly documented online. In one instance, the database was becoming a bottleneck. In response, Twitter added a 16 GB caching layer using Memcache to allow them to scale horizontally. Still, many queries involving complex joins were too slow. In response, the Twitter team started storing denormalized versions of the data for faster access. In another instance, Twitter found its use of DRb, a mechanism for remote method invocation (RMI), had created a fragile single point of failure. It replaced DRb with Starling, a distributed messaging queue that gave it looser coupling of message producers and consumers, and better fault tolerance.
> It is of no small significance that Twitter’s engineers chose to absolve Rails of being at fault for their problems; instead of offloading the blame to an external factor, they chose to take responsibility for their own design decisions. In fact, this was a wise choice. Twitter’s engineers knew that reimplementing the same architecture in a different language would have led to the same result of site outages and site sluggishness. But online rumor mills were abuzz with hints that Twitter was planning to dump Ruby and Rails as a platform. Twitter’s cofounder, Evan Williams, posted a tweet (shown in Figure 1 ) to assure everyone that Twitter had “no plans to abandon RoR.”
It's not that every chapter should open up with a jaunty Malcolm Gladwell-esque tale about the life and loves of professional development. But some of your assertions could be made more compelling with some real-world examples:
> As a student of computer science and programming, you’ve learned a significant portion of what you need to know as a rookie professional. The most difficult parts perhaps, but far from the “whole enchilada.”
There's nothing wrong with that statement. But there's not much to it besides filler that students have been told for their entire college education. You yourself must have a few personal examples of what the first week of work taught you that 4 years of college didn't. And/or, you may remember a few interns who, despite their college pedigree, found themselves in completely over their heads. Just even a couple of sentences showing how you came to learn the wisdom you now dispense goes a long way.
Anyway, sorry for the extended critique. I am obviously skipping over the part above to how damn hard it can be to find compelling stories :)
Nope, I love this comment and lol'd at the "gladwell-esque" part.
Keep in mind though I'm not a professional writer, far from it, a hack basically. It took untold suffering to get here, where "there's nothing wrong with this statement." Because believe me there were ten wrongs and ten revisions beforehand to get to this point.
That said, I do have a number of stories included in the text, though unfortunately fewer than five. I will keep a todo item to include more, but honestly I'll never be able to produce suspenseful text like the above. Maybe if it makes some money I'll be able to hire a pro co-author.
Continuous delivery is orthogonal to any particular management ideology. There's nothing inherent to Agile that relates it to continuous delivery. You could interpret the Agile principles for the sake of delivering once a year releases if you wanted.
And many firms do continuous delivery for very critical products and services without using Agile nor anything even remotely like Agile.
In fact, the only parts of it orthogonal to continuous delivery are
5. Projects are built around motivated individuals, who should be trusted
6. Face-to-face conversation is the best form of communication (co-location)
9. Continuous attention to technical excellence and good design
10. Simplicity—the art of maximizing the amount of work not done—is essential
11. Best architectures, requirements, and designs emerge from self-organizing teams
12. Regularly, the team reflects on how to become more effective, and adjusts accordingly
And aside from (maybe) embracing remote work, I don't see those things going away anytime soon. I certainly wouldn't want to work somewhere that rejects them.
We could stand to lose the name, but probably not the ideas.
What if the client asks you to give them a product once per year? Does Agile recommend telling them no, turning down their money, and replying, "Sorry, but Agile says I have to deliver the product continuously"?
The two words "continuous delivery" mean to deliver something in such a way that the customer doesn't experience gaps between the release of improvements, upgrades, fixes, additions, or desired changes, so that they are not staccato changes at major discrete instances.
Crucially, the customer, not you, gets to decide what "staccato changes" and "major discrete instances" means to them.
If the customer says to you, "Receiving these changes any faster than once a year does not help me" then "continuous delivery" for you, in that case, does not mean the same thing as the modern buzzword ideology of continuous delivery.
Nonetheless, you could still use an Agile process in that scenario. I wouldn't recommend it though.
In everything I've seen, heard, and experienced re: Agile, the core of it is "lots of small, fast waterfalls."
If a client wants to invent their own meanings for words which diverge significantly from those generally accepted in the community, I'm going to be extremely concerned about our ability to communicate effectively, and take a hard look at whether the risk/overhead of the minefield lurking in our vocabularies is worth the money.
If a customer thinks shipping once a year is "agile continuous delivery" they are wrong, just like if a car on the freeway thinks "35mph is fast enough" he is wrong. I mean, for him, sure, but not when attempting to interact with others cooperatively.
Though I'd probably humor anyone's belief of anything for enough money. And while I'd certainly prefer to work on a project that's actually doing agile, I don't doubt that under some circumstances it's more appropriate to do a traditional waterfall (and call it that).
> In everything I've seen, heard, and experienced re: Agile, the core of it is "lots of small, fast waterfalls."
Not every problem is decomposable that way, and one of the major failure modes of Agile is when you see people trying to shoehorn problems that can't be decomposed like that down into two week sprints.
> If a customer thinks shipping once a year is "agile continuous delivery" they are wrong, just like if a car on the freeway thinks "35mph is fast enough" he is wrong. I mean, for him, sure, but not when attempting to interact with others cooperatively.
Not all cars are on freeways. This analogy borders on absurd.
There are many businesses where infrequent software updates make tons of sense. For example, if you work with field deployed hardware that is not connected to a network. I worked with hardware like that in some defense situations before.
Submitting updates to the actual devices made no sense whatsoever except on an infrequent basis. The devices could not be connected to the internet for security reasons.
If you delivered software to a firm like that, and took the cocksure attitude that you seem to know better than the customer, you'd rightfully lose their business for reasons of poor software practices.
Man, the dogma of Agile is just so frustrating. It really gets me down.
>one of the major failure modes of Agile is when you see people trying to shoehorn problems that can't be decomposed like that down into two week sprints.
Then these projects are poor fits for Agile, and the appropriate solution is to use something else.
>There are many businesses where infrequent software updates make tons of sense. For example, if you work with field deployed hardware that is not connected to a network. I worked with hardware like that in some defense situations before.
Great! Then these are appropriate places to not use Agile.
That doesn't mean you can define Agile to be "whatever is most appropriate in this situation." It's a tool in the toolbox, and sometimes it's the wrong one, but when you do reach for something else you owe your fellow craftsmen the courtesy of calling it by the correct name.
> Great! Then these are appropriate places to not use Agile.
This is incorrect. You can use Agile for these problems, some organizations already do, and they do not violate any principle regarding continuous delivery by using Agile in these situations.
To be clear though, I feel Agile (or any fixed, one-size-fits-all prescriptive methodology for that matter) is always a bad choice, for any project.
However, trying to act like the words "continuous delivery" have a fixed, unchangeable meaning that never varies by the context of customer delivery targets is simply and unequivocally incorrect.
Agile isn't take it or leave it. You can adopt half the principles and be better off than none. You can adopt one of them. No one will send you to jail. I've seen plenty of places run their process through sprints, but based off of a comprehensive BRD written up front.
Three of the four agile values from the manifesto are:
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
All of those strongly point towards continuous delivery being part of agile. Continuous delivery is also a given in XP, one of (if not the) founding agile methodology.
I think you need to dilute agile quite a lot to release once a year, although I daresay you can.
It's amusing to me that the first thing people thought to do was go look up the letter of the Agile law, as if that could possibly have any bearing on this. Such a strong indication of what an empty cult Agile really is.
I don't understand. The principles of agile are, to me, what agile is, not the cultish methodologies.
How would you define agile?
From your other comment, it seems like you're defining it as whatever is appropriate for the project. I don't disagree with that sentiment at all, but it does make the word rather pointless.
It's defined by whatever practices emerge through its usage. Which ends up being a big stew of political dysfunction, time-wasting meetings, and pointless metrics.
I feel it is a classic No True Scotsman fallacy to say that "any real agile implementation" does this or that, but all of these "false" Agile implementations lead to the dysfunction.
In a book like this, I'd like to see cost estimation mentioned earlier than Chapter 8. I'd argue that in nearly all software development projects (particularly the ones where this sort of book applies) the cost of implementation is one of the key factors in evaluating requirements. If you put together your requirements without considering their cost, you end up with a project that's too expensive to build, and you want to know that during the Requirements Gathering phase, not later on during construction. Even Agile projects need to start out with a rough idea of what's being built and a rough idea of what it'll cost to build it. Otherwise how does the person funding the project decide whether or not to approve it?
Of course, as we all know providing estimates during the requirements phase is very difficult, especially if they're treated as hard commitments rather than rough ballparks. Chapter 2 mentions that getting requirements wrong is a key factor in causing software projects to fail; I'd say that the reason for that is usually because of the implementation costs of the known requirements that were estimated inaccurately or not at all, and the implementation costs of requirements that aren't discovered until later. It always comes down to cost; I think it's relatively rare for a software project to fail because a requirement turned out to be impossible to implement. (Unless it involves AI. You always have to watch for people trying to sneak in an AI requirement.)
At my uni our Software Engineering I (CS301 or maybe it was II[CS302]) class was entirely based on requirements gathering and estimation.
It was a real eye-opener for me -- I'd been writing code in many languages since I was 10, but this was my first glimpse at "that other stuff" that takes up the majority of one's day.
All in all, I feel like it prepared me for what I would face in the real world. We had to do stakeholder interviews where the professor or a TA played the role of unforthcoming/neurotic stakeholder, were introduced to various general document types like stakeholder analysis, cost/benefit analysis, requirements overviews, etc. and the last 1/3 was pretty much applying all the interviews and data to an estimation process. We also did a greenfield project, an additional functionality project, and a system replacement project to work through the pitfalls of each.
I also think it was the first time I read The Mythical Man Month and Waltzing with Bears.
The two things, by far, that stick out about recent college grads (or really, new developers in general) are the inability to estimate and gather requirements and a complete lack of knowledge around source control.
GitHub and a lot of projects that are based around checking project source out have made a lot of junior devs more familiar with at least the idea of source control, but very little prepares them for the flailing and hand-wringing that comes with estimation.
Thanks to the author -- it does look like a labour of love, written with some personality. (That's a good thing!) Attractive layout, lots of insets/quotes/diagrams, and sources. Will read more later.
The "Models & Methodologies" chapters looks great. It may become my new "here, read this!" when people ask me "what's agile?" or "how else?"
Thank you, I've tried to do exactly that, give new people a good overview with not too much information at once, and encourage further study with copious references.
I'm disappointed to see a book aimed at "professional" developers continue to spread outdated and debunked information, such as the NIST "study" adduced as evidence for the imperative necessity of "defect cost containment". See my post on the topic: https://plus.google.com/+LaurentBossavit/posts/8QLBPXA9miZ
I'll add my voice to those that have already stated such a book shouldn't start by assuming the SDLC as a reference model: it embodies too many of those outdated assumptions. More in that vein in my own book http://leanpub.com/leprechauns
I haven't read all of it yet but it also seems to miss the vein of development that has led to movements such as Correct by Construction et al. The idea that we should model our computations in a high-level specification and check that those designs meet our goals and hold the invariants we press upon them.
Instead there's Agile and the idea that we can throw together something that roughly works and iterate until our confidence is enough such that we can release it. The so-called beta-driven development. (Perhaps a vestigial remnant of the unix philosophy?)
I'm not arguing that formal methods should be used for every software project. I think Carmack was right to point out that if all software was written like the software at JPL we'd be decades behind where we are now. However I do think that it should be a part of the experience of becoming a programmer so that when we encounter hard problems we have the correct instincts to rely on mathematics to help us.
This is a beginners book, and keeping in mind we can't teach them everything in the first "semester," do you still think this topic should be included?
Definitely think the groundwork can be laid out without covering the entire breadth of formal methods. Even a history lesson with Dijkstra, Lamport, Misra, et al can be illuminating. It's good to lay the foundation so that when a beginner encounters a problem they have the instincts to know that they should look up these tools in order to help them.
I found it a shame that I didn't encounter these tools until very recently. It's a well-kept secret in academia that I think should be shared.
The NIST study graphic matched another from the book Code Complete, to illustrate the cost of defects. I did not have access to the CC graph so I used it instead.
Do you believe the larger point (about cost of defects) is incorrect? Your g+ critique seems to take issue with the details of the study, but I missed an assertion it is wrong.
As to the wikipedia timeline, I cherry picked from it. The point being to pick the important things a student should know, not list everything possible. Seems your WP edits didn't make it through?
I'd put the cost of defects claim in the category "not even wrong".
If someone made quantified claims about "the number of minutes of life lost to smoking one cigarette" I would refuse to take them seriously: I would argue that the health risk from smoking is more complex than that and can't be reduced to such a linear calculation.
This talk about "the cost of a defect" has the same characteristics. I don't mean the above argument by analogy to be convincing in and of itself, and I've written more extensively about my thinking e.g. here: https://plus.google.com/u/1/+LaurentBossavit/posts/8tB2RQoHQ...
But it's a large topic that quite possibly deserves a book of its own.
As for the history of software engineering, it's pretty much the same - to do it properly would entail writing a book, pretty much, and I didn't want to do it on WP unless I could do it properly.
Table and Figure 3.1 in Code Complete support the conclusion and are well cited:
Source: Adapted from “Design and Code Inspections to Reduce Errors in Program Development” (Fagan 1976), Software Defect Removal (Dunn 1984), “Software Process Improvement at Hughes Aircraft” (Humphrey, Snyder, and Willis 1991), “Calculating the Return on Investment from More Effective Requirements Management” (Leffingwell 1997), “Hughes Aircraft’s Widespread Deployment of a Continuously Improving Software Process” (Willis et al. 1998), “An Economic Release Decision Model: Insights into Software Project Management” (Grady 1999), “What We Have Learned About Fighting Defects” (Shull et al. 2002), and Balancing Agility and Discipline: A Guide for the Perplexed (Boehm and Turner 2004).
Joel on Software also has a convincing narrative on the subject. Therefore I'm not in a big hurry to replace the image, though I will put it on my todo-list.
I'm all too aware of these many citations. A few years ago, I went to the trouble of chasing down most of the papers and books, and evaluating how well each of them supported the claim. To put it mildly, I was underwhelmed.
However, I haven't (entirely) changed my mind about TDD and similar practices. I do still believe it pays to strive to write only excellent code that is easy to reason about. I like to think that I now have stronger and better thought out reasons to believe that.
Looks like you've done your homework. Yet in my experience it has been true that the farther in space and time I've been from a bug, the harder it was to solve.
I'm an Atlassian engineer & heavy Aerobatic user. It's very nicely integrated with Bitbucket, and has great support for Jekyll, Hugo, and arbitrary npm builds. It also has some advantages over github.io like being able to deploy multiple feature branches from the same repository to separate sites, so you can have separate "staging" and "production" versions.
It is a paid offering, though you get two repositories free, and is pretty reasonably priced beyond that.
I already have such a repo. But I would reckon the repo needs to be open source, right? What if I would like to host all my open source repos on that subdomain?
Oh ok, it needs to be the entire url 'user.bitbucket.org'.
Definitely touches on a lot of topics here, and it all looks well organized. It also does look like a laundry list of buzzwords and techspeak.
Obviously this wouldn't be used to teach anyone any particular topic in detail but to get them familiar with the general concepts/steps involved in software dev.
Edit:
The two books I read that I thought covered these ideas well were Code Complete and Code Craft. But it's been about a decade now. Perhaps they're too dated.
> It also does look like a laundry list of buzzwords and techspeak.
Thanks, interesting. I've tried to define difficult terms, and it is aimed at a technical audience, but there is definitely room for improvement. If there are any readers having trouble, I'd appreciate hearing where. Will take a look myself as well.
Yes, I think you're right. Design is such a huge topic and hard to condense into one chapter (one of the shorter). I will pull more details into it and perhaps others, thanks.
Kudos to the author for tackling this huge and controversial topic.
Read the first several chapters and picked up the impression that the author doesn't put enough effort into pointing out how the processes/practices can (and should) be completely different depending on the circumstances. If a startup tries to use the same processes as Google or Facebook, it'll be dead in the water. If SpaceX engineers write software the same way as SnapChat does it, we will never see their rockets leaving launch pads.
Interesting book. I'll take some time to do a read through. A few things that bother me though off the bat:
1. Anything that uses the word 'protip' cannot be taken seriously. I think this needs a law. 'The Law of Silly Programming Memes - Anything using the word 'protip' cannot be taken seriously'
2. The 800px width format in the world of responsive design gives me pause. Basically this says 'I'm for mobile - screw you'. I would hope that a better format for this would be chosen in the future.
Oh, always read "protip" on reddit with a smile. It is aimed at the young, sounds like you're being a bit chatinho.
Also, I thought it was recognized that narrow columns are easier to read, such as in a newspaper. It uses the well-regarded "read the docs" theme. Maybe zoom would help?
Here's some for your collection, given I see some omissions. Cleanroom, always omitted (sighs), is a big one as it was doing agile-like development in the 80's with code so reliable it was sometimes warrantied. Also one of the first formal methods that didn't require a mathematician to use. Fagan's Software Inspection Process came before that in the 70's. I throw in Praxis and 001 for good measure as they're engineered software methods with better results than Cleanroom albeit at higher cost. Leave off plenty of others too constrained for most software development, but they did prove out in smaller projects. The B Method & Chlipala's Certified Programming in Coq are examples if you want to Google around.
Note: Altran/Praxis Correct by Construction is a modern high-assurance method with numerous successes. Cost a 50% premium for nearly defect-free systems. SPARK Ada is GPL these days.
Note: Margaret Hamilton, who helped invent software engineering on Apollo mission, deserves mention for the first tool that automated most of software process. You can spec a whole system... one company specified their whole factory haha... then it semi-automates design then automatically does code, testing, portability, requirements traces, and so on. Guarantees no interface errors, which are 80+% of software faults. Today's tools have better notations & performance but still can't do all that for general-purpose systems: always a niche.
Note: Added Eiffel method to make up for fact that I have little to nothing on OOP given I don't use OOP. Meyer et al get credit for a powerful combo of language features and methodology in Eiffel platform with huge impact on software. Specifically, Design-by-Contract has so many benefits that even SPARK and Ada both added it to their languages. Just knocks out all kinds of problems plus can support automated generation of tests and such.
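For anyone unfamiliar with Design-by-Contract, here's roughly the flavor of it, approximated in Python with a hypothetical decorator. Eiffel builds preconditions and postconditions into the language itself; this is only a sketch of the idea, not any particular library:

```python
# A rough Python approximation of Design-by-Contract: a decorator that checks
# a precondition on the arguments and a postcondition on the result.
import functools

def contract(pre=None, post=None):
    def wrap(fn):
        @functools.wraps(fn)
        def checked(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result, *args, **kwargs), f"postcondition of {fn.__name__} violated"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r <= x for x in xs))
def smallest(xs):
    return min(xs)

if __name__ == "__main__":
    print(smallest([3, 1, 2]))   # 1
    # smallest([]) would fail the precondition instead of failing deep inside min()
```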
So, there's you some reading on methods of making robust software that might fit into your book or something else you do. :)
It's not common. The problem is a combination of user demand, vendor incentives, social issues, and no legal liability. User demand kept pushing for more features, faster, and cheaper price at all cost. Quality, security, and maintenance are the consistent costs. Vendors have been pushing broken stuff intentionally since early days to make immediate profit on cost savings then more as they supply patches. They also try to ship as fast as possible to get First Mover advantage, even though medium assurance methods still work with that: they spend a little time upfront to knock out lots of debugging time. Gabriel's Worse is Better essay and how much people hold onto C are examples of the social aspect where something spreads like wildfire. They then justify its failings or keep them for convenience. Lastly, the areas most prone to high assurance have regulations, policies, or legal liabilities that encourage its use whereas most vendors immunize themselves against liability with EULA's. Worse, customers seeing unreliable, insecure systems everywhere are conditioned to think that's inevitable rather than artificial and subject to change.
Note: After a big recall, the hardware field is the one exception where they have all kinds of formal verification and testing. They're big on that stuff. Not same tools as software, though, for the most part.
Far as those using it, it helps to look at what products are available and who vendors say are their customers. Look at high-assurance plus medium, as many former customers of high-assurance do medium these days due to the above reasons. Even most vendors in the niche are saying "F* it..." since demand is so low. So, you get especially high-security defense, a few in private security, some banking, aerospace, trains/railways, medical, critical industrial (esp factories or SCADA), firms protecting sensitive I.P., and some randoms. The suppliers are usually defense contractors (BAE's XTS-400, Rockwell-Collins AAMP7G); small teams in some big companies (eg IBM Caernarvon, Microsoft VerveOS); and small firms differentiating with quality & security (Altran/Praxis, Galois, Sentinel HYDRA, Secure64's SourceT).
Here's some examples. Some have marketing teams in overdrive. Just ignore it for use-cases, customers, and technical aspects. ;) Altran comes first as they focus on high quality or effectiveness for premium pay, with some high-assurance. Probably a model company for this sort of thing. AdaCore lists lots of examples which are actual customers. Esterel has a DSL with certified, code generator that has plenty uptake. INTEGRITY links show common industries & specific solutions that keep popping up on security side. NonStop is highly-assured for availability with reading materials probably having customer info. Last one is a railway showing B-method, most common in that domain, doing its job. Hope this list is helpful. I can't do much better in a hurry since the field is so scattered and with little self-reporting.