[dupe] How to Design Programs 2nd Edition (htdp.org)
253 points by noob_eng on May 2, 2023 | hide | past | favorite | 111 comments



Posted just 3 weeks ago by the same user, no less: https://news.ycombinator.com/item?id=35478871


Not in vain since I just saw it today.


i found out about it today


[flagged]


How did you come to that conclusion? Seems like a pretty tame post and submission history.


There's a lot of speculation about the approach used by this book to teach programming. It's the book that my own daughter used as a freshman in college. It was her first programming class, and she ended up deciding to major in CS.

The programming language taught in this book is Scheme. Students are introduced to programming concepts and language features gradually, and the assignments are intended to be implemented using a growing subset of Scheme features as the students learn the language. By the end of the semester my daughter was familiar with much of Scheme and was comfortable with subjects like recursion, lambda functions, list handling, map functions, and closures. I'm not sure, but I don't believe she learned about continuations or macros.

Some comments mention the "toy" languages used in the book, but these aren't really toys, just subsets of Scheme that provide guard rails to keep the students from wandering into parts of Scheme that they haven't yet learned.

Another notable feature of How to Design Programs, 2nd Edition is its introduction of a defined sequence of steps for breaking a problem down into parts that are implemented as a collection of functions making up the solution.
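
Roughly, the recipe asks for a signature, a purpose statement, examples turned into tests, and only then the function body. A minimal sketch in the Beginning Student Language (my own hypothetical function, not an example from the book):

    ; Number -> Number
    ; computes the area of a square with side length s
    (check-expect (square-area 3) 9)

    (define (square-area s)
      (* s s))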


I think this approach to the issue of 'what language to start with' has a serious flaw for basic programming instruction:

> "Our solution is to start with our own tailor-made teaching language, dubbed “Beginning Student Language” or BSL."

The only time the use of a toy language makes sense is if you're learning how to create a programming language in the context of understanding compilers, e.g.

https://llvm.org/docs/tutorial/MyFirstLanguageFrontend/index...

Otherwise it's doing the students a disservice as they'll never use the language again and will have to relearn a whole new syntax later on. It's also far better to learn in a widely used language because there will be a wide variety of resources that can help you solve simple problems (and this is even more true in the era of ChatGPT).

The argument that a simplified language makes it easier for students to grasp high-level concepts doesn't work either: instead, just use a restricted, well-defined subset of a language like C, C++, Java, Python, or JavaScript, and explain to the students the rationale for doing so.


It's not one "toy" language. It's a series of languages that gradually grows larger, each with the exact same syntax and semantics as the "real" Racket.

If a language is small, the compiler can give very precise, beginner-friendly error messages. (I'll call someone who has programmed for one or two weeks a beginner.)

As an example, in the beginner language all function calls begin with a name, so `((foo) 42)` gives an error message saying that function calls must start with a name. Note that beginners often use too many parentheses.

Later on, when higher-order functions are introduced, the call `(foo)` might return a function, so `((foo) 42)` is okay. The beginner-friendly "function calls must start with a name" message is then no longer true.
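
A rough sketch of the contrast (foo is hypothetical, and the exact error wording varies by DrRacket version):

    ; Beginning Student Language: rejected with a message along
    ; the lines of "expected a function after the open parenthesis"
    ((foo) 42)

    ; Intermediate Student with lambda: legal, because foo may
    ; return a function
    (define (foo) (lambda (x) (* x 2)))
    ((foo) 42)  ; => 84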

There are four language levels used in the book, and they follow its progression.


An experience report "On teaching how to design programs" from Norman Ramsey.

https://www.cs.tufts.edu/~nr/pubs/htdp.pdf

I am quoting section 4.1.

For over fifteen years I have taught programming languages using little languages (Kamin 1990; Ramsey 2016). With this experience as background, I cannot praise the Racket teaching languages highly enough. The language design is lapidary. I was especially impressed that functions in Beginning Student Language may not have local variables. At first I thought this restriction was crazy, but after observing students at work, I see that not only is the language simplified, but without local variables, students are nudged to create helper functions—a notorious point of difficulty for beginning students.
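
As a concrete illustration of that nudge (my own hypothetical example, not one from the paper): with no local variables, the intermediate computation has nowhere to live except in a named helper function.

    ; BSL: the sub-computation becomes a helper function
    ; rather than a local binding
    (define (ring-area outer inner)
      (- (disk-area outer) (disk-area inner)))

    (define (disk-area r)
      (* 3.14 (* r r)))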


> Otherwise it's doing the students a disservice as they'll never use the language again and will have to relearn a whole new syntax later on.

They will have to learn many new syntaxes in their career, so that is a good start.

I never understood some people acting as if syntax were a big hurdle. Do people really need more than a few hours to learn a new syntax? From APL to C to Haskell to Lisp, I can't ever remember struggling with the syntax of a language.

> It's also far better to learn in a widely used language because there will be a wide variety of resources that can help you solve simple problems (and this is even more true in the era of ChatGPT).

This can be a pro and a con. While students should learn how to use publicly available information from search engines, Stack Overflow, and so on, it can also be valuable to teach them ways of not having to rely on them. Learning to read proper technical documentation and working your way through a problem without hand-holding is a great skill.

That said, there is a good argument for using a "real-world" language for teaching: students will be more motivated to learn it. Getting them interested in a "toy" language is just a hard sell, and motivation determines learning success more than a technically perfect learning path does.


Yes, for beginner programmers syntax is a huge deal. Sticking to something consistent as one starts out prevents conflating variations of design (encapsulation, coherence, atomicity) with variations of punctuation.


Well then at what point will they be learning the syntax of another language? They’d be well past the point of beginner by then, no? And even if they’d be in a junior role (assuming junior = beginner), how much does any of this talk actually matter when the job demands that an employee learn a language anyway?


I'm tempted to say that, at large, every program is already a de facto toy language. Worse, they are often toy taxonomies. Nowadays, they are often toy algebras.

Which is a disservice to all of them. They aren't toys. They are models. You can either do the model at the level of the language, or at the level of the constructs the language gives you. Starting users off with fewer constructs helps them stay free of the distractions that come with them.


I thought BSL was a subset of Scheme, so it avoids most of those problems you mention. Or do I misremember?


They should have just used standard R7RS-small. Creating a variant only sows more confusion.


Not in a classroom setting.

For someone that stumbles upon HtDP and begins in the middle, there will be some confusion.


It's not hard to learn a new language. It is easy for a complete beginner to get hung up on all the warts of C++.


Bit of a tangent but it seems to me that the talking, self-programming computers will make questions like 'what language to start with' completely obsolete.

When I read the title I thought, "You're writing this for the AIs now," as in the machines are the target audience (since human programmers will no longer need to design programs, not more than once anyway).


Why should one read this book? If you have read it, what were your big takeaways?


I read this book when I was learning functional programming. Since I started learning FP with Haskell, I couldn't figure out in the beginning how you create any meaningful programs with it. All I had were small toy programming problems from the Haskell book I was reading. This book hand-held me through learning FP, and the two things that helped me connect the dots were the Space Invaders-type game that you build in one of the chapters of HtDP2 and Scott Wlaschin's Domain Modeling Made Functional. The Space Invaders game showed me how you can create a bigger program of genuinely moderate complexity by composing little functions together!

However, the book is an introductory one, so there is a lot of basic stuff if you already know programming.


It's all about systematic program design.

The authors have the following to say in the preface.

PREFACE

Many professions require some form of programming. Accountants program spreadsheets; musicians program synthesizers; authors program word processors; and web designers program style sheets. When we wrote these words for the first edition of the book (1995–2000), readers may have considered them futuristic; by now, programming has become a required skill and numerous outlets—books, on-line courses, K-12 curricula—cater to this need, always with the goal of enhancing people’s job prospects.

The typical course on programming teaches a “tinker until it works” approach. When it works, students exclaim “It works!” and move on. Sadly, this phrase is also the shortest lie in computing, and it has cost many people many hours of their lives. In contrast, this book focuses on habits of good programming, addressing both professional and vocational programmers.

By “good programming,” we mean an approach to the creation of software that relies on systematic thought, planning, and understanding from the very beginning, at every stage, and for every step. To emphasize the point, we speak of systematic program design and systematically designed programs. Critically, the latter articulates the rationale of the desired functionality. Good programming also satisfies an aesthetic sense of accomplishment; the elegance of a good program is comparable to time-tested poems or the black-and-white photographs of a bygone era. In short, programming differs from good programming like crayon sketches in a diner from oil paintings in a museum.

No, this book won’t turn anyone into a master painter. But, we would not have spent fifteen years writing this edition if we didn’t believe that everyone can design programs and everyone can experience the satisfaction that comes with creative design.

Indeed, we go even further and argue that program design—but not programming—deserves the same role in a liberal-arts education as mathematics and language skills.

A student of design who never touches a program again will still pick up universally useful problem-solving skills, experience a deeply creative activity, and learn to appreciate a new form of aesthetic. The rest of this preface explains in detail what we mean with “systematic design,” who benefits in what manner, and how we go about teaching it all.

For more details on what "systematic program design" means in practice, see the last section of the preface. https://htdp.org/2018-01-06/Book/part_preface.html


The copyright says 2014, and the foreword says they spent 15 years on the 2nd edition after the 1995-2000 first edition.

It also says it was released on the 6th of March, 2023.

So is it new or not?


I remember the 2nd edition coming out in printed form years ago.

Perhaps there was some update in March of 2023, or perhaps this particular set of HTML files was published in March, but overall, the book has been out for a while.


There is a version number at the top: "v8.8.0.8". I believe they keep making updates to the book, probably to fix typos/errata. The original link I had bookmarked, https://htdp.org/2018-01-06/Book/index.html, and this one look mostly the same, content-wise.

But it would be nice if there were a CHANGES file or a description of what changes between these versions.


The version number matches the latest version of Racket. It's simply the tool used to generate the book site that inserts the version number. It's more relevant in the Racket docs than in the book though.


I love the look of Scribble docs. I don't really use Racket though. Are there alternatives that anyone knows about that have this look?


[flagged]


Programming is simple in the same way playing the piano is: You just press a key and out comes a note. Yet somehow, depending on who presses those keys, out comes beautiful music or horrible noise.

Sure, anyone can write and understand "ADD A TO B GIVING C", just like anyone can press a key on the piano. But the resulting program is a different matter entirely.


That's a meaningless claim.

There are two different axes along which programming can be arbitrarily complicated. One is more "vertical": the state space is not too wide, but it is complex to get right; think of quicksort at the lower end of difficulty, and some parallel algorithm at the other end.

The other dimension is more "horizontal": say, a service that would not be all that complex in itself, but that has different requirements based on country, with different access control, where the whole state machine of a given entry can change between legislatures and might be updated at any time, etc. This is complex similarly to how a biological cell is: it has a bunch of legacy/redundancy getting added to it over time. For another example, date/time libraries: not complex computationally in themselves, but due to all the edge cases absolutely not trivial programs to write.

Not really sure whether these categories have names, but I think they are fundamentally different (though a program may be complex along both axes).


Writing a sentence is simple, yet there are many skills to learn to do it well.


Correct. But it doesn't disprove the original statement.


Can you provide an example of something you consider to be complicated? I’d like to know how to calibrate your comment.


Try to read Category Theory for Programmers. Wait, absolute nonsense ;)



> In fact, if treated properly, most programming things are hard, even things that might seem simple. That is because you have complex pieces that you have to put together and to make them work. And the hardest part is when one has to write the complex pieces from scratch. Things only seem easy because you have people with 5, 10, 20 years of experience doing things that are easy to them because they did them many times before, because they made all the possible mistakes or thought about them and made sure they don't fall in those traps.

https://dorinlazar.ro/2021-02-programming-is-hard/


I would not follow any of this book anymore.

Abstractions lead to coupling and complexity.

So many outdated principles I can’t enumerate them all.


> Abstractions lead to coupling and complexity.

If you're in the habit of programming in anything other than machine language, I think you'll have to agree that the real story is more nuanced than that.

I haven't actually read this book so I'm not prepared to defend it. But I would happily run my mouth and say that the truth is more that abstractions are a double-edged sword. They can be very effective in skilled hands, but they are dangerous in the hands of the untrained. Unfortunately, what makes a good abstraction (IMO it's algebras or GTFO) is not something in which most of us receive sufficient training.


If an "abstraction" leads to coupling, then it's not an abstraction.


Richard Gabriel coined a term for those things: compressed definitions.

When a thing looks like reuse or abstraction because it implicitly pulls in something else in a coupling way, it's really about exploiting shared context in a way that allows the new definition to be much briefer than it otherwise would have been. However, it still relies on full knowledge of the context, so in terms of conceptual load, there's no difference.

Just as with other types of compression, the shared context becomes the global coupling across definitions.


Could you give an example of that?


I have come across a codebase where there was an "Import data from HTTP" functionality that extended the "Import data from file" functionality in such a way that the HTTP calls were grafted onto and around the file operations. This probably made the HTTP class quicker to write, but

- Anyone wanting to make changes to the HTTP import functionality needed to also fully understand the file import functionality since basically all of it was used to import from HTTP, and

- Many changes to the file import functionality also broke usages of the HTTP import functionality. (Fortunately these breakages were always revealed by automated tests before they made it into master.)


Even that's a hard statement to make in such an unnuanced way. Oftentimes the whole point of an abstraction is to couple things together in a way that happens to be useful.

Take relational databases.

The relational model is a fantastic abstraction that revolutionized data storage and allowed databases to become vastly more powerful than they were prior to the introduction of the model. But it also couples the data representation to a model that's based on sets of tuples, which is a bit of a mixed blessing. It's necessary for the algebra that makes relational databases so flexible, but it also means that the RDBMS's native data format is fundamentally incompatible with how most application programming languages like to organize things, thus creating the need for a translation layer (read: API) to bridge the gap. That is, strictly speaking, a lot of extra work compared to using something like Gemstone, but I tend to think that it's a fair trade in large systems.

On the other hand, object/relational mapping often drives me nuts, because it introduces the wrong kinds of abstractions. It encourages tight coupling of the database's schema to the data model of the portion of the application that the person who created the table was working on at the time. This reduces the flexibility of the database, and may make it more difficult to predict or control the scope of impact of a schema change. Other methods like sprocs certainly have their problems, but at least they were trying to place the abstraction in a sensible place that doesn't create the kinds of couplings that make an application resistant to change.


> Abstractions lead to coupling and complexity.

The wrong abstractions can lead to coupling and complexity. The right ones on the other hand are all about reducing coupling and complexity.


> The wrong abstractions can lead to coupling and complexity.

This comes down to the old quote:

* "Make everything as simple as possible, but not simpler.” Albert Einstein.

Abstractions undoubtedly add complexity that quickly becomes unmanageable. By now everyone is already aware of the horror that's the enterprise version of Hello World, and YAGNI/gold plating are renowned antipatterns. Many codebases have already succumbed to the perils of premature generalization, where the good old rule of 3 of refactoring serves as a shield against it.

But still some developers succumb to the siren song of abstracting away things.


Okay, let's take an excerpt from one of those enterprise "hello world"s:

    IHelloWorldString helloWorldString = helloWorld.getHelloWorld();
    IPrintStrategy printStrategy = helloWorld.getPrintStrategy();
    IStatusCode code = helloWorld.print(printStrategy, helloWorldString);
Extracting two subobjects from an object to feed them back to the very same object is not an abstraction; it's merely adding a bunch of public methods (and interfaces) to the object. That may or may not help in abstracting things: and usually, the more handles and bells and whistles are available to pull and play with, the less abstracted the code actually is.


There's also the use of "abstraction" in the classical sense of "harder to understand", as in "abstract painting". In today's world, good abstractions are the exception.

> it's merely adding a bunch of public methods (and interfaces) to the object

The object is the abstraction they're complaining about. Why have an object to begin with?


> it's merely adding a bunch of public methods (and interfaces)

What do the interfaces represent?


In this case, nothing particularly meaningful or useful, they mostly just shrink wrap the underlying (single) implementation's details and re-expose them as-is for the caller to cope with. Some people also think that that somehow helps with encapsulation too.


> In this case, nothing particularly meaningful or useful

You're either playing dumb or weren't able to understand what was in the code. IHelloWorldString is an abstraction over the way the string was implemented, IPrintStrategy is a strategy pattern that abstracts away how the abstract hello world string is supposed to be printed, and finally IStatusCode is an abstraction over how a status code is implemented.

> they mostly just shrink wrap the underlying (single) implementation's details and re-expose them as-is for the caller to cope with.

No, not really. Their purpose is to abstract away implementations. Just because there's a single implementation that does not mean this wasn't abstracted away.


How is gold plating the same as YAGNI? They're opposites, no?


> How is gold plating the same as YAGNI? They're opposites, no?

No, gold plating and YAGNI are two faces of the same abstraction coin.


Under-engineering is not the same thing as YAGNI.

I worked at a place where a former employee had engineered a whole S3-to-files synchronization layer. We never needed that. That's YAGNI. (And if we had, the library already has it!)

Choosing crappy overly-simple ideas for abstraction is not what YAGNI is all about. It's about pulling in overhead+complexity when it's justified, and only then.


We have a history of abstraction-based code that’s turned applications into frozen balls of mud.

We have so many tools to help write good code that the short-term savings of shared libraries are outweighed by having distinct codebases that can be modified without those dependency concerns. The same is true for finding bugs in many places and easily setting those up for maintenance releases.


There is no getting around complexity. We are absolutely standing on the shoulders of huge giants; even if you think you don't have any dependencies, behind the scenes you use many decades-old libraries dealing with the complexity of floating-point arithmetic and such.

Also, Brooks's paper is still true: the only real, significant productivity boost is reusing existing code. Even if you rewrite everything from scratch, you will still have to carry the exact same amount of essential complexity. Managing complexity is pretty much the most important part of CS.


Open any network communications protocol RFC, any CPU architecture, an operating system's memory tables, a journaling filesystem, or hardware I/O.

It is crazy that society hasn't collapsed yet.


Some engineer from Netflix once said something along the lines of "microservices, not too many, mostly alongside team boundaries", meaning most microservices should be wholly owned by a team, and a team should ideally own only one microservice (though exceptions are allowed under special circumstances).

Like microservices, I feel code abstractions should be thought of in a similar way: mostly a thing we use within our team, not company-wide.

In my own team I have to fight very hard to keep dependencies on external code down; the amount of entangling people are willing to put their code through is just crazy. An argument I often make is: "Why should we use lib X from team Y at my company if they don't even feel it is good enough to open source?" Any external code that my team imports into our project should be good enough that it could be open sourced (IP rights allowing).


I agree; besides, there's really no getting away from abstractions to manage complexity.


A stronger claim: everything in programming is an abstraction.

Somewhat tangential, but I sense tacitly related... I sometimes sense that many people who program adhere to a kind of computational atomism: that there is some kind of underlying "real" and that everything else is "just" some arrangement of elements of the real. But that's just an implementation detail. Yes, when we implement a language that compiles to instructions or bits on a specific machine, we are indeed working to simulate that language. But computation and formal languages don't really have anything to do with physical computers. Their connection is entirely incidental, a matter of practicality, like the choice of using a hammer versus a rock to drive spikes of metal through wood. No computer language is the "base" language. From the formal perspective, there is no "low-level" or "high-level" language, just different languages that compilers can translate between. What is "low-level" (typically the target language) is merely low-level by convention, usually because we are targeting the instruction set of some physical machine.

But even here, the notion of an "instruction" or a "bit" are abstractions. There are no "bits" in the world as ontological entities. Computation and data are abstract, full stop. All physical implementations are simply instruments for simulating that abstract model. There is no difference, in principle, between using checker pieces, differences in voltage, magnetic polarity, or thumbs up/thumbs down to represent bits.

The language should, ideally, be suited to the domain of discourse.


Arguably, managing complexity is one of the main things you do when programming.


Over time all abstractions become “wrong” as requirements change.


You shouldn't be baking business logic into your abstractions like that.

Or, perhaps this is a better way of putting it: if there's business logic mixed into it, it's not an abstraction, it's a concretion.


This is why refactoring is a thing.

Perhaps someone should write a book for managers that explains these two things.


> Abstractions lead to coupling and complexity.

So you write everything in assembly and rewrite it completely for every new target system? Seems a bit tedious to me /s

Just kidding; abstraction serves its function, and the right amount of abstraction makes things better: more transferable, easier to maintain, etc. Think, for example, of hardware abstraction layers (HALs) in embedded programming. But abstraction in programming is never a value in itself, and its use needs to be weighed carefully.

I think it is best to teach people to stick to the data and to think about transformations between said data. If they don't know abstraction (aka beginner's spaghetti code), it is worth telling them about it.


I think ORMs fit that bill. For some reason they were (are?) incredibly popular, but I don't think they stood the test of time. In the case of SQL, it's already an abstraction. No need for elaborate wizardry to hide the fact you're about to JOIN a table or two. Just write the damn join.

Every. Single. Project. I see using ORMs results in a baseline of 20+ queries being fired under the hood for every screen. ORMs are not bad in and of themselves; I think they attract, or are amenable to, a certain type of development mindset that ends in bad results.

I have a very hard time thinking of any kind of object oriented abstraction that turned out to be a very good idea.


OO abstractions are very effective in UI toolkits, but because a smaller proportion of folk are working on desktop apps any more, people tend not to get exposed to that use case.

I also think it depends on the type of ORM. Active Record? Really problematic except for simple use cases, but because they make those simple cases easy, they get popular on that basis. Data Mapper? Actually fine and very useful, but harder to write well and a bit harder to use.


Yes, that's actually a good point. UI widgets map nicely to objects, at least usually.


I can't imagine that programmers who use reasonably well-tested and documented ORMs and still fire a minimum of 20 queries would be capable of writing a nice and equally bug-free SQL interface in place of that ORM...


I think the use of ORMs by people who understand databases is fine. It's when ORMs turn into "you don't need to know SQL!" that things go off the rails.

Everyone (management) wants a way to get productivity without loads of experience, and tools that promise that tend to be misleading.


Are there really software jobs where managers are the ones making decisions like whether or not to use ORMs? Some of these comments make me feel like I've had a truly blessed career.


You better believe you have had a truly blessed career.

In most places the moment you say "this will take 67 minutes and not the 65 minutes you thought it would", all sorts of idiots that know nothing about programming will come and dispute your every technical decision.

Exaggeration of course, but not far from the truth. Worst of all are the CTOs that already sing only the CEO's tune, come and review your entire project in exhaustive detail, conclude that you did 99% of everything correctly and well and exactly how they would do it, then proceed to fire you over the missing 1%.

Again, kind of an exaggeration, but again, not far from the truth either.

In most places people are treated like interchangeable cogs. Better hold tight to your warm positions, it's not a given you'll find a better one if you figure that you must leave.

I dream of a workplace where after I prove my worth -- 1-2 months -- I'll just be left alone to produce and correct code and protect the company's interests without being second-guessed, even though I am one of the most productive devs there. I dream of that. I might cry if one day I realize that I've actually landed such a job.

And I am a senior. 21 years in the profession. Food for thought for you.

It's all about who you know and drink coffee with, apparently. Your abilities as a professional barely matter, it seems.


I’ve only worked in a few large companies where nobody even cared about code quality unless you brought down a production system with a bad change or something.

You’re seriously saying you’ve had 21 years of constantly being second guessed? What kinds of places are you working at?


I've been a contractor in the last 7 years and I've come to dearly regret it.

Never has my work been so second-guessed and ripped apart. I suppose my price was too high for them, not sure.

I'm still reflecting and have no good conclusions to offer.

(But part of the time I was in bad health and my work quality suffered. So that explains a percentage of the cases, at least.)


I've been doing this for 15 years, and I've been dealing with second-guessing, or managers acting like we don't know what we are doing, the whole time. The problem is that the managers are MBAs who know little or next to nothing about software development. They don't like being told anything different from whatever narrative they come up with, and they focus primarily on cost.

The company I work for considers itself an MSP. The parent company is a sales company selling non-software products.

In any case, yeah it happens.


It’s usually indirect, “only use libraries from these approved internal systems”, “new projects use these templates”… etc. Eventually to go against this set of standards make you appear to be a squeaky wheel while everyone else seems to be working just fine.


Right, but in those cases were these libraries selected by engineering managers and not developers? I can definitely appreciate being stuck in a pattern, but everywhere I've worked the standards were at least set by technical contributors (competent or not).


At a tech-focused business, probably, but think about all the programming jobs and IT departments that ultimately roll up to very non-technical people. Often what happens is a vendor becomes the chosen vendor for some expensive tool (think Microsoft, Red Hat/IBM, Oracle, whatever), and they have a solution for almost everything (theoretically). You start having to justify why you want to use a different thing to people who don't care.

Say you have 200 windows servers, and someone wants to use linux, "Why, what are you trying to solve?" Sometimes the industry forces the issue (think marketing people using Macs), but generally people are trying to streamline things they don't care about. "Use the tools that come in this box we buy from this vendor who we pay all our bills too, everything else is a weird liability I have to worry about hiring for and keeping track of."

The end result tends to be pretty mediocre though.


I don't think you are wrong, and that's the actual cause of the problems, not the tools. But that's for another day...


The whole concept of a database as a separate system is an abstraction. An extremely successful one.


ORMs are okay, people just don’t know when to use them. They were always meant for OLTP, not OLAP workloads.

How else would you update this record, plus 10 other records pointing to it? That's the point of ORMs: "writing" the boilerplate SQL for you, and also the other direction, mapping records to objects.

It's not the tool's problem that people refuse to learn new things and misuse it.


> I think ORMs fit that bill. For some reason they were (are?) incredibly popular, but I don't think they stood the test of time.

I don't think there is any truth at all to this personal belief. In some domains it's unthinkable to use anything other than the standard ORM. In C# you'd need to be nuts to roll SQL by hand instead of using Entity Framework, and Django speaks for itself.

The only drawback of ORMs is that their promise is that developers don't need to learn the intricacies of SQL and SQL-related design patterns, but in practice developers need to learn the intricacies of SQL, the ORM framework, and the SQL generated by the ORM. Naive developers might believe they are better off reinventing the wheel with ad-hoc SQL stuff, but that's another problem.


I am nuts! I often work on two main C# projects. One uses a data adapter with data table objects. The other is about to have Entity Framework completely removed; it has been reduced to only handling database connections.

The original developers used code-first. I looked into the dynamic SQL created by the framework, and it was full of overly complex statements. I replaced it with simple hand-crafted SQL and Dapper, used just to bind the results to objects. This cut transaction time down dramatically, from about 30 seconds to less than two. I could cut that down further with batch transactions that are no longer logically grouped.

Simply combining SQL, CSS, and HTML with a custom DSL allows for creating interlinked reporting. I did this on a MicroC/OS-II embedded system with SQLite, running on hardware with 8 MB of flash and 64 MB of RAM. Port the DSL, keep the same table structure, and the same reports can be used in a new environment with new hardware or a different database backend.

Entity Framework might be useful in some places. I have yet to work in a domain where it is.


Yeah I think you are right to add the “depending on your environment” caveat. I have no experience in the Windows domain.


I second that OO abstractions shine in UI systems. Most complex UI applications use some form of ECS these days.


I've yet to encounter a single project that does not use an ORM. Everyone uses them, so they definitely stood the test of time. And the popularity of ORMs says more about the issues with the bare-SQL approach. IMO SQL is just bad. Developers want JSON, not CSV. Developers want to specify conditions with lambdas, not with strings.


I don't think the "everyone uses it" argument works here. Sure, it has a lot of momentum, but I'm just not seeing the proposed productivity enhancements. I actually just spent a significant chunk of time de-ORMing a system to get some performance and, more importantly, clarity back.

IMO it's way, way easier to look at some queries instead of some leaky abstraction (on top of the already leaky abstraction that is SQL).

Again, I really don't think popularity is a sane metric anymore. (React, "Astro", Angular, web-dev in general)

Edit: I'm very much in a bubble. A lot depends on your tooling and environment. For example, I can appreciate it being different in an MS environment with stuff like LINQ. I'm not in that environment.


Relational databases are an operational anti-pattern. ORMs have proven that.


Relational databases have shown, and will continue to show more longevity than object orientation.

Facts and the relationships between facts is a deeper foundational principle than the concept mashup of modularity, hidden [mutable] state, dynamic dispatch and interface subtyping that object orientation is formed from. Databases are more long-lived than applications, so applications tend towards adapting to the form of the data rather than the other way around.


The concept mashup of modularity, hidden mutable state, dynamic dispatch, etc. has also stood the test of time, and so have relational databases.

There is a mismatch between the two models, but it can make sense to think of the entities at hand as objects at times, and as relational data at other times. That's why ORMs are a thing, and also why most popular programming languages nowadays are multi-paradigm, so they can also support a more data-oriented approach.


Modeling a business’s processes does not equate to modeling a business’s data.

Starting with modeling data will immediately cripple software engineers' ability to build what the business needs.

Starting with business models and then deciding what data storage is appropriate is a more practical way of designing software. Operational data very often fits in schema-less document storage.

Relational databases are excellent tools for analytics and reporting.

I stopped architecting software with relational databases “first” 7 years ago and will never go back.

Document databases are highly adaptive and denormalized data is inherently faster.

I’ve since found very rare cases where a boundary requires a relational database.

Welcome to my Domain-Driven Design rant.


Relational databases are backed by relational algebra, an actual formal mathematical theory. You can prove theorems like "the output of query X is equal to the output of query Y, but query Y is likely to run faster, so that is what we will run."

You can't outdate math (you can outdate math notation, though, by finding better ways to express the same information). To be fair, SQL is not 100% relational-algebra compatible, but it is close enough.
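
A standard example of the kind of theorem an optimizer leans on is selection pushdown (my wording, assuming the predicate p mentions only attributes of R):

    σ_p(R ⋈ S) = σ_p(R) ⋈ S

Both sides produce the same output, but the right-hand side filters R before the join and usually touches far fewer rows.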


I have the opposite feeling on RDBMSs, based on my limited experience. I think you need a good reason to deviate from them, which is often very large scale. I'd say 99% of applications written are nowhere near this scale.


> But abstraction in programming is never a value in itself and its use needs to be weighed carefully.

I think this generalises well to anything in (or outside) programming. Slavish adherence to any principle without consideration for the underlying merits of the situation is likely to become a negative.


Actually it would have to be numeric codes, as Assembly is also an abstraction. :)


Would be nice to see at least a few of the principles called out. What makes them outdated?

And coupling/complexity are a touch into weasel-word territory for our domain. They can mean things, sure, but they are often used for the emotion attached more than anything else. Worse, as programs grow, they probably should be coupled heavily. Looser coupling is almost always more complex to maintain and of diminishing returns. It's why the tires on your car can't be used on another vehicle. We probably could engineer the "one true wheel", but it would be done at the expense of capability, not in pursuit of it.


This might not be a perspective of a programmer, but the code I've reviewed and tried to understand from academics is always easier to manage when it's not abstracted too deeply. Especially when reading the code is the main reason for trying to understand the underlying idea.

Unless you have an extensive (visual?) representation of how the code flows, abstracted code for beginners might be a hindrance towards understanding what's happening at the lowest level while still maintaining the contours of the whole thing.

Of course when everyone's on the same page and has similar understanding, this is no longer a concern.


Could you share a few examples of what you consider outdated principles?


Having read through the old version I cannot recommend that one either.


Would you care to elaborate?


I also found the outside-in way they solve problems (using stubs for inner calls) very bizarre. I could be wrong, but I don't think anyone codes like that.


I code like that. I got burned one too many times spending hours/days on an inner call that turned out to be unnecessary. When you solve outside-in, you are sure you're solving the right problem in the right way without overcommitting to intermediate steps.


"Using stubs for inner calls" is literally a top-level description of London-style TDD.


Only Detroit school! Mocks should be banned (unless they mock an external system). London-style test cargo-culting is one of the reasons why OOP got a bad name.


Leaving mocks in place once you're done should be banned, I agree with that. I think they're fine as an exploratory tool to figure out what an interface should look like.


Mocks mean that a side effect is necessary (and that's quite rare), so their use is justified in such a case. If a stub is needed, it mostly goes away when the code is refactored to a more functional style.

Unfortunately, most OOP developers bury side effects at the bottom of the stack, instead of passing back an object that describes the side effect to happen (which is then executed at the shallow and simple application layer).

Said result object can be tested like any other function result, and no mocking is required.
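
A minimal sketch of that style in Racket (names are hypothetical, not from any particular codebase):

    #lang racket
    ; The core logic returns a value DESCRIBING the effect:
    (struct send-email (to subject body) #:transparent)

    (define (welcome-action address)
      (send-email address "Welcome" "Thanks for signing up!"))

    ; Tests compare plain values; no mocking required:
    (equal? (welcome-action "ann@example.com")
            (send-email "ann@example.com" "Welcome" "Thanks for signing up!"))  ; => #t

    ; Only a thin shell at the application layer performs the effect:
    (define (perform! action)
      (printf "sending to ~a: ~a\n"
              (send-email-to action)
              (send-email-subject action)))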


I love this sort of approach for one-shot externalities but what about when your entire program is a conversation with external components? My current project coordinates software repositories and services within AWS and I find myself using a lot of mocks in testing.

I can return a tree of lambdas but then I have to resolve them against something and that's just replacing mocks with lambdas, really. Not sure it's any better in practice.


Great question! I've worked with two approaches: 1. sagas, a centralized place to handle business flows, and 2. an event system.

Start with 1 and eventually (pun intended) move to 2. Both ways allow parallelized interactions with external systems, deferred decisions, etc., whatever the business requires.


Sorry I wasn't clear, I'm talking about imperative code that coordinates multiple external actions. In a world where it's modeled as jobs in a distributed system, I agree that each job can be nicely functional (and I love this pattern).


> Mocks means that there's a side effect neccessary

Eh, kinda. London style says to use mocks as you work down the call stack, whether or not there's a side effect. At some point you might hit an edge where you're calling into third party code for a side effect (and all side effects are calling third party code), but that's not really the point.

This is where the "never mock code you don't own" principle comes in: if you're mocking out third party code, it needs to have a wrapper of your own code to hide it, so you're in control of the interface. At least in theory. You want to be passing that thing in anyway, so for the tests you pass down a MockThing, or a NullThing, or an InMemoryThing. That way the side effect can happen at the lower level, but the choice of exactly whether there's an observable side effect is still in control of the top-level application.

Really you and I are describing two different ways of achieving the same result: moving side effects to somewhere they're easy to handle independently of the logic, whether that's functional core/imperative shell, or dependency inversion. You don't really need mocks for either because the only time you're actually testing the side effect itself is probably an integration test, but they're a useful tool to get to that point.
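
A rough Racket sketch of that "pass the thing in" approach (hypothetical names; a stand-in for the MockThing/InMemoryThing idea):

    #lang racket
    ; The sender is passed in, so the top level stays in control
    ; of whether an observable side effect happens:
    (define (notify send! user)
      (send! user "Welcome"))

    ; Production passes a real sender; a test passes a recorder:
    (define sent '())
    (define (recording-send! to msg)
      (set! sent (cons (list to msg) sent)))

    (notify recording-send! "ann@example.com")
    sent  ; => '(("ann@example.com" "Welcome"))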


> Abstractions lead to coupling and complexity.

Without abstractions, we would still be programming using punched cards.


In fact, it could be useful to elaborate a bit.


Primarily this book completely ignores communication with subject matter experts and the business. This is fundamentally more important than any language, process, or platform.


> Primarily this book completely ignores communication with subject matter experts and the business.

Probably because this isn't an advanced book on Software Engineering, but an introductory book to programming.


This is wrong, it’s a core part of the design process as described in the book.



