Do We Worship Complexity? (innoq.com)
260 points by ScottWRobinson on Oct 16, 2018 | 256 comments



I was interviewed recently. I was asked to implement a solution in Java. It was essentially a stack-based computation engine. The interviewer posed a question and there were enough tacit hints for me to see what he really wanted: an "OO" approach using a type hierarchy + command pattern. He offered unsolicited prompts and I took them: I coupled the command objects to the engine. He liked that.

What we ended up with was O(n) _classes_ (where n = the number of computation operations).

I remained in Java, but took a functional and dynamic approach yielding a three-line implementation of the engine where each command was simply a function. All the tests passed.
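
Roughly the shape I mean, as a sketch (the operation names and the engine loop here are my own illustration, not the actual interview code):

    import java.util.*;
    import java.util.function.BinaryOperator;

    public class StackEngine {
        // Each "command" is just a function; adding one is adding a map entry.
        static final Map<String, BinaryOperator<Long>> COMMANDS = Map.of(
            "ADD", (a, b) -> a + b,
            "SUB", (a, b) -> a - b,
            "MUL", (a, b) -> a * b);

        // Essentially the whole engine: push literals, pop/apply/push for commands.
        static long run(List<String> program) {
            Deque<Long> stack = new ArrayDeque<>();
            for (String token : program) {
                BinaryOperator<Long> op = COMMANDS.get(token);
                if (op == null) {
                    stack.push(Long.parseLong(token));
                } else {
                    long b = stack.pop(), a = stack.pop();
                    stack.push(op.apply(a, b));
                }
            }
            return stack.pop();
        }

        public static void main(String[] args) {
            System.out.println(run(List.of("2", "3", "ADD", "4", "MUL"))); // (2+3)*4 = 20
        }
    }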

I was kind of holding my breath at this point. What will he say? We just went from this huge implementation to three lines. I thought he was going to love it. The crazy simplicity of it all.

But to my jaw-dropping surprise he muttered something about adding new types and how he liked that the commands were coupled to the engine.

Is this complexity worship? Or what is this? It's almost as if his brain was refusing to see the solution. Like it couldn't possibly believe that the solution could be that simple. So it just ejected it, put a big black censorship block over it.

I'm currently reading a book on brain hemispheres. Apparently experiments have shown that the left hemisphere will completely and blatantly reject obvious evidence even if the right hemisphere 'has a clear counter proof'. Sometimes I think our industry suffers from an abundance of left hemispheric dominance. Or something like this...


A few years ago I needed an emulator for version N of a popular microcontroller. I found an emulator (written in Java) for version N-1 of the same microcontroller. It was written in the style you describe: one derived class per machine instruction type; the program to run in emulation is a collection of these, referenced via their abstract base class. It was quite pleasant to extend, actually: I simply made classes for the new instructions available in the new version of the microcontroller, added instances of them to the giant table mapping codes to instruction objects, and It Just Worked.

I do note that that is an atypical experience of extending programs in general, and I don't have the alternate, functional approach to compare it to (the emulator was written in a very old version of Java + Swing).


In functional style, you'd probably have a table mapping machine instruction type to a function. These days, even Java (since version 8) supports that, and it roughly compiles down to the same thing (due to how Java implements lambdas).

This case sounds simple, but I still expect the functional approach to save you from a lot of line noise coming from Java class bureaucracy. But in more complex cases, it may be a difference between a couple dozen lines and a couple dozen files.
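
For concreteness, a rough sketch of that shape (the Cpu state and the opcodes are made up):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Consumer;

    class Cpu {
        int a, pc;                        // hypothetical accumulator and program counter
        int[] memory = new int[65536];
    }

    class Dispatch {
        // One map entry per instruction; each handler is just a lambda.
        static final Map<Integer, Consumer<Cpu>> HANDLERS = new HashMap<>();
        static {
            HANDLERS.put(0x00, cpu -> cpu.pc++);                 // NOP
            HANDLERS.put(0x01, cpu -> { cpu.a++; cpu.pc++; });   // INC A
            HANDLERS.put(0x02, cpu -> { cpu.a = 0; cpu.pc++; }); // CLR A
        }

        static void step(Cpu cpu) {
            HANDLERS.get(cpu.memory[cpu.pc]).accept(cpu);
        }

        public static void main(String[] args) {
            Cpu cpu = new Cpu();
            cpu.memory[0] = 0x01;   // INC A
            cpu.memory[1] = 0x01;   // INC A
            step(cpu);
            step(cpu);
            System.out.println(cpu.a);   // prints 2
        }
    }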


On the other hand, the OOP approach allows you to add things like printing the function name (for disassembly views), metadata attributes about which flags are affected, clock cycles, etc. You could argue that each of those could be a separate table. Funny how it comes back to the array-of-structs vs struct-of-arrays design question, or maybe I'm just seeing that everywhere because I've been thinking about Mike Acton's data-oriented design talk.


You get that with Java 8 and later, because you can just define a FunctionalInterface and then use a mapping table from opcode to the function to use, and it gets really compact. If you want to decorate arbitrary functions, that should be fairly trivial too.

Wrote something small like that for a random data generator for an in-house property-based tester in Java a short while back. It makes things easier, in my opinion, when you have all the code available on a single screen rather than spread out.
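
Something like this, as a sketch (names invented): bundle the handler with its metadata in one small value type, so the mnemonic, cycle count, and so on live in the same table as the behaviour:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Consumer;

    class Cpu { int a, pc; }              // hypothetical CPU state

    class Instruction {
        final String mnemonic;            // for disassembly views
        final int cycles;                 // affected flags etc. could be added here too
        final Consumer<Cpu> effect;       // the behaviour itself, still just a function

        Instruction(String mnemonic, int cycles, Consumer<Cpu> effect) {
            this.mnemonic = mnemonic;
            this.cycles = cycles;
            this.effect = effect;
        }
    }

    class Isa {
        static final Map<Integer, Instruction> TABLE = new HashMap<>();
        static {
            TABLE.put(0x00, new Instruction("NOP",   1, cpu -> cpu.pc++));
            TABLE.put(0x01, new Instruction("INC A", 2, cpu -> { cpu.a++; cpu.pc++; }));
        }
    }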


Clojure. Clojure is extremely data oriented.


I think most of the data-oriented-programming stuff referenced nowadays is not about conceptual data (how humans understand it), but data in the hardware (how machines understand it). In that regard, in Lisp you can't really control memory allocation/alignment as flexibly as in C/C++ (due to its conceptual assumptions about data), so I don't know how you would do DOP well in Lisp.

(although it might sometimes serve well as a configuration language alongside high-performance C++ code like in gamedev...)


Those are two different kinds of programming, which unfortunately get similar labels.

The C++/gamedev "data-oriented programming" is about structuring your data in a way that's friendly to the CPU cache. It turns the programmer-friendly abstractions inside-out in order to make execution as efficient as possible.

The Clojure/Lisp/functional "data-oriented programming" is about building your (programmer-level) abstractions around basic data structures like lists and maps. It's meant to simplify your code and make it more robust.

The two "data-oriented programming" paradigms are pretty much opposites in every way except the name. They both focus on data, but the resulting data structures are vastly different.


Actual Lisp implementations need to interface to the C/C++ side, thus they have non-standard / quasi-standard ways to deal with raw/foreign memory and with C types.


I'm... not sure how we went from replacing a function-like object with a function to adding disassembly and clock cycles. Am I missing some context here?


Can't edit or delete anymore so replying to myself. I confused the subthreads and thought this was still the "interview question" part. Obviously the assembly/clock cycles are relevant to phaedrus's example.


IMHO the simplest and most straightforward solution for a CPU emulator is a single (or perhaps nested) switch statement. Finding the right case and adding code there is much less annoying than the overhead of creating and updating classes and likely also separate files.
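
A rough sketch of what I mean (opcodes made up), with the whole decode/execute step in one place:

    class Cpu { int a, pc; int[] memory = new int[65536]; }

    class Emulator {
        // One switch = the whole instruction set, visible on one screen.
        static void step(Cpu cpu) {
            int opcode = cpu.memory[cpu.pc++];
            switch (opcode) {
                case 0x00:            break;  // NOP
                case 0x01: cpu.a++;   break;  // INC A
                case 0x02: cpu.a = 0; break;  // CLR A
                default: throw new IllegalStateException("unknown opcode: " + opcode);
            }
        }
    }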

On a related note, this is also why I don't like at all the current C#/Java "modern" style of dozens of files with perhaps one or two actual useful lines of code (the bulk of those files being boilerplate fluff and stupid comments like "// @return the return value".) Debugging is especially irritating since stepping through the code becomes substantially nonlinear with deep call stacks.


Over the years I've heard some good advice on this sort of thing attributed to Bertrand Meyer. I keep meaning to track down citations.

There are conversational styles you can apply to code that make it simple to test (or, in his case, to make coherent assertions about), and that still keep the methods small but cleave the code in a way you can still track in your working memory.

It’s a style I aspire to, and put a great deal of time and energy into. But reading my old code I can say it’s harder than it sounds and it doesn't sound that easy to begin with.


IMHO a CPU emulator is so simple and well specified, that most any programming style will render a nice solution.

(A CPU almost by definition must be simple to implement.)



Interestingly, you managed to illustrate the O(n) debate above pretty well.

O(n) is shorthand for "here is what happens when n gets large, but I'm telling you nothing about when n is small". For example, some sorts are O(n log(n)) and some are O(n^2). Naturally for big sorts O(n log(n)) is preferable - but when n is, say, 5, the O(n^2) sort will often be faster. People use O(n) precisely because it carries all this unsaid nuance.

In your example you probably have upwards of 30 instructions, possibly hundreds, which sits comfortably on the big side of things. If there were a way to reduce that to an O(1) solution (which would imply you could add many new instructions without adding lines of code) then you've coded it badly - but we all know that's not possible for a microcontroller emulator.

In his example he used O(n), but implied an O(1) solution is possible. The same logic applies. His use of Big-O implies that, for large n, it's possible to add many new cases without adding code. If that is true, his solution is far better than the one the interviewer wanted. However, it's likely n was small, because this code had to be written for a job interview. It's entirely possible he managed to find enough commonality in that small number of cases to reduce it to 6 lines, while the real-world problem is indeed O(n). In that case the correct solution depends on the instructions he was given - whether he was asked to solve this particular problem in an optimal fashion, or to demonstrate how he would solve the problem in general, using those n cases as an illustration.

He doesn't say, so we don't know. His use of O(n) does hint - but the grey beards among us would like to see proof the problem really can be solved with O(1) and the interviewer was demanding an O(n) solution despite that. It does seem like a really weird thing to demand.


I think I can explain this. At some point in the past the interviewer has had to get to grips with OO programming. This isn't easy for everyone, myself included. But eventually it all clicks. And it's a great thing. Then you start learning about design patterns. You struggle to get to grips with these, you try to memorise them, you don't quite feel comfortable with them; but one day, it all clicks. And they are great. And you feel a massive sense of accomplishment. In software development, everything moves so fast, but the feeling that you have actually learned something that is almost timeless and will outlast the languages and frameworks that come and go every week is a rare feeling. And I really feel that knowledge is priceless. Try then telling them not to bother with all that and write 3 lines of code :-D

If someone has invested a lot of time trying to understand OO and design patterns like this, and has had that feeling of accomplishment when it all comes together, then they are going to want to apply them wherever they can.

I think all that stuff is invaluable knowledge but people shouldn’t fall into the trap of doing something one way just because they have invested the most time in learning how to do it that way.

Also, many software professionals probably find simplicity boring. Simplicity should be taken more seriously as a goal in software development. Sigh.


Wonderful insight!


What that kind of person needs to understand is that "code is a liability, not an asset". Every piece of ad hoc code is added maintenance cost, hence the less code the better, no matter the programming language or paradigm.


That’s what I feel too. It’s like in The Goal by Eli Goldratt. The machinery of building product is all cost. Inventory is cost. Machinery is cost. Only finished sold product is revenue.

Analogously, code is cost, infra is cost, only our sold product is revenue.

A thing I did not understand for a long time is the effect this has on value. Something that produces a low value over a long time may well have been value-negative over the period because of the hidden carrying cost.


I overall agree but I'd like to add some nuance: one-liners are hella compact but nowhere near maintainable in many cases. For instance, 100% of code golf competition results should never be allowed in production.

Also there may be a balance between "some hierarchy in the code so it can evolve" and ad hoc code that just works for the current problem. Only having to write a new class for a new feature, because everything else is handled smoothly, is bliss.


Wanted to state the main idea first. I definitely agree about one-liners, Perl and APL come to mind.

There are some other aspects to this idea of code economy: correctness and reusability. This definitely depends on the programming language.

In Java (or OOP for that matter) you can get correctness, but to get reusability you need a whole hierarchy of classes: lots of added code/cost.

In Python you get more reusability out of the box, but to get correctness you need to add swaths of unit tests: lots of cost.

In contrast, with a functional language such as Haskell you get both with less effort; FP plus a rich type system reduce code enormously.


Same with Rust, to some extent -- in some cases it's extremely hard to get a program to compile, fighting with the borrow checker along the way, but when it's done and done you can be fairly sure it won't segfault due to a dangling pointer or something stupid of the sort. It also makes it easier for you to trust the code written by others, e.g. when working on a large codebase collaboratively.


Rust is another favorite of mine, and with Rust you also get efficiency besides correctness and reusability.

I wish Rust had gone with an ML syntax though; in particular, Hindley-Milner type signatures allow me to reason better about code reusability. The "fn" and braces just add noise to me - too much Haskell, I guess.


I have to agree with this comment. More often than not one-liner rewrites of even trivial functions are unreadable and can't easily be modified, all for the sake of having the appearance of elegance. It's more important to have code that is easily understandable and extendable, which isn't always the case with fewer lines of code.

For example I could write a massive list comprehension in Python in one line that could solve my problem, but it would be much more difficult to read and would be a pain to modify. I'd much rather work with the 10 lines of code it would take to actually spell it out.


There might be some sort of cognitive bias going on here. But, I think the greater driver is more "political."

Complexity creates opportunities for fiefs, little areas only one person understands.

I think we don't recognize what drives a good portion of people in the tech industry. People pretend to, and are often expected to, understand more than they do. Complexity helps us when reporting or managing (up and down). It's harder to grill you, challenge you or commit you to things when everything is complex. Complexity requires more people, which is the main scorecard for corporate success.

It's also just unintuitive that simple is harder and better than complicated. Even if you get it, someone else won't.


And those little fiefdoms, well maintained, can be easy domains to control and reshape according to the needs of the relevant stakeholders. Whereas an overly simple solution can end up doing one job well but being too difficult to modify or extend without essentially rewriting it. Depends on the problem you're solving.


What led you to this conclusion?

I've come to the same one. I think it explains a great deal of what we see, even in the hiring process and such. It's actually very primitive at its core: who's part of whose tribe and who is jockeying for status in that tribe.


When I was in college, one of the candidates for an open position in the engineering school gave an open talk, which I attended. He had this whole thing where he researched the right/left brain tendencies of engineering students. He actually had two axes on his charts, but I don't recall what the other axis was, so let's keep to the left/right (logical/creative) axis. He said most students entering engineering were more left-brained. He also said that retesting people exiting engineering showed a shift. Not that the right brains dropped out, but that the same students tested more to the left after an engineering education.

It wasn't clear if that was a shift in thinking or just improving one's skills at a particular way of thinking. In other words, no evidence of any loss, just a shift in score on a right/left test. The other notable thing he said is that the right-brained people who complete an engineering degree were rather likely to spend some time working in the field and then just change to something completely different.

I remember this well because I tested as a dominant right-brain thinker and have always had this odd feeling that I might want to just stop and do something completely different. Fortunately I've been able to find ways to use creativity in engineering :-)


> left/right (logical/creative) axis

The logical/creative dichotomy is bullshit based on fake science.

In reality the left brain (dominant for most people) prefers sequential, step-by-step thinking, while the right brain prefers simultaneous and spatial thinking.

Math professionals are actually biased towards right brain people. (It's hard to get anywhere in real math using step-by-step algorithms.)


>> The logical/creative dichotomy is bullshit based on fake science.

>> In reality the left brain (dominant for most people) prefers sequential, step-by-step thinking, while the right brain prefers simultaneous and spatial thinking.

I've never seen anyone make such a direct contradiction so clearly ;-) You define left-brain thinking the same way the "bullshit" does, and then you define the right brain in a way that reflects creativity - at least to me.


>simultaneous and spatial thinking

But can it really be simultaneous (at the same time)? I mean, there's still a "micro sequence" involved. It seems that the real skill here is context switching, pattern recognition and applying patterns to different contexts.


The brain can think about things without you "hearing" it. Then it just mashes everything together in a coherent result that you can actively think about.

When I have to think about something hard I try to do it without words in my mind. I believe this makes it easier for the brain to subconsciously work on those, without having to "translate" it first.

However I still use "pictures" in my mind. I wonder if it's possible to actively think about something without using things linked to our physical senses (hearing, seeing, touch,..).


>The brain can think about things without you "hearing" it. Then it just mashes everything together in a coherent result that you can actively think about.

Sounds like intuition based on experience, where a new situation is compared against your experience, with the brain filling in the gaps.

>When I have to think about something hard I try to do it without words in my mind.

Hard in the sense of difficult to solve and unfamiliar, or in the sense of the level of focus or concentration?


Probably you're right, yeah.

For example, I know a heavily right-brained person, and when he reads a book, instead of the intuitive sequential pattern (even page-odd page-even-odd-even-odd) he may read it in a zig-zag (even-even-odd-odd-even-even).

The analogy is synchronous processing (left brain) vs multithreaded (right brain).

On the other hand, right-brained people seem to have problems doing 'obvious' things like finishing chores in order, telling left from right or positioning something on a blank page.


Is there really such a thing as left or right brained? Here's [1] an article discussing this. In this talk, what defined a person as "left brained" or "right brained"?

1: https://www.health.harvard.edu/blog/right-brainleft-brain-ri...


> Is there really such a thing as left or right brained?

Yes, there really is, but the way it's described in American media is a load of crap. I don't think anybody published actual real science about this in English.


Such a left-brained thing to say :)


Anecdotally, I've noticed a significant difference in how I think in parts of my life where I was studying philosophy vs parts of my life where I've been doing primarily software development. The whole body pretty much works on the principle that use increases capability, so it makes sense...


I had a job working for a large Canadian company several years ago.

Coming in, I was told that the specs were difficult, the tech had to be nailed, and they had already tried and failed.

I took a week getting used to the customer, stack, and team. Digging through what they had, I found they needed two lines of code. It was the same exact situation.

That was one of the more difficult consulting assignments I've had. It took me quite a while to gently get the lead architect on board. I had to position it so it looked like his idea. If I had pushed hard, he would have pushed back... perhaps getting rid of me and hiring more of a team player.

I've seen this a lot, in a lot of teams, and I think it has its roots in OO/Platonism, although it happens all over. I think the key thing most of these teams are missing is that all structure is derivative. That is, there's not a line of code in your app that isn't there for a reason. You're either really careful about that reason... or you add stuff because it looks good, or you might need it someday, you can "imagine" how it might be helpful, the guy sitting next to you said you had to have it, and so on.


The longer I've been in programming, the more I've noticed just how much people defend "their way" of solving a problem.

If you do it differently or with a different language, there must be some negativity to latch onto in order to justify the way they did it. The simple ability to say "Oh...cool! Hey Bob, come look at what Jim figured out!" is a rare find.


I was asked in a Google interview to randomize a List. I wrote Collections.shuffle(list); The interviewer was not pleased.


It's normal to not be pleased when a candidate fails to understand a question (or exhibits smart-ass behavior), though in this case the easy and correct thing for the interviewer to do is to say "OK, how is Collections.shuffle implemented?"


> though in this case the easy and correct thing for the interviewer to do is to say "OK, how is Collections.shuffle implemented?"

And very likely that's what really happened. I have been in such interviews and it is sometimes hard to guess whether the interviewer wants the practical answer (calling a library function) from an engineering standpoint or the conceptual answer (demonstrating that I can implement the inner workings of the function) from a CS standpoint. Often I would just ask a follow-up question to clarify exactly which level of abstraction the interviewer wants my solution to be at.

In situations where I offer a solution that does not match the interviewer's expectation, they clarify the expectations.


I think it's stupid to ask an interviewee to demonstrate something they should never do at their job.


I think it's stupid to assume that a decision regarding time complexity would never come up at their job. Where I work right now, and especially in Google (not where I work right now), it is important that one understands time and space complexity issues well. The data sizes are huge and optimization skills are highly valued.

Sure I can teach constant vs. linear time. But what incentive or reason do I have to spend time teaching these fundamental concepts when I can just hire an engineer who demonstrates the understanding of basic CS at the interview time itself?

Given two candidates with all things equal except that one demonstrates the understanding of CS fundamentals and other does not, why would I want to hire the second person and spend our time teaching him those concepts?


>why would I want to hire the second person and ...

Someone commented on here recently that they would take motivated candidates over knowledgeable candidates. That's one reason. I can easily think of at least two others.

There are balance points here that vary according to all sorts of things.


I said, "Given two candidates with all things equal except ..."

I mean, if there are two candidates who are equally motivated and are more or less equal in all things except that one is strong at CS fundamentals and the other isn't, is there a good reason to reject the candidate who is good at CS fundamentals?

In many hiring decisions, I am faced with a similar choice, and I go for the person who is good at CS fundamentals. If two candidates are good in all other ways, then the understanding of CS fundamentals becomes a tie breaker. I see no rationale for selecting the guy who does not demonstrate his strength in CS fundamentals.


I don't really agree ... doing thing A well usually means understanding B, C and D which underpin A. It's actually quite useful to ask people about B, C and D because if you only ask them to do A you don't know if they are doing A the right way because they really know what they are talking about or if they are just cargo culting something they pretend to understand but have no deep knowledge of.


With most people, it is impossible to gauge how good they are just by observing what they do in half a day on the job. Thus, any interview process must necessarily be somewhat artificial in trying to gain insight into a candidate's abilities in the span of a half a day.

A real life task might be implementing a much more complex library over several days or weeks. Asking the person to implement a simple function like shuffle is the closest you can get in the span of an hour.


And on the opposite side, my wife once had a job interview where they asked her to reverse a string. She wrote some code on the board to do it, and the interviewer just dismissed it with something along the lines of "No, that's not right. Try again."

She thought of a different method and wrote it out, and they dismissed it again. After that, she said that she didn't know what they wanted, and they condescendingly told her that she should just use the equivalent of a string.reverse() function (I don't remember which language it was).


Stories like yours and the grandparent comment give the impression that there is little communication actually happening in an interview. I guess if it was an at-home assignment given in an email you might not know what the interviewer wanted, but in an actual in-person interview why wouldn't you just say "Well, you could do String.reverse(); is that what you're looking for, or would you like me to implement a solution without using String.reverse() or some other way?"

And on the other side, if the interviewers are looking to identify whether a candidate knows about String.reverse() and the candidate starts writing out some 20 line algorithm on the whiteboard, why would you just sit there and watch instead of being like "actually, we're looking for something else, let me ask it a different way: do you know how to reverse a list using the standard library? We're looking for a one-liner here, not a full algorithm implemented on the board."

If your story is true, then apparently there are at least some interviewers willing to just sit there while a candidate writes out 2 completely different algorithms on a whiteboard without interrupting and communicating what they actually want. Such an interviewer is either socially incompetent or trying to make a fool out of the candidate. Either way it doesn't reflect well on the company.


Yes, you would think. But for some reason, a lot of interviewers seem to treat interviews as an opportunity for them to prove that they're superior to the person that they're interviewing.


Yup. If I get even the slightest inkling of this attitude from the interviewers, I will refuse to join the company. I'll give a pass for one smug interviewer, but if I encounter multiple such interviewers, I assume it's not a company I'd like to work at.


That or they have already made up their mind and are just looking for an excuse to reject the candidate.


When I was interviewing for my current job I was asked to theoretically sort a list with N elements where N >> M, where M is the total number of bytes in RAM (but with infinite disk). I thought, OK, easy. I basically explained every step of MergeSort, except that instead of doing all the operations in memory, we do some operations in memory (i.e. merging) and then dump intermediate products to disk. You need to be a little more clever, since you can't just merge in memory and save to disk (e.g. you can't perform the very last step of merging two N/2-sized arrays in memory since N/2 >> RAM). Anyway, I was confident in my answer, and my interviewer sounded happy too. Then he asked me "but why don't you use swap?" Hmm, I was very surprised. Since this wasn't a coding question (I had already been asked dozens of questions involving programming) I thought this was a purely theoretical question. So I never thought about the systems aspect. On second glance, it indeed seemed like I had just reinvented swapping.
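
For concreteness, a rough sketch in Java of the shape I described (the chunk sizes, file handling and text format here are my own illustration, not what I said in the interview):

    import java.io.*;
    import java.nio.file.*;
    import java.util.*;

    class ExternalSort {
        // Phase 1: read fixed-size chunks, sort each in memory, spill each sorted "run" to disk.
        static List<Path> makeRuns(Path input, int chunkSize) throws IOException {
            List<Path> runs = new ArrayList<>();
            try (BufferedReader in = Files.newBufferedReader(input)) {
                List<Long> chunk = new ArrayList<>(chunkSize);
                String line;
                while ((line = in.readLine()) != null) {
                    chunk.add(Long.parseLong(line));
                    if (chunk.size() == chunkSize) { runs.add(spill(chunk)); chunk.clear(); }
                }
                if (!chunk.isEmpty()) runs.add(spill(chunk));
            }
            return runs;
        }

        static Path spill(List<Long> chunk) throws IOException {
            Collections.sort(chunk);                      // in-memory sort of one run
            Path run = Files.createTempFile("run", ".txt");
            try (BufferedWriter out = Files.newBufferedWriter(run)) {
                for (long v : chunk) { out.write(Long.toString(v)); out.newLine(); }
            }
            return run;
        }

        // Phase 2: k-way merge of the runs, holding only one value per run in memory.
        static void merge(List<Path> runs, Path output) throws IOException {
            PriorityQueue<long[]> heap =
                new PriorityQueue<>(Comparator.comparingLong((long[] e) -> e[0]));
            List<BufferedReader> readers = new ArrayList<>();
            for (int i = 0; i < runs.size(); i++) {
                readers.add(Files.newBufferedReader(runs.get(i)));
                String first = readers.get(i).readLine();
                if (first != null) heap.add(new long[]{Long.parseLong(first), i});
            }
            try (BufferedWriter out = Files.newBufferedWriter(output)) {
                while (!heap.isEmpty()) {
                    long[] top = heap.poll();             // smallest pending value across runs
                    out.write(Long.toString(top[0])); out.newLine();
                    String next = readers.get((int) top[1]).readLine();
                    if (next != null) heap.add(new long[]{Long.parseLong(next), (int) top[1]});
                }
            }
            for (BufferedReader r : readers) r.close();
        }
    }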

I think CS interview questions can be hard because sometimes you don't know what sort of "model of computation" you are operating on. Am I in a totally abstract setting where all I have is an abstract machine? Or do I literally have an Intel CPU running Linux? Or am I even higher level than that and can think in terms of the abstraction of python. I think sometimes this is not clarified.

One other time, in a different interview, I was asked "how does the OS free memory in constant time?" Having implemented malloc etc. a few times, I thought this was a stupid question because it depends on the C library implementation of malloc/free; one could also implement free in O(log n) using tree-like structures. Anyway, I said something like "it just clears the pointer in the linked list in sbrk()'d space so that the node is inaccessible", which apparently was the "correct" answer.


>"Or am I even higher level than that and can think in terms of the abstraction of python. I think sometimes this is not clarified." //

Which may be the point, they want you to ask "what am I optimising for", to be aware that there's no simple best answer without needing prompting. Also that the optimisation might be at the business level, like time critical implementation, or use of excess resources gleaned from some other part of the corporation.

So instead of them telling you they want a one-liner, you say "what are our constraints; how long have I got to implement it, ...".


Well, that is very informative, but probably not in the way the interviewer intended. Certainly doesn't sound like a great place to work?


It's clear to me that in a tech interview, unless told otherwise, it is your algorithm skill that is being tested. If I were you, I would begin talking about Fisher-Yates, outline its implementation, explain why it correctly shuffles the list, show why it's an O(n) solution, etc. I might even talk about alternatives.

It's always better to demonstrate in an interview that you have a lot of knowledge about whatever is being asked. It's not just about being able to code, but also the ability to explain why and how it works.
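
For reference, the textbook Fisher-Yates shuffle is only a few lines (sketched here for an int array):

    import java.util.Random;

    class Shuffle {
        // Fisher-Yates: walk from the end, swapping each element with a uniformly
        // chosen element at or before it. O(n) time, in place, unbiased.
        static void shuffle(int[] a, Random rng) {
            for (int i = a.length - 1; i > 0; i--) {
                int j = rng.nextInt(i + 1);   // uniform index in [0, i]
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            }
        }
    }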


Sometimes tech interviews will ask questions that are more along the lines of API memorization. They can be valid questions too, I've worked with "senior" developers that would implement their own sort algorithms (in .net) rather than use the built in ones. "Does this enormous framework I'm using have a way to do x" is a question some developers never ask.

It all comes down to the question asked: if they were literally asked to randomize a list then OP gave the correct answer; if they were asked to implement a list randomizer they were wrong. If the interviewer didn't make this explicit, they were wrong.


This is completely subjective and depends on who is interviewing you. I could give a crap about someone's deep algorithm knowledge. I want to see them write productionizable code.

Granted, there should be 1-3 people around who can be tapped for algo knowledge when bottlenecks need to be addressed. However, the rest of the dev team just needs to worry about productionizable code...


> I could give a crap about someone's deep algorithm knowledge.

That's just proving the point of the person that you replied to, though. If you could give a crap then you obviously care about it.


Often the people who know the algorithms struggle to write maintainable code that isn't esoteric garbage to everyone but them, and will fight to the death to say their way is correct. So your mileage may vary...


That's a reasonable answer.

Of course, the follow-up question is going to be: Randomize a list that doesn't fit into memory.


I would have failed you as well. The point is to test whether you have a clear picture of what goes on one layer of abstraction below or if you are confined to the surface.

I don't get the fuss about interviews. They seem to largely consist of basic programming exercises.

You don't need to know the solution beforehand and it is easily intuited on the fly. I would never hold it against someone to miss some corner cases or maybe go for a naive implementation first.

I remember a few years ago, someone was complaining that they had been rejected for not being able to reverse a binary tree even though they had a copious amount of OSS.

The thought process is simple and it's an exercise that students do within the first few weeks of their freshman year.

    (struct node (value left right))

    (define (reverse-tree root)
      (cond
        [(equal? root 'EMPTY) root]
        [else
             (node (node-value root)
               (reverse-tree (node-right root))
               (reverse-tree (node-left root)))]))
    
A tree is intuitively defined as a recursive data-structure.

There is one base case: when we reach an empty (sub)tree.

We want to reverse the left and right subtree at every stage of recursion.

Combine all of this and it's done. I would even be content with a pseudo-code implementation.


I would have given a +1 for creativity. If the interviewer didn't say that standard libraries were off limits, that's fair game IMO.

I've asked similar questions, and if I forget to state all the constraints, a simple "hah, that's clever" plus reframing the problem is a reasonable approach and doesn't come off as being a jerk (keep in mind the candidate is interviewing you/your company as well).


"Implement a method that shuffles a list" vs. "Shuffle a list"

I would want the candidate to ask me whether they can use their language's standard library. I would say no/yes depending on how the question is framed.

The absolute worst is when the question is framed like the first one, the candidate just makes an unwanted assumption, and then acts outraged or smug when I tell them I expect them to demonstrate that they aren't oblivious to the underlying implementation.

Then we venture off into whether they can make improvements/trade-offs if the list becomes 10K, 1MM, 1B, etc. records of N bytes, etc.


I'm much more concerned if the candidate is a person that shows up on time, with a good attitude, takes heat and shares credit, builds people up and isn't afraid to speak their mind. If they have a rudimentary understanding of the time constraints of linear vs constant time that's a bonus, otherwise I could teach them that.


I come across candidate A who shows up on time, with a good attitude, takes heat and shares credit, builds people up, isn't afraid to speak their mind and offers a smart-ass answer to a problem designed to evaluate whether the candidate understands the underlying concepts.

Then I come across candidate B who shows up on time, with a good attitude, takes heat and shares credit, builds people up, isn't afraid to speak their mind and offers a genuine and insightful answer to a problem designed to evaluate whether the candidate understands the underlying concepts.

I am definitely going to hire candidate B.


And where did they give a "smart-ass answer"? A question was asked, an answer was provided. Does the answer solve the problem? Yes. If you want to know more then ask. If you want to hear something else that's a failure of the interviewer, not the interviewee.


But all this amounts to trivia questions about things you would never actually do or have to know when working as a software engineer.

The point of a library function is that I don't have to worry about how it works, so I can focus on writing things that are not already in libraries!


> The point of a library function is that I don't have to worry about how it works, so I can focus on writing things that are not already in libraries!

The point of a library function is so that it doesn't have to be re-implemented everywhere, not so you can be oblivious to how it works. If you're oblivious to what is going on behind every function call, you will write terrible code.


I was asked in a Spotify interview to reverse a string as a warm up. When I wrote std::string(s.rbegin(), s.rend()) the interviewers seemed pleasantly surprised.


To be fair, using a library to do what you want is not what they're asking for. This is completely different from the functional vs OO discussion above.


On the contrary, if someone that worked for me needed to randomize a List and implemented their own function, I would be gravely concerned about the rest of work and what they're spending their time on.


And when Google implemented Java for Android, how, pray tell, should they have implemented Collections.shuffle?


Implementing a stdlib isn't a thing you do every day. Presumably GP was referring to more common situations, where using the stdlib is (usually) preferable to rolling your own solution.


They were interviewing for Google. It's not uncommon to have to implement a standard library function (or not be able to use one) in many contexts at Google. Besides, the question is just the most basic setup to assess how a candidate would fare in comparable situations.


They were interviewing for Google.

99% of developers at Google are doing commodity work. It is practically IBM or HP now.


Is it though? Because the answer to "suddenly I need to implement a standard library" is "I google the accepted solutions", and if I don't have Google then I go to the library and check out a bunch of old CS books.


And then the interviewer's follow-up question should be "fine, since you don't want to indulge me and implement a solution to this toy problem, I'll ask you to implement a much more complicated function which doesn't exist in any library!"


They grabbed it out of Harmony.


But you would not be concerned about a candidate that does not understand that this question is clearly (very clearly) meant to assess how they understand and approach algorithms and randomness?

It's obvious that you should use a library function if there is a library function.


A question like "how do you reverse a string" surely is like asking a chef to cook an egg? Any idiot can do it; it's got to be an opener onto "who are my customers?", "what are the cost and time constraints?", "what team members do I have?", "what's our angle?"... If the answer is just "boil it for 5 minutes" then you know you're looking at someone with simplistic thinking; which is fine if you just need someone to mindlessly do as they're told. They might be an absolute expert at it.


Yeah, it's just an opener. Setting the stage, making sure everyone is on the same page, and then you can build onto that à la "and now suppose xyz...".


This sounds like a good example of everything wrong with the cult of Java programming / design patterns.


Does it? The class-based solution looks very convenient to me for large projects. Let's say I am new to the project and I don't know what commands exist. I just ask the IDE to show me all subclasses of the command superclass. That sounds much better than searching for the functions implementing the different operations all over the source code. Not everything that is easier to write is easier to read two years later (even if it has far fewer lines).


Unless there's some magic involved (like pulling code from strings at runtime, via reflection), you'll have a place in which all those functions show up together. A central table, a switch statement in a factory method, or something. You'll have it in either the class-based or function-based implementation.


> you'll have a place in which all those functions show up

I don't see why. For example, imagine a graphical editor. GUI events will trigger some actions, and those actions will trigger some commands being executed (and placed in an undo buffer etc.). I don't see where that central place should be and why you would need it at all.

And now imagine a large project with different ways to execute commands ("execute", "executeWithUndo", "executeInTransaction",...), people composing new commands from existing ones, etc. Soon grep becomes your best friend. Or just press Ctrl-H or whatever in your favorite Java IDE to see the class hierarchy.


> and why you would need it at all.

Because in the functional model, you're not abusing the filesystem or inheritance hierarchy to do your bookkeeping.

Also, I said functions, not necessarily anonymous ones. There are ways to solve this use case. You can stash your actions as named public static functions in a "Commands" class. Or you can create one dummy class, with a single (@FunctionalInterface-style) method that essentially forwards the call. So in your UI handlers, you do button.setOnAction(new Command(someFunctionSomewhere)); instead of having a separate class for every possible action. You can now find all actions by looking up calls to the Command constructor(s), and you can build subtypes of Command as you need some special functionality. Note that this way, you don't commit yourself early to a huge type hierarchy.
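
Roughly like this (all names made up):

    // A single thin Command type wrapping a plain function, plus a holder of
    // named static actions, instead of one class per action.
    class Commands {
        static void save()  { System.out.println("saving..."); }
        static void close() { System.out.println("closing..."); }
    }

    class Command implements Runnable {
        private final Runnable action;
        Command(Runnable action) { this.action = action; }
        @Override public void run() { action.run(); }   // just forwards the call
    }

    class Wiring {
        public static void main(String[] args) {
            // Every handler goes through the Command constructor, so "find usages"
            // on it lists all actions without needing a class hierarchy.
            new Command(Commands::save).run();
            new Command(Commands::close).run();
        }
    }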

In my experience, even in large projects, Command objects tend to sit in the filesystem, wasting space and people's time. But if you're absolutely sure you'll need this pattern, then go ahead and use it. It just shouldn't be the default, go-to way of solving this problem.


Is that the correct hemisphere association? I have a vague idea it'd be the other way around (left has clear proof, right rejects). And what's the book?

I've noticed in myself a tendency towards programming "aesthetics" that just "feel" right. Sometimes that intuition seems to overlap with faculties that help avoid complexity and find elegant solutions... but there's also plenty of times when it's either habit based in a collection of odd assumptions or even apparently arbitrary, and so it's something I've been working to interrogate.

OO approaches specifically have something like three decades of pop-tech discussion as being professional and sophisticated. That's a kind of conditioning that's hard to overcome.


Yes I referred to the correct hemisphere.

The right hemisphere is visuospatial. Also, the right hemisphere 'sees' time/process.

The left hemisphere puts together a contextless symbolic model. So I see where your question comes from: the left hemisphere can do the contextless proofs. In this case (and in many cases) the more complex solution "works" and the left brain can "prove" it works.

But a 'simplicity' proof seems better suited to the right hemisphere (I am no expert here, mind you) .... So in my anecdote the "simpler" solution is the one that I would guess the right brain "sees" as simpler: i.e., it is visually much smaller, also much easier to manipulate (over time), etc.

The book is "The Master and His Emissary".


Don't over-complicate the reason. ;)

It's quite possible he was trying to test something else and the answer you gave didn't give him a meaningful answer to that question.

So while your solution may have been fine, it's not something you're going to run into in production code. But that hierarchy+command pattern is probably used in a more appropriate situation. And he wanted to see if you could deal with the pattern itself.

Of course, I could be wrong. He could just be enamored with the overly complex solution because it's more "clever".

Interviews are weird.


To be fair, I "passed" the interview. So he got what he was looking for. I argue however that he was preferring complexity over simplicity anyway.


Hmmmm ...

The point of a 'command pattern' is in effect its generality, i.e. it implies less coupling in the scenario. The most classic example would be the 'action' that might be passed to a UI button when it's clicked.

So, if the situation calls for command pattern, use it, if not, don't. It's not a matter of 'in your face complexity'.

The notion of doing '3 things with 3 functions' is pedantic: it's obvious. It wouldn't make the basis of a 'question' so it'd be rather pointless to do it.

Maybe there was some confusion as to the point of the question ...


You can slap "command pattern" on anything. But I'm speaking directly in the traditional form that implies a OO type hierarchy.

You don't need that at all.

Here is a "proof" of sorts: http://mishadoff.com/blog/clojure-design-patterns/#episode-1...


Yup. Command pattern is short for "my language doesn't have first-class closures".


I'm aware.

But either it makes sense to use Command Pattern or not in any given situation and it has nothing to do with complexity really.

I don't see how someone could possibly use a command pattern when a simple function would do - that would be beyond pointless. Which makes me believe that there must have been something to the nature of the problem question...

(Though lambdas wipe out so many of the simpler use cases of the command pattern...)


The whole point was that the GoF Command Pattern is a verbose OO emulation of simply passing around functions. In a language with proper first-class closures the whole pattern mostly dissolves – it’s obvious that if you want to customize behavior, just pass a function. The same holds for a surprising number of GoF patterns.


I think even if you use lambdas, I would still call it a command pattern - if you used lambdas to implement the command class.


It's an issue of perspective at this point. You can call it "The Command Pattern", as if it was something Important, while others will note that it's just passing a closure around (i.e. a trivial programming pattern at the level of "using if/else" or "throwing an exception").


Are you by chance reading "The Master and His Emissary"?


Yes that’s the book!


I had a similar but inverted interview experience years ago. The interviewer asked me how I would implement a system for selling concert tickets online. I suspected, and quickly confirmed, that he wasn't interested in the UI, just the back end systems for supporting the UI. I quickly sketched out a system and described how I would serve up information about the available concerts and tickets and implement the process of buying a ticket. Then I asked the interviewer if he wanted me to go into more detail on any particular component of the system. He was NOT pleased. Then began a long process of "don't you think that's unnecessary?" and "what in the world is this for?" As I explained the need for each part of the system, he started incrementally removing all the requirements implicit in his description of the problem.

First to go was the distinction between accessible and non-accessible seating, partially obscured views, basically any extra info about seats except their identity. Okay, great, that simplifies things.

Likewise any idea of redundancy or failover. "Just let the process restart. You should be able to ensure no failures anyway." Okay, that's kind of opposite to how I've thought in the past, but I'll keep my mouth shut and roll with it.

Just to check, I asked if I should support a concept of buying tickets for assigned seats. Nope, no assigned seating. Okay... so are there different price classes for each event? "No, the tickets are all the same!" Okay. Each event has a single price and a single pool of tickets.

But to sell the ticket the user has to be able to buy it, right? Nope. No payment process. Great! I said. So there's no need to put a hold on a ticket while the user has it in his cart. The interviewer looked like he was going to burst a blood vessel.

Then I said, okay, at minimum I think we need to record who we've sold tickets to so we can send them tickets or in some way ensure that the right people can be let into the concert and people who haven't bought tickets can be turned away. Um... right? "No, don't bother. If they say they bought a ticket, they did." Okay. Moving on.

You wouldn't think the requirements could get much simpler than that, but you would be wrong. It turns out that the system didn't need to know when a concert was happening so it could stop selling tickets at some point. Ticket sales for a concert would go on forever. What should I do if the concert sells out? "Don't worry about that. It won't happen." At this point, I wanted to scream, "Your honor, permission to treat the witness as hostile!"

It also turned out there was no need to track different events or venues. There's only one concert, it has an unlimited capacity, and it never happens.

In the end it emerged that he just wanted a simple TCP service that listened on a port and served a random long integer (a "ticket") to anyone who connected without repeating the same value, unless it got rebooted, in which case it was okay to potentially serve some of the same numbers that it served during its last lifetime, as long as it didn't exhibit any patterns that might enable an attacker to predict upcoming values.

What #%$%ing similarity does that have to selling concert tickets? I mean, in the phrase "sell" "concert" "tickets" "online" three out of the four words were completely irrelevant and misleading. I'm guessing it was just what he was working on that week, and in his system the values were called "tickets" so he threw out "let's sell concert tickets" in an interview hoping I would solve his problem for him. We were out of time, so we didn't have time to discuss it. I didn't get an offer and wouldn't have accepted one.
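
For what it's worth, what he wanted boils down to something like this (my reconstruction; the port, the wire format and the bookkeeping details are guesses):

    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.security.SecureRandom;
    import java.util.HashSet;
    import java.util.Set;

    class TicketServer {
        public static void main(String[] args) throws Exception {
            SecureRandom rng = new SecureRandom();          // unpredictable sequence
            Set<Long> issued = new HashSet<>();             // forgotten on reboot, which was allowed
            try (ServerSocket server = new ServerSocket(9090)) {
                while (true) {
                    try (Socket client = server.accept();
                         DataOutputStream out = new DataOutputStream(client.getOutputStream())) {
                        long ticket;
                        do { ticket = rng.nextLong(); } while (!issued.add(ticket));
                        out.writeLong(ticket);              // one "ticket" per connection
                    }
                }
            }
        }
    }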


> I remained in Java, but took a functional and dynamic approach yielding a three-line implementation of the engine where each command was simply a function.

okay, so what do I do when I need to dynamically load an additional command by name from an external .jar ? that's like, the most basic thing Java was meant for.


The author describes a YAGNI approach. The interviewer and you seem to want a solution that's already generalized and future proof. YAGNI is always the right approach to take at the beginning of a problem. But that's one of the insights that often come with programming experience and reflection. Unfortunately, seniority is an unreliable indicator for either attribute.


Perhaps one should ask the interviewer whether they want a YAGNI-style solution or a more general/abstract solution. If you can give both, you may really impress the interviewer.

Asking first to understand the needs and intention of the application should be a first step anyhow. Diving into the deep-end face first and coding is rarely the correct approach (except maybe a startup racing for market-share).


> generalized and future proof

Reminds me of a blog post I read,

https://www.sebastiansylvan.com/post/the-perils-of-future-co...

At what point does it become over-engineering?


I think the article captures the problem even better than a similar one that was recently on the frontpage of HN: https://news.ycombinator.com/item?id=17850836

I can't remember a situation where undergeneralized code came back to bite me. I have however seen (and admittedly written some) prematurely generalized code that became messy legacy code.


You often can't tell up-front. If you never get back more than what you put into an abstraction, it was probably over-engineered.


You change the implementation. I think the fear of actually changing already existing code leads to the majority of problems with software development.


That's true. We're not writing code on stone tablets; at any point in time you can just dig in and change what you wrote before. It's infinitely easier if that code is simple, because it was written straight to the point, instead of being wrapped in large abstractions.


This should be in requirements. It apparently wasn't in the ones OP was given.

Also, the answer should not be "large dependency injection framework".


Also I would say the answer is better left independent of the runtime environment.

Yes, you can load Java classes into the JVM. But why not have a serialization format that is independent of the vagaries and specificities of the JVM?

Still, even if you lean on the JVM, that changes nothing about this particular problem. You don't need command pattern. You don't need a type hierarchy.


You forgot some Factories and Factory Factories ;-)


One of my friends who was not so strong at programming was never short of jobs.

He told me he took a ukulele to interviews. Each time the interviewer asked him a question, he would play the ukulele until he found the answer.

He got the job even when his interviews went wrong.

His job offer sometimes arrived months after an interview, because he was the only guy the interviewer could remember.

Often you hire for a spot in a company; after some time that person leaves, management asks you to name another person, and when you try to recall the last batch of interviewees, the ukulele guy is the only one the interviewer is able to remember.


I spent a fair amount of time interviewing in London a few years ago. I would literally go to any company that sent me an e-mail or contacted me via recruiters.

I didn't have enough money to buy new clothes at the time, so instead I picked the shoes with the most holes and would never do any grooming beforehand. I would come to the interview stoned. So the gist: instead of trying to appear lower-mid-end, I went all the way to the lowest low end.

Thinking retrospectively I think that's pretty much the reason why I barely received any negative responses unless I had miserably failed at the technical side of things (e.g. I didn't know how to use generators in Python at the time and the whole list of questions would be about them and their syntax).

I have to admit that in this scenario I was quite good at the technical side of things. But the general philosophy was that if I didn't try to appear too good, people would assume that I was better. Put humility and a definite sprinkle of character on top of that.


Your friend is a mad genius. That's so brilliantly absurd but aligns so well with my experience in interviews that I can't dispute it.


That's awesome, and there must be more stories you can tell about this guy. If so, do tell!


> What we ended up with was O(n) _classes_ (where n = the number of computation operations).

Speaking of complexity worship...

Big O notation seems totally irrelevant here, unless the question involved scaling to arbitrary number of types of operations. Why not just say "we ended up with a class for every operation?"


Big-O applied to code artifacts-to-manage-over-time is very relevant here. It's not the common usage of Big-O, which is in terms of algorithmic steps, but it's still highly relevant.

A system that has O(n) code artifacts for n "business requirements" (e.g.) is far more costly than one that has O(1), say.


> A system that has O(n) code artifacts for n "business requirements" (e.g.) is far more costly than one that has O(1), say.

I understand what you are saying, I'm just objecting to how you are saying it. Unless you are interested in the limiting behavior as the number of "business requirements" goes to infinity, big O is the wrong tool. An O(1) approach that requires a foundation of 1M LOC is probably worse than an O(n) approach that requires 100 LOC per business requirement.

Sorry to obsess over the details here - I just think big O is overused in general and saw an opportunity to make a bad pun on complexity worship.


"Big O" has evolved beyond its original intended usage. In informal and semi-formal setting, it just means "roughy proportional to". It's showcasing the first-order shape of growth, not just inviting to do analysis in the limit.

I.e. it's coder slang now.


I understand, and I'm well aware that it has become "coder slang" - that's my point. On a thread about complexity worship, the first comment I saw was an unnecessary use of jargon that complicates things without adding meaning. That, in my mind, results from a culture of complexity worship.

> Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent. - orwell


I think the slang version of "big O" is so common that there's no chance of misunderstanding. If anything, it serves as more of a shibboleth - if you don't get it, you're not part of the group the story is intended for.

RE that Orwell quote, I have mixed feelings. I agree with it in so far as it means "pick easiest words for your audience at the precision level you need". But in general, words are not equivalents, even if they're listed as synonyms - each word has its own specific connotation. Say, "car" and "automobile". Technically, they refer to the same thing, but they feel different. That subtle emotional difference may not be important in formal setting, but it's an extra dimension of communication in informal cases (like e.g. this comment thread).


> it serves as more of a shibboleth

Like the word "orthogonal" - always know I'm talking to a techie when that one rears its head.


I'd say "perpendicular" but that's somehow even more niche, despite being a word everyone learns in high-school geometry.

I guess there just comes a point where you've solved so many optimization problems that it's hard to not think of a bunch of solutions with different attributes as being embedded in an N-dimensional space?


That is an orthogonal concern.

Versus

That is an entirely different concern.

The second is more approachable but requires an adverb to have the same meaning, and adverbs are also discouraged. I think orthogonal is fine.


Also it does not communicate the precise meaning. "Orthogonal" fits when concerns may be related, but they're independent from one another, so you can discuss them separately.


Well, regardless of the adverb, you would need a modifier. Avoiding these technical considerations was part of Orwell's point I think.

So I guess we could nitpick over "entirely" and "basically" just as easily. Maybe he should have just singled out the engineers and other pedants.


Perpendicular is a weird word. And also harder to say, at least for foreigners.

I tend to use "mutually independent" instead of "orthogonal" when talking with non-tech people.


That (Big O as slang) sounds like a horrible situation.

It feels analogous to the widespread use of "exponentially" to mean "a lot" or "quickly" which is a really bad, silly thing. The difference is that few physicists and mathematicians misuse "exponentially" in casual conversation, whereas you are claiming that software people deliberately misuse "Big O". I'm not sure I believe you but either way this seems regrettable.


"Exponentially" isn't all that bad; what people actually mean when they say it is usually "superlinearly", but there's no practical difference between the two when talking about e.g. scaling problems.


YMMV, but, outside of technical contexts, the phrase "increased exponentially" is often used by people who don't even know the difference between linear, geometric and exponential growth. In many cases they don't even understand that the word "exponential" refers to a rate of growth, if they've heard of the concept.

Take the context of a high-profile art magazine, Frieze. (I googled "frieze increased exponentially"). This is shooting fish in a barrel—but the most egregious example in the first page of hits is this one:

"Seppie in nero’ – squid in its own ink – is my favourite Venetian delicacy. Although customarily served with polenta, I prefer it on thick spaghetti since pasta exponentially increases the naturally squirmy quality of the creatures’ tentacles, creating a Medusa-like mound of inchoate, salty matter."

So you've got an art critic writing slightly pretentiously about food, and he throws in an "exponentially" which has nothing to do with a rate.

This and similar usages of "exponentially" are extremely widespread now. People talk about exponential increases without any mental model of the rate of growth as a function of time at all—just the woolly idea that something is growing fast.

"The term "exponentially" is often used to convey that a value has taken a big jump in short period of time, but to say that a value has changed exponentially does not necessarily mean that it has grown very much in that particular moment, but rather that the rate at which it grows is described by an exponential function."

https://books.google.ie/books?id=aVovDwAAQBAJ&pg=PA36&lpg=PA...

The battle is lost on this word, as it is with "literally". To the man on the street "exponentially" really just means "a huge amount" now.
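
For the record, a minimal side-by-side of the growth shapes the quote is distinguishing (same caret notation as elsewhere in the thread):

  linear:       f(t) = a + b*t        (adds a fixed amount per step)
  geometric:    f(n) = a * r^n        (multiplies by a fixed factor per discrete step)
  exponential:  f(t) = a * e^(k*t)    (continuous case; the growth rate is proportional to the current value)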


There is absolutely a very practical difference.


It's unfair to dismiss that as sloppy slang; that's exactly what Big O notation exists for: to concisely express such concepts as "the number of classes increases linearly with the number of business requirements", as differentiated from "the number of classes is independent of business requirements" or "the number of classes increases with the square of business requirements".

The Orwell quote is valid at the heuristic level, but when

a) there's an installed base of people who know the jargon, and

b) the everyday English equivalent takes a lot more words to say the same thing, and

c) something coherent is meant by the jargon that could be so translated if necessary,

then that's exactly when you should use the jargon.

Give me "a^2 + b^2 = c^2" over "the sum of the squares of the lengths of a and b is equal to the square of the length of c".


The point there is that Big-O notation also implies that the growth has the given shape above some large-ish n. And metrics like "number of operators" or "number of business requirements" are rarely so large that this makes sense.

And in fact, approaches to system design that try to make the code size independent of the number of business requirements and their possible changes in the future lead to exactly the kind of "complexity worship" discussed in TFA (various ad-hoc Turing-complete VMs that interpret code represented as rows in a bunch of relational tables and what not).


People who write "x is O(n)" usually actually mean to write "x ∝ n", but 1. ∝ is hard to type on a keyboard, and 2. even fewer people (including CS majors!) know what ∝ means, than know what O(n) means well-enough to allow them to figure out its non-jargon usage from context.


So you think that the imprecision of saying "O(n)" when the "large-ish" behavior is not relevant is worse than the verbosity of saying "scales directly/with the square of/not-at-all with [relevant constraint]"?

FWIW, Big-O itself, even in technical contexts, gets imprecise with e.g. calling hashtables O(1), which is not possible, even under the idealized computer model (instant memory access, etc).

Is there a shorter way of saying "scales proportionally with n" that you would suggest the tech community prefer because of its greater precision?


But you can't just say "O(n)," you have to specify what n is - at which point it is no longer any more terse than the English alternative.

> What we ended up with was O(n) _classes_ (where n = the number of computation operations).

vs.

> what we ended up with was several classes for each operation.

The second is shorter, and says no more than what is relevant.


That's like saying there's no point in using pronouns, since you have to say the antecedent anyway.

It would be wrong in both cases because the context can make clear what a variable or pronoun refers to. If the problem context makes clear what the binding constraint is and you just need to talk about scaling behavior, then it is indeed shorter to say "O(1) rather than O(n)" vs "doesn't depend on the operations rather than being directly proportional".

>>what we ended up with was several classes for each operation.

>The second is shorter, and says no more than what is relevant.

It says less: the O notation is used to indicate that as you add more operations, you will need to add more classes, rather than only needing to add classes when there is logic the operations don't yet implement.


Math doesn't define how big a "large-ish n" is. In casual usage, it may as well be n > 3, if the context suggests so.


Got you. Thanks for the clarification.

Speaking of brain hemispheres, the left hemisphere doesn't get jokes like the one you made. The right hemisphere handles jokes, metaphors, etc.

Looks like I'm a left-hemisphere-dominant pot calling the kettle black...


Tough to say, really. If your O(1) is just a constant 2000 code artifacts, but your O(n) is 5 per business requirement, it is amusing to consider that 400 business requirements might be better spread over more systems.

My view is that, even in algorithm analysis, Big-O has been overused. Too many people will point out crap like "That is 4 items per entity!" When I point out we only have 50 entities and that 200 is an easily handled number, I just get evil glares.


Somewhat related: I've always enjoyed that the matrix multiplication algorithms with the best theoretical time complexity are completely impractical for use on real-world matrix multiplication problems.

Wikipedia describes why people use the Strassen algorithm in real-world implementations, despite its inferior asymptotic time complexity [0]:

> unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware.

[0] https://en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_a...
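
For anyone curious, here's a minimal Java sketch of the trick at the heart of Strassen's method (the one that does get used in practice): multiplying 2x2 blocks with 7 products instead of 8. In the real algorithm the operands below are themselves sub-matrices and the routine recurses.

  final class Strassen2x2 {
      // Strassen's seven products for C = A * B on 2x2 operands
      static double[] multiply(double a11, double a12, double a21, double a22,
                               double b11, double b12, double b21, double b22) {
          double m1 = (a11 + a22) * (b11 + b22);
          double m2 = (a21 + a22) * b11;
          double m3 = a11 * (b12 - b22);
          double m4 = a22 * (b21 - b11);
          double m5 = (a11 + a12) * b22;
          double m6 = (a21 - a11) * (b11 + b12);
          double m7 = (a12 - a22) * (b21 + b22);
          // Recombine into the four entries of C
          return new double[] {
              m1 + m4 - m5 + m7,  // c11
              m3 + m5,            // c12
              m2 + m4,            // c21
              m1 - m2 + m3 + m6   // c22
          };
      }
  }

The extra additions and temporaries are exactly the kind of constant-factor overhead that the asymptotically fancier algorithms take to an extreme, which is why their crossover point lies beyond practical matrix sizes.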


Had a fun discussion on that in this forum not long ago. https://news.ycombinator.com/item?id=17891360

Basically, I was hoping some of the optimized methods were competitive nowadays. Spoiler, still not practical. :(


Perhaps because they had a number of classes per operation, but that number was not important. Statements like this are what big-O is for.


The use of that notation implies the number of classes was proportional to number of operations (+/- constant factor), not that it was equal to number of operations. O(n) works both if there's 1 class per operation, and if there's 5 classes per operation.


Ok. O(n) also throws away a lot of useful information. It seems simpler in every sense to just say "several classes for every operation."


Again, you're absolutely right wrt. proper, formal usage. But big-O is also a part of programmer slang now. The OP used it in this form.

(It's also faster to write, or even say out loud, "O(n)", than it is to write/say "proportional to number of").


Frequently when writing O(n), the writer is making a comparison. e.g. "The GC was O(n log n) wrt memory, but now it's just O(log n)." That expands to a lot more text.

Or, to put that another way: there is a constant cost in human parsing complexity when writing "O(n)", but the number of parse-nodes in the text when using "O(n)" only increases as O(n), whereas without "O(n)", it increases with O(n^1.3). ;)


I'm currently reading a book on brain hemispheres. Apparently experiments have shown that the left hemisphere will completely and blatantly reject obvious evidence even if the right hemisphere 'has a clear counter proof'. Sometimes I think our industry suffers from an abundance of left hemispheric dominance.

I have two monitors on my desk. If I am stuck on anything I drag my emacs from one side to the other and look at the same code there. It works pretty well!


A simple 3-line solution is obviously better than the original solution. He knows this; anybody knows this.

What happened was that you showed off how you're better than him at finding a better solution. He's probably a top dog looking for underlings. That's the reality of this world.


I can appreciate your somewhat cynical assessment, but I also think we're still squarely in right/left hemisphere territory: empirically speaking, the left hemisphere has quite the ego. We're both saying a very similar thing.


Your situation was a perfect example of why I so violently, vehemently oppose object oriented programming and why I fight the people trying to promote it at every turn and opportunity (often in person and it can get extremely ugly -- and that is a good thing). Object oriented code is a nightmare to debug -- I've wasted countless years stepping through the debugger trying to follow the complex state machines I was debugging. Data hiding, one of the main selling points of this programming technique, makes it even worse with regard to building a mental model of the state of the machine. Resolving what the data is in order to understand where the problem resides ends up being extremely tedious and painful.

It's like they are trying to prove that they are really, really smart without anyone asking them to prove it. But here is my challenge: anyone can overcomplicate anything, but if one is really smart, make a complex thing simple.


Very likely the interviewer was posing to you the exact design that they have already implemented, which they did for good reasons, and very likely your trivial solution would not scale up to the complexity of their needs. So you might have proved you could simplify the problem at hand but failed to demonstrate that you would know how to deal with more complex requirements of a larger system. For example if the commands themselves are quite complicated and have their own relationships and state then how would that be modeled? Eg. how do commands with shared behavior share it? Of course it can all be done in a functional style if you are determined enough, but it could be somewhat awkward and sounds like it wouldn't fit at all with their style of programming. I assume you probably didn't want the job in the end but if you did I would say you didn't do yourself any favors.


Honestly, I think we as programmers have the opposite problem. We value simplicity so much that we neglect planning for complexity, because we think our simple, elegant solutions will last forever.

It's really fascinating to study modern CPU design. Modern high-performance CPUs are horrendously complex, with the very notion of superscalar architectures and pipelining resulting in guaranteed complexity explosion. Yet, as far as we know, there is no way around this. You need pipelining and superscalar execution to get adequate instruction-level parallelism in real-world code. Unless you want to use microcontrollers for everything, that complexity must exist.

Compilers are another example. Many people who go to implement compilers read the Dragon Book and think that all the complexity in GCC and LLVM is needless bloat. Then they quickly discover that they can't compete with them in performance.

It is of course desirable to avoid complexity where possible. But all too often the response that we as engineers have to discovering that difficult problems require complex solutions is to become defensive, stick our head in the sand, and stand by our simple "solutions". This is how we ended up with crufty Unix APIs, the security problems of C and C++, the pain of shell scripts, and so forth. We need to learn how to accept when complexity is necessary and focus on managing that complexity.


But don't we already have microcontrollers in everything? My keyboard has a microcontroller in it. The harddrive has a microcontroller. USB has a microcontroller (or two, or three). Heck, even our CPUs now come with microcontrollers embedded in them!


Not in my experience. I see far more over-engineered, overly complex solutions to simple problems than the opposite.


And then people say: let's rewrite it, the old solution is overly complex and has a lot of legacy code that no one uses, we can do better. They imagine simple & perfect solutions because they underestimate the complexity of real world problems, just like the parent said.


I would say rewriting is OK if you are eliminating code while keeping the same functionality.

Obviously that is a generalization and there will be exceptions, but as someone else said in this thread "code is a liability not an asset".


I’m a fan of KISS and I spend an inordinate amount of time getting people to reframe problems in a manner that meets the needs more directly (straightforward is on the road to real simplicity, and is a good place to stop if you can’t reach the destination).

But I can't agree with this observation. There are a lot of smart people who are bored and solve a problem we never had just to keep from going nuts, but the solution is so complicated it drags everybody else down.

But there are also a lot of people out there who think that if you pretend hard enough that our problems are simple, then a simple solution can be used.

If you oversimplify the problem enough, everything looks easy.


Clearly, the complexity of modern-ish CPUs does push up instructions/cycle — and thus single-threaded performance. That said, I did an awful lot on machines running at hundreds, rather than thousands, of MIPS (and quite a bit at single digits...). If the “big core” superscalar designs hadn’t happened, it’s not at all clear to me that the world would be a worse place. Perhaps we’d have more transputer-like designs, for instance (which I admit bring their own complexity — but could at least be rather more transparent than the current mess of speculation and complex cache hierarchies.)


You don't necessarily need pipelining, or branch prediction, or microcode loop caches, or whatever else. If you're optimizing for single core performance over everything else, then that's how you squeeze as much as you can into the processor. I don't believe GPUs do branch prediction at all, for example.


It's interesting to me that Parkinson's Law wasn't originally conceived of in the context of software projects, yet that's the only place I hear it applied today.

I've been working outside of software recently, and noticed to my delight that this Law hasn't applied at all. In one case, I was hired for 3.5 days of work, and we got finished after 2.5 days so we were sent home early -- nobody was dragging their feet to make it last 3.5 days. In other cases (more common), we've temporarily had too many people for the job at hand, so the team lead said "Just wait", and we do nothing until there's more work ready for us.

Why have I never heard of any software team ever saying "There's nothing for you to do right now, so just wait"? My first thought was the endless supply of bugs, but that can't be right, because I've never heard of a team lead saying "We have no work for you today so go fix bugs for a while", either.

It really does seem like every software team manager thinks that the proper amount of complexity in a system is perennially $(current_complexity + 1). The cases where program complexity approaches a constant asymptote (like Redis and perhaps SQLite) are so rare as to be notable. They're also frequently mentioned as being developer favorites.

Maybe the field just needs another 50 years to mature.


> Why have I never heard of any software team ever saying "There's nothing for you to do right now, so just wait"?

In addition to Parkinson's Law, there's this one: "A poem is never finished; it is only abandoned."

There's never nothing to do, because we can always improve things. And to a business or to a manager, "just wait" costs about as much as "work on something of little importance" but provides less benefit. It might be different if programmers were all on zero-hour contracts.

(OTOH... Firing engineers could work. I wonder why sites that seem "done" don't decrease their payrolls. Pride?)


I disagree that "we can always improve things". That only works in a few fields.

When Tom Wolfe was finishing up a new novel, adding more writers wouldn't improve the story. When a cancer patient is undergoing chemo, adding more physicians this afternoon won't improve the outcome. Adding cooks to the kitchen of my favorite noodle shop isn't going to improve the noodles one bit. Adding more actors to a film's shoot schedule isn't going to make it go any faster.

In all these cases, even if you gave me 100 more skilled people for free for a week, I'd tell you that we don't need them, and it would actually hurt us for them to participate. Changing the plan or going off-plan has a real cost. Mature fields like civil engineering have a great track record because they don't just let extra people make contributions at any point in the process.

BTW, according to Wikiquote, the correct (and unabridged) quote is: "A work is never completed except by some accident such as weariness, satisfaction, the need to deliver, or death: for, in relation to who or what is making it, it can only be one stage in a series of inner transformations."

For programming to be a mature field, "the need to deliver" must be a necessary component, and "satisfaction" should be the goal, not merely an "accident". We can't utilize the process of a 19th century French poet and expect to get results like 21st century civil engineers. This was a guy who (according to his Wikipedia article), "around 1898, he quit writing altogether, publishing not a word for nearly twenty years."


Individual incentive.

Every manager wants a bigger team and impactful projects for their resume. This bubbles up to the top, where rarefied execs don't see a reason to go to war with their own org and possibly lose. As long as it's still profitable, everyone's doing fine.


> Maybe the field just needs another 50 years to mature.

This thought is pretty obvious from my humble POV. Do we know of any field this extensive that matured (by any reasonable definition of "mature") in less than 100 years?

Civil engineering took thousands of years to get to a point where it's not taken for granted anymore that the construction of a large building will cost the lives of some construction workers.


Sure. Powered flight went from "not invented yet" to "fastest and safest way to travel" in less than 100 years. Nuclear power has had a couple significant disasters but is otherwise mature. Antibiotics. Rockets and satellites (like GPS). Filmmaking.

For the most part, I think major new inventions of the 20th century have taken much less than 100 years to mature. We have science and engineering now, so being able to apply them to new fields is generally feasible.

I'd flip it around: what fields still have not matured yet? In what fields, since the dawn of science and modern engineering, is the median project still a failure? I'm hard pressed to think of any outside of software.


> In what fields, since the dawn of science and modern engineering, is the median project still a failure?

Social sciences (the replication crisis).


True, but social sciences (at least many of them) definitely predate the scientific method.

Also, observing that software engineering is not really doing much worse than social sciences at producing results certainly doesn't make me feel very good about software engineering.


I'd love to read a short sci-fi story about what it's like programming in a mature field. what problems will go away? which do we want to go away?

(will people still use emacs?)


The classic is https://www.fastcompany.com/28121/they-write-right-stuff and it's sci but not fi.


People definitely fetishize complexity. I often mention that I like an intellectual challenge at interviews, and the interviewers often mention 'oh, but this project is very complex' like they are saying something naughty. (Once, the interviewer followed up with "we're using algorithms".) I suspect they are usually right, and the codebase is a pile of unnecessary design patterns (because those are what educated developers use, right?).

Where I currently work we're using Azure to do a shitload of computations. At the same time, many modules don't even bother to throw away intermediate calculations. They literally have a giant array of them, and save every result, for each step, even if it's not necessary.

But hey, the project is seen as a huge success, because it's so complex.


More Azure billage to the customer, more kickback from MS.


> (Once, the interviewer followed up with "we're using algorithms".)

Even though I do database / web app level programming most of the time, I usually find Linus' quote very relevant:

"Bad programmers worry about the code. Good programmers worry about data structures and their relationships."


When talking about software complexity I'm always reminded of the old quote by German general Kurt von Hammerstein-Equord but attributed to various military leaders:

  I divide my officers into four groups. There are clever, diligent,
  stupid, and lazy officers. Usually two characteristics are combined. Some 
  are clever and diligent – their place is the General Staff. The next lot
  are stupid and lazy – they make up 90 percent of every army and are 
  suited to routine duties. Anyone who is both clever and lazy is qualified 
  for the highest leadership duties, because he possesses the intellectual 
  clarity and the composure necessary for difficult decisions. One must 
  beware of anyone who is stupid and diligent – he must not be entrusted 
  with any responsibility because he will always cause only mischief.
Clever and diligent developers devise complex solutions to complex problems, which may often be good enough.

Stupid and lazy developers can be entrusted to come up with simple solutions to simple problems.

Clever and lazy developers are able to find simple solutions to complex problems, a very desirable trait.

But stupid and diligent developers, given the chance, manage to implement complex solutions to simple problems!


I confess I also fear the clever and diligent when they are put in a situation which rewards complexity. The role of architect, for example.

At large companies the output of architects is generally things like directives, white papers, and other agglomerations of words. They are expected to be smart. Can non-technical executives judge the actual smartness of the work? Not really. So architects often get judged by sounding smart. Confidence. Complexity. Negative judgment. Performing intellectual toughness.

That's the opposite of what I really want in a technical leader, whose job is generally best done with humility and subtlety. The best technical leaders I've worked with are quiet and make a lot of small interventions that add up to big long-term results. But that's rarely what executives are impressed by.


But laziness is only good if it's long-term laziness. E.g. a clever lazy person who job-hops a lot and/or only does greenfield dev tends to leave huge messes for maintenance programmers to untangle.

At least, in my experience.


This implies that complexity is something that is only arrived at by putting in more "work", while a lazy person, who puts in less effort, comes up with a simpler solution. I disagree. Simplicity is extremely hard; much, much harder than complexity. Simplicity takes many hours of design and redesign, to realize which parts are out of the scope of the problem domain and which parts are not.


In my mind, as a trait, it's about how a person thinks rather than how they necessarily act.

A diligent person may be someone for whom creating more work isn't a blocker to an acceptable solution, whereas a lazy person has reduction of work as a requirement for theirs.

So a lazy person might work more at their solution, so long as it helps them avoid more work in the future.


I try to cultivate what I like to think of as strategic laziness. It is good to achieve something with the minimal amount of work, but it is better to find a way where the thing does not need to be done in the first place.


http://www.ariel.com.au/jokes/The_Evolution_of_a_Programmer....

I have been saying for many years that overengineering is the plague of modern software. Almost everything seems far more complex than it needs to be.


Yes, definitely. There are too many diligent and too few lazy programmers. Too many devs take pride in coming up with complicated, overengineered abstractions – after all, if they can manage the complexity they must be really smart, right?


Eh. Sometimes over-engineering is a matter of perspective.

Take JSON API, a standardization of HATEOAS / REST principles around JSON and HTTP, for example. If you just naively walk up to it you think to yourself:

> Wait, what? Can't we just return the simple data we know we want? Why complicate this all with relationships, links, and meta data?

But after a while you realize that 90% of what you're doing could be abstracted if only you had a predictable output. So EmberData comes along and you write a Rails backend that has the nice advantage of not needing to worry about HTML (outside of OAuth / emails, anyway) and you harmonize your API. You use Ember Fastboot for slow clients and call it a day.

Someone else looking at what you've done may say:

> Wait, what? Why don't I just create static HTML pages instead of using your over complicated service?

And they're not completely off-base. In fact it's what I do for my own personal site. It's just that the context of their situation has different tradeoffs.

The same thing is true for a lot of software. Excel is "over-engineered" for most people. So is HTML. So is Unix. But in general what happens is that the person with the most complex requirements usually wins because they usually have the fattest wallet and everyone else papers over the complexity with abstractions or uses something less complex that meets their needs.


> So is Unix.

The Unix-Haters Handbook [1] is a pretty good read (or skim). It's healthy to remind oneself that, even though unix-likes are a savior from the deeply unpleasant alternatives, warts are present.

[1]: https://homes.cs.washington.edu/~weise/unix-haters.html


>Too many devs take pride from coming up with complicated, overengineered abstractions

This is what I feel about the current JavaScript ecosystem. I feel like web development became popular, so programmers from other disciplines (C, Java, etc.) jumped in, found it to be too simple, and hijacked the ship, creating a new JS ecosystem that is as complex as their old environments.


Coming from a (primarily Windows) desktop development background, I can definitely tell you that JS/HTML/CSS and the DOM is some of the most complex, over-engineered stuff to ever see the light of day. It seems simple, but that's because it was simple initially, and now it's grown into this monster that has very little design at all and is just piles and piles of APIs and code added on top of other APIs and code.

Desktop development is blessedly simple, in contrast. The problem is that desktop development APIs and UI toolkits have been neglected for many years in favor of web and mobile, so now desktop development is also a fractured mess because much of it is outdated or hasn't kept up with current graphic design and UI standards. But, at its core, desktop development doesn't struggle with oddball concepts like promises and other bizarre features that were introduced to overcome issues with the overall design of the language/environment.


It's more like this: all the work that was previously accomplished just on desktop apps is moving to the web, and all the various camps that previously lived in their own language/framework silos are reinventing themselves within the js ecosystem. If that happened in the historical silos, nobody would notice. But since it all happens on the common js substrate, we see the big mix.


jumped in and found it to be too simple so they've hijacked the ship and created a new JS ecosystem

No, no, it's that webdevs looked at other areas of programming and thought "we have to be as complicated as them in order to be taken seriously".


We need to establish a rigorous and quantifiable definition of software complexity; otherwise we can mistake complexity for a misunderstanding of the difference between essential complexity and accidental complexity. Remember, intuition is not the same as simplicity!

From my experience, complexity to me means how many combinations of states are possible in a program, how many paths one can take at any point, how many side effects are possible... etc. An abstraction is supposed to manage complexity; if it doesn't, that doesn't necessarily mean the abstraction is too complex, but that the abstraction may just not fit the problem or cover all the edge cases. Thus a leaky abstraction.
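
To put a rough number on the "combinations of states" point: with n independent boolean flags (modes, toggles, nullable fields), the reachable state space is 2^n, so

  2^3  = 8         configurations
  2^10 = 1,024     configurations
  2^20 = 1,048,576 configurations

which is why each "small" extra option quietly doubles what there is to reason about and test.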


I think it could be that rather than worshiping complexity, some people find it hard to value or respect simple things, and that that leads them on a path to "accidental" complexity. Like when someone finds out that a rainbow is just water refracting light, or the sun is just a big ball of gas... it takes some of the coolness out of it. There could be a tendency like that to find simple systems disappointing.

Or it could just be that people have a certain mental budget for maximum complexity, and they'll try and make sure they spend it all in the belief that it will cover more uses.

Or it could be that complex systems tend to stick around because they are far more immune to random management changes because everyone goes "oooh, we'd better not touch that". Simple systems might become victims of their own ease and get subsumed by a more complex monster.


I have heard co-workers say that the code where we were working used to be unsatisfactory because there just wasn't very much of it, but now it's much better. My mind was boggled. Less code is almost always better (obviously there are exceptions, but rarely in production, in my experience).


I cringe reading this. I take far more pride in removing code while keeping the same functionality than writing new code these days.


It's a "oh shiny" thing from my point of view. People get bored with the old, and want to try the new Google/Netflix/Microsoft tech, irrespective of its suitability. Once the client/customer signs off on it, internally it becomes political.


Oh, I do dislike that attitude. "The sun is just a big ball of gas."

Dude. That "big, ball of gas" "just" spontaneously ignited due to gravitational forces and supports life on this planet. That's fucking wild. It's way more cool that it's simple. You try squeezing air hard enough to make it explode.


Also, it's a big ball of plasma, which means that it's so hot inside that the heat literally tears atoms apart.


We also worship simplicity. I like to think of things as quadrants. Imagine a 2x2 matrix with columns and rows titled as such: [simple, complex] [easy, complicated].

They are not quite the same, since one dimension denotes the complexity of the thing whereas the other denotes the complexity of the act of building, maintaining and evolving the thing in question.

We should always strive for easiness, however complexity shouldn't always be avoided. Quite to the contrary, staying away from complexity locally often leads to that complexity being sprayed on a global level, and that's when it turns into something complicated.

I have too often seen "complex" code, i.e. code that works on "complicated" datastructures such as trees and graphs, be discarded in favor of solving the problem "simply" and directly, which means disseminating the problem's logic across the codebase with multiple, gradual bugfixes, because the problem, being intrinsically complex, is underspecified and cannot be tested in isolation from the system.

Of course the same people that are baffled by "complex code" and think simplicity amounts to looking away and not anticipating future needs are the same who advocate it. Actually, they like good design and typography, focus more on indentation than datastructures and generally have a taste for nitpicking with as many subtle details as they can come up with, not seeing past the filter of their own opinion about what simplicity and complexity entail, and of course that makes the social process of building code a slow nightmare that does not converge.

I sound harsh, I know. What I want to point out here is that by denigrating complexity in favor of simplicity, we may get rid of medieval savants, but we open the door to plain idiots disguised as zen masters.


You actually sound diplomatic.

Wanna hear something harsh? Most programmers I have worked with over a 17-year career do not deserve the right to touch a keyboard; they should work on farms. That would be [somewhat] harsh.

You are on point with everything you said. Ego, strong opinion enabled by a secure job position where no amount of professional failure will ever get you booted (because you are isolated from the business outcomes), echo chambers of fellow bros who think like you, and plain old fear of change is what drives most humans -- and programmers aren't an exception.

IMO it's high time some formal certification and legal liability were introduced into our profession. Also, pick 5 imperative and 5 functional languages and make them "official". People love their religious language wars but it has to stop at some point, because billions of bucks are being wasted on 25-year-old egos.


Keeping on being diplomatic, I don't think it's an ego problem per se: instead, the egos fight over pointless things such as coding style. Don't get me wrong, I've nothing against adopting a coding style nor being told I misindented code by mistake. I'm fine with this, this is teamwork. But when the coding style keeps evolving, is not the same depending on the code base, when you have to take into account who your reviewer is in order to know which style to adopt, and when in addition you have very "diplomatic" ways (or, to put it more bluntly, a dominated and introverted personality), you just end up being a punching bag for the whole team and serving as an indirection layer so that the biggest egos in the team can fight each other through proxy wars.

As for a formal certification, here in France "engineer" is a state-certified status. There is no legal liability I can think of (there might be some when you're engineering bridges, but I have not heard of something similar concerning software). To be frank, this status is bullshit. At work I'm using a payment API from a local shop. The CTO comes from a top-rated school (the French Caltech), yet their API does not ensure the reception of async notifications, as if communication failures weren't a crucial characteristic of computer networks. In addition to that, they are unable to provide a dedicated infrastructure to their big customers and ended up implementing a rate-limiting system on top of their absolutely business-critical payment API (and boasting about it in blog posts). Last time I checked, my company experienced an 80% drop rate in payments due to errors or the rate limit being exceeded.

I say "last time" because I haven't been to work in months, my sick leave being extend month after month by my psychiatrist. I worked a lot these past couple years. In 2017, I think I averaged 70h per week. Actually, I have no idea. I just know I worked a lot of 80/90h long weeks that year. What's certain is that my hourly pay rate dropped under the legal minimum, which isn't surprising since I have the lowest salary in the team (a little less than 40k€/y). Everybody comes to me to fix their shitty problems, and when I push PRs to avoid them code aesthetics and "simplification" takes over in the code review and it takes foreeeever, and I'm never guarded against bug i may have introduced, only indentation and inessential shit like that, so I have to review my code on my own, and these PR never see the light of day and then I'm suddenly considered the master of unfinished work, almost in a self-satisfied tone by people who work almost twice as less as I do and earn almost twice as much. Also I'm not allowed to work remotely (but everybody else in the team is) and I have to be at work before everybody else, and the fact I sometimes come two hours earlier is never taken into account. Oh and my sleep and medication is monitored and I have been subject to very condescending and just borderline illegal remarks about it.

I'm just insanely butthurt to be honest. I could just as well start complaining about being "talked to like a dog" (these are not my words, but what two persons independently told me about the way I was being treated in the team).

And you know what, they are all very concerned about keeping the project's complexity low. Coming from a Clojure background I know exactly what this means and what's wrong with their approach, i.e. they see complexity as opposed to simplicity, but what's pertinent is to actually oppose it to easiness. See Rich Hickey's (Clojure's inventor) seminal talk "Simple Made Easy" for an in-depth overview of software engineering from this perspective.

To give a concrete example of what this misunderstanding leads to, let me compare the simplicity of markdown, praised by many minimalism hipsters and whose main strength is to make us forget its limitations, fascinated as we are by its fixed-width typographical beauty, vs. the simplicity of acknowledging that mixing text with control characters (the old-fashioned unix way) is not a sound way to build complex things at a large scale.

Anyway, since then, I've become extremely wary of those who advocate simplicity, just like an early or "private" Christian would dislike the Church (or what it has become).

I'm now getting back to working on extending Clojure's compiler to experiment with the idea of integrating the notion of IDE and editor directly at the language level, but deep down I'm not sure I can continue with this kind of bullshit career, where the ability to handle complexity is not just disregarded, but punished.


Yep, you ended up as the punching bag of the team. Everybody has frustrations and somehow you ended up being the person whom everybody mistreats and vents to. You have to end it.

Usually I would tell you "leave your job" but me being from Eastern Europe and not in the cozy enabling environments of most of Europe... I realize you might not have the choice.

I don't know your situation so what I can recommend is probably misguided. Still, here goes, in case it can help you (and in case it's not obvious):

1. Leave the job if you can. You already are not working and are apparently still collecting some kind of paycheck. I am not sure any amount of medication or psychiatrists will help you overcome the fear of eventually coming back to the hell you have been working in. How do you feel about that prospect?

2. Take a creative break. It seems you already are doing something along these lines by working on things outside your immediate duties. Do they fulfill you? If not, definitely just stop. I lay on my ass for 2 months before actually going back out there and starting to get stuff done. Sometimes you just need it. Also, programming in your spare time isn't always relaxing.

3. Talk to people about your situation -- not only to people who are paid to listen to you though. That's very broad advice so apply it as you like. ;)

It's OK to be butthurt/salty in this situation. We aren't angels or saints, these things can and do get to us. And you have been wronged, many times.

I faced the same. I actually come across as quite manly and assertive to a lot of people but that's mostly my appearance and my attitude of getting things done without tarrying on petty differences. I strive to never argue emotionally or engage in yelling competitions with people and they very often mistake that for me being a pushover. What I usually do is: if somebody is becoming an obstacle, I just go to a higher manager and talk to them about it. If they don't care or have the wrong idea then I just reduce my efforts in said work significantly and just move from paycheck to paycheck. I've done that many times and now I am suffering financially for months -- very severely! -- because I want to choose a workplace where things are not like that. It seems the mythical "cultural fit" is not BS after all...

What I take from your gently shared pain is that you are not a confrontational person. That's okay. You don't have to be. Do your best to find an environment where you don't have to fight daily with people. They do exist.


Engineers are supposed to simplify complexity, taking something complex and making it simple.

There are many engineers though that like to over-complicate and add complexity because it makes them look smart or they think they are expected to create complexity.

Complexity through simple parts is ok, but overall the job of an engineer is to take complexity and break it down to simplicity and simple parts.

In the game dev world for instance, Unreal used to be needlessly complex, still is a bit, then Unity came along and made things simple, so Unreal then looked to make things simple.

Or in the web dev world, a framework might abstract away the underlying standards and add complexity on top to seem simple, but actually make the domain more complex with more to learn and push for developer lock in to the framework over basic standards and simplicity. The first version of .NET with WebForms was an example of this, other web frameworks can be seen as this as well.

Lots of engineering people like to look smart by managing complexity, but it doesn't always need to be so complex. Sometimes time pressure creates complexity, and thus technical debt. Some simplification is misguided too: a one-liner that isn't understood 6 months later is not simplifying.

An engineer that takes something simple and makes it more complex for no reason other than job security or to look smart is the worst kind of engineer.


> Or in the web dev world, a framework might abstract away the underlying standards and add complexity on top to seem simple, but actually make the domain more complex with more to learn and push for developer lock in to the framework over basic standards and simplicity. The first version of .NET with WebForms was an example of this, other web frameworks can be seen as this as well.

I've been saying for years that this concept, that abstraction has a cost and isn't just a free simplification, is one of the primary misunderstandings that is holding the web ecosystem back. If every JS (and Python, and PHP etc) dev understood this, we'd be in a much better place, building much better software.


> An engineer that takes something simple and makes it more complex for no reason other than job security or to look smart is the worst kind of engineer.

I don't think most people set out to do this; most complexity comes from attempting to simplify things. A developer will see two similar bits of code and try to stuff the commonality into a base class. In their minds it's simplified because there is less code, but then when someone else picks up the maintenance it's more complex because now the logic is distributed. Then the requirements of each diverge slowly, and over time each simple change is hacked into more code paths of this distributed logic. Eventually most simple changes take time because you have to be sure you're not breaking other potential code paths.
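
A hypothetical Java sketch of how that tends to play out (the report classes and field names are invented for illustration): the shared base class starts clean, the first divergence arrives as a flag, and from then on every "simple" change has to be checked against both paths.

  abstract class BaseReport {
      // extracted because both reports once formatted rows identically
      String formatRow(String[] cells, boolean forSummary) {
          String sep = forSummary ? " | " : ",";  // flag added when ReportB diverged
          return String.join(sep, cells);
      }
  }

  class ReportA extends BaseReport { /* unchanged, for now */ }

  class ReportB extends BaseReport {
      @Override
      String formatRow(String[] cells, boolean forSummary) {
          // later requirement: ReportB rows get indented, so a "simple" formatting
          // change now has to be verified against both code paths
          return "  " + super.formatRow(cells, forSummary);
      }
  }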

On a more macro scale, complicated architecture is another symptom of this over-optimistic pattern matching that humans are susceptible to.


We programmers sometimes get so caught up in extracting seemingly common code that we don't stop for a minute and think "but does it really make sense to extract it?".

It's a classic problem, and it's why code reviews and pull requests for anything you do are such a good idea -- provided that your team is not a total echo chamber, of course.


"The ERE [Enterprise Rules Engine] truly represents the absolute worst kind of software. It was as if its architects were given a perfectly good hammer and gleefully replied, 'neat! With this hammer, we can build a tool that can pound in nails.'" - Alex Papadimoulis


One word: "Plastics", no, actually it's "Job Security". There is an incentive in ANY profession to recommend or encourage more of your own labor. For example, a surgeon is more likely to recommend surgery to solve a problem than a non-surgeon.

This bias isn't necessarily intentional: human nature (competitive evolution) just naturally pushes us to interpret the world in a way that makes ourselves as valuable as possible.

In IT, complex solutions that we create or learn keep out competition. "Only Bob knows how to fix this monstrosity" is a common pattern.

Simplicity should be added to the employee evaluation process. This includes parsimony in both features (YAGNI) and in how the features are coded.

Further, avoid "eye candy" UI gimmicks that add complexity and fragility. End-users often love them, but they are often a longer-term maintenance headache. Beauty ain't free.


> This bias isn't necessarily intentional: human nature (competitive evolution) just naturally pushes us to interpret the world in a way that makes ourselves as valuable as possible.

For the surgeon example, the simpler explanation is that, given a particular ailment, they just know only the surgical treatment, not the alternative drug-based therapy.

Or if they know, they know it way back in their head and don't think of it unless explicitly prompted, whereas the surgical procedure is probably recent experience.

I could also see this apply to IT. Given some problem, maybe there would be a simpler solution if I implemented this particular problem in, say, Python, but I'll implement it in Go because I work with Go a lot and all its idioms are much more salient in my mind.


That's a variation of "if all you know is hammers, then everything looks like a nail". That's certainly an aspect of it, but I'm not convinced it's the entire story. It would be interesting to see studies on how prior careers/specialities affect one's decisions in new specialities.


I was with him until the very end, where he said that the cloud, microservices, etc. were the simpler approach, that other companies (not Google or Amazon) are trying to avoid.

What I see is the opposite; companies that would do perfectly well with a server or three, and a simple monolith, trying to instead use Kubernetes and an Amazon service salad because that's what Amazon and Google say to do.


I think it is worth knowing how you would grow into one of those solutions. That said, I do not know why you would try and start there. And, yes, migrating between solutions is always difficult. However, it is also the kind of problem only successful companies ever should have.


Microservices can be useful if you really can share services between applications without organizational friction. In other words, they have to obey Conway's Law.

Unshared microservices are a waste of time and code. Make sure your org is actually ready for sharing, because it does add inter-work-group dependencies that the management and team structure must be comfortable with. I've seen "build it first and they will come" fall really flat.


I think a massively underutilized core strength of developers is that we have a working thought process in front of us in the form of code, which can be used for more than just running a program.

In my experience, complexity in the code often reveals not only technical problems, but frequently also points to product and business issues. Exploding complexity and long iteration cycles are often a consequence of bad business decisions. Looking at the points of exploding complexity in the code can sometimes help identify these issues.

The key is to make the disciplines work together. Business and product decisions should not trickle "down" to developers. Instead, there should be a working feedback loop. For that to be achieved people need to be team players and talk to each other frequently. It also helps to have a 10% or so generalist in each team member's head to guarantee a shared understanding of a high level perspective.

I know, I know — this is all a given in agile methodologies. But in reality, it's unfortunately rarely executed that way.


One only has to look at any Node.js codebase for the answer to your question. On the Java side, the Spring guys love complexity as well: https://docs.spring.io/spring/docs/current/javadoc-api/org/s...


Complexity is a choice in most programming ecosystems. The most simple applications I have ever worked on are node applications.

In some ecosystems it is more consistently complex (Java?) but others just have many approaches with different complexity levels for you to choose from (node?).


Yeah, agree. I've worked on both very simple and intuitive Javascript codebases, and absolutely heinous ones that should honestly just be abandoned.

I do think JS has a lot more variation in complexity than other ecosystems though.


I think I still have Johnson’s book somewhere and I’ve been really curious to see how far modern Spring has diverged from his thesis in that book.


Everything is always at the edge of maximal complexity. If it were any more complex it would collapse under its own weight. If it were any simpler someone would say "it would be so easy to add..."


Simplification of systems should be an actual college course, mandatory for all computer science grads.

Too many people these days manage complexity through ever more complex sets of tools instead of simply getting rid of it. They use complexity as an excuse to introduce more complexity and it just keeps growing.

One important thing I discovered for myself is the practice of starting with a minimum viable representation of the problem. If I work on a system, I try to write down what it needs to keep track of. If I work on a component, I start with the minimal configuration it requires to instantiate. This helps me avoid anchoring myself to available tools and "common" solutions.
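
As a tiny, hypothetical sketch of that practice (the domain and names are made up): start by writing down the data the system has to keep track of as plain types, and defer tooling decisions until that picture is clear.

  // Plain data first; storage, frameworks and services come later, if at all.
  final class Subscription {
      final String customerId;
      final String plan;
      final java.time.LocalDate renewsOn;

      Subscription(String customerId, String plan, java.time.LocalDate renewsOn) {
          this.customerId = customerId;
          this.plan = plan;
          this.renewsOn = renewsOn;
      }
  }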


There is a kind of complexity worship I've noticed, but my accounting of it is that it comes from folks who aren't aware of the methods of simplification employed by those successfully wrangling complex subjects/systems; absent an alternate explanation, they assume that the mind of the complexity wrangler is fundamentally different from their own.

In other words, rather than imagining the complexity wrangler inventing layers of simplifying abstractions, they imagine him/her to just have a radically extended short-term memory or something on those lines.

The article points out the case of architects worshipping complexity in some cases by essentially over-engineering. I think this is a mistaken explanation. I think it's more of a fear-driven thing: the architect is like, "Oh shit, this problem I'm gonna be dealing with is gonna be really complicated; I'd better throw everything I've got at it!"


Primarily, people worship their own positive self-image. Complexity gives them the chance to view themselves as smart enough to handle this really complicated thing.


Management: "We wrote a new version, it was 300k SLOC!"

Me: "Wasn't the earlier version 30k SLOC?"

Management: "Yes."

Me: "So what was added?"

Managment: "You can load software via FTP now, and there's an embedded HTTP server!" (what they said)

Management: "270k SLOC!" (what I heard)

Me: "But now we have to manage that, maintain it. What capabilities does this give us?" (answer: none, the old way of loading software was only slightly slower, and because this is embedded you always needed physical access anyways)

Management: "But our programmers can handle it."

Me: "Today, because they wrote it. J over there is leaving to be a project manager on something else. Who will handle this after he's gone? Oh, and you actually started putting the maintenance on a group of contractors who weren't involved in the development."

Management: "We'll rewrite it again!" (what they'd have said if they hadn't walked away)


Yup. Also, they'll reason that the way from junior to senior is knowing and using design patterns. I've seen plenty of inexperienced cow-orkers overcomplicating things because they thought that's how good programmers build things. I know I did that too, when I was first learning to program.

(Popular books on OOP and code style definitely do not help keep things simple.)


I agree. I would say that this is where most of my experience proves valuable: not knowing the latest framework, but being able to find a level of abstraction that promotes simplicity and maintainability in the code. Yet interviews seem to value the opposite.


After all, we all are looking for technical challenges and want to implement interesting projects.

In my experience everyone does, but the difference between the complexity-worshippers and (for lack of a better term) simplicity-worshippers is that the former like to take a simple problem and blow it up with complexity, while the latter like to take a complex problem and make it simple. In other words, some people like the challenge of making things more complex, while others like the challenge of making things simpler.

To make a concrete example, complexity-worship would be something like using half a dozen different new web frameworks/languages/libraries to set up a personal blog, while simplicity-worship would be more like an H.264 encoder in a single file[1] or a self-compiling C-subset JIT[2].

One thing that seems somewhat obvious about the complexity-simplicity divide, and which could also explain the prevalence of complexity-worship, is that it is, relatively speaking, much easier to generate complexity than to reduce it, and unfortunately it seems the majority of developers just don't have the skill to reduce complexity. It's easy to glue a bunch of existing code together without really understanding how it all works; it's much harder to take a spec that dozens of experts have worked on for many years, and condense it into a concise implementation.

[1] https://news.ycombinator.com/item?id=18045494

[2] https://news.ycombinator.com/item?id=8558822


> The managers, that this paper describe, worship complexity without realizing it. They want as large a team as possible and thereby they make a problem complex because a large organization can cause the architecture to collapse.

Maybe we have different definitions of "worship", but this seems like accidental complexity, not reverence. If anything, people tend to worship simplicity.


What people say they want and what they do are only loosely related. I have lost track of the number of in-house / one-off frameworks I have seen that have been justified on the grounds of making things simpler, but which have the opposite result.

Efficiency is another common excuse offered for gratuitous complexity.


Other common excuses are "we won't optimize prematurely" and "we avoid NIH syndrome". You then end up with something like the Node ecosystem.

Simplicity is hard.


>I have lost track of the number of in-house / one-off frameworks I have seen that have been justified on the grounds of making things simpler, but which have the opposite result.

There's also 'simpler for me' vs holistically 'simpler'. A great example of this is when a junior engineer comes into a company, sees the existing solutions, doesn't fully grok them, and jumps to 'let's throw this out and implement something better'. This is pretty common, and the underlying motivation is indeed to make things simpler! But only from that individual's perspective.


Have you seen web development recently? The answer, apparently, is a resounding YES!


Conway's insight, and the subsequent law that bears his name, was so laser-precise that it is uncanny.

Local incentives matter too. In any social system there's an incentive to "flex your muscles" and show off your abilities, since that helps to build prestige and respect.

When coding moves away from "impact", "scaling", and "agility" and back to old fuddy-duddy engineering staples like better up-front design, partitioning problems properly and solving them in toto before shipping (i.e. doing it right the first time), and allocating the right problems to the right people (difficult to do in a team with junior members who need space to grow and experts who can dash things out in minutes), then we'll hopefully be sober enough to see through cargo cult practices down to what works.


There is "complex", and then there is "complicated", I believe the author refers to the latter. A complex solution involves a lot of moving parts, but each of these parts serves a specific purpose, such that it is possible to break down the solution into relatively simple, understandable blocks. On the other hand a complicated solution will have inefficiencies, some of its building blocks may be too big or convoluted to be broken down further, causing implementations that are hard to understand and/or may be redundant, feeding back into the complications. I think Rube Goldberg's Inventions are an excellent example of the difference I'm trying to make.

Social posturing is certainly largely to blame for over-complication, but I think, more deeply, a lot of people hate to let go, to be told that the very thing they were responsible for, that they worked hours/days/... on, isn't necessary any more, or perhaps never was. And then in the other direction, there are what I refer to as "exponential requirements", for lack of a better term. You buy a large bag of coffee that you could simply use as you go and store in the fridge. Or, you could have a jar with some of the coffee in it, and the rest of the bag still in the fridge. Then a plate to put the jar on. Then a new shelf to put those plates and jars on. By the time you've followed this track long enough, you're looking at buying a new house, even though your essential requirements haven't changed from the beginning.


The ability to recognize and manage complexity is surely a strong survival trait, akin to wisdom. As true for hunting and gathering as for particle research. Complexity is also somehow deeply related to beauty, or at least awe. Such emotions motivate us to become more observant of the complex, possibly therefore out-replicating the more oblivious.

Worship of the complex, or at least a particular focus on it, may be built in by Chuck Darwin.


The fetishism of complexity in engineering (and our machine loving culture) is much like the fetishism of helplessness in some of our religions. Both intensely value surrender to an unknowable omnipotent power. A power addressable only by blind ritual.

You might say that in both cases there is a longing for a dissolution of self. Or death even.

Is the machine-lover a death-lover?


When I was younger I used to think complexity was a sign of ignorance. I work as an automotive engine mechanic, so my gold standard for "it's easy and works" was stuff like old Fox-body Mustangs and such.

Fast forward to 2018, and stuff like timing chains is buried under a mountain of engineering sins. I have a Honda with the oil pan, air conditioner, and exhaust in various states of disassembly, the oil drained, and for what? A chain.

As I get older, I start to worship it. It's inevitable that engines come with a spinal cord of three or four dozen sensors you need to carefully disconnect before doing anything. But what I can't abide is the inclusion of demonstrably poor-quality parts. Companies hope the engineering complexity will just baffle people into accepting the idea of disposable plastic valve covers and plastic water pumps, but I remember when these things didn't get thrown out.


> Companies hope the engineering complexity will just baffle people into accepting the idea of disposable plastic valve covers and plastic water pumps, but I remember when these things didn't get thrown out.

I saw this part:

https://youtu.be/NZAWeR46z_Q?t=238

of this review the other day and began to wonder: how prevalent is this really?


The intro on Conway's law was more fun to read than I expected going into this.

It is interesting, because I've seen Conway's law used as a weapon against modularizing something. I think the key is that you need to weigh what you are getting out of the modules. Even if it is, in some ways, more complicated, if you can more meaningfully delegate ownership of parts of the problem because of the modularization, there is a good chance it is still worth doing.

I don't think I can give a brief example. In my case we had three teams; rather than trying to put everything on one team, I was trying to ask what the three major components of the system were. If we couldn't agree on that, it seemed unlikely we were using three teams well.


The "Simplicity: Not Just For Beginners" talk does a great work of touching the problem from the opposite angle : https://www.youtube.com/watch?v=n0Ak6xtVXno


Indeed a great talk.

Here's the related HN discussion link: https://news.ycombinator.com/item?id=18093158


Surely at issue is that we do not know how to generally handle complex problems well. Particularly those outside the domain of computing.

The old fallback of functional decomposition into parts that can be reasoned about and developed independently is problematic due to the need to constantly rework and extend the hierarchical decomposition as more about the domain under consideration is specified.

How to predictably build a payroll system, or air-traffic control system or a much simpler (at first glance) yet inevitably complex real-world system, remains out of reach.

Surely we don't worship complexity; we just still don't know how to deal with it outside of computer science?


The more complex a system is, the more people it takes to write, maintain, and test it. It builds the manager's empire and also increases mindshare for the complexity-driven approach. I've seen it many times.


I've seen an even worse version. If only the person who wrote it understands it, then when someone else tries to modify it, they will likely fail. Then the original author can back-channel predict the failure and use that as further evidence for their brilliance and senior-ness.


Any serious programmer simply cannot worship complexity, because then his/her work would never get done.

You can start with a trivial problem and grow it into a system so complex it's hard to even begin grasping what it actually does and how the software actually works.

If you start with a complex problem the only viable path is down to less complexity; often we won't reach simplicity but at least we're somewhat equipped to handle the underlying complications.


How many times have you stumbled upon something on GitHub that took 100x more lines of code and 20x more contributors than necessary? Probably every day.


Another law of complexity is the Law of Requisite Variety: if a system is to be stable, the number of states of its control mechanism must be greater than or equal to the number of states in the system being controlled.

https://en.wikipedia.org/wiki/Variety_(cybernetics)#Law_of_R...


The law only applies when you need to control the system exactly. No real-world controllers attempt that.

Intuitively, the theorem says that to control a system exactly you have to see any external disturbances coming in advance, predict their effect (which requires knowing the entire state of the system) and generate control inputs to counteract them. But practical controllers usually don't bother with this, and just apply negative feedback based on the observed state.

For instance, to steer a ship exactly you need to know all the waves and wind coming at it in the future. But to steer it adequately, you can just look at the heading and apply a low-pass filter to generate rudder commands. (The low-pass filter might require state consisting of 2-3 numbers).

Or to control data transmission over the internet exactly you need to know the size and timing of everyone else's data packets, but you can do an adequate job with TCP just keeping track of ACKed packets.
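
To make the "2-3 numbers of state" point concrete, here is a minimal Java sketch of that kind of negative-feedback controller. The class, names, and constants are just my illustration (not taken from any real autopilot): it only low-pass filters the observed heading error and reacts to it, with no model of future waves or wind.

    // Rough sketch, for illustration only: a heading controller whose
    // entire state is one low-pass-filtered error value.
    class HeadingController {
        private final double alpha;         // smoothing factor, 0 < alpha <= 1
        private final double gain;          // proportional gain on the filtered error
        private double filteredError = 0.0; // the controller's whole "memory"

        HeadingController(double alpha, double gain) {
            this.alpha = alpha;
            this.gain = gain;
        }

        // Called once per control tick with desired and measured heading
        // (degrees); returns a rudder command based only on observed error.
        double rudderCommand(double desiredHeading, double measuredHeading) {
            double error = desiredHeading - measuredHeading;
            // First-order low-pass filter smooths out wave-induced noise.
            filteredError = alpha * error + (1 - alpha) * filteredError;
            return gain * filteredError;
        }
    }

You would call rudderCommand once per tick with the latest compass reading; no prediction of disturbances ever enters the picture.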


Is there a better introduction to these ideas than Wikipedia? It makes intuitive sense, but I'd like to see the proof of this statement. Wikipedia mentions a "good regulator theorem", but the citation is pretty bad: "Conant 1970".

Edit: The paper must be this: https://www.tandfonline.com/doi/abs/10.1080/0020772700892022...

But I'm still interested if there's a better introduction.

Edit #2: Looks like Ashby's 1956 book around page 207 has an argument based on information entropy. I haven't read it, just skimmed it. I'll look more closely later. You can download the book here: http://pespmc1.vub.ac.be/ASHBBOOK.html


The whole "cybernetics" field is pretty weird to me, I think the notation and lingo have sort of moved on (I could easily be wrong and biased without realizing).

However, the theorems you are talking about have useful, well-studied analogs in the field of control theory.

For example the Law of Requisite Variety is very much related to the concept of under-actuation (https://en.wikipedia.org/wiki/Underactuation).

The good regulator theorem becomes the internal model principle. https://en.wikipedia.org/wiki/Internal_model_(motor_control)

I think you'll have much better luck finding good introductions to these topics in control theory than in trying to learn more about cybernetics.


but who controls the controller?


Because we can feel smart if we create something that others can't easily understand.


I very much doubt this originated with me, but whether I heard it somewhere and took a liking to it or thought of it myself, I became fond of saying this while I was at Google:

"How do you know if you're engineering if you're not overengineering?"

Now I don't direct this at Google specifically. This is more about the mental traps we, as engineers, can easily fall prey to.

Take interviewing and the FizzBuzz thing that was popularized by (if it didn't originate with) Joel Spolsky. The beauty of FizzBuzz is that it's a quick and great _negative_ signal. This doesn't mean that if you ace it you're a superb engineer, but it does mean that if you fail it you're almost certainly not. Hiring (from the employer's perspective) is a numbers game. Every candidate costs you time (giving interviews, writing feedback, etc.) and is a huge opportunity cost (other work that could be done, other candidates that could've been interviewed), so the goal is to filter out the "no"s as quickly as possible.

So the engineer trap when faced with something like FizzBuzz is to think "oh wait, that's too simple!" and to go on changing the problem into something you'll pass if you've heard of a particular obscure algorithm and will probably fail if you haven't. It's an easy fallacy to fall for. Harder is better. However, now you're not optimizing to cull early candidates; now you've designed a test that optimizes for people who do well coding under pressure (with some degree of luck) on a whiteboard. That is completely different from the original intent and (IMHO) almost entirely useless as a hiring signal.
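
For reference, a passing FizzBuzz answer is on the order of this (a quick Java sketch; the class name is arbitrary):

    // The entire exercise: print 1..100, substituting Fizz/Buzz/FizzBuzz
    // for multiples of 3, 5, and both.
    public class FizzBuzz {
        public static void main(String[] args) {
            for (int i = 1; i <= 100; i++) {
                if (i % 15 == 0)      System.out.println("FizzBuzz");
                else if (i % 3 == 0)  System.out.println("Fizz");
                else if (i % 5 == 0)  System.out.println("Buzz");
                else                  System.out.println(i);
            }
        }
    }

If a candidate can't produce something in that ballpark in a few minutes, that's the negative signal; nothing more is being measured.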

Another example: another thing I was fond of saying at Google was that there was a hierarchy of engineers:

- At the top level (in their minds) were the C++ engineers. You're not engineering if you're not writing code in C++ (basically)

- Next tier were the Java engineers. They thought you weren't engineering unless you were writing in Java or C++

- Next came Python and Go

- Last came Javascript

Now at Google I met some engineers who were _superb_ C++ engineers. Like truly amazing. I also met others who were like walking Wikipedias for the C++11/14/17 standards. And no, that's not the same thing.

So one example that springs to mind: someone was once surprised to learn that I didn't know what perfect forwarding was. This is something that probably only needs to concern you if you're writing a widely used C++ library using templates (especially template metaprogramming). As a user of such libraries, it's not often something you need to know.

My point here is that with all the knots C++ has twisted itself into over the years as a result of its origin and legacy, the trap some fall into is viewing such complexity as a virtue instead of baggage.

IME these aren't the people you want making design decisions on complex systems. The people you want making design decisions on complex systems are the ones who reverently follow Postel's Law.


People are definitely suckers for complexity.

Complexity in a system actually creates meaning for the people who do understand it and can navigate it.

Complexity also justifies headcount and effort put towards maintaining, enhancing something.

Finally, people associate complexity with being smart. If you like simple things you are either a newb or you are [very] senior/old/experienced.


YES


No.


Yes.



