Using a lightweight, comprehensible framework is good, until you hit the limits of that framework and start pulling in lots more libraries to fill the gaps (the Sinatra/Cuba world). Using a heavyweight, complete library is good, until you start suffering from bugs caused by incomprehensible magic buried deep inside (the Rails world).
I see the same problem in microservices versus monolithic apps. If you use microservices, you have a lot of duplication and inter-component configuration management complexity. If you use monolithic apps, you get the big ball of mud.
Or, as I sometimes put it, "Which kneecap do you want to get shot in?"
The underlying problem isn't fashion, or bloat. It's that software is very, very complex. Unless you're doing hardcore embedded work in assembly language, you're building a raft on an ocean of code you won't read and can't understand.
A friend of mine put it well once. He said that you should have a deep understanding of systems one layer from yours (i.e. your frameworks), and at least a shallow understanding of things two or three layers away (operating systems, etc.). If you don't have some grasp of the things you depend on, you're relying on magic and are vulnerable. But you can't realistically expect to know it all.
"I don't believe in the no-win scenario." -Captain Kirk
A lot of the problems we face in software engineering are Kobayashi Maru tests. They're no-win scenarios, tests of character rather than ingenuity. There's a certain irreducible complexity to all interesting problems, so at some point, you're not solving complexity, but merely pushing it from one place to another.
> There's a certain irreducible complexity to all interesting problems, so at some point, you're not solving complexity, but merely pushing it from one place to another.
Right, but I'm not about to believe we're even close to that limit. Every day we have people writing bugs which, from a state of the art perspective, are already solved problems. It's like people are out there riding horses while others are driving past in their cars.
I think it's a little better to say that a lot of the problems we face in software engineering are cultural, not technical. So many people adopt these tribalistic mindsets when discussing their preferred technologies rather than allowing themselves to be open to better ideas.
> It's like people are out there riding horses while others are driving past in their cars.
This is a good analogy, but in software development, as in life, there are roads and there are trails. Some developers try to drive their car on the trail and ride their horse on the road! But sometimes you need a horse and have to face already-solved problems because your environment requires it (e.g. using C for embedded development).
> So many people adopt these tribalistic mindsets when discussing their preferred technologies rather than allowing themselves to be open to better ideas.
The problem is there is always a better idea. You have to strike a balance between being open to new ideas and just getting the job done. But there is definitely tribalism, because that is human nature: you earn your way into a technology and community through pain and suffering, so you become attached to it. Maybe some people can avoid that, but I believe it's pretty hard. Especially when the differences between technologies are mostly cultural. You're less likely to be tribal when considering C vs. Ruby, but perhaps more so when faced with Ruby vs. Python.
Sure, there are lots of projects that are cranking out junk redundant code on legacy systems. But even if you do use the best tools, the best processes, the best analysis, the irreducible complexity remains.
For example, I have to read data from one source and write it to another. I can have a single app provide the API for both the writing source and the reading source, or I can have two separate apps and two separate APIs. But I can't get away from the problem of reading and writing. That's irreducible (a rough sketch follows below). If I have separate apps for the two APIs, I have a configuration management issue. If I have a monolithic app that does both, I have a coupling issue.
Imagining that Bleeding Edge Technology of the Month will make this go away is wishful thinking. But facing the truth is awful, so we choose the wishful thinking.
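To make the read-one-source-write-another example concrete, here's a rough sketch. The Source/Sink endpoints, record format, and class names are all made-up assumptions, just to show where the irreducible step lives:

    require "net/http"
    require "json"

    # Hypothetical client for the system we read from.
    class ReadApi
      def fetch
        JSON.parse(Net::HTTP.get(URI("https://source.example.com/records")))
      end
    end

    # Hypothetical client for the system we write to.
    class WriteApi
      def store(records)
        Net::HTTP.post(URI("https://sink.example.com/records"),
                       records.to_json,
                       "Content-Type" => "application/json")
      end
    end

    # The irreducible core: read, then write. Whether ReadApi and WriteApi
    # live in one app (coupling) or in two apps (configuration management)
    # doesn't remove this step; it only moves the complexity around.
    WriteApi.new.store(ReadApi.new.fetch)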
I'm not merely talking about junk redundant code but entire classes of problems such as memory errors (null pointers etc.) and race conditions (in many, though not all, cases).
If you compare software engineering to another field like aerospace or structural engineering, it's laughable how little rigour and formality go into most software. It's shocking how much disdain programmers show for mathematics, which gives us the tools to prove that our applications are correct.
> It's shocking how much disdain programmers show for mathematics, which gives us the tools to prove that our applications are correct.
I personally think these things are great but too expensive. My customers care about solving their problems not perfection. There's a balance to be struck and I'm not excusing bad software, but if the customer isn't willing to pay for better and is happy with the current quality, there is little incentive to put in this extra effort (and therefore cost) when the resources could instead be put to work on providing value in ways the customer does care about.
I guess after you exceed the customer's quality requirements/expectations, there are diminishing returns.
I'd love to produce perfect software, but nobody will pay me to do it.
> I personally think these things are great but too expensive. My customers care about solving their problems not perfection. There's a balance to be struck and I'm not excusing bad software, but if the customer isn't willing to pay for better and is happy with the current quality, there is little incentive to put in this extra effort (and therefore cost) when the resources could instead be put to work on providing value in ways the customer does care about.
I didn't say you had to do all of this yourself. I don't believe every programmer has to have a degree in mathematics. Where I take issue is with the refusal to take advantage of the hard work of others. There exist languages, tools and libraries which have a strong mathematical basis that are ignored in favour of the latest fad. Places like stackoverflow are full to the brim with people asking for help solving problems that they wouldn't have to deal with if they'd chosen better tools.
Ah, you're thinking down at the lines-of-code level. I'm thinking at a much higher, big-project requirements level. It's hard to say which one is worse.
I do, however, think there's a "which kneecap" problem to rigor as well. Rigor isn't free - it comes with costs, in learning curve, in the quality of developer required, etc.
I've been bouncing back and forth between not-rigorous Ruby, kinda-rigorous Go, and pseudo-rigorous Java. I don't think rigor is a killer solution, nor do I think it's a gruesome waste of time. But I find the really ugly problems to be at the requirements and architecture level, not the code level. Not surprising, considering coding isn't my primary work.
After reading that I feel about as lost as I did after that XKCD about correlation and causation https://xkcd.com/552/
Also, just because your comment is popular does not mean your comment is good.
The popularity heuristic may be a little better than the OP suggests, though.
I find it pretty interesting that the only real concern was "can this team solve my technical problem using the best tools in the shortest amount of time" but did not seem to consider things like, "What happens when this team moves on". One of the biggest assumptions that I've made is that when choosing tools, choosing the most popular ones gives you the highest chance of bringing someone on board who already groks them, finding learning material, etc.
I seem to be noticing more often now that "the best tool for the job, for the person, at the time" is completely acceptable. I feel as though this didn't use to be the case, and I know a lot of more "established" engineers who believe it's naive to choose tools based on an inclination or personal/team preference. While in this case, we're getting less code in the dependency, I'm highly suspicious of how their working knowledge, method of code organization, etc... can be transferred over time.
Ultimately though, knowing what actually happened over time with this project would be the most interesting. Does he eventually find new team members who convince him to switch back to a framework that is more widely understood and practiced?
> but did not seem to consider things like, "What happens when this team moves on".
FTA:
> 3. Cuba itself is extremely easy to work with because it barely does anything. You can read the entire source in 5 minutes and understand it completely. I'm confident future teams could pick it up.
I guess the main concern here wouldn't be Cuba, but the "handpicked solutions for common problems that a web app would face" added by the development team.
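For a sense of scale, a complete Cuba app really is only a handful of lines. A rough sketch from memory (the route names and response text are made up, but Cuba.define / on / res.write is the actual API):

    # config.ru
    require "cuba"

    Cuba.define do
      on get do
        # Hypothetical routes, just to show the shape of a Cuba app.
        on "status" do
          res.write "ok"
        end

        on root do
          res.write "Hello from Cuba"
        end
      end
    end

    run Cuba

Everything else (ORM, templating, validation, and so on) is left to whatever the team hand-picks, which is exactly where the concern above applies.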
> While in this case, we're getting less code in the dependency, I'm highly suspicious of how their working knowledge, method of code organization, etc... can be transferred over time.
This article feels like a follow up on "On Ruby"[1] and related to "It’s OK for your open source library to be a bit shitty"[2]. The common theme I find in all three is the quality and health of a dependency. I agree with OP that stars and latest commit are not clear indicators of quality.
I think more useful indicators are tests, documentation, and pull requests.
* Tests. These are things you can actually run to see whether the library works. In most cases you can read them and understand how the library is supposed to be used. Even if the tests passed in the past, they may not pass now, for example with a different version of the language or platform. Test coverage would be a nice signal, but it is hard to measure at first sight, so the ratio of test LOC to library LOC can serve as a rough proxy (see the sketch after this list).
* Documentation. The documentation itself does not change how the library behaves, but it is a clear indicator of whether the author expected someone else to be able to use it, and it shows a sort of responsibility. Most weekend hacks won't have one.
* Pull Requests. If there are old, still-open pull requests that seem useful, it is a bad sign about the maintainer of the project.
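A back-of-the-envelope version of the test-LOC heuristic from the first bullet might look like this (a minimal sketch; the lib/test/spec layout and the 0.5 threshold are my own assumptions, not any standard):

    # Rough proxy for test coverage: compare non-blank lines of test code
    # to non-blank lines of library code.
    def loc(pattern)
      Dir.glob(pattern).sum do |file|
        File.readlines(file).count { |line| !line.strip.empty? }
      end
    end

    lib_loc  = loc("lib/**/*.rb")
    test_loc = loc("{test,spec}/**/*.rb")

    ratio = lib_loc.zero? ? 0.0 : test_loc.to_f / lib_loc
    puts format("library: %d LOC, tests: %d LOC, ratio: %.2f", lib_loc, test_loc, ratio)
    puts "Looks reasonably tested" if ratio >= 0.5

It's crude, but it's something you can actually compute at first sight.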
"That didn't prevent further criticism, and many comments followed recommending Rails and Sinatra as alternatives based on familiarity and popularity."
Ah, the "industry standard" argument, where industry standard seems to be defined as "whatever I've read the most blog posts about", or "whatever was used at my last job", or possibly "what I learned in school (last year)".
One of my major dissatisfactions is newly hired, junior (or mid-level) developers coming on board and then immediately wanting to change everything to be more "industry standard". They rarely seem to try to understand why we're already doing what we're doing, they almost never seem able to explain why what they want to do is strongly better, and they frequently just go off and do it, leaving a pile of projects each doing the same task in different ways.
The problem with stepping into legacy systems from the outside is the "Wow, this all sucks, let's do it over!" thing. Experienced engineers who have dealt repeatedly with the ball of mud (hopefully) learn that completely restructuring legacy code is, at least, a dangerous endeavor, with no guarantee of success. The really good ones know that the unknown unknowns are lurking in that muddy swamp and can swallow development teams whole. Sometimes, that tangle of spaghetti code is actually a cage for a dangerous monster.
Part of the reason the ball of mud is so ugly and messy is that repeated attempts to fix it just get absorbed into the structure. An application that was shaped by many different engineers and leads over many years can have a lot of different flavors of weird.
Another problem (not you doing it) is quoting a Joel Spolsky article from 15 years ago saying "don't ever rewrite from scratch". NEVER ever accepting a rewrite, based on someone's experience in a different field... that's also a bad situation.
About 20% of the projects I've dealt with would have been measurably improved (by pretty much every measurement you can come up with) by a complete rewrite. One of the commonalities I noticed is that no one from the original technical team was still around. The notion of losing "institutional knowledge" about the code goes out the window at that point - it's already lost. You now have new people just adding on more crap (or fixing crap) without ever even knowing why it was crap in the first place.
If someone only wants to keep their systems running - that's 100% fine - no need to rewrite. If the org expects new functionality on a regular basis, and there's no one who truly understands any of the current code, a rewrite may make sense.
One of the problems with popularity is you will get a lot of mediocre developers who don't know what they are doing. You know, the people who pick up tools only because they think it will land them a job more easily. The same people who picked up VB.
You can't build a product with a mediocre team, no matter what tools you have.
If you have a decent team, I think they will be able to manage to build great products even with not-so-great tools. Of course, if you force them to use shitty tools, they might just decide to leave.
> One of the problems with popularity is you will get a lot of mediocre developers who don't know what they are doing. You know, the people who pick up tools only because they think it will land them a job more easily. The same people who picked up VB.
I understand that you're saying it's a bad practice to adopt a tool for reasons other than its utility in addressing a specific problem.
But is it really so terrible for a developer to want to learn a new tool so he can broaden his skillset and make himself a more attractive candidate to potential employers (so long as the tool is the right one for the task at hand)?
There are types of developers who are very limited in what they can do. They learn some basic "tricks" in the form of "If I write this magic spell, I get this magic result".
These are the mediocre developers.
They tend to pick some popular tool and try to market themselves as "an X developer" where X is something that's popular right now.
Then there are developers who can create magic. They can build something almost from scratch without having to have a library that already does it for them. These developers can pick up X or Y or Z in a week or two and then they will become 20x more productive with it than the mediocre developers are.
I'm not saying it's a bad thing to pick up a tool to market yourself as an "X developer". I'm just saying there's a lot more mediocre people in the pool of "X developer", and if that's what you go looking for, you'll get mostly mediocre people.
At the same time, there's a very large number of managers and HR people who want someone who has worked with those libraries, period. They want people that will tell them "I'm an X developer".
Why? If I'm running a business and sit right at the intersection of the business side and the software side of the company, but need help on the side, then I'm hiring for the stack that I use and already understand.
It's not mediocrity, it's called playing to your advantages.
Your problem is, you are hiring for a "stack", not for competence.
For the developers who go out of their way to learn a tool only because people like you will hire them, the problem is that they'll be hired by people who hire for a "stack" and not for competence. Jobs that select this way normally place a negative value on competence, and the developer will face the choice of adapting and becoming mediocre, or walking away and losing some short-term benefit.
A bright person can pick up a new language or framework in a few days or less, but it will take some time working in that ecosystem before they're really writing natural and idiomatic code. All other things being equal, I would prefer to hire someone who has experience in the same stack.
So, when it's time to move to a different stack, do you train your old team or can them? Or worse, relegate them to only maintaining the old stuff until the new stuff is in production?
If you don't move to a new stack, how's your retention?
He talks about the virtue of using new and untested languages and frameworks. The example he uses is Twitter's use of the relatively new Ruby on Rails framework.
Unfortunately for him, this is a bad example. Twitter was notorious in its early days for being down all the time. So Twitter decided to migrate to Java - a dull, over-verbose, yet reliable language.
You're making the mistake of thinking that the software that Twitter builds today should be the same as the software they built early on as a startup company. You can't make those comparisons because as company during those times Twitter was facing completely different realities. What were those realities early on? Twitter had:
1. Small team, small budget, smaller set of technical skills.
2. The need to do a lot with very little.
For these needs, I would argue that Rails was the perfect tool for Twitter early on. It likely gave them a competitive advantage in their development.
However, as with all fast-growing products, nearly every app faces a new set of challenges every time its traffic scales up by an order of magnitude. Twitter is an extreme example, among the most difficult-to-scale apps you can imagine (data is rapidly being created and read; by its nature it's inherently hard to cache).
Their original app was built by relatively inexperienced developers hacking together an MVP. By the time they reached scale, they had a roster of senior developers capable of rebuilding the system way more professionally than they ever could have in the early days. Rebuilding the core functionality was trivial, and this time, they could build it from the ground up to support 5000+ tweets a second.
I'm primarily a Ruby developer. Literally every single time I've started a web-oriented project using something like Sinatra or Padrino etc., I've ended up regretting it. I end up building a shitty, half-featured, buggy version of Rails. My experience is that most other projects end up the same.
I suppose mileage may vary; perhaps it's easier to write bloated apps in Rails. I'm not convinced that's a good reason to avoid it, though.
I don't know anything about the libraries mentioned, but it sounds like you've decided that the Rails way is the correct way to do things, and then try to shoehorn that into the others.
An extended version of the quote would be "Don't just believe that because something is trendy, that it's good. I would go the other extreme where if I find too many people adopting a certain idea I'd probably think it's wrong. Or if my work had become too popular I probably would think I have to change." I took it from this video: https://www.youtube.com/watch?v=75Ju0eM5T2c
Unfortunately this sentiment is why software is plagued with an unhealthy culture of constantly needing (/wanting) to learn new languages and frameworks.
The "elite" developers (who often don't actually maintain real production apps, or need to live with the consequences of their design choices) begin to fad over a new language. O'Reilly publishes a book and sure enough... Influential senior developers who obsess over these elites start fangirling, leading them to convince their bosses/teams/companies to also adopt this new "cool" technology. As the popular tide of generic bandwagoners rises, the "elite" developer begin to feel the pressures of wanting to redefine their identity again, and soon jump ship to switch to the next hot thing.
I think you are correct, I don't think he means popular ideas are likely to be wrong. Just more likely to be misused or misapplied. As in the linked article, they received a lot of advice to use Rails not because it was the right tool for the job but because it was the popular tool for a vaguely similar job. Also, no one had the knowledge to comment on their selected tool because it wasn't the popular tool.
As a developer in the "out-back" (California Central Valley / Sacramento area), I find it a bit funny watching somebody who gets to use Ruby (at all) angsting over which framework to use. At least they have escaped the XML-Hell trap that is JEE.
I'm happy for the guy that he gets to do anything at all beyond that which is promoted by Oracle or Microsoft. I suppose Google might belong on the list of "Promoters not to be Ignored", as well, but they haven't flogged the use of inappropriate hammers enough, yet.
This is the most ironically stupid thing about hacker culture. It's just as 'follow the leader' as directly following the leader, and it makes the culture and the people in it definably predictable.
Actually reasoning about stuff when stuff can be reasoned about, and considering most opinions to be superfluous nonsense is the right direction. Opinions follow abstract models. You can pretty much find a computational or mathematical model, throw some nouns and verbs on it, and bam, you've got an opinion that has nothing to do with reality.
"You can pretty much find a computational model..."
I humbly request some illustrative examples. Note I agree with you. Examples can be powerful (and, unfortunately, polarizing). Whatever you can share would be appreciated.
Epistemology, psychology, and information theory tend to reflect them in an almost mirror way.
Then again, is it just me who sees the patterns in the words, or are the patterns actually there? I think it probably depends on how your mind constructs analogies and relations between its map of cultural topics and dialogue, etc. It just bothers me when I see flashes of pictures that I normally equate with programming and math, that model the relational structure of what I'm reading. The two have nothing to do with one another, and yet they persist in flashing across the visual processing part of my mind without me doing anything. I am occasionally fascinated by determining whether they mean anything, or where they come from.
People use terms though - black and white thinking, etc. Lots of analogies and metaphors have very simple abstract forms. It's memes, deeply validated patterns. They don't have to be true in reality as long as people keep talking about them and agreeing they exist.
I feel as though I have become a scientist scientist. Thanks for the kind words, but I'm just trying to get through a rough mental space. I prefer maintaining a zen beginner mind, because it is fairly easy to silence doubt.
If most people believe X and that makes you think X is probably wrong, then you are probably in one of the groups that believes alternatives to X. There are lots of groups like yours, they can't all be right, and therefore your chances of being any more right than X is are slim (all things being equal). At least, with X, you have the support of the population and can fit into society during your lifetime.
And the nice thing about X is that it has been battle-tested by many people, and while it may be "wrong", it WORKS.
The way you put it, it looks like it's a matter of trust rather than understanding. If you don't understand what X does and how, if you don't want to invest the time to analyze it, then you are gambling. The fact that a lot of people use X makes it look like a safe bet, but the point is that even if it turns out to be a safe bet, it's still a gamble. Isn't that cargo cult programming?
That last line is important. Depending on software hardly anyone uses means depending on software that hasn't been torture tested in production. And frankly, a lot of the ugly weird parts of mature code are scars and armor from the wounds suffered in the real world.
With the past year in retrospect, I can say with certainty that I'm quite pleased with our decision to embrace Cuba. It's led to a highly-declarative codebase with clear layering and very little magic.
If anyone has any specific questions I'd be more than happy to answer.
Mainstream stuff isn't necessarily the best, but it's "good enough". The big benefit isn't the product itself, but a market effect: multiple third-party add-ons and services will be available. Bugs and gaps will have been found and gaffer-taped over. A large, skilled employee pool, commodity-priced.
Basically, "if we all make the same mistake, it will be worth someone's while to take care of us" (baby boomer effect).
This is what makes established products unreasonably difficult to dislodge (from a technical perspective).
At the other end of the capability spectrum, Alan Kay said there's an exception to the rule of reusing rather than reinventing: those who can make their own tools should.
Exactly, unless you're doing an app with high concurrency or real time updating, you can pretty much scale any Rails app to the stratosphere if the app is properly cached and optimized for performance -- and you get to do so while staying within the confines of useful conventions.
Really, I think the developers just chose Cuba more than anything so that they could scratch the itch of getting to try something new.
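On the caching point above, here's a minimal sketch of the kind of low-level Rails caching being described (the Tweet model, cache key, and expiry are hypothetical examples, not anything Twitter actually did):

    class TimelineController < ApplicationController
      def show
        # Serve a recently computed timeline from the cache instead of
        # hitting the database on every request.
        @tweets = Rails.cache.fetch(["timeline", params[:user_id]], expires_in: 30.seconds) do
          Tweet.where(user_id: params[:user_id])
               .order(created_at: :desc)
               .limit(50)
               .to_a
        end
      end
    end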
Counter-point: at least in the open source world, popularity can translate into lots of eyeballs on the code, lots of bug fixes, better ability to hunt down solutions on Google, hire people, etc.
Counter-counter-point: lots of people depending on a project may slow its progress and prevent its maintainers from correcting fundamental design mistakes, all for fear of breaking existing installs.
Even in a non-open-source tool, lots of users means lots of reference on Stack Overflow, and an exponentially better chance of not encountering bugs for your particular use-case.
Any framework of moderate size has bugs in it. The only question is, has the bug that would slow you down been fixed yet?
That's why the size of the alternative is so important. If the alternative is just 300 lines of code (as in this case), then even if you do run into bugs, fixing them is gonna be trivial. But what about 3,000 lines? 30,000? ...
I wouldn't say everybody should automatically go with the popular option, because this kind of herd mentality can really be destructive. If another, less popular tool seems better for your use, give it a fair consideration. I've actually personally chosen that route several times, and I would again. But by doing so you accept the strong possibility of dealing with bugs which your competition doesn't need to worry about.
> When Rails was created, Ruby wasn't mainstream. When Twitter launched, Rails had been out for just four months. Adopting Ruby and Rails were bold moves made by people that didn't care about popularity. They were forced to assess the quality despite the lack of stars.
Apparently they didn't do a good job since they've long since abandoned it.
On the other hand, while it's pretty obvious that Rails isn't appropriate for global-telecommunications-level infrastructure, it was a project that allowed Twitter to get off the ground pretty quickly. That's a pretty valuable feature to have!
I don't remember his name, but one of the creators of Angular said (paraphrasing) "When choosing between libraries and in doubt, choose the one that has more testing in place". This is something I've personally found true the majority of the time.
Which kind of testing? Is it better for a library to be in wide distribution and therefore battle-tested but without unit tests, or to be unpopular but full of unit tests?
It depends on the size of the library. I'm sure people will point to some outliers, but I'd be hard-pressed to find a large library that is both widely distributed and severely lacking in tests.
Sure, sure. If a tool or an idea is popular it still may not be any good.
But the thing about groupthink is, it is at its most insidiously effective when its participants don't recognize it as such. It works best when one feels oneself an iconoclast for saying what most everyone around them believes.
"Following the crowd is easy, but it's often a shortcut to the wrong place."
If a product does not work well, it won't even get much attention or popularity in the first place. There are reasons why products become massively popular.
For micro vs. monolithic, why does it have to be an either/or decision? "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."