Hacker News
Minimalism – An undervalued development skill (volument.com)
506 points by tipiirai on Dec 24, 2019 | 172 comments



I have super mixed feelings on this.

Minimalism makes for a clean, easy to understand codebase, that avoids some of the performance pitfalls associated with bigness.

But sometimes it’s easy to build 80% of what you need, and then massively difficult to get to 100%, and the ease of getting to 80% can lead you astray.

For instance, I can write a pretty fast matrix multiplication algorithm in maybe a few dozen lines of code. But then you look at OpenBLAS, Eigen, MKL, FBGEMM, etc. and you see thousands of lines of code. And it’s not because I’m 100x better at programming than the developers of these libraries; it’s that they’ve really put in the effort to get the best performance, in all corner cases, on all platforms.
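For scale, the "few dozen lines" version is roughly this naive sketch (correct, but with none of the cache blocking, SIMD, or per-platform tuning the big libraries provide):

```javascript
// Naive dense matrix multiply: C = A * B, where A is m x k and B is k x n.
// This is the "80%" version: compact and correct, nowhere near BLAS speed.
function matmul(A, B) {
  const m = A.length, k = A[0].length, n = B[0].length;
  const C = Array.from({ length: m }, () => new Array(n).fill(0));
  for (let i = 0; i < m; i++) {
    for (let p = 0; p < k; p++) { // loop order chosen for row-major locality
      const a = A[i][p];
      for (let j = 0; j < n; j++) {
        C[i][j] += a * B[p][j];
      }
    }
  }
  return C;
}
```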

You could argue that matrix multiplication is an extreme case, and it is, but I think it’s still valuable to critically think about what you give up by opting for minimalism before you write off other people’s software as aimless bloatware.


The message I took from this article was minimalism in features and not necessarily code. And certainly not about library usage. If you're building a product, focus on building a few key features extremely well.

I suspect that part of the reason for feature bloat is that when a product is assigned dedicated, long-running teams, it tends not to reach an 'end state'. The team would rarely say "okay, now the product is mature, let's place it on maintenance and go on to build something else". Instead, they may (unconsciously) continue to justify their existence by continuing to 'improve' the product.


Microsoft applications released as Store apps were an attempt at minimalism in features: their Music, Skype, and Mail apps, even the Control Panel replacement, "Settings". Windows 10 has been cutting features left and right.

To Microsoft, minimalism means:

Rebuild a popular piece of software from scratch

Release the NEW improved shiny app that follows a new design fad. Gloss over the fact that it's unreliable, slower, more crash-prone, and lacks over 50% of the old app's features by promising to fix the 'issues' and missing features with future updates.

Retire the old but fully featured (still working) app while the new stuff still lacks basic functionality.

Retire this NEW stuff when it's still lacking promised features. And replace it with something that lacks even more features.

Windows 10's attempts at minimalism means that a feature you rely on today might be gone forever with the next update or hidden - requiring a treasure hunt.

There's just one app that MS didn't screw up by going minimal -- OneNote. I still can't figure out the Win32 version. The store version just clicks for me. They once included an excellent RADIAL Menu for OneNote but removed it inexplicably.

(Radial Menu of OneNote: https://www.youtube.com/watch?v=Job4Rg-sbDo)

*

Developers pursuing minimalism tend to release software that:

Require too many steps to accomplish simple tasks because controls are HIDDEN away behind hamburger menus and/or frustrating gestures. Example: Windows 8's charm bar, which required moving the mouse off screen to the top right corner to reveal the settings menu.

Have too much white space - and have tiny text (self explanatory)

Replace intuitive, native, user-friendly controls with a minimal but frustrating version. Example: removing the underline from hyperlinks in webpages, or even changing the mouse cursor on hover, so that links and normal text are sometimes indistinguishable. Another example: super tiny (or hidden) scroll bars.

Go mobile only, even when the site/service doesn't use any mobile-only features.

Mobile first - mobile first and minimalism are often mentioned together, and the combination is an excuse to release featureless but bloated websites or apps.


MacOS has been going in a similar direction. But Microsoft really has user-hostile faux-minimalism down to an artform, consistently removing and hiding features for minimum user benefit and maximum irritation.

Small case in point - the search bar in Excel 365. Until recently it was a text box next to a magnifying glass, and you could type into it.

Now it's just a magnifying glass. You have to click on it to reveal the text box. Only then can you type your search term.

This change produces absolutely no user benefit. There is no rational reason to add an extra click. Long-term users now have to go through "Wait, what...?", where previously muscle memory would just work.

Another example: the AutoSave switch in Office 365. Users quite understandably assume that "AutoSave" means the file is saved automatically.

In fact this is a OneDrive-only form of AutoSave. AutoRecover - a different and much older local drive auto-save feature - is unrelated, and has a separate setting in Preferences.

Providing a minimal switch in the UI which doesn't mention the distinction is guaranteed to confuse anyone who doesn't already understand the difference - especially when clicking the switch tells you AutoSave isn't available because of your privacy settings, and doesn't mention OneDrive at all.


Windows 10's networking settings is the perfect example of this. You open the Windows 10 network settings, find the button to get to the Windows 7 version, then click the button to get the XP version that actually gives you a list of all the network interfaces with the controls you want.


It's both. Minimalism should be applied to all layers from the big picture (features) to the implementation details.

> focus on building a few key features extremely well.

This. You nailed it.


Recently, on my first software job (freelance), I messed this up. I had very little opportunity for contact with the client, and they were slow and unreliable with providing feedback. I could tell we would go over schedule if I didn’t produce a lot more work with each feedback cycle, and I desperately wanted to do a good job, so I let myself get weighed down imagining things that they might think I was stupid for not including in each version.

I am still unsure of how I should have approached this, but I know I messed up because at the end it was just way too much effort to fix bugs and add features, but I didn’t have the time to refactor. If anyone has advice, I would be grateful.


Send a spec of what’s reasonable given the budget and ask for approval before continuing. If you’re worried about bugs increasing the time consider adding a contingency to the budget. Pad everything out a bit. If this runs to multiple cycles, give them deadlines for feedback and push out the schedule if you don’t hear in time.

Also if it’s your first time and you need the cash and experience, you can overdeliver a little. But if nobody is ever unhappy with you then be aware that you probably are overdelivering.


1. Having such negative experience pushes you to make a harder, but better, decision in the future. Embrace this, it will happen repeatedly in different ways to you forever.

It is the power base of personal change.

2. It's ok to make the client responsible for everything in writing. Especially if you do not have a deep committed relationship with them.

I have clients I have worked with for over 15 years and they don't even need a quote from me to start work, and they send me a check for any reasonable amount up front.

I have other clients where I spell out every single detail before I start and I do not deviate (add or subtract) from this list (for both our benefits) without written request from them and possibly a fee change.

3. I give away free work to many clients for many reasons. Some I don't even let them know, and others I make sure to put it on the invoice as a discount and rate. This way they know that I did the work, I did it for free, and they should recognize that. Also so that it's easy to charge for this work in the future.

A lesson I learned hard was a few failures:

1. When to start: I did a bunch of work when a client said ok on the phone. When I invoiced, he was mad because he didn't remember saying "ok" to start work. So now I only start work with an email record, period. Even if it's annoying, even if it delays work, even if it makes the client annoyed. Every single time I ask "Can you send me an email with the ok to start this work?" (or I prompt them with an email and they reply). Again, with some clients a verbal ok is fine, but only if I know them really, really well. (i.e., lots of previous work with them)

2. Extra work: I did a bunch of extra work/features on one project, and they wanted me to support the extra work also for free forever. (forever... sigh) And were angry when I said I couldn't...

3. Accountability for timelines: I had a project where every weekly meeting more features got added to the project. We spent half, or more than half, of our budget in meetings. (Yes, that bad.) So I made sure to document all time spent on everything for one month. The next month the lead project manager started cutting back the meetings; instead of us devs complaining about not having enough time, the project manager _knew_ we didn't.

4. Specs and Expectations: Numerous times I "imagined" what the client actually wanted because I knew better than them. I would build something expecting to be paid for it (or at least appreciated) and it would be presented and the client would ask for it to be _removed_ from the project.

This was the last kick in the teeth I would ever take from this, ever, ever again.

Then later a new client (a state university) gave me visual layouts of the interfaces. I had learned some lessons about being screwed over before, so I was going to stick to their layouts no matter what. After it was built, they were _really_ pissed, and had the gall to say to me:

"Why did you build it this way?" And I said "Because it was how it was designed and specced out". Their reply?

"That isn't what we wanted, you were supposed to make what we wanted."

Unreal, but I won this argument hands down. No one can read minds and people who expect you to don't have a reasonable argument if you provide your experience on why you will never do this again. What can they say to you?

My long term clients _never_ give me a hard time about anything extra I do, ever. I know them, they know me and they say "make something that solves this problem the best way you can", and we may tweak, but we respect each other and I fix my errors, and they pay for theirs and we meet happily in between no matter what.

Also, I will no longer support software for free indefinitely; I say I offer free bug fixes for 6 months. Any new feature is paid work, and support after 6 months will need to be negotiated. (All this depends on the client and project.)

I state as much as possible up front about everything (in writing) so there are as few arguments as possible (I hate arguments with clients; they really suck). Doing this extra work kinda sucks sometimes, but the older I get the less I have to do it. The first time it saves your ass you will be happy. And as soon as you write it and hand it over you will have instant peace of mind.

It may take you a few projects to get the hang of these ideas, but it's obvious you recognize there is a problem, and communication and clear expectations are the solution. I have also learned to say "Sorry, I meant to state this up front, I failed to do so, so I will give you X for free for my screw-up, but I still need Y to do Z."

Keep at it, what you are facing is normal, and your desire to do good work is commendable and will pay off in the long term in ways you can't imagine.

Cheers!


That's interesting that extra work got you on the hook for extra support. Could you give a little detail about the extra feature you added that they asked you to remove? That's the sort of experience that I don't want to have myself.


Been in the industry over 20 years now, so I am sorry for not being more specific with my examples here. (I learned these lessons many years ago.) But here are some recollections.

Say, in the process of building something, I added a color picker or a calendar date selector when the client wanted just a text field. Then the color picker/calendar date selector has an issue with a really, really old browser and I have to spend time debugging it at their request (again, because "young and inexperienced" me couldn't explain it properly).

My current self would just remove it, and/or explain that it will cost extra. So 100% of the problems I described above are solved with the mature communication that I learned the hard way.

--Features requested vs features built--

A client will ask for 100 features up front, we build 10 and by the time we have an alpha, he bails on 90 of them, and adds 20 new ones. Of those we bail on half again. This is so normal that I now bring this up at meetings to help us prioritize development.

I have had very long term clients ask me to remove something, then later ask "where did that feature go?", ha. I have to laugh at this, and I explain to them "I don't remove features without written request, if you want me to spend time looking up why this is removed (likely from an email) I can do that..." But they never have me do this.

I have built extra templates/views, sorting features, even tools to add/remove things the client didn't want users to have access to, so I had to delete them. (no one would pay for a toggle to disable a feature they didn't request, hence the delete on my dime)

One client even went so far as to say he didn't want _his_ clients (it was a CMS for web sites) to see how easy it was for him to build sites using the software. (his clients had access to the software for content maintenance)

I had built an LMS (Learning Management System) with a group once (again, I was young and eager to impress), and I vaguely recall adding some features for working with quizzes or something. But I had at least 2 PhDs on the team who mightily protested something about "that isn't how that process is supposed to work... blah blah", so I had to remove them.

I was the sole developer on a project with at least 2 designers and 6 (yes 6) project managers... ugh. They asked for the dumbest things and I would protest a little by building it a "smart" way that wouldn't piss off the users. But I had to revert everything and try to hack my way through making their overly complex visual designs work on web 1.0 _and_ support super old versions of IE at the same time... because project managers know what users want best.

--Design alterations on the fly--

When CSS was still immature, and many layouts were very hard to build, I made some complicated things in a mapping program that insurance workers would use to map out and plan routes and such. Google maps was very new (so maybe 2006/7 ish) and we were building on top of this. And I would build layouts that I thought were ideal for users, because the client was just the business guy he left the design work up to me. And I had to rework the design a few times. Even after I had already specced everything out, and the client agreed.

The lesson here was that the client blamed me for bad design decisions based on our poor communications. I had visually laid out the entire app before building it, but because we kept redesigning the visuals, doing a visual spec for each change was overwhelming. So I just made changes sorta willy-nilly based on meetings. And the client would come back and call a design decision a "bug" so he expected me to fix it for free... ugh.

--Verbal change requests--

I had a client freak out on me (he had a temper and I was young and didn't have courage) claiming I had "told him to do X & Y" on a phone meeting 6 months prior. And therefore I was on the hook to fix it for free. I don't even recall what it was, but I remember the verbal bashing I got.

This is where I learned to stand up for myself in a mature manner. I told him I didn't recall saying that, and that I make a point to only make changes like the ones he was describing in writing. And I pointed out that it was unfair and unreasonable to expect me to remember what I said on a phone call last month, let alone 6 months ago. This is one of the reasons I get changes like this in writing: to avoid these exact conflicts.

He was angry, and I weathered the storm, but it was a valuable lesson. Not everyone is nice and reasonable, and you may have to work with some people like this for a time. Having a good set of communication and process rules keeps you from getting blamed for something that isn't your fault.

This is getting long, but I didn't mean to imply "support" was some huge on-going thing for some giant feature I had spent days on. But more edge cases that the OP was bringing up, but seriously impacted my motivation on the work and affected my relationship with the client.


Wow, lots of experience there. I guess I should not expect written communication to cover me if I want to do anything that doesn't come out of Jira.


I think it all depends on your client and your relationship with them.

If it's a large corporation with no relationships possible (ie, this is a hard learned lesson in itself), I'd likely want as much in writing as possible.

If you are working with a small shop with people you know well, maybe it doesn't matter so much?


AKA "The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time."

—Tom Cargill, Bell Labs


Is this the original version of this quote? I’ve both repeated and heard it many different ways. Most often “the first and last 80%s”.


I've also gone the other direction from minimalism in codebases, and consider it a positive change as a developer. As a junior, I always wanted to roll my own: roll my own advanced multi-select autocomplete input, roll my own Modbus communication library, roll my own internal tool for the company to use rather than an existing product, etc. So what happened? Now we have a relatively buggy implementation that we have to support.

Now? I see things differently. I look at the problem and evaluate existing solutions. 90% of the time, there's a battle-tested open-source (or cheap) solution which fits our requirements that I can quickly integrate without burning piles of company cash building and supporting my custom crap. Yes, it's not minimalist. But it's overwhelmingly a better decision most of the time.

(Mind, I suspect one's experience is highly dependent on the developer ecosystem one is in. Just my 2c.)


This can be considered minimalist. For example:

We use Heroku instead of a custom AWS setup.

We use Sentry instead of a hand-rolled exception tracking system.

We use Postgres instead of a custom ...

We are minimalist in the sense that we minimize the code we have to maintain in house.


That there are two opposite interpretations of "minimalist" in the article and thread with respect to build-new vs. use-existing decisions probably suggests that it isn't a very meaningful metric.

It would be ideal if you could start with the tactic the article advocates and then easily switch from a stripped down custom approach to a more common denominator standard approach exactly when it begins to have better ROI. But in the real world, switching costs are high, ROI is impossible to measure perfectly, and there are other variables that matter a lot (like your future employees' familiarity with tools). My personal preference is in line with the article's suggestion - I strongly prefer building little purpose built things to reading documentation for and hacking around inconvenient parts of standard tools - but I tend to think it makes better business sense to go with the standard tools for things that aren't directly in your project's core competency.


It's best handled case by case as with most decisions.

Should one build their own database? Probably not. That introduces more complexity.

What about an npm package to pad strings? It's simpler to write that logic than to deal with one more dependency.
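The canonical example is the left-pad logic: a handful of lines, and modern JavaScript even ships String.prototype.padStart built in, which makes the dependency even harder to justify. A sketch:

```javascript
// Pad `str` on the left with `ch` until it reaches `len` characters.
// Equivalent in spirit to the infamous left-pad npm package.
function leftPad(str, len, ch = ' ') {
  str = String(str);
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}
```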

I'm giving two extreme examples to illustrate that "minimalism" may not be a useful criterion.

So seems like we're in agreement :)


Minimalism isn’t a metric, because as you say, there is no object or method of measurement given. Might as well say that your metric is “betterness”. Better what?


I've found that, to some extent, the reason that I tend to roll my own implementation of various things is that I don't have the authority to add dependencies to our codebase.

Some of the transition from rolling your own to just using something else probably comes naturally from gaining sway over a project to the point that you can actually make the call to import a new package.


I've followed a similar trajectory over the years, but I'm currently back to minimalism. Perhaps the pendulum will swing again. In any case I'm not as extreme about NIH as I used to be. I try to make the decision for pragmatic reasons, not emotional ones.

What pushed me back was the current JavaScript landscape, with endless breaking changes to the major UI frameworks and constant security vulnerabilities being reported in our code bases by GitHub (they're even doing auto pull requests now).

In my experience, updating any decently sized web app project is a huge pain, and very time consuming. Everything is constantly moving. Libraries, frameworks, build tools, node, etc etc. These are all dependencies.


It looks like yours was a case of NIH syndrome. That has nothing to do with minimalism.


NIH is absolutely relevant here, since the article showed 3 (out of 3) examples of it.


The comment was relevant, and welcome. Sorry if my response gave a different impression. I just don't agree with that premise of the article.

There is extremely bloated software that can be replaced by a few lines of code but, in general, minimalism is hard to get right and requires a high level of expertise.


I agree with what you're saying about how the greedy approach to covering all your cases isn't usually optimal, but if a manager tells me that their rat's nest of a 30-year-old enterprise software codebase is an outcome comparable to the decades of research put into matrix multiplication algorithms, I'm going to assume that department is bonkers until proven otherwise.

Often these systems grow according to that greedy approach, sprouting a feature here and there as sales tells the team to add things, and then you're left with a system that was more grown than designed.


I wish I could read the article but it got the hug of death, so I'm basing this on your comment:

I agree with you -- that's why I think clean code is less important than clear boundaries. All that messy code for matrix multiplication poses a lot less risk for its consumers when the API is clear, e.g. pass in a 2D array of integers, receive an integer in return.

That's pretty obvious for me to say, I think, but you run into situations where an app, for some reason or another (the main culprit IMO being Not-invented-here-syndrome), couples code like these matrix calculations to constructs unique to the codebase (objects, database structure). That's when it turns to hell.


You bring up a good point, but I think the takeaway is that you need to be thoughtful about the process, not necessarily that one way or the other is always right.

How often are breaking changes introduced to the APIs of OpenBLAS or LAPACK? I would guess approximately never. (I could be wrong though; I'm actually surprised and a bit disturbed by how much activity the GitHub repos are still getting.)

Beyond that, how much of the API do you need, or are likely to need in the future? Why not implement that matrix multiplication yourself to start, but abstracted in a way where you can drop in OpenBLAS in the future? You can even copy their API if you don't want to abstract, and include a bit of profiling to warn you if it suddenly gets very slow on a certain platform.
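One way to sketch that abstraction (the names here are illustrative, not from any real library): call sites go through a small facade, so the hand-rolled backend can later be swapped for a BLAS-backed native binding, and a crude timing check flags when the minimal version stops being good enough.

```javascript
// Hand-rolled backend: the minimalist starting point.
function naiveMultiply(A, B) {
  const m = A.length, k = A[0].length, n = B[0].length;
  const C = Array.from({ length: m }, () => new Array(n).fill(0));
  for (let i = 0; i < m; i++)
    for (let p = 0; p < k; p++)
      for (let j = 0; j < n; j++)
        C[i][j] += A[i][p] * B[p][j];
  return C;
}

// Facade: callers depend on multiply(), not on any particular backend,
// so an OpenBLAS-backed binding could be dropped in here later.
let backend = naiveMultiply;

function multiply(A, B) {
  const start = Date.now();
  const C = backend(A, B);
  const elapsed = Date.now() - start;
  if (elapsed > 100) {
    // Crude profiling hook: warn when the naive version gets slow.
    console.warn(`multiply took ${elapsed} ms; consider a tuned backend`);
  }
  return C;
}
```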

In my experience the chances are good you'll never need that last 20%.


I believe there's work to be done for each new CPU architecture (Broadwell, Skylake (AVX-512), Cascade Lake, let alone ARM or other architectures). The code needs to be updated for things like L1 cache size, number of registers per core, and number of adders per core. So there will likely continue to be frequent work on BLAS implementations until there's some very smart optimizing and profiling compiler (which is related to what ATLAS does I think).


That activity is probably grad students and post-docs looking to publish any overall optimizations or implementations for exotic architectures they can come up with.


Some of the activity in the last decade includes significant improvements in documentation, build, and test automation. The code might have been good before, but the usability and quality of testing have improved significantly.


If true, even more disturbing.


If you need the full power of BLAS, etc then it’s absolutely the right call to use the library. The situation outlined in this article is different: you just need one function (say dot product) and so the minimalist approach is to not take the dependency.
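For the dot-product case the trade is especially lopsided; the entire "dependency" is a few lines (a sketch):

```javascript
// Dot product of two equal-length numeric vectors. This is the whole
// "library" the minimalist approach replaces the BLAS dependency with.
function dot(x, y) {
  let s = 0;
  for (let i = 0; i < x.length; i++) {
    s += x[i] * y[i];
  }
  return s;
}
```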


I surely understand what you are talking about. However, the three examples mentioned in the article are cases where only 0.5% of the code does the core job. I'm sure there are more such examples, but this doesn't apply to all projects, of course.


Yes, something like that is the reason libraries like React exist, even though many devs can write a VDOM in under 100 LoC.
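The core of a VDOM really is tiny; here's a toy sketch that builds vnodes and serializes them to an HTML string (diffing, events, components, and browser quirks are the other 90% that React covers):

```javascript
// Minimal virtual DOM: h() builds plain-object vnodes,
// renderToString() serializes a vnode tree to HTML.
function h(tag, props, ...children) {
  return { tag, props: props || {}, children: children.flat() };
}

function renderToString(node) {
  // Text and number children render as-is.
  if (typeof node === 'string' || typeof node === 'number') return String(node);
  const attrs = Object.entries(node.props)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join('');
  const inner = node.children.map(renderToString).join('');
  return `<${node.tag}${attrs}>${inner}</${node.tag}>`;
}
```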


Why not use APL for matrix multiplication?

Minimalism without sacrificing performance.

Look at kx.com, shakti.com.


Do not use APL if your main requirement is fast matrix multiplication. I work on implementing Dyalog APL; our matrix multiply is not as good as typical BLAS implementations, and I think this is also true of other APLs (most currently maintained APLs are not performance-focused).

J replaced their matrix multiply with a good BLAS-based implementation a few years ago but only supports double-precision floats and tends to be slower than Dyalog for other operations. I don't know about K's performance.


I feel like this misses the point

Those projects are organized around one monolithic package of code that could be broken out into separate libs and composed together in novel ways.

A GUI is just the set of default compositions an opinionated team felt would connect best with an abstract perfect customer in mind.

Each chunk of code could be managed then by smaller teams, and the inputs/output spec more easily understood by all.

Unix composability at scale is what the web and devops have been peddling philosophically, IMO.

Lots of people like the desktop metaphor. Personally more of a “computer is a text editor I use to compose interesting outputs”.


This applies at all levels of the stack, even hardware. I realize the author is talking about javascript, but I can think of several hardware projects where minimalism is clearly superior.

At a previous employer (whom I cannot talk about), we were working on an ASIC. The ASIC team was new, brought in for the job, and was eager to include every possible feature that stakeholders wanted. The stakeholders seemed to view this as an "it's my birthday" kind of scenario, and so asked for lots of features (it wasn't going to cost them anything, after all). We had one senior hardware engineer who was not on this ASIC team but was asked to sit in on meetings, and he was the "no" man. Somebody would request a feature, and he'd sketch out on the whiteboard what it would cost in chip area, power, and verification time. The ASIC was delivered more or less on time, and was wildly successful (I think; I'd left the company by then). If it wasn't for this "no" man, I think they'd still be trying to verify all the features other teams had asked for.

I never worked at Intel, and I don't know the inside story, but I'm guessing the same thing happened with their server NICs. The original 10G and its respin (82598, 82599) were simply brilliant in their simplicity, and they were one of the first full-speed 10G NICs in terms of packet rate. Flash forward to the 40G/10G xl7xxx series, where you have a kitchen sink's worth of features, and the host has to do crazy things to deal with limitations in a basic core feature of an Ethernet NIC (no more than 8 DMA s/g segs per emitted TSO packet). Flash forward again to the 100G Columbiaville, which seems to be more of the same -- with the same TSO bug!


I'm glad to hear other people talking about how great the Intel 82599/Niantic was and how they have come off the rails since then.

I'm so fed up that my new year's resolution is to make a new NIC that's like the 82599 but more so.


I worked for another NIC vendor at the time, and we were totally in awe of the 8259x. We had a 10G NIC that could do line rate with full packets, but we did a lot of PCIe in firmware, so we could only handle ~1.5Mpps (about 10% of line rate in terms of pps).


I deal a lot with white box networking hardware and software and have never encountered this bug. Can you point me to some more info? Thank you.


See "ixl_detect_sparse()" in the FreeBSD ixl driver: https://svnweb.freebsd.org/base/head/sys/dev/ixl/ixl_txrx.c?...

Or __i40e_chk_linearize in the Linux i40e driver: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

And their 100g NIC has the same bug (at least according to the FreeBSD driver that's in review now).

ARGH!


Thank you.


I agree with the article. There is too much crap today in software. Much of it external crap.

When I was young, I used to scoff when old programmers told me not to include external/3rd-party libraries. What did they know, these libs are so convenient! Today, I clearly see the downside of doing that and the slippery slope that it introduces.

Rather than do the hard work myself upfront and pay that cost, I let someone else do it for me and simply included their work. Great, right? Not really. Now, I'm at their mercy. They break things all the time, don't address security issues, abandon the code I rely on, don't support OpenBSD armv7 (or whatever you run on), etc.

Now, I'm much older and only include when there is absolutely no other way (very rarely).


> old programmers told me not to include external/3rd-party libraries

That's me. I also avoid plug-ins in things like our bug tracker and build system. I'm the person that maintains our Redmine and Jenkins machines, and over the years I've gotten lots of requests for plug-ins that do pretty neat things. I almost always refuse, just because it makes updating such a pain in the ass.

Generally, I do as little custom configuration as I possibly can these days. Twenty-five years ago customizing Enlightenment was how I spent about 30% of my time at the computer.


I am the same way with 3rd party libraries. My policy is that I will allow the use of a 3rd party library if it can bring some immediate value without compromising the downstream roadmap of the product. Additionally, any 3rd party library used should be continually evaluated for risk. Keeping an eye on how frequently certain things are maintained, their various roadmaps, etc is quite the chore. But, if you don't do this you are setting yourself up for disaster. At the slightest hint of something looking like it would compromise us in a 3rd party lib, we will take the time to implement it from scratch 100% our way.

That said, there is some interesting nuance I am starting to see in various communities. For instance, in .NET land, the Newtonsoft.Json library is extremely popular and was eventually scooped up into the official Microsoft API (and then further enhanced performance-wise). As a result, we had a clean migration path from a 3rd party library to the 1st party's API implementation of the same functionality. This is the most ideal situation from my perspective, and I would really like to see more of this.

Perhaps a new system from various vendors where a 3rd party library author can apply for 1st party sponsorship and/or certification of their library. Since Microsoft owns GitHub now, this could be something they consider integrating directly into the product. GitHub already has quite a few features that could help to support such capabilities (e.g. static code analyzers, Actions pipelines, etc.). Additionally, if you are using a CLR language like C#, exceptionally powerful tools like Roslyn are available.

I realize this may create certain perverse incentives, but seeing a "Certified by Microsoft" or similar label affixed to a GitHub repository would give me a substantial boost in confidence when using this author's library. This would also serve as an excellent canary (i.e. if this status is revoked based on malicious activity, too many incidents, lack of maintenance, etc.).


I shudder every time I run npm install on my lab's main software project, which downloads hundreds if not thousands of external dependencies and lists more than 400 vulnerabilities.

Conversely, when I own a project, dependencies are kept to the minimum, and it's often code I own too.


We have a new React front-end for our application. It's fairly simple, but the amount of crap installed by npm is incredible. Then trying to fix bugs is a nightmare, depending on what needs to be done.

This stuff was supposed to make our lives easier...


This would make sense if it worked. There's a fine line between "not-invented-here syndrome" and minimalism as you describe it.

If you can account for all the problems with the basic functionality the original big library tried to solve, this might work. But, if you're rediscovering all the problems again, and trying to solve them again, it's probably not the best use of your time.

Further, if something is big enough to become a norm (like Disqus for comments) - creating yet another sign-up form for comments has enough friction to discourage at least me from commenting. (And the one on this blog is broken).

I'm not saying your approach is wrong, but there's pros and cons to both, which don't seem to be captured here.


There is indeed a fine line between NIH and minimalism. They have a strong correlation. You have to compromise between unnecessary features/bloat/crap or simply doing it yourself. It's often surprising how easy it is to make something from scratch. Hence NIH.


It's not that fine a line. There's tons of space in the middle and I find a steady supply of coworkers who love to write code and hate to use libraries who never bother to understand half of what the libraries they do use are capable of. I may be slightly unusual in my habit of trying to find new uses for the tools we already have, but I'm not abnormal. This is something mature people should be doing as a matter of course.

This "everything sucks (from my myopic viewpoint)" thing is like a bizarre form of learned helplessness. And I say that as someone who complains a lot about the quality of tools and libraries. Only about 10% of the time do I think that means we should write our own. 40% of the time we should pick a different tool, and half the time we should try to convince the author to fix the worst aspects (when the problem is deeper than a PR can address).

Code you wrote is easier to understand for you. Of course you're going to be more productive with your code but what about everybody else? Are you making any compromises for the comprehension of others? Do you remember nothing about middle- and high-school coaches complaining about 'ball hogs'?

The optimists think all of their code is amazing and bug free and hence cheap. But it costs everyone else time, respect (either for you, or themselves), but perhaps most importantly, resume material and independent support options. You are hijacking the project to fluff your own ego.

For me, the worst quartile of my jobs have been NIH that turned into codependent bullshit drama, and it causes a pretty intense histamine reaction (as is evidenced in my phrasing in this comment) for me with people who are not self-aware enough to allow at least the possibility that maybe 50 people who have been thinking hard about a problem for five years are better equipped to solve it than you can in a week of bravado, followed by months or years of dodging responsibility for the problems you created by substituting your judgement for the rest of the team and the entire Internet. Just get over yourself and be a team player for once in your miserable narcissistic life.


This can truly cut in either direction, and one of the benefits of long experience is learning to judge the over and unders on what is often a crucial educated guess.

Certainly, though, it's disastrous to assume that some existing library/product/whatever is going to be superior to home-grown just because a lot of people use it, there's a big company behind it, etc. That piece of software was written to cover a certain area of design and business space, and that may not intersect very well with the design and business space you need covered.

For myself, I do start with the hope that I'll be able to avoid writing some big chunk of code (by using some existing piece of software). But I'm very wary, as external software comes with costs, even if it's "free".


I agree — there's tons of space between "everything sucks" and "bloated stuff sucks". We use libraries a lot, but often less is more.


I think the key is having someone on the team experienced enough to estimate how much time to spend experimenting up front, and to recognize when things are getting into the weeds.


An example of your point is ORMs. I’d much rather work with the bloat of Hibernate than any of the in-house ORMs I’ve seen.


At a recent job, people were advocating for ginkgo, a non-standard "reinvents the wheel" type of testing framework in golang. If you don't know golang, it has near perfect tooling out of the box, including a standard test framework.

I joined on the brink of this decision, i.e. moving to Ginkgo. I was the first and only person to ask: what does Ginkgo do for us that "go test" does not? Nobody could really answer that.

Turns out that, much like myself, most engineers who weren't Googlers at the time didn't really know golang and its environment. They all kinda assumed that, like any other popular language in its infancy, esp. for systems programming (like C++), golang requires a test framework supplement.

Well, no it doesn't. "go test" is the standard, which, if we're lucky, means we never have to re-write tests 10-20 years down the line.


I had to go look up Ginkgo... and I'm not sure it's an apples-to-apples comparison. The built-in Go framework for unit testing is great, but I don't think I would want to try to write clean BDD tests with it.

Part of the value-prop for Ginkgo is that it gives you a standard BDD DSL, which, if you're doing BDD, is half the battle: describing and testing the behavior with a human-friendly programmatic interface.

On the other hand, if none of the team could articulate why they needed Ginkgo, that probably means they don't actually know what BDD is, and your point still stands.


Which, IMHO, would behoove the team to learn BDD conceptually, in order to assess its appropriateness as a tool in any situation.


I’ve yet to see BDD used, even though I’ve seen BDD frameworks used a lot. BDD frameworks happen to be excellent for unit testing too! They force you to write better testing code because you need to describe each action. In normal unit tests, I find developers use shenanigans to shortcut testing something.


oh yeah absolutely, ginkgo has its use. It was not suited to what we were after, though.


> Nobody could really answer that.

Sounds like it was dismissed without evaluation; your comments seem to support this, as you seem to think it offers little or nothing over Go’s standard bare-bones testing.

Ginkgo doesn’t re-invent the wheel regarding Go’s out-of-the-box testing, it builds upon it. Ginkgo tests run just fine with ‘go test’, except the output is a little less detailed/pretty. And you can still use standard Go tests alongside Ginkgo, they play just fine together.

Go’s OOTB test story is pretty good. But one cannot easily have hierarchically structured tests, the output doesn’t clearly describe what a failed test was testing, one cannot easily enable/disable whole sections of tests (while getting reminders that tests are ‘pending’ so one doesn’t forget about them later), and there’s no randomised test ordering or clean/easy before/after test methods. And those are just the features I use, off the top of my head — it does much more.

And Ginkgo is usually paired/installed with Gomega. Assertions written with Gomega are very easy/nice to read, no matter what kinds of values one is checking, and Gomega also includes all manner of helpers for assertions and matchers.

Go’s OOTB testing is pretty good. But eventually one will realise that one is re-inventing wheels when writing yet another bunch of testing helper funcs — then the path either leads to building one’s own library of helpers, or realising that someone has already done a good job of this.

I’ve been programming Go for a few years now. For some smaller projects, I just use the standard testing stuff, but for others I can write more readable tests, test more thoroughly, and write such tests in much less time when I use Ginkgo and Gomega.


God. I had to take over R&D of a service at work and despite the rest of our division already using 2 unit testing libraries and 3 testing idioms, they picked a fourth and a totally different reporting tool.

To be fair, one of those I introduced, because the oldest one we were using was demonstrably broken and prone to human error. I followed it up by shopping it around and getting others on board. One guy I deputized got a promotion in part from finishing what I started. The old one is just barely hanging around due to inertia and fear of breaking changes, but the coverage (% of overall, and I'm pretty sure LoC as well) continues to dwindle.

These guys picked a new one and then did nothing. Never talked about it, never pitched it to anybody, just forked our testing process. If that was the biggest problem with this code I'd just roll up my sleeves and fix it, but they broke so many other rules too - ours and ecosystem rules - that this is the least of the problems with the code. I might get around to looking at it in six months. I've been working on it for three months already.


> If you don't know golang, it has near perfect tooling out of the box, including a standard test framework.

Does it really?


I would agree, "near perfect" seems a bit extreme.

But golang tooling is often simple and "good enough", which IMO is sometimes the best option available :)

Never underestimate "good enough" - it's good enough :D (But don't try to explain that to someone you're dating, LOL)


Integrated stuff can sometimes also be borderline unusable. For the .NET project we’re currently working on, we decided to log using System.Diagnostics.TraceSource and co. It happens rather regularly that we wonder why something doesn’t get logged, or where something is getting logged to.


I see the point, but I don't agree with the author. If I'm not mistaken, it took more than two years just to build their product, and they're still pivoting to find a way to monetize their users. I'm not sure this delay was caused by the minimalism approach, but startups usually need to focus on their core business in order to find product-market fit.

In an ideal world, we should be focusing only on our core business, not the tooling around the product. OK, you just set up a hook for your error events in your front-end code that lets you collect the errors as events, but Sentry is doing much more than that. If that is the only feature you need, that's OK, but the reality is that as your application code grows, you will need many more features for your error tracking.

Eventually, you will need the UIs that let you understand the context of the user, aggregate multiple events as one exception, send the errors to the relevant people, etc., and you will spend much more time building out these features, and the switching cost to a real error tracking tool will be much higher.

Tools such as Sentry and Google Analytics provide an opinionated way to accomplish a specific set of tasks for typical use-cases. The advantage is that they have seen many customers, use-cases and edge cases, so they've adapted their product to cover them all. You don't need to do the same unless it's your core business.


I think you're right generally, but not universally. There are cases where it makes sense to use a one-size-fits-all dependency and cases where it doesn't. I think the author is right that at least some of the time, it's better to DIY certain features, at the risk of needing more work later, than to pull bloat into a project.

Responsiveness matters for how polished an application feels. Someone web browsing with an i9-9900K computer and gigabit fiber won't notice at all; others will, to varying degrees. Some people rely on HughesNet satellite because no one will run cable to them and cell service is crap. It's my experience that people like that, with slow connections or slow hardware, frequently have to endure garbage-tier UX because developers preferred overwrought 3rd party tools to rolling their own features, to no apparent benefit (dev time included, in some cases). Even in the middle ground, 300ms vs 150ms matters.

While "disadvantaged users" may not seem like low-hanging fruit to go after, if you impress someone like that in your audience, standing out as having more polish than usual on your product, they'll assume the best about your engineering ability and maybe even turn into a grassroots marketer for you.


Yeah, I came in here to call out the error tracking use case...a good error tracker needs to do a _lot_ more than that.


I suppose your requirements are different. We want to deal with errors like we deal with an inbox: fix issues until we have "inbox zero". In our case that is all the client we need. There's obviously more logic on the backend, but not much.


Yeah it needs to do that, but that likely means that deploying a particularly egregious bug will fill your inbox with repetitive and useless data. It should also give you information on the current user, the browser, locale, etc. None of that extra stuff is really extra, it's critical for quick debugging. It's also hard to get right, and doesn't preclude inbox zero in any way.


Internally we use Redis sorted sets to merge the errors into one entry on the inbox. We get the browser info from server HTTP headers. It's a cute list of errors that we regularly run through.
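To sketch the shape of it in Go (illustrative only; an in-memory map stands in for the Redis sorted set, and the real keys and payloads aren't shown here):

```go
package main

import (
	"fmt"
	"sort"
)

// merge counts identical error signatures, the way repeated ZINCRBY
// calls would bump a sorted-set member's score in Redis. The map here
// is only a stand-in; Redis makes the counts persistent and keeps them
// ordered by score server-side.
func merge(errors []string) map[string]int {
	counts := make(map[string]int)
	for _, e := range errors {
		counts[e]++
	}
	return counts
}

func main() {
	inbox := merge([]string{
		"TypeError: x is undefined",
		"TypeError: x is undefined",
		"ReferenceError: y is not defined",
	})
	// Print the "inbox": one entry per distinct error, most frequent first.
	keys := make([]string, 0, len(inbox))
	for k := range inbox {
		keys = append(keys, k)
	}
	sort.Slice(keys, func(i, j int) bool { return inbox[keys[i]] > inbox[keys[j]] })
	for _, k := range keys {
		fmt.Printf("%d x %s\n", inbox[k], k)
	}
}
```

In the real thing, ZINCRBY on a sorted set gives you the same per-signature counter, already persisted and sorted by frequency.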


Well, your error tracking solution is not as simple as a JavaScript snippet, then.


Good for all projects, but how much does the project at hand actually require? And adding dependencies later is a lot easier than removing them.


This reminded me of the Twain quote: "I didn't have time to write a short letter, so I wrote a long one instead."

Sometimes I don't know how to solve my problems with tiny amounts of code. It often takes skill and effort to come up with an elegant solution. But I agree it is something we should try to do when we can.


> the Twain quote: "I didn't have time to write a short letter, so I wrote a long one instead."

Friendly amendment: This has been attributed as being originally from Blaise Pascal in 1656 [0]

[0] https://en.wikiquote.org/wiki/Blaise_Pascal


Or maybe even earlier by Cicero. Seems like a common sentiment through history. Thanks for the link!


My rewrites always end up much smaller than my original rushed implementations. I think it comes down to a few reasons

* I get better at writing code.
* Better understanding of the problem: the feedback from the first iteration usually tells me what the code actually had to do, not what my boss thought it needed to do.
* A working but messy and bloated first iteration shows me that I can solve the problem at all; the rewrite lets me do it with some grace.

The hard part is minimizing the time spent on the first messy implementation so more work can go into the good final iteration.


The nice thing about writing a long letter is that it doesn't have to be maintained at significant cost over time. :-)


Author here: over the years this has been my prime strategy on building new stuff. First jQuery Tools, then Head JS, later Riot JS. Happy to talk about minimalism on this weird announcement day.

Happy holidays!


The link for the Disqus bullet (to try your alternative commenting system at the bottom of the page) is broken. It points to:

https://volument.com/blog/minimalism-the-most-undervalued-de...

... but what you want is:

https://volument.com/blog/minimalism-the-most-undervalued-de...

Two things suggest you've still got some work to do cutting out the crap:

1. The commenting widget manages to foil my ability to select text and keep it selected.

2. If I disable JS (see #1) and reload the page, I just see one big blob of gray-green-blue.


The link is now fixed. I'll look into the other two issues after the Christmas chores. The JavaScript-disabled issue is probably non-trivial, but obviously doable. Thanks!


Without javascript this shows a blank page. And it's about minimalism?


It's a bug. Not related to minimalism :) Thanks.


Ok, but now you have to tell us whether the bug was in your code or a dependency, and tell the truth!


The bug is in our code.


tsk tsk if you had used a dependency it would have been bug-free /s


Yup. In case all the dependencies are bug-free.


Another valid option is to fix bugs. Now the page renders nicely with JavaScript disabled.


How do you weigh minimalism in a build vs. buy decision?


It's a strategy. To use minimalism as a competitive advantage. In an attempt to build products that are compelling. People only buy products they want.



We embraced minimalism on my team a long time ago—no tests, no docs or comments, variable names are usually 1 or 2 characters, no indentation, no code reviews or QA, no meetings, etc.


...even no code. /s


That’s so minimal i’d call it basic.


The most important kind of minimalism in my mind is YAGNI. Every single time I coded something I didn't need yet, I ended up never needing it; at least not the solution I came up with. And once the complexity is in there, it's very difficult to scale back.

This applies to everything I've worked on, from personal projects of up to 10 KLOC to 3 MLOC commercial systems. It's just never a good idea, in my experience.

The code [0] I write these days tends to be pretty raw, but every single line fills a fundamental purpose.

[0] https://github.com/codr7/libceque


YAGNI conspires against me.

When I spend a lot of time building potential future features or creating a bunch of abstractions, I pretty much never end up using any of them, and it ends up as a (potentially huge) waste of time.

When I'm building something and thinking about whether or not to make it more abstract / support potential future features, and decide "nah, ain't gonna need it", inevitably within a week or month I get some feature requests for those exact things, often requiring me to do a lot of rewriting from the ground up which I wouldn't have otherwise needed to do if I had started off with a more abstract model or added those features when I was initially considering them.

(Obviously this isn't literally always true, and I'm getting better at it over time, but those are the ones that stick out.)


This is the pain with “agile”. The problem with building the skateboard first and morphing that into a car (as per the cartoon strip meme/trope) is that a car is nothing like a skateboard. It’s a rewrite. If you can know you need a car upfront then you save a YAGNI/OFINI cycle. I’ll let you guess what OFINI means :)


It's certainly no Silver Bullet, and can be taken too far; but it was born as a reaction to the industry going way down the opposite drain. At least at that point, you know what kind of car you need. I still find it easier to refactor a simple solution into a more elaborate one than going the other way, easier to motivate as well as ripping out features is pretty much impossible in a corporate setting. Truth is always somewhere in between the extremes.


Agile could never have solved this problem well. But it did encourage the problem to appear by claiming that it could.


I'm all for minimalism, but the very first code example is a bad example. It's saying "look at how small your error handling can be" without mentioning that now you have to build something to inspect said errors.

There is minimalism and there is punting the issue down the line, and this is an example of the latter.


The core purpose of an error reporting client is to send the errors to the server. Anything more is extra features (that we haven't had a need for so far). Most of the logic resides on the backend, and it doesn't take much to get the job done.


"Be minimalist and don't use third party libraries and services. Except for ours. Use ours, because we don't have two decades of cruft under our belt. Excuse me, we're /minimal/."

The article kinda uses the language of a sentiment I share to an extent (I think most devs are far too eager to pull in dependencies — avoiding NIH doesn't mean you have to pull in the entire 3rd party dependency, not if you can read a paper and implement it in a dozen lines), but while a completely in-house analytics package could be smaller, there's a reason why commercial packages are big.


> while a completely in-house analytics package could be smaller, there's a reason why commercial packages are big

But are those reasons your reasons, and do some of them warrant questioning in the first place?


Yes. There is a reason commercial packages are big/bloated/complex etc. This is an opportunity for all of us.


There seems to be a bug in the comment form on their site. It gives me the option to edit/delete the last comment and when it asks me to create an account, it shows me someone else's email address. I feel like this is the major reason for not rolling your own.


The email address issue is now fixed. We'll fix the edit option visibility issue a little later (Christmas time happened)

Bugs should not be a reason to give up rolling your own stuff, but I'm sure you didn't mean it that way.

Thank you for reporting this issue!


> Minimalism

Ah, but what's really discussed in the article is speed

How fast a page loads. How quickly a tool can be understood. How quickly a tool can be used.

How quickly the user can understand the product. How quickly they can figure out what they're doing wrong.

Speed is the one true killer app.

It's one of the few things that, when it's better than user expectations, will elicit stunned vocalizations of "bullshit, do it again. Oh shit, it's really that fast" from nearly everyone.


Here's a new blog post about a powerful way to make your website faster.

https://volument.com/blog/spa-the-biggest-website-performanc...


Minimalism certainly makes speed easier. But you can also have a minimal, one call API that takes forever and a day to resolve because it joins 20 tables


Feature minimalism is a good thing to strive for usually.

Wrote my own dirt simple lazy loader & slider libraries, because I didn't need many of the features other libs had.

On the flip side, if I need a reasonably complex component that would take quite a while for me to implement, like, say, a date picker, I just grab the lightest-weight, good-looking one that I can find (that also doesn't appear to be abandoned).

Minimalism in everything, even minimalism.


If you value minimalism, then why doesn't the page load any text when Javascript is disabled?


Because JS-disabled clients are an edge case and a minimalist approach attempts to solve most cases with the least amount of code. Nevertheless, we will make our site work with JS disabled. Thanks!



That's a good reference article indeed. I remember that. Seems Chickenshit Minimalism is defined as "the illusion of simplicity backed by megabytes of cruft" — which is not in play in the article. But yeah... need to make that page work on non-JS browsers. Thanks!


Unfortunately I can't agree.

There's an old quote I can't remember the exact source of, that "Microsoft Word users only use 5% of its features... but each user uses a different 5%."

It's not bloated. It's just that other users have different needs from yours, and you're ignoring other users, assuming they're the same as you. But they're not.

It's easy to write a minimal version that works on your dev machine in your browser. But now get it to work on all browsers on all devices on all OS's. And now build in all the little features that big clients require for legitimate reasons. And now ensure it works with several legacy versions of APIs, etc. Is it "bloated"? No. It does what it needs to do.


A minimal core with just enough functionality to be useful, then a plugin framework with hooks for all the other features that everyone else needs to build in.

A minimal phone with just enough functionality to be useful, then an app store to get all of other functionality you need via downloadable apps.

It's a model that works well, providing an ecosystem to anyone who wishes to extend it.

Code is kind of the same now: get the basic thing going, then people fork it, improve it, submit patches, sometimes over decades.

The worst software imho, is a closed architecture that has the kitchen sink included that nobody can extend.

Modularism and a pluggable architecture is much better than just minimalism on its own.


This touches on the topic of doing something yourself vs getting someone else to do it for you, and I would recommend anyone interested check out Wardley Mapping. https://en.wikipedia.org/wiki/Wardley_map It's a way to generate discussion in your company/project/whatever about when it's appropriate to custom-roll, and when you should use commodity.


This also applies to features. Easy example: if you’re implementing commenting on a social media app, you could use a textarea instead of embedding a rich text editor. The simpler option usually decreases your overhead and helps keep your product easier to use, easier to explain to potential customers, easier to develop...

It’s easy to keep adding stuff (especially if your customers are requesting it) but you should evaluate if it’s truly necessary.


It's very hard to write perfect minimal code from scratch. It takes a lot of foresight and consideration that mortals like myself simply don't possess.

I think finding the optimal solution to any problem usually requires delving through a lot of crap, trial and error etc. Maybe this is why I find the trend towards minimalism a little bit condescending, if that's the right word.


Or time to re-evaluate, refactor and rewrite.

Once you've done that, you might gain that foresight. It's also known as experience.

Too bad there are so many places where you're just constantly rushing to add new features on top of old stuff without ever having time to take another look at the design, fix it, and learn from it.


This echoes a sentiment I've expressed in a few recent posts of mine.

The argument against Entity Framework, and for micro-ORMs. - https://pknopf.com/post/2019-09-22-the-argument-against-enti...

You don’t need Cake anymore; the way to build .NET projects going forward. - https://pknopf.com/post/2019-03-10-you-dont-need-cake-anymor...

My blog is also written from scratch, using a stupid tool.

https://github.com/pauldotknopf/pauldotknopf.github.io/tree/...


First and foremost: beautiful and performant website.

I don't understand how exactly the Volument product actually performs A/B testing (or distinguishes between different variants). The only code docs [0] I've found are very, very light on details and don't mention how the active variant is detected. To be fair, the product hasn't launched yet, so perhaps the docs are currently internal-only.

That being said, very interesting idea of linking engagement to eventual conversion to get "results" faster. It certainly seems reasonable that there is a causation relationship (increased engagement, e.g. more scrolling and longer visit duration, eventually leading to increased conversion), but I don't know how statistically valid that is.

[0]: https://volument.com/learn/client-api


Here's how Volument works: https://volument.com/learn/volument-data-model

A/B testing in Volument is quite different from, say, Optimizely. You can compare a lot of things in an apples-to-apples fashion: landing pages, inner pages, market segments, campaigns, site versions, etc. And you can compare both how much traction something generates and how engaging a page is.

The causation thing is just something we measure. We measure how people behave on their first visit, how intensively they return, and how much they convert. It's not about the statistical validity, it's about choosing metrics that are essential for conversion optimization.


> We have a single developer taking care of the codebase. Does Optimizely need ten people to handle theirs? Or twenty? We don't know, but one thing is for certain: 20 people make more redundancy, technical debt, and bugs than one.

I get the point that having too many developers writing too much code can make it too much to handle. But I've seen what one person operating alone in a vacuum can do as well, and it is not pretty. It seems like really you want maybe 3 people to really understand some code. That gives you a good bus factor, but not so many people injecting ideas and patterns that the code becomes unwieldy. More important, it gives you extra code reviewers who can push back against bad code smell and hopefully find bugs, or at least try to test it. 1 might be better than 20, but 3 is better than 1.


Usually product owners and business are the biggest enemies of engineering minimalism

- [PO] A/B testing says button must be outside table

- [DEV] But that makes the code unreliable and overcomplicates things; the framework is not designed to do that. Maybe if we include a regular link...

- [PO] BUT AB TESTING, ACQUISITIONS, CRITICAL MISSION, bla bla bla


Minimalist design looks good, but in practice, software tends to be good only at an early stage; as complexity grows because customer needs grow and expand to cover different customer tastes, the software gets heavier and we end up writing a lot of crap. Unfortunately.


Web site does not call external domains. Good.

Still in the 15th percentile on https://www.websitecarbon.com/website/volument-com-blog-mini... . Can it be improved?

Something in the html/CSS prevents displaying the page until it's complete (observed in Firefox on Android). You should investigate; this would improve the experience for visitors.

Is the whole web page some kind of advertisement?

Overall, I agree. Perhaps you should make it clearer that you compare bytes on the client side, not server-side handling (e.g. for error collection).

Keep up the good work!


Wow. Thanks for pointing out Websitecarbon. We will definitely use it to optimize our site further. Currently our bottleneck is actually fonts. We tend to emphasize the importance of typography and make it perfect, for our own taste at least. I suppose we can tweak the woff2 files and perhaps use fewer font weights.


Reductionist garbage. Their "minimalist" Sentry clone literally does nothing as presented on the page. There must be a back end that stores errors and an interface to view them, etc.

There's a reason most apps are complex: the world is complex. Making software that does nothing and pretending it's as useful as real software in the name of fashion ("minimalism") is frankly unimpressive and uninteresting. There are no shortcuts in real life. No matter how good their Sentry backend is, I guarantee it's not nearly as useful, because they almost certainly left out the UI that makes Sentry the useful tool it is.

So tired of articles about fashion.


Yes. There is a backend that stores errors and an interface to view them. This article simply compares the size/complexity/feature-richness of the frontend code. Sentry UI is good and we are familiar with it. So is ours. It's just more minimalistic.


Alton Brown complains about single-use kitchen tools. I find his theories on implements to be very sympathetic to my own. Invest materially and mentally into a few good tools and learn to wield them properly. Don't half-ass two things. Whole-ass one thing.

For one of my hobbies I have a little speech about how you could get started with only 5 specific tools. And the punchline of that story is that it's actually 4 tools + 1 duplicate. Because the 4 are that useful on their own, and one of them wears out much faster than the rest.


I want to agree, but the story is incomplete. A lot of those products also started as MVPs, but accreted functionality in response either to customer demand or opportunities to monetize.

The classic example of this is Word which started out as a small clone of Bravo but kept adding features that some subset of customers found indispensable until the whole codebase and UI became a mudball.

I don’t really like MS software but a big part of the reason is a consequence of something I admire: their strong attempts at backward compatibility and preserving features that are actually used even if by a small minority. In other words: you can’t win.


But as a user of a SaaS, how many of those customers are your customers?


Well, take Disqus: it starts as a simple blog comment tool, saving you from writing one. Then they add a nice feature where logged-in users can follow responses on other pages, which is nice for the blog owner as it increases engagement. But the payload gets bigger, and then it starts to spy on you or your users. Then, as the product is free, perhaps they add an ad or start to sell the spy data. And...

Or there's an address widget you used, but it kept being updated because even though you only serve the USA, other sites need support for various foreign countries...

I don't want to write everything, but sometimes indeed these 3P packages are a moment of convenience leading to a lifetime of regret.


Kinda anecdotal, but I've been working on OSS libraries for the last few years and what I've learned is very similar: 80% of the features aren't used by anyone, and if someone really wants something they just open an issue.


Yeah, same experience here.

I tend to be very stubborn about adding features to libraries, though. A handful of times I got pull requests that almost duplicated the size of small libraries just to cover some weird corner cases. One was support for an ORM discontinued since 2012. Another was someone who wanted a build tool to inline their assets during development phase. After asking, most people admitted that they don't really need the feature, but were simply trying to make the app complete.


I like using minimalism as a starting place, but I think it's idealistic in most cases.

We use Sentry and we rely on many of the features you miss out on by just posting minimal exception information.


I see your point. Sentry is clearly the least bloated service of the three examples mentioned.


We can finally pursue our minimalistic religion, iterate quickly, and collect maximum wins.

Religion implies dogma, and I don't like dogma in my field.

Side note: the biggest takeaway for me here is: be careful when writing such self-confident articles. The general public doesn't take kindly to such displays.


> If you want a cheap trick to make a difference — here's one: minimalism. Focus on the bare essentials and get rid of the rest. It's an easy way to differentiate, because most others are doing the opposite: tons of crap.

For a moment I thought this was about real life minimalism.


What do those pie charts show (e.g. Disqus vs Custom)? No idea how to read those.


Oh it's the size of the code in thousands of kilobytes?


It's comparing the size of the code.


Weird though, no? Am I wrong to expect a pie chart to represent the "total" or "100%" and the slices a portion of the whole? I could be wrong, but this seems like the wrong usage of the pie chart.


On those pies, the example SAAS product is the whole 100% and the tiny white slice is the custom alternative.


"Make it as simple as possible, but no simpler." ~Einstein


Absolutely. This is highly correlated. Seems many minimalist geniuses are from Germany.


And the Italians and the Renaissance, all those geniuses in one relatively tiny area. What's up with that?


One adage I like is "the best code is the code you don't write", meaning that we should write as little as possible because every new line is a potential new error.


I agree. I wish more people had this attitude.


Basecamp has a saying "underdo the competition"


Great article. I wish more people would understand minimalism and accept it as company/product culture. Web bloat is ridiculous and worth fighting.


Love it. I've been thinking [0] about this a lot lately. My rule of thumb these days when tempted to introduce a new dependency is try to build it from scratch first. I might spend 1 hour or 1 day, depending on my estimates of the costs/savings in the long term. I often find I just don't need the dependency. Sometimes I do, but after trying to build it myself I can fully justify why (to myself and others), and also understand how to use the dependency better.

A few months ago I wanted to add an issue reporter to the help page of our open source genetics data visualization tool https://bam.iobio.io. I threw something together in less than 100 lines of code [1]. It doesn't even include any rate limiting or security measures. A couple weeks back I ran into a slight wrinkle that brought into question whether to keep using the tracker. I quickly analyzed the number of issues that had been submitted (including a recent influx of layman users), and simply removed it. Email support works fine for us based on our usage data. The decision was made easier because I didn't invest a bunch of time up front integrating with some external service, or building a 100% solution the first time around.

> It certainly didn't help, that our former A/B testing tool emphasized short-term wins. When we added a huge sales overlay, it appeared to bring more conversions — since the return visitors were ready to take action. The new visitors probably hit the back button as soon as they arrived.

This is great. All of your analytics insights are completely dependent on a) measuring the right thing and b) interpreting it correctly over the long term.

EDIT:

I do want to mention that these articles always point out downloaded code size as a primary factor in the decision, but I rarely see any data showing how the reduction in code size translates into an improved user experience, based on the principles of human perception. In my experience, the really big gains come from techniques like background fetching.

Not that code size isn't important, but I think the main point should be the reduction in complexity that comes with it. That pays dividends in developer time and avoiding bugs, which translates directly to an improved user experience.

[0] https://anderspitman.net/11/dependencies/

[1] https://github.com/anderspitman/issued


> I do want to mention that these articles always point out downloaded code size as a primary factor in the decision, but I rarely see any data showing how the reduction in code size translates into an improved user experience, based on the principles of human perception. In my experience, the really big gains come from techniques like background fetching.

Size is a huge factor when you're on a limited data plan. Maybe one website won't make a difference, but it adds up.


Not saying you're wrong, but again I never see these claims backed by data. My guess is most people eat their data plan with video and Spotify, not overweight JS.


I just see a blank grayish blue page.


This was due to a comment from a removed user. Now fixed. Thanks!


I still see a blank greenish-gray page.

It appears to be because the BODY element has a CSS rule setting "opacity: 0", which appears to be for the purpose of enabling a CSS transition to fade in the page on load. Not exactly minimalism in terms of complexity, but it does appear minimalist in terms of content when the page is blank. :P

Presumably the transition is triggered by some JavaScript. However, that JavaScript fails to execute because cookies are disabled, which causes an error:

    Uncaught DOMException: Failed to read the 'localStorage' property from 'Window': Access is denied for this document.
    at new u (https://volument.com/v1/volument.js:17:858)
    at window.volument (https://volument.com/v1/volument.js:17:133)
    at https://volument.com/global/index.js?b:1:14673
    at https://volument.com/global/index.js?b:1:60085

As a result, when cookies are disabled and JavaScript is enabled, the page appears blank.

If I had a dollar for every time nowadays that I see a blank Web page because of invisible-by-default content that's made visible by JavaScript that fails to execute because it can't handle blocked cookies...
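The failure mode described above can be avoided by feature-detecting storage instead of assuming it. This is a hypothetical sketch (not volument's actual code): when cookies are blocked, browsers throw on mere access to `window.localStorage`, so the probe itself must sit inside a try/catch, with an in-memory stub as a fallback so the rest of the script, including the fade-in, still runs.

```javascript
// Hypothetical helper: return real localStorage if usable, otherwise
// an in-memory stand-in with the same getItem/setItem/removeItem shape.
function safeStorage() {
  try {
    // Accessing window.localStorage can itself throw a DOMException
    // when cookies are disabled, so the access goes inside the try.
    const s = window.localStorage;
    s.setItem('__probe__', '1'); // verify writes work (private mode quotas)
    s.removeItem('__probe__');
    return s;
  } catch (e) {
    // Fallback: volatile Map-backed stub. Data is lost on reload,
    // but the page still renders instead of staying blank.
    const mem = new Map();
    return {
      getItem: (k) => (mem.has(k) ? mem.get(k) : null),
      setItem: (k, v) => { mem.set(k, String(v)); },
      removeItem: (k) => { mem.delete(k); },
    };
  }
}
```

With this, the analytics snippet degrades gracefully and the CSS transition that reveals the body is never blocked by a storage exception.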


This is now fixed and you can see the content when JavaScript is disabled. It's not as pretty yet, but it does the job for Christmas day. Will check the localStorage thing next. Thanks!


Now the cookie issue is also fixed.


If Disqus is hailed as an example of good design, we've fallen so hard...


YAGNI?


KISS - Keep It Simple Stupid is my mantra


The title now matches the HN staff edit.


This page is completely empty on iOS.

I guess the author should have made a minimalist static HTML page instead.


Is it HN's hug of death? I see the same on my PC.


It was a JavaScript error on the commenting component due to a comment from a user that removed their account. Now fixed. Thanks!


Do you appreciate the irony that your article that brags about writing your own comment system was taken down by a bad comment?


Haha. A little, I have to say. I'm actually quite happy with how well it works. This is only the second time the commenting widget has been in active production use.


Perfection comes not when you have nothing more to add but when you have nothing left to take away.



