This post points to perverse economic incentives as being one possible cause, but I have also seen this happen in open-source projects. It's a matter of listening to the wrong people, in my view. User feedback is incredibly valuable, but when user feedback comes in the form of GitHub issues rather than careful testing and conversation, the team will inevitably find themselves building more and more and more for no real benefit.
I've quoted this before, but what Don Norman says in The Invisible Computer still applies:
"Don’t ask people what they want. Watch them and figure out their needs. If you ask, people usually focus on what they have and ask for it to be better: cheaper, faster, smaller. A good observer might discover that the task is unnecessary, that it is possible to restructure things or provide a new technology that eliminates the painstaking parts of their procedures. If you just follow what people ask for, you could end up making their lives even more complicated."
A useful metaphor we use in game dev: Players are the patient, you are the doctor. They're great at finding pain, but not at knowing how to heal it. It's on you to figure out what the underlying problem is and how to solve it.
Also, some of my favorite quotes on this subject:
You listen to all your fans and they always say "You should add this" or "You should add that." They never say "Take this out, take that out." They say "add more, add more!" There's an old saying that I love about design, it's about Japanese gardening actually, that "Your garden is not complete until there is nothing else that you can remove." I think a lot of designers think the opposite way - "What else can we add to the game to make it better?" -Will Wright
"People don’t know what they want until you show it to them.” -Steve Jobs
"Writers and people who had command of words were respected and feared as people who manipulated magic. In latter times I think that artists and writers have allowed themselves to be sold down the river. They have accepted the prevailing belief that art and writing are merely forms of entertainment. They’re not seen as transformative forces that can change a human being; that can change a society. They are seen as simple entertainment; things with which we can fill 20 minutes, half an hour, while we’re waiting to die.
It’s not the job of the artist to give the audience what the audience wants. If the audience knew what they needed, then they wouldn’t be the audience. They would be the artists. It is the job of artists to give the audience what they need." -Alan Moore
There are upsides and downsides to both complex and simple games.
Fortnite may be simple in some aspects (basically a war game) but it's more complex than Temple Run.
Both cater to different markets but achieve simplicity within their markets well.
I don't think it is useful to think in terms of simplicity vs. complexity. Good design is about knowing what features work well together. Composition, harmony and efficiency are more important than minimalism.
I love lots of classic games like Contra which are at their heart very simple. But Yakuza 5 has everything from rhythm games to a little arena shooter built in and it's all optional content. A lot of it is kind of half-baked because it's so ambitious. That's kind of fun in its own way.
I’m not saying this to argue against your points: I’m a fan of Tangerine bank and have been continually and loudly telling them to remove the balances on the account overview screen as it now triggers “burning a hole in my pocket” psychology.
I only mean to say at least a few fans are actively asking for things to be removed, simplified, and (thoughtfully) refined.
I think it’s a valid request. I wish Apple’s iMessage App would allow hiding or moving conversations to folders to remove from sight. I may have an emotional or upsetting text with someone that I don’t necessarily want to see every time I open the app, but also don’t want to delete it. Or may need to use my phone in a presentation etc, and not want to show certain messages.
To be more clear: I only want to see the balances from specific accounts. The two chequing accounts in this case.
I don’t want to see the savings account that has a slowly (and automatically) increasing balance because then I subconsciously count that as “too much” and think things like “I could spend a little and it wouldn’t make a big difference”.
To be fair, I’m also working hard at shifting my brain after a lifetime of ‘paycheque to paycheque’ patterns and it’s not enjoying the shift.
Really off-topic here, but for savings accounts that are going to tick up over time and don’t need to be available within 24 hours, you can open a savings account with a different bank that has better savings interest rates and set up an automatic or manual transfer to that savings every month. You can shop around for the best interest rates, but I’ve seen Ally be consistently better than anything I have locally.
The benefit of having it in a separate bank is that it hugely increases friction when you start thinking about spending it, since you have to move it back to your primary first. You can mostly forget about the specific numbers in it as well since it isn’t visible in your primary bank dashboard.
No that’s an excellent idea. I was doing this for a bit with my “old” bank and thinking that Tangerine was the only good no-fee bank with the best interest rates so I had not reappraised that lately.
I just looked and there are other options available. I think this might be the excuse for me to go to Wealthsimple and start taking a more active role in making that money do some work.
Sorry, I realize I was not bringing in enough background to explain why it mattered.
You’re right for sure, and I do find benefit in the account overview, it’s just that I only want to see the “spending” accounts, and not the long-term savings accounts.
Much like it might change your financial choices if your bank overview also showed your available credit (instead of the balance owing) and a realtime update of home equity.
(Not affiliated, just a happy customer that finally broke decades of bad financial habits due to this app and its attendant personal finance philosophy.)
After skimming past recommendations for YNAB I looked it over just now, and because of Tangerine’s multiple accounts (max is about 15) as well as the money-transfer automation, I’ve actually been using the same principles just within my bank tools.
That’s why I was frustrated when they changed from a summary of chequing account balances to a full “dashboard” that showed all accounts.
Up until then I was doing awesome avoiding the “I’m flush with cash” triggers because I was only seeing my actual spending money and not the bills account, the rent account, and the various proactive savings accounts.
Essentially I had all the benefits of YNAB principles because of the tools Tangerine provided.
Most times when users say they won't use a product because it's missing a feature:
- Those specific users won't use it anyway even if you add it
- The problem they identified is a legitimate problem that was preventing other people from using it
- Whether your metrics actually go up depends on where that feature was in the critical path of your funnel. All else being equal, fixing legitimate problems with your product is unlikely to move your metrics much, because most (randomly distributed) problems aren't at the frontier of the critical path.
It's a mistake to think that adding features that customers ask for will immediately improve your core metrics, but it's also a mistake to think that features that don't visibly improve your core metrics were a mistake to add.
> Those specific users won't use it anyway even if you add it
I think people read this and think, "Why bother, then?"
As someone who is often this user, I don't end up using your product because I've already moved on to a competing product or service, or because I never hear that you have added the feature. Whether your metrics move after adding the feature might be a matter of timing.
There's also the chance I will come to your product in the future. Hypothetically, let's say you offer a password vault application, but I dislike it because it lacks a feature I want, so I end up going with your competitor who offers the feature. You add the feature, but I don't switch because I'm now content with your competitor. Later, your competitor starts pushing towards a subscription model while simultaneously showing a real lack of professionalism and social grace towards customers in public. Since you've added the feature that I thought was lacking, your product might now be an option for me. If you haven't added the feature, there's still no chance.
> competitor starts ... showing a real lack of professionalism
Betting on a competitor's eventual incompetence is not a reliable strategy.
In such a situation, it may be better to implement that feature only when the entrenched competitor that has it starts pushing its customers away.
This situation is further complicated when you’re making enterprise software where the purchaser often isn’t a user and the majority of the users don’t have a say in the purchasing decision.
A smart purchaser will define their purchasing criteria based on the needs of their users, but in practice, I’ve found that some haven’t done an accurate job of determining their users' needs, and/or inject their own agendas into the requirements.
Some of the best advice I've been given on this is to look at how the potential customer is already solving the problem today. If they're just ignoring the problem altogether, then they're not going to spend any money on you to solve it. If they're spending considerable time and/or money working around or manually solving the problem (maybe by working weekends, or buying a whole team of vendors, or outsourcing to Mechanical Turk type stuff), and you can solve the problem for them for less money and/or time, then it's a feature worth shipping.
It can give false negatives, especially with forward-looking and platform-type work, but it's a great heuristic for weeding out useless feature work.
> Those specific users won't use it anyway even if you add it
These users often understand what features are missing because they rely on them in other products. At that point your product is already dead to them.
Having watched recordings of people using our product and spent too much time reading their reviews and feature requests, I can safely say that neither does the average user know what he wants or needs, nor can he articulate what the current problems or improvements are.
It gets worse when you have a specialist of a given field consulting you on how to build software with his field in mind.
“A fish doesn’t think of water” is a fitting quote I once heard.
I really liked Blizzard of the Ghostcrawler era: he plainly stated that they listened carefully to users to identify problems, but did not listen to them about the proposed solutions. And well, his design team got stuff mostly right.
Designing user research around watching people do something, and figuring out their needs is something I've covered in a recent post: https://adnankhan.space/user-research/2020/01/28/ask-users-f.... The idea is that your research objectives could be very different from what you're actually testing.
I think if we look at Excel, we see that additional features were welcomed and it became very big; same with Word. But then we look at today's Google Docs and Sheets and wonder where all that stuff went. We adapt, though, and now I am fine with the simplicity.
I'm part of the technical leadership at a company that is transitioning from a small to a medium-sized company. This article disparages virtually all of the initiatives we're actually trying to implement.
It's actually really hard to transition from anarchy into a more process-oriented organization where each person has a role to play, so that devs are no longer responsible for literally everything just because everyone is used to devs doing everything. PMs used to verbally communicate vague ideas of what the customer was looking for, and it was up to us to interpret, decompose, and deliver on dates agreed upon without our input.
There is a reason that the points in this article exist at all - because the alternative is actually worse!
This article isn't disparaging having a process for developing features or for maintaining a pipeline of features. It's disparaging conditions that prioritize the release of features for the sake of releasing features as opposed to for supporting the business' needs (as determined by data--including analysis of the performance of features, market conditions, and other factors).
This article describes the last company I worked for very well: that company was and is struggling precisely because senior leadership promoted a culture of feature releases without consideration for their impact, and consistently changed the focus of feature development based not on data but on whim, so the product steadily lost focus and coherence.
Exactly. Feature factories build features based on random assumptions, and not based on actual data.
I just wrote about this yesterday (https://teamsuccess.io/hdd). I work as a consultant with lots of Scrum teams, and many devs feel like they're just sitting in the factory, cranking out features without knowing whether they're adding real value to the end user.
So what can you do to break free from the feature factory? Something that I recommend for the teams I work with, and noticed that actually works, is "Hypothesis-Driven Development".
In short, replace the items in your product backlog with experiments rather than user stories. Instead of starting with a user story or epic, start with a testable hypothesis. Then run small experiments that will prove or disprove that hypothesis.
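For concreteness, here is a minimal sketch (not from the linked post) of what an experiment-shaped backlog item might look like; all the field names and numbers are illustrative assumptions:

```typescript
// Hypothetical shape of an experiment-style backlog item.
// None of these names come from the linked article; they only illustrate
// the idea of replacing "build X" with a falsifiable bet.
interface Experiment {
  hypothesis: string;     // a falsifiable claim, not a feature request
  metric: string;         // the one number that proves or disproves it
  baseline: number;       // the metric's current value
  target: number;         // the smallest change that counts as "proved"
  maxEffortDays: number;  // cap on how much we spend to learn the answer
}

const backlogItem: Experiment = {
  hypothesis: "Cutting signup to a single step will raise activation",
  metric: "percentage of signups completing a first project within 7 days",
  baseline: 22,
  target: 27,
  maxEffortDays: 5,
};

console.log(backlogItem.hypothesis);
```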
My favorite question nowadays is "Wait, why are we building this feature again?" :)
In my experience, people are afraid to ask that question. The answer may lead to questions about the total addressable market, which could cause upper management to question the viability of the product, or at least the cost/benefit of having a big development and product management team in the first place.
This is a terribly sad comment on the structure of our society, people worried they might be making something pointless. And the solution is to make sure the people paying for it don't realize.
What were the circumstances that people were afraid like that? For example, was this an in-house development team, or hired consultants? And how did you find out that people were afraid to ask these questions?
Afraid with good reason. I think it's more likely that it would be seen as disrespectful to management, product, bizdev, etc. who asked for it. That it's challenging their judgement.
A big change we incorporated is working more closely with the data analysis team. It helped us create analytics that were more useful for them, and helped us answer some good user-centric questions, like which features were used most, why others were not used, how long they were taking to load, etc...
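A rough sketch of the kind of per-feature instrumentation this implies; the event shape and trackEvent helper are assumptions, not the commenter's actual setup:

```typescript
// Hypothetical event shape; a real setup would send this to whatever
// analytics pipeline the data analysis team already consumes.
type FeatureEvent = {
  feature: string;
  action: "opened" | "completed" | "abandoned";
  loadTimeMs?: number;
  occurredAt: string;
};

function trackEvent(event: FeatureEvent): void {
  // Stand-in for an HTTP call or message-queue publish.
  console.log(JSON.stringify(event));
}

const start = Date.now();
// ...feature finishes loading...
trackEvent({
  feature: "bulk-export",
  action: "opened",
  loadTimeMs: Date.now() - start,
  occurredAt: new Date().toISOString(),
});
```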
> as determined by data--including analysis of the performance of features, market conditions, and other factors
What would be refreshing are internal ROI dashboards for each project and group instead of just technical dashboards.
On almost every project I've worked on, this data is not available and there's never any real evidence presented to the practitioners for things such as tech stack or process decisions. Typically, it's just a bunch of assertions, hand-waving, and requests for trust.
Impact is spread over time and multiple vectors. So now you have three problems in addition to getting working code out the door: prioritising proposed features based on estimates of value, finding accurate metrics for assessing value after release, and mapping value changes over time.
The nice thing about feature-driven-development is that it avoids the hard work of having to understand what you're actually doing.
You can make the hamster wheel spin really fast and persuade yourself you're really going places.
I have no data to back this up, but I'd say that on the majority of projects I've worked on during my career, my time would have been better spent working on things that made the development teams more efficient. That is to say, with rare exception, I don't think the software actually generated anywhere near the benefit it cost to develop and maintain.
Describes a company I was at very well too. Focus on timelines and delivering some defined scope. Most of the symptoms in the article were present.
Is this just a natural course for startups unless conscious effort is put in to counteract it? I mean in the earlier days of startups just delivering features and catering to customer asks might be a good thing.
The article is describing a broken process that is often installed as organizations scale.
If this matches what you’re currently installing, I suggest reconsidering your plans.
Medium organizations do need more process than small ones, but not all processes are good.
In particular, if you have a growing product, and the plan is to ship a ton of features without internal (to engineering) coordination, and without improving, or even maintaining the core product, then you’re probably doing more damage than good, and neither the developers nor the customers will thank you for it in the long run.
It would be important context to know if this is an agency or a product owner company.
There is a whole spectrum of client-agency relationships, and some of them are more like a feature factory than others, and there's nothing wrong with that.
Some companies want to hire a development team to work as if they were in house developers. Some clients just want to ask for Feature X and have Feature X delivered. They keep most of the discussions about business value, metrics and forward planning internally.
It doesn't have to mean that you have a business with bad processes, it could just mean you have the latter client relationship and a fairly mature process of delivery that doesn't require constant turmoil, refactoring and retrospectives.
That's often the case. If your agency is part of that discussion then you can provide enormous value to the client from your experience, and hopefully PMs are pulling the developer input up into discussions with clients as well. But then that is your relationship changing, and you can start to address many of the concerns in the article anyway.
I guess my point is just that there are certain relationships where it's just not your decision whether or not feature X is a good fit. I wouldn't want people reading this article thinking their company is broken, when it's just a different type of engagement.
Sometimes the CEO wants a popup on the homepage, and every single person in the chain between the CEO of the client company and the Developer at the agency agrees that it's a bad idea, but you still have to implement it.
That depends on who the clients are, and why they hired you. If you're just a subcontractor in an org run by smart people, it may very well be that they're asking for X, actually need X, and expect to get X - they just outsourced the boring work to you.
Why would you think that? The project manager has communicated well with the client and the requirements getting passed along are correct and full. The developers are specialists in their platform, so implementing the feature is no problem.
From experience, it's actually a really nice environment to work in. Everyone is good at their job and you don't have to deal with the clients directly.
Because AFAIK the best programmers also tend to be perhaps over-invested in their work, so that just "doing feature X" won't cut it for them?
Or am I wrong?
Depends. In many b2b custom software contexts, the client does really know exactly what they need (e.g. when you’re implementing integration between two products they are already experts of).
I don't understand what you mean. A quarter of the items - "no measurement", "no connection to metrics", "no PM retrospectives" - are directly calling out immature development processes which don't quantify success and failure. Many more - "success theatre", "infrequent acknowledged failures", "no tweaking", "chasing upfront revenue" - are the direct result of not knowing whether a project succeeded or failed.
Are your initiatives somehow removing the third step of define problem -> build solution -> learn from solution -> define problem?
As someone working in my second 1,000+ developer “feature factory” company, this article describes exactly how things work here in general - and makes for a pretty dysfunctional environment. At least in my experience, I think this article is pretty spot-on. If you see yourself just starting to drive these sorts of initiatives - e.g. establishing “weekly cadence syncs” that require “wins”, or “virtual squads” as the main team structure - you still have time to ponder and avoid walking that path. Walking back from these cultural shifts is VERY hard.
We have a feature factory with 5 developers. We could do with more people, OR we could simplify our product. Instead we keep adding more and more to the tech stack, which is already difficult to keep on top of with such a small team...
The article is not saying "big feature" pipelines (where there is coordination across several teams) are inherently bad.
The article is disparaging that process if it is the only process. (And I've worked at companies where this was nearly true).
There must be efforts to experiment and refine (and possibly remove) existing features.
There must be efforts to refactor and improve existing code, to make it easier to maintain, less painful to be on-call, and make it possible to add subsequent features without breaking existing functionality.
You have to be able to do both large and incremental work to improve, in other words.
> There must be efforts to refactor and improve existing code
Some of the best tech management advice I received was to never get explicit approval for refactoring from your immediate boss - whether you're the line engineer or CTO. Instead, you pad dates as needed to get the refactoring work done implicitly.
This has yet to fail me. Granted, I don't work in embedded systems, and my code is deployed on owned-and-operated servers that are easy to update.
Good for you for leaving things better than you found them.
But when management compares your output with Cowboy Chris, the fastest code-slinger in the West, they'll think he's better for the company even though he's racking up tech debt on his journey.
Ideally you'd get credit for the good work you do that makes the whole team more efficient.
Being (sustainably) fast and producing quality code often go hand in hand. If management doesn’t know Cowboy Chris generates most of the debugging time, that’s a management problem.
A good management team is aware of which people are fast in short sprints but end up tangling things up on longer efforts, which people take time to build steam but never hit that slow-down, and which people fit other patterns (for example, the extremely rare fixers who can take Cowboy Chris’s shit and funnel it into something useful, or the even rarer speed demons who can sprint like that forever because they are like Chris but they work cleanly too)
Good engineers, given the chance, will find and stick with those managers too.
> But when management compares your output with Cowboy Chris, the fastest code-slinger in the West, they'll think he's better for the company even though he's racking up tech debt on his journey.
Maybe.
OTOH, not only can focused refactoring make delivering the features it is associated with faster, but refactoring (delivery-focused or not) is a really good way to build knowledge of and proficiency with a code base. So if you aren't doing huge quantities of non-germane refactoring, it can actually be a personal delivery-speed enhancer, as well as a benefit for the team.
My experience is that implicit work is a huge problem, as it's too often taken for granted and precisely not valued. Only when you stop doing it do people realize that, since Jeff is on long leave, the servers aren't being updated and we've been hacked, or that Lucy quit and everything seems to crawl to a halt because she was implicitly doing housekeeping on the database.
It might not have failed you yet, but be sure that the day it does, there's a good chance it will be spectacular.
The first sentence contradicts the other two and it contradicts the previous position where refactoring was done implicitly without anyone above in the org being aware of it.
A challenge I keep hitting with this is non-technical team asking 'why it takes this long' anytime the provided date is >X (for some X is a day, for others it's a week). They want to understand the internals of the ask so they can then nitpick that "well that's not needed". Fundamentally it's a lack of trust IMO, but any tips on addressing this would be welcome :-)
Which is usually a failure of communication. The whole "just trust us" bit doesn't work, and it takes real work to build a rapport with your stakeholders to earn "blind" trust, which is what most people are really asking for when they say "just trust us". This almost never happens, unless people have been working with each other for many years.
> then nitpick that "well that's not needed"
First, ask questions. Find out why they're willing to put energy into this - most people are not wasteful when they push back on things (most). Maybe their boss is riding their ass, and it's CYA. Maybe they're just stating a preference, so I'll ask questions to make this explicit ("So you just prefer it this way? No data? No customer feedback? Your boss didn't ask?").
Second, socialize things early and often. Tell people weeks, months, quarters in advance about some technical debt you're eventually going to get around to paying off. This gives people time to vent, challenge things, or say they're stupid. Use this time to refine your story, and get better at telling it in a way that gets the least resistance (don't need support, just fewer detractors).
Sometimes that "socializing" is complaining about a shared frustration. "Ugh, System X gobbled up Sharon's purchase order again and she has to spend another two hours after working salvaging things. Really wish they'd let us spend time fixing this, but you know how it goes."
By the time I've scheduled a meeting with stakeholders to discuss something (especially something I know will be difficult for them to see value in), I make sure I've individually discussed it with everyone in the room. When they see less disagreement between each other, there's less negative energy to build off. There may not be any positive energy, but that's much easier to deal with.
This only works for management, who are not actually doing any of the tangible work, but doing diversion work instead. For the kindergarten-style teams, there are multiple non-productive diversion people already telling different stories to various other functions of the company. Then you have the next legs beyond that telling wilder stories. The dysfunction only grows worse as the system tries to scale the dysfunction.
Today, most of the business side has simply become too incompetent to complete a dialogue with devs. The same goes for the dev side, but that is nothing new.
Sadly, it is experiential, both personal and observed in my environment. Business people today are great at monologues. Of course, management's experience is different, and it would hide the fact of hitting the wall by presenting it as success.
This is interesting and will take some introspection, but perhaps I can improve these situations by fostering better communication. Thanks for taking the time to respond.
I really push this with my team. I will flat out reject stories that are "refactor". Instead, just do it, and spare us the prioritization debate. If you really can't get it done, you'll find out without eating everyone's time to debate whether you can. And if you get it done, great, things have improved.
Now, you will have other priorities. Make sure you aren't dropping them. But most things are communication based. In time, you will find ways to improve without disrupting. Backwards and forwards compatibility will be tools, not burdens. Stability of the core will similarly work for you, such that you may start choosing to leave parts alone as you focus on peripheral changes that will have clearer impact to users.
The stability part is hard to overstate. There is a reason the Arduino uno is relatively unchanged. There is massive value in having a stationary target for what you are building. Custom everything down the line is an easy recipe for failure. Even if there are improvements that can be made down that line.
> I will flat out reject stories that are "refactor". Instead, just do it, and spare us the prioritization debate.
Or alternatively, don't, because the message from management is that maintenance takes a back seat to shipping new features.
For example, my team has been using a home-brewed NodeJS-to-Kafka library. Now, everyone on the team understands that this library is suboptimal in a number of ways (most notably because the developer who wrote it is no longer with the company) and that it should be replaced by some alternative from Github. But this is a change that will require quite a few downstream changes (mostly because the way the existing library was used was not well-factored).
There's no way of doing this change without it being a separate story. It's just too big a change. It's also not really something that is easy to do efficiently in an incremental fashion -- having two Kafka libraries simultaneously in the codebase is an even worse situation to be in than having a single, suboptimal library. And management is not willing to authorize a separate story for a "refactoring" such as this one, even though it would result in significant operational savings (less memory usage, fewer server restarts, etc). So we bumble along, waiting until the pain becomes so severe that we're ordered to embark on a hasty rework in order to hurriedly patch in the new library when the system finally melts down.
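As a hedged illustration of what "well-factored" usage might have looked like here (the names are hypothetical, and this is not a claim that it would make the parent's migration easy): if every producer call went through one small interface, swapping libraries would at least be confined to a single seam.

```typescript
// Hypothetical seam: application code depends on this interface rather than
// on any particular Kafka client.
interface EventPublisher {
  publish(topic: string, payload: unknown): Promise<void>;
}

// Adapter over the existing home-brewed library. The delegated call is a
// placeholder, since that library's API is internal to the parent's company.
class HomeBrewedPublisher implements EventPublisher {
  async publish(topic: string, payload: unknown): Promise<void> {
    // await homeBrewedKafka.send(topic, JSON.stringify(payload)); // hypothetical call
  }
}

// A replacement adapter backed by an off-the-shelf client would implement the
// same interface, so only the adapter and its wiring would change.
async function recordOrder(publisher: EventPublisher): Promise<void> {
  await publisher.publish("orders", { id: 42, status: "created" });
}
```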
I do work on embedded systems and it works there too. Much better in any kind of project though is if you can get a management team that actually sees the value in it.
I still consider it pretty much mandatory to build core system improvement into feature estimates but by being more explicit and interactive about it you can get good feedback to refine your own estimates of what parts are even worth improving. It may be that the product line that uses the subsystem you want to improve is about to go a new direction, for example, and the better thing to do is to start preparing for that subsystem to go away.
Yes, refactoring is never something that I would plan as a standalone thing - it's just a part of building features or fixing bugs.
If you're going to be working on a part of the codebase that you know is difficult to work with, you need to pad your estimate whether you're planning to refactor it or not - either you use the time to do the refactor, or you use the time to sort out the bugs you introduced by working on scary code without refactoring.
> Some of the best tech management advice I received was to never get explicit approval for refactoring from your immediate boss - whether you're the line engineer or CTO. Instead, you pad dates as needed to get the refactoring work done implicitly.
This is the thing that I cannot agree with whatsoever. Companies do not employ developers to write beautiful code. Companies employ developers to write code to support the business. Every single code base that shipped contains warts that became obvious as soon as the final commit was done, which means that refactoring is a cost, and costs need to be accounted for and prioritized according to business objectives (which, in the end, means making money: the money that pays developers' salaries).
Is it your claim that among FAANGs it is a normal practice to say "This will take X weeks" when in reality it will take Y weeks to implement and K weeks to refactor something else where Y + K is X?
That these companies don't treat engineers as a cost structure the way you were talking about. This creates a drastically different culture, since tech has a seat at the table and the CFO can't just willy-nilly make demands that impact the entire engineering organization without consent. It makes a big difference.
> That these companies don't treat engineers as a cost structure the way you were talking about.
That's absurd. The reason those companies are making money is because they are costing every single thing. That's the reason why tech gets a seat at the table: it is a cost, and its top-level managers understand that it is a liability and drive that understanding through the entire tech organization.
> The reason those companies are making money is because they are costing every single thing
Terribly untrue. This is fundamentally what makes them "technology" companies, because of how engineering expenses are treated on the P&L and the say it has at the C-level.
There's a fundamental difference between how things roll up on your three sheets, vs. how your company internalizes those numbers and acts upon them.
You've made a lot of broad generalizations without backing anything up with specific examples. It's difficult to have a conversation with theoreticals and ideas.
> in reality it will take Y weeks to implement and K weeks to refactor something else where Y + K is X
This is misleading, you should not be refactoring some other random code. You should be refactoring the code you are adding to/changing as part of the work. And it should be in proportion to the size of the change.
Who established the process control? The same developer that does the refactoring? Because if it is done by someone above him on the org chart then the developer has to either lie about what he is doing or he has to break it down into the refactoring + feature.
The practices you discuss are incidental; the real criticism the article makes is that there must be a focus on measuring and delivering customer value, and that the focus must be on impact rather than a train of features.
No worries, most of the points boil down to: are you bothering to find out if the feature you shipped actually helped users? Is it actually helping to sell more product or keep existing customers happy?
If you have that feedback, you are good, mostly. Then you will learn to ship what is important to the business.
> Large batches. Without the mandate to experiment, features are delivered in single large batches instead of delivering incrementally. You might still work in sprints (yay, we’re “Agile”), but nothing new is reaching customers at the conclusion of each sprint
Lol, I've worked on "Waterfall" projects more agile than this...
Done poorly, all that process is going to hurt you. You'll end up with teams of people who know their role, celebrate the process, but don't know or care how to connect their work back to the bigger picture.
Agile and Sprints can lead to this. You will see a lot of ceremony, and the team might increase their throughput. But you'll find your team disenfranchised and producing substandard code that doesn't really do what it needs to do.
That’s funny. I work at a billion-dollar corp and agree with almost everything said in this article. This is not about “devs doing everything”, but about not replacing your value stream with busywork and misguided delivery metrics. It’s very much on point.
I cannot imagine someone willingly adopting “no metrics”, “shiny objects”, “no retrospectives”, isolated waterfall processes, etc., so you must just be exaggerating? What exactly are your new efforts like?
It is a question of balance. Don't do a process just for the sake of the process. Don't do a feature just for the sake of doing a feature. If you cannot tie those things to actual business goals, that is when you have a problem.
> I'm part of the technical leadership at a company that is transitioning from a small to a medium-sized company. This article disparages virtually all of the initiatives we're actually trying to implement.
It is impossible to verify with as little context as you have given whether this is a good or a bad thing. But it clearly has a lot of bad in it.
In the case of an "enterprise software project", the sales process is driven by feature lists. The result is described very well by https://www.mail-archive.com/kragen-tol@canonical.org/msg001.... As frustrating as it may be for developers, a "feature factory" is in fact a rational outcome for the business.
If, however, you are trying to deliver real value to real existing customers who have real use cases, it is very, very important that you find ways to measure outcomes. The measurements might be internal to your clients, they can be soft, but they have to be measured. If you do not have such measurements, and there isn't transparency around them, then it is guaranteed that people aren't getting useful feedback upon which they can do better.
> It's actually really hard to transition from anarchy into a more process-oriented organization where each person has a role to play, so that devs are no longer responsible for literally everything just because everyone is used to devs doing everything. PMs used to verbally communicate vague ideas of what the customer was looking for, and it was up to us to interpret, decompose, and deliver on dates agreed upon without our input.
I'm sorry, but this paragraph sounds like a pile of red flags wrapped up in a self-serving excuse. As an experienced software developer with decades of experience who has worked at companies like eBay and Google, being told this by management would make me question whether it is time to find a new job before this one goes downhill.
Yes, you want PMs to have a good sense of what is actually needed and to be an interface. No, you don't want devs involved in every choice. But I have never seen a PM who is as good as a good dev at looking at a customer need and figuring out whether there is a minor tweak to what already exists that can satisfy that need without building a new feature. And a system built as a collection of features that the devs have no context on is guaranteed to go wrong.
If you shut the dev side of your organization out of those conversations, I can guarantee bad results. No matter what excuses you give for your decisions.
> There is a reason that the points in this article exist at all - because the alternative is actually worse!
This is dead wrong.
An example of an alternative that is appropriate for a lot of web companies (for example Google, Amazon and Booking) is an A/B test culture. Release features as A/B tests with transparent metrics. Share the metrics with everyone who is part of the decision INCLUDING devs. Proceed with general rollout of the features that actually produce positive results and rollback of the ones that don't.
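A minimal sketch of the deterministic bucketing such an A/B culture typically relies on; the hashing scheme and names here are illustrative assumptions, not a description of any particular company's system:

```typescript
import { createHash } from "crypto";

// Deterministic assignment: the same user always lands in the same variant
// for a given experiment, so metrics compare stable groups over time.
function variantFor(userId: string, experiment: string, treatmentPct = 50): "A" | "B" {
  const digest = createHash("sha256").update(`${experiment}:${userId}`).digest();
  const bucket = digest.readUInt16BE(0) % 100; // stable 0-99 bucket per user+experiment
  return bucket < treatmentPct ? "B" : "A";
}

// Example: expose the hypothetical "new-checkout-flow" to 10% of users,
// compare conversion between groups, then decide on rollout or rollback.
console.log(variantFor("user-42", "new-checkout-flow", 10));
```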
When every feature has to be justified by actual customer impacts that have a concrete measurement attached, you get a much better product and more involved devs. The flip side is that when you start a project you have no idea whether you add millions per year to the bottom line or whether your feature will be a flop.
The challenge is that PMs and devs have to be willing to accept that sometimes they were wrong and that's OK. But the alternative is significantly worse, which is to have PMs and devs who are often wrong, refuse to accept it, and therefore fail to improve!
"An example of an alternative that is appropriate for a lot of web companies (for example Google, Amazon and Booking) is an A/B test culture. Release features as A/B tests with transparent metrics. Share the metrics with everyone who is part of the decision INCLUDING devs. Proceed with general rollout of the features that actually produce positive results and rollback of the ones that don't."
If this is the process Google follows, how do its products (e.g. Gmail) get worse over time? (Genuine question, not snark.) Are some things (e.g. loading speed) not measured? Perhaps the things measured ("time spent in app") don't measure customer satisfaction?
This is a process that some of Google followed as of 10 years ago. I cannot tell you whether internal pressures have caused groups to not follow it or they are measuring the wrong thing.
I know from experience that A/B testing can make it hard to have a coherent product design (Amazon's page is a good example of that). But it is still much, much better than the usual alternative.
I suggest re-reading this article in the negative: Assume that your team implemented every single one of these bullet points perfectly, for every feature in every sprint. Now imagine that each of those bullet points is a recurring meeting invite on your calendar. Now imagine how happy you'd be spending all of that time in meetings or reading process-related e-mails so your company could satisfy the criteria of not being a feature factory. Does a feature factory sound so bad now?
I thought this article was great when it first came out, but I've since changed my mind. I've seen too many engineers, especially junior engineers, become overly cynical about their jobs after reading too much into this one article.
1) With 12 different "signs" of a feature factory, it begins to read like a horoscope: Almost everyone reading it can find something to identify with. Multiply this across every different project or initiative at a company, and everyone can think of multiple times their job has resembled the descriptions in this article. Does anyone actually read this article and walk away thinking their company has never once resembled the vague signs in the article?
2) It begins with an implied assumption that working in a feature factory is a bad thing. Combine that with the horoscope-like 12 possible indicators of a feature factory, and the reader will always conclude that their workplace is the bad thing.
3) It sets unrealistically high standards. The alternative to a "feature factory" is defined in the negative in this article. Can you think of any company that would score a perfect 12/12 on every bullet point in this article across every team in every department? The process overhead would be significant, and it would very easily translate to a lot of meetings, e-mails, and overhead.
The kernel of truth within the article is still very valid. Teams should absolutely take the points into consideration and apply them judiciously, where it matters. However, it's a mistake to use this article as a checklist by which to judge your company's process. Real work is always a bit messy, communication is never perfect, and just because you don't see these things with your own two eyes doesn't mean they aren't happening somewhere in the company.
As a product manager, suggestions are always welcome and I'm happy to discuss reasoning for decision making, but I also don't burden the entire team with every detail of every step of the way.
Ultimately it isn't an engineer's place to complain if they're working for a product that needs 100 more features to succeed in a competitive marketplace and win over customers.
I had the exact same reaction as you to this article after seeing it again, and wanted to share why I've also changed my mind on this. I was in "a feature factory" for a few years and absolutely hated it. After leaving and going to a "not-feature factory", AKA an "employee-happiness machine", I miss the feature factory.
Imagine you're building a Great Pyramid with 100 workers. The workers can't see the Pyramid and don't care about why a Pyramid is important. Is there time to explain to each worker exactly why they're moving a specific block? What about time to "review" why that one block didn't fit?
"Real work is always a bit messy"
Would you care about anything else other than how quickly each block got into place? The Pyramid can only be completed when all the blocks are in place, and most importantly- each block doesn't have to sit perfectly, and also, you can get by without ever delivering a "perfect block".
This is why product managers will never be 100% aligned with engineers. Product managers see "is the block there" and engineers see "is the block perfectly set" and "why am I not being appreciated?" and "why do they only care about putting down blocks".
It's bizarre to read how this person is so opposed to "up and to the right"; that's the key indicator of your business and what will pay your bills. Yes, everyone is always going to be obsessed with revenue, and you will never stop hearing about it.
Of course as an engineer- Resolve tech debt. You have to. But from the view at the top, it's a very temporary detour and needs to be resolved quickly so they can continue pumping out features.
Yes, I am making the analogy that engineers should accept being a "slave" to the product manager, or more directly, the product itself.
> This is why product managers will never be 100% aligned with engineers. Product managers see "is the block there" and engineers see "is the block perfectly set" and "why am I not being appreciated?" and "why do they only care about putting down blocks".
I just want the damn PM to be able to tell me where the block should go instead of fucking around with bullshit vague descriptions like "the block should be in a good place". And after I get instructed to place a block midair somewhere, I'm going to want the PM to show that he actually decided "where" based on some kind of information and not by throwing darts.
It is also totally reasonable to expect that someone is looking at each block as part of the overall goal, and identifying that those blocks which were just placed on the ground next to the pyramid were a total waste of time. It doesn't have to be me - but if nobody is doing it, who says we'll end up with a pyramid at all!?
I think a lot of people are missing the point of the article. It's not anti-feature. It's against building features that have no measurable impact to the real goals of any business: better product for the customer AND therefore more revenue.
The problem with the Pyramids analogy is that the Pyramid was the goal. In software companies, features aren't the goal. Better products for customers are. The right features are the means to the end, everything else is a waste of time and an illusion of progress.
I think you are more aligned with the author than you think.
A feature factory is bad for the business not because it's not "fun", but because resources are tied up producing huge amounts of work that don't really matter.
Kind of like a civilization using its vast resources to build giant piles of rocks in the desert because that's what they've always done.
Pyramids are the perfect analogy to disprove your point.
The pyramids were useless tombs that wealthy kings wanted because of religious beliefs.
As an engineer, I have spent many months building features that were disabled/deleted because the measurements showed they were no good after release.
Now, for the 1% of product managers who make the right call every time on their own, having an engineer participate in the discussion may be a burden. But in my personal experience, every product discussion I'm in I bring great value to.
And from the engineer's perspective the frustration is: We're so busy lifting blocks on top of blocks, that we don't have time to come up with better tools to do it faster. Also some of the blocks are triangles and this pyramid is 5x bigger than the last one and will probably collapse if not re-designed. The slaves are exhausted because they just finished working overtime to build the last pyramid you wanted, and in a given work day, half of them are working on patching up the other crumbling pyramids we built.
But the only metric the manager cares about is how many blocks per week we're pumping out. And that's why we'll never get along...
What actually happens is: there are multiple pyramids. Sometimes 20-30 pyramids being built simultaneously. Then the priesthood comes up with their own tasks and adds random initiatives at random intervals to the load. There are also certifications and external verifications/revisions, each one a big surprise.
The blocks, pyramids, and tasks are often left unfinished, or neglected after completion, because nobody has time to make proper use of it all!
Fortunately, the author wrote a great many other articles that go far further. It seems to me like the article is fine for what it is, it's just calling out a problematic pattern that commonly exists. It doesn't pretend to prescribe that to fix this pattern you need to do the opposite of every one of his bullet points. He instead writes lots of other articles dealing with the nuances of how to do better.
But it also sounds that you're worried about engineers constantly challenging product manager's decisions? I don't think that's the implied end state. Engineers don't see all the input to PM's decisions, and yes it would be more comfortable for the PMs if the engineers just trusted them to get it right all the time. But we (I'm an engineer) aren't dumb. We know nobody gets it right all the time, and we don't expect you to. But if we never learn which things worked out and which didn't, and some of the why, then what you're asking for is faith, not trust.
You don't need to include engineering in the decision making (at least, not more than you need for feasibility and cost estimates.) You do need to include engineering in the feedback loop, because we're not dumb monkeys whose only value is in realizing your vision. We know more about what's possible, and how to overcome some types of challenges with relatively little effort compared to the cannon that you'd need to (ask us to) build.
> But it also sounds that you're worried about engineers constantly challenging product manager's decisions?
No, that's the exact opposite of what I said. In fact, I explicitly ended my post with a call to action for engineers to communicate concerns, suggestions, and questions to their product managers.
> It doesn't pretend to prescribe that to fix this pattern you need to do the opposite of every one of his bullet points.
That's my point: The article seeds this unrealistic idea of an over-idealized product management process that explicitly includes the reader at every step of the way. It manufactures an anxiety in the reader that a feature factory is a vaguely bad thing and if you recognize any of these 12 vague points, you're working within the bad thing.
Again, my problem isn't with the core suggestions to improve PM process that might be elaborated in the author's other posts. My problem is with the trend of people reading this article, finding some point to identify with somewhere, and erroneously concluding that their employer is doing it wrong and that's a bad thing.
My point was that if you want to be involved in the decision-making process, provide feedback, or understand the reasoning behind the decisions, you should be proactive about communicating. Instead, I see too many people reading this article and passively becoming disgruntled with their employers, without taking any steps to be more involved. Or, the more they are involved, the more they complain about too many meetings, interruptions, and process overhead consuming the time they'd rather use for quiet focus to get their work done. You can't have your cake and eat it too.
> The second has no clue if new things are working (or at least, nobody bothers to inform me), but they’re quite successful.
I had a similar eye-opening experience. I worked for a company that insisted on doing everything the right way, with mountains of process, planning, metrics, measuring, followups, reviews, and customer feedback. It sure felt like we had the recipe for success, and it felt like we were checking every item on this list. We always felt super busy, as if we were doing important work every minute of every day. Yet it took us forever to ship new features, and the onerous planning, review, and feedback requirements turned into planning gridlock.
I then switched to a company that focused on quickly shipping features above all else, only measuring feedback with random sampling of customers and spot checks of quality. All of our customers loved the company because we could deliver their features quickly, and they could always see that we were moving the product in the right direction.
It's difficult to communicate the stark difference between these two environments unless you've seen both sides of it. It's even more difficult to convince engineers that the process-heavy, data-driven approach isn't necessarily the best way to run a business.
I don't think the article is saying engineers need to have a meeting for each of these. I think you're entirely missing the point.
It's simply saying that you measure before you build a feature [and again after]. Any good product owner will already have data to support their prioritizations (e.g. this bug affects 10% of users, this feature only applies to 2% of users)
> I don't think the article is saying engineers need to have a meeting for each of these.
I think you've missed the point of my comment.
As I said, the core principles of the article are not wrong. It's the framing of the article that causes problems.
When engineers and other ICs read this article, they tend to assume that these planning sessions, feedback loops, retrospectives, and other mechanisms aren't happening because they don't personally see them. That's why I pointed out that most engineers wouldn't be happy if they were pulled into every single planning, retrospective, and feedback meeting that the product managers are doing. I'm not saying they're bad, I'm just saying it's bad to assume you work in a terrible feature factory if you don't see every item on this checklist.
This goes both ways, of course. It would be silly for product managers to read an article entitled "12 Signs You're Working In A Code Factory" and then start second-guessing all of their engineers' decisions or assuming the engineers aren't implementing proper process behind the scenes. That type of article would generate outrage on HN, but engineers second-guessing product management is always well-received in an engineer-centric forum.
>> When engineers and other ICs read this article, they tend to assume that these ... mechanisms aren't happening because they don't personally see them
I think you're missing the second point of the article here. Per the article point 1 "Or, if measurement happens, it is done in isolation by the product management team and selectively shared. You have no idea if your work worked"
So the article is also highlighting failure to share as a failure mode. At every good company I worked at, I [the lead engineer] had equal ownership with my product owner. They respected my opinion, learned not to doubt my warnings, trusted my intuitions, and made adjustments based on my recommendations.
That balance of shared ownership is a defining indicator that it's not a feature-factory, whereas a "I call the shots as product" mentality is more feature-factory.
Is it possible you [rightly] worry about this article because you are what it's talking about?
Organizations where individual contributors are respected find ways to communicate what's discussed in those meetings. I wouldn't want to be a manager in an organization where engineers can assume those meetings aren't taking place.
Most of the points in this article are relative, for instance what does it really mean to have no care for technical debt drawdown? How much care is not enough? However, one thing that it's specific about is the lack of measurement. There are some companies out there that have zero focus on customer feedback, and since there's only one zero, that indictment is not relative to people's expectations.
> There are some companies out there that have zero focus on customer feedback, and since there's only one zero, that indictment is not relative to people's expectations.
No company has zero focus on customer feedback, but the company may not be taking the feedback through channels that are most visible to you.
One of my most eye-opening experiences as a product manager was realizing that the most important customer feedback was not from the vocal customers complaining loudly on social media. The most important customer feedback was number of new customers signing up and their retention rate. To my naive surprise, chasing the feedback of the loudest complainers and detractors rarely turned them into proponents, and was even less likely to turn them into paying customers.
Someone, somewhere, is always making decisions according to customer feedback in some form.
Sigh... At some point, most healthy adults who are not living in terrible circumstances come to the mature realization and compromise that their job is largely to produce value for someone else, not to be the primary source of personal fulfillment, and they come to understand that there is dignity in being productive, if not creative. Be creative on your own time. Lead a civic group. Be a good parent. Volunteer at a charity. If your main gripe at work is that you're on the value-adding side of a company, which is what feature dev is in software, you have it pretty good in this world. Change jobs if you must, but the realities of companies, especially as they gain traction, mean the grass is rarely greener. If this is still unsatisfactory, start your own company. I just don't understand the problem here.
Disagree. This is an endorsement of accepting the fact that your work, your professional work, done by most of us at a job, makes you feel like a cog, a unit of production. It is better called a surrender than a compromise.
Surely it's often the case that life has people in this sort of scenario and there is no realistic way to a new path. In that case, sure, take care of your loved ones and yourself, stay put, and make the most of it. This scenario arises from a messy mix of circumstances and decisions. There is no blame.
But! "Change jobs if you must", must, be changed to "Change jobs if you can.", especially for younger people. It's not mature to accept an unfulfilling job. It's a life-scale bummer. Tenaciously go after fulfilling work. Don't write off 1/3 of your remaining life. An approach that worked for me: distance yourself from the profit motive. It's not that hard.
> This is an endorsement of accepting the fact that your work, your professional work, done by most of us at a job, makes you feel like a cog, a unit of production. It is better called a surrender than a compromise.
The grass is always greener on the other side of the fence.
The problem with cog in the machine analogies is that people only think of the plus sides of having more input in decision making processes.
What people generally ignore is the added responsibility and liability that comes with being more invested in the decision making process. In my experience, after you start holding people accountable for making the wrong decisions, most people quickly go back to being happy about being a cog in the machine and taking orders from someone else. For the few who enjoy calling the shots and accepting the blame when things go wrong, you can always move up into management.
> and accepting the blame when things go wrong, you can always move up into management.
In most companies, accepting blame is career suicide. Most people who move up in management are good at taking credit for wins and pinning failures on someone else, or at moving on to different projects before their decisions come back to bite them. And that's the kind of management that leads to the broken processes the OP mentioned in the article.
Blame will make problems go underreported and responsibilities shunned. A healthy org will invite organic planning and distribute power while providing a psychologically safe space for real autonomy. The moment blame or downward finger-pointing appears, all of this is too easily lost.
> It's not mature to accept an unfulfilling job. It's a life-scale bummer. Tenaciously go after fulfilling work. Don't write off 1/3 of your remaining life.
For many (probably most) people, all the jobs available to them will be unfulfilling. For the rest, many of the jobs can be fulfilling, but they come with unreasonable demands, like concentrating for 40 hours a week under constant stress.
Obviously everyone's body and mind are different, and if yours can easily deal with stress and 40 hours a week of concentration, then go look for a fulfilling job. If you're like me, on the other hand, you will burn out from weariness even in what is supposedly your dream job. In that case, the only way to win is to make as much money as possible and retire early. The alternative - working while constantly tired and grumpy till you're 65 - is grim.
Sure. Someone at the BMW factory is installing blinkers that may never be used. There are plenty of jobs where you could say that the work being done isn't fulfilling because it is a waste of time. The problem is that at the BMW factory you know exactly what you signed up for when you became a blinker installer. At some software companies you are told that you are going to be part of a team that is building a successful product, and then when you get there you see that everyone is running around trying to ship features by certain dates for no good reason.
> creative. Be creative on your own time. Lead a civic group. Be a good parent. Volunteer at a charity. If your main gripe at work is that you're on the value-adding side of a company, which is what feature dev is in software, you have it pretty good in this world.
I don’t think this is a good solution. It is certainly “A” solution, and one that many folks pick.
Software Development is ALL about creativity. It’s really hard to make computers work; the best developers make them sing. Time and again I’ve seen fairly mundane teams hire an engineer with the energy and drive to push the boundaries of what’s possible and end up doing great work. I do think that developers can be agents of change this way; it’s not easy but it’s possible. And it can be very fulfilling and inspiring to be both someone who benefits from great work and to be a person that does great work.
I do agree that life may decide your priorities for you at times. Kids, spouses, relationships are all very important and compete for time and at some point you have to decide if it’s worth it to put in extra to do great work. It doesn’t mean though that Software Engineering is devoid of creativity. It’s been pointed out that software engineers can be one of the most productive assets of a company, and that productivity isn’t achieved by magic but by creative problem solving.
Until you come to the realization that most developers are “dark matter developers” doing yet another software as a service CRUD app or bespoke app that will never see the light of day outside of the company.
At the end of the day, if you got hit by a bus, your company would send flowers to your funeral and have an open req for your position before your body was buried.
> At the end of the day, if you got hit by a bus, your company would send flowers to your funeral and have an open req for your position before your body was buried.
Which is exactly what I expect them to do. It's a business and not a family. They compensated me for my work; I don't expect them to cry for me when I'm gone, but to hire someone immediately to continue the business.
Which is exactly why it's best to maintain an emotional distance. If you are looking for it to be fulfilling, then in most cases you are going to be disappointed. Almost by definition there will be little or no intrinsic reward; the extrinsic rewards will drive that out.
> Which is exactly why it's best to maintain an emotional distance. If you are looking for it to be fulfilling then in most cases you are going to be disappointed.
I do see your point, but the difference is that in my perspective the emotional attachment to something I build lasts only as long as I'm building it. Once the thing is delivered, it's up to the users to ultimately decide to use it or discard it. I think my emotional attachment is to the process of building things rather than to the thing produced by the process.
I do agree that if you want to see the thing you built get used, then yes, it's a very bad idea to get attached to that emotionally. But the way I see it, after it's built it's out of my hands.
That's a pretty cold way of looking at it. I've certainly never worked anywhere where the death of a coworker would be as inconsequential as you make it sound.
I’ve only worked at one large company in my life - at the time it was a Fortune 10 non tech company. I didn’t have a name in any official sign on or documentation. I had an “SSO number”. My 2nd level manager wouldn’t have known me if he bumped into me in the street.
I worked at a startup where the founder thought he was irreplaceable since he was the only one who knew how to modify and compile the custom C/MFC IDE/VM/compiler that he had developed and that everyone in the company had been using for years before I got there.
They somehow convinced him to show me how everything worked since I was the only person who had a C++/MFC low level optimization background. As soon as they were comfortable that I could do it, the board pushed him out, laid off a bunch of other developers and gave me the responsibility.
If a crappy PM wastes an engineer’s time by having them churn out useless product features, the PM is ruining their own career and the career of the engineers.
It’s important to work on meaningful work in order to have a meaningful career. That doesn’t mean everyone needs to heal the sick, but it can mean a PM doesn’t waste my time making me do stupid shit like build a feature no one uses.
It’s very difficult to level up if you never build anything worthwhile. PMs should focus on getting it right the first time or not doing it at all, because in the world of software development, adding technical debt for something that isn't worth anything anyway is worth less than doing nothing at all.
I've never worked somewhere that could actually take a concept to a complete, well-tested feature delivered to users in a two week sprint.
I've worked at little startups and a medium-sized tech company as well as FAANG. In each case, we spent most of our time working on projects with 2-3 month time horizons. The small and medium companies called their development strategy "agile". In FAANG we mostly just did our work and didn't spend any time on the Kafkaesque exercise of "sprint retrospectives" and "grooming sessions". Our product manager didn't tell us what to build, but acted more as an advisor for the engineering team leads.
I think in some cases scrum is an attempt to compensate for inexperienced management.
I assume we just weren't "doing agile", but when I was younger it always left me feeling vaguely inadequate: if I was just more efficient and competent, maybe the 2-week sprint would fit better. Maybe that was the point.
Has anyone here ever delivered useful features to users on that kind of cadence? What was it like?
When I was doing full-stack, solo development, I could get features with real customer value delivered in <= 2 weeks. The flexibility afforded by solo development in an area where you are competent can unleash some great velocity, but it has pretty predictable downsides (you can't scale past what you can do yourself; when the project gets big enough, the context switching is a killer; etc.). I don't think I have personally ever hit this level of development velocity when working in a group bigger than two people.
Yep. Agile theater often happens when whoever wears (or doesn't wear) the PM "hat" doesn't have a clue beyond their so-called certifications and amateur "expert" non-experience, because they're afraid of appearing incompetent. Then it reverts to: Big Design Upfront -> Big Deliverables at the End -> sad, expensive fail.
Agile means:
- Build-test-feedback-adjust loop is as tight as possible... hours or less to get a fix or a quick feature.
- The end-users are involved from the beginning, to constantly capture what they really need (good requirements, not necessarily what they ask for directly) and to get feedback for usability/feature improvements.
- Someone is prioritizing fixes and features from what users need right now to do something useful, rather than arbitrarily shotgunning features in milestones.
- Just enough intra-sprint time is dedicated to test improvements, bug hunting and refactoring.
- Using continuous integration (CI) and/or continuous deployment (CD).
- You are what you measure and post publicly. A giant dashboard in the office to show key metrics like open tickets, who's working on what, app load, app latency, shopping carts/sales, current scaled-infrastructure costs, etc. (a rough sketch of this follows below the list).
- Minimum ad-hoc meetings and very few scheduled all-hands.
- Let developers focus on one or two tasks rather than constantly interrupting them. Some sort of office GTD system combined with industrial andon light towers to signal: serious concentrating, light work, need to socialize, or not in.
- Unless you're building an elevator, a fission reactor, or an interstellar spaceship, be against waterfall development.
I'm forgetting a million other things like mindset and differences in processes, but the above is a rough sketch.
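To make the "you are what you measure and post publicly" item above concrete, here is a minimal Python sketch of the kind of team-level aggregation a wall dashboard might show. The metric names and the hard-coded sample data are purely illustrative assumptions standing in for whatever your ticket tracker and deploy pipeline would actually provide:

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class Deploy:
        day: date
        failed: bool

    # Hypothetical sample data; a real version would pull this from the
    # ticket tracker and CI/CD system instead of hard-coding it.
    OPEN_TICKETS = 42
    DEPLOYS = [
        Deploy(date(2024, 5, 1), failed=False),
        Deploy(date(2024, 5, 2), failed=True),
        Deploy(date(2024, 5, 6), failed=False),
    ]

    def weekly_summary(deploys, open_tickets, today):
        """Aggregate a few team-level metrics for a wall dashboard."""
        week_ago = today - timedelta(days=7)
        recent = [d for d in deploys if d.day >= week_ago]
        return {
            "open tickets": open_tickets,
            "deploys in last 7 days": len(recent),
            "failed deploys in last 7 days": sum(d.failed for d in recent),
        }

    if __name__ == "__main__":
        summary = weekly_summary(DEPLOYS, OPEN_TICKETS, date(2024, 5, 7))
        for name, value in summary.items():
            print(f"{name}: {value}")

Treat it as an illustration of the idea (post a few honest numbers where everyone can see them), not a prescription for which numbers to post.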
I do this all the time: it's too easy and too comfortable to get wrapped up in technologies, methodologies, or building out elaborate processes and business-y minutiae, which I think is a procrastination / ego defense mechanism rooted in a core fear many people have of rejection by the user(s). It's so very easy to build something for yourself, but emotionally more difficult to build things others will see differently.
The point of Scrum is not to deliver a full feature in a 2-week timebox. It's to deliver 'something of value' in a 2-week timebox. 'Something of value' could be stuff like:
- A tracer bullet implementation of a new DB technology to de-risk full implementation
- A skeleton workflow so you can validate with stakeholders that you're on the right track in terms of understanding how they work
- A completely failed attempt at an implementation of a particular feature, outlining key lessons, a go-forward plan, and the next set of experiments you want to run to mitigate technology risk (will this work), customer risk (will they like it), or product risk (will it sell)
> I've worked at little startups and a medium-sized tech company as well as FAANG. In each case, we spent most of our time working on projects with 2-3 month time horizons. The small and medium companies called their development strategy "agile". In FAANG we mostly just did our work and didn't spend any time on the Kafkaesque exercise of "sprint retrospectives" and "grooming sessions". Our product manager didn't tell us what to build, but acted more as an advisor for the engineering team leads.
Wtf? I've worked mainly for startups, with a couple of large companies in-between, and I've had the exact opposite experience. The engineering in every startup has been fairly lean and efficient with a few mistakes here or there, and every large business has been an 'agile' hell where everything feels ludicrously slow and nothing gets done.
I've always avoided interviewing with FAANG etc because I just assumed they'd be in the latter category with the sheer number of employees they have. It's interesting to hear that Facebook actually runs smoothly.
I agree with the parent poster. I've spent 6+ years at FB and the whole time I've just worked on what I thought was important. No one ever assigned a project to me. The process normally involves managers and PMs highlighting problems and potential solutions and then developers picking projects that interest them. It sounds crazy, but it is controlled chaos. We incent people to take on non-sexy work by aligning it with performance reviews, so everyone is required to show some better engineering impact (like writing documentation, unit or integration tests, cleaning up deprecated code, etc.).
It's the longest I've stayed at a company (worked a decade at startups and web dev companies) because of the freedom to constantly work on things that are interesting. For example I've changed the type of developer I am twice since joining.
So it's worth a look if that's a style that works for you.
It likely depends somewhat on the team and specific company.
However, the baseline level of individual competence and motivation is relatively high, which seems to reduce the need for process.
EDIT: the medium-sized company I worked at (~200 engineers) probably falls into the category of "agile hell" you described. The startups weren't as bad.
I currently work in one week sprints and we do this. Not 100% of the time of course, and it's an internal facing team where the customers are two desks away. But I've also done it on other teams with proper paying customers.
The devil is in the definition. As an example, we have a tool that bootstraps a new Git repo and we're adding a feature that will include our company Rubocop config if it's a Ruby project. There is no reason on earth that this should take 3-5 developers a week to accomplish (never mind two). But it's a feature and it could be useful. And on top of all the other features, the tool is quite useful nowadays compared to the first extremely thin slice we did.
The trick is reducing what you're counting as a feature to the smallest thing that is actually worth bothering to use. It won't always fit into one or two weeks, especially if you're dependent on hardware manufacturing, or beholden to the iOS App Store approval timetable or whatever. But for your line-of-business SaaS apps there's absolutely no technical reason why you can't do it; there may be human obstacles to doing so. (Hell, maybe we're all wrong and you shouldn't do it.)
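For what it's worth, the Rubocop example above is small enough to sketch. This is not the commenter's actual tool, just a hypothetical Python version of that bootstrap step, with made-up paths and a made-up Ruby check, to show how thin a "feature" worth shipping in a week or two can be:

    import shutil
    from pathlib import Path

    # Hypothetical location of the shared company Rubocop config.
    COMPANY_RUBOCOP = Path("/opt/company-defaults/.rubocop.yml")

    def looks_like_ruby_project(repo: Path) -> bool:
        """Cheap heuristic: a Gemfile or a gemspec marks a Ruby project."""
        return (repo / "Gemfile").exists() or any(repo.glob("*.gemspec"))

    def add_rubocop_config(repo: Path) -> bool:
        """Copy the shared config into a freshly bootstrapped repo if it
        is a Ruby project and doesn't already carry its own config."""
        target = repo / ".rubocop.yml"
        if looks_like_ruby_project(repo) and not target.exists():
            shutil.copy(COMPANY_RUBOCOP, target)
            return True
        return False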
I have worked at a variety of companies and on a variety of projects. What kind of cadence works varies widely. And the factors that go into it are everything from what kind of software it is to the codebase to the organization it is embedded in.
There was one reporting app where I was the sole developer and it was only for internal use. My regular cadence was that I would field a series of questions every day. Better than 90% of the questions already had a documented way to solve it. But many days a new feature would be needed. Most features were delivered on the same day. That was a fun project, and my reporting system wound up adopted in every department of that company.
I have worked on a legacy codebase with a Kanban style ticket system. Most features were delivered in less than a week. However a significant fraction of "features" were actually bugfixes. Such is life with legacy code.
I have worked on systems where features realistically took a month or three, but we pretended to do biweekly sprints. The sprint cycle seemed silly there.
I work in a place that does this in 2 week sprints and that's probably the reason we're a public company with hundreds of millions in ARR.
We're also totally a feature factory and I know for a fact that I would hate to be on one of the development teams and am absolutely glad I do ops instead here.
The things we have the hardest time with are hiring and retaining senior talent and executing on difficult work that can't be accomplished over a period of a few sprints.
Sadly this is the case in our company. I worked as a dev team manager for the last 9 months, and I was constantly wondering why we work on these features...
The more time I spent with our CEO, though, the clearer it became that he wanted to decide everything feature-wise. When I proposed that we should maybe do some market research or user interviews instead, he boldly declared that he knows the market best. So go figure. No wonder the whole company is a shit-show and we're losing a ton of money each year. Also, when new features were being prioritized, I dared to propose to the other managers that maybe we should tie the priorities to our business plan. Again, our genius CEO told us that the business plan numbers should not be taken very strictly.
The more I thought about why anyone would give money to such a moron, the more I figured that he is basically a very effective salesperson. He can easily convince you how great he and his vision are, but he lacks all kinds of strategic or operational skills. And as someone commented here on HN before, he also has management myopia: "If I can't understand it, it must not be hard," he probably thinks.
There was even a case when we looked for a product manager. This is not exactly the CEO's fault, but even 8 months were not enough to find one, even though we interviewed perfectly capable and matching candidates. There was always at least one person in management who found some excuse to ditch the candidate. Now, in hindsight, I think these people were afraid that the current status quo of the feature factory would change, so they sabotaged the whole PM candidate screening.
So now everything in our company is as it is written in this post. Success theater (this might be the same BTW that is called vanity metrics by Eric Ries), no connection to metrics, no connection to user values, hand-offs, etc.
Maybe you're spot-on, 100% right, in terms of vision, in terms of business plan, in terms of psychological blind spots, in terms of product-market fit, in terms of a path to profitability. You seem pretty smart, so maybe it is fairly insulting to you to not be listened to.
In what way is your worldview helpful? And I want to be gentle here, and mean no harm. In what way is it helpful for yourself, for your own wellbeing?
Let's say you could fix the company and stop the losses. Then trying to do that seems worth pursuing. Let's say you can't! We've all come across situations that are hard and unmovable. So now, unless some break happens for you (because it's unlikely they will wake up and listen to you if they never did before... just a pragmatic observation), you are stuck being chronically, slightly unhappy.
It's not my place to suggest, and I'm trying to ask respectfully: would either letting go, or conversely doubling down, standing up for yourself, and finding another job where they value you, hurt you less? Completely serious question, because everyone deserves to be happy.
And also, a bit of conjecture, but if you're not profitable, it's that same CEO's ability to communicate a vision that's funding payroll, right? I'm not saying absolve him of every grievance, because if the company isn't doing well it's ultimately on his shoulders.
My point is maybe there's a softer path to walk here....?
I don't actually understand what's bad about this, maybe someone who is in a company like this can explain why it's a negative? I am working with a company that is moving more towards this approach and I'm really enjoying it. Isn't this essentially what Basecamp's Shape Up method advocates for? Not everyone feels they need to be building something that changes the world I suppose. Each to their own.
The problem is that, if you're churning out features without thinking critically about why, your work might not even change the company. I've seen engineers, teams, even entire departments spend multiple quarters being a net drain on the company. But they never noticed, because they kept consistently defining release targets and hitting them.
> churning out features without thinking critically about why, your work might not even change the company
Isn't this just speculation? Why would anyone build features without any thoughts about their benefit?
In practice you never know how a feature is going to work out and benefit or harm the company, you just have to try it. Not trying is a sure way to lose to bolder competition.
There are lots of bad-actor reasons (putting career advancement ahead of product success), but there are also structural issues that can lead teams to operate this way.
The most common: product decisions are made higher up by well intentioned leaders who lack the product management expertise and the context the team does.
Usually this looks something like a sales person hearing from a couple of customers “we really need feature X”. The sales person doesn’t ask what actual problem they’re trying to solve but instead reports back “the market is telling me we need X!!”. An executive picks this up, writes a business case, and then hands a solution (build X) instead of the problem (our customers are struggling with Y) to a product manager. That PM is no longer really empowered to think about the overall outcomes but rather is tasked with discovering requirements and then project managing the feature through.
At some companies this is how all product decisions are made. It's not that there is no thought about benefit; it's that the wrong people are the ones doing the thinking, and as a result you end up with teams who spend years never delivering anything actually valuable to their users.
I hear that these situations exist, yet this is the complete opposite of what I was taught, which is that the project manager (at the very least!) has to have frequent and direct contact with the customers, starting with figuring out what their real needs are, for the project to have any hope of succeeding.
In the software/tech world, a PM's customers are often internal stakeholders, and often not even all of the relevant ones. Very rarely the company's actual customers. The PM juggles internal issues like milestones, priorities, resources, budgets, and even some politics.
I've seen proposals where one of the requirements was that the developers would have as little contact with the customer as possible. What little contact was allowed was through the most junior person at the customer.
I’ve seen quite a few features added simply ‘because we can’, without any thought to ongoing maintenance, support, or additional infrastructure requirements/costs.
A little bit of common sense and forethought can go a long way in preventing unprofitable or revenue-decreasing features from being added to the product.
If you keep adding features, without increasing your conversion rate or user revenue, you are simply decreasing your profitability.
You are basing your sentiment on assumptions that cannot be verified in practice without implementing/releasing the features. Nobody wants to implement a revenue-decreasing feature, but nobody knows beforehand whether a feature will increase or decrease revenue. So such statements are useless.
There are lots of ways to test and to map potential features to customer problems. Launching features because _you_ think they’ll resonate with users is a complete crapshoot. More often than not, people inside the company are way too close to the product to be able to determine which features are the right ones. I recommend “The Right It” by Alberto Savoia, “What Customers Want” by Tony Ulwick, and “The Lean Product Playbook” by Dan Olsen if Cutler’s essay doesn’t convince you.
There are a couple reasons I've seen why you'd do that. I'm sure there are others I haven't seen.
* Sometimes companies have declared focus areas, and it's advantageous to be working in those areas. If the CEO says Project Foobar is going to be the next big thing, you want your team to be touching Project Foobar even if you have nothing valuable to contribute.
* Some kinds of projects are costly to push back on. If someone comes to you and says "this is a security feature, I'm going to build it to increase security", you have to demonstrate that the system is secure without it and not just tell them no.
* In some environments, people are measured by their ability to produce lots of features. This makes it embarrassing and politically damaging to not do a feature you proposed. So once it's publicly known that you have a feature, it's too late to think about whether it's useful, you have to just buckle down and do it.
> Why would anyone build features without any thoughts about their benefit?
> In practice you never know how a feature is going to work out
Those two sentences are somewhat contradictory. If you think through the features you should know exactly how they further your initial goal. When you add things based on what some people might like, that's when you are guessing and you can't know.
This all boils down to having or not having a vision.
Some leaders have a vision and you can see the long-term goals; others (like Google) throw things at the wall to see what sticks, then shut them down. If you're not a monopoly, that strategy will not work.
Vision is good, but any product decision is still a guess about what will further your goals. Guesses can be right or wrong -- or if they are never wrong, then you're not doing anything that every other competitor in that space won't already be doing too. (And in that case, why have product management at all? Fire them and hire more engineers so you can follow the taillights faster.)
There's a difference between a blind guess and an educated guess.
And honestly, yes, you shouldn't have product management. Projects should be coordinated by product leads and their engineers. The only people who can plan a vision for software are the people who make it. You're paying these engineers for their knowledge; use it in all capacities.
Google certainly had visions for all their products. They just didn't work out, for reasons that became apparent much later (although some people will always claim they "knew" beforehand).
That's the whole point of agility: Release working software often. Make new business decisions based on customer feedback. Remember Google Beta? Not saying it doesn't offload responsibility and potential damage to end users..
No that's not the point. The point of agile development is not to get stuck in the water with long releases and no feedback cycle. It doesn't mean you blindly follow customer feedback and guesses until something works. It means you incrementally release your vision.
tldr: agile tells you how to release, not what to release
The Agile Manifesto and the IT industry are based on the customer's vision only. If you are a startup CEO, you are the programmers' customer, i.e. the one who pays the bills!
This is about making teams as completely responsible for the product as you possibly can. Maybe even profit/loss responsibility. The idea being that they're closer to the product than anyone, and thus should make better decisions.
If you follow the feature factory route, the team will just produce the things they are told to. If it doesn't work, it's not the team's responsibility. They did what they were told to.
Otoh, those who “tell” are closer to the business/client, and thus should issue better requirements. I honestly can’t see what’s wrong with doing what you’re told to. You cannot play a jack of all trades and program the damn thing perfectly at the same time. (And if you can, you should not waste your time bringing profits to that Corporate, Inc anyway.)
Teams that take ownership have a product person, who sits with the team. And is part of the team. Their entire job is work out the best product direction. They interact, sit with, pair, answer questions from developers everyday, talk to customers etc. Work with the team to set future product direction.
It shouldn't be external group pushing features into a team backlog.
Devs talk to customers, look at the logs, debug issues, and look at the stats being logged. As a result they have a far better understanding of product usage than most people higher up.
More features mean a more complex system, which takes more effort to maintain and adapt than a simpler system. So if you're not measuring and constantly reevaluating the value of your features you can wind up with parts of your product that cost the company far more than they are worth.
A combination of sign 3, 8 and 12 can be awful. The features that are shipped are half-baked, teams always say that they will come back and refactor it but of course never do. Developers start to care less and less about quality because that isn't what gets rewarded. Feature ship rates go up in the short term but in the long term slow down as the spaghetti gets worse and worse.
Good developers either see what is happening and jump ship or just get bored and leave.
I think the high-level summary of the article is ‘fire-and-forget development’ - ie producing features, but not measuring impact/benefit/usage.
I’m not convinced ‘feature factory’ in itself is problematic (definition not provided). There is no problem (IMO) in optimising to ship features - as long as the overall approach is to measure impact/benefit/usage and then learn and iterate.
I believe the issue the article is attempting to convey is that a preponderance of these ‘symptoms’ indicates a lack of optimization, and a failure or refusal to measure the impact.
I agree and think it’s a divide between product and engineering. Product managers (at least good ones) love thinking about the problem, while developers love thinking about solutions. I’m generalizing of course, and both mindsets are needed, but it’s the product manager’s job to make sure the developers are solving the right problems and not just launching features that don’t speak to a customer benefit.
I'm surprised by so many negative comments here (actually, I'm not). This great piece summed up a lot of awful aspects about commercial software development that I've also observed over the past 20 years, but have been unable to organize together and articulate as single problem.
People are asking why this is bad. One way to think about why the Feature Factory is bad is that it's mostly open loop: You "launch" feature after feature at the customer, but there's no real feedback coming in to understand whether what you're doing is worthwhile, to understand what to improve or fix, or to drive future decisions. The only feedback that gets measured are things like revenue and sales, and the only thing leadership sees as driving revenue is the constant spew of features--because that's all you're doing. The insights that you get from revenue are very generic: "Money is coming in--keep doing what you're doing" or "We're losing money. Do something different!". The insights you get from salespeople are even worse: "We'll land this sweet-ass deal (and I'll get my bonus) if only you churn out otherwise un-needed features X, Y, and Z!"
I guess this is fine if you're an agency or consulting shop that just does one-off projects and then moves on to the next client. On the other hand, if you're writing software for actual people to use, trying to be the best in class at one thing, or building a platform to last decades instead of months, then your product will suffer if all you do is bolt half-baked features onto it over and over.
I think the reason why I was confused is that it seems there's a very fine line between a feature factory and Google/Twitter/Facebook/Amazon/...
At a lot of these high-functioning places, they seem to have "feature factory" moments with some tweaks (e.g. actually building metrics, refactoring, etc. so that the SW won't implode in 6 months down the line).
People complain about it, but Google being able to literally ‘pull the plug’ and shutdown unprofitable or disproportionately resource draining projects is important to note.
To me, that is a characteristic of the un-feature factory, or whatever the opposite would be called.
I've heard this called "being on the hamster wheel": you are running forward as fast as you can but not going anywhere.
I've had an experience where I dreaded the daily stand-up, it was all about 'story points' in a mad scramble to find traction with customers. Management was stabbing in the dark, we didn't have a direction. Once we did have a direction we lacked the required input to push us in that direction.
Feature addiction is exacerbated by VC. Without real consequences to a poorly planned product roadmap due to the years of runway afforded by massive funding rounds, entire orgs fall into the “one more feature” cycle and never focus enough on measuring success in terms of real dollars. No one knows what went wrong when all the money is gone and devs/designers/PMs rinse and repeat at a new well-funded startup.
This actually isn’t true. VCs don’t want companies to build features, they want companies to listen to users maniacally and find product-market fit, then focus on growth (and/or revenue) like mad. That companies focus on features is a management issue and a sign of a failing company, not a sign that VC is somehow bad.
I agree that the issue is a problem of management. Though without oodles of cash to wash away missteps, poor management would be much more obvious. The ability to distill user feedback into the minimum number of valuable feature enhancements is what sets successful product orgs apart from failed ones. When cash is largely not a constraint, the tendency is to build exactly what each customer has said they want rather than try to come up with succinct improvements. The result is often bulky products that cost a fortune to maintain and only appeal to a small number of customers.
VC is good if used correctly. But dumping wads of cash into a company to develop a product that isn’t capital intensive (like web or mobile tech) has the tendency to create bloated product orgs which optimize for the wrong things.
I once worked in a place where we spent $100,000+ to add a feature that only two customers used. They paid around $8,000 for the feature. It was deemed a success. Features serve hope, while quantifying the value to customers rains on people's parades.
Depending on the customer and the work, I might not — or, to be more precise, I might only know some aspects of the product's quality based on what the customer tells me. Or I might only directly know whether certain aspects of what I'm doing are good. Arguably that's actually always the case, but if you're doing consumer-oriented mass-market stuff, then it's quite possible that "what the producer knows" overwhelms what the "customer" knows. But not all software is something you can dogfood or A/B test, and software developers are _very_ quick to assume that they know better than customers, especially if the customer is in some way locked-in.
If you know it is going to be a nightmare to support, it will consume tons of resources (cpu/storage), and nobody wants to pay for it (or not pay enough to cover its costs), it is not good.
As has been mentioned many times on this site, it is very easy to sell $2 for $1 all day long.
Clients will say they'd pay for it, then when you deliver it they'll say they realized that your competitor provides that and everything else that is really important to them in the standard fee. The competitor is actually no better, and would require the same amount of development to get to parity in other areas, but they win by claiming that everything will be rainbows.
Clients will say they'd pay for X, but once you deliver it they'll suddenly realize that it won't actually work for them until you also implement Y, but they won't pay any more for X+Y than they would for X alone.
Clients will say they'd pay for X, but once you deliver it they'll have shifted direction or something has changed and they don't need it anymore.
Clients will say they'd pay for X, but once you deliver it and they start using it they'll discover that it's not at all what they actually needed.
It's more that they're measuring the wrong thing. Wash-rinse-repeat here just leads to implosion due to neglecting the big picture, death by 1000 cuts, etc.
In this thread: when your whole life is based on assumptions and 98% of it is wrong, you're going to defend yourself tooth and nail. It is emotionally challenging and damaging to the ego.
Watch Jim Keller (designer of the Apple A4/A5 chips and the Ryzen processor, co-author of the x86-64 spec, and a legendary chip designer) make this point better than I can [video link starts at the 1:22:34 mark]: https://youtu.be/Nb2tebYAaOA?t=4954
People would say that the author of this post is arrogant and wants to reject the status quo. But I would say the opposite: the people who have a vested interest in the status quo, because their reputation and salary depend on it, are the arrogant ones, because they reject reality in favor of their own good.
Much of this describes some of my past experience at a Fortune 100. That company is now undergoing much discussion at a senior level about agile delivery. Senior executives are walking around talking about tribes and chapters and have no idea what that really means or of the day to day work of teams. It’s unfortunate what will come out of it in the end is some people doing stuff now that will have new titles and maybe more ceremonies but not a substantive change from what they do now or the way work gets done.
Middle management thinks they work in a factory and treat people like numbers. If you trust the same group of people to implement the change then you can bet their sole goal is to make it look good. Unless you change the middle then not much will change.
In principle agile is supposed to be guided by higher level processes accounting for this kind of strategic problem. In practice, yeah, I do sometimes see agile teams get away with writing "objective: ship my features, KR: 5 features are shipped".
I'm waiting for the inevitable open office plan that has a conveyor along its primary axis for moving whiteboards along the production path, like in a modern factory. There can be a sub-team that programs robotic arms to sketch things on the whiteboards as they pass each stage.
Factories are fine, but if you walk into a factory you're going to be finding a lot of people working on the factory itself. Process tweaks, new machines, maintenance, etc.
In the software "feature factory", people have mostly forgotten about that, usually because someone does it "in their spare time". I guarantee you that no factory worker lubricates the machines off the clock. I am not sure why we should treat software any differently.
A factory can produce almost anything at scale, but how do you know it is worthwhile and providing the best possible value? In a "bad" factory, the "factory workers" are like cogs in a machine and have no influence over what is built. Someone orders them to build this or that, and they produce it. The "feature factory", in this terminology, should be seen in opposition to an alternative process, where the "factory workers" themselves decide what to produce and how; as long as the measured value stacks up, that alternative will be more efficient and deliver more shareholder value than the feature factory.
Factories are efficient because they make a bunch of things in the same way. The feature factory isn't a factory in that sense. It's a giant craft shop in which every job is unique and no real scaling is occurring.
The problem is that "shipping stuff" is not what companies are supposed to be solving. They're supposed to be providing _value_ to customers and investors.
A "feature factory" model is bad because it masquerades as progress.
"Value" is harder to define, and often can't be measured _simply_ with metrics - you need metrics for insights, but most metrics are very much trailing indicators. Also there are lots of silly metrics like "tickets closed" that are easy, and naturally companies gravitate towards anything easy as the number of people rises. And factories love metrics.
Which is great, so long as they're not just creating debt for themselves by e.g. creating broken products that have to be recalled to be fixed by the factory.
When it comes to software you do not want to produce code at scale; the fewer lines of code the better, really. It'd be like measuring the quality of your genome by counting the rungs in your DNA.
Been there. Anecdotally, a good indicator of a feature factory is the turnover in marketing, a particularly gruelling department to be in when you are not finding any consistency with the message you've been tasked with communicating. That kind of situation is okay and perhaps even fun if you can count your colleagues with your fingers, but at companies larger than that, the general lack of understanding of what your software does is a kind of debt, perhaps even classifiable as technical debt.
I tried to raise red flags to my bosses when our colleagues in customer support were making feature requests for features we already had. The company as a whole lacked the courage or enthusiasm to tackle those design flaws, and instead would request additional features. I tried really hard to fight for removing features too...
IMO, this is an ego-centric article. Unless you're trying to change the world... be happy you have a continual stream of work. If they don't have a need to keep you busy, that's when you should be worried. As a programmer, I try to keep my employer happy and I don't try to meddle in business decisions. I try to understand them as best as possible and give suggestions or ask questions when things are not clear or don't make sense, knowing that my job is to make things that match the specifications. When I saw "feature factory" I initially thought it was a good thing, since you are building new things, and I didn't realize the article was negative until I started reading. I'm grateful, in other words, as I'm pretty normal, not some bright programmer working at SpaceX.
At a certain company size, there simply is not the level of control (OODA loops) to know what product to make or which features to add - it is a fairly good survival strategy to have multiple teams duplicating huge amounts of work simply because a few will deliver the right thing.
The internal politics of most organisations has a similar feel - lots of duplication looks like competing camps and frequently is, but the duplication is partly a survival strategy (what if those others don't deliver?) and partly evolution (one team will deliver something that actually fits the market).
Boy, there are better ways to arrange it, but this seems to be a local maximum that's easy to reach.
Yes! Very well-put-together list, because in my last role (in the same company) this was essentially what I was trying to escape, and when I look at the work that team is doing, it's not impactful or noticeable, yet the devs are always building some new complex thing that won't ever see the light of day.
On my current team, feature building is the second-to-last step (right before a retro), because product development requires many non-coding steps to first understand the problem. Sometimes I believe it should be called "problem solving" instead of "product development".
There is also a perverse incentive for employees not to point this out. Why would a product manager highlight that the status quo is flawed? Why would an engineer shake things up and optimize tired old code instead of just cranking out shiny new features?
Raising these issues would ruffle a lot of feathers. I would not do that just to make my own life more difficult and possibly point out that my own position is redundant.
Such a good phrase. Premature "mission accomplished" is always embarrassing, even when you're just the one stuck cringing and golf clapping in the corner.
The common thread here seems to be a lack of argument / buy-in process. Central leaders need to (1) convince the team that the tactics accomplish the team goals and (2) have team goals. Bad teams do neither.
As the average tenure of an engineer (or PM, for that matter) at many medium-to-large software companies edges towards two years, it is BARELY enough time for the average employee to 1) ramp up on HR stuff/culture, 2) understand the implementation of an existing system, 3) make and deploy a change to the system without incident.
Best case, all of that takes 6-9 months, but the reality is more like 12-18.
Having time to understand the "why" behind features in a deeply critical way is a LUXURY given this average 2-3 yr time frame. And all of this assumes an absence of reorgs, mission-critical integration work, etc., which further complicate understanding.
The reality is that customer needs are changing so rapidly that even CEOs and heads of sales fail to grasp the "why" most of the time, and merely focus on just doing whatever it takes to win the next big contract. "When one can see no future, all one can do is the next right thing."
Most of the points come from disregarding the cardinal rule that every developer and development company should abide by (obligatory IMO): fight for the users.
In this case, you can't fight for them if you don't know what they want or how the product is helping them.
Dear lord, this is depressing to read. My personal anecdote here is probably not a big contribution, but this puts a label so well on the product organization I just left on a large project at a Fortune 50 company.
What's really bad is that the much larger R&D organization (which I now know to label a feature factory) absorbed our smaller, highly productive team that essentially did everything opposite to this. The things we thought we were good at were suddenly the huge flaws of our team - flaws that led to a number of us leaving to get away from the huge amount of process that got laid on us.
Unfortunately this is almost every place I've ever worked. So many good points here - but the most poignant for me is "celebrating shipping" vs. celebrating customer success... so true.
I worked at that kind of company, wouldn't recommend it to anyone. I could write a very long essay why it sucked so much, meh.
In short, it's very easy to burn out, and overall it's just stupid. The pace was surreal, some of the features were very complex, and many people were overworking. Everyone is super stressed out; the best decision is to just leave that environment.
Bottom line: don't work for that kind of company, because your value is equal to the speed of your fingers typing.
I’ve found #8 to be very important. If you don’t allow for revisions you get code rot and suddenly every new feature becomes exponentially more difficult to implement.
Ah man. This makes me realize why my time as an engineer at a recently IPO-ed tech company was so horrible. Except for #10, the company is guilty of every single point.
While this focuses on places selling software (features added to close deals, etc.), it is probably an even more pervasive problem in internal IT shops - and a more tragic one, because internal IT organizations should be better positioned to tightly align on business value rather than sales, though this is often hampered by internal organizational structures which sometimes incentivize arm's-length, contracting-like interactions.
I don't know, everyone. All the best software is so great because whatever you want to do, you can find out how in a minute with a Google search. I hate googling and finding "I don't think this is possible. I know you can do this in (a feature-factory competitor of what you googled) but I don't think (what you googled) has this feature."
I've seen comments like that thousands of times. Haven't you?
A big part of product management's function is buffering all the features being asked for so that they can take a step back and design a solution that isn't just 100 new features. The feature factory process is mostly a symptom of product being bad at their job.
Sure, but my point was that this article seems to be focused on data-driven product decision-making. Creating good designs with far reach in my view is unrelated. I wanted to counter the narrative that unless you are spying on your users and using that to 'prove' your features are being successful you are doing a bad job at product development.
It's certainly possible in my mind to do requirements gathering, feature design, implementation, deployment, and iteration without automated data collection from user behavior. You can, you know, talk to people. Nowadays when someone says "data-driven" they typically mean "instrument the hell out of your product and observe user behavior." Perhaps that's not what the author is getting at. But if they are, I think it's important to tell people to not feel guilty for not spying on their users. If you are doing the follow-up work to actually communicate with customers to understand their needs, you should feel confident that you're doing your job well. And you should be proud that you are doing so without having to spy on people.
Sure it is. "Spying" is just a deliberately loaded term to make it seem bad. What would you think of an office manager who set things up only according to requests, never trying to observe and anticipate employee needs?
If you care about your user experience, you want as much data as you can get about what exactly your users are doing. This generalizes across all companies in all industries, and I would call it "data collection". Saying "we satisfied the requirements so it succeeded" is pretty universally recognized as a cop-out, which you resort to when you don't really care or have no other choice.
There exist strategies that step over the line to spying (you shouldn't, like, put tracking devices on your users), but the article doesn't suggest that you should do those.
Yours is the consensus view, I'm contrarian on it. Appealing to it as being "universally recognized" as an argument about its validity is a logical fallacy.
The methodology of recent history has been to instrument everything, and then use that to analyze. I'm not ignorant, I've done this. It's an effective way of not fooling yourself. However, the downsides of such data collection have, for the most part, been ignored in the process of determining what to instrument, how that data is stored, and what is to be done with it. This is an industry-scale oversight and is going to rapidly change, especially in light of new regulations.
Beyond the fact that such data is a liability (and can usually be de-anonymized) and that the legal requirements around holding it are burdensome, organizations need to ask whether their data collection policy is ethical. Your "universal recognition" is, in my mind, more of a "universal blind spot". On the contrary: instead of saying that the lack of a data collection policy is a cop-out born of laziness or lack of care, I'd argue that falling back on collecting data shows a similar kind of laziness, where you've failed to come up with a better way to understand how well your product is performing that doesn't require violating the privacy of your customers. (Even if you aren't violating it per se, you're only one data breach away from someone else doing so, and that would be entirely your fault.)
The correct methodology is to try to solve these problems with minimal data collection, ideally none. It also means sometimes you will not collect data, and knowingly potentially miss insights, because the trade-off doesn't make sense.
That doesn't mean you don't do it, but it does mean that when you do, you should recognize it as a liability, a deep trade-off that has many negatives, where the benefits should outweigh it. If it's not actionable, don't collect it. If you don't need it anymore, permanently delete it.
Most organizations don't see it that way, but I predict many will in the coming years.
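As one concrete, and entirely illustrative, way to act on that stance, here is a Python sketch of aggregate-only instrumentation: it counts how often a feature is used per day, never records who used it, and permanently drops anything older than a retention window. The feature name and the 90-day window are assumptions for the example, not something the article or the commenter prescribes:

    from collections import Counter
    from datetime import date, timedelta

    RETENTION_DAYS = 90  # illustrative retention window

    class FeatureUsage:
        """Aggregate-only usage counts: no user identifiers are ever stored."""

        def __init__(self):
            # Keyed by (day, feature name) -> number of uses that day.
            self.counts = Counter()

        def record(self, feature_name, day=None):
            self.counts[(day or date.today(), feature_name)] += 1

        def prune(self, today=None):
            """Permanently delete counts older than the retention window."""
            cutoff = (today or date.today()) - timedelta(days=RETENTION_DAYS)
            self.counts = Counter(
                {key: n for key, n in self.counts.items() if key[0] >= cutoff}
            )

    usage = FeatureUsage()
    usage.record("export_to_csv")
    usage.record("export_to_csv")
    usage.prune()
    print(usage.counts)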
I've never heard of this term before, but when I think of "feature factory", products like Jira and Yahoo! immediately come to mind. Such products feel like a sinking bag of features with no real cohesive, central purpose.
If your input regarding how the company is run is valued by upper management, then make a clear argument that they are running a feature factory and what they can do to change.
If no one cares what you think, then the best outcome for yourself and your company is for you to find other employment. Your labor is freed from the feature factory for more productive purposes, and the probability of survival for your sub-optimal company is diminished, opening opportunities for more optimal companies in the same space.
Where this exists it's usually driven by management, especially #12. The fix is to shift focus towards measurement and a longer timeline. It's really hard to change management's focus, but you can adopt that mindset for your own and your team's work, and once management sees the benefits it will be easier to convince them it's worth adopting.
Does this ever not happen? I am being serious... Have any of you worked at a company for any substantial period of time where some form of this has not become the norm?
To a certain degree, and in my experience, all companies have this problem. Sometimes it’s at a team scale but for others it’s at an organization scale.
I've had to make the decision to steer my design resources into "feature factories" because my customers are sometimes fickle.
There are a number of instances where features were paid for, developed, and then not deployed.
It hasn't become discouraging yet, because there has been professional growth in developing methods for constructing and deploying said features, but I am concerned about our financial ability to identify and reject contracts that are just unexciting features. Most of my devs are satisfied with the paycheck despite the work, but a few are showing signs of not feeling valued. And I'm not sure how to address that yet to avoid losing them. It's like: taking a lame contract for $$$ vs. boring a good employee. I hate how personal it becomes; I feel like I'm cajoling them to do the work... it feels icky and toxic.
I dunno, if I were an engineer in an environment like that I'd be fine with it if (1) you were up-front about this being a necessary hack to sustain the company for a bit longer, and (2) there is a vision and a process to refine that vision of where you would all like the product to go. (2) is so that these seemingly short-term features could either be seen as prototypes or first cuts at the future you'd really like to get to, or walled off as extraneous stubs that don't interfere with the core architecture. (1) is important because I feel good about helping keep the company afloat, but I feel crappy about just doing stuff that I know is bullshit simply because I'm told to (or worse, lied to that it's "important" for its own sake).
This also makes it possible to track the trajectory of the company over time. When you're doing better, you can afford to turn down these types of requests. Communicate that decision and why you made it to engineering; we love to hear that sort of thing.
The safest answer that a salesperson can give is "yes". The safest answer that an engineer can give is "no". You can't let either side always win, but you can communicate the reasoning and importance to bridge the gap and make it a team decision even if it wasn't the team making it.
Thanks for the feedback. That's pretty much how the discussions go. The team is small so I can still be 1:1 with them, and be direct about the state of things.
>This also makes it possible to track the trajectory of the company over time. When you're doing better, you can afford to turn down these types of requests.
Yeah, this is a really good indicator. Heck, if this keeps up -I- won't want to do it anymore! Which I think is perfectly natural.