> Make feature flags short-lived. Do not confuse flags with application configuration.
This is my current battle.
I introduced feature flags to the team as a means to separate deployment from launch of new features. For the sake of getting it working and used, I made the misstep of backing the flags with config files, with the intent of replacing them with LaunchDarkly or Unleash ASAP.
Then another dev decided that these feature flags looked like a great way to implement permanent application configs for different subsets of entities in our system. In fact, he evangelized it in his design for a major new project (I was not invited to the review).
Now I have to stand back and watch as the feature flags are used for long-term configuration. I objected when I saw the misuse (in a code review I said "hey, that's not what these are for") and was overruled by management. This is the design, there's no time to update it, I'm sure we can fix it later, someday.
Lesson learned: make it very hard to misuse meta-features like feature flags, or someone will use them to get their stuff done faster.
Sadly, this is a battle you are destined to lose. I have almost completely given up. The best you can aim for is to use feature flags better rather than worse.
- Some flags are going to stay forever: kill switches, load shedding, etc. (vendors are starting to incorporate this in the UI)
- Unless you have a very easy-to-use way to add arbitrary boolean feature toggles to individual user accounts (which can become its own mess), people are going to find it vastly easier to create feature flags with per-user override lists (almost all vendors let you override on the primary token). They will use your feature flags for things like the following (a sketch of such an override-list flag follows the list):
- Preview features: "is this user in the preview group?"
- rollouts that might not ever go 100%: "should this organization use the old login flow?"
- business-critical attributes that it would be a major incident to revert to defaults: "does this user operate under the alternate tax regime?"
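To make that concrete, here's roughly the shape such a flag ends up taking. This is a hypothetical TypeScript sketch; the field names are invented, not any vendor's actual API:

```ts
// Hypothetical sketch: a flag definition with a per-user override list,
// the shape most vendors converge on (names are illustrative only).
type FlagRule = { attribute: string; op: "in" | "eq"; values: string[] };

interface FlagDefinition {
  key: string;
  defaultValue: boolean;
  // Per-user overrides keyed on the primary token (user/org id):
  overrides: Record<string, boolean>;
  rules: FlagRule[];
}

const alternateTaxRegime: FlagDefinition = {
  key: "alternate-tax-regime",
  defaultValue: false,
  // Exactly the kind of business-critical override list that outlives
  // any rollout: reverting these to the default would be an incident.
  overrides: { "user-1842": true, "user-2077": true },
  rules: [],
};
```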
You can try to fight this (indeed, especially for that last one, you most definitely should!), but you will not ever completely win the feature flag ideological purity war!
In my org, I think I’ve got the feature flag thing mostly down.
We started with a customer-specific configuration system that allows arbitrary values matching a defined schema. It’s very easy to add to the schema (define the config name, types, and permissions to read or write it in a JSON schema document).
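For illustration, one entry in such a schema might look something like this (a made-up sketch; our actual field names differ):

```ts
// Invented example of one schema entry: name, type, default, and the
// read/write permissions the comment above describes.
const schemaEntry = {
  name: "invoice.rounding_mode",
  type: "string",
  enum: ["bankers", "half_up"],
  default: "half_up",
  read: ["support", "developer"],
  write: ["developer"],
};
```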
We have an administration panel with a full view of the JSON config for our support specialists and an even more detailed one for developers.
Most config values get a user interface as well.
From there we just have a namespace in the configuration for “feature flags”. Sometimes these are very short lived (2-4 sprints until the feature is done), but others can last a lot longer.
There are an unfortunate couple that will probably never go away at this point (because some enterprise customer has a niche use case in the “legacy” version of the feature that we haven’t yet implemented compatibility with, and I don’t know when that will make it onto our roadmap), but in the end they can just be migrated into normal config values if needed.
A little tooling layer on top lets us query and write to the configs of thousands of sites at once as well.
We have an interesting hybrid between the two that I'd like your take on. When we release new versions of our web client's static assets, we bump a version number that moves folks over to the new version.
1. We could stick it in a standard conf system and serve it up randomly based on what host a client hits. (Or come up with more sophisticated rollouts)
2. Or we can put it as "perm" conf in the feature flag system and roll it out based on different cohorts/segments.
I'm leaning towards #2, but I'd love to understand why you want to prohibit long-lived keys so I can make a more informed choice. The original blog post's main reason was that FF systems favor availability over consistency, making them a poor tool if you need fast-converging global config; that becomes somewhat challenging here during rollbacks but is likely not the end of the world.
2) The downside of rolling it out based on host is that you could refresh your page, hit a different host, and see the UI bouncing back and forth between versions. As long as you always plan to roll things to 100%, this is the perfect use case for a feature flag.
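To make option 2 concrete, here's a minimal sketch (all names hypothetical) of a sticky, cohort-based version flag. The deterministic bucketing is what prevents the bouncing:

```ts
// Sketch of option 2: the asset version lives in the flag system and is
// evaluated per org, so a user who refreshes always lands on the same
// version regardless of which host they hit.
import { createHash } from "node:crypto";

function bucket(orgId: string, percent: number): boolean {
  // Deterministic hash => sticky assignment, unlike per-host randomness.
  const h = createHash("sha256").update(orgId).digest();
  return (h[0] / 255) * 100 < percent;
}

function assetVersion(orgId: string): string {
  const rolloutPercent = 25; // dialed up over time, eventually 100
  return bucket(orgId, rolloutPercent) ? "v2024-06-02" : "v2024-05-20";
}
```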
Or... see them for what they are: runtime configuration. The name implies a use case scenario, but in reality it's just a configuration knob. With a good UI, it's a pretty damn convenient way to do runtime configuration.
So of course they'll be used for long-term configuration purposes, especially under pressure and for gradual rollouts of whole systems, not just A/B testing features.
I think the reason feature flags are never removed is because the timeframe that a given feature-flag is top-of-mind is also when it's at its most useful. Later when it's calcified in place and the off-state may be broken/atrophied, no one is really thinking about it.
I'm also not convinced it's always a huge problem. I can imagine sometimes it is, but in most codebases I've worked on, it's more of an annoyance but not cracking the top 3 or 5 biggest problems we wanted to focus on.
IMHO the best solution is not something heavy-handed like a policy that we only use runtime config for fixed timeframes, or a process where we regularly audit and prune old flags. It's simply to keep a record of the config changes over time so anyone interested can see the history, and a culture where every engineer is encouraged to take a little extra time to verify and remove dead stuff whenever it crosses their path.
The mental overhead of reading code like this is massive. Leaving feature flags in with the alternate branch left to rot leads to a codebase that is nearly impossible to understand. No purpose is served by not deleting the now unused branch except you save one developer an afternoon of work. But that time is quickly recouped when the entire team, and especially new hires, only have half as much code to understand.
There is a need for runtime configurations, yes, but it's important to put them behind an interface intended for that, and not one intended for something else.
I can immediately see if the config is being requested, which system requests it, what the metadata of the request are, etc. I can do conditional rollout of a configuration based on runtime data. I can reset the configuration to a known-good failsafe default with a break-glass button, without asking for approval. I can schedule a rollout and get a reviewer for the config change.
IME the feature flag interface is next to perfect for runtime configuration. I don't care about the intended usage at all. You could say feature flags have found great product-market fit; a segment of the market is a bit unexpected, but it makes perfect sense if you think about it.
This gets messy at larger scales, both as teams grow and software grows.
Resetting to a known failsafe works as long as the risk of someone changing a backend service (or multiple services) at the same time is low. Once it isn't, you can most definitely do more damage (and make life harder for oncall).
Who controls the runtime config? One person? Half a dozen? One hundred plus? Is it being gated by approvals, or can anyone do it? What about auditability? If something does go wrong, how easily can I rule out you turning on that flag?
Finally, there is simply the sheer number of permutations you introduce here. A feature flag is binary in many cases: on or off. A config could be in any number of states.
These things make me nervous as an architect, and I've seen well-intentioned changes fail when good flag discipline wasn't followed. Using it as full-blown runtime config seems like a postmortem waiting to happen.
I am tempted to agree: if separating the two is key (I’m not convinced that it is, but happy to assume), why not copy the interface and infrastructure of the feature flag system and offer it as a configuration tool?
I feel like you could easily add a status to flags, to mark whether they are part of a release process, or a permanent configuration tool, and in the latter case, take them off the release interfaces.
Could you expand on what you think the different interfaces should be? You keep stating that these things ought to be distinct but haven't explained why beyond dogma.
Our FF system uses our config system as its system of record. There's some potential for misuse, and it's difficult to apply deadlines. On the plus side all our settings are captured in version control. Before they were spread out over several systems, one of which had an audit system that was pure tribal knowledge for years.
The main challenge is when things go wrong. Feature flags are designed for high-rate evaluation with low-latency responses. Configuration usually doesn't care that much about latency, as it's usually read once at startup. This context leads to some very specific tradeoffs, such as erring toward availability over consistency, which in the case of configuration management could be a bad choice.
Yeah, and assuming they are done well, they probably have better analytics and insights attached to them than anything else except perhaps your experiments!
Long-lived feature flags are a development-process bug; I'm not sure we can solve it with the feature toggle system.
I'm at the point of deciding that Scrum is fundamentally incompatible with feature flags. We demo the code long before the flag has been removed, which leads to perverse incentives. If you want flags to go away in a timely manner you need WIP limits, and columns for those elements of the lifecycle. In short: Kanban doesn't (have to) have this problem.
And even with fixes like the ones I describe above, I'm not entirely sure you can stop your bad actor, because it's going to be months before anyone notices that the flags have long overstayed their welcome.
I'm partial to flags being under version control, where we have an audit trail. However, time and again what we really need is a summary of how long each flag has existed, so they can be gotten rid of. The Kanban solution I mention above is only a 90% solution; it's easy to forget you added a flag (or added 3 but deleted 2).
I faced something similar, and I think it's unavoidable. Give people a screwdriver and they'll find a way of using it as a hammer.
The best you can do is expect the feature flagging solution to give some kind of warning for tech debt. Then equip them with alternative tools for configuration management. Rather than forbidding, give them options, but if it's not your scope, I'd let them be (I know as engineers this is hard to do :P).
> Give people a screwdriver and they'll find a way of using it as a hammer.
I feel like feature flags aren't that far off though. They're fantastic for many uses of runtime configuration as mentioned in another comment.
There are multiple people in this thread complaining about "abuse" of feature flags, but no one has been able to voice why it's abuse instead of just use, beyond esoteric dogma.
Feature Flags inherently introduce at least one branch into your codebase.
Every branch in your codebase creates a brand new state your code can run through.
The number of branches introduced by Feature Flags likely does not scale linearly, because there is a good chance they will become nested, especially as more are added.
Start with an example of just one feature flag nested inside another: that creates four possible program states. Four is not unreasonable; you can clearly define what state the program should be in for all four.
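A tiny sketch of what those four states look like in code (illustrative names only):

```ts
// One flag nested inside another: four reachable states, (A,B) in
// {(off,off),(on,off),(off,on),(on,on)}. With n independent flags the
// worst case is 2^n states to reason about and test.
function render(flagA: boolean, flagB: boolean): string {
  if (flagA) {
    if (flagB) return "new layout + new checkout"; // (on, on)
    return "new layout, old checkout";             // (on, off)
  }
  if (flagB) return "old layout, new checkout";    // (off, on)
  return "old layout, old checkout";               // (off, off)
}
```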
Now scale that to a hundred feature flags, some nested, some not.
It becomes impossible to know what any particular program state should be past the most common configurations. If you can't point to a single interface in a program and tell me all of the possible states of it, your program is going to be brittle as hell. It will become a QA nightmare.
This is why Feature Flags should be used for temporary development efforts or A/B testing, and removed.
Otherwise you're going to have a debugging nightmare on your hands eventually.
Edit: Note that this is different from normal runtime configurations because normally runtime configurations don't have a mix of in-dev options and other temporary flags. Also, they aren't usually set up to arbitrarily add new options whenever it is convenient for a developer.
Branches are difficult to reason about? Yes, I agree.
Are branches necessary to make the product behave in a different way in some circumstances? Most of the time.
Do those circumstances require a branch? Unless you’re super confident about some part of code, yes? But why would you be?
Runtime configuration is not about making QA easy. It’s introduced because QA has been hell already so you can control rollout of code which you know wasn’t properly QA’d - or it was but turns out the thing you built isn’t the thing users want and the release cycle is too long to deploy a revert.
I’d say ‘branches are bad but alternatives are worse’.
The fundamental difference between feature flags and config is that the former is meant to be a soft deploy of code where everyone is expected to eventually be on the new code. Thus it should have a built-in timer after which it stops, and you should consider launching all new customers with it on.
As for why: if you don't deprecate the feature flag in some time span, you're permanently carrying both code paths. With ongoing associated dev and qa resources and costs against your complexity budget.
Permanent costs should only be undertaken after careful consideration, and should be outside the scope of a single dev deciding to undertake them. Whereas flags should be cheap to add to enable dev to get stuff into prod faster while retaining safety.
Permanently making something a config choice should be done after heavier deliberation because of the aforementioned costs, and you often want different tools to manage it, including something heavier-duty than a single checkbox/button in your internal CS admin tooling. These are often tied into contracts or legal needs, and in many cases Salesforce should be the source of truth for them. Or whatever CPQ system you're using.
I feel like this is a solvable problem:
1) Require feature flags to be configured with an expiration date. If a flag is past its expiration date, auto-generate a task to clean it up.
2) If you want to be extra fancy, set up a codemod to automatically clean up the FF once it's expired (a minimal sketch of the expiry check follows).
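Here's a minimal sketch of idea #1, with all names invented and nothing vendor-specific assumed:

```ts
// Every flag carries a required expiration date; a periodic job files a
// cleanup task for any flag past its deadline.
interface Flag {
  key: string;
  owner: string;
  expires: Date; // required at creation time
}

function fileCleanupTasks(flags: Flag[], createTask: (f: Flag) => void) {
  const now = new Date();
  for (const flag of flags) {
    if (flag.expires < now) {
      createTask(flag); // e.g. open a ticket or PR assigned to flag.owner
    }
  }
}
```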
I don't see the problem with developers using flags for configuration as a stopgap until there's a better solution available.
It can be done by opening a PR. I haven't tried it yet, but I'm curious to try out https://github.com/uber/piranha, or maybe to hear some experiences if someone has used it.
AFAIK, it'd only open a PR if the flag is fully enabled and has some heuristics to determine when it's safe to remove. Honestly, I haven't tested it but I'm curious to know if someone had either good or bad experiences.
If all the PRs were instantly rejected, that would be a bad sign, but I couldn't find anyone who has used it effectively. I mean, it's been around for a while but it didn't spread, so that already gives me some hint.
If the cleanup only happens if the flag is not used, then the "expiration date" is basically meaningless. You can either delete it or you can't. Who cares if it's expired or not.
I think "expired" is just a signal for a flag that should potentially be removed. I believe it's a good way to focus on the ones you should pay attention to. But it might be cool if you could say "Yes, I know, please extend this for another period" (or "do not notify me again for another month").
Sounds like "other dev" found some business case they could unblock with the existing system, and you thought the business was better off not solving that, or finding a more expensive solution.
Curious how you plan to justify cost to "fix it" to management. If it ain't broke...
I think it's better to admit they actually are config, just a different kind of config that comes with an expiration date.
Accepting reality in this way means you'll design a config management system that lets you add feature flags with a required expiration date, and then notifies you when they're still in the system after the deadline.
Agreed. My perspective is that there are two kinds of feature flags: temporary and permanent.
Temporary ones can be used to power experiments or just help you get to GA and then can be removed.
Permanent ones can be configs that serve multiple variations (e.g. values for rate limits), but they can also be simple booleans that manage long term entitlements for customers (like pricing tiers, regional product settings, etc.)
We did the same. We were early adopters of Unleash and wrangled it to also host long-term application configuration and even rule-based application config.
The architecture of Unleash made it so simple to do there, versus having to evaluate, configure, and deploy a separate app-config solution.
That's one of the main reasons to start with something like Unleash: they have stale-flag warnings built in. Plus, since you already have a UI, it's harder for it to be hijacked.
For those that don't know about the project, check out OpenFeature https://openfeature.dev/ which is sort of like OpenTelemetry but for feature flags. It helps avoid vendor lock-in. We're a young project, looking for help and to build the community!
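A rough sketch of what vendor-neutral evaluation looks like with the Node server SDK (package and method names from memory; check the docs at openfeature.dev before copying):

```ts
// Vendor-neutral flag evaluation: the provider is the only vendor-specific
// piece, so swapping vendors later doesn't change the call sites.
import { OpenFeature } from "@openfeature/server-sdk";

// Wire up whichever vendor's provider you use, e.g.:
// OpenFeature.setProvider(new SomeVendorProvider(/* ... */));

const client = OpenFeature.getClient();

// Evaluate a boolean flag with a default and a targeting context.
const enabled = await client.getBooleanValue("new-checkout", false, {
  targetingKey: "user-123",
});
```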
This feels a bit like the dicta on 12 Factor: rules handed down from a presumed authority without any discussion of the tradeoffs. Engineering is tradeoff evaluation. Give me some discussion about the alternatives, when and why they're inferior and don't pretend like the proposed solution doesn't have shortcomings or pitfalls.
I agree with you that tradeoff evaluation is crucial in engineering, but I don't see the 12 Factor methodology as a set of strict rules. They're more like guidelines that are generally a good idea to follow for building modern applications or services. Some of the suggestions apply for any type of software, like having a single version controlled codebase, separate build/release/run stages, and using stateless processes.
So it's good to be aware of _why_ those guidelines are considered a good thing, but as with any methodology, an engineer should be pragmatic in deciding when to follow it strictly, and when to adapt or ignore some of it.
That said, I wouldn't want to work on software that completely ignores 12 Factor.
It's true that there are more long-lived use cases, but if you have the ability to choose, runtime-controlled flags cover both cases, while compile-time ones cover only some. But fair point.
I dedicated a day to evaluating feature flag software based on specific criteria:
- Must support multiple SDKs, including Java and Ruby.
- Should be self-hosted with PostgreSQL database support.
- Needs to enable remote configuration for arbitrary values (not just feature flags); I don't want to run two separate services for this.
- Should offer some UI functionality.
- It should cache flag values locally and, ideally, provide live data updates (though polling is acceptable).
Here are the four options that met these basic criteria and underwent detailed evaluation:
- Unleash: Impressive and powerful, but its UI is more complex than needed, and it lacks remote configuration.
- Flagsmith: Offers remote configuration but appears less polished with some features not working smoothly; Java SDK error reporting needs improvement.
- Flipt: Simple and elegant, but lacks remote configuration and local caching for Java SDK.
- FeatureHub: Offers fewer features than Unleash and Flagsmith; its Java API seems somewhat enterprisey, but it supports remote configuration and live data updates.
Currently, I'm leaning towards FeatureHub. If remote configuration isn't necessary, Unleash offers more features, and if simplicity is key and local caching isn't needed, Flipt is an attractive option.
Hey thanks for giving Flipt a look! I'm the creator of Flipt so would love to chat more about your needs to see how we could make it work for your use case! We're actively looking into providing local caching for all our SDKs btw and would love to learn more about what your requirements are for remote configuration as it's also on our radar!
Feel free to send me an email at: mark (at) flipt.io.
As an engineer, I am generally against feature flags.
They fracture your code base, are sometimes never removed, and add complexity and logic that at best is a boolean check and at worse is something more involved.
I'd love a world where engineers are given time to complete their feature in its entirety, and the feature is released when it is ready.
Sadly, we do not live in that world and hence: feature flags.
This misses the point. A big point of feature flags is that you don't yet know how features will be perceived until you get them in front of real users.
I get what you'd like "as an engineer", but it ignores the needs of the business.
Isn't that the job of a product manager? There are other means and methodologies for gathering user sentiment before you go and build something.
You should get as close as you can, release the product, and iterate.
Today's world is: release the product in some ramshackle form or fashion, collect feedback, iterate. Doing that introduces a new construct, feature flags, that would otherwise not be necessary.
"Also, if one customer is having a particularly bad time we need to be able to disable the feature for them while continuing to collect feedback from everyone else."
Exactly! And now feature X and the feature flag that governs it is in your code base forever.
In my opinion this all gets back to the way we build product and the expectations we have for our product managers. I have no doubt that their jobs are difficult in many ways, but the lack of actual focus on product specifically as it relates to customer sentiment always strikes me as lazy especially when that data collection is basically passed off to the engineers.
That is not what feature flags are typically used for.
They're typically used as a way of enabling a change for a subset of your services to allow for monitoring of the update and easier "rollback" if it becomes necessary.
They can be used for A/B testing, but this is not what they're typically used for.
I just read item 1 (“Enable run-time control. Control flags dynamically, not using config files”) and it’s almost exclusively focused on what to do, not on why to do it.
It seems to be skipping past the use-cases and assumptions, in particular, describing what a system with feature flags looks and acts like, what the benefits and drawbacks are.
Background: I work at Block/Square, on the team that owns (but didn't build) our internal Feature Flag system, and also have a lot of experience with using LaunchDarkly.
I like the idea of caching locally, although k8s makes that a bit more difficult since containers are typically ephemeral. People will use feature flags for things that they shouldn't, so eventually "falling back to default values" will cause production problems. One thing you can do to help with this is run proxies closer to your services. For example, LaunchDarkly has an open source "Relay".
Local evaluation seems to be pretty standard at this point, although I'd argue that delivering flag definitions is (relatively) easy. One of the real value-add of a product like LaunchDarkly is all the things they can do when your applications send evaluation data upstream: unused flags, only-ever-evaluated-to-the-default flags, only-ever-evaluated-to-one-outcome flags, etc.
One best practice that I'd love to see spread (in our codebases too) is always naming the full feature flag directly in code, as a string (not a constant). I'd argue the same practice should be taken with metrics names.
One of the most useful things to know (but seldom communicated clearly near landing pages) is a basic sketch of the architecture. It's necessary to know how things will behave if there is trouble. For instance: our internal system uses ZK to store (protobuf) flag definitions, and applications set watches to be notified of changes. LaunchDarkly clients download all flags[1] in the project on connection, then stream changes.
If I were going to build a feature flag system, I would ensure that there is a global, incrementing counter that is updated every time any change is made, and make it a fundamental aspect of the design. That way, clients can cache what they've seen, and easily fetch only necessary updates. You could also imagine annotating that generation ID into W3C Baggage, and passing it through the microservices call graph to ensure evaluation at a consistent point in time (clients would need to cache history for a minute or two, of course).
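A minimal sketch of that client-side piece, under the assumption of an invented delta-fetch protocol (nothing here is an existing product's API):

```ts
// Every server-side write bumps a global counter; clients poll with the
// last generation they saw and receive only the flags changed since then.
interface DeltaResponse {
  generation: number;
  changed: Record<string, unknown>; // flag key -> new definition
}

class FlagCache {
  private generation = 0;
  private flags = new Map<string, unknown>();

  async sync(fetchDelta: (since: number) => Promise<DeltaResponse>) {
    const delta = await fetchDelta(this.generation);
    for (const [key, def] of Object.entries(delta.changed)) {
      this.flags.set(key, def);
    }
    // The generation ID is also what you'd annotate into W3C Baggage for
    // consistent evaluation across a call graph.
    this.generation = delta.generation;
  }
}
```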
One other dimension in which feature flag services vary is by the complexity of the rules they allow you to evaluate. Our internal system has a mini expression language (probably overkill). LaunchDarkly's arguably better system gives you an ordered set of rules within which conditions are ANDed together. Both allow you to pass in arbitrary contexts of key/value pairs. Many open source solutions (Unleash, last I checked, some time ago) are more limited: some of them don't let you vary on inputs, some only a small set of prescribed attributes.
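For concreteness, a loose sketch of that ordered-rules model (the shape is modeled on the description above, not any vendor's actual wire format):

```ts
// An ordered list of rules; conditions within a rule are ANDed; the first
// matching rule wins, otherwise serve the fallback.
type Context = Record<string, string>;
interface Condition { attribute: string; values: string[] } // attr in values
interface Rule { conditions: Condition[]; serve: boolean }

function evaluate(rules: Rule[], ctx: Context, fallback: boolean): boolean {
  for (const rule of rules) {
    const matches = rule.conditions.every((c) =>
      c.values.includes(ctx[c.attribute])
    );
    if (matches) return rule.serve; // first matching rule wins
  }
  return fallback;
}
```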
I think the time is ripe for an open standard client API for feature flags. I think standardizing the communication mechanisms would be constricting, but there's no reason we couldn't create something analogous to (or even part of) the Open Telemetry client SDK for feature flags. If you are seriously interested in collaborating on that, please get in touch. (I'm "zellyn" just about everywhere)
[1] Yes, this causes problems if you have too many flags in one project. They have a pretty nice filtering solution that's almost fully ready.
One more update. I spent a little time the other day trying to find all the feature flag products I could. I'm sure I missed a ton. Let me know in the comments!
Here's my first draft of the questions you'd want to ask about any given solution:
Questionnaire
- Does it seem to be primarily proprietary, primarily open-source, or “open core” (parts open source, enterprise features proprietary)?
- If it’s open core or open source with a service offering, can you run it completely on your own for free?
- Does it look “serious/mature”?
- Lots of language SDKs
- High-profile, high-scale users
- Can you do rules with arbitrary attributes or is it just on/off or on/off with overrides?
- Can it do complex rules?
- How many language SDKs are there (one, a few, lots)?
- Do feature flags appear to be the primary purpose of this company/project?
- If not, does it look like feature flags are a first-class offering, or an afterthought / checkbox-filler? (eg. split.io started out in experimentation, and then later introduced free feature flag functionality. I think it’s a first-class feature now.)
- Does it allow approval workflows?
- What is the basic architecture?
- Are flags evaluated in-memory, locally? (Hopefully!)
- Is there a relay/proxy you can run in your own environment?
- How are changes propagated?
- Polling?
- Streaming?
- Does each app retrieve/stream all the flags in a project, or just the ones they use?
- What happens if their website goes down?
- Do they do experiments too?
- As a first-class offering?
- Are there ACLs and groups/roles?
- Can they be synced from your own source of truth?
- Do they have a solution for mobile and web apps?
- If so, what is the pricing model?
- Do they have a mobile relay type product you can run yourself?
- What is the pricing model?
- Per developer?
- Per end-user? MAU?
I would have thought so. But flagsmith apparently does primarily server-side eval. And even OpenFeature has `flagd`, which I guess is a sidecar, so a sort of hybrid approach.
And LaunchDarkly's Big Segments fetch segment inclusion data live from redis (although I believe they then cache it for a while).
I more or less know all the answers for LaunchDarkly (except pricing details), and for the internal feature flag service we're deprecating, but I haven't gone through and answered it for all the other offerings. It would be time-consuming, but very useful.
Also, undoubtedly contentious. If you want an amusing read, go check out LaunchDarkly's "comparison with Split" page and Split's "comparison with LaunchDarkly" page. It's especially funny when they make the exact same evaluations, but in reverse.
> One best practice that I'd love to see spread (in our codebases too) is always naming the full feature flag directly in code, as a string (not a constant).
Can you elaborate on this? As a programmer, I would think that using something like a constant would help us find references and ensure all usage of the flag is removed when the constant is removed.
One of the most common things you want to do for a feature flag or metric name is ask, "Where is this used in code?". (LaunchDarkly even has a product feature that does this, called "Code References".) I suppose one layer of indirection (into a constant) doesn't hurt too much, although it certainly makes things a little trickier.
The bigger problem is when the code constructs metric and flag names programmatically:
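Something like this (a hypothetical TypeScript example of the anti-pattern):

```ts
// The full flag name "checkout-v2-enabled" never appears in the source,
// so a search for it finds nothing.
declare function isEnabled(flag: string): boolean; // stand-in for any SDK

const feature = "checkout";
const version = 2;
const enabled = isEnabled(`${feature}-v${version}-enabled`);
```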
That kind of thing makes it very hard to find references to metrics or flags. Sometimes it's impossible, or close to impossible, to remove, but it's worth trying hard.
Not OP, but multiple code bases may refer to the same flag by a different constant. Having a single string that can be searched across all repos in an organization is quite handy for finding all places where it's referenced.
especially when you have different languages with different rules, `MY_FEATURE_FLAG` and `kMyFeatureFlag` and `@MyFeatureFlag` might all be reasonable names for what is defined as `"my_feature_flag"` in the configuration.
Using just the string-recognizable name everywhere is...better.
If you create your own service to evaluate a bunch of feature flags for a given user/client/device/location/whatever and return the results, for use in mobile clients (everyone does this), PLEASE *make sure the client enumerates the list of flags it wants*. It's very tempting to just keep that list server-side, and send all the flags (much simpler requests, right?), but you will have to keep serving all those flags for all eternity because you'll never know which deployed versions of your app require which flags, and which can be removed.
Well, it seems to be a common theme to build a server that uses the flag eval _server_ SDK to evaluate a bunch of flags and then pass them back to the client.
For example, a client may call myserver.com/mobile-flags?merchant=abcdef&device=123456&os=ios&os_version=15.2&app_version=6.1 and the server will pass back:
flag1: true
flag2: 39
flag3: false
flag4: green
This seems to be a common theme. For example, LaunchDarkly has a mobile client SDK, but they charge by MAU, which would be untenable. So folks tend to write a proxy for the mobile apps to call. If the client (as in my example above) doesn't specify which flags it wants, then the metrics are missing, whether you're using a commercial product or your own: it'll simply tell you that all the flags got used. (Of course, you could be collecting metrics from the client apps).
But based on our experience, you'd be better off having the mobile client pass in an explicit list of desired flags, which will give accurate metrics.
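A sketch of what that request might look like, continuing the invented endpoint from the example above:

```ts
// The client enumerates exactly the flags it needs, so the server (and
// its metrics) knows which flags each deployed app version depends on.
const wanted = ["new-checkout", "dark-mode", "rate-limit-v2"];

const res = await fetch(
  "https://myserver.com/mobile-flags?" +
    new URLSearchParams({
      flags: wanted.join(","),
      app_version: "6.1",
      os: "ios",
    })
);
const flagValues: Record<string, unknown> = await res.json();
```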
> I'd argue that delivering flag definitions is (relatively) easy.
I'd argue that coming up with good UI that nudges developers towards safe behavior, as well as useful and appropriate guard rails -- in other words, using the feature flag UI to reduce likelihood of breakage -- is difficult, and one of the major value propositions of feature flag services.
The system we're building now meets most of these but not necessarily in the way described.
First, we're building a runtime configuration system on top of AWS AppConfig: YAML/proto validation that pushes to AppConfig via gitops and Bazel. Configurations are namespaced, so the unique-names problem is solved. It's all open in git.
Feature flags are special cases of runtime configuration.
We are distinguishing backend feature flags from experimentation/variants for users. We don't have (or want) cohorting by user IDs or roles. We have a separate system for that and it does it well.
The last two points (distinguishing experimentation/feature variants from feature flags as runtime configuration) are somewhat axiomatic differences. Folks might disagree, but ultimately we have that separate system and it solves that case. They're complementary and share a lot of properties, but it resolves a lot of angst if you don't force both to be the same tool.
>Organizations who adopt feature flags see improvements in all key operational metrics for DevOps: Lead time to changes, mean-time-to-recovery, deployment frequency, and change failure rate.
Is this true? Unfortunately there are no sources indicated, and a quick check on Scholar doesn't show me anything of the sort.
There are a few case studies listed in most of the feature flag solutions, of course, each organization is completely different and the maturity of each organization varies. But feature flags are a 2-way-door decision, meaning that you can adopt them at smaller scale, try it out and see if it works for you before making a decision.
We use Unleash. There are many things you can do with feature flags and Unleash helps with a lot of them. However, my feeling is 80% of the value comes from 20% of the features. Even a much simpler system provides a ton of benefit. For me, it is top of the list after having automated tests and automated deployments.
With regard to web-based services, once you’ve got the ability to do canary testing, IMO flags/toggles are less compelling — busier code and logic you’ll have to pull out later.
Canarying gets you a 1/n treatment group, but it might be skewed geographically (all affected users are near the canary’s datacenter). You need a percentage in a feature flag if 1/n is too big and you want, e.g., 0.1% of traffic.
I agree that if you have only a few changes going to prod, fast and doing canary testing, you should be covered. In my experience that's rarely the case because of multiple teams deploying changes at the same time, and even deployments in external services causing side effects in other services.
Emergent inter-service issues are challenging to deal with regardless.
I’ve absolutely seen canary testing work in large environments with a lot of teams doing frequent deploys. The teams need to have the tooling to conduct their own canary testing and monitoring.
As soon as you’re involving external services or anything persistent you may not be able to undo the damage of misbehaving software by simply disabling the offending code with a flag.
In practice the cost/benefit of feature flags has never proven out for me; better to just speed up your deploys/rollbacks. The caveat is that I’ve only ever worked in web environments. I can imagine that with software running on an end-user device they could solve some difficult problems, provided you have a way to toggle the flag.
I couldn't find an easy link from these docs to the product page on mobile. Seems like a wasted opportunity. I had to edit the URL to get to the company website.
TL;DR if you break long posts into pages, at least have an option to see the whole thing in a single page.
I use a browser extension to send websites to my Kindle. It's great for long-ish format blog posts that I want to read but don't have the time for at the moment. However, whenever I see long blog posts broken into sections, each one on its own page, it becomes a mess. It forces me to navigate to each individual page and send it to my Kindle. Then on the Kindle I have a long list of unsorted files that I need to jump around to read in order.
I understand breaking long pieces of text into pages makes it neater and more organized, but at least have an option to see the whole thing in a single page, as a way to export it somewhere else for easy reading.
Definitely getting strong uncanny valley prose vibes.
Hard to tell if it's generated or written in an attempt to be as plain English as possible, but either way feels strangely vacuous for a technical opinion piece. There's no writer's voice.
I think it's absolutely an opinion piece - defining specific items as principles by definition means expressing opinionated ideas about the relative priority of those items over others. Also, imperative mood contains value judgment, which is inherently opinion-based (e.g. "Never expose PII"). Making arguments for why you should or should not do things requires expressing opinions about relative importance, weight etc.
If this were instead an article describing what feature flags are, or one performing a survey of various approaches to building/scaling them, I think the lack of voice is just fine - that's dealing in statement of fact. But this article mandates and implores and exhorts - the value judgments inherent in that pathos are empty without genuine authorship.
Also I'm not saying the lack of voice is bad even for conveying meaning or teaching - more that it is jarring and uncanny to read imperative claims in an empty robotic voice devoid of ethos.
Finally, I also might be biased by my first documentation love, the zeromq guide, which is an extremely-strongly-opinionated piece of docs that does its job exceptionally well. I think when writing about how or why, a strong writer's voice is more compelling. This article stretches past just the what into those other question words, so its seeming lack of authorial authority falls flat to me.