
Unrelated to ArgoCon but related to ArgoCD:

I’ve been working on implementing ArgoCD and I am surprised by some design choices, so I wonder if someone could shed some light as to why they were made:

- `Application` resources can only be created in the `argocd` namespace (argocd >=2.5 tries to address this, but it is not a stable feature[0] and has bugs). This is surprising, since Kubernetes resources are generally namespaced or offered in two flavors: `ClusterResource` and `Resource`. This is a problem on multi-tenant clusters where you do not want users to have any permissions in the `argocd` namespace. I would have expected ArgoCD to offer `Application` and `ClusterApplication` resources.

- The ArgoCD controller has full admin access to the cluster, and authorization is implemented directly by ArgoCD with its own RBAC system[1]: why didn’t ArgoCD rely on the native Kubernetes RBAC system instead?

[0] https://argo-cd.readthedocs.io/en/stable/operator-manual/app...

[1] https://argo-cd.readthedocs.io/en/stable/operator-manual/rba...




Argo has a number of deployment and usage models, some of which I think were not part of the original design.

I think in the early days Argo was intended to be interacted with mostly via the UI; they put quite a bit of work into ACLs between their Projects, Applications, Clusters, and Repositories, which would let a team pretty easily self-manage their environment. I don't think it was as common then as it is today to use Argo itself to manage Argo via the app-of-apps pattern; so while the applications themselves were a gitops workflow, the bootstrap into that workflow was UI-based.

My favorite install method is per-cluster, run as something that might fit better in kube-system. We use the UI primarily for status info, with a few users having permissions to run actions like restart and delete. With autohealing, deleting resources is often a great way to get things "unstuck".

Every other interaction with the cluster is via our gitops repo. We bootstrap a pretty vanilla argo install and then basically `kubectl apply -f clusters/primary/appofapps.yaml`. Included is an Application that takes ownership of the argo install and manages its configuration from the repo as well.
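
For reference, a minimal sketch of what that bootstrap Application might look like (the repo URL, path, and names here are hypothetical, not our actual layout):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: appofapps                  # hypothetical name, mirrors clusters/primary/appofapps.yaml
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://example.com/org/gitops.git   # hypothetical gitops repo
        targetRevision: main
        path: clusters/primary/apps    # directory containing the child Application manifests
      destination:
        server: https://kubernetes.default.svc        # the local cluster
        namespace: argocd
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

Once that one resource is applied by hand, everything else, including Argo's own config, is reconciled from the repo.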

The Argo configuration basically has one cluster (local), one repo, and one project (default), so we don't really use Argo's ACL system much at all, as all config happens through our standard PR approval workflow.

Its config is pretty flexible, though: it's based on kustomize, so it's fairly easy to tailor as you see fit. It doesn't need full access to the cluster, and in fact it even supports a namespaced install that has no cluster access at all (often running many instances on a control cluster, with k8s ACLs giving users access to their own).
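
If you haven't seen it, the namespaced flavor can be pulled in with a small kustomization; this is just a sketch, and the exact manifest path and version tag should be checked against the argo-cd repo:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: team-a-argocd   # hypothetical tenant namespace
    resources:
      # version/path are illustrative; argo-cd publishes both install.yaml
      # and namespace-install.yaml under manifests/
      - https://raw.githubusercontent.com/argoproj/argo-cd/v2.9.3/manifests/namespace-install.yaml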

Ultimately, I think what you're seeing are some growing pains, but primarily features that make Argo flexible enough to target multiple types of control, ACL, repo, and cluster topologies. Them adding the ability to put Applications in other namespaces feels like a continuation down that path, just adding one more option (which I think will be pretty popular, like you mentioned, in multitenant situations).


Not a maintainer, but in my own opinion:

- `Application` is a special CRD. You can define one application to control namespaced resources in more than one namespace, cluster-wide resources, or even resources in another cluster if the controller has access to that cluster (see the sketch after this list). So how to implement secure access to applications is a difficult organization/management problem (and not necessarily a technical problem).

- I don't know why the project decided to implement its own RBAC, but I like that I can give some users limited visibility into some clusters without implementing k8s-native RBAC for them. From a pure GitOps perspective, I treat it as an anti-feature because it enables a lot of imperative operations via its own API, bypassing git. But not all organizations are comfortable being 100% gitops, so I think it's an acceptable middle ground. And I think you can deploy ArgoCD with reduced permissions just fine. I heard some folks do that, although there might be some gotchas.
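
A rough sketch of what I mean by one `Application` spanning namespaces and clusters (names, repo URL, and the remote API server address are made up):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: platform                  # hypothetical
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://example.com/org/platform.git   # hypothetical repo
        targetRevision: main
        # the rendered manifests can set their own metadata.namespace
        # and can include cluster-scoped resources (CRDs, ClusterRoles, ...)
        path: manifests
      destination:
        server: https://10.0.0.1:6443   # a remote cluster registered with Argo CD
        namespace: default              # only a default for resources without a namespace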

If it were up to me, I would completely lock down access to its own API and just do everything from git, leaving the UI as read-only for informational purposes.
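
That read-only posture can be expressed through the RBAC ConfigMap from [1]; a minimal sketch (the SSO group name is hypothetical):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-rbac-cm
      namespace: argocd
    data:
      # anyone not matched below only gets the built-in read-only role
      policy.default: role:readonly
      policy.csv: |
        # hypothetical SSO group that keeps full API/UI access
        g, platform-admins, role:admin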


The permissions model could be a lot better, but ArgoCD is really designed with a git repo as the primary interface.

It seems there's an expectation that ordinary ArgoCD users don't have (write) permissions to the cluster, and handling permissions is delegated to the checks you have on your git repos.

It does feel like a shortcut that limits the situations where ArgoCD can be used, but I can see how this could have been justified during the design process.


You might like Flux instead of Argo. A single controller still reconciles cluster-wide, but it does have the ability to define resources in different namespaces and repositories, as well as dropping privileges to enforce k8s RBAC for resource creation.
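
For example, a tenant's reconciliation can be pinned to a namespace-scoped ServiceAccount, so the k8s API (not the controller's own ACLs) decides what gets created; names and the repo URL below are placeholders:

    apiVersion: source.toolkit.fluxcd.io/v1
    kind: GitRepository
    metadata:
      name: team-a                    # hypothetical
      namespace: team-a
    spec:
      interval: 5m
      url: https://example.com/team-a/manifests.git   # hypothetical repo
      ref:
        branch: main
    ---
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: team-a
      namespace: team-a
    spec:
      interval: 10m
      sourceRef:
        kind: GitRepository
        name: team-a
      path: ./deploy
      prune: true
      # apply as this ServiceAccount; native k8s RBAC on it limits
      # what the tenant is allowed to create
      serviceAccountName: team-a-deployer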


I'm working through the Argo vs Flux debate right now. Not that they have to be mutually exclusive, just would rather start with one. It's a pretty tough decision.


Deploy both and see which patterns you prefer, and what fits into your organisation better.

I have used both, but find Argo can be unnecessarily complex, and it focuses solely on git as the source of truth for your k8s resources. The image updater can even write back to git to reflect version numbers etc., which is arguably an anti-pattern (git is not a database). However, the UI is excellent and very powerful, and if you're just getting started in the gitops space, it's very intuitive.

I feel like the weaveworks team (who created flux) have encountered the problem of using git as a source of truth at scale. They let you specify other sources such as S3 and OCI containers, which gives you a lot more power to build custom, powerful workflows.

This means that you define your k8s resources (kustomization definitions defining k8s resources, and flux resources) in git, but build, lint and test them in a CI/CD pipeline and publish them as a container. Then you can just tag that container with the cluster name or environment and treat your k8s resources like you would code. You can observe this with the flux ui too.
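
Sketching the idea with Flux's OCI artifact support: the CI side pushes the rendered config with `flux push artifact oci://...`, and the cluster pulls whatever a given tag points at. Registry URL, tags and names below are made up, and the API versions should be checked against your flux release:

    apiVersion: source.toolkit.fluxcd.io/v1beta2
    kind: OCIRepository
    metadata:
      name: cluster-config
      namespace: flux-system
    spec:
      interval: 5m
      url: oci://registry.example.com/config/my-cluster   # hypothetical registry
      ref:
        tag: prod          # promotion = moving/retagging this tag in CI
    ---
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: cluster-config
      namespace: flux-system
    spec:
      interval: 10m
      sourceRef:
        kind: OCIRepository
        name: cluster-config
      path: ./
      prune: true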

I think people get too hung up on the git part of gitops. All infrastructure should be defined in a version control system and follow a sane CI process, but the way your cluster pulls that state to enforce it can be any source that is a faithful reflection of that versioned code in SCM.


Absolutely, and argo really falls short when you have more complex patterns, like monorepos and promotion between different envs. Then you have to resort to argo events and workflows anyway and script your way through it.


Totally, then you have argo rollouts, argo workflows, argo events, and now kargo https://github.com/akuity/kargo

They're encouraging adoption of the entire stack, which is interesting on its own.

Argo has arguably done a much better job selling themselves with all the resources poured into their marketing.


The two features that worked easily in Flux and pushed my team to pick it over argo:

1) Flux can source helm values (and postRender variable replacements) from configmaps and secrets on the cluster. This allows us to set up each cluster with environment-specific information at creation time (immutable information about the cluster, essentially) without having to copy that data to our flux repos.

2) Flux lets you use kustomize to patch manifests from helm post-render as well. You can patch anything before it's applied to a cluster, which allows you to deploy upstream charts without having to fork them for your needs.
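
Both features live on the HelmRelease spec; a minimal sketch, with made-up chart, ConfigMap and Secret names:

    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: my-app                    # hypothetical
      namespace: apps
    spec:
      interval: 10m
      chart:
        spec:
          chart: podinfo
          sourceRef:
            kind: HelmRepository
            name: podinfo
      # 1) environment-specific values read from the cluster itself
      valuesFrom:
        - kind: ConfigMap
          name: cluster-info          # hypothetical, created when the cluster is provisioned
          valuesKey: values.yaml
        - kind: Secret
          name: app-credentials       # hypothetical
          valuesKey: credentials.yaml
      # 2) kustomize patches applied to the rendered chart before it hits the cluster
      postRenderers:
        - kustomize:
            patches:
              - target:
                  kind: Deployment
                  name: podinfo
                patch: |
                  - op: add
                    path: /spec/template/spec/nodeSelector
                    value:
                      kubernetes.io/arch: amd64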


At dayjob we run ArgoCD, at home I prefer to run Flux which is much lighter for my home lab.

No GUI, redis, dex, or the various controllers and servers. Flux is very simple and is able to suit my needs.


You can use ApplicationSets, which template Applications based upon "generators" such as the contents of a git repo. You can add allowed destinations and resource whitelists and blacklists. https://argo-cd.readthedocs.io/en/stable/operator-manual/app... Once something like this is set up, developers do not need access to the argocd namespace and can simply push to the git repo.
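
Roughly like this, using the git directory generator (repo URL, project, and paths are placeholders):

    apiVersion: argoproj.io/v1alpha1
    kind: ApplicationSet
    metadata:
      name: team-apps                 # hypothetical
      namespace: argocd
    spec:
      generators:
        - git:
            repoURL: https://example.com/org/gitops.git   # hypothetical repo
            revision: main
            directories:
              - path: apps/*          # one Application per directory
      template:
        metadata:
          name: '{{path.basename}}'
        spec:
          # the AppProject can restrict allowed destinations and resource kinds
          project: team-project
          source:
            repoURL: https://example.com/org/gitops.git
            targetRevision: main
            path: '{{path}}'
          destination:
            server: https://kubernetes.default.svc
            namespace: '{{path.basename}}'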



