Show HN: Atlas – A deployment pipeline platform built on Argo CD (greenops.io)
66 points by mihirpandya on Feb 2, 2022 | 12 comments
Atlas is an open-source deployment pipeline platform built for cloud-native applications.

Atlas allows users to:

- Create continuous pipelines across all their environments and clusters
- Add custom task/test plugins (Python scripts, K8s manifests, Argo Workflows, environment setup, etc.)
- Automatically roll back applications in case of failure or degradation (Atlas watches the application past the scope of a pipeline run to ensure and enforce stability)
- Use all existing Argo features
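To make the rollback behavior concrete, here is a rough sketch of a "watch past the pipeline run" loop. This is purely illustrative, not Atlas's actual implementation; `check_health` and `rollback` are hypothetical callbacks (in practice they might poll Argo CD app status and trigger an Argo CD rollback):

```python
import time
from typing import Callable


def watch_and_remediate(
    check_health: Callable[[], bool],    # hypothetical: polls app health, e.g. via Argo CD
    rollback: Callable[[], None],        # hypothetical: reverts to the last stable version
    window_seconds: float = 300.0,       # keep watching after the pipeline run completes
    interval_seconds: float = 5.0,
    sleep: Callable[[float], None] = time.sleep,
) -> bool:
    """Poll application health for a fixed window after deployment.

    Triggers the rollback on the first failed check and stops watching.
    Returns True if the app stayed healthy for the whole window.
    """
    elapsed = 0.0
    while elapsed < window_seconds:
        if not check_health():
            rollback()
            return False
        sleep(interval_seconds)
        elapsed += interval_seconds
    return True
```

The key idea is that stability enforcement outlives the pipeline itself: the watcher keeps polling after the final pipeline step succeeds, so a delayed degradation still triggers remediation.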

Would love to hear all of your feedback and thoughts on this!




I'm a big fan of Argo CD (and their other products), so am interested to learn more about Atlas.

> To build deployment pipelines with Argo Workflows, users have to set up Argo Events and write custom scripts to trigger an ArgoCD deployment, which can be a complicated process.

It sounds to me like the major value-add, compared to Argo Workflows, is simplifying the implementation process – but then the first step in "Getting Started" is to set up Kafka…

I'll have to spend more time looking at this.


Thanks for the comment. Fair point about Kafka :), although there are now good tools that make deploying Kafka easy (Strimzi, Confluent, etc.).

I like to think of Atlas as connective meshing for Argo that adds a reactive element. Atlas can connect tasks/tests, deployments, and automated stability under one standardized interface. It also adds a lot of automation, picking up feedback and changes at any point during the lifecycle and acting on them (rollback, progression, etc.).


can't workflows do these things?


Argo Workflows would need to be fully configured with Argo Events and Argo CD to make this happen, specifically with additional Workflows for error processing, multiple sensors for input, and a secondary tier of custom sensors for watching tasks or Argo CD apps past the pipeline run. The workflows (main pipeline and handling pipelines) would likely require custom logic as well. Definitely feasible, but can be complicated.
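For illustration, the kind of custom trigger script alluded to above – the glue between a pipeline step and an Argo CD deployment – might look like this. This is a hypothetical sketch, not code from Atlas or Argo; `my-app` is a placeholder application name:

```python
import subprocess
from typing import List, Optional


def build_sync_command(app: str, revision: Optional[str] = None) -> List[str]:
    """Build an `argocd app sync` invocation for a given application,
    optionally pinned to a specific Git revision."""
    cmd = ["argocd", "app", "sync", app]
    if revision:
        cmd += ["--revision", revision]
    return cmd


def trigger_deployment(app: str, revision: Optional[str] = None) -> None:
    """Run the sync; raises CalledProcessError if the sync fails.
    Requires the argocd CLI and a logged-in session."""
    subprocess.run(build_sync_command(app, revision), check=True)


# Example (assumes an app named "my-app" exists in Argo CD):
# trigger_deployment("my-app", revision="abc1234")
```

Even this small piece of glue needs error handling, retries, and a watcher for post-sync health, which is roughly the configuration burden the parent comment describes.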


How does this compare to Tekton? I'm currently exploring projects in this "k8s-native pipeline" space.


Hi! Sorry for the delayed response. Atlas focuses more on deployment pipelines, and is a more structured system for deployment-specific use cases. It has a lot of support for testing, application recovery, and state remediation.

Tekton is quite similar to Argo Workflows - they are both frameworks that provide ways to declare workflow pipelines for execution on Kubernetes. Per one of the Argo project leads, the majority of users use these workflows for ML, ETL, data processing pipelines, and CI.


Don't Stripe have a product called Atlas?


Yes. It is such an overused name.

Stripe Atlas. MongoDB Atlas. HashiCorp had an Atlas. Just last week there was an Atlas CLI ("Terraform for DB migrations") ....


Any suggestions for other names?


Solinas: "pipeline" in Greek


Jason, leader of the Argo-nauts :)


Hello! I'm building argonaut.dev. Not related to the Argo project, though we do use some parts under the hood.



