
> for example you can't create a kubernetes cluster then add a resource to it

I have no love for HCL, but you can do this by creating a kubernetes provider whose auth token points at the resource output for the token you generated for the cluster.
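Roughly like this (a minimal sketch, assuming the cluster is created directly as an aws_eks_cluster resource; the resource names here are illustrative):

  # Generates a short-lived auth token once the cluster exists
  data "aws_eks_cluster_auth" "this" {
    name = aws_eks_cluster.this.name
  }

  # The provider reads its credentials from the cluster's outputs,
  # so Terraform can't configure it until the cluster has been created
  provider "kubernetes" {
    host                   = aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.this.token
  }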




Yes, however this will (typically) work if the cluster already exists (from a previous run), but typically not if you're creating the cluster, and the kubernetes provider, as part of the same run.

IIRC you'll end up with a kubernetes provider without auth (typically pointing at your local machine), which is 1) not helpful, and 2) can be actively bad.

I believe the core issue here is that providers don't have the ability to specify a `depends_on` relation: https://github.com/hashicorp/terraform/issues/2430


This works even without the depends_on property. All you need to do is have the module you use for creating the cluster expose an output that is guaranteed to be a computed property.

Then use that computed property as an input variable for whatever you want to deploy into Kubernetes.
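For example (a sketch only; the module and variable names are hypothetical):

  # Inside the cluster module: an output whose value is only known after apply
  output "cluster_endpoint" {
    value = aws_eks_cluster.this.endpoint
  }

  # In the root module: passing the computed output into the workload module
  # makes everything in that module wait for the cluster to exist
  module "workload" {
    source           = "./modules/workload"
    cluster_endpoint = module.my_cluster.cluster_endpoint
  }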

We're using this with multiple providers and it works. Of course, an actual, visible dependency would be better.


I'd love to see an example of this actually working, because I have had the opposite experience (specifically with the Kubernetes and Helm providers); I've had to do applies in multiple steps.


This should work (as in, it will create the cluster and only then add the k8s resource to it, in the same plan/apply).

Here the module creates an EKS cluster, but this would work for any module that creates a k8s cluster.

  module "my_cluster" {
    source                          = "terraform-aws-modules/eks/aws"
    version                         = "17.0.2"

    cluster_name                    = "my-cluster"
    cluster_version                 = "1.18"
  }

  # Queries for Kubernetes authentication
  # this data query depends on the module my_cluster
  data "aws_eks_cluster" "my_cluster" { 
    name = module.my_cluster.cluster_id
  }
  
  # this data query depends on the module my_cluster
  data "aws_eks_cluster_auth" "my_cluster" { 
    name = module.my_cluster.cluster_id
  }

  # this provider depends on the data query above, which depends on the module my_cluster
  provider "kubernetes" {  
    host                   = data.aws_eks_cluster.my_cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.my_cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.my_cluster.token
    load_config_file       = false
  }

  # this provider depends on the data query above, which depends on the module my_cluster
  provider "helm" { 
    kubernetes {
      host                   = data.aws_eks_cluster.my_cluster.endpoint
      cluster_ca_certificate = base64decode(data.aws_eks_cluster.my_cluster.certificate_authority.0.data)
      token                  = data.aws_eks_cluster_auth.my_cluster.token
      load_config_file       = false
    }
  }


  # this resource depends on the k8s provider, which depends on the data query above, which depends on the module my_cluster
  resource "kubernetes_namespace" "namespaces" { 

    metadata {
      name = "my-namespace"
    }
  }


I literally implemented this not a month ago. I don't understand the complaint at all. Terraform is easily able to orchestrate a cluster and then use its data to configure the provider. The provider details do not need to be available until resources are created using the provider, which won't occur until the EKS cluster is available.


I'm using something similar, but it doesn't handle cluster deletion well.


You can do this with either:

1. depends_on = ...

2. an implicit dependency, i.e. reference some cluster property in your deployment, which causes the same behavior as depends_on
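For example (a sketch, assuming the module from the example above, which exposes a cluster_endpoint output):

  # 1) explicit dependency
  resource "kubernetes_namespace" "example" {
    metadata {
      name = "example"
    }

    depends_on = [module.my_cluster]
  }

  # 2) implicit dependency: referencing a cluster attribute gives the same ordering
  resource "kubernetes_config_map" "cluster_info" {
    metadata {
      name = "cluster-info"
    }

    data = {
      endpoint = module.my_cluster.cluster_endpoint
    }
  }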



