This should work, as in: it will create the cluster and only then add the k8s resources to it, in the same plan/apply.
Here the module creates an EKS cluster, but this would work for any module that creates a k8s cluster.
module "my_cluster" {
source = "terraform-aws-modules/eks/aws"
version = "17.0.2"
cluster_name = "my-cluster"
cluster_version = "1.18"
}
# Queries for Kubernetes authentication
# this data query depends on the module my_cluster
data "aws_eks_cluster" "my_cluster" {
name = module.my_cluster.cluster_id
}
# this data query depends on the module my_cluster
data "aws_eks_cluster_auth" "my_cluster" {
name = module.my_cluster.cluster_id
}
# this provider depends on the data query above, which depends on the module my_cluster
provider "kubernetes" {
host = data.aws_eks_cluster.my_cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.my_cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.my_cluster.token
load_config_file = false
}
# this provider depends on the data query above, which depends on the module my_cluster
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.my_cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.my_cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.my_cluster.token
load_config_file = false
}
}
# this resource depends on the k8s provider, which depends on the data query above, which depends on the module my_cluster
resource "kubernetes_namespace" "namespaces" {
metadata {
name = "my-namespace"
}
}
I literally implemented this not a month ago, so I don't understand the complaint at all. Terraform is easily able to orchestrate a cluster and then use its data to configure the provider. The provider details do not need to be available until resources are created with that provider, which won't happen until the EKS cluster is available.
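To make that concrete, here is a minimal sketch of a release that actually exercises the helm provider configured above. The chart, repository URL, and release name (metrics-server) are illustrative assumptions, not part of the original setup; any chart would do.

# Hypothetical example: this release depends on the helm provider, which
# depends on the data queries above, which depend on the module my_cluster.
resource "helm_release" "metrics_server" {
  name       = "metrics-server"                                    # illustrative release name
  repository = "https://kubernetes-sigs.github.io/metrics-server/" # assumed chart repository
  chart      = "metrics-server"
  namespace  = kubernetes_namespace.namespaces.metadata[0].name    # also chains off the namespace resource above
}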