If we do not define metadata.namespace, we can apply the Terraform, but any subsequent modifications or deletions hang on the finalizer.
My Temporary Workaround:
Omit the namespace in the kubernetes_manifest resource, then delete the finalizer:
resource "null_resource" "my-docker-secret-clustersecret-finalizer-patch" {
  # We need this to trigger every time we run terraform, or at least every
  # time we update the resource (room for improvement here)
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = "kubectl patch clusterSecret my-docker-secret --type json --patch='[ { \"op\": \"remove\", \"path\": \"/metadata/finalizers\" } ]'"
  }

  depends_on = [kubernetes_manifest.my-docker-secret-clustersecret]
}
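On the "room for improvement" note: the always_run = timestamp() trigger re-runs the patch on every apply. One possible refinement (an untested sketch, assuming the resource names above) is to key the trigger on the manifest contents instead, so the patch only re-runs when the resource actually changes:

```hcl
# Hypothetical alternative trigger: re-run the finalizer patch only when the
# ClusterSecret manifest changes, by hashing its JSON encoding.
triggers = {
  manifest_hash = sha256(jsonencode(kubernetes_manifest.my-docker-secret-clustersecret.manifest))
}
```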
Implications of the workaround?
I presume there are none, but I would be very interested to know. :)
How to replicate:
Create a ClusterSecret via Terraform and include the namespace field:
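A minimal sketch of what this looks like (the apiVersion, namespace value, and data are illustrative assumptions, not taken from the original report):

```hcl
# Hypothetical kubernetes_manifest resource for a ClusterSecret with
# metadata.namespace set -- setting the namespace on this cluster-scoped
# resource is what triggers the provider's validation error below.
resource "kubernetes_manifest" "my-docker-secret-clustersecret" {
  manifest = {
    apiVersion = "clustersecret.io/v1"
    kind       = "ClusterSecret"
    metadata = {
      name      = "my-docker-secret"
      namespace = "default" # assumed value; any namespace triggers the error
    }
    data = {
      ".dockerconfigjson" = "..." # placeholder secret payload
    }
  }
}
```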
Error from terraform:
Cluster level resource cannot take namespace
This validation is implemented in the Terraform Kubernetes provider at https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/manifest/provider/validate.go#L236