kubernetes deployment: use secret instead of configmap #861

Open
techtrd opened this issue Jan 11, 2025 · 2 comments
techtrd commented Jan 11, 2025

Describe the Bug

When using the Kubernetes installation method, kustomize creates a ConfigMap in which secrets are stored.

Kubernetes has a native Secret object for sensitive values, which should be used instead.

See: https://kubernetes.io/docs/concepts/configuration/secret/

Additionally, the current Kubernetes installation provides no way to manage the deployment via GitOps tooling such as Argo CD, since it relies on a Makefile. This should not be hard to fix, but I will need to figure out how to use kustomize in Argo CD first.

Steps to Reproduce

Use `kustomize build . > manifests.yaml` or the documented Makefile to generate the manifests and deploy them into a cluster. In the hoarder namespace there will be a ConfigMap containing the secrets.

Expected Behaviour

Instead of being stored in a ConfigMap, secrets should be stored in a Secret object.
With kustomize this could be achieved by using a `secretGenerator` and referencing `env[].valueFrom.secretKeyRef` in the deployments.
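A minimal sketch of what this could look like. The resource and key names here (`hoarder-secrets`, `NEXTAUTH_SECRET`) and the `.env` layout are illustrative assumptions, not the project's actual files:

```yaml
# kustomization.yaml (sketch -- names are illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: hoarder
secretGenerator:
  - name: hoarder-secrets
    envs:
      - .env  # each KEY=value line becomes a key in the generated Secret
---
# deployment fragment (a separate file in practice): consume one key
# from the generated Secret instead of a ConfigMap
env:
  - name: NEXTAUTH_SECRET
    valueFrom:
      secretKeyRef:
        name: hoarder-secrets
        key: NEXTAUTH_SECRET
```

The generator also appends a content hash to the Secret name and rewrites references to it, so pods roll automatically when the secret values change.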

Screenshots or Additional Context

I need to catch up on how to open a proper PR and will try to provide a fixed `kustomization.yaml`.

Device Details

No response

Exact Hoarder Version

v0.21.0

Have you checked the troubleshooting guide?

  • I have checked the troubleshooting guide and I haven't found a solution to my problem
techtrd commented Jan 11, 2025

When using the release tag as a static stable tag, the deployments would also need `imagePullPolicy` set to `Always`, so they do not get stuck on the last release image.

According to the Kubernetes documentation, the default `imagePullPolicy`, if not configured in the deployment, is `IfNotPresent` (unless the tag is `:latest` or missing, in which case it is `Always`).

The Kubernetes documentation also discourages tags that do not carry a defined version number, since they can leave the human operator confused about which version is actually running in the cluster.
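As a sketch (the container and image names are assumptions, not the project's actual manifests), pinning the policy explicitly would look like:

```yaml
# deployment container fragment (illustrative names)
containers:
  - name: web
    image: ghcr.io/hoarder-app/hoarder:release  # static stable tag
    imagePullPolicy: Always  # re-pull on every pod start so the tag is never stale
```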

I propose putting the version into the kustomization file and leaving the image tag empty in the deployments; kustomize will then inject the proper version number into the deployments.
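With kustomize's `images` transformer this is a one-line change per release (the image name below is an assumption):

```yaml
# kustomization.yaml fragment (illustrative image name)
images:
  - name: ghcr.io/hoarder-app/hoarder
    newTag: v0.21.0  # single place to bump; rewrites every matching image reference
```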

This also makes it easier to generate the Secret from the .env file, since the version number is no longer in the .env, just the secrets.

However, there is still a need for a ConfigMap holding NEXTAUTH_URL, since that is not a secret, I think. It can be put into the kustomization.yaml directly with a `configMapGenerator`.
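For example (the ConfigMap name and URL are placeholders):

```yaml
# kustomization.yaml fragment (illustrative name and URL)
configMapGenerator:
  - name: hoarder-config
    literals:
      - NEXTAUTH_URL=https://hoarder.example.com
```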

I also think that, if properly documented, the deployment could be done without kustomize: create the ConfigMap and the Secret with two kubectl commands, then run `kubectl apply` for the other manifests. The cluster operator would then need to watch the deployment's version tags themselves, if the release tag is not used.
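A sketch of that kustomize-free flow, assuming the same illustrative names and file layout as above (this targets a live cluster, so it is a deployment fragment, not a self-contained script):

```shell
# 1. Create the non-secret config and the Secret directly:
kubectl -n hoarder create configmap hoarder-config \
  --from-literal=NEXTAUTH_URL=https://hoarder.example.com
kubectl -n hoarder create secret generic hoarder-secrets \
  --from-env-file=.env
# 2. Apply the remaining manifests:
kubectl -n hoarder apply -f manifests/
```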

techtrd commented Jan 11, 2025

I created a pull request: PR #862
