CRDB-45670: helm: automate the statefulset update involving new PVCs #443
echo "release_name: Helm release name, e.g. my-release"
echo "chart_version: Helm chart version to upgrade to, e.g. 15.0.0"
echo "namespace: Kubernetes namespace, e.g. default"
echo "sts_name: Statefulset name, e.g. my-release-cockroachdb"
echo "num_replicas: Number of replicas in the statefulset, e.g. 3"
echo "kubeconfig (optional): Path to the kubeconfig file. Default is $HOME/.kube/config."
echo
echo "example: ./scripts/upgrade_with_new_pvc.sh my-release 15.0.0 default my-release-cockroachdb 3"
We should also take the values.yaml file as an input. The user could have a custom values.yaml file they created for CockroachDB, and they could be passing that file to the helm upgrade command via the -f option.
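One way to address this: build the upgrade command so that -f is appended only when a custom values file is supplied. This is a minimal sketch, not the script's actual implementation; the function name and argument order are assumptions for illustration.

```shell
# Sketch: assemble the helm upgrade command, appending "-f <values_file>"
# only when a custom values file is given, so the chart's bundled
# values.yaml remains the default otherwise.
build_upgrade_cmd() {
  local release_name=$1 chart_version=$2 namespace=$3 values_file=$4
  local cmd="helm upgrade $release_name ./cockroachdb --namespace $namespace --version $chart_version"
  if [ -n "$values_file" ]; then
    cmd="$cmd -f $values_file"
  fi
  echo "$cmd"
}

build_upgrade_cmd my-release 15.0.0 default my-values.yaml
# → helm upgrade my-release ./cockroachdb --namespace default --version 15.0.0 -f my-values.yaml
```

When no values file is passed, the -f flag is simply omitted and helm falls back to the chart's defaults.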
At L59, I'm currently using ./cockroachdb as the chart location to upgrade to. I am guessing this would default to using the values file at ./cockroachdb/values.yaml, right?
Once we do provide a flag for the values file (-f), I am wondering whether it's okay to keep the chart path as ./cockroachdb.
# However, at times, the STS fails to recognize that all replicas are running and the upgrade gets stuck.
# The "--timeout 1m" helps short-circuit the upgrade process. Even if the upgrade does time out, it is
# harmless, and the final upgrade attempt will succeed once all pod replicas have been updated.
helm upgrade $release_name ./cockroachdb --kubeconfig=$kubeconfig --namespace $namespace --version $chart_version --wait --timeout 1m --debug
If one replica is not upgraded and has not joined the CockroachDB cluster properly, and we move on to the next one, wouldn't that affect the quorum of the cluster if there are 3 nodes and 2 of them are inaccessible at that moment?
We should verify that the replica we have just updated has joined the cluster before moving on to the next one.
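A per-replica readiness gate could look like the sketch below. This is illustrative, not the script's code; it assumes the pod's readiness probe (which for the CockroachDB chart checks node health) is an acceptable proxy for "joined the cluster".

```shell
# Sketch: poll a pod's container readiness before moving to the next replica.
# Function name, polling interval, and timeout are assumptions.
wait_for_pod_ready() {
  local pod=$1 namespace=$2 timeout=${3:-300}
  local waited=0
  # jsonpath extracts the ready flag of the first container in the pod.
  until kubectl get pod "$pod" -n "$namespace" \
      -o jsonpath='{.status.containerStatuses[0].ready}' 2>/dev/null | grep -q true; do
    if [ "$waited" -ge "$timeout" ]; then
      echo "pod $pod not ready after ${timeout}s" >&2
      return 1
    fi
    sleep 5
    waited=$((waited + 5))
  done
  echo "pod $pod is ready"
}
```

Calling this between pod replacements would prevent a second replica from being taken down while the previous one is still rejoining, which protects quorum in a 3-node cluster.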
In scenarios where a new PVC is added to a StatefulSet as part of a Helm upgrade, we need to perform the upgrade per the steps below:
For every pod:
This is required because we cannot attach a volume to an existing pod.
It is not reasonable to expect customers to perform these steps manually as part of their Helm upgrade, so we would like to offer a script that automates them and reduces friction for customers.
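The overall flow the script automates can be sketched as follows. This is a simplified illustration of the general Kubernetes technique (orphan-delete the StatefulSet, recreate it with the new volumeClaimTemplates, then replace pods one at a time), not the PR's exact implementation; the function and variable names are assumptions.

```shell
# Sketch of the per-pod upgrade flow for adding a new PVC to a StatefulSet.
upgrade_with_new_pvc() {
  local release_name=$1 chart_version=$2 namespace=$3 sts_name=$4 num_replicas=$5
  # 1. Delete the StatefulSet object without deleting its pods, since
  #    volumeClaimTemplates cannot be changed in place.
  kubectl delete statefulset "$sts_name" -n "$namespace" --cascade=orphan
  # 2. Recreate the StatefulSet with the new volumeClaimTemplates via helm.
  helm upgrade "$release_name" ./cockroachdb -n "$namespace" --version "$chart_version" --wait --timeout 1m
  # 3. Replace pods one at a time; each recreated pod gets the new PVC,
  #    and we wait for readiness before touching the next replica.
  local i=0
  while [ "$i" -lt "$num_replicas" ]; do
    kubectl delete pod "${sts_name}-${i}" -n "$namespace"
    kubectl wait --for=condition=Ready pod "${sts_name}-${i}" -n "$namespace" --timeout=300s
    i=$((i + 1))
  done
  echo "upgraded $num_replicas replicas"
}
```

Replacing one pod at a time, with a readiness wait in between, keeps at most one replica down at any point, which preserves quorum in a 3-node cluster.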