
unknown flag: --leader-elect, all leader-elect flags are unknown since 1.32.0 #7668

Open
idebeijer opened this issue Jan 7, 2025 · 0 comments · May be fixed by #7672
Labels: area/cluster-autoscaler, kind/bug

idebeijer commented Jan 7, 2025

Which component are you using?:

/area cluster-autoscaler

What version of the component are you using?:

Component version:

1.32.0

What k8s version are you using (kubectl version)?:

kubectl version output:
$ kubectl version
Client Version: v1.32.0
Kustomize Version: v5.5.0
Server Version: v1.32.0

What environment is this in?:

AWS, but not with EKS; just EC2 and self-managed Kubernetes.

What did you expect to happen?:

When upgrading the cluster-autoscaler from 1.31.0 to 1.32.0, the --leader-elect flag would still exist.

What happened instead?:

The cluster-autoscaler crashed, reporting that the --leader-elect flag is unknown:

❯ k logs cluster-autoscaler-7d767d94fd-dk97q
unknown flag: --leader-elect
Usage of ./cluster-autoscaler:
unknown flag: --leader-elect

How to reproduce it (as minimally and precisely as possible):

Start cluster-autoscaler version 1.32.0 or later with any of the following flags set (a minimal invocation is sketched after the list):

      --leader-elect                                                       Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability. (default true)
      --leader-elect-lease-duration duration                               The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled. (default 15s)
      --leader-elect-renew-deadline duration                               The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than the lease duration. This is only applicable if leader election is enabled. (default 10s)
      --leader-elect-resource-lock string                                  The type of resource object that is used for locking during leader election. Supported options are 'leases'. (default "leases")
      --leader-elect-resource-name string                                  The name of resource object that is used for locking during leader election. (default "cluster-autoscaler")
      --leader-elect-resource-namespace string                             The namespace of resource object that is used for locking during leader election.
      --leader-elect-retry-period duration                                 The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled
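
For example, this minimal invocation (the binary path is illustrative) should be enough to trigger the crash:

    $ ./cluster-autoscaler --leader-elect=true
    unknown flag: --leader-elect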

Anything else we need to know?:

This bug is likely the result of c382519.

Since the above commit, the main func calls kube_flag.InitFlags() before componentopts.BindLeaderElectionFlags(&leaderElection, pflag.CommandLine). When I check out the repository and move kube_flag.InitFlags() back after componentopts.BindLeaderElectionFlags(&leaderElection, pflag.CommandLine), as it was before that commit, running go run main.go --help shows the flags again.
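
For illustration, a minimal sketch of the correct ordering, assuming the component-base and pflag packages that the autoscaler's main.go already imports (the bare-bones main() below is illustrative, not the actual function):

    package main

    import (
        "github.com/spf13/pflag"

        kube_flag "k8s.io/component-base/cli/flag"
        componentbaseconfig "k8s.io/component-base/config"
        componentopts "k8s.io/component-base/config/options"
    )

    func main() {
        var leaderElection componentbaseconfig.LeaderElectionConfiguration

        // Register --leader-elect and the other leader-election flags on
        // the global pflag set first...
        componentopts.BindLeaderElectionFlags(&leaderElection, pflag.CommandLine)

        // ...and only then parse the command line. With the two calls
        // reversed (as after c382519), pflag parses before the flags are
        // registered and rejects --leader-elect as unknown.
        kube_flag.InitFlags()
    }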

idebeijer added the kind/bug label on Jan 7, 2025