We are noticing vsphere-syncer spamming the following log messages when a K8s cluster spans multiple vCenters and has CSI migration disabled:
```
sigs.k8s.io/vsphere-csi-driver/v3/pkg/syncer.initVolumeMigrationService
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/syncer/util.go:366
sigs.k8s.io/vsphere-csi-driver/v3/pkg/syncer.podAdded
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/syncer/metadatasyncer.go:1852
sigs.k8s.io/vsphere-csi-driver/v3/pkg/syncer.InitMetadataSyncer.func7
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/syncer/metadatasyncer.go:530
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/client-go/tools/cache/controller.go:243
k8s.io/client-go/tools/cache.(*processorListener).run.func1
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/client-go/tools/cache/shared_informer.go:973
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:226
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:227
k8s.io/apimachinery/pkg/util/wait.JitterUntil
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:204
k8s.io/apimachinery/pkg/util/wait.Until
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:161
k8s.io/client-go/tools/cache.(*processorListener).run
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/client-go/tools/cache/shared_informer.go:967
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:72
2024-12-12T20:35:41.93236794Z syncer/metadatasyncer.go:1853 podAdded: failed to get migration service. Err: volume-migration feature is not supported on Multi-vCenter deployment
sigs.k8s.io/vsphere-csi-driver/v3/pkg/syncer.podAdded
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/syncer/metadatasyncer.go:1853
sigs.k8s.io/vsphere-csi-driver/v3/pkg/syncer.InitMetadataSyncer.func7
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/pkg/syncer/metadatasyncer.go:530
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/client-go/tools/cache/controller.go:243
k8s.io/client-go/tools/cache.(*processorListener).run.func1
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/client-go/tools/cache/shared_informer.go:973
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:226
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:227
k8s.io/apimachinery/pkg/util/wait.JitterUntil
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:204
k8s.io/apimachinery/pkg/util/wait.Until
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:161
k8s.io/client-go/tools/cache.(*processorListener).run
    /go/src/github.com/kubernetes-sigs/vsphere-csi-driver/vendor/k8s.io/client-go/tools/cache/shared_informer.go:967
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1
```
Do we even need to watch for `podAdd` events in clusters where CSI migration is disabled?
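One way to stop the spam, as a rough sketch only: bail out of the migration-service lookup inside `podAdded` whenever migration can't apply. The flags and simplified types below are illustrative stand-ins, not the driver's actual fields:

```go
package main

import "log"

// pod is an illustrative stand-in; the real podAdded in
// pkg/syncer/metadatasyncer.go takes richer arguments.
type pod struct{ name string }

// In the real driver these would come from the feature-states ConfigMap
// and the vCenter configuration; hardcoded here for the sketch.
var (
	csiMigrationEnabled = false
	multiVCenter        = true
)

// podAdded sketches the proposed guard: skip the in-tree volume handling
// (and the initVolumeMigrationService call that produces the error above)
// when migration is off or the cluster spans multiple vCenters.
func podAdded(p *pod) {
	// ... existing CSI volume metadata handling would stay as-is ...

	if !csiMigrationEnabled || multiVCenter {
		// Nothing to migrate: only CSI volumes are possible here, so
		// there is no reason to touch the volume-migration service.
		return
	}
	log.Printf("handling in-tree volumes of pod %s via migration service", p.name)
}

func main() {
	podAdded(&pod{name: "demo"})
}
```

That way multi-VC clusters, where the migration service refuses to initialize anyway, would stop hitting the error path on every pod event.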
cc @divyenpatel
Also, I actually had `csi-migration: false` set in the feature states ConfigMap, but apparently this feature is hardcoded and can't be turned off.
So, I am thinking that if multiple vCenters are detected, we should simply disable the migration feature.
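Roughly what I mean, as a sketch with stand-in types (the real per-vCenter config lives in the driver's config package; the field and function names here are assumptions):

```go
package main

import "log"

// VirtualCenterConfig stands in for the driver's per-VC configuration.
type VirtualCenterConfig struct{ Host string }

// config stands in for the parsed driver configuration plus the parsed
// feature-states ConfigMap.
type config struct {
	VirtualCenter map[string]*VirtualCenterConfig
	FeatureStates map[string]string
}

// disableMigrationIfMultiVC sketches the proposal: when more than one
// vCenter is configured, force the csi-migration feature state off so
// the syncer never tries to initialize the unsupported migration service.
func disableMigrationIfMultiVC(cfg *config) {
	if len(cfg.VirtualCenter) > 1 {
		cfg.FeatureStates["csi-migration"] = "false"
		log.Println("multi-vCenter deployment detected; forcing csi-migration=false")
	}
}

func main() {
	cfg := &config{
		VirtualCenter: map[string]*VirtualCenterConfig{
			"vc1.example.com": {Host: "vc1.example.com"},
			"vc2.example.com": {Host: "vc2.example.com"},
		},
		FeatureStates: map[string]string{"csi-migration": "true"},
	}
	disableMigrationIfMultiVC(cfg)
	log.Printf("csi-migration=%s", cfg.FeatureStates["csi-migration"])
}
```

That would make startup consistent with the error the syncer already emits ("volume-migration feature is not supported on Multi-vCenter deployment"), instead of re-logging it on every pod event.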
cc @xing-yang