remove usage of flexvolume in kubeadm #2135

Open
neolit123 opened this issue May 12, 2020 · 21 comments
Assignees
Labels
area/ecosystem kind/deprecation Categorizes issue or PR as related to a feature/enhancement marked for deprecation. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.
Milestone

Comments

@neolit123
Member

neolit123 commented May 12, 2020

distroless effort:
kubernetes/kubernetes#70249

see this message from sig-storage https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kubernetes-dev/zC8jCLg142w/2P3BN5oTAgAJ

We are gauging the viability of deprecating the Flexvolume master API calls.
...
There is an effort underway to move the core k8s component images to distroless.

kube-apiserver and kube-scheduler have already moved to distroless, but the kube-controller-manager was blocked due to flexvolume. kube-proxy has yet to move as well.

currently kubeadm has logic to manage the flexvolume plugin directory for the kube-controller-manager static Pod:

  # under .spec.volumes:
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir

  # under the kube-controller-manager container's .volumeMounts:
  - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    name: flexvolume-dir

IIRC the above is currently GA and required for the KCM to run properly.
in case the KCM / kubelet deprecate and remove flexvolume support, kubeadm should follow.

upstream ticket:
kubernetes/kubernetes#98815
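
for reference, here is a minimal, illustrative Go sketch of how a hostPath volume/mount pair like the one above can be built with the k8s.io/api/core/v1 types. this is not kubeadm's actual implementation; the helper name flexVolumeDir and the program layout are made up for this example.

    // Illustrative sketch only, not kubeadm's actual code: building the
    // flexvolume-dir hostPath volume/mount pair with the core/v1 API types.
    // The helper name flexVolumeDir is a placeholder for this example.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    const flexVolumePath = "/usr/libexec/kubernetes/kubelet-plugins/volume/exec"

    // flexVolumeDir returns the volume for the Pod spec and the matching
    // mount for the kube-controller-manager container.
    func flexVolumeDir() (corev1.Volume, corev1.VolumeMount) {
        hostPathType := corev1.HostPathDirectoryOrCreate
        vol := corev1.Volume{
            Name: "flexvolume-dir",
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{
                    Path: flexVolumePath,
                    Type: &hostPathType,
                },
            },
        }
        mount := corev1.VolumeMount{
            Name:      "flexvolume-dir",
            MountPath: flexVolumePath,
        }
        return vol, mount
    }

    func main() {
        v, m := flexVolumeDir()
        fmt.Printf("volume %q -> %s\nmount %q at %s\n",
            v.Name, v.HostPath.Path, m.Name, m.MountPath)
    }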

@neolit123 neolit123 added area/ecosystem kind/deprecation Categorizes issue or PR as related to a feature/enhancement marked for deprecation. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels May 12, 2020
@neolit123 neolit123 added this to the Next milestone May 12, 2020
@neolit123 neolit123 self-assigned this May 12, 2020
@xlgao-zju

@neolit123 Since kubernetes/kubernetes#91329 has been merged, shall we remove the mount of flexvolume-dir now?

@neolit123
Member Author

so my understanding is that flex volume support does not work with the distroless image for the KCM that k8s ships by default now. however, someone might decide to build their own image and override the one that kubeadm uses from k8s.gcr.io.

this leads me to believe that instead of removing the kubeadm support for flex volume today, we should wait until flex volume is completely removed.

cc @dims please correct me if needed on this one.
cc @rosti

@dims
Member

dims commented Jun 11, 2020

yep well said @neolit123

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 9, 2020
@neolit123
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 9, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 8, 2020
@neolit123 neolit123 removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 8, 2020
@BenTheElder
Member

kube-proxy distroless would require us to do the iptables bits, so I'm not sure if anyone will pick that up, but it is maybe still a good idea.

flex volume seems problematic in general, because e.g. updating the userspace in these distroful images is also technically perhaps a breaking change...

But it still needs an official timeline.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 10, 2021
@fabriziopandini
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 10, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 8, 2021
@fabriziopandini
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 9, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 7, 2021
@neolit123
Member Author

neolit123 commented Nov 8, 2021 via email

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 8, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 6, 2022
@neolit123
Member Author

neolit123 commented Feb 7, 2022 via email

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 7, 2022
@BenTheElder
Member

Seems like we should just freeze this until there's a pending timeline for removal from kubernetes that kubeadm can follow?

@neolit123
Member Author

Yep, that is true.
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Feb 8, 2022
@pacoxu
Member

pacoxu commented Aug 25, 2022

Deprecation of FlexVolume
FlexVolume is deprecated. Out-of-tree CSI driver is the recommended way to write volume drivers in Kubernetes. See this doc for more information. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to CSI driver.

FlexVolume has been deprecated since v1.23.

But according to https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md#working-with-out-of-tree-volume-plugin-options

So you can continue to use it without worry of deprecation, but note that additional features (like topology, snapshots, etc.) will only be added to CSI not to FlexVolume.

FlexVolume is deprecated but will not be removed; it will still be maintained (with no new features).

@neolit123
Member Author

neolit123 commented Aug 25, 2022

we can remove the kubeadm integration, which will be fairly easy, but perhaps we should do it once k/k core removes it. if we remove it now, it's not clear how many kubeadm users we will break.

@pacoxu
Member

pacoxu commented Aug 25, 2022

Upgrade behavior should not be changed, so if a user upgrades their cluster, the cluster will not break.
Can we just change kubeadm init to not use flex volume? Or just keep it as is.

@neolit123
Member Author

neolit123 commented Aug 25, 2022

i think after a kubeadm upgrade, if the manifests no longer have the flex volume bits, an existing user of the deprecated flex volume support from core k8s will be broken.

for init it can be done, but we should tie it to the k8s core removal instead.
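
if removal at init time ever happens, a version gate is the obvious shape for it. a minimal, hypothetical Go sketch follows, assuming a removal version that upstream has not actually announced (the 1.99.0 cutoff below is invented for illustration), using the k8s.io/apimachinery/pkg/util/version helpers:

    // Hypothetical sketch: gate the flexvolume-dir volume/mount on the
    // target control-plane version, so "init" for newer releases drops it
    // while older releases keep it. The removal version is invented;
    // core k8s has not announced one.
    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/util/version"
    )

    // flexVolumeRemovalVersion is a placeholder for whatever release
    // actually removes flexvolume support from core k8s.
    var flexVolumeRemovalVersion = version.MustParseSemantic("1.99.0")

    // needsFlexVolumeDir reports whether a control plane at the given
    // version may still rely on the deprecated flexvolume plugin dir.
    func needsFlexVolumeDir(controlPlaneVersion string) (bool, error) {
        v, err := version.ParseSemantic(controlPlaneVersion)
        if err != nil {
            return false, err
        }
        return v.LessThan(flexVolumeRemovalVersion), nil
    }

    func main() {
        for _, s := range []string{"1.25.0", "1.99.0"} {
            keep, err := needsFlexVolumeDir(s)
            if err != nil {
                panic(err)
            }
            fmt.Printf("v%s: keep flexvolume-dir = %v\n", s, keep)
        }
    }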
