# KEP-3243: Update the design to mutate the label selector based on matchLabelKeys at api-server instead of the scheduler handling it #5033
base: master
## Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: mochizuki875. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing…
/cc @sanposhiho

/cc @alculquicondor @wojtek-t (or @alculquicondor) do we need PRR review in this case? (I suppose yes?)

/retitle KEP-3243: Update the design to mutate the label selector based on matchLabelKeys at api-server instead of the scheduler handling it

cc @dom4ha
Update the content based on the conclusion kubernetes/kubernetes#129480 (comment)
> existing pods over which spreading will be calculated.

```diff
-A new field named `MatchLabelKeys` will be introduced to `TopologySpreadConstraint`:
+A new optional field named `MatchLabelKeys` will be introduced to `TopologySpreadConstraint`.
```
We should keep this part.
Suggested change:

```diff
 A new optional field named `MatchLabelKeys` will be introduced to `TopologySpreadConstraint`.
+Currently, when scheduling a pod, the `LabelSelector` defined in the pod is used
+to identify the group of pods over which spreading will be calculated.
+`MatchLabelKeys` adds another constraint to how this group of pods is identified
```
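For readers skimming the thread, a minimal manifest using the field might look like the sketch below (hypothetical names and values; `pod-template-hash` is the label a Deployment's ReplicaSet stamps on its pods, which is the motivating use case for `matchLabelKeys`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web
    pod-template-hash: abc123
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
    # Only pods from the same ReplicaSet (same pod-template-hash value)
    # are counted when computing the spreading skew.
    matchLabelKeys:
    - pod-template-hash
```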
You need to update other sections such as Test Plan and other PRR questions.
Also, please add the current implementation to the Alternatives section and describe why we decided to move to a new approach.
I appreciate your comment.
Force-pushed from 71f1c8c to c6e0a76.
> Use `pod.generateName` to distinguish new/old pods that belong to the
> revisions of the same workload in scheduler plugin. It's decided not to
> support because of the following reason: scheduler needs to ensure universal
> and scheduler plugin shouldn't have special treatment for any labels/fields.

### remove MatchLabelKeys implementation from the scheduler plugin
Suggested change:

```diff
-### remove MatchLabelKeys implementation from the scheduler plugin
+### implement MatchLabelKeys in only either the scheduler plugin or kube-apiserver
```

Then, briefly mention why we have to implement it in kube-apiserver too.
> kube-scheduler will also be aware of `matchLabelKeys` and gracefully handle the same labels.
> This is for the Cluster-level default constraints by
> `matchLabelKeys: ["pod-template-hash"]`. ([#129198](https://github.com/kubernetes/kubernetes/issues/129198))
Suggested change:

```diff
-kube-scheduler will also be aware of `matchLabelKeys` and gracefully handle the same labels.
-This is for the Cluster-level default constraints by
-`matchLabelKeys: ["pod-template-hash"]`.([#129198](https://github.com/kubernetes/kubernetes/issues/129198))
+Also, kube-scheduler handles `matchLabelKeys` if the cluster-level default constraints are configured with `matchLabelKeys`.
```
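For reference, cluster-level default constraints are set through the scheduler configuration. A sketch of what such a configuration might look like (assuming `matchLabelKeys` is accepted inside `defaultConstraints`, which is the scenario under discussion here; this is not a confirmed final shape):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- pluginConfig:
  - name: PodTopologySpread
    args:
      defaultConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        # Default constraint applied to pods without explicit constraints;
        # kube-apiserver never sees it, so kube-scheduler must resolve
        # matchLabelKeys itself in this case.
        matchLabelKeys:
        - pod-template-hash
      defaultingType: List
```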
```diff
-disabled, the field `matchLabelKeys` is preserved if it was already set in the
-persisted Pod object, otherwise it is silently dropped; moreover, kube-scheduler
-will ignore the field and continue to behave as before.
+disabled, the field `matchLabelKeys` and corresponding`labelSelector` are preserved
```
Suggested change:

```diff
-disabled, the field `matchLabelKeys` and corresponding`labelSelector` are preserved
+disabled, the field `matchLabelKeys` and corresponding `labelSelector` are preserved
```
> creation will be rejected by kube-apiserver; moreover, kube-scheduler will ignore the
> field and continue to behave as before.
kube-scheduler cannot determine which label selector(s) were generated by matchLabelKeys at kube-apiserver, and hence it couldn't ignore matchLabelKeys even after the downgrade. The cluster-level default constraints configuration is the exception, though.
Suggested change:

```diff
-creation will be rejected by kube-apiserver; moreover, kube-scheduler will ignore the
-field and continue to behave as before.
+creation will be rejected by kube-apiserver.
+Also, kube-scheduler will ignore matchLabelKeys in the cluster-level default constraints configuration.
```
```diff
-In the event of a downgrade, kube-scheduler will ignore `MatchLabelKeys` even if it was set.
+In the event of a downgrade, kube-apiserver will reject pod creation with `matchLabelKeys` in `TopologySpreadConstraint`.
+But, regarding existing pods, we leave `matchLabelKeys` and generated `LabelSelector` even after downgraded.
+kube-scheduler will ignore `MatchLabelKeys` even if it was set.
```
ditto
Suggested change:

```diff
-kube-scheduler will ignore `MatchLabelKeys` even if it was set.
+kube-scheduler will ignore `MatchLabelKeys` if it was set in the cluster-level default constraints configuration.
```
> disabling the feature gate, however kube-scheduler will not take the MatchLabelKeys
> field into account.
Suggested change:

```diff
-disabling the feature gate, however kube-scheduler will not take the MatchLabelKeys
-field into account.
+disabling the feature gate.
```
> kube-scheduler also looks up the label values from the pod and checks if those labels
> are included in `LabelSelector`. If not, kube-scheduler will take those labels and AND
> with `LabelSelector`.
Suggested change:

```diff
-kube-scheduler also looks up the label values from the pod and checks if those labels
-are included in `LabelSelector`. If not, kube-scheduler will take those labels and AND
-with `LabelSelector`.
+kube-scheduler also handles matchLabelKeys if the cluster-level default constraints has it.
```
> kube-scheduler will also look up the label values from the pod and check if those
> labels are included in `LabelSelector`. If not, kube-scheduler will take those labels
> and AND with `LabelSelector` to identify the group of existing pods over which the
> spreading skew will be calculated.
Suggested change:

```diff
-kube-scheduler will also look up the label values from the pod and check if those
-labels are included in `LabelSelector`. If not, kube-scheduler will take those labels
-and AND with `LabelSelector` to identify the group of existing pods over which the
-spreading skew will be calculated.
+kube-scheduler will also handle it if the cluster-level default constraints have the one with `MatchLabelKeys`.
```
> which the spreading skew will be calculated.
> `TopologySpreadConstraint` which represent a set of label keys only.
> kube-apiserver will use those keys to look up label values from the incoming pod
> and those labels are merged to `LabelSelector`.
Suggested change:

```diff
-and those labels are merged to `LabelSelector`.
+and those key-value labels are ANDed with `LabelSelector` to identify the group of existing pods over
+which the spreading skew will be calculated.
```
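As a rough illustration of the ANDing described above, here is a minimal Go sketch (hypothetical, simplified types for this thread only, not the actual kube-apiserver code, which works on `metav1.LabelSelectorRequirement`): each key in `matchLabelKeys` is resolved against the incoming pod's labels and appended to the selector as an extra `In` requirement.

```go
package main

import "fmt"

// Requirement is a simplified stand-in for a labelSelector matchExpression.
type Requirement struct {
	Key      string
	Operator string
	Values   []string
}

// mergeMatchLabelKeys sketches the proposed kube-apiserver behavior: for each
// key listed in matchLabelKeys, read the value from the incoming pod's labels
// and AND a "key In [value]" requirement into the constraint's labelSelector.
// Keys absent from the pod's labels are skipped.
func mergeMatchLabelKeys(podLabels map[string]string, matchLabelKeys []string, selector []Requirement) []Requirement {
	merged := append([]Requirement{}, selector...)
	for _, key := range matchLabelKeys {
		if value, ok := podLabels[key]; ok {
			merged = append(merged, Requirement{Key: key, Operator: "In", Values: []string{value}})
		}
	}
	return merged
}

func main() {
	podLabels := map[string]string{"app": "web", "pod-template-hash": "abc123"}
	selector := []Requirement{{Key: "app", Operator: "In", Values: []string{"web"}}}
	for _, r := range mergeMatchLabelKeys(podLabels, []string{"pod-template-hash"}, selector) {
		fmt.Printf("%s %s %v\n", r.Key, r.Operator, r.Values)
	}
}
```

Under this sketch, the persisted selector matches only pods that share both the original selector and the same `pod-template-hash` value, which is why a downgraded kube-scheduler cannot tell which requirements came from `matchLabelKeys`.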
/assign @wojtek-t for the PRR part of the review. Please assign another person if needed.
Queued - although I will wait for SIG approval to happen first.