Pagination from cache KEP #5017
base: master

Conversation
> As some pagination requests will still be delegated to etcd, we will monitor the
> success rate by measuring the pagination cache hit vs miss ratio.
>
> Consideration: Should we start respecting the limit parameter?
Not sure I understand - are we not respecting the limit parameter in the current iteration?
Currently the API server doesn't respect limit when serving RV="0": https://github.com/kubernetes/kubernetes/blob/6746df77f2376c6bc1fd0de767d2a94e6bd6cec1/staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher.go#L806-L818
I think we should consider re-enabling limit for consistency, however we need to better understand the consequences: the impact on client/server when we cannot serve pagination from cache, for example with an L7 LB or pagination taking more than 75s.
I'm not worried about clients; a user setting limit should already be prepared to handle pagination, as it is required when not setting RV.
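For context, a minimal sketch of the behavior being discussed (hypothetical names, not the actual cacher code): when RV="0" the watch cache serves the full list and the limit parameter is effectively ignored, while other requests paginate:

```go
package main

import "fmt"

// listOptions is a simplified stand-in for the apiserver's list options.
type listOptions struct {
	ResourceVersion string
	Limit           int64
}

// serveList sketches the current behavior: for ResourceVersion="0" the
// watch cache returns everything and ignores Limit; otherwise Limit applies.
func serveList(items []string, opts listOptions) []string {
	if opts.ResourceVersion == "0" || opts.Limit <= 0 || opts.Limit >= int64(len(items)) {
		return items // limit is not respected for RV="0"
	}
	return items[:opts.Limit]
}

func main() {
	items := []string{"pod-a", "pod-b", "pod-c"}
	fmt.Println(len(serveList(items, listOptions{ResourceVersion: "0", Limit: 1}))) // 3: limit ignored
	fmt.Println(len(serveList(items, listOptions{ResourceVersion: "", Limit: 1})))  // 1: limit applied
}
```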
I'm actually also worried about clients - I think I've seen cases of people doing that and relying on the lack of pagination for RV=0.
I'm not saying it's a hard no, but we need to figure out the story here.
That said, I would put it explicitly out of scope for this KEP and add it explicitly as future work.
Sounds good.
> For setups with an L4 load balancer, the apiserver can be configured with
> GOAWAY, which requests that clients reconnect periodically; the per-request
> probability should be configured at around 0.1%.
Is there a reason for 0.1% here?
0.1% is the recommended configuration for GOAWAY, see kubernetes/kubernetes#88567.
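Concretely, this maps to the `--goaway-chance` kube-apiserver flag (a per-request probability, so 0.1% = 0.001):

```shell
# Each non-exempt HTTP/2 request has a 0.1% chance of receiving a GOAWAY
# frame, prompting the client to reconnect and be re-balanced by the L4 LB.
kube-apiserver \
  --goaway-chance=0.001 \
  # ... remaining flags elided
```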
> For an L7 load balancer the default algorithm is usually round-robin. For most LBs
Trying to better understand this: is the worst case here as follows (assume 3 API servers A, B, C)?
- Client hits API server A: assuming the RV is cached, a snapshot is created on receiving a LIST request with a limit parameter set
- Client hits API server B: no snapshot present, we delegate to etcd
- Client hits API server C: no snapshot present, we delegate to etcd
So the performance degenerates to the current situation without cached pagination, with a slight improvement for (1)? And if (1) also delegates to etcd in case the RV isn't cached, then perf degenerates to the current scenario of no cached pagination?
This ^ is assuming all are on a version that has support for pagination from the cache. If there is one server on a minor version which does not have support, my understanding is that it would again be delegated to etcd.
Right, there is no regression, assuming that:
- We will delegate continue requests that we don't have cached responses for to etcd.
- We will not change the fact that the API server doesn't respect limit for RV="0".
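A minimal sketch of the dispatch logic implied by these two assumptions (all names hypothetical, not the actual implementation): serve the next page from a local snapshot when one exists for the resourceVersion, otherwise delegate to etcd, which preserves today's behavior.

```go
package main

import "fmt"

// snapshotStore is a hypothetical view of per-apiserver cached snapshots,
// keyed by the resourceVersion extracted from the continue token.
type snapshotStore map[uint64][]string

// serveContinue sketches the proposed dispatch: serve the page from a local
// snapshot when one exists for this resourceVersion, otherwise delegate the
// request to etcd (no regression vs. today's behavior).
func serveContinue(s snapshotStore, rv uint64, delegate func() []string) (items []string, fromCache bool) {
	if snap, ok := s[rv]; ok {
		return snap, true
	}
	return delegate(), false
}

func main() {
	store := snapshotStore{100: {"pod-a", "pod-b"}}
	etcdList := func() []string { return []string{"pod-a", "pod-b"} }

	_, hit := serveContinue(store, 100, etcdList) // snapshot present
	fmt.Println(hit)                              // true
	_, hit = serveContinue(store, 200, etcdList)  // snapshot missing -> etcd
	fmt.Println(hit)                              // false
}
```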
Signed-off-by: Madhav Jivrajani <[email protected]>
kep-4988: flesh out cached pagination procedure
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: serathius The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
> - Since resourceVersions provide a global logical clock sequencing all events in the cluster, a snapshot
>   of the watchCache for this resourceVersion is retrieved using the resourceVersion as the key.
> - The corresponding snapshot may not be present in the following 2 scenarios at an API server:
>   - Snapshot has been cleaned up due to the 75s TTL (see below).
nit: we switched that recently - 75s is still the default, but it now depends on request timeouts
Right
> #### Memory overhead
>
> No, the B-tree only stores pointers to the actual objects, not the objects themselves.
Well - there is some overhead (as you also write below) - you just claim it's small (can we somehow quantify small?)
> For an L7 load balancer the default algorithm is usually round-robin. For most LBs
> it should be possible to switch the algorithm to one based on source IP hash.
> Even if that is not possible, stored snapshots will simply never be used and users
> will not be able to benefit from the feature.
I'm not sure we can expect providers to change their LB configuration...
My main point here was that we expect only minimal overhead, while users of an L7 LB can opt in via a common configuration option.
I ran the scalability tests to measure the overhead of cloning. Scalability tests are a good fit as they use neither pagination nor exact requests. I used kubernetes/kubernetes#126855, which clones the storage on each request. The results are good; the overhead estimates below are based on profiles collected during the scalability tests.
The overhead is small enough that it is within the normal variance of memory usage during the test. There are some noticeable increases in request latency, however I think they are still far from the SLO and could be due to high variance in the results.
If we account for the high variance of latency in scalability tests and look at profile differences only, we can estimate the expected overhead of keeping all store snapshots in the watch cache to be below 2% of memory.
Are you looking at LoadResponsiveness_Prometheus or LoadResponsiveness_PrometheusSimple for latencies? https://perf-dash.k8s.io/#/?jobname=gce-5000Nodes&metriccategoryname=APIServer&metricname=LoadResponsiveness_Prometheus&Resource=pods&Scope=resource&Subresource=&Verb=DELETE
I looked at the
I would focus on PrometheusSimple as something that is much more predictable/repeatable.
> B-tree snapshots to serve paginated lists.
>
> Mechanism:
> 1. **Snapshot Creation:** When a paginated list request (with a limit parameter
In a HA configuration where there are multiple apiservers and client requests are load balanced across those apiservers, is the idea that each apiserver creates the snapshot on the first paginated request it receives, even if the request is for a subsequent page?
Not really - if I get only a subsequent request for the n-th page, I simply forward it to etcd...
... unless we go with the approach I'm proposing instead: #5017 (comment)
> 4. **Snapshot Cleanup:** Snapshots will be subject to a Time-To-Live (TTL)
>    mechanism. We will reuse the existing watch event cleanup logic, which has a
>    75s TTL. This ensures that snapshots don't accumulate indefinitely.
If a snapshot is missing for a request (either cleaned up, or otherwise), is it recreated or does the request fail?
Based on the sentence below, perhaps it falls through to etcd and we get whatever etcd does with it?
As David wrote.
> ### Non-Goals
>
> - Serve `resourceVersion="N"` request from watch cache
So I'm clear, this means that a paginated list from RV=N (which is valid I think based on docs: https://kubernetes.io/docs/reference/using-api/api-concepts/#semantics-for-get-and-list ) will not be supported?
Not initially - depending on the path we take here, if we take #5017 (comment) it will be easily extendable to support it later.
We just wanted to reduce the scope initially, but if you prefer to have it supported from the beginning, we can change it.
> arrives, the API server will:
>    - Extract the resourceVersion from the continue token.
>    - Since resourceVersions provide a global logical clock sequencing all events in the cluster, a snapshot
>      of the watchCache for this resourceVersion is retrieved using the resourceVersion as the key.
if the request with the continue token goes to a different kube-apiserver than the initial list, how will this lookup succeed?
it's forwarded to etcd... unless we go with the approach that I'm proposing: #5017 (comment)
> - `k8s/apiserver/pkg/storage/cache`: `2024-12-12` - `<test coverage>`
>
> ##### Integration tests
let's start a list of must-have integration tests. I definitely want to see the handling of multiple kube-apiservers where the initial list and the continue list go to different kube-apiservers and we correctly fall back to etcd and the request still functions.
> [API call latency SLI](https://github.com/kubernetes/community/blob/master/sig-scalability/slos/api_call_latency.md)
>
> ###### Are there any missing metrics that would be useful to have to improve observability of this feature?
Given the importance of cache misses for continue tokens, I'd like to have those metrics available in alpha to inform going to beta.
> No
>
> ### Troubleshooting
How can we check in the field whether the response from the cache exactly matches the response from etcd?
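One field-debugging approach (a hypothetical helper, not an existing tool): list at the same resourceVersion from both the cache path and etcd (e.g. via `resourceVersionMatch=Exact`) and diff the per-item keys and resourceVersions:

```go
package main

import "fmt"

// listsMatch is a hypothetical field-debugging helper: given the item
// key@resourceVersion pairs from a cache-served list and an etcd-served list
// at the same resourceVersion, report the first divergence.
func listsMatch(cache, etcd []string) (bool, string) {
	if len(cache) != len(etcd) {
		return false, fmt.Sprintf("length mismatch: cache=%d etcd=%d", len(cache), len(etcd))
	}
	for i := range cache {
		if cache[i] != etcd[i] {
			return false, fmt.Sprintf("item %d differs: cache=%q etcd=%q", i, cache[i], etcd[i])
		}
	}
	return true, ""
}

func main() {
	ok, _ := listsMatch([]string{"pod-a@10", "pod-b@12"}, []string{"pod-a@10", "pod-b@12"})
	fmt.Println(ok) // true: cache and etcd agree
	ok, diff := listsMatch([]string{"pod-a@10"}, []string{"pod-a@10", "pod-b@12"})
	fmt.Println(ok, diff) // false: cache is missing an item
}
```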
Create first draft of #4988 as provisional.
Draft PR for context kubernetes/kubernetes#128951
/cc @wojtek-t @deads2k @MadhavJivrajani @jpbetz