refactor: reduce memory usage #596
Conversation
Force-pushed from b4da28c to 3d41704
I've rebased against the latest changes merged into the `main` branch.
LGTM
LGTM! Top!
Force-pushed from 3d41704 to 9ed2f92
I've merged the changes into the `main` branch of policy-evaluator. I would prefer to not tag a new version of policy-evaluator right now. @fabriziosestito is going to open a new PR against policy-evaluator.
Moving to blocked, waiting for the next iteration of policy-evaluator that is lock-free before merging this PR.
This allows a better organization of the code.
Signed-off-by: Flavio Castelli <[email protected]>
This allows us to uniquely identify a precompiled wasm module.
Signed-off-by: Flavio Castelli <[email protected]>
This is also going to be useful to perform the k6 tests.
Signed-off-by: Flavio Castelli <[email protected]>
Reduce the memory consumption of Policy Server when multiple instances of the same Wasm module are loaded. Thanks to this change, a worker will have only one instance of `PolicyEvaluator` (hence of the wasmtime stack) per unique type of module. Practically speaking, if a user has the `apparmor` policy deployed 5 times (different names, settings, ...), only one instance of `PolicyEvaluator` will be allocated for it.

Note: the optimization works at the worker level, meaning that workers do NOT share these `PolicyEvaluator` instances between themselves.

This commit helps to address issue kubewarden/kubewarden-controller#528
Signed-off-by: Flavio Castelli <[email protected]>
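To illustrate the per-worker deduplication described in this commit, here is a minimal Rust sketch. The `WorkerEvaluators` type and the digest-based key are hypothetical stand-ins, not actual policy-server types: the point is only that each worker keys its evaluators by a unique module identifier, so five deployments of the same policy map to a single `PolicyEvaluator`.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the real `PolicyEvaluator` (wasmtime stack, host callbacks, ...).
struct PolicyEvaluator;

/// One map per worker: the key uniquely identifies a Wasm module
/// (e.g. its digest), the value is the single evaluator shared by
/// every policy backed by that module inside this worker.
#[derive(Default)]
struct WorkerEvaluators {
    by_module_digest: HashMap<String, PolicyEvaluator>,
}

impl WorkerEvaluators {
    /// Return the evaluator for `digest`, building it only the first
    /// time this worker sees the module.
    fn get_or_create<F>(&mut self, digest: &str, build: F) -> &mut PolicyEvaluator
    where
        F: FnOnce() -> PolicyEvaluator,
    {
        self.by_module_digest
            .entry(digest.to_string())
            .or_insert_with(build)
    }
}

fn main() {
    let mut worker = WorkerEvaluators::default();
    // Five deployments of the same policy resolve to one evaluator.
    for _ in 0..5 {
        worker.get_or_create("sha256:deadbeef", || PolicyEvaluator);
    }
    assert_eq!(worker.by_module_digest.len(), 1);
}
```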
Force-pushed from d912d22 to da81dc7
@fabriziosestito, @jvanz, @viccuad: I've rebased my changes against the `main` branch.
LGTM, just a couple of comments added
```yaml
overwrite: false
supplemental_groups:
  rule: RunAsAny
  overwrite: false
```
why is this needed? we are going to remove venom tests soonish
I'm using them inside of k6 tests. This file should definitely be moved to the load-testing repo once we remove venom
Force-pushed from 94873a1 to 5bf6b3e
This commit significantly changes the internal architecture of Policy Server workers. The code takes advantage of the `PolicyEvaluatorPre` structure defined by the latest `policy-evaluator` crate. `PolicyEvaluatorPre` has different implementations, one per type of policy we support (waPC, Rego, WASI). Under the hood it holds a `wasmtime::InstancePre` instance that is used to quickly spawn a WebAssembly environment.

The `PolicyEvaluatorPre` instances are managed by the `EvaluationEnvironment` structure. `EvaluationEnvironment` takes care of deduplicating the WebAssembly modules defined inside of the `policies.yml` file, ensuring only one `PolicyEvaluatorPre` is created per policy.

The `EvaluationEnvironment` struct provides `validate` and `validate_settings` methods. These methods create a fresh `PolicyEvaluator` instance by rehydrating its `PolicyEvaluatorPre` instance. Once the WebAssembly evaluation is done, the `PolicyEvaluator` instance is discarded. This is a big change compared to the previous approach, where each WebAssembly instance was a long-lived object.

This new architecture ensures that each evaluation is done inside of a freshly created WebAssembly environment, which guarantees:

- Policies leaking memory have a smaller impact on the memory consumption of the Policy Server process
- Policy evaluation always starts with a clean slate, which helps prevent bugs caused by policies that are not written to be stateless

In terms of memory optimizations, the `EvaluationEnvironment` is now an immutable object. That allows us to have one single instance of `EvaluationEnvironment` shared across all the worker threads, without using mutexes or locks. This significantly reduces the amount of memory required by the Policy Server instance, without impacting system performance.

As an added bonus, a lot of code has been simplified during this transition. More code can be removed by future PRs.

Signed-off-by: Flavio Castelli <[email protected]>
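The rehydration idea built on `wasmtime::InstancePre` can be sketched as follows. This is a minimal illustration, not the actual `policy-evaluator` code: the `PrecompiledPolicy` type, the `_start` export name, and the error handling are assumptions, and exact wasmtime signatures vary by version. The expensive work (compiling the module and wiring the linker) happens once; every evaluation then instantiates a fresh, throw-away environment.

```rust
use anyhow::Result;
use wasmtime::{Engine, InstancePre, Linker, Module, Store};

struct PrecompiledPolicy {
    engine: Engine,
    instance_pre: InstancePre<()>,
}

impl PrecompiledPolicy {
    fn new(wasm: &[u8]) -> Result<Self> {
        let engine = Engine::default();
        let module = Module::new(&engine, wasm)?;
        let linker: Linker<()> = Linker::new(&engine);
        // Host functions would be registered on the linker here.
        let instance_pre = linker.instantiate_pre(&module)?;
        Ok(Self { engine, instance_pre })
    }

    /// Every call gets a brand new `Store` and `Instance`; both are
    /// dropped when the evaluation is over, so nothing persists
    /// between requests.
    fn evaluate(&self) -> Result<()> {
        let mut store = Store::new(&self.engine, ());
        let instance = self.instance_pre.instantiate(&mut store)?;
        // "_start" is just an assumed export name for this sketch.
        let run = instance.get_typed_func::<(), ()>(&mut store, "_start")?;
        run.call(&mut store, ())?;
        Ok(())
    }
}

fn main() -> Result<()> {
    // A trivial module written in WAT; wasmtime's default `wat`
    // feature compiles it to Wasm transparently.
    let policy = PrecompiledPolicy::new(br#"(module (func (export "_start")))"#)?;
    // Two evaluations, each inside a freshly created environment.
    policy.evaluate()?;
    policy.evaluate()?;
    Ok(())
}
```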
Introduce custom error types for `EvaluationEnvironment` and make use of them to properly identify policies that are not found. This is required to properly return a 404 response inside of the UI. This regression was detected by the integration tests.
Signed-off-by: Flavio Castelli <[email protected]>
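A minimal sketch of what such custom error types can look like; the `EvaluationError` enum and the `status_code_for` helper below are illustrative assumptions, not the actual policy-server definitions. The point is that a dedicated "policy not found" variant lets the HTTP layer answer with a 404 instead of a generic error.

```rust
use std::fmt;

/// Illustrative error type: one variant per failure class that the
/// HTTP layer needs to distinguish.
#[derive(Debug)]
enum EvaluationError {
    PolicyNotFound(String),
    Internal(String),
}

impl fmt::Display for EvaluationError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            EvaluationError::PolicyNotFound(name) => write!(f, "policy '{name}' not found"),
            EvaluationError::Internal(msg) => write!(f, "internal error: {msg}"),
        }
    }
}

impl std::error::Error for EvaluationError {}

/// Map the error to an HTTP status code (hypothetical helper).
fn status_code_for(err: &EvaluationError) -> u16 {
    match err {
        EvaluationError::PolicyNotFound(_) => 404,
        EvaluationError::Internal(_) => 500,
    }
}

fn main() {
    let err = EvaluationError::PolicyNotFound("psp-apparmor".to_string());
    println!("{err} -> HTTP {}", status_code_for(&err));
}
```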
Force-pushed from f943c78 to 09199c7
@fabriziosestito, @jvanz, @viccuad: you can take another look at the PR. I've applied the changes you requested with the latest commits.
Force-pushed from 09199c7 to 0850f00
🚀
Before merging this PR I want to summarize what happened.

Optimization number 1

The PR started with one optimization technique: ensure each worker of Policy Server (where a worker is a thread dedicated to Wasm evaluation) has just one instance of `PolicyEvaluator` per unique Wasm module.

Optimization number 2

This is an iteration over the previous one. We drastically changed the way we manage WebAssembly runtimes. We rely on the `PolicyEvaluatorPre` structure provided by the latest `policy-evaluator` crate. In addition to that, we changed the evaluation mode to be "on demand". That means that, once an evaluation is done, the `PolicyEvaluator` instance is discarded.

This architecture has two main advantages:

- Policies leaking memory have a smaller impact on the memory consumption of the Policy Server process
- Each policy evaluation starts with a clean slate, which prevents bugs caused by policies that are not written to be stateless
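Both points build on the fact that the heavy, precompiled state is immutable and shared across worker threads rather than copied. A minimal sketch of that sharing pattern, under the assumption of an immutable environment like the one described in the commits above (the `EvaluationEnvironment` shown here is a simplified stand-in, not the real struct): an `Arc` clone per worker is enough, and no `Mutex` is involved.

```rust
use std::collections::HashMap;
use std::sync::Arc;
use std::thread;

// Stand-in for the real, read-only environment built at start-up.
struct EvaluationEnvironment {
    // digest -> precompiled module bytes (illustrative contents)
    precompiled_by_digest: HashMap<String, Vec<u8>>,
}

impl EvaluationEnvironment {
    fn validate(&self, digest: &str) -> bool {
        // Read-only lookup: no interior mutability, hence no locks needed.
        self.precompiled_by_digest.contains_key(digest)
    }
}

fn main() {
    let env = Arc::new(EvaluationEnvironment {
        precompiled_by_digest: HashMap::from([("sha256:abc".to_string(), vec![])]),
    });

    // Every worker thread gets a cheap Arc clone of the same environment.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let env = Arc::clone(&env);
            thread::spawn(move || env.validate("sha256:abc"))
        })
        .collect();

    for h in handles {
        assert!(h.join().unwrap());
    }
}
```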
Benchmarks

I've updated the table and the graph shown when the PR was opened to also include the numbers of the second, and final, optimization. The benchmarks have been done on the same machine, with the same set of policies and the same measurement process.
This is the updated chart:

As you can see, the memory consumption has been significantly reduced. Moreover, the memory usage stays constant regardless of the number of workers instantiated.
Adapt the code to the new API exposed by the latest version of policy-evaluator. On top of that, add some extra unit tests.
Signed-off-by: Flavio Castelli <[email protected]>
Force-pushed from 0850f00 to ad951ab
Reduce the memory consumption of Policy Server when multiple instances of the same Wasm module are loaded.

Thanks to this change, a worker will have only one instance of `PolicyEvaluator` (hence of the wasmtime stack) per unique type of module. Practically speaking, if a user has the `apparmor` policy deployed 5 times (different names, settings, ...), only one instance of `PolicyEvaluator` will be allocated for it.

Warning: the optimization works at the worker level, meaning that workers do NOT share these `PolicyEvaluator` instances between themselves.

This commit helps to address issue kubewarden/kubewarden-controller#528
Note 1: this depends on kubewarden/policy-evaluator#390
Benchmark data
I've collected data about the amount of memory consumed by one instance of Policy Server. The test has been done with the following policies loaded.
policies.yml
This configuration file will load 13 policies:
Measurement process
The Policy Server has been started with an increasing number of workers: I started from 1 worker and went up to 8.
For each worker value, 10 samples have been taken.
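For illustration only, here is a minimal sketch of what such a sampling loop could look like. This is an assumption, not the actual tooling used for these benchmarks: it simply reads `VmRSS` from `/proc/<pid>/status` on Linux for a running policy-server process, once per second, ten times.

```rust
use std::{env, fs, thread, time::Duration};

fn main() {
    // The target PID is passed on the command line (hypothetical usage).
    let pid: u32 = env::args()
        .nth(1)
        .expect("usage: sample-rss <pid>")
        .parse()
        .expect("pid must be a number");

    for sample in 1..=10 {
        let status = fs::read_to_string(format!("/proc/{pid}/status"))
            .expect("process must be running");
        // VmRSS is the resident set size reported by the kernel, in kB.
        let rss_kb = status
            .lines()
            .find(|l| l.starts_with("VmRSS:"))
            .and_then(|l| l.split_whitespace().nth(1))
            .unwrap_or("?");
        println!("sample {sample}: VmRSS = {rss_kb} kB");
        thread::sleep(Duration::from_secs(1));
    }
}
```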
The table below shows the data collected.
This is a chart that shows the overall memory reduction brought by this change: