
Move performance testing YAML from dotnet/runtime to dotnet/performance #4639

Open · wants to merge 44 commits into main
Conversation

@caaavik-msft (Contributor) commented Jan 15, 2025

This PR implements the dotnet/performance portion of the work to move the performance testing YAML to the dotnet/performance repository. You can find the corresponding change for the dotnet/runtime repository here: dotnet/runtime#111454.

Summary

The goal of this work is to move as much of the CI logic as possible out of the dotnet/runtime repository and into the dotnet/performance repository. The reason we want to make this change is to decouple what is being tested from the code that does the testing.

Several times over the last few years, a bug in the performance testing logic caused invalid or skipped data, or we had to make a breaking change. Because this logic lived in the dotnet/runtime repository, we were unable to re-run performance tests from before the bug was present or before the breaking change was made. With the changes from this PR, if a bug ever causes invalid or skipped data, we can make the fix in the dotnet/performance repo, re-run the performance tests with that fix, and correct any data issues.

Implementation

It is not practical to move everything out of the dotnet/runtime repository as there are some steps of our performance tests that may be strongly coupled to the version of the runtime. In particular, these would be any jobs that involve building the runtime or building test applications. To handle this, we have made extensive use of cross-repository template references.

If you look at the dotnet/runtime portion of this PR, you will see that eng/pipelines/coreclr/perf.yml will include the following lines:

resources:
  repositories:
    - repository: performance
      type: git
      name: internal/dotnet-performance

- template: /eng/pipelines/runtime-perf-jobs.yml@performance

There is a repository resource named performance which describes where the performance repository is located; for more information about repository resources, see the documentation here. We can then reference a template from the performance repository by appending @performance to the template path. When manually running the pipeline in Azure DevOps, under "Advanced options" there is a Resources section which lets you choose the branch/commit of the performance repository to run against, which is useful when testing changes that span both the runtime and performance repositories.
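Azure Pipelines also allows pinning a repository resource to a specific ref. A minimal sketch of the resource definition above with an explicit pin (the `ref` value here is an illustrative assumption, not part of this PR):

```yaml
resources:
  repositories:
    - repository: performance        # alias used in "@performance" template references
      type: git
      name: internal/dotnet-performance
      ref: refs/heads/main           # optional pin; can still be overridden at
                                     # queue time under "Advanced options" > Resources
```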

If you look at /eng/pipelines/runtime-perf-jobs.yml in this PR, you can see the following line:

- template: /eng/pipelines/coreclr/templates/perf-build-jobs.yml@${{ parameters.runtimeRepoAlias }}

This line is how the performance repository references back into the runtime repository to call the perf-build-jobs.yml template inside it. The dotnet/runtime PR sets runtimeRepoAlias to self, which means it references the exact version of the runtime repository that called it. In a future PR, we will be able to add an additional pipeline to the dotnet/performance repository which has a runtime repository resource defined, so that we can verify that a PR isn't going to break the runtime performance tests.
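For illustration, the alias plumbing in /eng/pipelines/runtime-perf-jobs.yml might look like the following sketch (the parameter name comes from the PR; the type and default shown are assumptions):

```yaml
parameters:
  - name: runtimeRepoAlias
    type: string
    default: self   # 'self' = the repository whose pipeline was queued

jobs:
  # The template path resolves against whichever repository resource the
  # caller named, e.g. a 'runtime' resource in a dotnet/performance pipeline.
  - template: /eng/pipelines/coreclr/templates/perf-build-jobs.yml@${{ parameters.runtimeRepoAlias }}
```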

YAML Convergence

Before this PR, there were many duplicated but slightly differing YAML files that did similar things. As part of this PR we were able to unify this duplicated logic so that everything is defined only once and changes don't have to be made across many files. In particular, the following two files (both in the /eng/pipelines/templates/ directory) are worth paying close attention to:

  • run-performance-job.yml: Defines a job that clones the runtime and performance repositories, runs the performance job python script, and sends the helix job. Every performance test will go through this YAML file including those that run against the SDK or a build of the runtime.
  • runtime-perf-job.yml: A wrapper around run-performance-job.yml which sets up all the necessary parameters and additional steps for running performance tests against a build of the runtime. Most of this is just downloading the build artifacts and arranging them into the correct directories so that they can be used by the python scripts.
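As a rough sketch of the layering described above (all parameter and artifact names here are hypothetical, not the actual template contract):

```yaml
# runtime-perf-job.yml (sketch): wraps run-performance-job.yml, adding the
# steps that download runtime build artifacts into the directory layout the
# python scripts expect.
jobs:
  - template: /eng/pipelines/templates/run-performance-job.yml
    parameters:
      jobName: perf_coreclr_linux_x64         # hypothetical
      preSteps:                               # hypothetical parameter
        - download: current
          artifact: CoreCLRProduct_linux_x64  # hypothetical artifact name
```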

Other Changes

  • Job Names and Display Names
    • Job names are now standardised and should be more human readable. As an example:
      • Before: Performance osx x64 release iOSMono JIT ios_scenarios perfiphone12mini NoJS True False True net10.0
      • After: Performance ios_scenarios iOSMono JIT iOSLlvmBuild osx x64 perfiphone12mini net10.0
The ordering of the parts of the job name was changed so it is easier to scan through to find a particular job.
    • An additionalJobIdentifier parameter can be specified as extra information to include in the job name if needed to disambiguate two jobs.
  • Applying a parameter to every job
    • If you look at eng/pipelines/coreclr/perf.yml you will see I added a parameter called onlySanityCheck which uses the new jobParameters parameter to set the onlySanityCheck parameter on every job that gets run.
I have been using this in my testing and it has greatly sped up testing time, because I don't have to fill the helix queues with a large number of jobs when I don't care about performance test results.
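The onlySanityCheck plumbing described above could be sketched as follows (the jobParameters and onlySanityCheck names come from the PR; the surrounding shape is an assumption):

```yaml
# eng/pipelines/coreclr/perf.yml (sketch)
parameters:
  - name: onlySanityCheck
    type: boolean
    default: false

jobs:
  - template: /eng/pipelines/runtime-perf-jobs.yml@performance
    parameters:
      # Forwarded by the template to every job it generates.
      jobParameters:
        onlySanityCheck: ${{ parameters.onlySanityCheck }}
```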

Validation

Since this change is large, it is likely to introduce bugs. To validate that things were ported correctly, I looked at the Run performance job script step of each job to verify that the command line arguments passed to the python script are identical. Other classes of bugs, caused by variables differing, are harder to detect, but from my testing I expect any remaining bugs to be minimal.

In a future PR, we should also ensure that onlySanityCheck is properly implemented for all scenarios and test cases, as it is currently only active for our microbenchmarks. This is fine for now, though, since our microbenchmarks are the ones that take the longest.

A reviewer (Member) commented:

Was this file dead code? I don't see us doing the work that was done by this file anywhere else.

A reviewer (Member) commented:

Should we follow the naming convention we seem to have set up for other pipeline YAML files, and have performance- at the beginning of this?
