On second thought, this might not make sense. Copying all the drv closures works, but it still means we don't get good sharing between agents. For example, if two jobs both need a custom GCC, both jobs will end up building it. Even if job two starts after job one (e.g., because of a dependency), it can't see the results in job one's store.
So we'll need each build to upload a NAR of its result, too. Since we're in charge of creating the pipeline, this shouldn't be too hard: each step starts by downloading an artifact from each of its dependencies, and when a step completes it uploads the closure of the path produced by `nix-store -r`.
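A rough sketch of what each generated step could do, driving the nix-store CLI from Python. This is only a sketch: `drv_path`, the `deps/` directory, and the artifact upload step are placeholders for whatever the evaluator and CI system actually provide.

```python
import glob
import subprocess


def run_step(drv_path: str) -> None:
    # 1. Import the closures uploaded by this step's dependencies
    #    (assumes the CI has already downloaded them into deps/).
    for nar in glob.glob("deps/*.closure"):
        with open(nar, "rb") as f:
            subprocess.run(["nix-store", "--import"], stdin=f, check=True)

    # 2. Realise this step's derivation; nix-store -r prints the output path(s).
    outs = subprocess.run(
        ["nix-store", "-r", drv_path],
        check=True, capture_output=True, text=True,
    ).stdout.split()

    # 3. Export the full closure of the result so dependent steps can import it.
    closure = subprocess.run(
        ["nix-store", "-qR", *outs],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    with open("result.closure", "wb") as f:
        subprocess.run(["nix-store", "--export", *closure], stdout=f, check=True)
    # The CI-specific artifact upload of result.closure goes here.
```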
This won't address two jobs building GCC in parallel, though. For that, we'll need to be smarter about how we build the pipeline (see the sketch below), or just tell people to use https://nixbuild.net/, which deduplicates builds automatically. Note this only affects agents on different machines: Nix already deduplicates builds on the same host.
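One way to be "smarter" would be to key pipeline steps on derivation paths rather than on jobs, so a drv shared by several jobs only ever becomes one step. A sketch under that assumption; the step shape (`key`, `command`, `depends_on`) and the `run-job` command are hypothetical, loosely modelled on Buildkite-style pipelines:

```python
def dedup_steps(jobs: dict[str, list[str]]) -> list[dict]:
    """Emit one build step per drv path, plus one step per job that
    depends on the drv steps it needs."""
    drv_steps: dict[str, dict] = {}
    for drvs in jobs.values():
        for drv in drvs:
            # Keyed by drv path: a custom GCC needed by two jobs shows up
            # exactly once here, so it is only built (and uploaded) once.
            drv_steps.setdefault(
                drv, {"key": drv, "command": f"nix-store -r {drv}", "depends_on": []}
            )
    job_steps = [
        {"key": name, "command": f"run-job {name}", "depends_on": list(drvs)}
        for name, drvs in jobs.items()
    ]
    return list(drv_steps.values()) + job_steps
```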
This would let us have the evaluator distribute jobs to multiple machines without needing to set up a shared binary cache.