Running tests on multiple remote cores using socket #1170
Comments
The socket server currently doesn't support multiple processes. The easiest way would be to add hop gateways to xdist and use `popen//via=hop`. In this case the hop gateway wouldn't run tests itself; it would just proxy to multiple popen gateways started via it.
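A sketch of what such an invocation might look like. This is hypothetical: `//id=` and `//via=` are real execnet spec options, and `user@remote` is a placeholder, but as discussed below xdist does not yet treat a hop gateway as proxy-only, so it would also schedule tests on the hop itself.

```shell
# Hypothetical sketch: one ssh gateway named "hop" intended only as a
# proxy, plus four popen workers started on the remote host through it.
pytest -d \
    --tx ssh=user@remote//id=hop \
    --tx 4*popen//via=hop
```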
Thank you, this worked.

However, it also treats the first … I'd like to create a PR to fix that, and I'm not sure which way to go. I'm currently thinking of adding an argument …

What do you think?
It needs a less generic name. Either a `role=proxy` on the tx, or an arg name that explicitly names proxies.
I thought this could be a more generic feature to control the …
Your explanation makes sense as well. For me the key is having non-test gateway specs with a specific name. Using a `role` key would be supported by execnet already. However, adding it for a single use is potentially problematic.
I have a remote server, and I would like to run tests on its cores in parallel. This is simple enough using SSH, where I can pass `n*ssh=...`. For socket I would expect it to work with `n*socket=...`. However, the `socketserver.py` script linked in the execnet repo does not support parallel clients. So whenever I try to parallelize the socket argument using `--tx 2*socket=...`, my master gets stuck after initializing one worker, since the socket blocks on the first connection.

Is there a better fix for this other than modifying `socketserver.py` to fork for every connection?

Thanks.
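The fork-per-connection idea can be sketched with the standard library's `socketserver.ForkingTCPServer`. This is a generic illustration of the pattern with a toy echo handler, not execnet's actual `socketserver.py`, which speaks the gateway bootstrap protocol:

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Echo one line back, then let the forked child exit.
        self.wfile.write(self.rfile.readline())

# ForkingTCPServer forks a child per accepted connection, so a
# long-running handler does not block further clients -- the same
# idea as making socketserver.py fork for every connection.
server = socketserver.ForkingTCPServer(("127.0.0.1", 0), EchoHandler)
host, port = server.server_address

# Accept exactly one connection in the background.
t = threading.Thread(target=server.handle_request)
t.start()

with socket.create_connection((host, port)) as conn:
    conn.sendall(b"ping\n")
    reply = conn.makefile("rb").readline()

t.join()
server.server_close()
print(reply)  # b'ping\n'
```

The parent only accepts and forks, so each worker connection gets its own process, which is what a multi-client socket gateway would need.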