Incompatibility with calling spin_until_future_complete while in callback #1313
Comments
I suspect it's because the same executor (the default executor) is being run simultaneously on different threads. When I tried using two executors (executor1 and the default executor), the issue didn't occur.
Looking more closely, it seems this isn't caused exclusively by calling a C++ action server, so I've adjusted the issue description and steps to reproduce.
It looks like doing something like this can mitigate the issue, but it feels really awkward. Here's a patch implementing it.
This guess is incorrect.
The root cause is related to rclpy.spin_until_future_complete().
The implementation of rclpy.spin_until_future_complete() is at Lines 324 to 329 in 43198cb.
The last step is to remove the node. Change the code as below:

```python
# rclpy.spin_until_future_complete(self, request_future)
rclpy.get_global_executor().spin_until_future_complete(request_future)
...
# rclpy.spin_until_future_complete(self, result_future)
rclpy.get_global_executor().spin_until_future_complete(result_future)
```
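The failure mode can be modeled without ROS at all. In this toy sketch (all class and function names are hypothetical stand-ins, not rclpy APIs), `spin_until_future_complete` adds the node, spins, and then unconditionally removes it, so a nested call made from inside a callback drops a node the outer executor still needs:

```python
class ToyExecutor:
    """Minimal stand-in for an executor's node bookkeeping (hypothetical)."""

    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        if node not in self.nodes:  # mirrors the duplicate guard in executors.py
            self.nodes.append(node)

    def remove_node(self, node):
        if node in self.nodes:
            self.nodes.remove(node)


def spin_until_future_complete(executor, node):
    """Models the behavior before the fix: always removes the node at the end."""
    executor.add_node(node)
    # ... spin until the future completes ...
    executor.remove_node(node)


executor = ToyExecutor()
executor.add_node("my_node")  # node is already spinning on this executor
spin_until_future_complete(executor, "my_node")  # nested call from a callback
print("my_node" in executor.nodes)  # False: the node was silently dropped
```

After the nested call returns, the outer executor no longer holds the node, which is why later waitables (such as the result callback) are never serviced.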
Would it be reasonable to check, before adding the node, whether it is already in the executor, and if it is already present, not remove it after completion?
There won't be any duplicate Nodes added. Refer to line 271: rclpy/rclpy/executors.py Lines 263 to 277 in 43198cb.
I understand it doesn't add a duplicate. I was wondering whether it would be appropriate for the node not to be removed from the executor at the end if it was already spinning.
Spinning is the executor's responsibility. While spinning, the set of Nodes added to the executor can change; the executor will recollect the entities from the added nodes at an appropriate point. See rclpy/rclpy/executors.py Lines 567 to 568 in 43198cb.
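The recollection behavior can be modeled in a toy way (hypothetical names, not rclpy APIs): each spin pass re-reads the current node list, so nodes added or removed while spinning take effect on the next pass:

```python
class ToyExecutor:
    """Toy model: entities are recollected from the node list on every pass."""

    def __init__(self):
        self.nodes = []
        self.handled = []  # record of which nodes were serviced, in order

    def spin_some(self, iterations):
        for _ in range(iterations):
            # Re-read the *current* node list each pass, so changes made
            # while spinning are picked up on the next iteration.
            for node in list(self.nodes):
                self.handled.append(node)


executor = ToyExecutor()
executor.nodes.append("a")
executor.spin_some(1)
executor.nodes.append("b")  # added mid-session
executor.spin_some(1)
print(executor.handled)  # ['a', 'a', 'b']
```

This is why removing a node mid-spin is harmful: starting with the next pass, none of that node's entities are collected anymore.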
Closes ros2/rclpy#1313. Currently, if spin_until_future_complete is called inside a node's callback, it removes the node from the executor. This results in any subsequent waitables never being checked, since the node is no longer in the executor. This aims to fix that by only removing the node from the executor if it wasn't already present. Signed-off-by: Jonathan Blixt <[email protected]>
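The fix described in the PR can be modeled without ROS (a toy sketch; all names are hypothetical, not rclpy APIs): remember whether this particular call actually added the node, and remove it only in that case:

```python
class ToyExecutor:
    """Minimal stand-in for an executor's node bookkeeping (hypothetical)."""

    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        """Return True only if the node was newly added (toy convention)."""
        if node not in self.nodes:
            self.nodes.append(node)
            return True
        return False

    def remove_node(self, node):
        if node in self.nodes:
            self.nodes.remove(node)


def spin_until_future_complete(executor, node):
    """Models the fixed behavior: remove the node only if this call added it."""
    added = executor.add_node(node)
    # ... spin until the future completes ...
    if added:
        executor.remove_node(node)


executor = ToyExecutor()
executor.add_node("my_node")  # node is already spinning on this executor
spin_until_future_complete(executor, "my_node")  # nested call from a callback
print("my_node" in executor.nodes)  # True: the outer executor keeps the node
```

A node that was only added for the duration of the nested spin is still cleaned up as before; only a node that was already present survives the call.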
I saw your PR #1316 and I understand your idea. I misunderstood earlier.
The PR is a good change, but all of this can also be written with async/await:

```python
async def execute_callback(self, goal_handle: ServerGoalHandle):
    self.get_logger().info("Requesting goal...")
    # self._action_client.send_goal_async()
    request_future = self._action_client.send_goal_async(
        Fibonacci.Goal(order=goal_handle.request.order))
    spin_handle: ClientGoalHandle = await request_future
    self.get_logger().info("Received request result...")
    result_future: Future = spin_handle.get_result_async()
    result = await result_future
    self.get_logger().info("Received final result...")
    if not result_future.cancelled():
        goal_handle.succeed()
    return result
```
Closes ros2/rclpy#1313. Signed-off-by: Jonathan Blixt <[email protected]> Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> (cherry picked from commit 47346ef) Co-authored-by: Jonathan <[email protected]>
Bug report
Required Info:
Steps to reproduce issue
Using the demos repo with the action_tutorials_py and action_tutorials_cpp code
Apply the following patch (it's in a .txt file because GitHub issues don't accept patch attachments)
patch
Then build and run the following nodes in separate terminals
or
and
Finally, run
Then run it again
Expected behavior
On the first execution of ros2 action send_goal we get the expected behavior of
and any subsequent send_goal calls to some_action work exactly the same, varying only the Goal ID.
Actual behavior
Any subsequent calls actually result in
So after the first call, the goal request is never processed by the hybrid action node.
Additional information
I initially found this when working on #1123, thinking there was a race condition within rclpy, but the goal request is never even received by the hybrid action node. The default goal request handling simply returns accepted, leaving no place for a race condition in that instance.