Hello! The README.md lists "integration into elevation_mapping_cupy" as a feature, but there seems to be no further information about what this means.
I can see that wild_visual_navigation_ros/launch/elevation_mapping_cupy.launch references wild_visual_navigation_ros/config/elevation_mapping_cupy/wvn_sensor_parameter.yaml, where the visual_traversability channels are passed into elevation_mapping_cupy. Is that basically it?
Is the integration unidirectional, i.e. from wild_visual_navigation to elevation_mapping_cupy only, or does something also happen in the opposite direction?
What effects or benefits do the added visual_traversability channels bring to elevation_mapping_cupy? Do they improve its traversability predictions, elevation map, plane segmentation etc., or something else? Or is it more of a way to sort of unify and consolidate the visual traversability predictions by combining them with elevation_mapping_cupy's more geometry-based ones? What is the thinking and motivation behind this integration, and am I missing any other aspects of how the two packages are supposed to work together?
It could be helpful to add some more details about this to the README.md. Thanks!
Hi @doctorcolossus, apologies for the delay in our response.
What we meant in the README is that our system was tested and integrated with the elevation_mapping_cupy package for closed-loop operation. This is because we implemented specific features there (in particular, raycasting an RGB image onto the terrain map) to integrate the visual traversability estimates into the terrain representation and do standard local planning with it. As you suggested, this integration is unidirectional (from WVN to elevation_mapping_cupy).
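To make the idea concrete, here is a minimal sketch of projecting a per-pixel visual traversability image onto the cells of a 2.5D elevation map, assuming a pinhole camera with known pose. The function and parameter names are hypothetical and this is not the actual elevation_mapping_cupy raycasting code (which, among other things, also handles occlusions):

```python
# Hypothetical sketch, not the elevation_mapping_cupy implementation:
# project elevation-map cell centers into the camera image and copy the
# per-pixel visual traversability value into a dedicated map layer.
import numpy as np

def project_traversability(cell_xyz_world, T_cam_world, K, trav_image):
    """cell_xyz_world: (N, 3) cell centers (x, y, elevation) in the world frame.
    T_cam_world: (4, 4) transform from world frame to camera frame.
    K: (3, 3) pinhole intrinsics.
    trav_image: (H, W) per-pixel traversability in [0, 1].
    Returns an (N,) array of traversability values (NaN where unobserved)."""
    h, w = trav_image.shape
    # Transform cell centers into the camera frame.
    pts_h = np.hstack([cell_xyz_world, np.ones((len(cell_xyz_world), 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]
    trav = np.full(len(cell_xyz_world), np.nan)
    # Keep only cells in front of the camera (no occlusion handling here).
    in_front = pts_cam[:, 2] > 0.0
    # Project into pixel coordinates.
    uvw = (K @ pts_cam[in_front].T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    # Keep projections that land inside the image.
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    trav[idx] = trav_image[uv[valid, 1], uv[valid, 0]]
    return trav
```

The resulting per-cell values can then be written into an extra layer of the elevation map (e.g. a "visual_traversability" layer) and consumed by a local planner alongside the geometric layers.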
Regarding your last questions: indeed, there are more principled ways to integrate vision-based predictions with geometric cues. elevation_mapping_cupy itself implements other alternatives, which were presented in the "Multi-Modal Elevation Mapping" paper by Gian Erni. But I agree that is a deeper challenge, beyond what we introduced in the Wild Visual Navigation papers.
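As a rough illustration only (not how Multi-Modal Elevation Mapping or WVN actually fuse modalities), combining a vision-based and a geometry-based traversability layer could be as simple as a confidence-weighted average with per-layer fallbacks; the layer names and the weighting below are assumptions:

```python
# Hypothetical sketch of fusing a visual and a geometric traversability layer;
# the weighting scheme is an assumption, not the method used in either package.
import numpy as np

def fuse_traversability(visual, geometric, w_visual=0.5):
    """visual, geometric: (H, W) traversability layers in [0, 1], NaN = unobserved."""
    fused = np.where(
        np.isnan(visual), geometric,                              # only geometry observed
        np.where(np.isnan(geometric), visual,                     # only vision observed
                 w_visual * visual + (1.0 - w_visual) * geometric))  # both observed
    return fused
```

A more conservative variant would take the element-wise minimum of the two layers (e.g. np.fmin, which ignores NaNs), so that a cell is only considered traversable if both modalities agree.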