Merge branch 'feature/data-collection' of github.com:leggedrobotics/viplanner into feature/data-collection
pascal-roth committed Nov 19, 2024
2 parents 7f06142 + 4ddbbb2 commit 71e8ed4
Showing 3 changed files with 13 additions and 6 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -95,7 +95,7 @@ For more detailed instructions, please refer to [TRAINING.md](TRAINING.md).
Training data is generated from the [Matterport 3D](https://github.com/niessner/Matterport), [Carla](https://carla.org/) and [NVIDIA Warehouse](https://docs.omniverse.nvidia.com/isaacsim/latest/tutorial_static_assets.html) environments using IsaacLab. For detailed instructions on how to install the extension and run the data collection script, please see [here](omniverse/README.md).

1. Build Cost-Map <br>
The first step in training the policy is to build a cost-map from the available depth and semantic data. A cost-map is a representation of the environment where each cell is assigned a cost value indicating its traversability. The cost-map guides the optimization and therefore has to be differentiable. Cost-maps are built using the [cost-builder](viplanner/cost_builder.py) with configs [here](viplanner/config/costmap_cfg.py), given a pointcloud of the environment with semantic information (either from simulation or real-world data).
The first step in training the policy is to build a cost-map from the available depth and semantic data. A cost-map is a representation of the environment where each cell is assigned a cost value indicating its traversability. The cost-map guides the optimization and therefore has to be differentiable. Cost-maps are built using the [cost-builder](viplanner/cost_builder.py) with configs [here](viplanner/config/costmap_cfg.py), given a pointcloud of the environment with semantic information (either from simulation or real-world data). The point-cloud of a simulated environment can be generated with the [reconstruction-script](viplanner/depth_reconstruct.py), configured [here](viplanner/config/costmap_cfg.py).

2. Training <br>
Once the cost-map is constructed, the next step is to train the policy. The policy is a machine learning model that learns to make decisions based on the depth and semantic measurements. An example training script can be found [here](viplanner/train.py) with configs [here](viplanner/config/learning_cfg.py). A sketch of the assumed end-to-end command sequence is given after this list.
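
As a quick orientation, the assumed end-to-end sequence looks as follows. This is only a minimal sketch, not an official entry point: it invokes the three scripts linked above with their default configs and assumes they are run from the repository root.

```python
# Minimal sketch of the assumed pipeline; the three scripts are the ones linked
# above and are expected to read their configs from viplanner/config/.
import subprocess

for script in (
    "viplanner/depth_reconstruct.py",  # reconstruct a semantic point cloud from depth + semantics
    "viplanner/cost_builder.py",       # build the (differentiable) cost-map from the point cloud
    "viplanner/train.py",              # train the planning policy on the collected data
):
    subprocess.run(["python", script], check=True)
```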
14 changes: 10 additions & 4 deletions TRAINING.md
@@ -2,6 +2,12 @@

This document provides an overview of the steps involved in training the policy.


## Data Generation

For the data generation, please follow the instructions given [here](omniverse/README.md).


## Cost-Map Building

Cost-Map building is an essential step in guiding optimization and representing the environment.
@@ -28,13 +34,14 @@ If depth and semantic images of the simulation are available, then first 3D reco…
├── xxxx.png # images saved with 4 digits, e.g. 0000.png
```
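
Given data laid out as above, the reconstruction is configured through `ReconstructionCfg` and then run via the reconstruction script. The snippet below is only a sketch: the class name comes from this document, but the `data_dir` field and the exact import path are assumptions to verify against [costmap_cfg.py](viplanner/config/costmap_cfg.py).

```python
# Sketch only: ReconstructionCfg is referenced in this document, but the data_dir
# field and the import path are assumptions to check against viplanner/config/costmap_cfg.py.
from viplanner.config.costmap_cfg import ReconstructionCfg

cfg = ReconstructionCfg()
cfg.data_dir = "/path/to/env_name"  # hypothetical field: root of the directory structure above

# The 3D reconstruction itself is then started with the script linked in the README:
#   python viplanner/depth_reconstruct.py
```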

When both depth and semantic images are available, define `sem_suffix` and `depth_suffix` in `ReconstructionCfg` to differentiate between the two with the following structure:
In the case that the semantic and depth images have an offset in their position (as is typical on some robotic platforms),
define `sem_suffix` and `depth_suffix` in `ReconstructionCfg` to differentiate between the two with the following structure:

``` graphql
env_name
├── camera_extrinsic{depth_suffix}.txt # format: x y z qx qy qz qw
├── camera_extrinsic{sem_suffix}.txt # format: x y z qx qy qz qw
├── intrinsics.txt # P-Matrix for intrinsics of depth and semantic images
├── intrinsics.txt # P-Matrix for intrinsics of depth and semantic images (depth first)
├── depth                      # either png and/or npy; if both exist, npy is used
| ├── xxxx{depth_suffix}.png # images saved with 4 digits, e.g. 0000.png
| ├── xxxx{depth_suffix}.npy # arrays saved with 4 digits, e.g. 0000.npy
@@ -49,7 +56,7 @@ If depth and semantic images of the simulation are available, then first 3D reco…

3. **Cost-Building** <br>

Fully automated, either a geometric or a semantic cost map can be generated by running the following command:
Either a geometric or a semantic cost map can be generated by running the following command:

```
python viplanner/cost_builder.py
@@ -72,7 +79,6 @@ If depth and semantic images of the simulation are available, then first 3D reco…
```
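
Whether a geometric or a semantic cost map is built is decided in the configuration rather than on the command line. The snippet below is a hypothetical sketch; the class and field names are assumptions that must be checked against [costmap_cfg.py](viplanner/config/costmap_cfg.py) before use.

```python
# Hypothetical sketch: class and field names are assumptions, to be verified
# against viplanner/config/costmap_cfg.py before use.
from viplanner.config.costmap_cfg import CostMapConfig

cfg = CostMapConfig()
cfg.semantics = True  # assumed flag: True -> semantic cost map, False -> geometric cost map
```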



## Training

Configurations for training are given in [TrainCfg](viplanner/config/learning_cfg.py). Training can be started using the example training script [train.py](viplanner/train.py).
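
A training run is driven entirely by `TrainCfg`. The snippet below is a minimal sketch: `TrainCfg` and its location come from the link above, while the individual field names are hypothetical placeholders for the real options in [learning_cfg.py](viplanner/config/learning_cfg.py).

```python
# Minimal sketch: TrainCfg and its module come from the link above; the field
# names below are hypothetical placeholders for the real options.
from viplanner.config.learning_cfg import TrainCfg

cfg = TrainCfg()
cfg.file_name = "my_experiment"  # hypothetical: identifier under which logs/checkpoints are stored
cfg.epochs = 100                 # hypothetical: number of training epochs

# Training is then started with the example script:
#   python viplanner/train.py
```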
3 changes: 2 additions & 1 deletion omniverse/README.md
@@ -124,7 +124,8 @@ cd IsaacLab
## Data Collection
The training data is generated from different simulation environments. After the environments have been downloaded and converted to USD, the rendered viewpoints are collected by executing
The training data is generated from different simulation environments. After the environments have been downloaded and converted to USD, adjust the paths (marked as `${USER_PATH_TO_USD}`) in the corresponding config files ([Carla](./extension/omni.viplanner/omni/viplanner/config/carla_cfg.py) and [Matterport](./extension/omni.viplanner/omni/viplanner/config/matterport_cfg.py)).
The rendered viewpoints are collected by executing
```
cd IsaacLab
