
Regarding inputs to trajectory mimicking network: reference motion missing? #36

Open
chinmaySwami opened this issue Apr 5, 2023 · 1 comment

Comments


chinmaySwami commented Apr 5, 2023

Hi,

I was looking into the code and noticed that GetState() outputs the COM position, COM velocity, and gait phase. These are then used as the network input during training in main.py.

However, Fig. 5 in the paper shows the reference motion as an input to the network, which is not the case in the code.


Is it the case that the figure does not show the actual inputs to the network, but instead the components involved in the trajectory mimicking phase (the reference motion being an integral part of the reward function)?
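For concreteness, here is a minimal sketch of the state construction as I understand it from the code (the function and argument names here are hypothetical, not the repo's actual signatures): the observation contains only COM position, COM velocity, and gait phase, with no reference-motion term appended.

```python
import numpy as np

def build_state(com_pos, com_vel, gait_phase):
    """Assemble the observation described above.

    com_pos, com_vel : array-like of shape (3,) -- COM position and velocity
    gait_phase       : float in [0, 1) -- phase variable of the gait cycle
    """
    # Note: no reference-motion frame is concatenated here, unlike Fig. 5.
    return np.concatenate([np.asarray(com_pos),
                           np.asarray(com_vel),
                           [gait_phase]])
```

If the paper's Fig. 5 were taken literally, the reference pose at the current phase would also be concatenated onto this vector, which is what prompted the question.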

Can you please help clarify?

Thanks,
Chinmay Swami

@todayThursday

I have been training the running model recently, but no matter how I modify reward_param, the training results are poor. Have you solved this problem? On the other hand, I suspect the weight parameters of the reward function have little effect on the trained model: in the code, the reward function only considers the joint position error and the joint velocity error, and scales these two by the end-effector error. Logically, if we only care about motion similarity and nothing else, the trained network should just reproduce the reference motion, so why does the running model never train well no matter what I try?
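Based on that description (joint position and velocity error terms, scaled by an end-effector error term), the reward presumably has a multiplicative exponentiated-error form. A minimal sketch, with hypothetical weights and names since I don't have the exact code in front of me:

```python
import numpy as np

def imitation_reward(q, q_ref, dq, dq_ref, ee, ee_ref,
                     w_pos=5.0, w_vel=0.1, w_ee=40.0):
    """Hedged sketch of the imitation reward described above.

    q, dq   : current joint positions / velocities
    ee      : current end-effector positions (flattened)
    *_ref   : the corresponding reference-motion quantities
    w_*     : hypothetical weights (the role played by reward_param)
    """
    pos_err = np.sum((np.asarray(q)  - np.asarray(q_ref))  ** 2)
    vel_err = np.sum((np.asarray(dq) - np.asarray(dq_ref)) ** 2)
    ee_err  = np.sum((np.asarray(ee) - np.asarray(ee_ref)) ** 2)
    # Joint position/velocity terms, multiplicatively scaled by the
    # end-effector term: each factor is 1 at zero error and decays with error.
    return (np.exp(-w_pos * pos_err)
            * np.exp(-w_vel * vel_err)
            * np.exp(-w_ee * ee_err))
```

One consequence of this multiplicative form is that a single badly tracked term (e.g. the end-effector error during running, where the feet move fast) collapses the whole reward toward zero, which could explain why re-weighting alone doesn't rescue training.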
