
The data of humanml is 153-dimensional rather than 322-dimensional. #70

Open
weihaosky opened this issue Mar 19, 2024 · 2 comments


weihaosky commented Mar 19, 2024

Hi, thanks for this excellent work!

When I try to convert the data to the tomato representation with python raw_pose_processing.py, I find that the data in face_motion_data/smplx_322/humanml, GRAB, and EgoBody is 153-dimensional rather than 322-dimensional, so it cannot be processed by python raw_pose_processing.py.

How should I obtain the 322-dimensional data for the humanml, GRAB, and EgoBody datasets?
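
A minimal way to reproduce the check, assuming the motion files are plain .npy arrays of shape (num_frames, feature_dim); the file name below is just an example, not a specific file from the release:

```python
import numpy as np

# Hypothetical file name; any of the files under face_motion_data/smplx_322 shows the same result.
motion = np.load('face_motion_data/smplx_322/humanml/000001.npy')
print(motion.shape)  # e.g. (num_frames, 153) instead of the expected (num_frames, 322)
```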

Many thanks!

@linjing7
Collaborator

Hi, have you preprocessed the mocap data following this instruction?

@weihaosky
Author

> Hi, have you preprocessed the mocap data following this instruction?

No, I downloaded motionx_face_motion_data.zip, which contains face_motion_data/smplx_322/humanml, GRAB, and EgoBody.
I did not do the preprocessing from the AMASS dataset.
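
For context, a sketch of how a full 322-dimensional frame is typically sliced, assuming the layout documented in the Motion-X README; the indices and file name here are assumptions, not a confirmed answer from the maintainers:

```python
import numpy as np

# Hypothetical 322-dim motion file; slice indices assume the Motion-X README layout.
motion = np.load('some_motion_322.npy')   # shape (num_frames, 322)

root_orient = motion[:, 0:3]      # global root orientation (3)
pose_body   = motion[:, 3:66]     # body pose (63)
pose_hand   = motion[:, 66:156]   # hand pose (90)
pose_jaw    = motion[:, 156:159]  # jaw pose (3)
face_expr   = motion[:, 159:209]  # facial expression (50)
face_shape  = motion[:, 209:309]  # face shape (100)
trans       = motion[:, 309:312]  # global translation (3)
betas       = motion[:, 312:322]  # body shape (10)

# A face-only file (jaw + expression + shape) would be 3 + 50 + 100 = 153 dims,
# which would match the 153-dim arrays found in face_motion_data/smplx_322.
```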
