This is an implementation for testing the performance of SynSP++. It specifically verifies the results in TABLE VIII of the paper, showing that the closer the retrieved pose is to the input pose, the better SynSP++ performs.
This project uses the Milvus vector database for similar-pose search. Because running search queries during training is prohibitively expensive, we precompute the indices of the top-k most similar poses ahead of time to accelerate training. To allow increasing the number of DataLoader workers without a corresponding increase in memory consumption, we use a shared-memory strategy: all workers read the precomputed indices from a single shared block instead of each holding a private copy. This dramatically reduces memory usage, but make sure that more than 10 GB of shared memory is available.
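The shared-memory idea can be sketched as follows with Python's `multiprocessing.shared_memory`. All names and shapes here are illustrative, not the project's actual implementation: the main process writes the precomputed top-k index table into one shared block, and each worker attaches to that block by name rather than receiving its own copy.

```python
import numpy as np
from multiprocessing import shared_memory

# Hypothetical precomputed table: 5000 training windows, top-8 neighbour indices each.
topk = np.random.randint(0, 1000, size=(5000, 8), dtype=np.int64)

# Main process: copy the precomputed indices into a single shared-memory block.
shm = shared_memory.SharedMemory(create=True, size=topk.nbytes)
shared = np.ndarray(topk.shape, dtype=topk.dtype, buffer=shm.buf)
shared[:] = topk

# A DataLoader worker attaches to the same block by name instead of holding
# its own copy, so memory usage stays flat as num_workers grows.
worker_shm = shared_memory.SharedMemory(name=shm.name)
worker_view = np.ndarray(topk.shape, dtype=topk.dtype, buffer=worker_shm.buf)
roundtrip = worker_view.copy()  # take a copy before detaching

worker_shm.close()
shm.close()
shm.unlink()

assert np.array_equal(roundtrip, topk)
```

In a real setup the block name would be passed to the worker processes (e.g. via the dataset object), and the block would be unlinked only after training finishes.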
To run this project, you need to set up a Milvus database. Follow the Milvus documentation to install it. Please also ensure that your Python environment and the Milvus server run on the same host.
pip install -r requirements.txt
All the data used in our experiment can be downloaded here. data.tar contains the AIST++ dataset and needs to be extracted into the data directory. 29_checkpoint.pth.tar is the model we trained on the AIST++ dataset with the VIBE estimator.
The directory structure of the repository should look like this:
.
|-- configs
|-- data
| `-- poses
| `-- aist_vibe_3D
| |-- detected
| |-- eval_search
| | |-- 0
| | |-- 1
| | `-- 2
| `-- groundtruth
|-- lib
| |-- core
| |-- dataset
| |-- models
| |-- utils
| `-- visualize
|-- model
`-- vis
Let's take the AIST++ dataset as an example. First, build the Milvus collection of poses for similarity search:
python lib/utils/milvus_pose.py --dataset_name DATASET_NAME --cfg CFG_PATH
# python lib/utils/milvus_pose.py --dataset_name aist_vibe_3D --cfg configs/aist_vibe_3D.yaml
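Conceptually, this step lets each pose window be matched against the stored windows by nearest-neighbour search. The real project delegates the search to Milvus; the NumPy sketch below only illustrates the idea, and all names in it are hypothetical:

```python
import numpy as np

def topk_similar(query, bank, k=3):
    # L2 distance from the query window to every stored window.
    dists = np.linalg.norm(bank - query, axis=1)
    # Indices of the k closest windows, nearest first.
    return np.argsort(dists)[:k]

bank = np.random.rand(1000, 8 * 51)   # 1000 flattened pose windows (illustrative shape)
hits = topk_similar(bank[42], bank, k=3)
assert hits[0] == 42                  # a stored window is its own nearest match
```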
Run the commands below to start training:
python train_synsp++.py --cfg configs/aist_vibe_3D.yaml --dataset_name aist --estimator vibe --body_representation 3D --slide_window_size 8 --tradition oneeuro
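The `--slide_window_size 8` flag indicates that pose sequences are processed in overlapping windows of 8 frames. A rough sketch of such windowing, with illustrative shapes (this is not the project's own dataloader code):

```python
import numpy as np

def to_windows(seq, window_size=8, stride=1):
    """Split a (T, D) pose sequence into overlapping windows of shape (N, window_size, D)."""
    starts = range(0, seq.shape[0] - window_size + 1, stride)
    return np.stack([seq[s:s + window_size] for s in starts])

seq = np.zeros((32, 51))              # e.g. 32 frames of 17 joints x 3 coordinates
wins = to_windows(seq, window_size=8)
assert wins.shape == (25, 8, 51)      # 32 - 8 + 1 = 25 windows
```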
Run the commands below to start evaluation:
python eval_synsp++.py --cfg configs/aist_vibe_3D.yaml --checkpoint ./29_checkpoint.pth.tar --dataset_name aist --estimator vibe --body_representation 3D --slide_window_size 8 --tradition oneeuro
The results should be as follows:
By setting a breakpoint at models/model_inf.py line 336, you can visualize the poses by sending the pose data string to 3dshow.py. Here is an example of the visualization result: