Code for the Paper: Quasi-Online Detection of Take and Release Actions from Egocentric Videos. International Conference on Image Analysis and Processing 2023.
R&D for action detection.
This project demonstrates how deep learning methods can be applied to sports data analytics.
The second generation of YOWO action detector.
Awesome video understanding toolkits based on PaddlePaddle. It supports video data annotation tools, lightweight RGB and skeleton based action recognition model, practical applications for video tagging and sport action detection.
Detect and count steel mace swings using computer vision
Deployment of the YOWOv2 video action detector with ONNX Runtime, including both C++ and Python versions of the program.
[CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking
Code for the paper "Multi-Task Learning of Object States and State-Modifying Actions from Web Videos" published in TPAMI
Official Implementation of our WACV2023 paper: “Holistic Interaction Transformer Network for Action Detection”
Code and material relevant to the paper, "Introducing SSBD+ Dataset with a Convolutional Pipeline for detecting Self-Stimulatory Behaviours in Children using raw videos"
[NeurIPS 2022] PointTAD: Multi-Label Temporal Action Detection with Learnable Query Points
U-shape Deep Networks are Unified Backbones for Human Action Understanding from Wi-Fi Signals
This is the official implementation of KORSAL: Key-Point Detection based Online Real-Time Spatio-Temporal Action Detection. We use CenterNet to locate actions spatially - the first use of key-points in action detection.
[ICCV 2023] Efficient Video Action Detection with Token Dropout and Context Refinement
[IJCV] AOE-Net: Entities Interactions Modeling with Adaptive Attention Mechanism for Temporal Action Proposals Generation
[ICCV 2021] MultiSports: A Multi-Person Video Dataset of Spatio-Temporally Localized Sports Actions
[ECCV 2022] Official Pytorch Implementation of the paper : " Zero-Shot Temporal Action Detection via Vision-Language Prompting "
Annotations for the Mistake Detection benchmark of Assembly101
Build a custom dataset in the AVADataset format.