Hello! I am Yingqing He. Nice to meet you!
👨‍💻 I am currently a PhD student at HKUST. My research focuses on text-to-video generation and multimodal generation.
😀 I am also interested in building fun applications.
📫 How to reach me: yhebm@connect.ust.hk
📣 Our lab is hiring engineering-oriented research assistants (RAs). If you would like to apply, feel free to reach out with your CV!
📌 Pinned repositories:

- **yzxing87/Seeing-and-Hearing**: [CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners
- **ScaleCrafter**: [ICLR 2024 Spotlight] Official implementation of ScaleCrafter for higher-resolution visual generation at inference time
- **AILab-CVC/Animate-A-Story**: Retrieval-Augmented Video Generation for Telling a Story
- **AILab-CVC/VideoCrafter**: VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
- **mayuelala/FollowYourPose**: [AAAI 2024] Official implementation of "Follow-Your-Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos"