InfoBodied AI Lab

Yanchao Yang

Assistant Professor
Electrical and Computer Engineering and the Institute of Data Science
The University of Hong Kong
Email: yanchaoy at hku dot hk
Office: Room 714, Chow Yei Ching Building, HKU

About Me

    I am an Assistant Professor at HKU, jointly appointed by the Department of Electrical and Computer Engineering (ECE) and the HKU Musketeers Foundation Institute of Data Science (HKU-IDS). I was a Postdoctoral Research Fellow at Stanford University with Leonidas J. Guibas and received my Ph.D. from the University of California, Los Angeles (UCLA) with Stefano Soatto. Earlier, I obtained my Master's and Bachelor's degrees from KAUST and USTC, respectively.

    We do research in embodied AI and are interested in self- and semi-supervised techniques that allow embodied agents to learn in low-annotation regimes. Our long-term goal is to design learning algorithms that enable embodied agents to continuously build scene representations and acquire interaction skills through active perception with multimodal signals. Our recent efforts focus on developing efficient mutual information estimators and automating the learning of perception, compositional scene representations, and interaction policies for embodied intelligence in the open world, which we call InfoBodied AI. We are also grounding large foundation models in the physical world via information-theoretic tools.
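    For readers unfamiliar with mutual information estimation, the sketch below shows one common neural estimator, an InfoNCE-style lower bound between paired multimodal embeddings. It is only an illustrative example of the general technique, not our lab's actual estimator, and all names in it (infonce_mi_lower_bound, z_a, z_b, temperature) are hypothetical.

    # Illustrative sketch only (not the lab's method): an InfoNCE-style
    # lower bound on the mutual information between two paired embedding batches.
    import math
    import torch
    import torch.nn.functional as F

    def infonce_mi_lower_bound(z_a, z_b, temperature=0.1):
        """Return a lower bound on I(A; B) from paired embeddings of shape (N, D)."""
        z_a = F.normalize(z_a, dim=-1)
        z_b = F.normalize(z_b, dim=-1)
        logits = z_a @ z_b.t() / temperature               # pairwise similarity scores
        labels = torch.arange(z_a.size(0), device=z_a.device)
        loss = F.cross_entropy(logits, labels)             # match each sample to its pair
        return math.log(z_a.size(0)) - loss.item()         # I(A; B) >= log N - L_InfoNCE

    # Example: embeddings from two modalities, e.g., vision and proprioception.
    mi_estimate = infonce_mi_lower_bound(torch.randn(64, 128), torch.randn(64, 128))

    Maximizing such a bound between modalities is one standard way to learn representations without labels, in the spirit of the low-annotation learning described above.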

    Ph.D. students, postdocs and interns!

    We are constantly looking for talented individuals with strong motivation to build the fundamentals that allow autonomous agents to learn from unlimited data streams toward intelligent interaction with the physical world. This quest relates to, but is not limited to, computer vision and graphics, machine learning, communication and information theory, multimodal data mining, robotics, and human-machine interaction. Please contact me via email if you are interested in our research or potential collaborations.

    *Students can choose either ECE or Data Science when applying, but please notify me after submission.

    *Strong Ph.D. candidates are welcome to apply for the HKPFS and HKUPS (all year round).

    *We are now considering applications for fall 2026 enrollment; the main-round deadline is December 1, 2025.

News

  • [10/2025] We are hosting the HKU AI Forum 2025!
  • [08/2025] Serving as an Area Chair for the Conference on Computer Vision and Pattern Recognition (CVPR 2026).
  • [08/2025] Serving as an Area Chair for the International Conference on Learning Representations (ICLR 2026).
  • [08/2025] Serving as an Area Chair for the AAAI Conference on Artificial Intelligence (AAAI 2026).
  • [07/2025] I am teaching Embodied AI 101 at HKU this summer.
  • [04/2025] Our work SLAM3R has won the China3DV Top1 Paper Award. Congrats to the team!
  • [01/2025] I am glad to have received the 2025 CPAL Rising Stars Award!

Publications

See Google Scholar for a full list of papers. *: equal contribution, †: corresponding author

HiMaCon: Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data

Ruizhe Liu, Pei Zhou, Qian Luo, Li Sun, Jun Cen, Yibing Song, Yanchao Yang

NeurIPS 2025

arXiv/code/project page

Gaze-VLM: Bridging Gaze and VLMs via Attention Regularization for Egocentric Understanding

Anupam Pani, Yanchao Yang

NeurIPS 2025

arXiv/code/project page

Hyper-GoalNet: Goal-Conditioned Manipulation Policy Learning with HyperNetworks

Pei Zhou, Wanting Yao, Qian Luo, Xunzhe Zhou, Yanchao Yang

NeurIPS 2025

arXiv/code/project page

HyperTASR: Hypernetwork-Driven Task-Aware Scene Representations for Robust Manipulation

Li Sun*, Jiefeng Wu*, Feng Chen, Ruizhe Liu, Yanchao Yang

CoRL 2025

arXiv/code/project page

HuMoCon: Concept Discovery for Human Motion Understanding

Qihang Fang, Chengcheng Tang, Bugra Tekin, Shugao Ma, Yanchao Yang

CVPR 2025

arXiv/code/project page

Reloc3r: Large-Scale Training of Relative Camera Pose Regression for Generalizable, Fast, and Accurate Visual Localization

Siyan Dong*, Shuzhe Wang*, Shaohui Liu, Lulu Cai, Qingnan Fan, Juho Kannala, Yanchao Yang

CVPR 2025

arXiv/code/project page

SLAM3R: Real-Time Dense Scene Reconstruction from Monocular RGB Videos

Yuzheng Liu*, Siyan Dong*, Shuzhe Wang, Yingda Yin, Yanchao Yang, Qingnan Fan, Baoquan Chen

CVPR 2025, Spotlight

arXiv/code/project page

HyPoGen: Optimization-Biased Hypernetworks for Generalizable Policy Generation

Hanxiang Ren*, Li Sun*, Xulong Wang, Pei Zhou, Zewen Wu, Siyan Dong, Difan Zou, Youyi Zheng, Yanchao Yang

ICLR 2025

arXiv/code/project page

AutoCGP: Closed-Loop Concept-Guided Policies from Unlabeled Demonstrations

Pei Zhou*, Ruizhe Liu*, Qian Luo*, Fan Wang, Yibing Song, Yanchao Yang

ICLR 2025, Spotlight

arXiv/code/project page

InfoGS: Efficient Structure-Aware 3D Gaussians via Lightweight Information Shaping

Yunchao Zhang, Guandao Yang, Leonidas Guibas, Yanchao Yang

ICLR 2025

arXiv/code/project page

Members

Professional Service