InfoBodied AI Lab

Yanchao Yang

Assistant Professor
Electrical and Computer Engineering and the Institute of Data Science
The University of Hong Kong
Email: yanchaoy at hku dot hk
Office: Room 714, Chow Yei Ching Building, HKU

About Me

    I am an Assistant Professor at HKU, jointly appointed by the Department of Electrical and Computer Engineering (ECE) and the HKU Musketeers Foundation Institute of Data Science (HKU-IDS). I was a Postdoctoral Research Fellow at Stanford University with Leonidas J. Guibas and received my Ph.D. from the University of California, Los Angeles (UCLA) with Stefano Soatto. Earlier, I obtained my Master's and Bachelor's degrees from KAUST and USTC, respectively.

    We do research in embodied AI, with a focus on self-/semi-supervised techniques that allow embodied agents to learn in low-annotation regimes. Our long-term goal is to design learning algorithms that enable embodied agents to continuously build scene representations and acquire interaction skills through active perception with multimodal signals. Our recent efforts center on developing efficient mutual information estimators and automating the learning of perception, compositional scene representations, and interaction policies for embodied intelligence in the open world, namely, InfoBodied AI. We are also grounding Large Foundation Models in the physical world via information-theoretic tools.
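
    For readers curious about what neural mutual information estimation involves, below is a minimal sketch of a generic Donsker-Varadhan lower-bound estimator in PyTorch. It is an illustration only, not our InfoNet architecture; all network sizes, data, and hyperparameters are assumptions for demonstration.

```python
# Illustrative sketch only -- NOT the InfoNet architecture; names,
# dimensions, and hyperparameters are assumptions for demonstration.
import math
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Scores (x, y) pairs; trained to score joint samples above shuffled ones."""
    def __init__(self, dim_x, dim_y, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def dv_lower_bound(critic, x, y):
    # Donsker-Varadhan: I(X;Y) >= E_joint[T(x,y)] - log E_marg[exp(T(x,y'))]
    joint_term = critic(x, y).mean()
    y_shuffled = y[torch.randperm(y.shape[0])]  # break pairing -> product of marginals
    marginal_term = torch.logsumexp(critic(x, y_shuffled), dim=0) - math.log(y.shape[0])
    return joint_term - marginal_term

# Toy data: Y is a noisy copy of X, so I(X;Y) is well above zero.
x = torch.randn(512, 2)
y = x + 0.5 * torch.randn(512, 2)
critic = Critic(2, 2)
optimizer = torch.optim.Adam(critic.parameters(), lr=1e-3)
for _ in range(300):
    optimizer.zero_grad()
    loss = -dv_lower_bound(critic, x, y)  # maximize the bound
    loss.backward()
    optimizer.step()
print(f"Estimated lower bound on I(X;Y): {dv_lower_bound(critic, x, y).item():.3f} nats")
```

    Note that estimators of this kind must be re-optimized for every new pair of variables; our InfoNet line of work (see Publications) instead aims to amortize estimation into a single feed-forward pass, without test-time optimization.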

    Ph.D. students, postdocs and interns!

    We are always looking for talented and highly motivated people to help build the foundations that allow autonomous agents to learn from unlimited data streams and interact intelligently with the physical world. This quest relates to, but is not limited to, computer vision and graphics, machine learning, communication and information theory, multimodal data mining, robotics, and human-machine interaction. Please contact me via email if you are interested in our research or in potential collaborations.

    *Students can choose either ECE or Data Science when applying, but please notify me after submitting your application.

    *Strong Ph.D. candidates are welcome to apply for the HKPFS and HKUPS (all year round).

    *We are now considering applications in the Main Round for 2025/26; the deadline is Dec. 1, 2024. Please apply as soon as possible!

News

  • [10/2024] Pei Zhou just gave an Oral talk at ECCV on manipulation concept discovery (MaxMI). Congrats!
  • [09/2024] I am serving as an Area Chair for the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2025).
  • [07/2024] Zhengyang Hu just gave an Oral talk at ICML on neural estimation of mutual information (InfoNet). Congrats!
  • [07/2024] Grateful to receive the Early Career Award from the Research Grants Council (RGC). Thanks also to the committee and the reviewers for their insightful comments!
  • [06/2024] I will be teaching with Prof. Yi Ma at the summer school on Towards AI by Deep Neural Networks.
  • [03/2024] I am serving as an Area Chair for the European Conference on Computer Vision (ECCV 2024).
  • [01/2024] Our Conference on Parsimony and Learning (CPAL) has successfully concluded.

Publications

See Google Scholar for a full list of papers. *: equal contributions, †: corresponding author

CigTime: Corrective Instruction Generation Through Inverse Motion Editing

Qihang Fang, Chengcheng Tang, Bugra Tekin, Yanchao Yang

NeurIPS 2024

arXiv/code/project page

Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation

Qingwen Bu*, Jia Zeng*, Li Chen*, Yanchao Yang, Guyue Zhou, Junchi Yan, Ping Luo, Heming Cui, Yi Ma, Hongyang Li

NeurIPS 2024

arXiv/code/project page

MaxMI: A Maximal Mutual Information Criterion for Manipulation Concept Discovery

Pei Zhou, Yanchao Yang

ECCV 2024 Oral

arXiv/code/project page

InfoNorm: Mutual Information Shaping of Normals for Sparse-View Reconstruction

Xulong Wang*, Siyan Dong*, Youyi Zheng, Yanchao Yang

ECCV 2024

arXiv/code/project page

SG-NeRF: Neural Surface Reconstruction with Scene Graph Optimization

Yiyang Chen*, Siyan Dong*, Xulong Wang, Lulu Cai, Youyi Zheng, Yanchao Yang

ECCV 2024

arXiv/code/project page

DISCO: Embodied Navigation and Interaction via Differentiable Scene Semantics and Dual-level Control

Xinyu Xu, Shengcheng Luo, Yanchao Yang, Yong-Lu Li, Cewu Lu

ECCV 2024

arXiv/code/project page

Revisit Human-Scene Interaction via Space Occupancy

Xinpeng Liu, Haowen Hou, Yanchao Yang, Yong-Lu Li, Cewu Lu

ECCV 2024

arXiv/code/project page

InfoNet: Neural Estimation of Mutual Information without Test-Time Optimization

Zhengyang Hu, Song Kang, Qunsong Zeng, Kaibin Huang, Yanchao Yang

ICML 2024 Oral

arXiv/code/project page

InfoCon: Concept Discovery with Generative and Discriminative Informativeness

Ruizhe Liu, Qian Luo, Yanchao Yang

ICLR 2024

arXiv/code/project page

Text2Reward: Reward Shaping with Language Models for Reinforcement Learning

Tianbao Xie, Siheng Zhao, Chen Henry Wu, Yitao Liu, Qian Luo, Victor Zhong, Yanchao Yang, Tao Yu

ICLR 2024 Spotlight

arXiv/code/project page

Members

Professional Service