Zhixuan Xu | 徐志轩

Hi there! I'm an incoming PhD student at the School of Computing, NUS, advised by Prof. Lin Shao. I'm currently a fourth-year undergraduate (Sep. 2020 - ) majoring in Robotics 🤖 at Zhejiang University. I was also fortunate to previously work with Kechun Xu, Prof. Rong Xiong, and Prof. Yue Wang at ZJU.
My research interests lie in robot learning and dexterous manipulation 🦾. I'm open to collaborations on robotics-related projects! If you are a researcher looking for a partner, or a student looking for mentorship, feel free to contact me 👋.

Github     Google Scholar     ariszxxu [at] gmail [dot] com

ManiFoundation Model for General-Purpose Robotic Manipulation of Contact Synthesis with Arbitrary Objects and Robots

Zhixuan Xu*, Chongkai Gao*, Zixuan Liu*, Gang Yang*, Chenrui Tie, Haozhuo Zheng, Haoyu Zhou, Weikun Peng, Debang Wang, Tianyi Chen, Zhouliang Yu, Lin Shao
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2024
★ Oral ★
TL;DR: Introduced a framework that takes contact synthesis as a unified task representation, generalizing over objects, robots, and manipulation tasks. With a single ManiFoundation Model, we can manipulate 1D, 2D, and 3D deformable objects, as well as articulated and rigid objects, using either grippers or dexterous hands.
Website  •   arXiv   •   Code

Diff-LfD: Contact-aware Model-based Learning from Visual Demonstration for Robotic Manipulation via Differentiable Physics-based Simulation and Rendering

Xinghao Zhu, Jinghan Ke, Zhixuan Xu, Zhixin Sun, Bizhe Bai, Jun Lv, Qingtao Liu, Yuwei Zeng, Qi Ye, Cewu Lu, Masayoshi Tomizuka, Lin Shao
Conference on Robot Learning (CoRL) 2023
★ Oral ★
TL;DR: Proposed a self-supervised approach to reconstruct and extract object shapes and 6D poses from monocular human demonstration RGB videos using differentiable rendering. Combined global contact sampling with a robust gradient approximation technique for model-based robotic manipulation with the aid of differentiable simulation.
Website  •   Paper

Object-centric Inference for Language Conditioned Placement: A Foundation Model based Approach

Zhixuan Xu, Kechun Xu, Yue Wang, Rong Xiong
IEEE International Conference on Advanced Robotics and Mechatronics (ICARM) 2023
TL;DR: Proposed leveraging pre-trained large language models and visual language models, and training residual blocks, to achieve better generalization to unseen instructions and objects and higher sample efficiency.
arXiv  •   IEEE  

Some of the handmade toy projects from my first two years of undergraduate study.


First Place in the 16th “China Control Cup” Robotic Competition for College Students of Zhejiang University
A self-built mobile manipulator that can navigate through a simulated supermarket and retrieve required objects.


First Place in the 3rd Zhejiang University Intelligent Robot Competition

A self-built quadrotor that can fly through obstacles and land at the target point.


A self-built holographic projection system with remote-controlled positioning and rotation.


A self-built mobile manipulator that rearranges blocks according to color requirements.


A self-built delivery car that transports goods based on number requirements.