Fengrui Tian   田丰瑞

I'm an incoming Fall 2024 Ph.D. student at the GRASP Lab, University of Pennsylvania, advised by Prof. René Vidal. My main research interest is 3D vision.

Currently, I am a master's student at the College of Artificial Intelligence, Xi'an Jiaotong University, where I work on computer vision under the supervision of Prof. Shaoyi Du. I am also a research intern at Tsinghua University, where I am very fortunate to work with Prof. Yueqi Duan on novel view synthesis.

Before that, I was a research intern at the CCVL group of Johns Hopkins University, advised by Prof. Alan Yuille and Angtian Wang. I was also an intern at Shanghai Artificial Intelligence Laboratory, SenseTime Research, and Megvii Research. I received my B.Eng. degree from the School of Software Engineering of Xi'an Jiaotong University, supervised by Prof. Zhiqiang Tian, with the honor of Best Undergraduate Thesis (rank 1/110+).

I support Slow Science, though it is hard for me to practice it at the moment.

Email  /  Google Scholar  /  Github  /  Full CV (updated 2023.1.19)



Selected Publications
Learning a Category-level Pose Estimator without Pose Annotations
Fengrui Tian, Yaoyao Liu, Alan Yuille, Adam Kortylewski, Yueqi Duan, Shaoyi Du, Angtian Wang
Under review, 2024
arxiv  

We propose to leverage diffusion models to train a category-level pose estimator without requiring any pose annotations.

Semantic Flow: Learning Semantic Fields of Dynamic Scenes from Monocular Videos
Fengrui Tian, Yueqi Duan, Angtian Wang, Jianfei Guo, Shaoyi Du
ICLR, 2024
open review  /  arxiv  /  code  /  bibtex

We propose Semantic Flow that builds semantic fields of dynamic scenes from monocular videos.

MonoNeRF: Learning a Generalizable Dynamic Radiance Field from Monocular Videos
Fengrui Tian, Shaoyi Du, Yueqi Duan
ICCV, 2023
pdf  /  arxiv  /  poster  /  code  /  youtube video  /  talk at CCVL  /  bibtex

We propose MonoNeRF for learning a generalizable dynamic radiance field from monocular videos. While 2D local features and optical flows each suffer from ambiguity along the ray direction when used independently, together they provide complementary constraints for jointly learning 3D point features and scene flows.

TCVM: Temporal Contrasting Video Montage Framework for Self-supervised Video Representation Learning
Fengrui Tian, Jiawei Fan, Xie Yu, Shaoyi Du, Meina Song, Yu Zhao
ACCV, 2022   (Oral Presentation, Best Paper Award Honorable Mention)
pdf  /  bibtex

We propose a framework for self-supervised video contrastive learning.

Surface-GCN: Learning Interaction Experience for Organ Segmentation in 3D Medical Images
Fengrui Tian, Zhiqiang Tian, Zhang Chen, Dong Zhang, Shaoyi Du
Medical Physics, 2023   (IF: 4.071)
paper  /  pdf  /  supplementary material  /  bibtex

We propose a framework for learning radiologists' experience in medical organ segmentation.



Academic Services
Conference Reviewer
  • IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR) 2023, 2024
  • ACM Multimedia (ACM MM) 2024

Journal Reviewer
  • IEEE Transactions on Circuits and Systems for Video Technology (TCSVT)
  • ACM Transactions on Graphics (ToG)


Honors
  • ACCV Best Paper Award Honorable Mention (2/278),  2022
  • Top 15 Postgraduates Honorable Mention, Xi'an Jiaotong University (Top 0.1%),  2023
  • Special Scholarship, Xi'an Jiaotong University (Top 10%),  2022
  • Best Undergraduate Thesis, Xi'an Jiaotong University (Top 1%),  2021

Misc
    I am a badminton player. I played badminton at XJTU, Tsinghua University, SenseTime Corp., and JHU.
    Here are several awards I have won:
  • Quarterfinalist, Badminton Men's Singles Contest, SenseTime Corp.,  2023
  • Semifinalist, Freshman Cup Badminton, Faculty of Electronic and Information Engineering, XJTU,  2022
    Don't hesitate to contact me if you want to play badminton with me!

    I had a wonderful time participating in Model United Nations activities with my closest friends during my undergraduate studies. I cherish those memories and will never forget them.

    The website template is borrowed from this source code and this page.