Greetings from Evanston.

I am currently a Ph.D. candidate at Northwestern University under the supervision of Prof. Ying Wu. My research interests lie at the intersection of computer vision and robotics, with a particular emphasis on active vision, where the agent is endowed with the ability to move and perceive. I investigate the challenges that active vision agents face in an open-world context, including, but not limited to, continual learning, few-sample learning, uncertainty quantification, and vision-language models.

Prior to my Ph.D., my research primarily focused on perception for autonomous driving vehicles, encompassing areas such as stereo vision, 3D mapping, moving-object detection, and map repair.

My detailed resume/CV is here (last updated in February 2024).

🔥 News

  • 2024.05: The dataset we proposed for evaluating active recognition is now publicly available! Please refer to the project page for details.
  • 2024.04: 🎉 I have successfully defended my Ph.D.! I would like to extend my gratitude to my committee: Prof. Ying Wu, Prof. Qi Zhu, and Prof. Thrasos N. Pappas. I will join Amazon Robotics as an Applied Scientist this summer!
  • 2024.02: 🎉 Two papers on active recognition for embodied agents have been accepted to CVPR 2024! Thanks to all my collaborators!
  • 2023.07: 🎉 Our paper on uncertainty estimation has been accepted to ICCV 2023! Thanks to all my advisors: Dr. Bo Liu, Dr. Haoxiang Li, Prof. Ying Wu, and Prof. Gang Hua!

📖 Education

  • 2019.09 - 2024.06 (expected), Ph.D. candidate in Electrical Engineering, advised by Prof. Ying Wu, Northwestern University.
  • 2017.09 - 2019.06, M.S. in Computer Science, advised by Prof. Long Chen, Sun Yat-sen University.
  • 2013.09 - 2017.06, B.E. in Computer Science, Sun Yat-sen University.

๐Ÿ“ Publications

To appear at CVPR 2024

Active Open-Vocabulary Recognition: Let Intelligent Moving Mitigate CLIP Limitations

Lei Fan, Jianxiong Zhou, Xiaoying Xing, Ying Wu

Project (coming soon) | Video

  • Investigate CLIP's limitations in embodied perception scenarios, emphasizing diverse viewpoints and degrees of occlusion.
  • Propose an active agent that mitigates CLIP's limitations, aiming for active open-vocabulary recognition.
To appear at CVPR 2024

Evidential Active Recognition: Intelligent and Prudent Open-World Embodied Perception

Lei Fan, Mingfu Liang, Yunxuan Li, Gang Hua, Ying Wu

Supplementary | Dataset | Project (coming soon) | Video

  • Handle unexpected visual inputs during the training and testing of embodied agents in open environments.
  • Collect a dataset for evaluating active recognition agents, where each testing sample is accompanied by a recognition difficulty level.
  • Apply evidential deep learning and evidence combination for frame-wise information fusion, mitigating interference from unexpected images.
ICCV 2023

Flexible Visual Recognition by Evidential Modeling of Confusion and Ignorance

Lei Fan, Bo Liu, Haoxiang Li, Ying Wu, Gang Hua

Supplementary | Poster | Project

  • Model both confusion and ignorance with hyper-opinions.
  • Propose a hierarchical structure with binary plausibility functions to handle the challenge of 2^K predictions.
  • Validate the approach with experiments on synthetic data, flexible visual recognition, and open-set detection.
WACV 2023

Avoiding Lingering in Learning Active Recognition by Adversarial Disturbance

Lei Fan, Ying Wu

Supplementary | Poster

  • Lingering: the joint learning process can converge to unintended solutions, such as a collapsed policy that only visits views the recognizer is already sufficiently trained on, simply to collect rewards.
  • Our approach introduces an additional adversarial policy that disturbs the recognition agent during training, forming a competing game that promotes active exploration and avoids lingering.
ICCV 2021

FLAR: A Unified Prototype Framework for Few-sample Lifelong Active Recognition

Lei Fan, Peixi Xiong, Wei Wei, Ying Wu

Supplementary | Poster

  • The active recognition agent needs to incrementally learn new classes with limited data during exploration.
  • Our approach integrates prototypes, a robust representation for limited training samples, into a reinforcement learning solution, which motivates the agent to move towards views resulting in more discriminative features.

💻 Internships

  • 2023.06 - 2023.09, Applied Scientist Intern, Amazon Robotics, Seattle, US.
    - Topic: Surface normal estimation and stability analysis.
    - Advisors: Dr. Shantanu Thaker, Dr. Sisir Karumanchi.
  • 2022.06 - 2022.09, Research Intern, Wormpex AI Research, Bellevue, US.
    - Topic: Uncertainty quantification for deep visual recognition.
    - Advisors: Dr. Bo Liu, Dr. Haoxiang Li, and Dr. Gang Hua.
  • 2020.06 - 2020.09, Research Intern, Yosion Analytics, Chicago, US.
    - Topic: Autonomous forklift in a human-machine co-working environment.
  • 2016.06 - 2016.09, Visual Engineer Intern, DJI, Shenzhen, China.
    - Topic: Stereo matching using fish-eye cameras on drones.

🎖 Honors and Awards

  • 2019.09 Northwestern University Murphy Fellowship.
  • 2018.06 Best Student Paper, IEEE Intelligent Vehicles Symposium.
  • 2019.09 National Merit Scholarship, China.