About me

I am a Ph.D. student at Nanyang Technological University, under the supervision of Prof. Shijian Lu. My research interests include computer vision and unsupervised learning.

Before my Ph.D. study, I received my B.Sc. degree in electronic information science and technology from the University of Electronic Science and Technology of China (UESTC) and my M.Sc. degree in signal processing from Nanyang Technological University (NTU).

Publications

  • Jiaxing Huang, Kai Jiang, Jingyi Zhang, Han Qiu, Lewei Lu, Shijian Lu, Eric Xing. “Learning to Prompt Segment Anything Models.” arXiv 2024. [Paper]
  • Jiaxing Huang, Jingyi Zhang, Kai Jiang, Han Qiu, Shijian Lu. “Visual Instruction Tuning towards General-Purpose Multimodal Model: A Survey.” arXiv 2023. [Paper] [Project]
  • Jingyi Zhang, Jiaxing Huang, Sheng Jin, Shijian Lu. “Vision-Language Models for Vision Tasks: A Survey.” TPAMI. [Paper] [Project]
  • Jingyi Zhang, Jiaxing Huang, Xueying Jiang, Shijian Lu. “Black-box Unsupervised Domain Adaptation with Bi-directional Atkinson-Shiffrin Memory.” ICCV 2023. [Paper]
  • Jiaxing Huang, Jingyi Zhang, Han Qiu, Sheng Jin, Shijian Lu. “Prompt Ensemble Self-training for Open-Vocabulary Domain Adaptation.” arXiv 2023. [Paper]
  • Jingyi Zhang, Jiaxing Huang, Zhipeng Luo, Gongjie Zhang, Xiaoqin Zhang, Shijian Lu. “DA-DETR: Domain Adaptive Detection Transformer with Information Fusion.” CVPR 2023. [Paper]
  • Jingyi Zhang, Jiaxing Huang, Xiaoqin Zhang, Shijian Lu. “UniDAformer: Unified Domain Adaptive Panoptic Segmentation Transformer via Hierarchical Mask Calibration.” CVPR 2023. [Paper]
  • Gongjie Zhang, Zhipeng Luo, Yingchen Yu, Zichen Tian, Jingyi Zhang, Shijian Lu. “Towards Efficient Use of Multi-Scale Features in Transformer-Based Object Detectors.” CVPR 2023. [Paper]
  • Zichen Tian, Chuhui Xue, Jingyi Zhang, Shijian Lu. “Domain Adaptive Scene Text Detection via Subcategorization.” arXiv 2022. [Paper]
  • Jingyi Zhang, Jiaxing Huang, Zichen Tian, Shijian Lu. “Spectral Unsupervised Domain Adaptation for Visual Recognition.” CVPR 2022. [Paper]