I’m a CS Ph.D. student @University of Southern California, advised by Prof. Jieyu Zhao. Before that, I was an M.Eng. student at the Graduate School of Creative Science and Engineering @Waseda University (早稲田大学), Tokyo, supervised by Prof. Masayuki Goto (website in Japanese only). I also spent time as a research assistant @University of Maryland, advised by Prof. Tianyi Zhou, and @University of Tokyo (東京大学), advised by Prof. Irene Li. I work closely with Jieyu Zhang, who focuses on interactive and data-centric AI/ML.

Research Interests: My research interests lie in natural language processing and synthetic data. Specifically, I’m trying to answer the following questions:

  • How can we comprehensively evaluate LLMs/VLMs across different domains?
  • How can we extend the abilities of LLMs/VLMs at minimal cost?
  • How can we enable LLMs/VLMs to collaborate safely, efficiently, and effectively to solve real-world problems?

📢 News

[04/08/2025] New preprint released. Check out Efficient Reinforcement Finetuning via Adaptive Curriculum Learning for more details!

[03/31/2025] New preprint released. Check out Discovering Knowledge Deficiencies of Language Models on Massive Knowledge Base for more details!

[04/10/2024] I will join CS@USC as a Ph.D. student this fall!

📝 Selected Publications

(* denotes equal contribution)

Improving Language Models

Language Model Evaluation

Language Model Agent

Before PhD

🧑‍🏫 Teaching

👨‍💻 Internships

  • Salesforce Research - Research Intern
    2025.05 – present

🏅 Professional Services

  • Maintainer of AG2 (Autogen).
  • Reviewer (for multiple years): WACV, KDD, NeurIPS, DMLR, ICLR, AISTATS, ACL, EMNLP, etc.