Michael Tang

I work on language model post-training and am currently based in San Francisco.

Previously, I was an undergraduate at Princeton University, where I'm incredibly grateful to have been advised by Prof. Karthik Narasimhan, Dr. Shunyu Yao, and Prof. Benjamin Eysenbach. I also spent time at Google, Citadel Securities, and Redwood Research.

Some other interests: technology policy, AI for health, mechanism design, and cardistry. Feel free to reach out by emailing [x]@alumni.princeton.edu, where [x] is replaced by mwtang.


Selected Work

A Single Goal is All You Need: Skills and Exploration Emerge from Contrastive RL without Rewards, Demonstrations, or Subgoals
Grace Liu, Michael Tang, Benjamin Eysenbach
ICLR 2025, IMOL Workshop @ NeurIPS 2024 (Oral)
paper | code | site | tweet

BRIGHT: Benchmarking Reasoning-Intensive Retrieval
Hongjin Su*, Howard Yen*, Mengzhou Xia*, Weijia Shi, Niklas Muennighoff, Han-yu Wang, Haisu Liu, Quan Shi, Zachary S. Siegel, Michael Tang, Ruoxi Sun, Jinsung Yoon, Sercan O. Arik, Danqi Chen, Tao Yu
ICLR 2025 (Spotlight)
paper | code | site | tweet

Can Language Models Solve Olympiad Programming?
Quan Shi*, Michael Tang*, Karthik Narasimhan, Shunyu Yao
COLM 2024
paper | code | site | tweet

Referral Augmentation for Zero-Shot Information Retrieval
Michael Tang, Shunyu Yao, John Yang, Karthik Narasimhan
ACL 2024 (Findings)
paper | code | tweet