Michael Tang

I recently graduated from Princeton University, where I'm incredibly grateful to have been advised by Prof. Karthik Narasimhan, Dr. Shunyu Yao, and Prof. Benjamin Eysenbach. I also spent time at Google, Citadel Securities, and Redwood Research.

My research focuses on developing systems that can reason and plan over long horizons. I'm interested in both (1) fundamental algorithms and (2) ways to elicit these capabilities from large language models.

Some other interests: technology policy, AI for health, mechanism design, and cardistry. Feel free to reach out by emailing [x]@alumni.princeton.edu, where [x] is replaced by mwtang.


Selected Work

A Single Goal is All You Need: Skills and Exploration Emerge from Contrastive RL without Rewards, Demonstrations, or Subgoals
Grace Liu, Michael Tang, Benjamin Eysenbach
preprint
paper | code | site | tweet

BRIGHT: Benchmarking Reasoning-Intensive Retrieval
Hongjin Su*, Howard Yen*, Mengzhou Xia*, Weijia Shi, Niklas Muennighoff, Han-yu Wang, Haisu Liu, Quan Shi, Zachary S. Siegel, Michael Tang, Ruoxi Sun, Jinsung Yoon, Sercan O. Arik, Danqi Chen, Tao Yu
preprint
paper | code | site | tweet

Can Language Models Solve Olympiad Programming?
Quan Shi*, Michael Tang*, Karthik Narasimhan, Shunyu Yao
COLM 2024
paper | code | site | tweet

Referral Augmentation for Zero-Shot Information Retrieval
Michael Tang, Shunyu Yao, John Yang, Karthik Narasimhan
ACL 2024 (Findings)
paper | code | tweet