Kaiqing Zhang


Assistant Professor

Electrical and Computer Engineering (ECE)
Computer Science (CS)
Institute for Systems Research (ISR)
Center for Machine Learning
University of Maryland Institute for Advanced Computer Studies (UMIACS)
Maryland Robotics Center (MRC)
University of Maryland, College Park

Google Scholar

About Me

I am an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) and the Institute for Systems Research (ISR) at the University of Maryland, College Park. I am also affiliated with the Department of Computer Science (CS), the University of Maryland Institute for Advanced Computer Studies (UMIACS), the Center for Machine Learning, the Maryland Robotics Center (MRC), and the Applied Mathematics & Statistics, and Scientific Computation (AMSC) program. Before joining Maryland, I was a postdoctoral scholar affiliated with LIDS and CSAIL at MIT, and a Research Fellow at the Simons Institute for the Theory of Computing at Berkeley. I received my Ph.D. from the Department of ECE and CSL at the University of Illinois at Urbana-Champaign (UIUC), M.S. degrees in both ECE and Applied Math from UIUC, and a B.E. from Tsinghua University. My research interests lie broadly in Control and Decision Theory, Game Theory, Robotics, Reinforcement/Machine Learning, Computation, and their intersections.


I am actively looking for self-motivated Ph.D. students and postdocs with a strong mathematical and/or programming background. I am also happy to host (remote) undergraduate/graduate visitors. I might be slow in responding to unsolicited emails regarding openings. Nevertheless, if you have a strong background and think your interests are compatible with mine, please do not hesitate to contact me with your CV. Also feel free to contact me if you are already admitted and looking for an advisor, or mention my name in your Ph.D. application (through either ECE or CS or AMSC programs) if you are interested in working with me.

Recent News

  • May 2024: Journal version of the ICML paper on non-stationary reinforcement learning is accepted to Management Science (MS), an INFORMS journal.

  • May 2024: New paper on independent Q-learning in zero-sum stochastic games with function approximation is accepted to the ACM Conference on Economics and Computation (EC) 2024.

  • April 2024: Invited to speak at the Adaptive Learning in Complex Environments Workshop at TTIC.

  • April 2024: Invited to speak at the Reinforcement Learning Symposium at Boston University.

  • March 2024: Invited tutorial at the European Control Conference (ECC) 2024 on Learning-based Control: Fundamentals and Recent Advances.

  • March 2024: Invited to speak at the Annual Conference on Information Sciences and Systems (CISS)'s Modern Reinforcement Learning Session.

  • Jan. 2024: New paper on robot fleet learning is accepted to ICLR 2024.

  • Dec. 2023: Invited to give a tutorial at the ETH/EPFL Multi-Agent Reinforcement Learning Summer School in 2024.

  • Nov. 2023: Invited to speak at the INFORMS Optimization Society (IOS) Conference's Recent Advances in Min-Max Optimization and Beyond Session.

  • Sept. 2023: Invited to speak at the Allerton Conference 2023.

  • Sept. 2023: We have 4 of 5 submitted papers accepted to NeurIPS 2023: one on the modeling and algorithms for multi-player zero-sum stochastic games; one on the last-iterate convergence for solving safe RL; one on the finite-sample analysis of best-response-type dynamics for stochastic games; and one on self-supervised transfer RL.

  • June 2023: Attended and gave an invited tutorial at 5th Annual Learning for Dynamics & Control (L4DC) Conference.

  • May 2023: Organized a workshop at the American Control Conference (ACC) 2023, and invited to speak at the SIAM Conference on Optimization (OP23), on policy optimization for control.

  • May 2023: Three papers accepted to COLT 2023: one on the complexity of solving general-sum stochastic games; one on out-of-support distribution shift; and one on independent function approximation in multi-agent RL.

  • April 2023: Invited to speak at the Robotics Seminar @ Illinois.

  • April 2023: Our paper on direct latent model learning for LQG control has been selected for an Oral presentation at L4DC 2023.

  • April 2023: Two papers accepted to ICML 2023: one on offline RL with general function approximation, and one on partially observable multi-agent RL (first submission and acceptance with my student, congrats Xiangyu!).

  • Feb. 2023: Invited to speak at the Information Theory and Applications (ITA) Workshop's Learning and Control Session.

  • Jan. 2023: Honored to be invited to speak at the Cognition and Control in Complex Systems Workshop hosted by Prof. Sean Meyn.

  • Jan. 2023: Our paper on decentralized self-supervised learning is accepted to ICLR 2023; it demonstrates a unique benefit of using self-supervised learning in decentralized/federated learning.

  • Jan. 2023: We have 2 of 2 submitted papers accepted to AISTATS 2023, and 3 of 4 submitted papers accepted to ICLR 2023.

  • Oct. 2022: We wrote an invited article for the Annual Review of Control, Robotics, and Autonomous Systems on policy optimization for learning control policies.

  • Oct. 2022: I am awarded the CSL Ph.D. Thesis Award for contributions in reinforcement learning, control theory, and game theory.

  • Sept. 2022: Our paper on provable policy gradient methods for output estimation problems in control, and our paper on the generalization guarantees of minimax optimization in machine learning, are accepted to NeurIPS 2022. The latter was also accepted as an Oral (one of 4 among all submissions) at the New Frontiers in Adversarial Machine Learning Workshop, ICML 2022.

  • July 2022: Our paper on a differentiable simulator for robotics received an Outstanding Paper Award at ICML 2022.

  • June 2022: Invited article at the International Congress of Mathematicians (ICM), accompanying Asu's talk at ICM 2022 and ICCOPT 2022 Plenary.

  • May 2022: Invited talk at the Gamification and Multiagent Solutions workshop at International Conference on Learning Representations (ICLR), 2022.

  • May 2022: Our paper on independent policy optimization for multi-agent RL (Long Oral), our paper on improving exploration efficiency in multi-agent RL (Short Oral), and our paper on a differentiable simulator for robotics (Long Oral) are accepted to ICML 2022.

  • May 2022: Our paper on fictitious play in identical-interest stochastic games is accepted to EC 2022.

  • Jan.-May 2022: I am visiting the Simons Institute at Berkeley as a Research Fellow for the program Learning and Games this spring.

  • Sept. 2021: Our paper on decentralized Q-learning for zero-sum stochastic games in multi-agent RL, and our paper on the sample complexity of policy gradient methods for solving risk-sensitive and robust control, are accepted to NeurIPS 2021.

  • May 2021: I defended my Ph.D., and will be joining the University of Maryland, College Park as an Assistant Professor in Oct. 2022. I am also starting my postdoc at MIT.

  • May 2021: Our paper on policy optimization for robust control is accepted to SIAM Journal on Control and Optimization (SICON).

  • Jan. 2021: Our paper on scalable (~1000 agents) and safe multi-agent control via learning decentralized control barrier functions is accepted to ICLR 2021.

  • Sept. 2020: Papers accepted to NeurIPS 2020, with one Spotlight.