Kaiqing Zhang
About Me
I am an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) and the Institute for Systems Research (ISR), with an affiliate appointment in the Department of Computer Science (CS), at the University of Maryland, College Park. I am also a member of the University of Maryland Institute for Advanced Computer Studies (UMIACS), the Center for Machine Learning, the Maryland Robotics Center (MRC), and the Applied Mathematics & Statistics, and Scientific Computation (AMSC) program. Before joining Maryland, during a deferral period, I was a postdoctoral scholar at LIDS and CSAIL at MIT, and a Research Fellow at the Simons Institute for the Theory of Computing at Berkeley. I received my Ph.D. from the Department of ECE and CSL at the University of Illinois at Urbana-Champaign (UIUC), M.S. degrees in both ECE and Applied Math from UIUC, and a B.E. from Tsinghua University. My research interests lie broadly in Systems and Control Theory, Reinforcement/Machine Learning, Game Theory, Robotics, Computation, and their intersections.
Openings
New (!): We have a new postdoc opening in the group. Please contact me directly with your CV if you are interested.
I am always actively looking for self-motivated Ph.D. students with a strong mathematical and/or programming background.
I might be slow in responding to unsolicited emails. Nevertheless, if you have a strong background and think your interests are compatible with mine, please do not hesitate to contact me. Also feel free to contact me if you have already been admitted and are looking for an advisor, or to mention my name in your Ph.D. application (through the ECE, CS, or AMSC programs) if you are interested in working with me.
Recent News
Sept. 2025: Received the George Corcoran Memorial Award for Teaching and Educational Leadership at UMD.
Sept. 2025: I am co-organizing a workshop on decentralized control/information and (reinforcement) learning at IEEE CDC 2025. Please feel free to sign up if you are a Ph.D. student/postdoc interested in presenting.
Sept. 2025: Received Open Philanthropy AI Safety Research Award.
Sept. 2025: Paper on understanding learning-to-communicate through the lens of information structures from control theory is accepted to IEEE CDC 2025 (congrats to my student Haoyi on his first submission and acceptance!).
June 2025: Tutorial on Recent Advances of Learning in Dynamic Games at ACM SIGMETRICS 2025.
May 2025: Recognized as Outstanding Meta Reviewer for ICML 2025.
May 2025: Paper on post-co-training multiple LLMs with reinforcement learning is accepted to ACL 2025 as a main conference paper (also appeared as an Oral at the ICML workshop).
May 2025: Received Cisco Research Award.
Feb. 2025: Invited to speak at the SILO Seminar and visit UW Madison.
Feb. 2025: Received AFOSR Young Investigator Program (YIP) Award.
Jan. 2025: Paper on better understanding LLM agents for online and multi-agent decision-making is accepted to ICLR 2025 (also appeared as an Oral at the ICLR workshop).
Dec. 2024: Received National Science Foundation (NSF) CAREER Award.
Dec. 2024: Recognized by AAAI New Faculty Highlights 2025.
Sept. 2024: Paper on better understanding partially observable reinforcement learning with privileged information was accepted to NeurIPS 2024.
July 2024: Invited tutorial at the EPFL/ETHZ Multi-Agent Reinforcement Learning Summer School. You may find my tutorial slides (which may be updated over time) here.
April 2024: Invited to speak at the Adaptive Learning in Complex Environments Workshop at TTIC.
April 2024: Invited to speak at the Reinforcement Learning Symposium at Boston University.
March 2024: Invited tutorial at the European Control Conference (ECC) 2024 on Learning-based Control: Fundamentals and Recent Advances.
March 2024: Invited to speak in the Modern Reinforcement Learning session at the Annual Conference on Information Sciences and Systems (CISS).
June 2023: Attended and gave an invited tutorial at the 5th Annual Learning for Dynamics & Control (L4DC) Conference.
May 2023: Organized a workshop at the American Control Conference (ACC) 2023, and invited to speak at the SIAM Conference on Optimization (OP23), on policy optimization for control.
April 2023: Invited to speak at the Robotics Seminar @ Illinois.
April 2023: Paper on partially observable multi-agent RL is accepted to ICML 2023 (congrats to my student Xiangyu on his first submission and acceptance!).
Feb. 2023: Invited to speak in the Learning and Control session at the Information Theory and Applications (ITA) Workshop.
Jan. 2023: Honored to be invited to speak at the Cognition and Control in Complex Systems Workshop hosted by Prof. Sean Meyn.
Oct. 2022: We put together an invited article for the Annual Review of Control, Robotics, and Autonomous Systems on policy optimization for learning control policies.
Oct. 2022: I was awarded the CSL Ph.D. Thesis Award for contributions to reinforcement learning, control theory, and game theory.
July 2022: Our paper on a differentiable simulator for robotics received an Outstanding Paper Award at ICML 2022.
June 2022: Invited article at the International Congress of Mathematicians (ICM), accompanying Asu's talk at ICM 2022 and ICCOPT 2022 Plenary.
May 2022: Invited talk at the Gamification and Multiagent Solutions workshop at International Conference on Learning Representations (ICLR), 2022.
March 2022: Invited talk at the UC Berkeley Semi-Autonomous Seminar.
Jan.-May 2022: I am visiting the Simons Institute at Berkeley as a Research Fellow for the Learning and Games program this spring.
May 2021: I defended my Ph.D. and will be joining the University of Maryland, College Park as an Assistant Professor. I have also started my postdoc at MIT.
May 2021: Our paper on policy optimization for robust control is accepted to SIAM Journal on Control and Optimization (SICON).