Kaiqing Zhang


Ph.D. Candidate

Department of Electrical and Computer Engineering (ECE)
Coordinated Science Laboratory (CSL)
University of Illinois at Urbana-Champaign (UIUC)

Office: Room 360, Coordinated Science Laboratory
Address: 1308 W Main St, Urbana, IL, 61801
Tel: 217-979-1869
E-mail: kzhang66@illinois.edu

Google Scholar

About Me

I am a Ph.D. Candidate in the Department of Electrical and Computer Engineering, and affiliated with the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign. I am fortunate to be advised by Prof. Tamer Başar. My research interests lie broadly in control theory, game theory, reinforcement learning, and their intersections; with applications in intelligent and distributed multi-agent systems, including smart grid, robotics, and transportation systems.

Education

  • University of Illinois at Urbana-Champaign, Urbana, IL: Ph.D. Candidate, Electrical and Computer Engineering, 2017 - Present

  • University of Illinois at Urbana-Champaign, Urbana, IL: M.S., Applied Mathematics, 2015 - 2017

  • University of Illinois at Urbana-Champaign, Urbana, IL: M.S., Electrical and Computer Engineering, 2015 - 2017

  • Tsinghua University, Beijing, China: B.S. in Automation (with Honors) & Dual Degree in Economics, 2011 - 2015

Recent News

I am co-organizing the online seminar series Games, Decisions & Networks; you are welcome to join us!

  • May 2021: Our paper on policy optimization for robust control accepted to SIAM Journal on Control and Optimization (SICON).

  • May 2021: Two papers accepted to ICML 2021.

  • Feb. 2021: I am honored to be awarded the Simons-Berkeley Research Fellowship from the Simons Institute at Berkeley, for the program Learning and Games.

  • Feb. 2021: We have updated our paper on RL for robust control, strengthening the global convergence and implicit regularization results. We are also excited to see that our policy optimization methods can be numerically much faster than some existing general robust control solvers, while achieving competitive H-2 and H-infinity performance.

  • Jan. 2021: A new paper is posted on arXiv, developing sample-based policy optimization methods for risk-sensitive/robust control problems, with global convergence and sample complexity guarantees. It also solves LQ zero-sum dynamic games, a benchmark setting of multi-agent RL, in a model-free fashion.

  • Jan. 2021: A new paper is accepted to ICLR 2021, on scalable (~1000 agents) and safe multi-agent control via learning decentralized control barrier functions.

  • Aug.-Dec. 2020: I am visiting the Simons Institute at Berkeley for the program Theory of Reinforcement Learning this fall.

  • Sept. 2020: Papers accepted to NeurIPS 2020, with one Spotlight.

  • Mar. 2020: I am awarded the prestigious Hong, McCully, and Allen Fellowship ($12,000), the highest fellowship in the ECE Department.

  • Mar. 2020: A shorter version of our paper Policy optimization for H-2 linear control with H-infinity robustness guarantee: Implicit regularization and global convergence has been accepted to L4DC, and selected as one of the 14 top papers for oral presentation.

  • Feb. 2020: I present at the Intersections between Control, Learning and Optimization Workshop held by the Institute for Pure and Applied Mathematics (IPAM) at UCLA.

  • Nov. 2019: Our invited chapter Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms, for the Springer Studies in Systems, Decision and Control: Handbook on RL and Control, has been posted on arXiv. The chapter focuses on reviewing multi-agent RL algorithms that are backed by theoretical analysis.