Asynchronous Heterogeneous Linear Quadratic Regulator Design

13 Apr 2024 · Leonardo F. Toso, Han Wang, James Anderson

We address the problem of designing an LQR controller in a distributed setting, where M similar but not identical systems share their locally computed policy gradient (PG) estimates with a server that aggregates the estimates and computes a controller that, on average, performs well on all systems. Learning in a distributed setting has the potential to offer statistical benefits: multiple datasets can be leveraged simultaneously to produce more accurate policy gradient estimates. However, the interplay of heterogeneous trajectory data and varying levels of local computational power introduces bias into the aggregated PG descent direction and prevents us from fully exploiting the parallelism in the distributed computation. The latter issue stems from synchronous aggregation, where straggler systems negatively impact the runtime. To address this, we propose an asynchronous policy gradient algorithm for LQR control design. By carefully controlling the "staleness" in the asynchronous aggregation, we show that the designed controller converges to each system's $\epsilon$-near optimal controller up to a heterogeneity bias. Furthermore, we prove that our asynchronous approach obtains exact local convergence at a sub-linear rate.
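To make the aggregation scheme concrete, below is a minimal sketch, not the authors' implementation, of asynchronous zeroth-order policy gradient aggregation over heterogeneous LQR systems with a bounded-staleness rule. Everything here is an illustrative assumption: the system dimensions, the smoothing radius `r`, the step size `eta`, the staleness cap `tau_max`, and the two-point gradient estimator are generic choices, not values or details from the paper.

```python
# Hypothetical sketch of asynchronous PG aggregation with bounded staleness.
import numpy as np

rng = np.random.default_rng(0)
n, m, M = 3, 2, 5                  # state dim, input dim, number of systems
H = 30                             # rollout horizon for cost evaluation
r, eta, tau_max = 0.05, 1e-3, 3    # smoothing radius, step size, staleness cap

# M similar-but-not-identical systems: a common nominal (A, B) plus
# small perturbations modeling heterogeneity (assumed, for illustration).
A0 = 0.3 * rng.standard_normal((n, n))
B0 = rng.standard_normal((n, m))
systems = [(A0 + 0.05 * rng.standard_normal((n, n)),
            B0 + 0.05 * rng.standard_normal((n, m))) for _ in range(M)]
Q, R = np.eye(n), np.eye(m)

def lqr_cost(K, A, B):
    """Finite-horizon LQR cost of the policy u = -Kx from a fixed x0."""
    x, cost = np.ones(n), 0.0
    for _ in range(H):
        u = -K @ x
        cost += x @ Q @ x + u @ R @ u
        x = (A - B @ K) @ x
    return cost

def zo_gradient(K, A, B):
    """Two-point zeroth-order estimate of the local PG direction."""
    U = rng.standard_normal(K.shape)
    U /= np.linalg.norm(U)
    delta = lqr_cost(K + r * U, A, B) - lqr_cost(K - r * U, A, B)
    return (delta / (2 * r)) * U

# Asynchronous server loop: at each step one worker (sampled at random,
# mimicking varying compute speeds) returns a gradient computed at a
# stale copy of the global policy. Updates older than tau_max are dropped,
# which is one simple way to "control staleness" in the aggregation.
K = np.zeros((m, n))                               # global policy iterate
snapshots = {i: (K.copy(), 0) for i in range(M)}   # (stale policy, version)
version = 0
for step in range(400):
    i = int(rng.integers(M))            # worker that finishes now
    K_stale, v = snapshots[i]
    if version - v <= tau_max:          # bounded-staleness acceptance rule
        g = zo_gradient(K_stale, *systems[i])
        K = K - eta * g
        version += 1
    snapshots[i] = (K.copy(), version)  # worker pulls the fresh policy

avg = np.mean([lqr_cost(K, A, B) for A, B in systems])
print(f"average cost across systems: {avg:.3f}")
```

The key design point the sketch illustrates is that the server never waits for stragglers: it applies each gradient as it arrives, but rejects updates whose snapshot is too many versions behind, trading some bias from stale directions for wall-clock parallelism.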
