Automatica, Vol.46, No.6, 999-1007, 2010
A new critical theorem for adaptive nonlinear stabilization
It is fairly well known that there are fundamental differences between adaptive control of continuous-time and discrete-time nonlinear systems. In fact, even for the seemingly simple single-input single-output control system y(t+1) = θ₁f(y(t)) + u(t) + w(t+1), with a scalar unknown parameter θ₁, a noise disturbance {w(t)}, and a known function f(·) whose possibly nonlinear growth rate is characterized by |f(x)| = Θ(|x|^b) with b ≥ 1, the necessary and sufficient condition for the system to be globally stabilizable by adaptive feedback is b < 4. This was first found and proved by Guo (1997) for the Gaussian white noise case, and then proved by Li and Xie (2006) for the bounded noise case. Subsequently, a number of other "critical values" and "impossibility theorems" on the maximum capability of adaptive feedback were found, mainly for systems with a known control parameter, as in the model above. In this paper, we study the same basic model but with an additional unknown control parameter θ₂, i.e., u(t) is replaced by θ₂u(t) in the model above. Interestingly, it turns out that the system is globally stabilizable if and only if b < 3. This is a new critical theorem for adaptive nonlinear stabilization, with meaningful implications for the control of more general uncertain nonlinear systems.
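As an illustration only, the sketch below simulates the scalar model y(t+1) = θ₁f(y(t)) + θ₂u(t) + w(t+1) with f(x) = sign(x)|x|^b under a naive certainty-equivalence controller driven by recursive least squares. Everything here is an assumption for illustration: the parameter values, the function name `simulate`, and the controller itself, which is not the feedback construction used in the paper. The critical exponents b < 4 and b < 3 concern the existence of some stabilizing adaptive feedback, so a run of one particular scheme only illustrates the model, not the theorem.

```python
import numpy as np

def simulate(b=2.5, theta=(1.2, 0.8), T=100, noise_std=0.1, seed=0):
    """Simulate y(t+1) = theta1*f(y(t)) + theta2*u(t) + w(t+1)
    under a certainty-equivalence controller with recursive least squares.
    Illustrative sketch only; not the paper's stabilizing design."""
    rng = np.random.default_rng(seed)
    theta1, theta2 = theta
    f = lambda x: np.sign(x) * np.abs(x) ** b  # |f(x)| = |x|^b, b >= 1

    est = np.array([0.5, 1.0])   # initial guess for (theta1, theta2)
    P = 100.0 * np.eye(2)        # RLS covariance-like matrix

    y = np.zeros(T + 1)
    y[0] = 1.0
    for t in range(T):
        # Certainty-equivalence control: cancel the estimated nonlinearity.
        t1_hat, t2_hat = est
        t2_safe = t2_hat if abs(t2_hat) > 1e-3 else 1e-3  # crude guard against division by ~0
        u = -t1_hat * f(y[t]) / t2_safe

        w = noise_std * rng.standard_normal()
        y[t + 1] = theta1 * f(y[t]) + theta2 * u + w

        if not np.isfinite(y[t + 1]) or abs(y[t + 1]) > 1e8:
            print(f"state blew up at t={t + 1} (b={b})")
            return y[: t + 2]

        # RLS update with regressor phi = (f(y(t)), u(t))
        phi = np.array([f(y[t]), u])
        denom = 1.0 + phi @ P @ phi
        K = P @ phi / denom
        est = est + K * (y[t + 1] - phi @ est)
        P = P - np.outer(K, phi @ P)

    print(f"max |y| over run: {np.max(np.abs(y)):.3g} (b={b})")
    return y

if __name__ == "__main__":
    simulate(b=2.5)  # below the critical value b = 3; this scheme typically keeps y bounded
    simulate(b=3.5)  # above the critical value; blow-up is typical for this naive scheme
```

With these assumed values, the run with b = 2.5 usually stays bounded while the run with b = 3.5 usually diverges within a few steps, which loosely mirrors, but does not prove, the b < 3 threshold stated in the abstract.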