AAS 97-150

CONDITIONS FOR CONVERGENCE IN INDIRECT ADAPTIVE CONTROL BASED LEARNING CONTROL

H. P. Wen and R. W. Longman, Columbia University

Abstract

Learning control develops controllers that learn from previous experience executing a command in order to improve performance in future repetitions. One form of learning control is based on digital indirect adaptive control concepts. The adaptive problem normally requires knowledge that the system is stable when inverted, i.e., that all zeros are inside the unit circle. This is very constraining, since discretization often introduces zeros outside the unit circle. In this paper we show that this requirement does not apply to the learning control problem: the finite-time nature of the repeating process allows an alternative proof of convergence that eliminates the assumption.
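As an illustration not taken from the paper, the following sketch (assuming Python with NumPy and SciPy) shows how zero-order-hold discretization can place zeros outside the unit circle even when the continuous-time plant has no finite zeros at all. The triple integrator G(s) = 1/s^3 and the sample period are arbitrary choices for illustration only.

```python
# Minimal sketch, assuming scipy is available: discretize a triple integrator
# with a zero-order hold and inspect the resulting discrete-time zeros.
import numpy as np
from scipy.signal import cont2discrete

num_c, den_c = [1.0], [1.0, 0.0, 0.0, 0.0]   # G(s) = 1/s^3, no finite zeros
T = 0.1                                       # sample period (arbitrary)

num_d, den_d, _ = cont2discrete((num_c, den_c), T, method='zoh')
zeros_d = np.roots(np.trim_zeros(num_d.flatten(), 'f'))  # discrete-time zeros

print("discrete zeros:", zeros_d)
print("magnitudes    :", np.abs(zeros_d))
# One zero lands near z = -3.73, outside the unit circle, regardless of the
# sample period, so the discretized plant is not stably invertible even though
# the continuous-time plant has no zeros to begin with.
```

This is the situation the abstract refers to: an inverse-stability assumption on the discretized plant is easily violated in practice, which is why removing it matters for the learning control setting.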