Stability of an Automatic Control System

the ability of an automatic control system to function normally and to compensate for various unavoidable perturbations. The state of a system is called stable if the deviation from that state remains arbitrarily small for any sufficiently small change in the input signal. The stability of different types of automatic control systems is determined by different methods. An accurate and rigorous theory of the stability of systems described by ordinary differential equations was developed by A. M. Liapunov in 1892.
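The definition can be illustrated numerically: for a stable state, a small initial deviation from equilibrium remains small. A minimal sketch, assuming the hypothetical first-order system dx/dt = -x with equilibrium x = 0 (chosen here purely for illustration):

```python
# Illustrative sketch of the stability definition for dx/dt = -x,
# whose equilibrium x = 0 is asymptotically stable.
def simulate(x0, dt=0.001, steps=5000):
    """Integrate dx/dt = -x by the explicit Euler method.
    Returns the final deviation and the largest deviation observed."""
    x = x0
    peak = abs(x0)
    for _ in range(steps):
        x += dt * (-x)          # Euler step for dx/dt = -x
        peak = max(peak, abs(x))
    return x, peak

# A small initial perturbation of 0.01 never grows and decays toward 0.
final, peak = simulate(0.01)
```

Here the deviation is monotonically decreasing, so the peak deviation equals the initial perturbation and the final value is much smaller, matching the definition of (asymptotic) stability.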

All the states of a linear automatic control system are either stable or unstable, and it is thus possible to speak of the stability of the system as a whole. For a stationary linear automatic control system described by ordinary differential equations to be stable, it is necessary and sufficient that all roots of the system’s characteristic equation have negative real parts (the system is then asymptotically stable). There exist various criteria that make it possible to determine the signs of the real parts of the roots directly from the equation’s coefficients, without solving the equation. The Routh criterion (named after the British mathematician E. Routh) and the Hurwitz criterion (named after the German mathematician A. Hurwitz) are used in the analysis of automatic control systems described by low-order (up to fourth-order) differential equations. In many cases, however (for example, for systems described by higher-order equations), these criteria cannot be used because of the enormous calculations required; moreover, finding the characteristic equation itself for a complex system involves laborious mathematical operations. By contrast, the frequency characteristics of even very complex automatic control systems can easily be determined by simple graphical and algebraic operations. Consequently, the Nyquist criterion (named after the American physicist H. Nyquist) and the Mikhailov criterion (named after A. V. Mikhailov, the Soviet specialist in automatic control) are usually used in the analysis and design of stationary linear automatic control systems. The Nyquist criterion is particularly simple and convenient in practical applications.
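The Hurwitz criterion reduces the question of the root signs to the signs of determinants formed from the coefficients: the polynomial a0·s^n + a1·s^(n−1) + … + an (a0 > 0) has all roots in the left half-plane exactly when all leading principal minors of its Hurwitz matrix are positive. A minimal sketch; the sample polynomials are illustrative, not taken from the article:

```python
def det(m):
    """Determinant by Laplace expansion (adequate for the small matrices here)."""
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j, a in enumerate(m[0]):
        if a:
            minor = [row[:j] + row[j + 1:] for row in m[1:]]
            total += (-1) ** j * a * det(minor)
    return total

def is_hurwitz_stable(coeffs):
    """coeffs = [a0, a1, ..., an] of a0*s^n + ... + an, with a0 > 0.
    Hurwitz criterion: all roots have negative real parts iff every
    leading principal minor of the Hurwitz matrix is positive."""
    n = len(coeffs) - 1
    a = lambda k: coeffs[k] if 0 <= k <= n else 0.0
    # Entry in row i, column j (1-indexed) is the coefficient a_{2j - i}.
    H = [[a(2 * j - i) for j in range(1, n + 1)] for i in range(1, n + 1)]
    return all(det([row[:k] for row in H[:k]]) > 0 for k in range(1, n + 1))

print(is_hurwitz_stable([1, 3, 2]))   # (s+1)(s+2): stable, prints True
print(is_hurwitz_stable([1, -3, 2]))  # (s-1)(s-2): unstable, prints False
```

Note that a marginally stable polynomial such as s^3 + s^2 + s + 1 (a root pair on the imaginary axis) is correctly rejected, since one of its minors vanishes.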

The set of parameter values of an automatic control system for which the system is stable is called the stability region. The proximity of an automatic control system to the boundary of the stability region is evaluated by means of the gain and phase stability margins, which are determined from the gain and phase characteristics of the open-loop system. The modern theory of linear automatic control systems provides methods for analyzing the stability of systems with lumped and with distributed parameters, of continuous and discrete-data systems, and of stationary and nonstationary systems.
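The gain and phase margins can be read off the open-loop frequency response: the gain margin is the factor by which the gain may grow before the Nyquist locus reaches the critical point, measured at the frequency where the phase is −180°; the phase margin is the additional phase lag tolerable at the frequency of unit gain. A sketch for the hypothetical open-loop transfer function L(s) = 4 / (s(s + 1)(s + 2)), chosen here only for illustration:

```python
import cmath
import math

# Hypothetical open-loop transfer function L(s) = 4 / (s (s+1) (s+2)).
def L(s):
    return 4.0 / (s * (s + 1.0) * (s + 2.0))

# Phase-crossover frequency, where arg L(jw) = -180 deg. For this L it is
# w = sqrt(2), since the phases of jw, jw+1, jw+2 then sum to 180 deg.
w_pc = math.sqrt(2.0)
gain_margin = 1.0 / abs(L(1j * w_pc))   # > 1 means a margin of stability

# Gain-crossover frequency, where |L(jw)| = 1; |L| decreases monotonically
# on [0.1, 10], so bisection suffices.
lo, hi = 0.1, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if abs(L(1j * mid)) > 1.0:
        lo = mid
    else:
        hi = mid
w_gc = 0.5 * (lo + hi)
phase_margin = 180.0 + math.degrees(cmath.phase(L(1j * w_gc)))
```

For this L the gain margin comes out as 1.5 (the loop gain could grow by half before instability) and the phase margin is about 12°, i.e. the system is stable but close to the boundary of the stability region.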

The stability problem for nonlinear automatic control systems has several important features that distinguish it from the linear case. Depending on the nature of the nonlinearity, some states of a system may be stable and others unstable. In the stability theory of nonlinear systems, one therefore speaks of the stability of a given state rather than of the system as such. The stability of a state of a nonlinear system may be maintained when the perturbations acting on the system are sufficiently small but lost under large perturbations. The concepts of stability in the small, stability in a finite region, and global stability (stability in the large) are therefore introduced. An important concept is that of absolute stability, that is, stability under initial perturbations of arbitrary magnitude and for any nonlinearity of the system from a specified class of nonlinearities. The stability analysis of nonlinear systems is rather complex, even when computers are used. Liapunov functions are often used to find sufficient conditions for stability. Sufficient frequency-domain criteria for absolute stability have been proposed by the Romanian mathematician V.-M. Popov and others. In addition to exact methods of stability analysis, approximate methods based on describing functions, such as the methods of harmonic and statistical linearization, may also be used.
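The Liapunov-function method can be sketched on a standard textbook example (the damped pendulum, used here purely for illustration): a function V that vanishes at the equilibrium, is positive elsewhere, and does not increase along trajectories provides a sufficient condition for the stability of that state.

```python
import math
import random

# Damped pendulum:  x1' = x2,  x2' = -sin(x1) - x2,  equilibrium (0, 0).
# Energy-like Liapunov candidate: V(x) = (1 - cos x1) + x2**2 / 2.
def V(x1, x2):
    return (1.0 - math.cos(x1)) + 0.5 * x2 * x2

def V_dot(x1, x2):
    # Derivative of V along trajectories (chain rule):
    #   dV/dt = sin(x1) * x1' + x2 * x2' = -x2**2  <=  0
    return math.sin(x1) * x2 + x2 * (-math.sin(x1) - x2)

# Spot-check the sign condition dV/dt <= 0 on random states near the origin.
random.seed(0)
samples = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(1000)]
non_increasing = all(V_dot(a, b) <= 1e-12 for a, b in samples)
```

Since dV/dt = −x2² ≤ 0 everywhere, V never increases along trajectories and the equilibrium (0, 0) is stable; a numerical check of this kind gives only evidence, whereas the algebraic simplification gives the proof.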

The stability of an automatic control system subjected to random perturbations and interference is studied by means of the stability theory of stochastic systems.

Modern computer technology makes it possible to solve many stability problems for linear and nonlinear systems of various types, both by familiar algorithms and by new, special algorithms that exploit the capabilities of modern computer systems.

REFERENCES

Liapunov, A. M. Obshchaia zadacha ob ustoichivosti dvizheniia: Sobr. soch., vol. 2. Moscow-Leningrad, 1956.
Voronov, A. A. Osnovy teorii avtomaticheskogo upravleniia, vol. 2. Moscow-Leningrad, 1966.
Naumov, B. N. Teoriia nelineinykh avtomaticheskikh sistem: Chastotnye metody. Moscow, 1972.
Osnovy avtomaticheskogo upravleniia, 3rd ed. Edited by V. S. Pugachev. Moscow, 1974.

V. S. PUGACHEV and I. N. SINITSYN