Optimal Control and Estimation
Courier Corporation, Oct 16, 2012 - 672 pages

"An excellent introduction to optimal control and estimation theory and its relationship with LQG design. . . . invaluable as a reference for those already familiar with the subject." — Automatica

This highly regarded graduate-level text provides a comprehensive introduction to optimal control theory for stochastic systems, emphasizing the application of its basic concepts to real problems. The first two chapters introduce optimal control and review the mathematics of control and estimation. Chapter 3 addresses the optimal control of systems that may be nonlinear and time-varying, but whose inputs and parameters are known without error. Chapter 4 presents methods for estimating the dynamic states of a system that is driven by uncertain forces and is observed with random measurement error. Chapter 5 discusses the general problem of stochastic optimal control, and the concluding chapter covers linear time-invariant systems.

Robert F. Stengel is Professor of Mechanical and Aerospace Engineering at Princeton University, where he directs the Topical Program on Robotics and Intelligent Systems and the Laboratory for Control and Automation. He was a principal designer of the Project Apollo Lunar Module control system.

"An excellent teaching book with many examples and worked problems which would be ideal for self-study or for use in the classroom. . . . The book also has a practical orientation and would be of considerable use to people applying these techniques in practice." — Short Book Reviews, Publication of the International Statistical Institute

"An excellent book which guides the reader through most of the important concepts and techniques. . . . A useful book for students (and their teachers) and for those practicing engineers who require a comprehensive reference to the subject." — Library Reviews, The Royal Aeronautical Society
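To give a flavor of the state-estimation material described above (the topic of Chapter 4), here is a minimal sketch of a discrete-time Kalman filter for a one-dimensional constant-velocity tracking problem. This is a generic illustration, not code from the book; the model matrices and noise covariances are assumed for the example.

```python
import numpy as np

# System: x_{k+1} = F x_k + w_k,  z_k = H x_k + v_k
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])        # state transition (position, velocity)
H = np.array([[1.0, 0.0]])        # we measure position only
Q = 1e-4 * np.eye(2)              # process-noise covariance (assumed)
R = np.array([[0.25]])            # measurement-noise covariance (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle; returns the new estimate and covariance."""
    # Predict: propagate the state and its covariance through the dynamics
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: weight the measurement residual by the optimal (Kalman) gain
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Track a target moving at unit velocity from noisy position measurements.
rng = np.random.default_rng(0)
x_est = np.array([0.0, 0.0])
P = np.eye(2)
for k in range(1, 51):
    z = np.array([k * 1.0 + rng.normal(0.0, 0.5)])  # noisy position reading
    x_est, P = kalman_step(x_est, P, z)

print(x_est)  # estimated [position, velocity], near [50, 1]
```

The filter recovers the unmeasured velocity from position measurements alone, illustrating the interplay of process noise, measurement noise, and gain computation that the book develops rigorously.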
Contents
OPTIMAL TRAJECTORIES AND NEIGHBORING OPTIMAL
OPTIMAL STATE ESTIMATION
STOCHASTIC OPTIMAL CONTROL
EPILOGUE
ABOUT THE AUTHOR