Description: Adaptive Dynamic Programming for Control by Huaguang Zhang, Derong Liu, Yanhong Luo, Ding Wang
FORMAT Hardcover LANGUAGE English CONDITION Brand New
Publisher Description
There are many methods of stable controller design for nonlinear systems. In seeking to go beyond the minimum requirement of stability, Adaptive Dynamic Programming for Control approaches the challenging topic of optimal control for nonlinear systems using the tools of adaptive dynamic programming (ADP). The range of systems treated is extensive: affine, switched, singularly perturbed and time-delay nonlinear systems are discussed, as are the uses of neural networks and techniques of value and policy iteration. The text features three main aspects of ADP in which the methods proposed for stabilization and for tracking and games benefit from the incorporation of optimal control methods:
• infinite-horizon control, for which the difficulty of solving partial differential Hamilton–Jacobi–Bellman equations directly is overcome, and proof is provided that the iterative value-function updating sequence converges to the infimum of all the value functions obtained by admissible control law sequences;
• finite-horizon control, implemented in discrete-time nonlinear systems, showing the reader how to obtain suboptimal control solutions within a fixed number of control steps, with results more easily applied in real systems than those usually gained from infinite-horizon control;
• nonlinear games, for which a pair of mixed optimal policies is derived for solving games both when the saddle point does not exist and, when it does, avoiding the existence conditions of the saddle point.
Non-zero-sum games are studied in the context of a single-network scheme in which policies are obtained that guarantee system stability and minimize the individual performance function, yielding a Nash equilibrium. In order to make the coverage suitable for the student as well as for the expert reader, Adaptive Dynamic Programming for Control:
• establishes the fundamental theory involved clearly, with each chapter devoted to a clearly identifiable control paradigm;
• demonstrates convergence proofs of the ADP algorithms to deepen understanding of the derivation of stability and convergence with the iterative computational methods used; and
• shows how ADP methods can be put to use both in simulation and in real applications.
This text will be of considerable interest to researchers interested in optimal control and its applications in operations research, applied mathematics, computational intelligence and engineering. Graduate students working in control and operations research will also find the ideas presented here to be a source of powerful methods for furthering their study.
The Communications and Control Engineering series reports major technological advances which have potential for great impact in the fields of communication and control. It reflects research in industrial and academic institutions around the world so that the readership can exploit new possibilities as they become available.
Table of Contents
Optimal Stabilization Control for Discrete-time Systems
Optimal Tracking Control for Discrete-time Systems
Optimal Stabilization Control for Nonlinear Systems with Time Delays
Optimal Tracking Control for Nonlinear Systems with Time-delays
Optimal Feedback Control for Continuous-time Systems via ADP
Several Special Optimal Feedback Control Designs Based on ADP
Zero-sum Games for Discrete-time Systems Based on Model-free ADP
Nonlinear Games for a Class of Continuous-time Systems Based on ADP
Other Applications of ADP
Review
From the book reviews: "This book provides a self-contained treatment of adaptive dynamic programming with applications in feedback control and game theory. ... This book ... will appeal to graduate students, practitioners, and researchers seeking an up-to-date and consolidated treatment of the field." (IEEE Control Systems Magazine, October, 2013)
Feature
Convergence proofs of the algorithms presented teach readers how to derive necessary stability and convergence criteria for their own systems
Establishes the fundamentals of ADP theory so that student readers can extrapolate their learning into control, operations research and related fields
Applications examples show how the theory can be made to work in real example systems
Details
ISBN 1447147561
Author Ding Wang
Short Title ADAPTIVE DYNAMIC PROGRAMMING F
Language English
ISBN-10 1447147561
ISBN-13 9781447147565
Media Book
Format Hardcover
Birth 1959
Imprint Springer London Ltd
Place of Publication England
Country of Publication United Kingdom
DEWEY 629.836
Subtitle Algorithms and Stability
Pages 424
DOI 10.1007/978-1-4471-4757-2
AU Release Date 2012-12-14
NZ Release Date 2012-12-14
UK Release Date 2012-12-14
Publisher Springer London Ltd
Edition Description 2013 ed.
Series Communications and Control Engineering
Year 2012
Edition 2013th
Publication Date 2012-12-14
Alternative 9781447158813
Audience Postgraduate, Research & Scholarly
Illustrations XVI, 424 p.
TheNile_Item_ID:96311382;
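The iterative value-function updating described in the book's blurb follows the standard ADP/value-iteration pattern V_{i+1}(x) = min_u [x'Qx + u'Ru + V_i(f(x, u))]. The following is a minimal illustrative sketch of that pattern, not code from the book: the scalar affine system x_{k+1} = 0.8 x_k + u_k, the cost weights, and the state/control grids are all assumptions chosen for demonstration.

```python
import numpy as np

# Value-iteration sketch: V_{i+1}(x) = min_u [ Q*x^2 + R*u^2 + V_i(f(x,u)) ]
# for the illustrative scalar affine system x_{k+1} = 0.8*x_k + u_k.

Q, R = 1.0, 1.0
xs = np.linspace(-2.0, 2.0, 41)   # discretized state grid
us = np.linspace(-1.0, 1.0, 21)   # admissible control set

def step(x, u):
    # System dynamics, clipped to stay on the state grid
    return np.clip(0.8 * x + u, xs[0], xs[-1])

V = np.zeros_like(xs)             # V_0 = 0, the usual initialization
for i in range(200):
    # Bellman backup over every (state, control) pair via broadcasting
    nxt = step(xs[:, None], us[None, :])
    cost = Q * xs[:, None] ** 2 + R * us[None, :] ** 2
    V_new = np.min(cost + np.interp(nxt, xs, V), axis=1)
    if np.max(np.abs(V_new - V)) < 1e-9:   # nondecreasing sequence settled
        V = V_new
        break
    V = V_new

print(round(V[xs.size // 2], 6))  # → 0.0  (the origin costs nothing)
```

Starting from V_0 = 0, each backup produces a nondecreasing sequence of value functions bounded above, which is the convergence behavior the book establishes rigorously for admissible control law sequences.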
Price: 330.94 AUD
Location: Melbourne
End Time: 2025-01-05T21:47:10.000Z
Shipping Cost: 114.47 AUD
Product Images
Item Specifics
Restocking fee: No
Return shipping will be paid by: Buyer
Returns Accepted: Returns Accepted
Item must be returned within: 30 Days
ISBN-13: 9781447147565
Book Title: Adaptive Dynamic Programming for Control
Number of Pages: 424 Pages
Language: English
Publication Name: Adaptive Dynamic Programming for Control: Algorithms and Stability
Publisher: Springer London Ltd
Publication Year: 2012
Subject: Computer Science, Mathematics
Item Height: 235 mm
Item Weight: 8602 g
Type: Textbook
Author: Ding Wang, Derong Liu, Huaguang Zhang, Yanhong Luo
Subject Area: Mechanical Engineering
Item Width: 155 mm
Format: Hardcover