KMU OLIS
Frontiers of intelligent control and information processing
Record type:
Bibliographic record - Language material, printed : Monograph/item
Title / Author:
Frontiers of intelligent control and information processing / edited by Derong Liu
Added author:
Liu, Derong,
Publisher:
[Hackensack?] New Jersey : World Scientific, 2014.
Physical description:
1 online resource.
Summary:
Current research and development in intelligent control and information processing are increasingly driven by advances from fields outside the traditional control areas, pushing into new frontiers so as to deal with ever more complex systems and ever larger volumes of data. As research in intelligent control and information processing takes on ever more complex problems, the control system, as the nucleus that coordinates activity within a system, increasingly needs to be equipped with the capability to analyze, and ...
Subject:
Information technology.
Electronic resource:
http://www.worldscientific.com/worldscibooks/10.1142/9243#t=toc
ISBN:
9789814616881
LDR    04467cam a2200337Ka 4500
001    288030
003    OCoLC
005    20151106100052.0
006    m o d
007    cr cnu---unuuu
008    151124s2014 nju ob 000 0 eng d
020    $a 9789814616881 $q (electronic bk.)
020    $a 9814616885 $q (electronic bk.)
020    $z 9789814616874
020    $z 9814616877
035    $a (OCoLC)892911209 $z (OCoLC)893332824
035    $a ocn892911209
040    $a N $b eng $c N $d IDEBK $d YDXCP $d CDX $d OCLCQ $d MYG $d EBLCP $d OCLCQ
050  4 $a TJ216 $b .F76 2014eb
082 04 $a 629.8 $2 23
245 00 $a Frontiers of intelligent control and information processing $h [electronic resource] / $c edited by Derong Liu
260    $a [Hackensack?] New Jersey : $b World Scientific, $c 2014.
300    $a 1 online resource.
504    $a Includes bibliographical references.
505 0  $a Preface; Contents; 1. Dynamic Graphical Games: Online Adaptive Learning Solutions Using Approximate Dynamic Programming; 1.1 Introduction; 1.2 Graphs and Synchronization of Multi-Agent Dynamical Systems; 1.2.1 Graphs; 1.2.2 Synchronization and tracking error dynamics; 1.3 Multiple Player Cooperative Games on Graphs; 1.3.1 Graphical games; 1.3.2 Comparison of graphical games with standard dynamic games; 1.3.3 Nash equilibrium for graphical games; 1.3.4 Hamiltonian equation for dynamic graphical games; 1.3.5 Bellman equation for dynamic graphical games.
505 8  $a 1.3.6 Discrete Hamilton-Jacobi theory: Equivalence of Bellman and discrete-time Hamilton-Jacobi equations; 1.3.7 Stability and Nash solution of the graphical games; 1.4 Approximate Dynamic Programming for Graphical Games; 1.4.1 Heuristic dynamic programming for graphical games; 1.4.2 Dual heuristic programming for graphical games; 1.5 Coupled Riccati Recursions; 1.6 Graphical Game Solutions by Actor-Critic Learning; 1.6.1 Actor-critic networks and tuning; 1.6.2 Actor-critic offline tuning with exploration; 1.6.3 Actor-critic online tuning in real-time.
505 8  $a 1.7 Graphical Game Example and Simulation Results; 1.7.1 Riccati recursion offline solution; 1.7.2 Simulation results using offline actor-critic tuning; 1.7.3 Simulation results using online actor-critic tuning; 1.8 Conclusions; Acknowledgement; References; 2. Reinforcement-Learning-Based Online Learning Control for Discrete-Time Unknown Nonaffine Nonlinear Systems; 2.1 Introduction; 2.2 Problem Statement and Preliminaries; 2.2.1 Dynamics of nonaffine nonlinear discrete-time systems; 2.2.2 A single-hidden layer neural network; 2.3 Controller Design via Reinforcement Learning.
505 8  $a 2.3.1 A basic controller design approach; 2.3.2 Critic neural network and weight update law; 2.3.3 Action neural network and weight update law; 2.4 Stability Analysis and Performance of the Closed-Loop System; 2.5 Numerical Examples; 2.5.1 Example 1; 2.5.2 Example 2; 2.6 Conclusions; Acknowledgement; References; 3. Experimental Studies on Data-Driven Heuristic Dynamic Programming for POMDP; 3.1 Introduction; 3.2 Markov Decision Process and Partially Observable Markov Decision Process; 3.2.1 Markov decision process; 3.2.2 Partially observable Markov decision process.
505 8  $a 3.3 Problem Formulation with the State Estimator; 3.4 Data-Driven HDP Algorithm for POMDP; 3.4.1 Learning in the state estimator network; 3.4.2 Learning in the critic and the action network; 3.5 Simulation Study; 3.5.1 Case study one; 3.5.2 Case study two; 3.5.3 Case study three; 3.6 Conclusions and Discussion; Acknowledgement; References; 4. Online Reinforcement Learning for Continuous-State Systems; 4.1 Introduction; 4.2 Background of Reinforcement Learning; 4.3 RLSPI Algorithm; 4.3.1 Policy iteration; 4.3.2 RLSPI; 4.4 Examples of RLSPI; 4.4.1 Linear discrete-time system.
520    $a Current research and development in intelligent control and information processing are increasingly driven by advances from fields outside the traditional control areas, pushing into new frontiers so as to deal with ever more complex systems and ever larger volumes of data. As research in intelligent control and information processing takes on ever more complex problems, the control system, as the nucleus that coordinates activity within a system, increasingly needs to be equipped with the capability to analyze, and ...
588 0  $a Print version record.
650  0 $2 96060 $a Information technology. $3 217625
650  0 $a Automatic control. $3 382400
700 1  $a Liu, Derong, $d 1963- $3 382836
856 40 $u http://www.worldscientific.com/worldscibooks/10.1142/9243#t=toc
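The MARC view above is simply a structured list of fields: a three-character tag, two indicator characters, and an ordered sequence of subfield code/value pairs. For readers who want to process this record programmatically, the following is a minimal sketch in plain Python (no MARC library assumed); the sample data is copied from the fields shown above, and the helper names are illustrative only, not part of any existing library.

# Minimal sketch: each MARC 21 variable field is modeled as a
# (tag, indicators, subfields) tuple, where subfields is a list of
# (code, value) pairs. Data values are copied from the record above;
# the function subfield_values is a hypothetical helper for this example.

from typing import List, Tuple

Field = Tuple[str, str, List[Tuple[str, str]]]  # (tag, indicators, subfields)

record: List[Field] = [
    ("020", "  ", [("a", "9789814616881"), ("q", "(electronic bk.)")]),
    ("245", "00", [("a", "Frontiers of intelligent control and information processing"),
                   ("h", "[electronic resource] /"),
                   ("c", "edited by Derong Liu")]),
    ("700", "1 ", [("a", "Liu, Derong,"), ("d", "1963-")]),
    ("856", "40", [("u", "http://www.worldscientific.com/worldscibooks/10.1142/9243#t=toc")]),
]

def subfield_values(fields: List[Field], tag: str, code: str) -> List[str]:
    """Collect every value of the given subfield code from fields with the given tag."""
    return [value
            for t, _ind, subs in fields if t == tag
            for c, value in subs if c == code]

print("Title:", subfield_values(record, "245", "a")[0])
print("ISBNs:", subfield_values(record, "020", "a"))
print("Online access:", subfield_values(record, "856", "u")[0])

The same lookups carry over to a dedicated MARC library such as pymarc, since the tag/indicator/subfield structure such tools expose matches the display above.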