Consider the following nonlinear program:

$$
\begin{aligned}
\min_{x}\;\; & f(x) \\
\text{s.t.}\;\; & g(x) \leq 0 \\
& h(x) = 0
\end{aligned} \tag{1}
$$
Two-timescale model:

$$
\begin{aligned}
\epsilon_{x} \frac{dx}{dt} &= y \\
\epsilon_{y} \frac{dy}{dt} &= F_{y}(x, y, u, v) \\
\epsilon_{y} \frac{du}{dt} &= F_{u}(x, y, u, v) \\
\epsilon_{y} \frac{dv}{dt} &= F_{v}(x, y, u, v)
\end{aligned} \tag{2}
$$
where $x \in R^n$ is the output neuronal state representing the decision vector, $y \in R^n$ is a hidden neuronal state representing the direction vector, and $u$ and $v$ are hidden neuronal states handling the inequality and equality constraints, respectively.
$F$ is specified as

$$
F = \begin{bmatrix}
-\left[Q(x) y + \nabla f(x) + \nabla g(x) u + \nabla h(x) v\right] \\
-u + \left[u + \nabla g(x)^{T} y + g(x)\right]^{+} \\
\nabla h(x)^{T} y + h(x)
\end{bmatrix} \tag{3}
$$
$$
Q(x) = I \tag{4}
$$
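As a sanity check on (2)–(4), the sketch below evaluates $F$ for a small hypothetical convex example (the quadratic objective, the constraints, and the multipliers are my own test data, not from the reference) and confirms that a KKT point together with $y = 0$ is an equilibrium of the dynamics:

```python
import numpy as np

def relu(s):
    """Componentwise [.]^+ projection onto the nonnegative orthant."""
    return np.maximum(s, 0.0)

def F(x, y, u, v):
    """Right-hand side (3) with Q(x) = I for the hypothetical problem
    min 0.5*||x||^2  s.t.  g(x) = 1 - x1 <= 0,  h(x) = x1 - x2 = 0."""
    grad_f = x                          # gradient of 0.5*||x||^2
    g = np.array([1.0 - x[0]])          # inequality constraint value
    grad_g = np.array([[-1.0], [0.0]])  # column = gradient of g
    h = np.array([x[0] - x[1]])         # equality constraint value
    grad_h = np.array([[1.0], [-1.0]])  # column = gradient of h
    F_y = -(y + grad_f + grad_g @ u + grad_h @ v)   # Q(x) = I
    F_u = -u + relu(u + grad_g.T @ y + g)
    F_v = grad_h.T @ y + h
    return F_y, F_u, F_v

# KKT point of the test problem: x* = (1, 1) with multipliers u* = 2, v* = 1.
x_star = np.array([1.0, 1.0])
y_star = np.zeros(2)                    # direction vector at rest
u_star = np.array([2.0])
v_star = np.array([1.0])

F_y, F_u, F_v = F(x_star, y_star, u_star, v_star)
# With y = 0 the x-equation of (2) is also at rest, so the whole state
# (x*, 0, u*, v*) is an equilibrium: every component of F vanishes.
print(F_y, F_u, F_v)
```

With $y = 0$ the first row of (3) reduces to the stationarity condition $\nabla f + \nabla g\, u + \nabla h\, v = 0$, the second to the complementarity fixed point $u = [u + g]^{+}$, and the third to feasibility $h = 0$, which is why equilibria of (2) coincide with KKT points.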
Reference: Two-timescale Multi-layer Recurrent Neural Networks with an Inertia Term for Nonlinear Programming
For solving problem (1) with $h(x) = Ax - b$, two neurodynamic models with two-layer structures are presented as follows:

$$
\begin{aligned}
\epsilon_{x} \frac{dx}{dt} &= -x + G\left(x - (\nabla f(x) + \nabla g(x) y)\right) \\
\epsilon_{x} \frac{dy}{dt} &= -y + (y + g(x))^{+}
\end{aligned} \tag{5}
$$

$$
\begin{aligned}
\epsilon_{x} \frac{dx}{dt} &= -x + z - \nabla f(z) - \nabla g(z) y^{+} \\
\epsilon_{x} \frac{dy}{dt} &= -y + y^{+} + g(z) \\
z &= G(x)
\end{aligned} \tag{6}
$$
where $x \in R^{n}$ and $y \in R^{m}$ are neuronal states, $z$ is an output state, and $G(x) = (I - A^{T}(AA^{T})^{-1}A)x + A^{T}(AA^{T})^{-1}b$ is the projection onto the affine set $\mathcal{X} = \{x \in R^{n} : Ax = b\}$. It is proven that the state $x$ in (5) and the output $z$ in (6) are globally convergent to the optimal solution of problem (1) if $\nabla^{2} f(x) + \sum_{i=1}^{m} y_{i} \nabla^{2} g_{i}(x)$ is positive semidefinite on $\mathcal{X} \times R_{+}^{m}$ and positive definite on the set
$$
\left\{(x, y) \in \mathcal{X} \times R_{+}^{m} : G\left(x - (\nabla f(x) + \nabla g(x) y)\right) = x,\; (y + g(x))^{+} = y\right\}.
$$
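To illustrate (5), the sketch below integrates it with forward Euler on a small hypothetical example of my own (the problem data, time constant, and step size are assumptions, not from the reference): $\min 0.5\|x\|^2$ subject to $g(x) = x_1 - 0.25 \leq 0$ and $Ax = b$ with $A = [1\;\;1]$, $b = 1$, whose optimum is $x^{*} = (0.25, 0.75)$.

```python
import numpy as np

# Hypothetical test problem: min 0.5*||x||^2
#   s.t. g(x) = x1 - 0.25 <= 0,  Ax = b with A = [1, 1], b = 1.
# Its optimal solution is x* = (0.25, 0.75).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Projection onto the affine set {x : Ax = b}:
#   G(x) = (I - A^T (A A^T)^{-1} A) x + A^T (A A^T)^{-1} b
AAT_inv = np.linalg.inv(A @ A.T)
P = np.eye(2) - A.T @ AAT_inv @ A       # fixed projection matrix
p0 = (A.T @ AAT_inv @ b).ravel()        # fixed offset

def G(x):
    return P @ x + p0

def grad_f(x):
    return x                            # gradient of 0.5*||x||^2

def g(x):
    return np.array([x[0] - 0.25])      # inequality constraint value

grad_g = np.array([[1.0], [0.0]])       # column = gradient of g

# Forward Euler integration of model (5).
eps, dt, steps = 1.0, 0.01, 10_000
x = np.zeros(2)
y = np.zeros(1)
for _ in range(steps):
    dx = -x + G(x - (grad_f(x) + grad_g @ y))
    dy = -y + np.maximum(y + g(x), 0.0)
    x = x + dt / eps * dx
    y = y + dt / eps * dy

print(x)   # the state approaches the optimum (0.25, 0.75)
```

Note that $P$ and $p_0$ are computed once from $A$ and $b$; the per-step work of (5) is only matrix–vector products and a componentwise $[\cdot]^{+}$, which is what the reduced two-layer structure buys.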
Reference: Two Projection Neural Networks With Reduced Model Complexity for Nonlinear Programming