Two Projection Neural Networks With Reduced Model Complexity: A Paper Review


Consider the following nonlinear programming (NP) problem:

$$\begin{aligned} &\min\quad f(x)\\ &\text{s.t.}\quad c(x)\le 0,\; Bx=b \end{aligned} \tag{1}$$

where $x\in \Re^n$, $f:\Re^n\rightarrow \Re$, $c(x)=[c_1(x),\dots,c_m(x)]^{\mathrm{T}}$ is an $m$-dimensional vector-valued continuous function of $n$ variables, $B\in \Re^{r\times n}$, and $\mathrm{rank}(B) = r$.

A feasible solution $x$ is said to be a regular point if the gradients $\nabla c_i(x)$, $i \in I = \{j \mid c_j(x) = 0\}$, are linearly independent.

If a regular point $x^*$ is a local optimal solution of the NP problem (1), then there exist Lagrange multiplier vectors $y^* \in \Re^m$ and $z^* \in \Re^r$ such that $(x^*, y^*, z^*)$ satisfies the conventional optimality condition

$$\left\{\begin{array}{l} y \geq 0, \quad c(x) \leq 0, \quad y^{\mathrm{T}} c(x)=0, \quad B x=b \\ \nabla f(x)+\nabla c(x)^{\mathrm{T}} y-B^{\mathrm{T}} z=0 \end{array}\right.$$

where $\nabla f(x)$ is the gradient of $f(x)$ and $\nabla c(x) = [\nabla c_1(x), \dots, \nabla c_m(x)]$. The conventional optimality condition can be further rewritten as

$$\left\{\begin{array}{l} y=(y+c(x))^{+}, \quad B x=b \\ \nabla f(x)+\nabla c(x)^{\mathrm{T}} y-B^{\mathrm{T}} z=0 \end{array}\right.$$

where $(y)^{+}=\max(y,0)$ is taken componentwise. The rewriting is valid because $y=(y+c(x))^{+}$ holds if and only if $y \ge 0$, $c(x) \le 0$, and $y^{\mathrm{T}}c(x)=0$. A point $(x^*,y^*,z^*)$ satisfying these conditions is called a KKT point.
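As a quick sanity check of this reformulation, the following NumPy snippet (the numbers are a made-up example, not taken from the paper) illustrates that the single identity $y=(y+c(x))^{+}$ packs $y\ge 0$, $c(x)\le 0$, and $y^{\mathrm{T}}c(x)=0$ into one fixed-point equation:

```python
import numpy as np

def is_kkt_complementarity(y, c_x, tol=1e-9):
    """Check y >= 0, c(x) <= 0, y^T c(x) = 0 via the projection identity y = (y + c(x))^+."""
    return np.allclose(y, np.maximum(y + c_x, 0.0), atol=tol)

# Hypothetical values: the first constraint is active (y1 > 0, c1 = 0),
# the second is inactive (y2 = 0, c2 < 0) -- exactly the complementary-slackness pattern.
y   = np.array([2.0, 0.0])
c_x = np.array([0.0, -1.5])
print(is_kkt_complementarity(y, c_x))   # True

# Violating complementarity (y2 > 0 while c2 < 0) breaks the identity.
print(is_kkt_complementarity(np.array([2.0, 0.3]), c_x))   # False
```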

The paper proposes two projection neural networks with reduced model complexity (RDPNNs) for solving problem (1).

The first RDPNN is described by

State equation:
$$\frac{d}{d t}\left(\begin{array}{l} x \\ y \end{array}\right)=\lambda\left(\begin{array}{c} -W_{2} x-W_{1}\left(\nabla f(x)+\nabla c(x)^{\mathrm{T}} y\right)+q \\ -y+(y+c(x))^{+} \end{array}\right)$$

Output equation:
$$v(t)=x(t)$$

where $[x^{\mathrm{T}}(t), y^{\mathrm{T}}(t)]^{\mathrm{T}} \in \Re^{n+m}$ is the state trajectory, $W_2 = B^{\mathrm{T}}(BB^{\mathrm{T}})^{-1}B$, $W_1 = I - B^{\mathrm{T}}(BB^{\mathrm{T}})^{-1}B$, and $q = B^{\mathrm{T}}(BB^{\mathrm{T}})^{-1}b$.
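To see how the first RDPNN can be simulated, here is a minimal forward-Euler sketch on a small convex problem; the problem data ($f$, $c$, $B$, $b$), the step size, and the iteration count are illustrative assumptions, not values from the paper:

```python
import numpy as np

# --- Made-up problem instance (illustrative only) ---------------------------
a = np.array([2.0, 0.0, 1.0])
f_grad = lambda x: x - a                         # gradient of f(x) = 0.5*||x - a||^2
c      = lambda x: np.array([x[0] + x[1] - 1.0]) # single inequality c(x) <= 0
c_jac  = lambda x: np.array([[1.0, 1.0, 0.0]])   # nabla c(x), shape (m, n)
B = np.array([[1.0, 1.0, 1.0]])                  # equality constraint Bx = b
b = np.array([1.0])

# --- Constant matrices of the first RDPNN -----------------------------------
BBT_inv = np.linalg.inv(B @ B.T)
W2 = B.T @ BBT_inv @ B            # projector onto the row space of B
W1 = np.eye(3) - W2               # projector onto the null space of B
q  = B.T @ BBT_inv @ b

# --- Forward-Euler integration of the state equation ------------------------
lam, dt = 1.0, 0.01
x, y = np.zeros(3), np.zeros(1)
for _ in range(20000):
    dx = lam * (-W2 @ x - W1 @ (f_grad(x) + c_jac(x).T @ y) + q)
    dy = lam * (-y + np.maximum(y + c(x), 0.0))
    x, y = x + dt * dx, y + dt * dy

print("x  ≈", x)                  # for this toy instance the inequality is inactive,
                                  # so x should approach [ 4/3, -2/3, 1/3 ] with y -> 0
print("Bx - b =", B @ x - b, " c(x) =", c(x), " y =", y)
```

The explicit Euler loop is used only to keep the sketch self-contained; any standard ODE integrator could be substituted.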

The second RDPNN is described by

State equation:
$$\frac{d}{d t}\left(\begin{array}{l} x \\ y \end{array}\right)=\lambda\left(\begin{array}{c} \hat{P}_{\mathbf{X}}(x)-\nabla f(v)-\nabla c(v)^{\mathrm{T}}(y)^{+} \\ c(v)-y+(y)^{+} \end{array}\right)$$

Output equation:
$$v(t)=P_{\mathbf{X}}(x(t))$$

where $[x^{\mathrm{T}}(t), y^{\mathrm{T}}(t)]^{\mathrm{T}} \in \Re^{n+m}$ is the state trajectory, $v=P_{\mathbf{X}}(x)=W_1x + q$, and $\hat{P}_{\mathbf{X}}(x)=P_{\mathbf{X}}(x)-x$.
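A matching simulation sketch for the second RDPNN, on the same made-up problem as above; note that only $W_1$ and $q$ appear, since $\hat{P}_{\mathbf{X}}(x)=P_{\mathbf{X}}(x)-x$, and the output $v(t)=P_{\mathbf{X}}(x(t))$ satisfies $Bv=b$ along the entire trajectory:

```python
import numpy as np

# Same made-up problem data as in the previous sketch.
a = np.array([2.0, 0.0, 1.0])
f_grad = lambda x: x - a
c      = lambda x: np.array([x[0] + x[1] - 1.0])
c_jac  = lambda x: np.array([[1.0, 1.0, 0.0]])
B, b = np.array([[1.0, 1.0, 1.0]]), np.array([1.0])

BBT_inv = np.linalg.inv(B @ B.T)
W1 = np.eye(3) - B.T @ BBT_inv @ B
q  = B.T @ BBT_inv @ b
P_X   = lambda x: W1 @ x + q           # projection onto {x : Bx = b}
P_hat = lambda x: P_X(x) - x

lam, dt = 1.0, 0.01
x, y = np.zeros(3), np.zeros(1)
for _ in range(20000):
    v  = P_X(x)                        # output equation v(t) = P_X(x(t))
    yp = np.maximum(y, 0.0)            # (y)^+
    dx = lam * (P_hat(x) - f_grad(v) - c_jac(v).T @ yp)
    dy = lam * (c(v) - y + yp)
    x, y = x + dt * dx, y + dt * dy

v = P_X(x)
print("v  ≈", v)                       # the output stays on Bx = b at every step
print("Bv - b =", B @ v - b, " c(v) =", c(v), " (y)^+ =", np.maximum(y, 0.0))
```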
