In this paper, our focus lies on addressing a two-block linearly constrained nonseparable nonconvex optimization problem with coupling terms. Such problems are typically solved with the most classical algorithm, the alternating direction method of multipliers (ADMM), which, to the best of current knowledge, still requires the gradient Lipschitz continuity condition on the objective function to ensure overall convergence. However, many practical applications do not satisfy this smoothness condition. In this study, we establish the convergence of a variant Bregman ADMM for the problem with coupling terms, circumventing the need for global Lipschitz continuity of the gradient. We show that the iterative sequence generated by our approach converges to a critical point of the problem when the corresponding function satisfies the Kurdyka-Łojasiewicz inequality and certain assumptions hold. In addition, we establish the convergence rate of the algorithm.
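For context, the classical two-block ADMM that this line of work builds on alternates one minimization per block followed by a dual ascent step on the constraint. The sketch below is a minimal generic skeleton, not the paper's Bregman variant; the penalty parameter `beta`, the toy problem min x² + y² s.t. x + y = 1, and its closed-form subproblem solvers are illustrative assumptions.

```python
import numpy as np

def admm(x_step, y_step, A, B, b, beta=1.0, iters=200):
    """Generic two-block ADMM skeleton for min f(x)+g(y) s.t. Ax + By = b.
    x_step/y_step return the minimizer of the augmented Lagrangian in one block."""
    x = np.zeros(A.shape[1])
    y = np.zeros(B.shape[1])
    lam = np.zeros(len(b))                      # Lagrange multiplier
    for _ in range(iters):
        x = x_step(y, lam, beta)                # x-subproblem
        y = y_step(x, lam, beta)                # y-subproblem
        lam = lam + beta * (A @ x + B @ y - b)  # dual ascent on the constraint
    return x, y

# Toy instance: min x^2 + y^2  s.t.  x + y = 1  (optimum x = y = 0.5).
A = np.array([[1.0]]); B = np.array([[1.0]]); b = np.array([1.0])
x_step = lambda y, lam, beta: (beta * (b - B @ y) - lam) / (2.0 + beta)
y_step = lambda x, lam, beta: (beta * (b - A @ x) - lam) / (2.0 + beta)
x, y = admm(x_step, y_step, A, B, b)
```

Here both subproblems happen to have closed-form solutions; in the nonseparable setting studied above, the coupling term would additionally enter each subproblem, which is where the Bregman modification comes in.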
The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions by using local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm, suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
Funding: supported by the Knut and Alice Wallenberg Foundation, the Swedish Foundation for Strategic Research, the Swedish Research Council, and the National Natural Science Foundation of China (62133003, 61991403, 61991404, 61991400).
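As a simplified illustration of distributed optimization over a network (a plain decentralized gradient scheme, not the paper's primal-dual SGD, and without stochastic gradients), the sketch below has each node average with its neighbors through a doubly stochastic mixing matrix `W` and then take a local gradient step; the three-node costs and weights are made-up demo values.

```python
import numpy as np

def decentralized_gd(local_grads, W, x0, lr=0.1, iters=500):
    """Decentralized gradient descent: mix with neighbors, then step locally.
    With a constant stepsize the nodes converge only to a neighborhood of the
    global optimum, mirroring the constant-parameter result above."""
    x = np.asarray(x0, dtype=float)          # x[i] = node i's local iterate
    for _ in range(iters):
        x = W @ x                            # consensus (gossip) step
        for i, grad in enumerate(local_grads):
            x[i] -= lr * grad(x[i])          # local gradient step
    return x

# Three nodes with f_i(x) = 0.5*(x - c_i)^2; the global optimum is mean(c) = 3.
c = [0.0, 3.0, 6.0]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])           # doubly stochastic mixing matrix
grads = [lambda x, ci=ci: x - ci for ci in c]
x = decentralized_gd(grads, W, np.zeros(3))
```

Each node ends close to, but not exactly at, the optimum 3; the residual spread across nodes is the "neighborhood" effect of a constant stepsize.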
In recent years, utilizing low-rank prior information to reconstruct a signal from a small number of measurements has attracted much attention. In this paper, a generalized nonconvex low-rank (GNLR) algorithm for magnetic resonance imaging (MRI) reconstruction is proposed, which reconstructs the image from highly under-sampled k-space data. In the algorithm, a nonconvex surrogate function replacing the conventional nuclear norm is utilized to enhance the low-rank property inherent in the reconstructed image. An alternating direction method of multipliers (ADMM) is applied to solve the resulting nonconvex model. Extensive experimental results have demonstrated that the proposed method can consistently recover MRIs efficiently, and outperforms current state-of-the-art approaches in terms of higher peak signal-to-noise ratio (PSNR) and lower high-frequency error norm (HFEN) values.
Funding: supported by the National Natural Science Foundations of China (61362001, 61365013, 51165033), the Science and Technology Department of Jiangxi Province (20132BAB211030, 20122BAB211015), the Jiangxi Advanced Projects for Postdoctoral Research Funds (2014KY02), and the Innovation Special Fund Project of Nanchang University (cx2015136).
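The core numerical primitive in nuclear-norm-style low-rank reconstruction is shrinkage of singular values. The sketch below generalizes standard singular-value thresholding to a nonconvex Schatten-p-style penalty via a few fixed-point passes; the surrogate, `tau`, and `p` are illustrative assumptions, not the specific GNLR surrogate used in the paper.

```python
import numpy as np

def sv_shrink(M, tau, p=0.5, passes=5):
    """Approximate prox of the nonconvex penalty tau * sum_i sigma_i^p:
    min_X tau*sum_i sigma_i(X)^p + 0.5*||X - M||_F^2, handled per singular
    value by fixed-point passes (p = 1 recovers soft thresholding)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    t = s.copy()
    for _ in range(passes):
        # stationarity condition t = s - tau*p*t^(p-1); clip to zero when infeasible
        t = np.maximum(s - tau * p * np.where(t > 0, t, 1.0) ** (p - 1), 0.0)
    return U @ np.diag(t) @ Vt

# Small singular values are killed, large ones are only mildly shrunk:
X = sv_shrink(np.diag([5.0, 0.1]), tau=0.5)
```

Compared with the nuclear norm (p = 1), which shrinks every singular value by the same amount, this nonconvex rule penalizes large singular values less, which is exactly the bias-reduction motivation for nonconvex surrogates.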
This paper addresses the distributed optimization problem of discrete-time multiagent systems with nonconvex control input constraints and switching topologies. We introduce a novel distributed optimization algorithm with a switching mechanism to guarantee that all agents eventually converge to an optimal solution point, while their control inputs are constrained in their own nonconvex regions. Notably, the mechanism is designed to handle the coexistence of the nonconvex constraint operator and the optimization gradient term. Based on a dynamic transformation technique, the original nonlinear dynamic system is transformed into an equivalent one with a nonlinear error term. By utilizing nonnegative matrix theory, it is shown that the optimization problem can be solved when the union of the switching communication graphs is jointly strongly connected. Finally, a numerical simulation example is used to demonstrate the theoretical results.
Funding: supported by the National Engineering Research Center of Rail Transportation Operation and Control System, Beijing Jiaotong University (Grant No. NERC2019K002).
In this paper, we prove the global convergence of the Perry-Shanno memoryless quasi-Newton (PSMQN) method with a new inexact line search when applied to nonconvex unconstrained minimization problems. Preliminary numerical results show that the PSMQN method with the particular line search conditions is very promising.
In this paper, we are mainly devoted to solving fixed point problems in more general nonconvex sets via an interior point homotopy method. Under suitable conditions, a constructive proof is given to prove the existence of fixed points, which can lead to an implementable globally convergent algorithm.
Funding: supported by the NNSF of China (11026079) and the Youth Backbone Teacher Foundation of Henan Province (173).
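To illustrate the general idea of a homotopy (continuation) method for fixed points, the naive sketch below deforms the trivial problem x = x0 into the target problem x = f(x) and tracks the solution as the homotopy parameter t goes from 0 to 1; the schedule, inner solver, and test function cos(x) are illustrative assumptions, not the interior point construction of the paper.

```python
import math

def homotopy_fixed_point(f, x0, steps=100, inner=50):
    """Naive homotopy continuation for a fixed point x = f(x):
    trace solutions of x = (1-t)*x0 + t*f(x) as t goes 0 -> 1,
    warm-starting each level with the previous solution."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(inner):                 # fixed-point iteration at level t
            x = (1 - t) * x0 + t * f(x)
    return x

# Fixed point of cos(x) (the Dottie number, approximately 0.739085).
x = homotopy_fixed_point(math.cos, 0.0)
```

Warm-starting each level with the previous solution is what makes the path-following cheap: each deformed problem is only a small perturbation of the last.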
Low-rank matrix recovery is an important problem extensively studied in the machine learning, data mining and computer vision communities. A novel method is proposed for low-rank matrix recovery, targeting higher recovery accuracy and a stronger theoretical guarantee. Specifically, the proposed method is based on a nonconvex optimization model that recovers the low-rank matrix from the noisy observation. To solve the model, an effective algorithm is derived by minimizing over the variables alternately. It is proved theoretically that this algorithm has a stronger theoretical guarantee than existing work. In natural image denoising experiments, the proposed method achieves lower recovery error than the two compared methods. The proposed low-rank matrix recovery method is also applied to solve two real-world problems, i.e., removing noise from verification codes and removing watermarks from images, in which the images recovered by the proposed method are less noisy than those of the two compared methods.
Funding: supported by the National Natural Science Foundation of China (61173122, 61262032) and the Natural Science Foundation of Hunan Province (11JJ3067, 12JJ2038).
In this article, we study the generalized Riemann problem for a scalar nonconvex Chapman-Jouguet combustion model in a neighborhood of the origin (t > 0) on the (x,t) plane. We focus our attention on the perturbation of the initial binding energy. The solutions are obtained constructively under the entropy conditions. It is found that the solutions are essentially different from the corresponding Riemann solutions in some cases. In particular, two important phenomena are observed: the transition from detonation to deflagration followed by a shock, which appears in the numerical simulations [7,27], and the transition from deflagration to detonation (DDT), which is one of the core problems in gas dynamic combustion.
Funding: supported by NUAA Research Funding (NS2011001), NUAA's Scientific Fund for the Introduction of Qualified Personnel, NSFC grant 10971130, Shanghai Leading Academic Discipline Project J50101, and Shanghai Municipal Education Commission Scientific Research Innovation Project 112284.
Two-phase image segmentation is a fundamental task that partitions an image into foreground and background. In this paper, two types of nonconvex and nonsmooth regularization models are proposed for basic two-phase segmentation. They extend the convex regularization of the characteristic function on the image domain to the nonconvex case, which is better able to obtain piecewise constant regions with neat boundaries. By analyzing the proposed non-Lipschitz model, we combine the proximal alternating minimization framework with support shrinkage and linearization strategies to design our algorithm. This leads to two alternating strongly convex subproblems which can be easily solved. Similarly, we present an algorithm without the support shrinkage operation for the nonconvex Lipschitz case. Using the Kurdyka-Łojasiewicz property of the objective function, we prove that the limit point of the generated sequence is a critical point of the original nonconvex nonsmooth problem. Numerical experiments and comparisons illustrate the effectiveness of our method in two-phase image segmentation.
Funding: supported by the National Natural Science Foundation of China (12001144, 11871035, 11531013), the Zhejiang Provincial Natural Science Foundation of China (LQ20A010007), and the Chern Institute of Mathematics.
This paper investigates the distributed H∞ consensus problem for a first-order multiagent system where both cooperative and antagonistic interactions coexist. In the presence of external disturbances, a distributed control algorithm using local information is addressed and a sufficient condition on the H∞ control gain is obtained, which makes the states of the agents in the same group converge to a common point while the input of each agent is constrained to a nonconvex set. Finally, a numerical simulation is exhibited to illustrate the theory.
The alternating direction method of multipliers (ADMM) is one of the most successful and powerful methods for separable minimization optimization. Based on the idea of symmetric ADMM for two-block optimization, we add an updating formula for the Lagrange multiplier without restricting its position in the multi-block case. Then, combining this with the Bregman distance, a Bregman-style partially symmetric ADMM is presented for nonconvex multi-block optimization with linear constraints, in which the Lagrange multiplier is updated twice with different relaxation factors in the iteration scheme. Under suitable conditions, the global convergence, strong convergence and convergence rate of the presented method are analyzed and obtained. Finally, some preliminary numerical results are reported to support the correctness of the theoretical assertions, and these show that the presented method is numerically effective.
Funding: supported by the National Natural Science Foundation of China (12171106) and the Natural Science Foundation of Guangxi Province (2020GXNSFDA238017).
This paper is concerned with the convergence of stochastic gradient algorithms with momentum terms in the nonconvex setting. A class of stochastic momentum methods, including stochastic gradient descent, heavy ball and Nesterov's accelerated gradient, is analyzed in a general framework under mild assumptions. Based on the convergence result for expected gradients, the authors prove almost sure convergence by a detailed discussion of the effects of momentum and the number of upcrossings. It is worth noting that no additional restrictions are imposed on the objective function and stepsize. Another improvement over previous results is that the existing Lipschitz condition on the gradient is relaxed to Hölder continuity. As a byproduct, the authors apply a localization procedure to extend the results to stochastic stepsizes.
Funding: supported by the National Natural Science Foundation of China (11631004, 12031009) and the National Key R&D Program of China (2018YFA0703900).
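The heavy-ball member of the momentum family analyzed above can be sketched in a few lines; the toy quadratic objective, the noise level, and the hyperparameters below are illustrative assumptions.

```python
import numpy as np

def heavy_ball_sgd(grad, x0, lr=0.05, momentum=0.9, iters=400, seed=0):
    """Stochastic heavy ball: v <- momentum*v - lr*g,  x <- x + v,
    where g is a noisy sample of the gradient."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        g = grad(x) + 0.01 * rng.standard_normal(x.shape)  # stochastic gradient
        v = momentum * v - lr * g
        x = x + v
    return x

# Minimize f(x) = (x - 2)^2; the iterates settle near the minimizer 2,
# up to a noise floor determined by the gradient noise and stepsize.
x = heavy_ball_sgd(lambda x: 2.0 * (x - 2.0), [0.0])
```

Setting momentum=0 recovers plain SGD, and evaluating the gradient at the look-ahead point x + momentum*v instead of x gives Nesterov's accelerated variant, which is how the three methods fit one framework.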
This work is about a splitting method for solving a nonconvex nonseparable optimization problem with linear constraints, where the objective function consists of two separable functions and a coupled term. First, based on ideas from the Bregman distance and the Peaceman–Rachford splitting method, a Bregman Peaceman–Rachford splitting method with different relaxation factors for the multiplier is proposed. Second, the global and strong convergence of the proposed algorithm are proved under general conditions, including the region of the two relaxation factors as well as the crucial Kurdyka–Łojasiewicz property. Third, when the associated Kurdyka–Łojasiewicz property function has a special structure, sublinear and linear convergence rates of the proposed algorithm are guaranteed. Furthermore, some preliminary numerical results are shown to indicate the effectiveness of the proposed algorithm.
Funding: supported by the National Natural Science Foundation of China (12171106) and the Natural Science Foundation of Guangxi Province (2020GXNSFDA238017 and 2018GXNSFFA281007).
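For reference, the classical (non-Bregman, unconstrained) Peaceman–Rachford splitting on which such methods build composes two proximal "reflections" with no averaging; the two quadratic prox operators and the step size t = 0.5 below are illustrative assumptions.

```python
def peaceman_rachford(prox_f, prox_g, z0, iters=50):
    """Classical Peaceman-Rachford splitting for min_x f(x) + g(x):
    z <- R_g(R_f(z)), where the reflection is R(z) = 2*prox(z) - z."""
    z = z0
    for _ in range(iters):
        w = 2.0 * prox_f(z) - z          # reflect through f
        z = 2.0 * prox_g(w) - w          # reflect through g
    return prox_f(z)                     # recover the primal point

# f(x) = 0.5*(x - 0)^2 and g(x) = 0.5*(x - 4)^2 with step t = 0.5, using
# prox_{t*f}(z) = (z + t*a)/(1 + t); the minimizer of f + g is x = 2.
prox_f = lambda z: z / 1.5
prox_g = lambda z: (z + 2.0) / 1.5
x = peaceman_rachford(prox_f, prox_g, 0.0)
```

Unlike Douglas–Rachford, the Peaceman–Rachford update takes the full reflection step with no relaxation, which is faster when it converges but less forgiving, one reason relaxation factors for the multiplier matter in the constrained nonconvex setting above.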
The conjugate gradient method is well known as a highly efficient approach for large-scale optimization problems. However, most conjugate gradient methods do not have the sufficient descent property. In this paper, without any line search, the presented method generates sufficient descent directions and has the trust region property. Under some suitable conditions, the global convergence of the method is established with the Armijo line search. Moreover, we study the proposed method for solving nonsmooth problems and establish its global convergence. Experiments show that the presented method can successfully solve smooth and nonsmooth unconstrained problems, image restoration problems and the Muskingum model.
Funding: supported by the National Natural Science Foundation of China (11661009), the High Level Innovation Teams and Excellent Scholars Program in Guangxi institutions of higher education ([2019]52), the Guangxi Natural Science Key Fund (2017GXNSFDA198046), the Special Funds for Local Science and Technology Development Guided by the Central Government (ZY20198003), and the special foundation for Guangxi Ba Gui Scholars.
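As a baseline for comparison, a textbook nonlinear conjugate gradient method (PRP+ with Armijo backtracking and a restart safeguard) looks as follows; unlike the method above, this simple sketch relies on its safeguard to keep descent. The quadratic test problem is an illustrative assumption.

```python
import numpy as np

def prp_cg(f, grad, x0, iters=500, tol=1e-6):
    """Nonlinear conjugate gradient (PRP+) with Armijo backtracking and a
    restart whenever the new direction is not a sufficient descent direction."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):     # Armijo condition
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max((g_new @ (g_new - g)) / (g @ g), 0.0)  # PRP+ parameter
        d = -g_new + beta * d
        if g_new @ d > -1e-4 * (g_new @ g_new):           # restart safeguard
            d = -g_new
        x, g = x_new, g_new
    return x

# Strongly convex quadratic f(x) = 0.5*x^T Q x - b^T x, minimizer Q^{-1} b.
Q = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x = prp_cg(lambda x: 0.5 * x @ (Q @ x) - b @ x, lambda x: Q @ x - b, [0.0, 0.0])
```

The restart check is the cheap stand-in for the sufficient descent property: methods like the one in the paper build that property into the direction formula itself, so no such safeguard (and no line search for descent) is needed.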
A linearized inertial alternating direction method of multipliers (LIADMM) is proposed for nonconvex nonsmooth minimization problems whose objective contains a coupling function H(x,y). To ease the solution of the subproblems, the coupling function H(x,y) in the objective is linearized, and an inertial effect is introduced into the x-subproblem. Under suitable assumptions, the global convergence of the algorithm is established; furthermore, by introducing an auxiliary function satisfying the Kurdyka-Łojasiewicz inequality, the strong convergence of the algorithm is verified. Two numerical experiments show that the algorithm with the inertial effect converges better than its counterpart without it.
A number of previous papers have studied the problem of recovering low-rank matrices with noise. Further combining the noisy and perturbed cases, we propose a nonconvex Schatten p-norm minimization method to deal with the recovery of fully perturbed low-rank matrices. By utilizing the p-null space property (p-NSP) and the p-restricted isometry property (p-RIP) of the matrix, sufficient conditions are derived to ensure stable and accurate reconstruction of low-rank matrices in the fully perturbed case, and two upper-bound recovery error estimations are given. These estimations are characterized by two vital aspects, one involving the best r-approximation error and the other concerning the overall noise. Specifically, this paper obtains two new error upper bounds based on the fact that p-RIP and p-NSP are able to recover low-rank matrices accurately and stably, and to some extent improves the conditions corresponding to RIP.
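The Schatten p quasi-norm minimized by such methods is simple to compute from the singular values; below is a small generic helper (the standard definition, not code from the paper).

```python
import numpy as np

def schatten_p(M, p):
    """Schatten p (quasi-)norm: (sum_i sigma_i(M)^p)^(1/p).
    p = 1 is the nuclear norm and p = 2 the Frobenius norm; for 0 < p < 1
    the penalty is nonconvex, and sum_i sigma_i^p approaches rank(M) as p -> 0."""
    s = np.linalg.svd(M, compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))

M = np.diag([3.0, 4.0])          # singular values {4, 3}
nuclear = schatten_p(M, 1.0)     # 3 + 4 = 7
frob = schatten_p(M, 2.0)        # sqrt(9 + 16) = 5
```

Taking p below 1 interpolates between the convex nuclear-norm relaxation and the rank function itself, which is why p-RIP/p-NSP conditions can be weaker than their RIP counterparts.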