Neural network for constrained nonsmooth optimization using Tikhonov regularization

Abstract
This paper presents a one-layer neural network for solving nonsmooth convex optimization problems based on the Tikhonov regularization method. First, it is shown that the optimal solution of the original problem can be approximated by the optimal solutions of a sequence of strongly convex optimization problems. Then, it is proved that, for any initial point, the state of the proposed neural network enters the equality feasible region in finite time and converges globally to the unique optimal solution of the related strongly convex optimization problem. Compared with existing neural networks, the proposed network has lower model complexity and does not require penalty parameters. Finally, numerical examples and an application are given to illustrate the effectiveness and advantages of the proposed neural network.
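The abstract's exact network model is not reproduced here, but the Tikhonov idea it describes can be sketched on a toy problem. The sketch below (all names and parameter values are illustrative assumptions, and a discrete projected subgradient iteration stands in for the paper's continuous-time dynamics) minimizes the nonsmooth objective |x1| + |x2| subject to x1 + x2 = 1, whose optimal set is a whole segment; adding the strongly convex Tikhonov term (eps/2)||x||^2 singles out the unique solution (0.5, 0.5):

```python
import numpy as np

# Illustrative sketch, not the paper's model: Tikhonov regularization of
#   min |x1| + |x2|  s.t.  x1 + x2 = 1,
# whose optimal set is a segment. The regularized problem
#   min |x1| + |x2| + (eps/2)||x||^2  s.t.  x1 + x2 = 1
# is strongly convex with unique solution (0.5, 0.5).

def project(x):
    # Euclidean projection onto the equality feasible region {x1 + x2 = 1}.
    return x - (x.sum() - 1.0) / 2.0

def solve(eps, x0, step=0.01, iters=20000):
    # Projected subgradient iteration: a discrete-time stand-in for the
    # continuous-time network dynamics described in the abstract.
    x = project(np.asarray(x0, dtype=float))
    for _ in range(iters):
        g = np.sign(x)                      # a subgradient of |x1| + |x2|
        x = project(x - step * (g + eps * x))
    return x

x = solve(eps=0.1, x0=[2.0, -1.0])
print(x)  # approximately [0.5, 0.5]
```

After projection the iterate stays exactly on the equality feasible region, echoing the abstract's finite-time feasibility claim, and as eps shrinks the regularized solution approximates a solution of the original nonsmooth problem.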
| Original language | English |
|---|---|
| Pages (from-to) | 272-281 |
| Number of pages | 10 |
| Journal | Neural Networks |
| Volume | 63 |
| DOIs | |
| State | Published - 1 Mar 2015 |
| Externally published | Yes |
Keywords
- Nonsmooth convex optimization problems
- One-layer neural network
- Tikhonov regularization method