TY - JOUR
T1 - Simultaneous Incremental Gradient Method for Inconsistent Convex Optimization Problem
AU - Katsekpor, Thomas
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2025.
PY - 2025/12
Y1 - 2025/12
N2 - We consider the minimization of an additive cost function, consisting of a large number of smooth convex functions, over a large number of closed convex sets. This type of minimization arises in a number of settings, including large-scale data processing, distributed optimization across a finite number of agents, and statistical estimation problems. We adopt a first-order simultaneous incremental gradient method, which takes a finite number of steps sequentially along the gradients of the component functions, with intermediate adjustment of the variables after processing each component function in cyclic order. The last variable obtained at the end of a cycle is then projected onto all the constraint sets, and the new iterate is a relaxed convex combination of these projections. We prove that this algorithm converges from any starting point to a feasible solution of the optimization problem in the consistent case, where the intersection of the convex sets is non-empty, and to a weighted least-squares-type solution in the inconsistent case.
AB - We consider the minimization of an additive cost function, consisting of a large number of smooth convex functions, over a large number of closed convex sets. This type of minimization arises in a number of settings, including large-scale data processing, distributed optimization across a finite number of agents, and statistical estimation problems. We adopt a first-order simultaneous incremental gradient method, which takes a finite number of steps sequentially along the gradients of the component functions, with intermediate adjustment of the variables after processing each component function in cyclic order. The last variable obtained at the end of a cycle is then projected onto all the constraint sets, and the new iterate is a relaxed convex combination of these projections. We prove that this algorithm converges from any starting point to a feasible solution of the optimization problem in the consistent case, where the intersection of the convex sets is non-empty, and to a weighted least-squares-type solution in the inconsistent case.
KW - Simultaneous projection
KW - closed convex sets
KW - consistent and inconsistent cases
KW - continuously differentiable functions
KW - gradient method
KW - relaxation parameter
UR - https://www.scopus.com/pages/publications/105020690340
U2 - 10.1007/s00009-025-02986-0
DO - 10.1007/s00009-025-02986-0
M3 - Article
AN - SCOPUS:105020690340
SN - 1660-5446
VL - 22
JO - Mediterranean Journal of Mathematics
JF - Mediterranean Journal of Mathematics
IS - 8
M1 - 221
ER -