pylops_mpi.optimization.basic.cgls
- pylops_mpi.optimization.basic.cgls(Op, y, x0, niter=10, damp=0.0, tol=0.0001, show=False, itershow=(10, 10, 10), callback=None)
Conjugate gradient least squares
Solve an overdetermined system of equations given either an MPILinearOperator or an MPIStackedLinearOperator ``Op`` and distributed data ``y`` using conjugate gradient iterations.
- Parameters:
- Op
pylops_mpi.MPILinearOperator or pylops_mpi.MPIStackedLinearOperator MPI Linear Operator to invert of size \([N \times M]\)
- y
pylops_mpi.DistributedArray or pylops_mpi.StackedDistributedArray DistributedArray of size (N,)
- x0
pylops_mpi.DistributedArray or pylops_mpi.StackedDistributedArray Initial guess
- niter
int, optional Number of iterations
- damp
float, optional Damping coefficient
- tol
float, optional Tolerance on residual norm
- show
bool, optional Display iterations log
- itershow
tuple, optional Display log for the first N1 steps, the last N2 steps, and every N3 steps in between, where N1, N2, and N3 are the three elements of the tuple.
- callback
callable, optional Function with signature ``callback(x)`` to call after each iteration, where ``x`` is the current DistributedArray.
- Returns:
- x
pylops_mpi.DistributedArray or pylops_mpi.StackedDistributedArray Estimated model of size (M,)
- istop
int Gives the reason for termination: 1 means \(\mathbf{x}\) is an approximate solution to \(\mathbf{y} = \mathbf{Op}\,\mathbf{x}\); 2 means \(\mathbf{x}\) approximately solves the least-squares problem
- iit
int Iteration number upon termination
- r1norm
float \(||\mathbf{r}||_2\), where \(\mathbf{r} = \mathbf{y} - \mathbf{Op}\,\mathbf{x}\)
- r2norm
float \(\sqrt{\mathbf{r}^T\mathbf{r} + \epsilon^2 \mathbf{x}^T\mathbf{x}}\). Equal to r1norm if \(\epsilon=0\)
- cost
numpy.ndarray, optional History of r1norm through iterations
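A minimal usage sketch follows; it is not part of the upstream docstring. The block-diagonal operator, array sizes, solver settings, and the no-op callback are illustrative assumptions, and the script is meant to be launched under MPI (e.g. mpiexec -n 2 python cgls_example.py)::

    import numpy as np
    from mpi4py import MPI
    from pylops import MatrixMult
    import pylops_mpi
    from pylops_mpi.optimization.basic import cgls

    comm = MPI.COMM_WORLD
    n = 10  # rows/columns of the block owned by each rank (illustrative)

    # Each rank contributes one dense block to a block-diagonal MPI operator
    rng = np.random.default_rng(comm.Get_rank())
    A = rng.normal(size=(n, n))
    Op = pylops_mpi.MPIBlockDiag(ops=[MatrixMult(A)])

    # Distributed model, corresponding data, and a zero initial guess
    x = pylops_mpi.DistributedArray(global_shape=n * comm.Get_size())
    x[:] = np.ones(n)
    y = Op @ x

    x0 = pylops_mpi.DistributedArray(global_shape=n * comm.Get_size())
    x0[:] = 0

    # Optional callback invoked with the current DistributedArray at each iteration
    def callback(xk):
        pass  # e.g. record xk.local_array.copy() to monitor convergence

    # Invert for the model with CGLS
    xinv, istop, iit, r1norm, r2norm, cost = cgls(
        Op, y, x0, niter=100, damp=0.0, tol=1e-10, show=True, callback=callback
    )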
Notes