pylops_mpi.DistributedArray
- class pylops_mpi.DistributedArray(global_shape, base_comm=<mpi4py.MPI.Intracomm object>, partition=Partition.SCATTER, axis=0, local_shapes=None, engine='numpy', dtype=<class 'numpy.float64'>)
Distributed NumPy Arrays
Multidimensional NumPy-like distributed arrays that bring NumPy arrays to high-performance computing.
Warning
When the partition of the DistributedArray is set to pylops_mpi.Partition.BROADCAST, the array is expected to be identical across all ranks. Any rank-local modification will be lost: when the arrays are synchronized, the values held by rank 0 are broadcast to all other ranks, overwriting whatever changes those ranks made.
Parameters:
- global_shape : tuple or int
  Shape of the global array.
- base_comm : mpi4py.MPI.Comm, optional
  MPI Communicator over which the array is distributed. Defaults to mpi4py.MPI.COMM_WORLD.
- partition : Partition, optional
  Broadcast or scatter the array. Defaults to Partition.SCATTER.
- axis : int, optional
  Axis along which distribution occurs. Defaults to 0.
- local_shapes : list, optional
  List of tuples or integers representing the local shape at each rank.
- engine : str, optional
  Engine used to store the array (numpy or cupy).
- dtype : str, optional
  Type of elements in the input array. Defaults to numpy.float64.
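To build intuition for how a SCATTER partition relates global_shape to the per-rank local shapes, the sketch below computes the split serially with plain NumPy. It is an illustration, not the library's implementation: the convention that lower ranks absorb the remainder elements (matching numpy.array_split along the partition axis) is an assumption; pass local_shapes explicitly when you need an exact layout.

```python
import numpy as np

def scatter_local_shapes(global_shape, axis, nranks):
    """Illustrative split of a global shape along `axis` across ranks.

    Assumption (not taken from the API reference): the scatter follows
    the common convention where the first `rem` ranks receive one extra
    element, i.e. the same split as numpy.array_split along `axis`.
    """
    base, rem = divmod(global_shape[axis], nranks)
    shapes = []
    for rank in range(nranks):
        local = list(global_shape)
        local[axis] = base + (1 if rank < rem else 0)
        shapes.append(tuple(local))
    return shapes

# A (10, 4) global array scattered along axis=0 over 3 ranks
print(scatter_local_shapes((10, 4), axis=0, nranks=3))
# → [(4, 4), (3, 4), (3, 4)]
```

Note that the local shapes always re-assemble to the global shape along the partition axis, which is what asarray() relies on to build the global view.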
Methods
- __init__(global_shape[, base_comm, ...])
- add(dist_array): Distributed addition of arrays.
- add_ghost_cells([cells_front, cells_back]): Add ghost cells to the DistributedArray along the axis of partition at each rank.
- asarray(): Global view of the array.
- conj(): Distributed conj() method.
- copy(): Creates a copy of the DistributedArray.
- dot(dist_array): Distributed dot product.
- iadd(dist_array): Distributed in-place addition of arrays.
- multiply(dist_array): Distributed element-wise multiplication.
- norm([ord, axis]): Distributed numpy.linalg.norm method.
- ravel([order]): Return a flattened DistributedArray.
- to_dist(x[, base_comm, partition, axis, ...]): Convert a global array to a DistributedArray.
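Reductions such as dot() and norm() work by combining rank-local partial results across the communicator (an MPI allreduce in practice). The snippet below emulates that semantics serially with NumPy chunks standing in for the per-rank local arrays; it is a sketch of the idea, not pylops_mpi code, and the 3-way chunking is an arbitrary choice for illustration.

```python
import numpy as np

# Emulate the chunks a 3-rank SCATTER partition would hold
x = np.arange(10.0)
y = np.ones(10)
x_chunks = np.array_split(x, 3)   # stand-ins for each rank's local array
y_chunks = np.array_split(y, 3)

# dot(): each rank computes a local dot product, then the partial
# results are summed across ranks (an allreduce in the real class)
local_dots = [xc @ yc for xc, yc in zip(x_chunks, y_chunks)]
print(sum(local_dots))            # → 45.0, identical to x @ y

# norm(): local squared sums are reduced the same way before the sqrt
local_sq = [np.sum(xc ** 2) for xc in x_chunks]
print(np.sqrt(sum(local_sq)))     # equals np.linalg.norm(x)
```

Because only scalar partial results cross rank boundaries, these reductions never gather the full array on one rank, which is what makes them scale to arrays too large for a single node's memory.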
Examples using pylops_mpi.DistributedArray
- Multi-Dimensional Deconvolution