Documentation
Overview ¶
Package tensormpi wraps the Message Passing Interface for distributed memory data sharing across a collection of processors (procs).
It also contains some useful abstractions and error logging support in Go.
The wrapping code was initially copied from https://github.com/cpmech/gosl/mpi and significantly modified.
All standard Go types are supported, using the Apache Arrow tmpl generation tool. int is assumed to be 64-bit and is passed as []int, because that is typically more convenient than a sized integer type.
Index ¶
- func AllocN(n int) (st, end int, err error)
- func GatherTableRows(dest, src *table.Table, comm *mpi.Comm)
- func GatherTensorRows(dest, src tensor.Tensor, comm *mpi.Comm) error
- func GatherTensorRowsString(dest, src *tensor.String, comm *mpi.Comm) error
- func RandCheck(comm *mpi.Comm) error
- func ReduceTable(dest, src *table.Table, comm *mpi.Comm, op mpi.Op)
- func ReduceTensor(dest, src tensor.Tensor, comm *mpi.Comm, op mpi.Op) error
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func AllocN ¶
AllocN allocates n items to the current MPI proc based on WorldSize and WorldRank, returning the start and end (exclusive) index range for the current proc.
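The source of AllocN is not shown here, but the kind of even-split computation it describes can be sketched as follows; allocRange is a hypothetical helper, not part of the package:

```go
package main

import "fmt"

// allocRange sketches an even split of n items across size procs,
// returning the [start, end) range for the proc with the given rank.
// The first n % size procs each receive one extra item.
func allocRange(n, rank, size int) (start, end int) {
	per := n / size
	extra := n % size
	if rank < extra {
		start = rank * (per + 1)
		end = start + per + 1
	} else {
		start = extra*(per+1) + (rank-extra)*per
		end = start + per
	}
	return
}

func main() {
	// 10 items across 3 procs: [0, 4), [4, 7), [7, 10)
	for rank := 0; rank < 3; rank++ {
		st, ed := allocRange(10, rank, 3)
		fmt.Printf("proc %d: [%d, %d)\n", rank, st, ed)
	}
}
```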
func GatherTableRows ¶
GatherTableRows does an MPI AllGather on the given src table data, gathering into dest. dest will have np * src.Rows rows, filled with each processor's data in rank order. dest must be a clone of src: if it does not have the same number of columns, it will be configured from src.
func GatherTensorRows ¶
GatherTensorRows does an MPI AllGather on the given src tensor data, gathering into dest, using a row-based tensor organization (as in a table.Table). dest will have np * src.Rows rows, filled with each processor's data in rank order. dest must have the same overall shape as src at the start; rows will be enforced.
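The row-gather semantics described above can be illustrated with a single-process sketch (no MPI runtime required); gatherRows is a hypothetical stand-in for the AllGather, with each inner slice playing the role of one proc's rows:

```go
package main

import "fmt"

// gatherRows sketches the row-gather semantics: every proc contributes
// its rows, and the result holds np * rows total rows, concatenated in
// rank order, matching what each proc's dest would contain.
func gatherRows(perProc [][]float64) []float64 {
	var dest []float64
	for _, rows := range perProc { // iterate in rank order
		dest = append(dest, rows...)
	}
	return dest
}

func main() {
	// simulate 2 procs, each contributing 2 row values
	src := [][]float64{{1, 2}, {3, 4}}
	fmt.Println(gatherRows(src)) // [1 2 3 4]
}
```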
func GatherTensorRowsString ¶
GatherTensorRowsString does an MPI AllGather on the given String src tensor data, gathering into dest, using a row-based tensor organization (as in a table.Table). dest will have np * src.Rows rows, filled with each processor's data in rank order. dest must have the same overall shape as src at the start; rows will be enforced.
func RandCheck ¶
RandCheck checks that the random numbers currently being generated on each MPI processor are identical. Most emergent simulations depend on this property, so it is good to check it periodically.
func ReduceTable ¶
ReduceTable does an MPI AllReduce on the given src table data using the given operation, gathering into dest. Each processor must have the same table organization; the tensor values are aggregated directly across processors. dest will be configured as a clone of src if it is not already the same (cols & rows). Does nothing for strings.
func ReduceTensor ¶
ReduceTensor does an MPI AllReduce on the given src tensor data using the given operation, gathering into dest. dest must have the same overall shape as src; this will be enforced. IMPORTANT: src and dest must be different slices! Each processor must have the same shape and organization for this to make sense. Does nothing for strings.
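The reduce semantics above can be sketched in a single process; reduceSum is a hypothetical stand-in for an AllReduce with a sum op, with each inner slice playing the role of one proc's src:

```go
package main

import "fmt"

// reduceSum sketches an AllReduce with a sum operation: corresponding
// elements from each proc's src are aggregated into dest, which has the
// same shape as src but is a separate slice, as the docs require.
func reduceSum(perProc [][]float64) []float64 {
	dest := make([]float64, len(perProc[0])) // dest distinct from any src
	for _, src := range perProc {
		for i, v := range src {
			dest[i] += v
		}
	}
	return dest
}

func main() {
	// simulate 2 procs, each holding a same-shape tensor of 3 values
	procs := [][]float64{{1, 2, 3}, {10, 20, 30}}
	fmt.Println(reduceSum(procs)) // [11 22 33]
}
```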
Types ¶
This section is empty.