tensormpi: Message Passing Interface

The tensormpi package provides methods that support the use of MPI with tensor and table data structures, using the mpi package for the Go MPI wrappers.

You must set the mpi build tag for the package to actually build against the MPI library; the default is to build a dummy version that always has 1 proc of rank 0, with no-op versions of all the methods:

$ go build -tags mpi

Supported functionality:

  • Gathering table.Table and tensor.Tensor data across processors.

  • AllocN allocates n items to process across MPI processors.
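As a rough sketch, a typical MPI-enabled program initializes MPI, creates a communicator, and then passes that communicator to the tensormpi functions. The import path and the Init/Finalize/NewComm wrappers below are assumptions based on the companion mpi package's conventions; verify them against your version:

    package main

    import (
        "fmt"
        "log"

        "cogentcore.org/core/base/mpi" // assumed import path for the mpi wrappers
    )

    func main() {
        mpi.Init()           // assumed wrapper around MPI_Init
        defer mpi.Finalize() // assumed wrapper around MPI_Finalize

        // nil ranks => a communicator over all procs (assumed signature).
        comm, err := mpi.NewComm(nil)
        if err != nil {
            log.Fatal(err)
        }
        _ = comm // pass comm to the tensormpi functions

        fmt.Printf("proc %d of %d\n", mpi.WorldRank(), mpi.WorldSize())
    }

Build with the mpi tag and run under an MPI launcher, e.g. $ mpirun -np 4 ./myapp; without the tag, the dummy build runs as a single proc of rank 0.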

Documentation

Overview

Package tensormpi wraps the Message Passing Interface for distributed memory data sharing across a collection of processors (procs).

It also contains some useful abstractions and error logging support in Go.

The wrapping code was initially copied from https://github.com/cpmech/gosl/mpi and significantly modified.

All standard Go types are supported, using the Apache Arrow tmpl code-generation tool. Int is assumed to be 64-bit and is passed as []int, because that is typically more convenient.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func AllocN

func AllocN(n int) (st, end int, err error)

AllocN allocates n items to the current MPI proc based on WorldSize and WorldRank. It returns the start and end (exclusive) range of items for the current proc.
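For example, AllocN can partition a loop over n items so that each rank handles only its own slice; a minimal sketch (process is a hypothetical per-item function):

    // Split 1000 items across all MPI procs; each rank gets its own range.
    st, end, err := tensormpi.AllocN(1000)
    if err != nil {
        log.Fatal(err)
    }
    for i := st; i < end; i++ {
        process(i) // hypothetical: do this rank's share of the work
    }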

func GatherTableRows

func GatherTableRows(dest, src *table.Table, comm *mpi.Comm)

GatherTableRows does an MPI AllGather on the given src table data, gathering into dest. dest will have np * src.Rows rows, filled with each processor's data, in order. dest should be a clone of src; if it does not have the same number of columns, it will be configured from src.
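A minimal sketch of gathering per-rank table rows, assuming table.Table's Clone method and a comm created via the mpi package:

    // Each rank fills src with its local rows, then gathers all ranks' rows.
    dest := src.Clone() // same columns as src, as required
    tensormpi.GatherTableRows(dest, src, comm)
    // dest now holds WorldSize * src.Rows rows, ordered by rank.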

func GatherTensorRows

func GatherTensorRows(dest, src tensor.Tensor, comm *mpi.Comm) error

GatherTensorRows does an MPI AllGather on the given src tensor data, gathering into dest, using a row-based tensor organization (as in a table.Table). dest will have np * src.Rows rows, filled with each processor's data, in order. dest must have the same overall shape as src at the start; its number of rows will be enforced (set to the gathered total).
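A sketch for a float tensor, assuming a tensor.NewFloat64 constructor that takes a shape slice:

    // Each rank computes localRows rows of data, then gathers all of them.
    src := tensor.NewFloat64([]int{localRows, nCols})
    // ... fill this rank's rows of src ...
    dest := tensor.NewFloat64([]int{localRows, nCols}) // rows resized to np * localRows
    if err := tensormpi.GatherTensorRows(dest, src, comm); err != nil {
        log.Fatal(err)
    }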

func GatherTensorRowsString

func GatherTensorRowsString(dest, src *tensor.String, comm *mpi.Comm) error

GatherTensorRowsString does an MPI AllGather on the given String src tensor data, gathering into dest, using a row-based tensor organization (as in a table.Table). dest will have np * src.Rows rows, filled with each processor's data, in order. dest must have the same overall shape as src at the start; its number of rows will be enforced (set to the gathered total).

func RandCheck

func RandCheck(comm *mpi.Comm) error

RandCheck checks that the current random numbers generated on each MPI processor are identical. Most emergent simulations depend on this being true, so it is good to check periodically to ensure the random streams remain synchronized.
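For example, a simulation might call RandCheck every few epochs; a minimal sketch:

    // Periodically verify that all ranks' random streams are in lockstep.
    if err := tensormpi.RandCheck(comm); err != nil {
        log.Println("MPI procs have diverged random streams:", err)
    }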

func ReduceTable

func ReduceTable(dest, src *table.Table, comm *mpi.Comm, op mpi.Op)

ReduceTable does an MPI AllReduce on the given src table data using the given operation, gathering into dest. Each processor must have the same table organization: the tensor values are just aggregated directly across processors. dest will be configured as a clone of src if it is not already the same (cols & rows). Does nothing for strings.
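A sketch of summing per-rank statistics, assuming the mpi package defines an OpSum reduction constant:

    // Sum each numeric cell of src across all procs into dest.
    dest := src.Clone() // same cols & rows as src
    tensormpi.ReduceTable(dest, src, comm, mpi.OpSum) // OpSum assumed; use the op your mpi package defines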

func ReduceTensor

func ReduceTensor(dest, src tensor.Tensor, comm *mpi.Comm, op mpi.Op) error

ReduceTensor does an MPI AllReduce on the given src tensor data, using the given operation, gathering into dest. dest must have the same overall shape as src, which will be enforced. IMPORTANT: src and dest must be different tensors, with different underlying slices! Each processor must have the same shape and organization for this to make sense. Does nothing for strings.
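A sketch of an element-wise maximum across procs, assuming a tensor Clone method and an mpi.OpMax constant; note that dest is a separate tensor, as required:

    // dest must not share underlying storage with src.
    dest := src.Clone()
    if err := tensormpi.ReduceTensor(dest, src, comm, mpi.OpMax); err != nil {
        log.Fatal(err)
    }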

Types

This section is empty.
