Documentation ¶
Overview ¶
Package tensor provides efficient, generic n-dimensional arrays in Go. It also provides functions and methods commonly used in arithmetic, comparison and linear algebra operations.
Example (AsDenseDiag) ¶
The AsDenseDiag construction option creates a dense diagonal matrix from the input, either a slice or a tensor. The resulting shape is automatically inferred from the input vector.
This is like Numpy's `diag()` function, except not stupid. Numpy's `diag()` has been a cause of errors because it does two different things depending on the input's dimensionality, so applying it twice returns the original vector:
>>> np.diag(np.diag(np.array([1,2,3])))
array([1,2,3])
T := New(WithShape(3), WithBacking([]int{1, 2, 3}))
T1 := New(AsDenseDiag(T))
fmt.Printf("T1:\n%v", T1)

T2 := New(AsDenseDiag([]float64{3.14, 6.28, 11111}))
fmt.Printf("T2:\n%v", T2)
Output:

T1:
⎡1 0 0⎤
⎢0 2 0⎥
⎣0 0 3⎦
T2:
⎡ 3.14 0 0⎤
⎢ 0 6.28 0⎥
⎣ 0 0 11111⎦
Example (AsFortran) ¶
The AsFortran construction option is a bit finicky.
// Here the data is passed in and directly used without changing the underlying data.
T0 := New(WithShape(2, 3), WithBacking([]float64{0, 1, 2, 3, 4, 5}), AsFortran(nil))
fmt.Printf("T0:\n%vData: %v\n\n", T0, T0.Data())

// Here the data is passed into the AsFortran construction option, and it assumes that
// the data is already in row-major form. Therefore a transpose will be performed.
T1 := New(WithShape(2, 3), AsFortran([]float64{0, 1, 2, 3, 4, 5}))
fmt.Printf("T1:\n%vData: %v\n\n", T1, T1.Data())

// Further example of how AsFortran works:
orig := New(WithShape(2, 3), WithBacking([]float64{0, 1, 2, 3, 4, 5}))
T2 := New(WithShape(2, 3), AsFortran(orig))
fmt.Printf("Original\n%vData: %v\n", orig, orig.Data())
fmt.Printf("T2:\n%vData: %v\n", T2, T2.Data())
Output:

T0:
⎡0 2 4⎤
⎣1 3 5⎦
Data: [0 1 2 3 4 5]

T1:
⎡0 1 2⎤
⎣3 4 5⎦
Data: [0 3 1 4 2 5]

Original
⎡0 1 2⎤
⎣3 4 5⎦
Data: [0 1 2 3 4 5]
T2:
⎡0 1 2⎤
⎣3 4 5⎦
Data: [0 3 1 4 2 5]
Example (Basics) ¶
This example showcases the very basics of the package.
// Create a (2, 2)-Matrix of integers
a := New(WithShape(2, 2), WithBacking([]int{1, 2, 3, 4}))
fmt.Printf("a:\n%v\n", a)

// Create a (2, 3, 4)-tensor of float32s
b := New(WithBacking(Range(Float32, 0, 24)), WithShape(2, 3, 4))
fmt.Printf("b:\n%1.1f", b)

// Accessing data
x, _ := b.At(0, 1, 2) // in Numpy syntax: b[0,1,2]
fmt.Printf("x: %1.1f\n\n", x)

// Setting data
b.SetAt(float32(1000), 0, 1, 2)
fmt.Printf("b:\n%v", b)
Output:

a:
⎡1 2⎤
⎣3 4⎦

b:
⎡ 0.0 1.0 2.0 3.0⎤
⎢ 4.0 5.0 6.0 7.0⎥
⎣ 8.0 9.0 10.0 11.0⎦

⎡12.0 13.0 14.0 15.0⎤
⎢16.0 17.0 18.0 19.0⎥
⎣20.0 21.0 22.0 23.0⎦

x: 6.0

b:
⎡ 0 1 2 3⎤
⎢ 4 5 1000 7⎥
⎣ 8 9 10 11⎦

⎡ 12 13 14 15⎤
⎢ 16 17 18 19⎥
⎣ 20 21 22 23⎦
Example (DifferingDataOrders) ¶
This example showcases interactions between tensors of different data orders.
// Create a (2, 3)-matrix with the standard row-major backing
T0 := New(WithShape(2, 3), WithBacking(Range(Int, 0, 6)))
// Create a (2, 3)-matrix with a col-major backing
T1 := New(WithShape(2, 3), WithBacking(Range(Int, 0, 6)), AsFortran(nil))
T2, _ := Add(T0, T1)
fmt.Printf("T0:\n%vT1:\n%vT2:\n%vT2 Data Order: %v\n\n", T0, T1, T2, T2.DataOrder())

// The result's data order is highly dependent on the order of operation.
// It will take after the first operand.
T0 = New(WithShape(2, 3), WithBacking(Range(Int, 1, 7)), AsFortran(nil)) // col-major backing
T1 = New(WithShape(2, 3), WithBacking(Range(Int, 1, 7)))                 // standard row-major backing
T2, _ = Add(T0, T1)
fmt.Printf("T0:\n%vT1:\n%vT2:\n%vT2 Data Order: %v\n\n", T0, T1, T2, T2.DataOrder())

reuse := New(WithShape(2, 3), WithBacking([]int{1000, 1000, 1000, 1000, 1000, 1000}))
fmt.Printf("reuse Data Order: %v\n", reuse.DataOrder())
T2, _ = Add(T0, T1, WithReuse(reuse))
fmt.Printf("T2:\n%vT2 Data Order: %v\n\n", T2, T2.DataOrder())
Output:

T0:
⎡0 1 2⎤
⎣3 4 5⎦
T1:
⎡0 2 4⎤
⎣1 3 5⎦
T2:
⎡ 0 3 6⎤
⎣ 4 7 10⎦
T2 Data Order: Contiguous, RowMajor

T0:
⎡1 3 5⎤
⎣2 4 6⎦
T1:
⎡1 2 3⎤
⎣4 5 6⎦
T2:
⎡ 2 5 8⎤
⎣ 6 9 12⎦
T2 Data Order: Contiguous, ColMajor

reuse Data Order: Contiguous, RowMajor
T2:
⎡ 2 5 8⎤
⎣ 6 9 12⎦
T2 Data Order: Contiguous, ColMajor
Example (Extension) ¶
package main

import (
	"fmt"
	"reflect"

	"github.com/pkg/errors"
	"gorgonia.org/tensor"
)

// In this example, we want to create and handle a tensor of *MyType.

// MyType is our custom type.
type MyType struct {
	x, y int
}

func (T MyType) Format(s fmt.State, c rune) { fmt.Fprintf(s, "(%d, %d)", T.x, T.y) }

// MyDtype is the Dtype of MyType. This value is populated in the init() function below.
var MyDtype tensor.Dtype

// MyEngine supports additions of MyType, as well as other Dtypes.
type MyEngine struct {
	tensor.StdEng
}

// For simplicity's sake, we only want to handle MyType-MyType or MyType-Int interactions.
// Also, we only expect Dense tensors.
// You're of course free to define your own rules.

// Add adds two tensors.
func (e MyEngine) Add(a, b tensor.Tensor, opts ...tensor.FuncOpt) (retVal tensor.Tensor, err error) {
	switch a.Dtype() {
	case MyDtype:
		switch b.Dtype() {
		case MyDtype:
			data := a.Data().([]*MyType)
			datb := b.Data().([]*MyType)
			for i, v := range data {
				v.x += datb[i].x
				v.y += datb[i].y
			}
			return a, nil
		case tensor.Int:
			data := a.Data().([]*MyType)
			datb := b.Data().([]int)
			for i, v := range data {
				v.x += datb[i]
				v.y += datb[i]
			}
			return a, nil
		}
	case tensor.Int:
		switch b.Dtype() {
		case MyDtype:
			data := a.Data().([]int)
			datb := b.Data().([]*MyType)
			for i, v := range datb {
				v.x += data[i]
				v.y += data[i]
			}
			return b, nil // the mutated operand is b in this case
		default:
			return e.StdEng.Add(a, b, opts...)
		}
	default:
		return e.StdEng.Add(a, b, opts...)
	}
	return nil, errors.New("Unreachable")
}

func init() {
	MyDtype = tensor.Dtype{reflect.TypeOf(&MyType{})}
}

func main() {
	T := tensor.New(tensor.WithEngine(MyEngine{}),
		tensor.WithShape(2, 2),
		tensor.WithBacking([]*MyType{
			&MyType{0, 0}, &MyType{0, 1},
			&MyType{1, 0}, &MyType{1, 1},
		}))
	ones := tensor.New(tensor.WithShape(2, 2),
		tensor.WithBacking([]int{1, 1, 1, 1}),
		tensor.WithEngine(MyEngine{}))
	T2, _ := T.Add(ones)
	fmt.Printf("T:\n%+v", T)
	fmt.Printf("T2:\n%+v", T2)
}
Output:

T:
Matrix (2, 2) [2 1]
⎡(1, 1) (1, 2)⎤
⎣(2, 1) (2, 2)⎦
T2:
Matrix (2, 2) [2 1]
⎡(1, 1) (1, 2)⎤
⎣(2, 1) (2, 2)⎦
Example (IteratorRowmajor) ¶
This is an example of how to use `IteratorFromDense` with a row-major Dense tensor.
T := New(WithShape(2, 3), WithBacking([]float64{0, 1, 2, 3, 4, 5}))
it := IteratorFromDense(T)
fmt.Printf("T:\n%v\n", T)

for i, err := it.Start(); err == nil; i, err = it.Next() {
	fmt.Printf("i: %d, coord: %v\n", i, it.Coord())
}
Output:

T:
⎡0 1 2⎤
⎣3 4 5⎦

i: 0, coord: [0 1]
i: 1, coord: [0 2]
i: 2, coord: [1 0]
i: 3, coord: [1 1]
i: 4, coord: [1 2]
i: 5, coord: [0 0]
Example (IteratorcolMajor) ¶
This is an example of using `IteratorFromDense` on a col-major Dense tensor. More importantly, this example shows the order of the iteration.
T := New(WithShape(2, 3), WithBacking([]float64{0, 1, 2, 3, 4, 5}), AsFortran(nil))
it := IteratorFromDense(T)
fmt.Printf("T:\n%v\n", T)

for i, err := it.Start(); err == nil; i, err = it.Next() {
	fmt.Printf("i: %d, coord: %v\n", i, it.Coord())
}
Output:

T:
⎡0 2 4⎤
⎣1 3 5⎦

i: 0, coord: [0 1]
i: 2, coord: [0 2]
i: 4, coord: [1 0]
i: 1, coord: [1 1]
i: 3, coord: [1 2]
i: 5, coord: [0 0]
Example (Sparse_advanced) ¶
xs := []int{1, 2, 6, 8}
ys := []int{1, 2, 1, 6}
vals := []int16{3, 1, 4, 1}

S := CSCFromCoord(Shape{9, 7}, xs, ys, vals)
T := New(WithShape(9, 7), Of(Int16))     // dense
Reuse := New(WithShape(9, 7), Of(Int16)) // reuse must be a *Dense because the result will always be a dense

Result, _ := Add(S, T, WithReuse(Reuse))
fmt.Printf("Operations involving sparse tensors also do take the usual function options like Reuse:\n%+#s\nResult == Reuse: %t", Result, Result == Reuse)
Output:

Operations involving sparse tensors also do take the usual function options like Reuse:
Matrix (9, 7) [7 1]
⎡0 0 0 0 0 0 0⎤
⎢0 3 0 0 0 0 0⎥
⎢0 0 1 0 0 0 0⎥
⎢0 0 0 0 0 0 0⎥
⎢0 0 0 0 0 0 0⎥
⎢0 0 0 0 0 0 0⎥
⎢0 4 0 0 0 0 0⎥
⎢0 0 0 0 0 0 0⎥
⎣0 0 0 0 0 0 1⎦
Result == Reuse: true
Example (Sparse_basics) ¶
xs := []int{1, 2, 6, 8}
ys := []int{1, 2, 1, 6}
vals := []float32{3, 1, 4, 1}

S := CSCFromCoord(Shape{9, 7}, xs, ys, vals)
T := New(WithShape(9, 7), Of(Float32)) // dense

Result, _ := Add(S, T)
fmt.Printf("When adding a sparse tensor to a dense tensor, the result is of %T:\n=============================================================================\n%+#s\n", Result, Result)

Result, _ = Add(T, S)
fmt.Printf("And vice versa - %T\n=========================\n%+#s\n", Result, Result)
Output:

When adding a sparse tensor to a dense tensor, the result is of *tensor.Dense:
=============================================================================
Matrix (9, 7) [7 1]
⎡0 0 0 0 0 0 0⎤
⎢0 3 0 0 0 0 0⎥
⎢0 0 1 0 0 0 0⎥
⎢0 0 0 0 0 0 0⎥
⎢0 0 0 0 0 0 0⎥
⎢0 0 0 0 0 0 0⎥
⎢0 4 0 0 0 0 0⎥
⎢0 0 0 0 0 0 0⎥
⎣0 0 0 0 0 0 1⎦

And vice versa - *tensor.Dense
=========================
Matrix (9, 7) [7 1]
⎡0 0 0 0 0 0 0⎤
⎢0 3 0 0 0 0 0⎥
⎢0 0 1 0 0 0 0⎥
⎢0 0 0 0 0 0 0⎥
⎢0 0 0 0 0 0 0⎥
⎢0 0 0 0 0 0 0⎥
⎢0 4 0 0 0 0 0⎥
⎢0 0 0 0 0 0 0⎥
⎣0 0 0 0 0 0 1⎦
Example (StackExtension) ¶
package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

// LongStruct is a type that is an arbitrarily long struct
type LongStruct struct {
	a, b, c, d, e uint64
}

// Format implements fmt.Formatter for easier-to-read output of data
func (ls LongStruct) Format(s fmt.State, c rune) {
	fmt.Fprintf(s, "{a: %d, b: %d, c: %d, d: %d, e: %d}", ls.a, ls.b, ls.c, ls.d, ls.e)
}

type s int

func (ss s) Start() int { return int(ss) }
func (ss s) End() int   { return int(ss) + 1 }
func (ss s) Step() int  { return 1 }

func main() {
	// For documentation if you're reading this on godoc:
	//
	// type LongStruct struct {
	//	a, b, c, d, e uint64
	// }
	T := tensor.New(tensor.WithShape(2, 2),
		tensor.WithBacking([]LongStruct{
			LongStruct{0, 0, 0, 0, 0},
			LongStruct{1, 1, 1, 1, 1},
			LongStruct{2, 2, 2, 2, 2},
			LongStruct{3, 3, 3, 3, 3},
		}),
	)
	S, _ := T.Slice(nil, s(1)) // s is a type that implements tensor.Slice
	T2 := tensor.New(tensor.WithShape(2, 2),
		tensor.WithBacking([]LongStruct{
			LongStruct{10, 10, 10, 10, 10},
			LongStruct{11, 11, 11, 11, 11},
			LongStruct{12, 12, 12, 12, 12},
			LongStruct{13, 13, 13, 13, 13},
		}),
	)
	S2, _ := T2.Slice(nil, s(0))

	// an alternative would be something like this
	// T3, _ := S.(*tensor.Dense).Stack(1, S2.(*tensor.Dense))
	T3, _ := tensor.Stack(1, S, S2)
	fmt.Printf("Stacked:\n%v", T3)
}
Output:

Stacked:
⎡ {a: 1, b: 1, c: 1, d: 1, e: 1} {a: 10, b: 10, c: 10, d: 10, e: 10}⎤
⎣ {a: 3, b: 3, c: 3, d: 3, e: 3} {a: 12, b: 12, c: 12, d: 12, e: 12}⎦
Example (Sum_Sliced) ¶
T := New(WithShape(4, 4), WithBacking([]int{
	1, 2, 3, 4,
	5, 6, 7, 8,
	1, 2, 3, 4,
	5, 6, 7, 8,
}))
s, _ := T.Slice(S(1, 3), S(1, 3))
sum, _ := Sum(s)
fmt.Printf("T:\n%v\nsliced:\n%v\nSum: %v", T, s, sum)
Output:

T:
⎡1 2 3 4⎤
⎢5 6 7 8⎥
⎢1 2 3 4⎥
⎣5 6 7 8⎦

sliced:
⎡6 7⎤
⎣2 3⎦

Sum: 18
Index ¶
- Constants
- Variables
- func BorrowBools(size int) []bool
- func BorrowInts(size int) []int
- func BroadcastStrides(destShape, srcShape Shape, destStrides, srcStrides []int) (retVal []int, err error)deprecated
- func CheckSlice(s Slice, size int) error
- func Copy(dst, src Tensor) error
- func DontUsePool()
- func Inner(a, b Tensor) (retVal interface{}, err error)
- func IsMonotonicInts(a []int) (monotonic bool, incr1 bool)
- func Itol(i int, shape Shape, strides []int) (coords []int, err error)
- func Ltoi(shape Shape, strides []int, coords ...int) (at int, err error)
- func MaskedReduce(t *Dense, retType Dtype, fn maskedReduceFn, axis ...int) interface{}
- func MaxInt(a, b int) int
- func MaxInts(is ...int) (retVal int)
- func MinInt(a, b int) int
- func ProdInts(a []int) (retVal int)
- func Random(dt Dtype, size int) interface{}
- func Range(dt Dtype, start, end int) interface{}
- func Register(a Dtype)
- func RegisterEq(a Dtype)
- func RegisterFloat(a Dtype)
- func RegisterNumber(a Dtype)
- func RegisterOrd(a Dtype)
- func ReturnBools(is []bool)
- func ReturnInts(is []int)
- func ReturnTensor(t Tensor)
- func SampleIndex(in interface{}) int
- func SliceDetails(s Slice, size int) (start, end, step int, err error)
- func SortIndex(in interface{}) (out []int)
- func SumInts(a []int) (retVal int)
- func ToMat64(t *Dense, opts ...FuncOpt) (retVal *mat.Dense, err error)
- func TransposeIndex(i int, oldShape, pattern, oldStrides, newStrides []int) int
- func UnsafePermute(pattern []int, xs ...[]int) (err error)
- func UntransposeIndex(i int, oldShape, pattern, oldStrides, newStrides []int) int
- func Use(b BLAS)
- func UsePool()
- type AP
- func (ap *AP) C() bool
- func (ap *AP) Clone() (retVal AP)
- func (ap *AP) CloneTo(dest *AP)
- func (ap *AP) DataOrder() DataOrder
- func (ap *AP) Dims() int
- func (ap *AP) F() bool
- func (ap *AP) Format(state fmt.State, c rune)
- func (ap *AP) Init(shape Shape, strides []int)
- func (ap *AP) IsColVec() bool
- func (ap *AP) IsMatrix() bool
- func (ap *AP) IsRowVec() bool
- func (ap *AP) IsScalar() bool
- func (ap *AP) IsScalarEquiv() bool
- func (ap *AP) IsVector() bool
- func (ap *AP) IsVectorLike() bool
- func (ap *AP) IsZero() bool
- func (ap *AP) S(size int, slices ...Slice) (newAP AP, ndStart, ndEnd int, err error)
- func (ap *AP) SetShape(s ...int)
- func (ap *AP) Shape() Shape
- func (ap *AP) Size() int
- func (ap *AP) Strides() []int
- func (ap *AP) String() string
- func (ap *AP) T(axes ...int) (retVal AP, a []int, err error)
- type Abser
- type Adder
- type Argmaxer
- type Argminer
- type BLAS
- type BitMap
- type Boolable
- type ByIndiceser
- type CS
- func (t *CS) Apply(fn interface{}, opts ...FuncOpt) (Tensor, error)
- func (t *CS) AsCSC()
- func (t *CS) AsCSR()
- func (t *CS) At(coord ...int) (interface{}, error)
- func (a *CS) Cap() int
- func (t *CS) Clone() interface{}
- func (a CS) Data() interface{}
- func (t *CS) DataOrder() DataOrder
- func (t *CS) DataSize() int
- func (t *CS) Dense() *Dense
- func (t *CS) Dims() int
- func (t *CS) Dtype() Dtype
- func (t *CS) Engine() Engine
- func (t *CS) Eq(other interface{}) bool
- func (t *CS) Format(s fmt.State, c rune)
- func (a *CS) Get(i int) interface{}
- func (t *CS) GobDecode(p []byte) (err error)
- func (t *CS) GobEncode() (p []byte, err error)
- func (t *CS) Indices() []int
- func (t *CS) Indptr() []int
- func (t *CS) IsManuallyManaged() bool
- func (t *CS) IsNativelyAccessible() bool
- func (t *CS) IsScalar() bool
- func (t *CS) Iterator() Iterator
- func (a *CS) Len() int
- func (t *CS) MemSize() uintptr
- func (a *CS) Memset(x interface{}) error
- func (t *CS) NonZeroes() int
- func (t *CS) ReadNpy(r io.Reader) error
- func (t *CS) RequiresIterator() bool
- func (t *CS) Reshape(...int) error
- func (t *CS) ScalarValue() interface{}
- func (a *CS) Set(i int, x interface{})
- func (t *CS) SetAt(v interface{}, coord ...int) error
- func (t *CS) Shape() Shape
- func (t *CS) Size() int
- func (t *CS) Slice(...Slice) (View, error)
- func (t *CS) Strides() []int
- func (t *CS) String() string
- func (t *CS) T(axes ...int) error
- func (t *CS) Transpose() error
- func (t *CS) UT()
- func (t *CS) Uintptr() uintptr
- func (t *CS) WriteNpy(w io.Writer) error
- func (a CS) Zero()
- type Cbrter
- type Clamper
- type Cloner
- type Concater
- type ConsOpt
- func AsDenseDiag(backing interface{}) ConsOpt
- func AsFortran(backing interface{}, argMask ...[]bool) ConsOpt
- func FromMemory(ptr uintptr, memsize uintptr) ConsOpt
- func FromScalar(x interface{}, argMask ...[]bool) ConsOpt
- func Of(a Dtype) ConsOpt
- func WithBacking(x interface{}, argMask ...[]bool) ConsOpt
- func WithEngine(e Engine) ConsOpt
- func WithMask(x interface{}) ConsOpt
- func WithShape(dims ...int) ConsOpt
- type Cuber
- type DataOrder
- type Dataer
- type Dense
- func FromArrowArray(a arrowArray.Interface) *Dense
- func FromArrowTensor(a arrowTensor.Interface) *Dense
- func FromMat64(m *mat.Dense, opts ...FuncOpt) *Dense
- func I(dt Dtype, r, c, k int) *Dense
- func New(opts ...ConsOpt) *Dense
- func NewDense(dt Dtype, shape Shape, opts ...ConsOpt) *Dense
- func Ones(dt Dtype, shape ...int) *Dense
- func (t *Dense) Add(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) AddScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) Apply(fn interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func (t *Dense) Argmax(axis int) (retVal *Dense, err error)
- func (t *Dense) Argmin(axis int) (retVal *Dense, err error)
- func (t *Dense) At(coords ...int) (interface{}, error)
- func (a *Dense) Cap() int
- func (t *Dense) Clone() interface{}
- func (t *Dense) ClumpMasked() []Slice
- func (t *Dense) ClumpUnmasked() []Slice
- func (t *Dense) Concat(axis int, Ts ...*Dense) (retVal *Dense, err error)
- func (t *Dense) CopyTo(other *Dense) error
- func (t *Dense) Data() interface{}
- func (t *Dense) DataSize() int
- func (t *Dense) Div(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) DivScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) Dtype() Dtype
- func (t *Dense) ElEq(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) ElEqScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) ElNe(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) ElNeScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) Engine() Engine
- func (t *Dense) Eq(other interface{}) bool
- func (t *Dense) FBDecode(buf []byte) error
- func (t *Dense) FBEncode() ([]byte, error)
- func (t *Dense) FillValue() interface{}
- func (t *Dense) Filled(val ...interface{}) (interface{}, error)
- func (t *Dense) FilledInplace(val ...interface{}) (interface{}, error)
- func (t *Dense) FlatMaskedContiguous() []Slice
- func (t *Dense) FlatMaskedEdges() (int, int)
- func (t *Dense) FlatNotMaskedContiguous() []Slice
- func (t *Dense) FlatNotMaskedEdges() (int, int)
- func (t *Dense) Format(s fmt.State, c rune)
- func (a *Dense) Get(i int) interface{}
- func (t *Dense) GobDecode(p []byte) (err error)
- func (t *Dense) GobEncode() (p []byte, err error)
- func (t *Dense) Gt(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) GtScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) Gte(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) GteScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) HardenMask() bool
- func (t *Dense) Hstack(others ...*Dense) (*Dense, error)
- func (t *Dense) Info() *AP
- func (t *Dense) Inner(other Tensor) (retVal interface{}, err error)
- func (t *Dense) IsManuallyManaged() bool
- func (t *Dense) IsMasked() bool
- func (t *Dense) IsMaterializable() bool
- func (t *Dense) IsNativelyAccessible() bool
- func (t *Dense) IsView() bool
- func (t *Dense) Iterator() Iterator
- func (a *Dense) Len() int
- func (t *Dense) Lt(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) LtScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) Lte(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) LteScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) Mask() []bool
- func (t *Dense) MaskAt(coords ...int) (bool, error)
- func (t *Dense) MaskFromDense(tts ...*Dense)
- func (t *Dense) MaskFromSlice(x interface{})
- func (t *Dense) MaskedAll(axis ...int) interface{}
- func (t *Dense) MaskedAny(axis ...int) interface{}
- func (t *Dense) MaskedCount(axis ...int) interface{}
- func (t *Dense) MaskedEqual(val1 interface{}) (err error)
- func (t *Dense) MaskedGreater(val1 interface{}) (err error)
- func (t *Dense) MaskedGreaterEqual(val1 interface{}) (err error)
- func (t *Dense) MaskedInside(val1 interface{}, val2 interface{}) (err error)
- func (t *Dense) MaskedLess(val1 interface{}) (err error)
- func (t *Dense) MaskedLessEqual(val1 interface{}) (err error)
- func (t *Dense) MaskedNotEqual(val1 interface{}) (err error)
- func (t *Dense) MaskedOutside(val1 interface{}, val2 interface{}) (err error)
- func (t *Dense) MaskedValues(val1 interface{}, val2 interface{}, val3 ...interface{}) (err error)
- func (t *Dense) MatMul(other Tensor, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) MatVecMul(other Tensor, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) Materialize() Tensor
- func (t *Dense) Max(along ...int) (retVal *Dense, err error)
- func (a *Dense) MemSize() uintptr
- func (t *Dense) Memset(x interface{}) error
- func (t *Dense) Min(along ...int) (retVal *Dense, err error)
- func (t *Dense) Mod(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) ModScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) Mul(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) MulScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) Narrow(dim, start, length int) (View, error)
- func (t *Dense) NonMaskedCount(axis ...int) interface{}
- func (t *Dense) Norm(ord NormOrder, axes ...int) (retVal *Dense, err error)
- func (t *Dense) Outer(other Tensor, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) PBDecode(buf []byte) error
- func (t *Dense) PBEncode() ([]byte, error)
- func (t *Dense) Pow(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) PowScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) ReadCSV(r io.Reader, opts ...FuncOpt) (err error)
- func (t *Dense) ReadNpy(r io.Reader) (err error)
- func (t *Dense) Reduce(fn interface{}, axis int, defaultValue interface{}) (retVal *Dense, err error)
- func (t *Dense) Repeat(axis int, repeats ...int) (retVal Tensor, err error)
- func (t *Dense) RequiresIterator() bool
- func (t *Dense) ResetMask(val ...bool) error
- func (t *Dense) Reshape(dims ...int) error
- func (t *Dense) RollAxis(axis, start int, safe bool) (retVal *Dense, err error)
- func (t *Dense) SVD(uv, full bool) (s, u, v *Dense, err error)
- func (t *Dense) SafeT(axes ...int) (retVal *Dense, err error)
- func (t *Dense) ScalarValue() interface{}
- func (a *Dense) Set(i int, x interface{})
- func (t *Dense) SetAt(v interface{}, coords ...int) error
- func (t *Dense) SetMask(mask []bool)
- func (t *Dense) SetMaskAt(v bool, coords ...int) error
- func (t *Dense) SetMaskAtIndex(v bool, i int) error
- func (t *Dense) ShallowClone() *Dense
- func (t *Dense) Slice(slices ...Slice) (retVal View, err error)
- func (t *Dense) SliceInto(view *Dense, slices ...Slice) (retVal View, err error)
- func (t *Dense) SoftenMask() bool
- func (t *Dense) Stack(axis int, others ...*Dense) (retVal *Dense, err error)
- func (t *Dense) Sub(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) SubScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
- func (t *Dense) Sum(along ...int) (retVal *Dense, err error)
- func (t *Dense) T(axes ...int) (err error)
- func (t *Dense) TensorMul(other Tensor, axesA, axesB []int) (retVal *Dense, err error)
- func (t *Dense) Trace() (retVal interface{}, err error)
- func (t *Dense) Transpose() error
- func (t *Dense) UT()
- func (a *Dense) Uintptr() uintptr
- func (t *Dense) Vstack(others ...*Dense) (*Dense, error)
- func (t *Dense) WriteCSV(w io.Writer, formats ...string) (err error)
- func (t *Dense) WriteNpy(w io.Writer) (err error)
- func (t *Dense) Zero()
- type DenseStacker
- type DenseTensor
- type Densor
- type Diager
- type Diver
- type Dotter
- type Dtype
- type Dtyper
- type ElEqer
- type Engine
- type Eq
- type Exper
- type FMAer
- type FlatIterator
- func (it *FlatIterator) Chan() (retVal chan int)
- func (it *FlatIterator) Coord() []int
- func (it *FlatIterator) Done() bool
- func (it *FlatIterator) Next() (int, error)
- func (it *FlatIterator) NextInvalid() (int, int, error)
- func (it *FlatIterator) NextValid() (int, int, error)
- func (it *FlatIterator) NextValidity() (int, bool, error)
- func (it *FlatIterator) Reset()
- func (it *FlatIterator) SetForward()
- func (it *FlatIterator) SetReverse()
- func (it *FlatIterator) Slice(sli Slice) (retVal []int, err error)
- func (it *FlatIterator) Start() (int, error)
- type FlatMaskedIterator
- type FlatSparseIterator
- func (a FlatSparseIterator) Cap() int
- func (it *FlatSparseIterator) Coord() []int
- func (a FlatSparseIterator) Data() interface{}
- func (it *FlatSparseIterator) Done() bool
- func (a FlatSparseIterator) Get(i int) interface{}
- func (a FlatSparseIterator) Len() int
- func (a FlatSparseIterator) Memset(x interface{}) error
- func (it *FlatSparseIterator) Next() (int, error)
- func (it *FlatSparseIterator) NextInvalid() (int, int, error)
- func (it *FlatSparseIterator) NextValid() (int, int, error)
- func (it *FlatSparseIterator) NextValidity() (int, bool, error)
- func (it *FlatSparseIterator) Reset()
- func (a FlatSparseIterator) Set(i int, x interface{})
- func (it *FlatSparseIterator) SetForward()
- func (it *FlatSparseIterator) SetReverse()
- func (it *FlatSparseIterator) Start() (int, error)
- func (a FlatSparseIterator) Zero()
- type Float32Engine
- func (e Float32Engine) Add(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e Float32Engine) FMA(a, x, y Tensor) (retVal Tensor, err error)
- func (e Float32Engine) FMAScalar(a Tensor, x interface{}, y Tensor) (retVal Tensor, err error)
- func (e Float32Engine) Inner(a, b Tensor) (retVal float32, err error)
- type Float64Engine
- func (e Float64Engine) Add(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e Float64Engine) FMA(a, x, y Tensor) (retVal Tensor, err error)
- func (e Float64Engine) FMAScalar(a Tensor, x interface{}, y Tensor) (retVal Tensor, err error)
- func (e Float64Engine) Inner(a, b Tensor) (retVal float64, err error)
- type FuncOpt
- type Gteer
- type Gter
- type InfChecker
- type InnerProder
- type InnerProderF32
- type InnerProderF64
- type InvSqrter
- type Inver
- type Iterator
- type Kinder
- type Log10er
- type Log2er
- type Loger
- type Lteer
- type Lter
- type Mapper
- type MaskedTensor
- type MatMuler
- type MatVecMuler
- type MathError
- type MaxBetweener
- type Maxer
- type MemSetter
- type Memory
- type MemoryFlag
- type MinBetweener
- type Miner
- type Moder
- type Muler
- type MultIterator
- func (it *MultIterator) Coord() []int
- func (it *MultIterator) Done() bool
- func (it *MultIterator) LastIndex(j int) int
- func (it *MultIterator) Next() (int, error)
- func (it *MultIterator) NextInvalid() (int, int, error)
- func (it *MultIterator) NextValid() (int, int, error)
- func (it *MultIterator) NextValidity() (int, bool, error)
- func (it *MultIterator) Reset()
- func (it *MultIterator) SetForward()
- func (it *MultIterator) SetReverse()
- func (it *MultIterator) Start() (int, error)
- type NaNChecker
- type Neger
- type NoOpError
- type NonStdEngine
- type NormOrder
- type Oner
- type OpOpt
- type OptimizedReducer
- type OuterProder
- type Power
- type Proder
- type Reducer
- type Repeater
- type SVDer
- type ScalarRep
- type Shape
- func (s Shape) CalcStrides() []int
- func (s Shape) CalcStridesColMajor() []int
- func (s Shape) CalcStridesWithMask(mask []bool) []int
- func (s Shape) Clone() Shape
- func (s Shape) Concat(axis int, ss ...Shape) (newShape Shape, err error)
- func (s Shape) DimSize(d int) (size int, err error)
- func (s Shape) Dims() int
- func (s Shape) Eq(other Shape) bool
- func (s Shape) Format(st fmt.State, r rune)
- func (s Shape) IsColVec() bool
- func (s Shape) IsMatrix() bool
- func (s Shape) IsRowVec() bool
- func (s Shape) IsScalar() bool
- func (s Shape) IsScalarEquiv() bool
- func (s Shape) IsVector() bool
- func (s Shape) IsVectorLike() bool
- func (s Shape) Repeat(axis int, repeats ...int) (newShape Shape, finalRepeats []int, size int, err error)
- func (s Shape) S(slices ...Slice) (retVal Shape, err error)
- func (s Shape) TotalSize() int
- type Signer
- type Slice
- type Slicer
- type SoftMaxer
- type Sparse
- type SparseTensor
- type Sqrter
- type Squarer
- type Stacker
- type StdEng
- func (e StdEng) Abs(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Accessible(mem Memory) (Memory, error)
- func (e StdEng) Add(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) AddScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Alloc(size int64) (Memory, error)
- func (e StdEng) AllocAccessible() bool
- func (e StdEng) Argmax(t Tensor, axis int) (retVal Tensor, err error)
- func (e StdEng) Argmin(t Tensor, axis int) (retVal Tensor, err error)
- func (e StdEng) Cbrt(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Clamp(a Tensor, min, max interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Concat(t Tensor, axis int, others ...Tensor) (retVal Tensor, err error)
- func (e StdEng) Cube(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Diag(t Tensor) (retVal Tensor, err error)
- func (e StdEng) Div(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) DivScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Dot(x, y Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) ElEq(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) ElNe(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) EqScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Exp(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) FMA(a, x, y Tensor) (Tensor, error)
- func (e StdEng) FMAScalar(a Tensor, x interface{}, y Tensor) (Tensor, error)
- func (e StdEng) Free(mem Memory, size int64) error
- func (e StdEng) Gt(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) GtScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Gte(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) GteScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Inner(a, b Tensor) (retVal interface{}, err error)
- func (e StdEng) Inv(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) InvSqrt(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Log(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Log10(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Log2(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) LogSoftMax(x Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) LogSoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Lt(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) LtScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Lte(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) LteScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Map(fn interface{}, a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) MatMul(a, b, prealloc Tensor) (err error)
- func (e StdEng) MatVecMul(a, b, prealloc Tensor) (err error)
- func (e StdEng) Max(a Tensor, along ...int) (retVal Tensor, err error)
- func (e StdEng) MaxBetween(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) MaxBetweenScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Memclr(mem Memory)
- func (e StdEng) Memcpy(dst, src Memory) error
- func (e StdEng) Memset(mem Memory, val interface{}) error
- func (e StdEng) Min(a Tensor, along ...int) (retVal Tensor, err error)
- func (e StdEng) MinBetween(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) MinBetweenScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Mod(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) ModScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Mul(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) MulScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) NeScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Neg(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) OptimizedReduce(a Tensor, axis int, firstFn, lastFn, defaultFn, defaultValue interface{}, ...) (retVal Tensor, err error)
- func (e StdEng) Outer(a, b, prealloc Tensor) (err error)
- func (e StdEng) Pow(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) PowScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Reduce(fn interface{}, a Tensor, axis int, defaultValue interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Repeat(t Tensor, axis int, repeats ...int) (Tensor, error)
- func (e StdEng) RepeatReuse(t Tensor, reuse Tensor, axis int, repeats ...int) (Tensor, error)
- func (e StdEng) SVD(a Tensor, uv, full bool) (s, u, v Tensor, err error)
- func (e StdEng) SelectByIndices(a, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) SelectByIndicesB(input, outGrad, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Sign(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) SoftMax(x Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) SoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Sqrt(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Square(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) StackDense(t DenseTensor, axis int, others ...DenseTensor) (retVal DenseTensor, err error)
- func (e StdEng) Sub(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) SubScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Sum(a Tensor, along ...int) (retVal Tensor, err error)
- func (e StdEng) Tanh(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func (e StdEng) Trace(t Tensor) (retVal interface{}, err error)
- func (e StdEng) Transpose(a Tensor, expStrides []int) error
- func (e StdEng) WorksWith(order DataOrder) bool
- type Suber
- type Sumer
- type Tanher
- type Tensor
- func Abs(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func Add(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func Argmax(t Tensor, axis int) (retVal Tensor, err error)
- func Argmin(t Tensor, axis int) (retVal Tensor, err error)
- func ByIndices(a, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
- func ByIndicesB(a, b, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
- func Cbrt(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func Clamp(a Tensor, min interface{}, max interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func Concat(axis int, t Tensor, others ...Tensor) (retVal Tensor, err error)
- func Contract(a, b Tensor, aAxes, bAxes []int) (retVal Tensor, err error)
- func Cube(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func Diag(t Tensor) (retVal Tensor, err error)
- func Div(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func Dot(x, y Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func ElEq(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func ElNe(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func Exp(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func FMA(a Tensor, x interface{}, y Tensor) (retVal Tensor, err error)
- func Gt(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func Gte(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func Inv(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func InvSqrt(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func Log(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func Log10(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func Log2(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func LogSoftMax(x Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
- func LogSoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
- func Lt(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func Lte(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func MatMul(a, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func MatVecMul(a, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func Materialize(t Tensor) Tensor
- func MaxBetween(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func MinBetween(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func Mod(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func Mul(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func Neg(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func Outer(a, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func Pow(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func Repeat(t Tensor, axis int, repeats ...int) (retVal Tensor, err error)
- func RepeatReuse(t, reuse Tensor, axis int, repeats ...int) (retval Tensor, err error)
- func Sign(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func SoftMax(x Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
- func SoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
- func Sqrt(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func Square(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func Stack(axis int, t Tensor, others ...Tensor) (retVal Tensor, err error)
- func Sub(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)
- func Sum(t Tensor, along ...int) (retVal Tensor, err error)
- func T(t Tensor, axes ...int) (retVal Tensor, err error)
- func Tanh(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)
- func Transpose(t Tensor, axes ...int) (retVal Tensor, err error)
- type Tracer
- type Transposer
- type Triangle
- type View
- type Zeroer
- Bugs
Examples ¶
- Package (AsDenseDiag)
- Package (AsFortran)
- Package (Basics)
- Package (DifferingDataOrders)
- Package (Extension)
- Package (IteratorRowmajor)
- Package (IteratorcolMajor)
- Package (Sparse_advanced)
- Package (Sparse_basics)
- Package (StackExtension)
- Package (Sum_Sliced)
- Argmax
- Argmax (Sliced)
- Argmin
- ByIndices
- ByIndicesB
- Dense.Add (Basic)
- Dense.Add (Incr)
- Dense.Add (Reuse)
- Dense.Add (Reuse_operand)
- Dense.Add (Unsafe)
- Dense.AddScalar (Basic)
- Dense.AddScalar (Incr)
- Dense.AddScalar (Reuse)
- Dense.AddScalar (Unsafe)
- Dense.Apply
- Dense.Data
- Dense.Div (Basic)
- Dense.Div (Incr)
- Dense.Div (Reuse)
- Dense.Div (Reuse_operand)
- Dense.Div (Unsafe)
- Dense.DivScalar (Basic)
- Dense.DivScalar (Incr)
- Dense.DivScalar (Reuse)
- Dense.DivScalar (Unsafe)
- Dense.ElEq (Basic)
- Dense.ElEq (Reuse)
- Dense.ElEq (Unsafe)
- Dense.ElNe (Basic)
- Dense.ElNe (Reuse)
- Dense.ElNe (Unsafe)
- Dense.Gt (Basic)
- Dense.Gt (Reuse)
- Dense.Gt (Unsafe)
- Dense.Gte (Basic)
- Dense.Gte (Reuse)
- Dense.Gte (Unsafe)
- Dense.Hstack
- Dense.Lt (Basic)
- Dense.Lt (Reuse)
- Dense.Lt (Unsafe)
- Dense.Lte (Basic)
- Dense.Lte (Reuse)
- Dense.Lte (Unsafe)
- Dense.MatMul
- Dense.MatMul (Sliced)
- Dense.MatVecMul
- Dense.MatVecMul (RowMajorSliced)
- Dense.Mod (Basic)
- Dense.Mod (Incr)
- Dense.Mod (Reuse)
- Dense.Mod (Unsafe)
- Dense.ModScalar (Basic)
- Dense.Mul (Basic)
- Dense.Mul (Incr)
- Dense.Mul (Reuse)
- Dense.Mul (Reuse_operand)
- Dense.Mul (Unsafe)
- Dense.MulScalar (Basic)
- Dense.MulScalar (Incr)
- Dense.MulScalar (Reuse)
- Dense.MulScalar (Unsafe)
- Dense.Pow (Basic)
- Dense.Pow (Incr)
- Dense.Pow (Reuse)
- Dense.Pow (Unsafe)
- Dense.PowScalar (Basic)
- Dense.SVD
- Dense.Slice
- Dense.Slice (OneDimension)
- Dense.Slice (ViewMutation)
- Dense.Sub (Basic)
- Dense.Sub (Incr)
- Dense.Sub (Reuse)
- Dense.Sub (Reuse_operand)
- Dense.Sub (Unsafe)
- Dense.SubScalar (Basic)
- Dense.SubScalar (Incr)
- Dense.SubScalar (Reuse)
- Dense.SubScalar (Unsafe)
- Dense.Vstack
- Repeat (UncommonUses)
- RepeatReuse
- Sum
- Sum (Sliced)
- T
- T (Scalarlike)
- Transpose (Extension)
- View
Constants ¶
const AllAxes int = -1
const DEBUG = false
const (
PoolSize = 4096
)
Variables ¶
var (
	Bool       = Dtype{reflect.TypeOf(true)}
	Int        = Dtype{reflect.TypeOf(int(1))}
	Int8       = Dtype{reflect.TypeOf(int8(1))}
	Int16      = Dtype{reflect.TypeOf(int16(1))}
	Int32      = Dtype{reflect.TypeOf(int32(1))}
	Int64      = Dtype{reflect.TypeOf(int64(1))}
	Uint       = Dtype{reflect.TypeOf(uint(1))}
	Uint8      = Dtype{reflect.TypeOf(uint8(1))}
	Uint16     = Dtype{reflect.TypeOf(uint16(1))}
	Uint32     = Dtype{reflect.TypeOf(uint32(1))}
	Uint64     = Dtype{reflect.TypeOf(uint64(1))}
	Float32    = Dtype{reflect.TypeOf(float32(1))}
	Float64    = Dtype{reflect.TypeOf(float64(1))}
	Complex64  = Dtype{reflect.TypeOf(complex64(1))}
	Complex128 = Dtype{reflect.TypeOf(complex128(1))}
	String     = Dtype{reflect.TypeOf("")}

	// aliases
	Byte = Uint8

	// extras
	Uintptr       = Dtype{reflect.TypeOf(uintptr(0))}
	UnsafePointer = Dtype{reflect.TypeOf(unsafe.Pointer(&Uintptr))}
)
oh how nice it'd be if I could make them immutable
var TABCOUNT uint32 = 0
Functions ¶
func BorrowBools ¶
BorrowBools borrows a slice of bools from the pool. USE WITH CAUTION.
func BorrowInts ¶
BorrowInts borrows a slice of ints from the pool. USE WITH CAUTION.
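A minimal sketch of the borrow/return pattern (not one of the package's own examples): the slice comes from the pool, so its contents may be stale, and it should be handed back with ReturnInts when done.

xs := BorrowInts(4) // len(xs) == 4; contents may be stale, so overwrite before use
defer ReturnInts(xs)
for i := range xs {
	xs[i] = i * i
}
fmt.Println(xs) // [0 1 4 9]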
func BroadcastStrides
deprecated
func CheckSlice ¶
CheckSlice checks a slice to see if it's sane
func DontUsePool ¶
func DontUsePool()
DontUsePool makes sure the functions don't use the tensor pool provided. This is useful as certain applications don't lend themselves well to use of the pool. Examples of such applications would be one where many tensors of wildly different sizes are created all the time.
func Inner ¶
Inner finds the inner product of two vector Tensors. Both arguments to the function are expected to be vectors.
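A short sketch of typical usage (the concrete type of the result depends on the operands' Dtype; here it is a float64):

a := New(WithShape(3), WithBacking([]float64{1, 2, 3}))
b := New(WithShape(3), WithBacking([]float64{4, 5, 6}))
ip, _ := Inner(a, b)
fmt.Println(ip) // 32, i.e. 1*4 + 2*5 + 3*6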
func IsMonotonicInts ¶
IsMonotonicInts returns true if the slice of ints is monotonically increasing. It also returns incr1 as true if each successive element increases by exactly 1.
func Ltoi ¶
Ltoi is Location to Index. Provide a shape, strides, and a list of integers as coordinates, and it returns the index at which the element is located.
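For instance (a sketch; the strides here are the standard row-major strides computed by Shape.CalcStrides):

shape := Shape{2, 3}
strides := shape.CalcStrides() // [3 1]
at, _ := Ltoi(shape, strides, 1, 2)
fmt.Println(at) // 5

coords, _ := Itol(5, shape, strides)
fmt.Println(coords) // [1 2]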
func MaskedReduce ¶
MaskedReduce applies a reduction function of type maskedReduceFn to the mask, and returns either an int or another array.
func MaxInt ¶
MaxInt returns the larger of two ints. If both are equal, it returns the first.
func Random ¶
Random creates an array of random numbers of the given type. For complex Dtypes, the imaginary component will be 0.
This function is only useful in cases where the randomness is not vital.
func Range ¶
Range creates a ranged array of a given type. It panics if the Dtype is not supported or does not represent a naturally orderable type (strings, pointers, etc.). Note that the range algorithm is very simple: it only increments or decrements by 1. This means that for floating point types you cannot create a range with a 0.1 step, and for complex number types the imaginary part will always be 0i.
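A small sketch of typical usage (the returned interface{} wraps a slice of the requested Dtype):

xs := Range(Float64, 0, 5).([]float64)
fmt.Println(xs) // [0 1 2 3 4]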
func RegisterEq ¶
func RegisterEq(a Dtype)
RegisterEq registers a dtype as a type that can be compared for equality
func RegisterFloat ¶
func RegisterFloat(a Dtype)
func RegisterNumber ¶
func RegisterNumber(a Dtype)
RegisterNumber is a function required to register a new numerical Dtype. This package provides the following Dtypes:
Int Int8 Int16 Int32 Int64 Uint Uint8 Uint16 Uint32 Uint64 Float32 Float64 Complex64 Complex128
If a Dtype that is registered already exists on the list, it will not be added to the list.
func RegisterOrd ¶
func RegisterOrd(a Dtype)
RegisterOrd registers a dtype as a type that can be ordered (compared with <, <=, >, >=)
func ReturnBools ¶
func ReturnBools(is []bool)
ReturnBools returns a slice from the pool. USE WITH CAUTION.
func ReturnInts ¶
func ReturnInts(is []int)
ReturnInts returns a slice from the pool. USE WITH CAUTION.
func ReturnTensor ¶
func ReturnTensor(t Tensor)
ReturnTensor returns a Tensor to its respective pool. USE WITH CAUTION
func SampleIndex ¶
func SampleIndex(in interface{}) int
SampleIndex samples a slice or a Tensor. TODO: tidy this up.
func SliceDetails ¶
SliceDetails is a function that takes a Slice and returns its details (start, end, step). The main reason for this function is to handle the nil Slice, which represents a[:]
func SortIndex ¶
func SortIndex(in interface{}) (out []int)
SortIndex is similar to numpy's argsort. TODO: tidy this up
func ToMat64 ¶
ToMat64 converts a *Dense to a *mat.Dense. All the values are converted into float64s. This function will only convert matrices. Anything *Dense with dimensions larger than 2 will cause an error.
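A minimal sketch, assuming gonum's mat package (gonum.org/v1/gonum/mat) is imported:

T := New(WithShape(2, 2), WithBacking([]int{1, 2, 3, 4}))
m, _ := ToMat64(T)
fmt.Printf("%v\n", mat.Formatted(m)) // a 2x2 gonum matrix holding 1..4 as float64s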
func TransposeIndex ¶
TransposeIndex returns the new index given the old index
func UnsafePermute ¶
func UntransposeIndex ¶
UntransposeIndex returns the old index given the new index
func Use ¶
func Use(b BLAS)
Use defines which BLAS implementation gorgonia should use. The default is Gonum's Native. These are the other options:
Use(blastoise.Implementation())
Use(cubone.Implementation())
Use(cgo.Implementation)
Note the differences in the brackets. The blastoise and cubone ones are functions.
Types ¶
type AP ¶
type AP struct {
	Δ Triangle
	// contains filtered or unexported fields
}
An AP is an access pattern. It tells the various ndarrays how to access their data through the use of strides. Through the AP, there are several definitions of things; most notably there are two very specific "special cases":
Scalar has Dims() of 0.
	- (1)
Scalarlikes are higher order tensors, but each with a size of 1. The Dims() are not 0.
	- (1, 1)
	- (1, 1, 1)
	- (1, 1, 1, 1), etc
Vector has Dims() of 1, but its shape can take several forms:
	- (x, 1)
	- (1, x)
	- (x)
Matrix has Dims() of 2. This is the most basic form. The len(shape) has to be equal to 2 as well.
ndarray has Dims() of n.
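A small sketch illustrating these cases via the Shape predicates (hypothetical tensors, not one of the package's own examples):

scalarlike := New(WithShape(1, 1), Of(Float64))
vec := New(WithShape(3), Of(Float64))
rowvec := New(WithShape(1, 3), Of(Float64))
m := New(WithShape(2, 3), Of(Float64))

fmt.Println(scalarlike.Dims(), scalarlike.Shape().IsScalarEquiv()) // 2 true
fmt.Println(vec.Dims(), vec.Shape().IsVector())                    // 1 true
fmt.Println(rowvec.Shape().IsRowVec())                             // true
fmt.Println(m.Shape().IsMatrix())                                  // true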
func (*AP) Init ¶
Init initializes an already created AP with a shape and strides. It will panic if the AP is nil.
func (*AP) IsMatrix ¶
IsMatrix returns true if it's a matrix. This is mostly a convenience method. RowVec and ColVecs are also considered matrices
func (*AP) IsScalarEquiv ¶ added in v0.9.15
IsScalarEquiv returns true if the access pattern is equivalent to a scalar shape.
func (*AP) IsVector ¶
IsVector returns whether the access pattern falls into one of three possible definitions of vectors:
vanilla vector (not a row or a col)
column vector
row vector
func (*AP) IsVectorLike ¶ added in v0.9.12
IsVectorLike returns true if the shape is vector-like (i.e. the shape only has one dim that is a non-1).
func (*AP) SetShape ¶
SetShape is for very specific times when modifying the AP is necessary, such as reshaping and doing I/O related stuff
Caveats:
- SetShape will recalculate the strides.
- If the AP is locked, nothing will happen
type Adder ¶
type Adder interface {
	// Add performs a + b
	Add(a, b Tensor, opts ...FuncOpt) (Tensor, error)

	// AddScalar adds a scalar to the tensor. leftTensor indicates if the tensor is the left operand.
	// Whether or not the input tensor is clobbered is left to the implementation.
	AddScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}
Adder is any engine that can perform elementwise addition.
type Argmaxer ¶
Argmaxer is any engine that can find the indices of the maximum values along an axis. By convention the returned Tensor has Dtype of Int.
type Argminer ¶
Argminer is any engine that can find the indices of the minimum values along an axis. By convention the returned Tensor has Dtype of Int.
type BitMap ¶
type BitMap struct {
// contains filtered or unexported fields
}
BitMap is a very simple bitmap. It only supports Set, IsSet and Clear methods. It's mostly used for tracking which element has been set
func (*BitMap) Clear ¶
Clear clears the ith bit. It panics if i is greater or equal to the defined max
type ByIndiceser ¶ added in v0.9.15
type ByIndiceser interface {
	SelectByIndices(a, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
	SelectByIndicesB(input, outGrad, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
}
ByIndiceser allows for values in tensor `a` to be selected by the indices listed in the `indices` tensor.
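The package-level ByIndices function exposes this. A sketch, assuming the indices tensor is an Int-dtype vector and that selection follows take-like semantics along the given axis:

a := New(WithShape(4, 2), WithBacking(Range(Float64, 0, 8)))
indices := New(WithShape(2), WithBacking([]int{3, 1}))
b, _ := ByIndices(a, indices, 0) // rows 3 and 1 of a: [6 7] and [2 3]
fmt.Printf("%v\n", b)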
type CS ¶
type CS struct {
// contains filtered or unexported fields
}
CS is a compressed sparse data structure. It can be used to represent both CSC and CSR sparse matrices. Refer to the individual creation functions for more information.
func CSCFromCoord ¶
CSCFromCoord creates a new Compressed Sparse Column matrix given the coordinates. The data has to be a slice or it panics.
func CSRFromCoord ¶
CSRFromCoord creates a new Compressed Sparse Row matrix given the coordinates. The data has to be a slice or it panics.
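A small sketch mirroring the sparse examples above, but for the CSR variant:

xs := []int{0, 1, 2} // row coordinates
ys := []int{2, 0, 1} // column coordinates
vals := []float64{1, 2, 3}

S := CSRFromCoord(Shape{3, 3}, xs, ys, vals)
fmt.Println(S.NonZeroes()) // 3
fmt.Printf("%v\n", S.Dense())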
func NewCSC ¶
NewCSC creates a new Compressed Sparse Column matrix. The data has to be a slice, or it panics.
func NewCSR ¶
NewCSR creates a new Compressed Sparse Row matrix. The data has to be a slice or it panics.
func (*CS) Get ¶
func (a *CS) Get(i int) interface{}
Get returns the ith element of the underlying array of the *CS tensor.
func (*CS) IsManuallyManaged ¶
func (*CS) IsNativelyAccessible ¶
func (*CS) NonZeroes ¶
NonZeroes returns the nonzeroes. In academic literature this is often written as NNZ.
func (*CS) RequiresIterator ¶
func (*CS) ScalarValue ¶
func (t *CS) ScalarValue() interface{}
func (*CS) Set ¶
func (a *CS) Set(i int, x interface{})
Set sets the value of the underlying array at the index i.
type Cloner ¶
type Cloner interface {
Clone() interface{}
}
Cloner is any type that can clone itself
type ConsOpt ¶
type ConsOpt func(Tensor)
ConsOpt is a tensor construction option.
func AsDenseDiag ¶
func AsDenseDiag(backing interface{}) ConsOpt
func AsFortran ¶
AsFortran creates a *Dense with a col-major layout. If the optional backing argument is passed, the backing is assumed to be C-order (row major), and it will be transposed before being used.
func FromMemory ¶
FromMemory is a construction option for creating a *Dense (for now) from a memory location. This is a useful option for super large tensors that don't fit into memory - the user may need to `mmap` a file to back the tensor.
Bear in mind that at the current stage of the ConsOpt design, the order of the ConsOpt is important. FromMemory requires the *Dense's Dtype be set already. This would fail (and panic):
New(FromMemory(ptr, size), Of(Float64))
This would not:
New(Of(Float64), FromMemory(ptr, size))
This behaviour of requiring the ConsOpts to be in order might be changed in the future.
Memory must be manually managed by the caller. Tensors called with this construction option will not be returned to any pool - rather, all references to the pointers will be null'd. Use with caution.
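A minimal sketch, assuming the memory comes from an ordinary Go slice kept alive by the caller (the `unsafe` package is needed to obtain the pointer; Of(Float64) precedes FromMemory per the ordering caveat above):

backing := make([]float64, 6)
ptr := uintptr(unsafe.Pointer(&backing[0]))
size := uintptr(len(backing)) * unsafe.Sizeof(float64(0))

T := New(Of(Float64), FromMemory(ptr, size), WithShape(2, 3))
T.Memset(3.14)
fmt.Println(backing) // the tensor writes directly into the caller-managed memory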
func FromScalar ¶
FromScalar is a construction option for representing a scalar value as a Tensor
func WithBacking ¶
WithBacking is a construction option for a Tensor. Use it as such:
backing := []float64{1, 2, 3, 4}
t := New(WithBacking(backing))
It can be used with other construction options like WithShape
func WithEngine ¶
WithEngine is a construction option that would cause a Tensor to be linked with an execution engine.
func WithMask ¶
func WithMask(x interface{}) ConsOpt
WithMask is a construction option for a Tensor. Use it as such:
backing := []float64{1, 2, 3, 4}
mask := []bool{true, true, false, false}
t := New(WithBacking(backing), WithMask(mask))
It can be used with other construction options like WithShape. The supplied mask can be of any type. If it is non-boolean, the tensor's mask is set to true wherever a non-zero value is found.
type DataOrder ¶
type DataOrder byte
DataOrder is a flag that indicates the order of data. The default DataOrder (0) is what this package uses by default.
const (
	// ColMajor indicates that the data is stored in a col-major way.
	// A data can only be stored in either ColMajor(1) or RowMajor(0).
	// The way the DataOrder was designed causes the default to be RowMajor
	ColMajor DataOrder = 1 << iota
	// NonContiguous indicates that the data is not contiguous.
	// A data can either be Contiguous (0) or NonContiguous (2).
	// The way DataOrder was designed causes the default to be Contiguous.
	NonContiguous

	// Transposed indicates that the data has been transposed
	Transposed
)
func MakeDataOrder ¶
MakeDataOrder makes a data order. Typical examples:
MakeDataOrder(DataOrder(0))            // Row Major, contiguous
MakeDataOrder(NonContiguous)           // Row Major, non-contiguous
MakeDataOrder(ColMajor)                // Col Major, contiguous
MakeDataOrder(ColMajor, NonContiguous) // what it says on the tin
func (DataOrder) HasSameOrder ¶
HasSameOrder returns true if both data orders are the same (either both are ColMajor or both are RowMajor)
func (DataOrder) IsColMajor ¶
IsColMajor returns true if the data order describes a col-major data
func (DataOrder) IsContiguous ¶
IsContiguous returns true if the data order describes a contiguous data.
func (DataOrder) IsNotContiguous ¶
IsNotContiguous returns true if the data order describes a noncontiguous data.
func (DataOrder) IsRowMajor ¶
IsRowMajor returns true if the data order describes a row-major data
func (DataOrder) IsTransposed ¶
IsTransposed returns true if the data order indicates that the data has been transposed (but not moved)
type Dataer ¶
type Dataer interface {
Data() interface{}
}
Dataer is any type that returns the data in its original form (typically a Go slice of something)
type Dense ¶
type Dense struct {
	AP
	// contains filtered or unexported fields
}
Dense represents a dense tensor - this is the most common form of tensor. It can be used to represent vectors, matrices and so on.
func FromArrowArray ¶ added in v0.9.11
func FromArrowArray(a arrowArray.Interface) *Dense
FromArrowArray converts an "arrow/array".Interface into a Tensor of matching DataType.
func FromArrowTensor ¶ added in v0.9.11
func FromArrowTensor(a arrowTensor.Interface) *Dense
FromArrowTensor converts an "arrow/tensor".Interface into a Tensor of matching DataType.
func I ¶
I creates the identity matrix (usually a square matrix) with 1s along the diagonal and zeroes elsewhere, like so:
Matrix(4,4)
⎡1 0 0 0⎤
⎢0 1 0 0⎥
⎢0 0 1 0⎥
⎣0 0 0 1⎦
While technically an identity matrix is a square matrix, in an attempt to keep feature parity with Numpy, the I() function allows you to create non-square matrices, as well as to specify an index at which the diagonal starts.
For example:
T = I(Float64, 4, 4, 1)
Yields:
⎡0 1 0 0⎤
⎢0 0 1 0⎥
⎢0 0 0 1⎥
⎣0 0 0 0⎦
The index k can also be a negative number:
T = I(Float64, 4, 4, -1)
Yields:
⎡0 0 0 0⎤
⎢1 0 0 0⎥
⎢0 1 0 0⎥
⎣0 0 1 0⎦
func New ¶
New creates a new Dense Tensor. For sparse arrays use their relevant construction function
func (*Dense) Add ¶
Add performs t + other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
Example (Basic) ¶
By default, arithmetic operations are safe
var T1, T2, T3, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Add(T2)
fmt.Printf("Default operation is safe\n==========================\nT3 = T1 + T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
T3, _ = V.Add(T2)
fmt.Printf("Default operation is safe (sliced operations)\n=============================================\nT3 = T1[0:2, 0:2] + T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output:

Default operation is safe
==========================
T3 = T1 + T2
T3:
⎡10 12 14⎤
⎢16 18 20⎥
⎣22 24 26⎦
T1 is unchanged:
⎡0 1 2⎤
⎢3 4 5⎥
⎣6 7 8⎦
Default operation is safe (sliced operations)
=============================================
T3 = T1[0:2, 0:2] + T2
T3:
⎡10 12⎤
⎣15 17⎦
T1 is unchanged:
⎡0 1 2⎤
⎢3 4 5⎥
⎣6 7 8⎦
Example (Incr) ¶
Incrementing a tensor is also a function option provided by the package
var T1, T2, T3, Incr, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) Incr = New(WithBacking([]float64{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3)) T3, _ = T1.Add(T2, WithIncr(Incr)) fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 + T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3) // Operations on sliced tensor is also allowed. Note that your Incr tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) Incr = New(WithBacking([]float64{100, 100, 100, 100}), WithShape(2, 2)) T3, _ = V.Add(T2, WithIncr(Incr)) fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 + T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output: Incr tensor passed in ====================== Incr += T1 + T2 Incr == T3: true T3: ⎡110 112 114⎤ ⎢116 118 120⎥ ⎣122 124 126⎦ Incr tensor passed in (sliced tensor) ====================================== Incr += T1 + T2 Incr == T3: true T3: ⎡110 112⎤ ⎣115 117⎦
Example (Reuse) ¶
An optional reuse tensor can also be specified with the WithReuse function option
var T1, V, T2, Reuse, T3 *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) T3, _ = T1.Add(T2, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // You can also use it on operations on sliced tensors - note your reuse tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) Reuse = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.Add(T2, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output: Reuse tensor passed in ====================== T3 == Reuse: true T3: ⎡10 12 14⎤ ⎢16 18 20⎥ ⎣22 24 26⎦ Reuse tensor passed in (sliced tensor) ====================================== T3 == Reuse: true T3: ⎡10 12⎤ ⎣15 17⎦
Example (Reuse_operand) ¶
An optional reuse tensor can also be specified with the WithReuse function option. Passing in one of the operands as the reuse tensor does not cause a problem.
var T1, T2, T3 *Dense T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Add(T2, WithReuse(T1)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == T1: %t\nT3:\n%v\n", T3 == T1, T3) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Add(T2, WithReuse(T2)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == T2: %t\nT3:\n%v\n", T3 == T2, T3)
Output: Reuse tensor passed in ====================== T3 == T1: true T3: ⎡10 12 14⎤ ⎢16 18 20⎥ ⎣22 24 26⎦ Reuse tensor passed in ====================== T3 == T2: true T3: ⎡10 12 14⎤ ⎢16 18 20⎥ ⎣22 24 26⎦
Example (Unsafe) ¶
To perform unsafe operations, use the `UseUnsafe` function option
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Add(T2, UseUnsafe()) fmt.Printf("Unsafe Operation\n================\nT3 = T1 + T2\nT1 == T3: %t\nT1:\n%v", T1 == T3, T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) V.Add(T2, UseUnsafe()) // unsafe overwrites the data in T1 fmt.Printf("Unsafe Operation on sliced Tensors\n==================================\nV = T1[0:2, 0:2] + T2\nV:\n%v\n", V) fmt.Printf("Naturally, T1 is mutated too:\n%v", T1)
Output: Unsafe Operation ================ T3 = T1 + T2 T1 == T3: true T1: ⎡10 12 14⎤ ⎢16 18 20⎥ ⎣22 24 26⎦ Unsafe Operation on sliced Tensors ================================== V = T1[0:2, 0:2] + T2 V: ⎡10 12⎤ ⎣15 17⎦ Naturally, T1 is mutated too: ⎡10 12 2⎤ ⎢15 17 5⎥ ⎣ 6 7 8⎦
func (*Dense) AddScalar ¶
func (t *Dense) AddScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
AddScalar performs t + other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
Example (Basic) ¶
By default, arithmetic operations are safe
var T1, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) T3, _ = T1.AddScalar(float32(5), true) fmt.Printf("Default operation is safe (tensor is left operand)\n==========================\nT3 = T1 + 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T3, _ = T1.AddScalar(float32(5), false) fmt.Printf("Default operation is safe (tensor is right operand)\n==========================\nT3 = 5 + T1\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(1, 3)) V = sliced.(*Dense) T3, _ = V.AddScalar(float32(5), true) fmt.Printf("Default operation is safe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 1:3] + 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(1, 3)) V = sliced.(*Dense) T3, _ = V.AddScalar(float32(5), false) fmt.Printf("Default operation is safe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 + T1[:, 1:3]\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output: Default operation is safe (tensor is left operand) ========================== T3 = T1 + 5 T3: ⎡ 5 6 7⎤ ⎢ 8 9 10⎥ ⎣11 12 13⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (tensor is right operand) ========================== T3 = 5 + T1 T3: ⎡ 5 6 7⎤ ⎢ 8 9 10⎥ ⎣11 12 13⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations - tensor is left operand) ============================================= T3 = T1[:, 1:3] + 5 T3: ⎡ 6 7⎤ ⎢ 9 10⎥ ⎣12 13⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations - tensor is right operand) ============================================= T3 = 5 + T1[:, 1:3] T3: ⎡ 6 7⎤ ⎢ 9 10⎥ ⎣12 13⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Incr) ¶
Incrementing into a tensor is also supported, via the WithIncr function option provided by the package
var T1, T3, Incr, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) Incr = New(WithBacking([]float32{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3)) T3, _ = T1.AddScalar(float32(5), true, WithIncr(Incr)) fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 + T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3) // Operations on sliced tensor is also allowed. Note that your Incr tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) Incr = New(WithBacking([]float32{100, 100, 100, 100}), WithShape(2, 2)) T3, _ = V.AddScalar(float32(5), true, WithIncr(Incr)) fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 + T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output: Incr tensor passed in ====================== Incr += T1 + T2 Incr == T3: true T3: ⎡105 106 107⎤ ⎢108 109 110⎥ ⎣111 112 113⎦ Incr tensor passed in (sliced tensor) ====================================== Incr += T1 + T2 Incr == T3: true T3: ⎡105 106⎤ ⎣108 109⎦
Example (Reuse) ¶
Reuse tensors may be used with the WithReuse() function option.
var T1, V, Reuse, T3 *Dense var sliced Tensor // Tensor is left operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3)) T3, _ = T1.AddScalar(float32(5), true, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (Tensor is left operand)\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // Tensor is right operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3)) T3, _ = T1.AddScalar(float32(5), false, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (Tensor is right operand)\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // Tensor is left operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.AddScalar(float32(5), true, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // Tensor is left operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.AddScalar(float32(5), false, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output: Reuse tensor passed in (Tensor is left operand) ====================== T3 == Reuse: true T3: ⎡ 5 6 7⎤ ⎢ 8 9 10⎥ ⎣11 12 13⎦ Reuse tensor passed in (Tensor is right operand) ====================== T3 == Reuse: true T3: ⎡ 5 6 7⎤ ⎢ 8 9 10⎥ ⎣11 12 13⎦ Reuse tensor passed in (sliced tensor - Tensor is left operand) ====================================== T3 == Reuse: true T3: ⎡5 6⎤ ⎣8 9⎦ Reuse tensor passed in (sliced tensor - Tensor is left operand) ====================================== T3 == Reuse: true T3: ⎡5 6⎤ ⎣8 9⎦
Example (Unsafe) ¶
var T1, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) T3, _ = T1.AddScalar(float32(5), true, UseUnsafe()) fmt.Printf("Operation is unsafe (tensor is left operand)\n==========================\nT3 = T1 + 5\nT3:\n%v\nT3 == T1: %t\nT1 is changed:\n%v\n", T3, T3 == T1, T1) T3, _ = T1.AddScalar(float32(5), false, UseUnsafe()) fmt.Printf("Operation is unsafe (tensor is right operand)\n==========================\nT3 = 5 + T1\nT3:\n%v\nT3 == T1: %t\nT1 is changed:\n%v\n", T3, T3 == T1, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(0, 2)) V = sliced.(*Dense) T3, _ = V.AddScalar(float32(5), true, UseUnsafe()) fmt.Printf("Operation is unsafe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 0:2] + 5\nT3:\n%v\nsliced == T3: %t\nT1 is changed:\n%v\n", T3, sliced == T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(0, 2)) V = sliced.(*Dense) T3, _ = V.AddScalar(float32(5), false, UseUnsafe()) fmt.Printf("Operation is unsafe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 + T1[:, 0:2]\nT3:\n%v\nsliced == T3: %t\nT1 is changed:\n%v\n", T3, sliced == T3, T1)
Output: Operation is unsafe (tensor is left operand) ========================== T3 = T1 + 5 T3: ⎡ 5 6 7⎤ ⎢ 8 9 10⎥ ⎣11 12 13⎦ T3 == T1: true T1 is changed: ⎡ 5 6 7⎤ ⎢ 8 9 10⎥ ⎣11 12 13⎦ Operation is unsafe (tensor is right operand) ========================== T3 = 5 + T1 T3: ⎡10 11 12⎤ ⎢13 14 15⎥ ⎣16 17 18⎦ T3 == T1: true T1 is changed: ⎡10 11 12⎤ ⎢13 14 15⎥ ⎣16 17 18⎦ Operation is unsafe (sliced operations - tensor is left operand) ============================================= T3 = T1[:, 0:2] + 5 T3: ⎡ 5 6⎤ ⎢ 8 9⎥ ⎣11 12⎦ sliced == T3: true T1 is changed: ⎡ 5 6 2⎤ ⎢ 8 9 5⎥ ⎣11 12 8⎦ Operation is unsafe (sliced operations - tensor is right operand) ============================================= T3 = 5 + T1[:, 0:2] T3: ⎡ 5 6⎤ ⎢ 8 9⎥ ⎣11 12⎦ sliced == T3: true T1 is changed: ⎡ 5 6 2⎤ ⎢ 8 9 5⎥ ⎣11 12 8⎦
func (*Dense) Apply ¶
Apply applies a function to all the values in the tensor.
Example ¶
package main import ( "fmt" "gorgonia.org/tensor" ) func main() { a := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4})) cube := func(a float64) float64 { return a * a * a } b, err := a.Apply(cube) if err != nil { fmt.Printf("b is an error %v", err) } fmt.Printf("a and b are the same object - %t\n", a.Eq(b)) fmt.Printf("a is unmutated\n%v\n", a) c, err := a.Apply(cube, tensor.WithReuse(a)) if err != nil { fmt.Printf("c is an error %v\n", err) } fmt.Printf("a and c are the same object - %t\n", a.Eq(c)) fmt.Printf("a is now mutated\n%v\n", a) }
Output: a and b are the same object - false a is unmutated ⎡1 2⎤ ⎣3 4⎦ a and c are the same object - true a is now mutated ⎡ 1 8⎤ ⎣27 64⎦
func (*Dense) Clone ¶
func (t *Dense) Clone() interface{}
Clone clones a *Dense. It creates a copy of the data; a new underlying array is allocated
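Because a fresh underlying array is allocated, mutating the clone does not affect the original. A minimal sketch (Clone returns an interface{}, so a type assertion back to *Dense is needed, as in the Hstack example below):

T := New(WithShape(2, 2), WithBacking([]int{1, 2, 3, 4}))
T2 := T.Clone().(*Dense)
T2.SetAt(1000, 0, 0)   // mutate the clone only
fmt.Println(T.Data())  // [1 2 3 4] - the original is untouched
fmt.Println(T2.Data()) // [1000 2 3 4]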
func (*Dense) ClumpMasked ¶
ClumpMasked returns a list of slices corresponding to the masked clumps of a 1-D array. Added to match numpy function names
func (*Dense) ClumpUnmasked ¶
ClumpUnmasked returns a list of slices corresponding to the unmasked clumps of a 1-D array. Added to match numpy function names
func (*Dense) Concat ¶
Concat concatenates the other tensors along the given axis. It is like Numpy's concatenate() function.
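A minimal sketch, assuming the method takes the axis first, followed by the tensors to concatenate (Concat(axis, others...)); concatenating two (2, 2) matrices along axis 0 gives a (4, 2) result, and along axis 1 a (2, 4) result:

T1 := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
T2 := New(WithShape(2, 2), WithBacking([]float64{5, 6, 7, 8}))

T3, err := T1.Concat(0, T2) // concatenate along the rows
if err != nil {
	fmt.Println(err)
}
fmt.Println(T3.Shape()) // expected shape: (4, 2)

T4, err := T1.Concat(1, T2) // concatenate along the columns
if err != nil {
	fmt.Println(err)
}
fmt.Println(T4.Shape()) // expected shape: (2, 4)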
func (*Dense) CopyTo ¶
CopyTo copies the underlying data to the destination *Dense. The original data is untouched. Note: CopyTo doesn't care about the metadata of the destination *Dense. Take for example:
T = New(WithShape(6))
T2 = New(WithShape(2, 3))
err = T.CopyTo(T2) // err == nil
The only time that this will fail is if the underlying sizes are different
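A minimal sketch of both the success and the failure case (the destination's shape metadata is ignored; only the underlying sizes have to match):

T := New(WithShape(6), WithBacking([]float64{1, 2, 3, 4, 5, 6}))
T2 := New(Of(Float64), WithShape(2, 3)) // also 6 underlying elements; the (2, 3) shape is ignored
if err := T.CopyTo(T2); err == nil {
	fmt.Println(T2.Data()) // [1 2 3 4 5 6]
}

T3 := New(Of(Float64), WithShape(2, 2)) // only 4 underlying elements
if err := T.CopyTo(T3); err != nil {
	fmt.Println("CopyTo fails:", err) // the sizes differ
}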
func (*Dense) Data ¶
func (t *Dense) Data() interface{}
Data returns the underlying array. If the *Dense represents a scalar value, the scalar value is returned instead
Example ¶
Data shows how the shape of the *Dense actually affects the return value of .Data().
T := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4})) fmt.Printf("Basics:\n======\nAny kind of arrays: %v\n", T.Data()) fmt.Printf("\nScalar-like\n===========\n") T = New(WithShape(), FromScalar(3.14)) fmt.Printf("WithShape(), FromScalar: %v\n", T.Data()) T = New(WithShape(), WithBacking([]float64{3.14})) fmt.Printf("WithShape(), With a slice of 1 as backing: %v\n", T.Data()) T = New(WithShape(1), FromScalar(3.14)) fmt.Printf("WithShape(1), With an initial scalar: %v\n", T.Data()) T = New(WithShape(1, 1), WithBacking([]float64{3.14})) fmt.Printf("WithShape(1, 1), With an initial scalar: %v\n", T.Data()) T = New(WithShape(1, 1), FromScalar(3.14)) fmt.Printf("WithShape(1, 1), With an initial scalar: %v\n", T.Data()) T.Reshape() fmt.Printf("After reshaping to (): %v\n", T.Data())
Output: Basics: ====== Any kind of arrays: [1 2 3 4] Scalar-like =========== WithShape(), FromScalar: 3.14 WithShape(), With a slice of 1 as backing: 3.14 WithShape(1), With an initial scalar: [3.14] WithShape(1, 1), With an initial scalar: [3.14] WithShape(1, 1), With an initial scalar: [3.14] After reshaping to (): 3.14
func (*Dense) DataSize ¶
DataSize returns the size of the underlying array. Typically t.DataSize() == t.Shape().TotalSize()
func (*Dense) Div ¶
Div performs t ÷ other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
Example (Basic) ¶
By default, arithmetic operations are safe
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Div(T2) fmt.Printf("Default operation is safe\n==========================\nT3 = T1 ÷ T2\nT3:\n%1.1v\nT1 is unchanged:\n%1.1v\n", T3, T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) T3, _ = V.Div(T2) fmt.Printf("Default operation is safe (sliced operations)\n=============================================\nT3 = T1[0:2, 0:2] ÷ T2\nT3:\n%1.1v\nT1 is unchanged:\n%1.1v\n", T3, T1)
Output: Default operation is safe ========================== T3 = T1 ÷ T2 T3: ⎡ 0 0.09 0.2⎤ ⎢ 0.2 0.3 0.3⎥ ⎣ 0.4 0.4 0.4⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations) ============================================= T3 = T1[0:2, 0:2] ÷ T2 T3: ⎡ 0 0.09⎤ ⎣ 0.2 0.3⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Incr) ¶
Incrementing into a tensor is also supported, via the WithIncr function option provided by the package
var T1, T2, T3, Incr, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) Incr = New(WithBacking([]float64{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3)) T3, _ = T1.Div(T2, WithIncr(Incr)) fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 ÷ T2\nIncr == T3: %t\nT3:\n%1.5v\n", Incr == T3, T3) // Operations on sliced tensor is also allowed. Note that your Incr tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) Incr = New(WithBacking([]float64{100, 100, 100, 100}), WithShape(2, 2)) T3, _ = V.Div(T2, WithIncr(Incr)) fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 ÷ T2\nIncr == T3: %t\nT3:\n%1.5v\n", Incr == T3, T3)
Output: Incr tensor passed in ====================== Incr += T1 ÷ T2 Incr == T3: true T3: ⎡ 100 100.09 100.17⎤ ⎢100.23 100.29 100.33⎥ ⎣100.38 100.41 100.44⎦ Incr tensor passed in (sliced tensor) ====================================== Incr += T1 ÷ T2 Incr == T3: true T3: ⎡ 100 100.09⎤ ⎣100.25 100.31⎦
Example (Reuse) ¶
An optional reuse tensor can also be specified with the WithReuse function option
var T1, V, T2, Reuse, T3 *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) T3, _ = T1.Div(T2, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == Reuse: %t\nT3:\n%1.1v\n", T3 == Reuse, T3) // You can also use it on operations on sliced tensors - note your reuse tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) Reuse = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.Div(T2, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor)\n======================================\nT3 == Reuse: %t\nT3:\n%1.1v", T3 == Reuse, T3)
Output: Reuse tensor passed in ====================== T3 == Reuse: true T3: ⎡ 0 0.09 0.2⎤ ⎢ 0.2 0.3 0.3⎥ ⎣ 0.4 0.4 0.4⎦ Reuse tensor passed in (sliced tensor) ====================================== T3 == Reuse: true T3: ⎡ 0 0.09⎤ ⎣ 0.2 0.3⎦
Example (Reuse_operand) ¶
An optional reuse tensor can also be specified with the WithReuse function option. Passing in one of the operands as the reuse tensor does not cause a problem.
var T1, T2, T3 *Dense T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Div(T2, WithReuse(T1)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == T1: %t\nT3:\n%1.1v\n", T3 == T1, T3) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Div(T2, WithReuse(T2)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == T2: %t\nT3:\n%1.1v\n", T3 == T2, T3)
Output: Reuse tensor passed in ====================== T3 == T1: true T3: ⎡ 0 0.09 0.2⎤ ⎢ 0.2 0.3 0.3⎥ ⎣ 0.4 0.4 0.4⎦ Reuse tensor passed in ====================== T3 == T2: true T3: ⎡ 0 0.09 0.2⎤ ⎢ 0.2 0.3 0.3⎥ ⎣ 0.4 0.4 0.4⎦
Example (Unsafe) ¶
To perform unsafe operations, use the `UseUnsafe` function option
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Div(T2, UseUnsafe()) fmt.Printf("Unsafe Operation\n================\nT3 = T1 ÷ T2\nT1 == T3: %t\nT1:\n%1.1v", T1 == T3, T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) V.Div(T2, UseUnsafe()) // unsafe overwrites the data in T1 fmt.Printf("Unsafe Operation on sliced Tensors\n==================================\nV = T1[0:2, 0:2] ÷ T2\nV:\n%1.1v\n", V) fmt.Printf("Naturally, T1 is mutated too:\n%1.1v", T1)
Output: Unsafe Operation ================ T3 = T1 ÷ T2 T1 == T3: true T1: ⎡ 0 0.09 0.2⎤ ⎢ 0.2 0.3 0.3⎥ ⎣ 0.4 0.4 0.4⎦ Unsafe Operation on sliced Tensors ================================== V = T1[0:2, 0:2] ÷ T2 V: ⎡ 0 0.09⎤ ⎣ 0.2 0.3⎦ Naturally, T1 is mutated too: ⎡ 0 0.09 2⎤ ⎢ 0.2 0.3 5⎥ ⎣ 6 7 8⎦
func (*Dense) DivScalar ¶
func (t *Dense) DivScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
DivScalar performs t ÷ other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
Example (Basic) ¶
By default, arithmetic operations are safe
var T1, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) T3, _ = T1.DivScalar(float32(5), true) fmt.Printf("Default operation is safe (tensor is left operand)\n==========================\nT3 = T1 / 5\nT3:\n%1.1v\nT1 is unchanged:\n%1.1v\n", T3, T1) T3, _ = T1.DivScalar(float32(5), false) fmt.Printf("Default operation is safe (tensor is right operand)\n==========================\nT3 = 5 / T1\nT3:\n%1.1v\nT1 is unchanged:\n%1.1v\n", T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(1, 3)) V = sliced.(*Dense) T3, _ = V.DivScalar(float32(5), true) fmt.Printf("Default operation is safe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 1:3] + 5\nT3:\n%1.1v\nT1 is unchanged:\n%1.1v\n", T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(1, 3)) V = sliced.(*Dense) T3, _ = V.DivScalar(float32(5), false) fmt.Printf("Default operation is safe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 / T1[:, 1:3]\nT3:\n%1.1v\nT1 is unchanged:\n%1.1v\n", T3, T1)
Output: Default operation is safe (tensor is left operand) ========================== T3 = T1 / 5 T3: ⎡ 0 0.2 0.4⎤ ⎢0.6 0.8 1⎥ ⎣ 1 1 2⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (tensor is right operand) ========================== T3 = 5 / T1 T3: ⎡+Inf 5 2⎤ ⎢ 2 1 1⎥ ⎣ 0.8 0.7 0.6⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations - tensor is left operand) ============================================= T3 = T1[:, 1:3] + 5 T3: ⎡0.2 0.4⎤ ⎢0.8 1⎥ ⎣ 1 2⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations - tensor is right operand) ============================================= T3 = 5 / T1[:, 1:3] T3: ⎡ 5 2⎤ ⎢ 1 1⎥ ⎣0.7 0.6⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Incr) ¶
Incrementing into a tensor is also supported, via the WithIncr function option provided by the package
var T1, T3, Incr, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) Incr = New(WithBacking([]float32{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3)) T3, _ = T1.DivScalar(float32(5), true, WithIncr(Incr)) fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 / T2\nIncr == T3: %t\nT3:\n%3.1v\n", Incr == T3, T3) // Operations on sliced tensor is also allowed. Note that your Incr tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) Incr = New(WithBacking([]float32{100, 100, 100, 100}), WithShape(2, 2)) T3, _ = V.DivScalar(float32(5), true, WithIncr(Incr)) fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 / T2\nIncr == T3: %t\nT3:\n%3.1v\n", Incr == T3, T3)
Output: Incr tensor passed in ====================== Incr += T1 / T2 Incr == T3: true T3: ⎡1e+02 1e+02 1e+02⎤ ⎢1e+02 1e+02 1e+02⎥ ⎣1e+02 1e+02 1e+02⎦ Incr tensor passed in (sliced tensor) ====================================== Incr += T1 / T2 Incr == T3: true T3: ⎡1e+02 1e+02⎤ ⎣1e+02 1e+02⎦
Example (Reuse) ¶
Reuse tensors may be used with the WithReuse() function option.
var T1, V, Reuse, T3 *Dense var sliced Tensor // Tensor is left operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3)) T3, _ = T1.DivScalar(float32(5), true, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (Tensor is left operand)\n======================\nT3 == Reuse: %t\nT3:\n%1.1v\n", T3 == Reuse, T3) // Tensor is right operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3)) T3, _ = T1.DivScalar(float32(5), false, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (Tensor is right operand)\n======================\nT3 == Reuse: %t\nT3:\n%1.1v\n", T3 == Reuse, T3) // Tensor is left operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.DivScalar(float32(5), true, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%1.1v\n", T3 == Reuse, T3) // Tensor is left operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.DivScalar(float32(5), false, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%1.1v", T3 == Reuse, T3)
Output: Reuse tensor passed in (Tensor is left operand) ====================== T3 == Reuse: true T3: ⎡ 0 0.2 0.4⎤ ⎢0.6 0.8 1⎥ ⎣ 1 1 2⎦ Reuse tensor passed in (Tensor is right operand) ====================== T3 == Reuse: true T3: ⎡+Inf 5 2⎤ ⎢ 2 1 1⎥ ⎣ 0.8 0.7 0.6⎦ Reuse tensor passed in (sliced tensor - Tensor is left operand) ====================================== T3 == Reuse: true T3: ⎡ 0 0.2⎤ ⎣0.6 0.8⎦ Reuse tensor passed in (sliced tensor - Tensor is left operand) ====================================== T3 == Reuse: true T3: ⎡+Inf 5⎤ ⎣ 2 1⎦
Example (Unsafe) ¶
var T1, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) T3, _ = T1.DivScalar(float32(5), true, UseUnsafe()) fmt.Printf("Operation is unsafe (tensor is left operand)\n==========================\nT3 = T1 / 5\nT3:\n%1.1v\nT3 == T1: %t\nT1 is changed:\n%1.1v\n", T3, T3 == T1, T1) T3, _ = T1.DivScalar(float32(5), false, UseUnsafe()) fmt.Printf("Operation is unsafe (tensor is right operand)\n==========================\nT3 = 5 / T1\nT3:\n%1.1v\nT3 == T1: %t\nT1 is changed:\n%1.1v\n", T3, T3 == T1, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(0, 2)) V = sliced.(*Dense) T3, _ = V.DivScalar(float32(5), true, UseUnsafe()) fmt.Printf("Operation is unsafe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 0:2] + 5\nT3:\n%1.1v\nsliced == T3: %t\nT1 is changed:\n%1.1v\n", T3, sliced == T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(0, 2)) V = sliced.(*Dense) T3, _ = V.DivScalar(float32(5), false, UseUnsafe()) fmt.Printf("Operation is unsafe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 / T1[:, 0:2]\nT3:\n%1.1v\nsliced == T3: %t\nT1 is changed:\n%1.1v\n", T3, sliced == T3, T1)
Output: Operation is unsafe (tensor is left operand) ========================== T3 = T1 / 5 T3: ⎡ 0 0.2 0.4⎤ ⎢0.6 0.8 1⎥ ⎣ 1 1 2⎦ T3 == T1: true T1 is changed: ⎡ 0 0.2 0.4⎤ ⎢0.6 0.8 1⎥ ⎣ 1 1 2⎦ Operation is unsafe (tensor is right operand) ========================== T3 = 5 / T1 T3: ⎡ +Inf 2e+01 1e+01⎤ ⎢ 8 6 5⎥ ⎣ 4 4 3⎦ T3 == T1: true T1 is changed: ⎡ +Inf 2e+01 1e+01⎤ ⎢ 8 6 5⎥ ⎣ 4 4 3⎦ Operation is unsafe (sliced operations - tensor is left operand) ============================================= T3 = T1[:, 0:2] + 5 T3: ⎡ 0 0.2⎤ ⎢0.6 0.8⎥ ⎣ 1 1⎦ sliced == T3: true T1 is changed: ⎡ 0 0.2 2⎤ ⎢0.6 0.8 5⎥ ⎣ 1 1 8⎦ Operation is unsafe (sliced operations - tensor is right operand) ============================================= T3 = 5 / T1[:, 0:2] T3: ⎡+Inf 5⎤ ⎢ 2 1⎥ ⎣ 0.8 0.7⎦ sliced == T3: true T1 is changed: ⎡+Inf 5 2⎤ ⎢ 2 1 5⎥ ⎣ 0.8 0.7 8⎦
func (*Dense) ElEq ¶
ElEq performs t == other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
Example (Basic) ¶
Comparison functions return a Tensor of bool by default. To return the same type, simply pass in the AsSameType function option
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3, _ = T1.ElEq(T2) fmt.Println("Basic operations are safe\n=========================\nT3 = T1 == T2") fmt.Printf("T3:\n%v\n", T3) // To return the same type, use the AsSameType function option T3, _ = T1.ElEq(T2, AsSameType()) fmt.Println("Returning same type\n===================") fmt.Printf("T3 (Returns Same Type):\n%v\n", T3) // Sliced tensors are safe too T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) T3, _ = V.ElEq(T2) fmt.Printf("Safe slicing\n============\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1) // Similarly for tensors that return the same type T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) T3, _ = V.ElEq(T2, AsSameType()) // AsSameType returns a tensor of the same type fmt.Printf("Safe slicing (Same type)\n========================\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)
Output: Basic operations are safe ========================= T3 = T1 == T2 T3: ⎡true true true⎤ ⎢true true true⎥ ⎣true true true⎦ Returning same type =================== T3 (Returns Same Type): ⎡1 1 1⎤ ⎢1 1 1⎥ ⎣1 1 1⎦ Safe slicing ============ T3: ⎡false false⎤ ⎣ true true⎦ T1 remains unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Safe slicing (Same type) ======================== T3: ⎡0 0⎤ ⎣1 1⎦ T1 remains unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Reuse) ¶
The WithReuse function option can be used to pass in reuse tensors. But be sure to also use the AsSameType() function option or else funny results will happen
var T1, T2, T3, V *Dense var sliced Tensor // The reuse tensor is a Tensor of bools... T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3 = New(WithBacking([]bool{ true, false, true, false, true, false, true, false, true}), WithShape(3, 3)) T1.ElEq(T2, WithReuse(T3)) // note that AsSameType is not used here fmt.Printf("Default behaviour: Reuse tensor is expected to be of Bools\n==========================================================\nT3:\n%v\n", T3) // If you want to use a Reuse tensor of the same type, then besure to also pass in the AsSameType() flag T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3 = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) // The reuse tensor is a Tensor of Float64... T1.ElEq(T2, WithReuse(T3), AsSameType()) // AsSameType is used to return float64s fmt.Printf("Reuse With Same Type\n=====================\nT3:\n%v\n", T3) // Slicing is similar: T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T3 = New(WithBacking([]bool{true, true, true, true}), WithShape(2, 2)) V.ElEq(T2, WithReuse(T3)) fmt.Printf("Reuse on sliced tensors\n======================\nT3\n%v\n", T3) // Again, bear in mind same types T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T3 = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) V.ElEq(T2, WithReuse(T3), AsSameType()) fmt.Printf("Reuse on sliced tensors (same type)\n=================================\nT3\n%v\n", T3)
Output: Default behaviour: Reuse tensor is expected to be of Bools ========================================================== T3: ⎡true true true⎤ ⎢true true true⎥ ⎣true true true⎦ Reuse With Same Type ===================== T3: ⎡1 1 1⎤ ⎢1 1 1⎥ ⎣1 1 1⎦ Reuse on sliced tensors ====================== T3 ⎡ true true⎤ ⎣false false⎦ Reuse on sliced tensors (same type) ================================= T3 ⎡1 1⎤ ⎣0 0⎦
Example (Unsafe) ¶
If the UseUnsafe function option is passed into the call, the assumption is made that it will be returning the same type
var T1, T2, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T1.ElEq(T2, UseUnsafe()) fmt.Printf("Unsafe operation\n================\nT1:\n%v\n", T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) V.ElEq(T2, UseUnsafe()) fmt.Printf("Unsafe operation, with a sliced Tensor\n======================================\nT1:\n%v", T1)
Output: Unsafe operation ================ T1: ⎡1 1 1⎤ ⎢1 1 1⎥ ⎣1 1 1⎦ Unsafe operation, with a sliced Tensor ====================================== T1: ⎡0 0 2⎤ ⎢1 1 5⎥ ⎣6 7 8⎦
func (*Dense) ElEqScalar ¶
func (t *Dense) ElEqScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
ElEqScalar performs t == other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
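ElEqScalar has no worked example in this documentation, so here is a minimal sketch in the same in-package style as the other examples; by default a Tensor of bools is returned, and AsSameType() returns the operand's Dtype instead:

T1 := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 2}))

T2, _ := T1.ElEqScalar(float64(2), true) // true: the tensor is the left operand
fmt.Println(T2.Data())                   // [false true false true]

T3, _ := T1.ElEqScalar(float64(2), true, AsSameType()) // returns float64s instead of bools
fmt.Println(T3.Data())                                 // [0 1 0 1]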
func (*Dense) ElNe ¶
ElNe performs t ≠ other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
Example (Basic) ¶
Comparison functions return a Tensor of bool by default. To return the same type, simply pass in the AsSameType function option
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3, _ = T1.ElNe(T2) fmt.Println("Basic operations are safe\n=========================\nT3 = T1 != T2") fmt.Printf("T3:\n%v\n", T3) // To return the same type, use the AsSameType function option T3, _ = T1.ElNe(T2, AsSameType()) fmt.Println("Returning same type\n===================") fmt.Printf("T3 (Returns Same Type):\n%v\n", T3) // Sliced tensors are safe too T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) T3, _ = V.ElNe(T2) fmt.Printf("Safe slicing\n============\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1) // Similarly for tensors that return the same type T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) T3, _ = V.ElNe(T2, AsSameType()) // AsSameType returns a tensor of the same type fmt.Printf("Safe slicing (Same type)\n========================\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)
Output: Basic operations are safe ========================= T3 = T1 != T2 T3: ⎡false false false⎤ ⎢false false false⎥ ⎣false false false⎦ Returning same type =================== T3 (Returns Same Type): ⎡0 0 0⎤ ⎢0 0 0⎥ ⎣0 0 0⎦ Safe slicing ============ T3: ⎡ true true⎤ ⎣false false⎦ T1 remains unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Safe slicing (Same type) ======================== T3: ⎡1 1⎤ ⎣0 0⎦ T1 remains unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Reuse) ¶
The WithReuse function option can be used to pass in reuse tensors. But be sure to also use the AsSameType() function option or else funny results will happen
var T1, T2, T3, V *Dense var sliced Tensor // The reuse tensor is a Tensor of bools... T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3 = New(WithBacking([]bool{ true, false, true, false, true, false, true, false, true}), WithShape(3, 3)) T1.ElNe(T2, WithReuse(T3)) // note that AsSameType is not used here fmt.Printf("Default behaviour: Reuse tensor is expected to be of Bools\n==========================================================\nT3:\n%v\n", T3) // If you want to use a Reuse tensor of the same type, then besure to also pass in the AsSameType() flag T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3 = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) // The reuse tensor is a Tensor of Float64... T1.ElNe(T2, WithReuse(T3), AsSameType()) // AsSameType is used to return float64s fmt.Printf("Reuse With Same Type\n=====================\nT3:\n%v\n", T3) // Slicing is similar: T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T3 = New(WithBacking([]bool{true, true, true, true}), WithShape(2, 2)) V.ElNe(T2, WithReuse(T3)) fmt.Printf("Reuse on sliced tensors\n======================\nT3\n%v\n", T3) // Again, bear in mind same types T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T3 = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) V.ElNe(T2, WithReuse(T3), AsSameType()) fmt.Printf("Reuse on sliced tensors (same type)\n=================================\nT3\n%v\n", T3)
Output: Default behaviour: Reuse tensor is expected to be of Bools ========================================================== T3: ⎡false false false⎤ ⎢false false false⎥ ⎣false false false⎦ Reuse With Same Type ===================== T3: ⎡0 0 0⎤ ⎢0 0 0⎥ ⎣0 0 0⎦ Reuse on sliced tensors ====================== T3 ⎡false false⎤ ⎣ true true⎦ Reuse on sliced tensors (same type) ================================= T3 ⎡0 0⎤ ⎣1 1⎦
Example (Unsafe) ¶
If the UseUnsafe function option is passed into the call, the assumption is made that it will be returning the same type
var T1, T2, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T1.ElNe(T2, UseUnsafe()) fmt.Printf("Unsafe operation\n================\nT1:\n%v\n", T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) V.ElNe(T2, UseUnsafe()) fmt.Printf("Unsafe operation, with a sliced Tensor\n======================================\nT1:\n%v", T1)
Output: Unsafe operation ================ T1: ⎡0 0 0⎤ ⎢0 0 0⎥ ⎣0 0 0⎦ Unsafe operation, with a sliced Tensor ====================================== T1: ⎡1 1 2⎤ ⎢0 0 5⎥ ⎣6 7 8⎦
func (*Dense) ElNeScalar ¶
func (t *Dense) ElNeScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
ElNeScalar performs t ≠ other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
func (*Dense) Eq ¶
Eq checks that any two things are equal. If the shapes are the same but the strides are not, they will still be considered equal
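A minimal sketch: two *Dense values with the same shape and the same data compare equal, while a tensor holding the same data under a different shape does not:

T1 := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
T2 := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
T3 := New(WithShape(4), WithBacking([]float64{1, 2, 3, 4}))

fmt.Println(T1.Eq(T2)) // true: same shape, same data
fmt.Println(T1.Eq(T3)) // false: same data, but the shapes differ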
func (*Dense) FBEncode ¶
FBEncode encodes to a byte slice using flatbuffers.
Only natively accessible data can be encoded
func (*Dense) FillValue ¶
func (t *Dense) FillValue() interface{}
FillValue returns the value used to fill the invalid entries of a masked array
func (*Dense) Filled ¶
Filled returns a tensor with masked data replaced by the default fill value, or by an optionally passed value
func (*Dense) FilledInplace ¶
FilledInplace replaces masked data with the default fill value, or with an optionally passed value
func (*Dense) FlatMaskedContiguous ¶
FlatMaskedContiguous is used to find contiguous masked data in a masked array. Applies to a flattened version of the array. Returns: a sorted sequence of slices (start index, end index).
func (*Dense) FlatMaskedEdges ¶
FlatMaskedEdges is used to find the indices of the first and last masked values. Applies to a flattened version of the array. Returns: a pair of ints. -1 if all values are unmasked.
func (*Dense) FlatNotMaskedContiguous ¶
FlatNotMaskedContiguous is used to find contiguous unmasked data in a masked array. Applies to a flattened version of the array. Returns: a sorted sequence of slices (start index, end index).
func (*Dense) FlatNotMaskedEdges ¶
FlatNotMaskedEdges is used to find the indices of the first and last unmasked values. Applies to a flattened version of the array. Returns: a pair of ints. -1 if all values are masked.
func (*Dense) Format ¶
Format implements fmt.Formatter. Formatting can be controlled with verbs and flags. All default Go verbs are supported and work as expected. By default, only 8 columns and rows are printed (the first 4 and the last 4 columns and rows, while the middle columns and rows are elided). Special flags are:
'-' for printing a flat array of values
'+' for printing extra metadata before printing the tensor (it prints shape, stride and type, which are useful for debugging)
'#' for printing the full tensor - there are no elisions. Overrides the 's' verb
Special care also needs to be taken with the verb 's' - it prints a super-compressed version of the tensor, printing only 4 cols and 4 rows.
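A minimal sketch of the verbs and flags described above (outputs are elided here; they follow the box-drawing rendering used throughout these examples):

T := New(WithShape(2, 3), WithBacking(Range(Float64, 0, 6)))
fmt.Printf("%v\n", T)  // default rendering, eliding anything past 8 rows/columns
fmt.Printf("%-v\n", T) // flat array of values
fmt.Printf("%+v\n", T) // shape, stride and type metadata, then the values
fmt.Printf("%#v\n", T) // the full tensor, no elisions
fmt.Printf("%s\n", T)  // super-compressed: at most 4 cols and 4 rows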
func (*Dense) Get ¶
func (a *Dense) Get(i int) interface{}
Get returns the ith element of the underlying array of the *Dense tensor.
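A minimal sketch; Get indexes into the flat underlying array (coordinate-based access goes through At instead), and the caller type-asserts the returned interface{}:

T := New(WithShape(2, 3), WithBacking([]float64{0, 1, 2, 3, 4, 5}))
fmt.Println(T.Get(4)) // 4: the fifth element of the flat backing array

x := T.Get(4).(float64) // type-assert to recover the concrete type
fmt.Println(x + 1)      // 5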
func (*Dense) Gt ¶
Gt performs t > other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
Example (Basic) ¶
Comparison functions return a Tensor of bool by default. To return the same type, simply pass in the AsSameType function option
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3, _ = T1.Gt(T2) fmt.Println("Basic operations are safe\n=========================\nT3 = T1 > T2") fmt.Printf("T3:\n%v\n", T3) // To return the same type, use the AsSameType function option T3, _ = T1.Gt(T2, AsSameType()) fmt.Println("Returning same type\n===================") fmt.Printf("T3 (Returns Same Type):\n%v\n", T3) // Sliced tensors are safe too T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) T3, _ = V.Gt(T2) fmt.Printf("Safe slicing\n============\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1) // Similarly for tensors that return the same type T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) T3, _ = V.Gt(T2, AsSameType()) // AsSameType returns a tensor of the same type fmt.Printf("Safe slicing (Same type)\n========================\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)
Output: Basic operations are safe ========================= T3 = T1 > T2 T3: ⎡false false false⎤ ⎢false false false⎥ ⎣false false false⎦ Returning same type =================== T3 (Returns Same Type): ⎡0 0 0⎤ ⎢0 0 0⎥ ⎣0 0 0⎦ Safe slicing ============ T3: ⎡false false⎤ ⎣false false⎦ T1 remains unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Safe slicing (Same type) ======================== T3: ⎡0 0⎤ ⎣0 0⎦ T1 remains unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Reuse) ¶
The WithReuse function option can be used to pass in reuse tensors. But be sure to also use the AsSameType() function option or else funny results will happen
var T1, T2, T3, V *Dense var sliced Tensor // The reuse tensor is a Tensor of bools... T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3 = New(WithBacking([]bool{ true, false, true, false, true, false, true, false, true}), WithShape(3, 3)) T1.Gt(T2, WithReuse(T3)) // note that AsSameType is not used here fmt.Printf("Default behaviour: Reuse tensor is expected to be of Bools\n==========================================================\nT3:\n%v\n", T3) // If you want to use a Reuse tensor of the same type, then besure to also pass in the AsSameType() flag T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3 = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) // The reuse tensor is a Tensor of Float64... T1.Gt(T2, WithReuse(T3), AsSameType()) // AsSameType is used to return float64s fmt.Printf("Reuse With Same Type\n=====================\nT3:\n%v\n", T3) // Slicing is similar: T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T3 = New(WithBacking([]bool{true, true, true, true}), WithShape(2, 2)) V.Gt(T2, WithReuse(T3)) fmt.Printf("Reuse on sliced tensors\n======================\nT3\n%v\n", T3) // Again, bear in mind same types T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T3 = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) V.Gt(T2, WithReuse(T3), AsSameType()) fmt.Printf("Reuse on sliced tensors (same type)\n=================================\nT3\n%v\n", T3)
Output: Default behaviour: Reuse tensor is expected to be of Bools ========================================================== T3: ⎡false false false⎤ ⎢false false false⎥ ⎣false false false⎦ Reuse With Same Type ===================== T3: ⎡0 0 0⎤ ⎢0 0 0⎥ ⎣0 0 0⎦ Reuse on sliced tensors ====================== T3 ⎡false false⎤ ⎣ true true⎦ Reuse on sliced tensors (same type) ================================= T3 ⎡0 0⎤ ⎣1 1⎦
Example (Unsafe) ¶
If the UseUnsafe function option is passed into the call, the assumption is made that it will be returning the same type
var T1, T2, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T1.Gt(T2, UseUnsafe()) fmt.Printf("Unsafe operation\n================\nT1:\n%v\n", T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) V.Gt(T2, UseUnsafe()) fmt.Printf("Unsafe operation, with a sliced Tensor\n======================================\nT1:\n%v", T1)
Output: Unsafe operation ================ T1: ⎡0 0 0⎤ ⎢0 0 0⎥ ⎣0 0 0⎦ Unsafe operation, with a sliced Tensor ====================================== T1: ⎡0 0 2⎤ ⎢0 0 5⎥ ⎣6 7 8⎦
func (*Dense) GtScalar ¶
func (t *Dense) GtScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
GtScalar performs t > other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
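GtScalar also has no worked example here; a minimal sketch showing how leftTensor decides whether the comparison reads t > other or other > t:

T1 := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))

T2, _ := T1.GtScalar(float64(2), true) // T1 > 2
fmt.Println(T2.Data())                 // [false false true true]

T3, _ := T1.GtScalar(float64(2), false) // 2 > T1
fmt.Println(T3.Data())                  // [true false false false]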
func (*Dense) Gte ¶
Gte performs t ≥ other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
Example (Basic) ¶
Comparison functions return a Tensor of bool by default. To return the same type, simply pass in the AsSameType function option
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3, _ = T1.Gte(T2) fmt.Println("Basic operations are safe\n=========================\nT3 = T1 >= T2") fmt.Printf("T3:\n%v\n", T3) // To return the same type, use the AsSameType function option T3, _ = T1.Gte(T2, AsSameType()) fmt.Println("Returning same type\n===================") fmt.Printf("T3 (Returns Same Type):\n%v\n", T3) // Sliced tensors are safe too T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) T3, _ = V.Gte(T2) fmt.Printf("Safe slicing\n============\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1) // Similarly for tensors that return the same type T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) T3, _ = V.Gte(T2, AsSameType()) // AsSameType returns a tensor of the same type fmt.Printf("Safe slicing (Same type)\n========================\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)
Output: Basic operations are safe ========================= T3 = T1 >= T2 T3: ⎡true true true⎤ ⎢true true true⎥ ⎣true true true⎦ Returning same type =================== T3 (Returns Same Type): ⎡1 1 1⎤ ⎢1 1 1⎥ ⎣1 1 1⎦ Safe slicing ============ T3: ⎡false false⎤ ⎣ true true⎦ T1 remains unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Safe slicing (Same type) ======================== T3: ⎡0 0⎤ ⎣1 1⎦ T1 remains unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Reuse) ¶
The WithReuse function option can be used to pass in reuse tensors. But be sure to also use the AsSameType() function option or else funny results will happen
var T1, T2, T3, V *Dense var sliced Tensor // The reuse tensor is a Tensor of bools... T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3 = New(WithBacking([]bool{ true, false, true, false, true, false, true, false, true}), WithShape(3, 3)) T1.Gte(T2, WithReuse(T3)) // note that AsSameType is not used here fmt.Printf("Default behaviour: Reuse tensor is expected to be of Bools\n==========================================================\nT3:\n%v\n", T3) // If you want to use a Reuse tensor of the same type, then besure to also pass in the AsSameType() flag T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3 = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) // The reuse tensor is a Tensor of Float64... T1.Gte(T2, WithReuse(T3), AsSameType()) // AsSameType is used to return float64s fmt.Printf("Reuse With Same Type\n=====================\nT3:\n%v\n", T3) // Slicing is similar: T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T3 = New(WithBacking([]bool{true, true, true, true}), WithShape(2, 2)) V.Gte(T2, WithReuse(T3)) fmt.Printf("Reuse on sliced tensors\n======================\nT3\n%v\n", T3) // Again, bear in mind same types T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T3 = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) V.Gte(T2, WithReuse(T3), AsSameType()) fmt.Printf("Reuse on sliced tensors (same type)\n=================================\nT3\n%v\n", T3)
Output: Default behaviour: Reuse tensor is expected to be of Bools ========================================================== T3: ⎡true true true⎤ ⎢true true true⎥ ⎣true true true⎦ Reuse With Same Type ===================== T3: ⎡1 1 1⎤ ⎢1 1 1⎥ ⎣1 1 1⎦ Reuse on sliced tensors ====================== T3 ⎡true true⎤ ⎣true true⎦ Reuse on sliced tensors (same type) ================================= T3 ⎡1 1⎤ ⎣1 1⎦
Example (Unsafe) ¶
If the UseUnsafe function option is passed into the call, the assumption is made that it will be returning the same type
var T1, T2, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T1.Gte(T2, UseUnsafe()) fmt.Printf("Unsafe operation\n================\nT1:\n%v\n", T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) V.Gte(T2, UseUnsafe()) fmt.Printf("Unsafe operation, with a sliced Tensor\n======================================\nT1:\n%v", T1)
Output: Unsafe operation ================ T1: ⎡1 1 1⎤ ⎢1 1 1⎥ ⎣1 1 1⎦ Unsafe operation, with a sliced Tensor ====================================== T1: ⎡0 0 2⎤ ⎢1 1 5⎥ ⎣6 7 8⎦
func (*Dense) GteScalar ¶
func (t *Dense) GteScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
GteScalar performs t ≥ other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
func (*Dense) HardenMask ¶
HardenMask forces the mask to be hard. If the mask is hard, then true mask values cannot be unset
func (*Dense) Hstack ¶
Hstack stacks other tensors columnwise (horizontal stacking)
Example ¶
var T, T1, T2, T3 *Dense var err error T = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T1 = New(WithBacking([]float64{1000, 2000}), WithShape(2, 1)) // Simple example if T2, err = T.Hstack(T1); err == nil { fmt.Printf("T.Hstack(T1):\n%v\n", T2) } // This fails, because they are not the same shape T1.Reshape(2) if _, err = T.Hstack(T1); err != nil { fmt.Printf("Error: %v\n\n", err) } // You can stack more than one, as long as all the tensors have the same shape T1.Reshape(2, 1) T3 = T1.Clone().(*Dense) if T2, err = T.Hstack(T1, T3); err == nil { fmt.Printf("T.Hstack(T1, T3):\n%v\n", T2) } // Compatible shapes can be stacked T1 = New(Of(Float64), WithShape(2, 3)) if T2, err = T.Hstack(T1); err == nil { fmt.Printf("Hstacking (2,2) with (2,3):\n%v\n", T2) } // Special attention to vectors - vectors can only be stacked with vectors T = New(WithBacking([]float64{1000, 2000})) T1 = New(WithBacking([]float64{0, 1}), WithShape(1, 2)) if _, err = T.Hstack(T1); err != nil { fmt.Printf("Hstacking (2) with (1,2): %v\n", err) } // Now let's look at failure conditions, or unhandled situations // Incompatible shapes cannot be stacked T1.Reshape(3, 2) if _, err = T.Hstack(T1); err != nil { fmt.Printf("Hstacking (2,2) with (3,2): %v\n", err) } // Obviously you can't stack a scalar onto tensors (or the other way around) T1 = New(FromScalar(1.0)) if _, err = T.Hstack(T1); err != nil { fmt.Printf("Hstacking a scalar onto a tensor: %v\n", err) } if _, err = T1.Hstack(T); err != nil { fmt.Printf("Hstacking a tensor onto a scalar: %v\n", err) }
Output: T.Hstack(T1): ⎡ 0 1 1000⎤ ⎣ 2 3 2000⎦ Error: Failed to perform Concat: Unable to find new shape that results from concatenation: Dimension mismatch. Expected 2, got 1 T.Hstack(T1, T3): ⎡ 0 1 1000 1000⎤ ⎣ 2 3 2000 2000⎦ Hstacking (2,2) with (2,3): ⎡0 1 0 0 0⎤ ⎣2 3 0 0 0⎦ Hstacking (2) with (1,2): Failed to perform Concat: Unable to find new shape that results from concatenation: Dimension mismatch. Expected 1, got 2 Hstacking (2,2) with (3,2): Failed to perform Concat: Unable to find new shape that results from concatenation: Dimension mismatch. Expected 1, got 2 Hstacking a scalar onto a tensor: Tensor has to be at least 1 dimensions Hstacking a tensor onto a scalar: Tensor has to be at least 1 dimensions
func (*Dense) Info ¶
Info returns the access pattern which explains how the data in the underlying array is accessed. This is mostly used for debugging.
func (*Dense) Inner ¶
Inner performs a dot product on two vectors. If t or other are not vectors, it will return an error.
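A short sketch of Inner on two float64 vectors, in the style of the examples above; the scalar result is simply printed, and passing a non-vector is expected to return an error.

a := New(WithBacking([]float64{1, 2, 3}), WithShape(3))
b := New(WithBacking([]float64{4, 5, 6}), WithShape(3))

// 1×4 + 2×5 + 3×6 = 32
res, err := a.Inner(b)
if err != nil {
	fmt.Println(err)
}
fmt.Println(res) // 32

// A (2, 2) matrix is not a vector, so this is expected to error.
m := New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
if _, err := a.Inner(m); err != nil {
	fmt.Println("not a vector:", err)
}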
func (*Dense) IsManuallyManaged ¶
IsManuallyManaged returns true if the memory associated with this *Dense is manually managed (by the user)
func (*Dense) IsMaterializable ¶
IsMaterializable indicates if the Tensor is materializable - if it has either gone through some transforms or slicing
func (*Dense) IsNativelyAccessible ¶
IsNativelyAccessible checks if the pointers are accessible by Go
func (*Dense) Lt ¶
Lt performs t < other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse must have the same Dtype as the return value's Dtype.
Example (Basic) ¶
Comparison functions return a Tensor of bool by default. To return the same type, simply pass in the AsSameType function option
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3, _ = T1.Lt(T2) fmt.Println("Basic operations are safe\n=========================\nT3 = T1 < T2") fmt.Printf("T3:\n%v\n", T3) // To return the same type, use the AsSameType function option T3, _ = T1.Lt(T2, AsSameType()) fmt.Println("Returning same type\n===================") fmt.Printf("T3 (Returns Same Type):\n%v\n", T3) // Sliced tensors are safe too T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) T3, _ = V.Lt(T2) fmt.Printf("Safe slicing\n============\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1) // Similarly for tensors that return the same type T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) T3, _ = V.Lt(T2, AsSameType()) // AsSameType returns a tensor of the same type fmt.Printf("Safe slicing (Same type)\n========================\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)
Output: Basic operations are safe ========================= T3 = T1 < T2 T3: ⎡false false false⎤ ⎢false false false⎥ ⎣false false false⎦ Returning same type =================== T3 (Returns Same Type): ⎡0 0 0⎤ ⎢0 0 0⎥ ⎣0 0 0⎦ Safe slicing ============ T3: ⎡ true true⎤ ⎣false false⎦ T1 remains unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Safe slicing (Same type) ======================== T3: ⎡1 1⎤ ⎣0 0⎦ T1 remains unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Reuse) ¶
The WithReuse function option can be used to pass in a reuse tensor. Unless the AsSameType() function option is also passed, the reuse tensor is expected to be a tensor of bools; mixing these up leads to surprising results.
var T1, T2, T3, V *Dense var sliced Tensor // The reuse tensor is a Tensor of bools... T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3 = New(WithBacking([]bool{ true, false, true, false, true, false, true, false, true}), WithShape(3, 3)) T1.Lt(T2, WithReuse(T3)) // note that AsSameType is not used here fmt.Printf("Default behaviour: Reuse tensor is expected to be of Bools\n==========================================================\nT3:\n%v\n", T3) // If you want to use a Reuse tensor of the same type, then besure to also pass in the AsSameType() flag T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3 = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) // The reuse tensor is a Tensor of Float64... T1.Lt(T2, WithReuse(T3), AsSameType()) // AsSameType is used to return float64s fmt.Printf("Reuse With Same Type\n=====================\nT3:\n%v\n", T3) // Slicing is similar: T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T3 = New(WithBacking([]bool{true, true, true, true}), WithShape(2, 2)) V.Lt(T2, WithReuse(T3)) fmt.Printf("Reuse on sliced tensors\n======================\nT3\n%v\n", T3) // Again, bear in mind same types T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T3 = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) V.Lt(T2, WithReuse(T3), AsSameType()) fmt.Printf("Reuse on sliced tensors (same type)\n=================================\nT3\n%v\n", T3)
Output: Default behaviour: Reuse tensor is expected to be of Bools ========================================================== T3: ⎡false false false⎤ ⎢false false false⎥ ⎣false false false⎦ Reuse With Same Type ===================== T3: ⎡0 0 0⎤ ⎢0 0 0⎥ ⎣0 0 0⎦ Reuse on sliced tensors ====================== T3 ⎡false false⎤ ⎣false false⎦ Reuse on sliced tensors (same type) ================================= T3 ⎡0 0⎤ ⎣0 0⎦
Example (Unsafe) ¶
If the UseUnsafe function option is passed into the call, the operation is performed in place and the result is assumed to be of the same type as the operands.
var T1, T2, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T1.Lt(T2, UseUnsafe()) fmt.Printf("Unsafe operation\n================\nT1:\n%v\n", T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) V.Lt(T2, UseUnsafe()) fmt.Printf("Unsafe operation, with a sliced Tensor\n======================================\nT1:\n%v", T1)
Output: Unsafe operation ================ T1: ⎡0 0 0⎤ ⎢0 0 0⎥ ⎣0 0 0⎦ Unsafe operation, with a sliced Tensor ====================================== T1: ⎡1 1 2⎤ ⎢0 0 5⎥ ⎣6 7 8⎦
func (*Dense) LtScalar ¶
func (t *Dense) LtScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
LtScalar performs t < other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse must have the same Dtype as the return value's Dtype.
func (*Dense) Lte ¶
Lte performs t ≤ other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse must have the same Dtype as the return value's Dtype.
Example (Basic) ¶
LTE
Comparison functions return a Tensor of bool by default. To return the same type, simply pass in the AsSameType function option
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3, _ = T1.Lte(T2) fmt.Println("Basic operations are safe\n=========================\nT3 = T1 <= T2") fmt.Printf("T3:\n%v\n", T3) // To return the same type, use the AsSameType function option T3, _ = T1.Lte(T2, AsSameType()) fmt.Println("Returning same type\n===================") fmt.Printf("T3 (Returns Same Type):\n%v\n", T3) // Sliced tensors are safe too T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) T3, _ = V.Lte(T2) fmt.Printf("Safe slicing\n============\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1) // Similarly for tensors that return the same type T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) T3, _ = V.Lte(T2, AsSameType()) // AsSameType returns a tensor of the same type fmt.Printf("Safe slicing (Same type)\n========================\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)
Output: Basic operations are safe ========================= T3 = T1 <= T2 T3: ⎡true true true⎤ ⎢true true true⎥ ⎣true true true⎦ Returning same type =================== T3 (Returns Same Type): ⎡1 1 1⎤ ⎢1 1 1⎥ ⎣1 1 1⎦ Safe slicing ============ T3: ⎡true true⎤ ⎣true true⎦ T1 remains unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Safe slicing (Same type) ======================== T3: ⎡1 1⎤ ⎣1 1⎦ T1 remains unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Reuse) ¶
The WithReuse function option can be used to pass in a reuse tensor. Unless the AsSameType() function option is also passed, the reuse tensor is expected to be a tensor of bools; mixing these up leads to surprising results.
var T1, T2, T3, V *Dense var sliced Tensor // The reuse tensor is a Tensor of bools... T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3 = New(WithBacking([]bool{ true, false, true, false, true, false, true, false, true}), WithShape(3, 3)) T1.Lte(T2, WithReuse(T3)) // note that AsSameType is not used here fmt.Printf("Default behaviour: Reuse tensor is expected to be of Bools\n==========================================================\nT3:\n%v\n", T3) // If you want to use a Reuse tensor of the same type, then besure to also pass in the AsSameType() flag T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T3 = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) // The reuse tensor is a Tensor of Float64... T1.Lte(T2, WithReuse(T3), AsSameType()) // AsSameType is used to return float64s fmt.Printf("Reuse With Same Type\n=====================\nT3:\n%v\n", T3) // Slicing is similar: T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T3 = New(WithBacking([]bool{true, true, true, true}), WithShape(2, 2)) V.Lte(T2, WithReuse(T3)) fmt.Printf("Reuse on sliced tensors\n======================\nT3\n%v\n", T3) // Again, bear in mind same types T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T3 = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) V.Lte(T2, WithReuse(T3), AsSameType()) fmt.Printf("Reuse on sliced tensors (same type)\n=================================\nT3\n%v\n", T3)
Output: Default behaviour: Reuse tensor is expected to be of Bools ========================================================== T3: ⎡true true true⎤ ⎢true true true⎥ ⎣true true true⎦ Reuse With Same Type ===================== T3: ⎡1 1 1⎤ ⎢1 1 1⎥ ⎣1 1 1⎦ Reuse on sliced tensors ====================== T3 ⎡ true true⎤ ⎣false false⎦ Reuse on sliced tensors (same type) ================================= T3 ⎡1 1⎤ ⎣0 0⎦
Example (Unsafe) ¶
If the UseUnsafe function option is passed into the call, the operation is performed in place and the result is assumed to be of the same type as the operands.
var T1, T2, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T1.Lte(T2, UseUnsafe()) fmt.Printf("Unsafe operation\n================\nT1:\n%v\n", T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2)) V.Lte(T2, UseUnsafe()) fmt.Printf("Unsafe operation, with a sliced Tensor\n======================================\nT1:\n%v", T1)
Output: Unsafe operation ================ T1: ⎡1 1 1⎤ ⎢1 1 1⎥ ⎣1 1 1⎦ Unsafe operation, with a sliced Tensor ====================================== T1: ⎡1 1 2⎤ ⎢1 1 5⎥ ⎣6 7 8⎦
func (*Dense) LteScalar ¶
func (t *Dense) LteScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
LteScalar performs t ≤ other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse must have the same Dtype as the return value's Dtype.
func (*Dense) MaskAt ¶
MaskAt returns the value of the mask at a given coordinate. It returns false (valid) if the tensor is not masked.
func (*Dense) MaskFromDense ¶
MaskFromDense adds a mask slice to the tensor by XORing the dense arguments' masks.
func (*Dense) MaskFromSlice ¶
func (t *Dense) MaskFromSlice(x interface{})
MaskFromSlice makes a mask from the supplied slice.
func (*Dense) MaskedAll ¶
MaskedAll returns true if all mask elements evaluate to true. If the object is not masked, it returns false. Note that this is not the same as numpy's, which looks at the data elements rather than the mask; it is instead equivalent to numpy's ma.getmask(t).all(axis).
func (*Dense) MaskedAny ¶
MaskedAny returns true if any mask element evaluates to true. If the object is not masked, it returns false. Note that this is not the same as numpy's, which looks at the data elements rather than the mask; it is instead equivalent to numpy's ma.getmask(t).any(axis).
func (*Dense) MaskedCount ¶
MaskedCount counts the masked elements of the array (optionally along the given axis). It returns -1 if the axis is out of bounds.
func (*Dense) MaskedEqual ¶
MaskedEqual sets the mask to true where the corresponding data is equal to val. Any values must be the same type as the tensor.
func (*Dense) MaskedGreater ¶
MaskedGreater sets the mask to true where the corresponding data is greater than val. Any values must be the same type as the tensor.
func (*Dense) MaskedGreaterEqual ¶
MaskedGreaterEqual sets the mask to true where the corresponding data is greater than or equal to val. Any values must be the same type as the tensor.
func (*Dense) MaskedInside ¶
MaskedInside sets the mask to true where the corresponding data is inside the range of val. Any values must be the same type as the tensor.
func (*Dense) MaskedLess ¶
MaskedLess sets the mask to true where the corresponding data is less than val. Any values must be the same type as the tensor.
func (*Dense) MaskedLessEqual ¶
MaskedLessEqual sets the mask to true where the corresponding data is less than or equal to val. Any values must be the same type as the tensor.
func (*Dense) MaskedNotEqual ¶
MaskedNotEqual sets the mask to true where the corresponding data is not equal to val. Any values must be the same type as the tensor.
func (*Dense) MaskedOutside ¶
MaskedOutside sets the mask to true where the corresponding data is outside the range of val. Any values must be the same type as the tensor.
func (*Dense) MaskedValues ¶
MaskedValues sets the mask to true where the corresponding data is equal to val. Any values must be the same type as the tensor.
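The mask-setting methods above combine with the mask queries (MaskAt, MaskedCount, MaskedAny and friends). A minimal, hedged sketch, assuming MaskedEqual takes a single value of the tensor's Dtype and MaskedCount may be called without an axis:

T := New(WithBacking([]float64{1, 2, 3, 4, 5, 6}), WithShape(2, 3))

// Mask every element equal to 3. The data itself is untouched;
// only the mask at those positions is set to true.
T.MaskedEqual(3.0)

// Assuming MaskedCount can be called with no axis, this reports how
// many elements are currently masked (1 in this sketch).
fmt.Println(T.MaskedCount())
fmt.Printf("T (masked):\n%v\n", T)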
func (*Dense) MatMul ¶
MatMul is the basic matrix multiplication that you learned in high school. It takes an optional reuse ndarray, where the ndarray is reused as the result. If that isn't passed in, a new ndarray will be created instead.
Example ¶
handleErr := func(err error) { if err != nil { panic(err) } } T0 := New(WithShape(10, 15), WithBacking(Range(Float64, 0, 150))) T1 := New(WithShape(15, 10), WithBacking(Range(Float64, 150, 0))) T2, err := MatMul(T0, T1) handleErr(err) fmt.Printf("T2:\n%v", T2)
Output: T2: ⎡ 5600 5495 5390 5285 ... 4970 4865 4760 4655⎤ ⎢ 23600 23270 22940 22610 ... 21620 21290 20960 20630⎥ ⎢ 41600 41045 40490 39935 ... 38270 37715 37160 36605⎥ ⎢ 59600 58820 58040 57260 ... 54920 54140 53360 52580⎥ . . . ⎢113600 112145 110690 109235 ... 104870 103415 101960 100505⎥ ⎢131600 129920 128240 126560 ... 121520 119840 118160 116480⎥ ⎢149600 147695 145790 143885 ... 138170 136265 134360 132455⎥ ⎣167600 165470 163340 161210 ... 154820 152690 150560 148430⎦
Example (Sliced) ¶
//ASPIRATIONAL TODO: incX and incY of different sizes handleErr := func(err error) { if err != nil { panic(err) } } T0 := New(WithShape(10, 15), WithBacking(Range(Float64, 0, 150))) T1 := New(WithShape(15, 10), WithBacking(Range(Float64, 150, 0))) T2, err := MatMul(T0, T1) handleErr(err) fmt.Printf("T2:\n%v", T2) // Slice T0 to only take a (2, 3) on the upper quadrant // T3 := T0[0:3, 0:2] T3, err := T0.Slice(makeRS(0, 3), makeRS(0, 2)) handleErr(err) fmt.Printf("T3:\n%v", T3) T4, err := T1.Slice(makeRS(13, 15), makeRS(8, 10)) handleErr(err) fmt.Printf("T4:\n%v", T4) T5, err := T3.(*Dense).MatMul(T4) handleErr(err) fmt.Printf("T3xT4:\n%v", T5) // Outputz: // T2: // ⎡ 5600 5495 5390 5285 ... 4970 4865 4760 4655⎤ // ⎢ 23600 23270 22940 22610 ... 21620 21290 20960 20630⎥ // ⎢ 41600 41045 40490 39935 ... 38270 37715 37160 36605⎥ // ⎢ 59600 58820 58040 57260 ... 54920 54140 53360 52580⎥ // . // . // . // ⎢113600 112145 110690 109235 ... 104870 103415 101960 100505⎥ // ⎢131600 129920 128240 126560 ... 121520 119840 118160 116480⎥ // ⎢149600 147695 145790 143885 ... 138170 136265 134360 132455⎥ // ⎣167600 165470 163340 161210 ... 154820 152690 150560 148430⎦ // T3: // ⎡ 0 1⎤ // ⎢15 16⎥ // ⎣30 31⎦ // T4: // ⎡12 11⎤ // ⎣ 2 1⎦ // T3xT4: // ⎡ 2 1⎤ // ⎢212 181⎥ // ⎣422 361⎦
Output:
func (*Dense) MatVecMul ¶
MatVecMul performs a matrix-vector multiplication.
Example ¶
handleErr := func(err error) { if err != nil { panic(err) } } T0 := New(WithShape(2, 3), WithBacking(Range(Float64, 1, 7))) T1 := New(WithShape(3), WithBacking(Range(Float64, 0, 3))) T2, err := T0.MatVecMul(T1) handleErr(err) fmt.Printf("T2:\n%v\n", T2)
Output: T2: [ 8 17]
Example (RowMajorSliced) ¶
// ASPIRATIONAL TODO: IncX and incY of differering values handleErr := func(err error) { if err != nil { panic(err) } } T0 := New(WithShape(10, 12), WithBacking(Range(Float64, 1, 121))) T1 := New(WithShape(3, 3), WithBacking(Range(Float64, 1, 10))) T2, err := T0.Slice(makeRS(1, 3), makeRS(3, 6)) handleErr(err) T3, err := T1.Slice(nil, makeRS(1, 2)) handleErr(err) // here the + formatting option is used because you should know that after this particular slice, the result will be a vector fmt.Printf("T2:\n%+v", T2) fmt.Printf("T3:\n%+v\n", T3) // here we print the underlying slice of T3 just to show that it's actually a much larger slice fmt.Printf("Underlying Slice: %v\n", T3.Data()) T4, err := T2.(*Dense).MatVecMul(T3) handleErr(err) fmt.Printf("T4:\n%v\n", T4) // Outputz: // T2: // Matrix (2, 3) [10 1] // ⎡14 15 16⎤ // ⎣24 25 26⎦ // T3: // Vector (3) [3] // [2 5 8] // Underlying Slice: [2 3 4 5 6 7 8] // T4: // [261 441]
Output:
func (*Dense) Materialize ¶
Materialize takes a view, copies its data and puts it in a new *Tensor.
func (*Dense) MemSize ¶
func (a *Dense) MemSize() uintptr
MemSize returns how big the slice is in bytes
func (*Dense) Mod ¶
Mod performs t % other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
Example (Basic) ¶
By default, arithmetic operations are safe
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Mod(T2) fmt.Printf("Default operation is safe\n==========================\nT3 = T1 %% T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) T3, _ = V.Mod(T2) fmt.Printf("Default operation is safe (sliced operations)\n=============================================\nT3 = T1[0:2, 0:2] %% T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output: Default operation is safe ========================== T3 = T1 % T2 T3: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations) ============================================= T3 = T1[0:2, 0:2] % T2 T3: ⎡0 1⎤ ⎣3 4⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Incr) ¶
Incrementing a tensor is also a function option provided by the package
var T1, T2, T3, Incr, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) Incr = New(WithBacking([]float64{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3)) T3, _ = T1.Mod(T2, WithIncr(Incr)) fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 %% T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3) // Operations on sliced tensor is also allowed. Note that your Incr tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) Incr = New(WithBacking([]float64{100, 100, 100, 100}), WithShape(2, 2)) T3, _ = V.Mod(T2, WithIncr(Incr)) fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 %% T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output: Incr tensor passed in ====================== Incr += T1 % T2 Incr == T3: true T3: ⎡100 101 102⎤ ⎢103 104 105⎥ ⎣106 107 108⎦ Incr tensor passed in (sliced tensor) ====================================== Incr += T1 % T2 Incr == T3: true T3: ⎡100 101⎤ ⎣103 104⎦
Example (Reuse) ¶
An optional reuse tensor can also be specified with the WithReuse function option
var T1, V, T2, Reuse, T3 *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) T3, _ = T1.Mod(T2, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // You can also use it on operations on sliced tensors - note your reuse tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) Reuse = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.Mod(T2, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output: Reuse tensor passed in ====================== T3 == Reuse: true T3: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Reuse tensor passed in (sliced tensor) ====================================== T3 == Reuse: true T3: ⎡0 1⎤ ⎣3 4⎦
Example (Unsafe) ¶
To perform unsafe operations, use the `UseUnsafe` function option
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Mod(T2, UseUnsafe()) fmt.Printf("Unsafe Operation\n================\nT3 = T1 %% T2\nT1 == T3: %t\nT1:\n%v\n", T1 == T3, T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) V.Mod(T2, UseUnsafe()) // unsafe overwrites the data in T1 fmt.Printf("Unsafe Operation on sliced Tensors\n==================================\nV = T1[0:2, 0:2] %% T2\nV:\n%v\n", V) fmt.Printf("Naturally, T1 is mutated too:\n%v", T1)
Output: Unsafe Operation ================ T3 = T1 % T2 T1 == T3: true T1: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Unsafe Operation on sliced Tensors ================================== V = T1[0:2, 0:2] % T2 V: ⎡0 1⎤ ⎣3 4⎦ Naturally, T1 is mutated too: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
func (*Dense) ModScalar ¶
func (t *Dense) ModScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
ModScalar performs t % other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
Example (Basic) ¶
By default, arithmetic operations are safe
var T1, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) T3, _ = T1.ModScalar(float32(5), true) fmt.Printf("Default operation is safe (tensor is left operand)\n==========================\nT3 = T1 %% 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T3, _ = T1.ModScalar(float32(5), false) fmt.Printf("Default operation is safe (tensor is right operand)\n==========================\nT3 = 5 %% T1\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T3, _ = V.ModScalar(float32(5), true) fmt.Printf("Default operation is safe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[0:2, 0:2] %% 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T3, _ = V.ModScalar(float32(5), false) fmt.Printf("Default operation is safe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 %% T1[0:2, 0:2]\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output: Default operation is safe (tensor is left operand) ========================== T3 = T1 % 5 T3: ⎡0 1 2⎤ ⎢3 4 0⎥ ⎣1 2 3⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (tensor is right operand) ========================== T3 = 5 % T1 T3: ⎡NaN 0 1⎤ ⎢ 2 1 0⎥ ⎣ 5 5 5⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations - tensor is left operand) ============================================= T3 = T1[0:2, 0:2] % 5 T3: ⎡0 1⎤ ⎣3 4⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations - tensor is right operand) ============================================= T3 = 5 % T1[0:2, 0:2] T3: ⎡NaN 0⎤ ⎣ 2 1⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
func (*Dense) Mul ¶
Mul performs t × other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
Example (Basic) ¶
By default, arithmetic operations are safe
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Mul(T2) fmt.Printf("Default operation is safe\n==========================\nT3 = T1 × T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) T3, _ = V.Mul(T2) fmt.Printf("Default operation is safe (sliced operations)\n=============================================\nT3 = T1[0:2, 0:2] × T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output: Default operation is safe ========================== T3 = T1 × T2 T3: ⎡ 0 11 24⎤ ⎢ 39 56 75⎥ ⎣ 96 119 144⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations) ============================================= T3 = T1[0:2, 0:2] × T2 T3: ⎡ 0 11⎤ ⎣36 52⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Incr) ¶
Incrementing a tensor is also a function option provided by the package
var T1, T2, T3, Incr, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) Incr = New(WithBacking([]float64{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3)) T3, _ = T1.Mul(T2, WithIncr(Incr)) fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 × T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3) // Operations on sliced tensor is also allowed. Note that your Incr tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) Incr = New(WithBacking([]float64{100, 100, 100, 100}), WithShape(2, 2)) T3, _ = V.Mul(T2, WithIncr(Incr)) fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 × T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output: Incr tensor passed in ====================== Incr += T1 × T2 Incr == T3: true T3: ⎡100 111 124⎤ ⎢139 156 175⎥ ⎣196 219 244⎦ Incr tensor passed in (sliced tensor) ====================================== Incr += T1 × T2 Incr == T3: true T3: ⎡100 111⎤ ⎣136 152⎦
Example (Reuse) ¶
An optional reuse tensor can also be specified with the WithReuse function option
var T1, V, T2, Reuse, T3 *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) T3, _ = T1.Mul(T2, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // You can also use it on operations on sliced tensors - note your reuse tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) Reuse = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.Mul(T2, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output: Reuse tensor passed in ====================== T3 == Reuse: true T3: ⎡ 0 11 24⎤ ⎢ 39 56 75⎥ ⎣ 96 119 144⎦ Reuse tensor passed in (sliced tensor) ====================================== T3 == Reuse: true T3: ⎡ 0 11⎤ ⎣36 52⎦
Example (Reuse_operand) ¶
An optional reuse tensor can also be specified with the WithReuse function option. Passing in an operand would not cause a problem.
var T1, T2, T3 *Dense T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Mul(T2, WithReuse(T1)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == T1: %t\nT3:\n%v\n", T3 == T1, T3) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Mul(T2, WithReuse(T2)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == T2: %t\nT3:\n%v\n", T3 == T2, T3)
Output: Reuse tensor passed in ====================== T3 == T1: true T3: ⎡ 0 11 24⎤ ⎢ 39 56 75⎥ ⎣ 96 119 144⎦ Reuse tensor passed in ====================== T3 == T2: true T3: ⎡ 0 11 24⎤ ⎢ 39 56 75⎥ ⎣ 96 119 144⎦
Example (Unsafe) ¶
To perform unsafe operations, use the `UseUnsafe` function option
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Mul(T2, UseUnsafe()) fmt.Printf("Unsafe Operation\n================\nT3 = T1 × T2\nT1 == T3: %t\nT1:\n%v", T1 == T3, T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) V.Mul(T2, UseUnsafe()) // unsafe overwrites the data in T1 fmt.Printf("Unsafe Operation on sliced Tensors\n==================================\nV = T1[0:2, 0:2] × T2\nV:\n%v\n", V) fmt.Printf("Naturally, T1 is mutated too:\n%v", T1)
Output: Unsafe Operation ================ T3 = T1 × T2 T1 == T3: true T1: ⎡ 0 11 24⎤ ⎢ 39 56 75⎥ ⎣ 96 119 144⎦ Unsafe Operation on sliced Tensors ================================== V = T1[0:2, 0:2] × T2 V: ⎡ 0 11⎤ ⎣36 52⎦ Naturally, T1 is mutated too: ⎡ 0 11 2⎤ ⎢36 52 5⎥ ⎣ 6 7 8⎦
func (*Dense) MulScalar ¶
func (t *Dense) MulScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
MulScalar performs t × other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
Example (Basic) ¶
By default, arithmetic operations are safe
var T1, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) T3, _ = T1.MulScalar(float32(5), true) fmt.Printf("Default operation is safe (tensor is left operand)\n==========================\nT3 = T1 * 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T3, _ = T1.MulScalar(float32(5), false) fmt.Printf("Default operation is safe (tensor is right operand)\n==========================\nT3 = 5 * T1\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(1, 3)) V = sliced.(*Dense) T3, _ = V.MulScalar(float32(5), true) fmt.Printf("Default operation is safe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 1:3] + 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(1, 3)) V = sliced.(*Dense) T3, _ = V.MulScalar(float32(5), false) fmt.Printf("Default operation is safe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 * T1[:, 1:3]\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output: Default operation is safe (tensor is left operand) ========================== T3 = T1 * 5 T3: ⎡ 0 5 10⎤ ⎢15 20 25⎥ ⎣30 35 40⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (tensor is right operand) ========================== T3 = 5 * T1 T3: ⎡ 0 5 10⎤ ⎢15 20 25⎥ ⎣30 35 40⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations - tensor is left operand) ============================================= T3 = T1[:, 1:3] + 5 T3: ⎡ 5 10⎤ ⎢20 25⎥ ⎣35 40⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations - tensor is right operand) ============================================= T3 = 5 * T1[:, 1:3] T3: ⎡ 5 10⎤ ⎢20 25⎥ ⎣35 40⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Incr) ¶
Incrementing a tensor is also a function option provided by the package
var T1, T3, Incr, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) Incr = New(WithBacking([]float32{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3)) T3, _ = T1.MulScalar(float32(5), true, WithIncr(Incr)) fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 * T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3) // Operations on sliced tensor is also allowed. Note that your Incr tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) Incr = New(WithBacking([]float32{100, 100, 100, 100}), WithShape(2, 2)) T3, _ = V.MulScalar(float32(5), true, WithIncr(Incr)) fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 * T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output: Incr tensor passed in ====================== Incr += T1 * T2 Incr == T3: true T3: ⎡100 105 110⎤ ⎢115 120 125⎥ ⎣130 135 140⎦ Incr tensor passed in (sliced tensor) ====================================== Incr += T1 * T2 Incr == T3: true T3: ⎡100 105⎤ ⎣115 120⎦
Example (Reuse) ¶
Reuse tensors may be used, with the WithReuse() function option.
var T1, V, Reuse, T3 *Dense var sliced Tensor // Tensor is left operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3)) T3, _ = T1.MulScalar(float32(5), true, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (Tensor is left operand)\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // Tensor is right operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3)) T3, _ = T1.MulScalar(float32(5), false, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (Tensor is right operand)\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // Tensor is left operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.MulScalar(float32(5), true, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // Tensor is left operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.MulScalar(float32(5), false, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output: Reuse tensor passed in (Tensor is left operand) ====================== T3 == Reuse: true T3: ⎡ 0 5 10⎤ ⎢15 20 25⎥ ⎣30 35 40⎦ Reuse tensor passed in (Tensor is right operand) ====================== T3 == Reuse: true T3: ⎡ 0 5 10⎤ ⎢15 20 25⎥ ⎣30 35 40⎦ Reuse tensor passed in (sliced tensor - Tensor is left operand) ====================================== T3 == Reuse: true T3: ⎡ 0 5⎤ ⎣15 20⎦ Reuse tensor passed in (sliced tensor - Tensor is left operand) ====================================== T3 == Reuse: true T3: ⎡ 0 5⎤ ⎣15 20⎦
Example (Unsafe) ¶
var T1, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) T3, _ = T1.MulScalar(float32(5), true, UseUnsafe()) fmt.Printf("Operation is unsafe (tensor is left operand)\n==========================\nT3 = T1 * 5\nT3:\n%v\nT3 == T1: %t\nT1 is changed:\n%v\n", T3, T3 == T1, T1) T3, _ = T1.MulScalar(float32(5), false, UseUnsafe()) fmt.Printf("Operation is unsafe (tensor is right operand)\n==========================\nT3 = 5 * T1\nT3:\n%v\nT3 == T1: %t\nT1 is changed:\n%v\n", T3, T3 == T1, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(0, 2)) V = sliced.(*Dense) T3, _ = V.MulScalar(float32(5), true, UseUnsafe()) fmt.Printf("Operation is unsafe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 0:2] + 5\nT3:\n%v\nsliced == T3: %t\nT1 is changed:\n%v\n", T3, sliced == T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(0, 2)) V = sliced.(*Dense) T3, _ = V.MulScalar(float32(5), false, UseUnsafe()) fmt.Printf("Operation is unsafe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 * T1[:, 0:2]\nT3:\n%v\nsliced == T3: %t\nT1 is changed:\n%v\n", T3, sliced == T3, T1)
Output: Operation is unsafe (tensor is left operand) ========================== T3 = T1 * 5 T3: ⎡ 0 5 10⎤ ⎢15 20 25⎥ ⎣30 35 40⎦ T3 == T1: true T1 is changed: ⎡ 0 5 10⎤ ⎢15 20 25⎥ ⎣30 35 40⎦ Operation is unsafe (tensor is right operand) ========================== T3 = 5 * T1 T3: ⎡ 0 25 50⎤ ⎢ 75 100 125⎥ ⎣150 175 200⎦ T3 == T1: true T1 is changed: ⎡ 0 25 50⎤ ⎢ 75 100 125⎥ ⎣150 175 200⎦ Operation is unsafe (sliced operations - tensor is left operand) ============================================= T3 = T1[:, 0:2] + 5 T3: ⎡ 0 5⎤ ⎢15 20⎥ ⎣30 35⎦ sliced == T3: true T1 is changed: ⎡ 0 5 2⎤ ⎢15 20 5⎥ ⎣30 35 8⎦ Operation is unsafe (sliced operations - tensor is right operand) ============================================= T3 = 5 * T1[:, 0:2] T3: ⎡ 0 5⎤ ⎢15 20⎥ ⎣30 35⎦ sliced == T3: true T1 is changed: ⎡ 0 5 2⎤ ⎢15 20 5⎥ ⎣30 35 8⎦
func (*Dense) NonMaskedCount ¶
NonMaskedCount counts the non-masked elements of the array (optionally along the given axis). It returns -1 if the axis is out of bounds.
func (*Dense) Norm ¶
Norm returns the p-ordered norm of the *Dense, given the axes.
This implementation is directly adapted from Numpy, which is licenced under a BSD-like licence, and can be found here: https://docs.scipy.org/doc/numpy-1.9.1/license.html
func (*Dense) Pow ¶
Pow performs t ^ other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
Example (Basic) ¶
By default, arithmetic operations are safe
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Pow(T2) fmt.Printf("Default operation is safe\n==========================\nT3 = T1 ^ T2\nT3:\n%1.1v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) T3, _ = V.Pow(T2) fmt.Printf("Default operation is safe (sliced operations)\n=============================================\nT3 = T1[0:2, 0:2] ^ T2\nT3:\n%1.1v\nT1 is unchanged:\n%v\n", T3, T1)
Output: Default operation is safe ========================== T3 = T1 ^ T2 T3: ⎡ 0 1 4e+03⎤ ⎢2e+06 3e+08 3e+10⎥ ⎣3e+12 2e+14 2e+16⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations) ============================================= T3 = T1[0:2, 0:2] ^ T2 T3: ⎡ 0 1⎤ ⎣5e+05 7e+07⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Incr) ¶
Incrementing a tensor is also a function option provided by the package
var T1, T2, T3, Incr, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) Incr = New(WithBacking([]float64{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3)) T3, _ = T1.Pow(T2, WithIncr(Incr)) fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 ^ T2\nIncr == T3: %t\nT3:\n%1.5v\n", Incr == T3, T3) // Operations on sliced tensor is also allowed. Note that your Incr tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) Incr = New(WithBacking([]float64{100, 100, 100, 100}), WithShape(2, 2)) T3, _ = V.Pow(T2, WithIncr(Incr)) fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 ^ T2\nIncr == T3: %t\nT3:\n%1.5v\n", Incr == T3, T3)
Output: Incr tensor passed in ====================== Incr += T1 ^ T2 Incr == T3: true T3: ⎡ 100 101 4196⎤ ⎢1.5944e+06 2.6844e+08 3.0518e+10⎥ ⎣2.8211e+12 2.3263e+14 1.8014e+16⎦ Incr tensor passed in (sliced tensor) ====================================== Incr += T1 ^ T2 Incr == T3: true T3: ⎡ 100 101⎤ ⎣5.3154e+05 6.7109e+07⎦
Example (Reuse) ¶
An optional reuse tensor can also be specified with the WithReuse function option
var T1, V, T2, Reuse, T3 *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) T3, _ = T1.Pow(T2, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == Reuse: %t\nT3:\n%1.1v\n", T3 == Reuse, T3) // You can also use it on operations on sliced tensors - note your reuse tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) Reuse = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.Pow(T2, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor)\n======================================\nT3 == Reuse: %t\nT3:\n%1.v", T3 == Reuse, T3)
Output: Reuse tensor passed in ====================== T3 == Reuse: true T3: ⎡ 0 1 4e+03⎤ ⎢2e+06 3e+08 3e+10⎥ ⎣3e+12 2e+14 2e+16⎦ Reuse tensor passed in (sliced tensor) ====================================== T3 == Reuse: true T3: ⎡ 0 1⎤ ⎣5e+05 7e+07⎦
Example (Unsafe) ¶
To perform unsafe operations, use the `UseUnsafe` function option
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Pow(T2, UseUnsafe()) fmt.Printf("Unsafe Operation\n================\nT3 = T1 ^ T2\nT1 == T3: %t\nT1:\n%1.1v\n", T1 == T3, T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) V.Pow(T2, UseUnsafe()) // unsafe overwrites the data in T1 fmt.Printf("Unsafe Operation on sliced Tensors\n==================================\nV = T1[0:2, 0:2] ^ T2\nV:\n%1.1v\n", V) fmt.Printf("Naturally, T1 is mutated too:\n%1.1v", T1)
Output: Unsafe Operation ================ T3 = T1 ^ T2 T1 == T3: true T1: ⎡ 0 1 4e+03⎤ ⎢2e+06 3e+08 3e+10⎥ ⎣3e+12 2e+14 2e+16⎦ Unsafe Operation on sliced Tensors ================================== V = T1[0:2, 0:2] ^ T2 V: ⎡ 0 1⎤ ⎣5e+05 7e+07⎦ Naturally, T1 is mutated too: ⎡ 0 1 2⎤ ⎢5e+05 7e+07 5⎥ ⎣ 6 7 8⎦
func (*Dense) PowScalar ¶
func (t *Dense) PowScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
PowScalar performs t ^ other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
Example (Basic) ¶
By default, arithmetic operations are safe
var T1, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) T3, _ = T1.PowScalar(float32(5), true) fmt.Printf("Default operation is safe (tensor is left operand)\n==========================\nT3 = T1 ^ 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T3, _ = T1.PowScalar(float32(5), false) fmt.Printf("Default operation is safe (tensor is right operand)\n==========================\nT3 = 5 ^ T1\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T3, _ = V.PowScalar(float32(5), true) fmt.Printf("Default operation is safe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[0:2, 0:2] ^ 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T3, _ = V.PowScalar(float32(5), false) fmt.Printf("Default operation is safe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 ^ T1[0:2, 0:2]\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output:
func (*Dense) ReadCSV ¶
ReadCSV reads a CSV into a *Dense. It will overwrite the underlying data.
BUG(chewxy): reading CSV doesn't handle CSVs with different columns per row yet.
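A hedged sketch of ReadCSV, assuming it reads from an io.Reader into an already-constructed *Dense (a strings.Reader stands in for a file here):

T := New(WithShape(2, 3), Of(Float64))

// The CSV contents replace whatever data T previously held
// (assuming ReadCSV accepts any io.Reader).
data := "1,2,3\n4,5,6"
if err := T.ReadCSV(strings.NewReader(data)); err != nil {
	fmt.Println(err)
}
fmt.Printf("T:\n%v", T)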
func (*Dense) Reduce ¶
func (t *Dense) Reduce(fn interface{}, axis int, defaultValue interface{}) (retVal *Dense, err error)
Reduce applies a reduction function and reduces the values along the given axis.
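A short sketch of Reduce using the signature above, with a plain Go function as the reduction; the axis and default value are passed in the documented order.

T := New(WithBacking(Range(Float64, 0, 6)), WithShape(2, 3))

// Sum along axis 0: each column of the (2, 3) matrix collapses into one value.
sum := func(a, b float64) float64 { return a + b }
S, err := T.Reduce(sum, 0, 0.0)
if err != nil {
	fmt.Println(err)
}
fmt.Printf("column sums: %v\n", S) // expected: [3 5 7]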
func (*Dense) Repeat ¶
Repeat is like Numpy's repeat. It repeats the elements of an array. The repeats param defines how many times each element in the axis is repeated. Just like NumPy, the repeats param is broadcasted to fit the size of the given axis.
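A brief sketch of Repeat, assuming the axis comes first, followed by one or more repeat counts that are broadcast along that axis.

T := New(WithBacking([]float64{1, 2, 3}), WithShape(3))

// A single count repeats every element: 1 1 2 2 3 3
// (assuming the signature Repeat(axis, counts...)).
R, err := T.Repeat(0, 2)
if err != nil {
	fmt.Println(err)
}
fmt.Printf("repeat 2: %v\n", R)

// Per-element counts are also allowed: 1 2 2 3 3 3
R2, _ := T.Repeat(0, 1, 2, 3)
fmt.Printf("repeat 1,2,3: %v\n", R2)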
func (*Dense) RequiresIterator ¶
RequiresIterator indicates if an iterator is required to read the data in *Dense in the correct fashion
func (*Dense) Reshape ¶
Reshape reshapes a *Dense. If the tensor needs to be materialized (i.e. it is a view or a transpose), it will be materialized before the reshape happens.
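A minimal sketch of Reshape; it works in place on the receiver, so only an error is returned.

T := New(WithBacking(Range(Float64, 0, 6)), WithShape(2, 3))
fmt.Printf("before (2, 3):\n%v", T)

// Only the shape changes; the backing data stays in the same order.
if err := T.Reshape(3, 2); err != nil {
	fmt.Println(err)
}
fmt.Printf("after (3, 2):\n%v", T)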
func (*Dense) RollAxis ¶
RollAxis rolls the axis backwards until it lies in the given position.
This method was adapted from Numpy's Rollaxis. The licence for Numpy is a BSD-like licence and can be found here: https://github.com/numpy/numpy/blob/master/LICENSE.txt
As a result of being adapted from Numpy, the quirks are also adapted. A good guide to reducing the confusion around rollaxis can be found here: http://stackoverflow.com/questions/29891583/reason-why-numpy-rollaxis-is-so-confusing (see answer by hpaulj)
func (*Dense) SVD ¶
SVD does the Singular Value Decomposition for the *Dense.
It works by temporarily converting the *Dense into a gonum/mat64 matrix and using Gonum's SVD function to perform the decomposition. In the future, when gonum/lapack fully supports float32, we'll look into rewriting this.
Example ¶
T := New( WithShape(4, 5), WithBacking([]float64{1, 0, 0, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0}), ) _, u, _, _ := T.SVD(true, true) uT := u.Clone().(*Dense) uT.T() eye, err := u.MatMul(uT) fmt.Println(eye) fmt.Println(err)
Output: ⎡1 0 0 0⎤ ⎢0 1 0 0⎥ ⎢0 0 1 0⎥ ⎣0 0 0 1⎦ <nil>
func (*Dense) SafeT ¶
SafeT is exactly like T(), except it returns a new *Dense. The data is also copied over, unmoved.
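A short sketch contrasting T() and SafeT, assuming SafeT accepts the same optional axes as T() and returns the transposed *Dense along with an error.

T1 := New(WithBacking(Range(Float64, 0, 6)), WithShape(2, 3))

// T() transposes in place by changing the access pattern;
// SafeT leaves the receiver untouched and returns a new *Dense instead.
T2, err := T1.SafeT()
if err != nil {
	fmt.Println(err)
}
fmt.Printf("T1 (unchanged, still (2, 3)):\n%v", T1)
fmt.Printf("T2 (transposed copy, (3, 2)):\n%v", T2)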
func (*Dense) ScalarValue ¶
func (t *Dense) ScalarValue() interface{}
ScalarValue returns the scalar value of a *Tensor, IF and ONLY IF it's a Tensor representation of a scalar value. This is required because operations like a vector dot product (vec · vec) would return a scalar value. I didn't want to return interface{} for all the API methods, so the next best solution is to wrap the scalar value in a *Tensor.
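A tiny sketch of ScalarValue on a scalar-valued *Dense, using the signature above.

T := New(FromScalar(3.14))

// ScalarValue unwraps the Go value held by the scalar *Dense.
v := T.ScalarValue()
fmt.Printf("%v of type %T\n", v, v) // 3.14 of type float64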
func (*Dense) Set ¶
func (a *Dense) Set(i int, x interface{})
Set sets the value of the underlying array at the index i.
func (*Dense) SetMaskAtIndex ¶
SetMaskAtIndex sets the value of the mask at a given index.
func (*Dense) ShallowClone ¶
ShallowClone clones the *Dense without making a copy of the underlying array
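A short sketch of what that means in practice, assuming ShallowClone returns a *Dense: writes through the clone are visible in the original, because the backing array is shared.

T := New(WithBacking([]float64{1, 2, 3, 4}), WithShape(2, 2))
C := T.ShallowClone()

// C has its own header (shape, strides) but shares T's backing array.
C.Set(0, 1000.0)
fmt.Printf("C:\n%v", C)
fmt.Printf("T sees the write too:\n%v", T)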
func (*Dense) Slice ¶
Slice performs slicing on the *Dense Tensor. It returns a view which shares the same underlying memory as the original *Dense.
Given:
T := New(WithShape(2, 2), WithBacking(Range(Float64, 0, 4))) V, _ := T.Slice(nil, singleSlice(1)) // T[:, 1]
Any modification to the values in V, will be reflected in T as well.
The method treats <nil> as equivalent to a colon slice. T.Slice(nil) is equivalent to T[:] in Numpy syntax
Example ¶
var T Tensor T = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) fmt.Printf("T:\n%v\n", T) // T[0:2, 0:2] T, _ = T.Slice(makeRS(0, 2), makeRS(0, 2)) // makeRS is an unexported function that creates a Slice. fmt.Printf("T[0:2, 0:2]:\n%v\n", T) // T[:, 1] T, _ = T.(Slicer).Slice(nil, ss(1)) // ss is unexported fmt.Printf("T[:, 1]:\n%v\n", T)
Output: T: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ T[0:2, 0:2]: ⎡0 1⎤ ⎣3 4⎦ T[:, 1]: [1 4]
Example (OneDimension) ¶
Slicing works on one dimensional arrays too:
var T Tensor T = New(WithBacking(Range(Float64, 0, 9))) fmt.Printf("T:\n%v\n\n", T) T, _ = T.Slice(makeRS(0, 5)) fmt.Printf("T[0:5]:\n%v\n", T)
Output: T: [0 1 2 3 ... 5 6 7 8] T[0:5]: [0 1 2 3 4]
Example (ViewMutation) ¶
Any modifications to the sliced value modifies the original slice as well
var T, V Tensor T = New(WithBacking(Range(Int, 0, 16)), WithShape(4, 4)) fmt.Printf("T:\n%v\n", T) V, _ = T.Slice(makeRS(1, 3), makeRS(1, 3)) fmt.Printf("V:\n%v\n", V) // Now we modify V's 0th value V.(*Dense).Set(0, 1000) fmt.Printf("V[0] = 1000:\n%v\n", V) fmt.Printf("T is also mutated:\n%v", T)
Output: T: ⎡ 0 1 2 3⎤ ⎢ 4 5 6 7⎥ ⎢ 8 9 10 11⎥ ⎣12 13 14 15⎦ V: ⎡ 5 6⎤ ⎣ 9 10⎦ V[0] = 1000: ⎡1000 6⎤ ⎣ 9 10⎦ T is also mutated: ⎡ 0 1 2 3⎤ ⎢ 4 1000 6 7⎥ ⎢ 8 9 10 11⎥ ⎣ 12 13 14 15⎦
func (*Dense) SliceInto ¶
SliceInto is a convenience method. It does NOT copy the values - it simply updates the AP of the view. The underlying data is the same. This method will override ALL the metadata in view.
func (*Dense) Stack ¶
Stack stacks the other tensors along the axis specified. It is like Numpy's stack function.
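A small sketch of Stack, assuming the axis comes first, followed by the other *Dense tensors; stacking two (2, 2) matrices along a new leading axis is expected to give a (2, 2, 2) tensor.

T1 := New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T2 := New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))

// Stack along a new axis 0 (assuming the signature Stack(axis, others...)).
T3, err := T1.Stack(0, T2)
if err != nil {
	fmt.Println(err)
}
fmt.Printf("T3 %v:\n%v", T3.Shape(), T3)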
func (*Dense) Sub ¶
Sub performs t - other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
Example (Basic) ¶
By default, arithmetic operations are safe
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Sub(T2) fmt.Printf("Default operation is safe\n==========================\nT3 = T1 - T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) T3, _ = V.Sub(T2) fmt.Printf("Default operation is safe (sliced operations)\n=============================================\nT3 = T1[0:2, 0:2] + T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output: Default operation is safe ========================== T3 = T1 - T2 T3: ⎡-10 -10 -10⎤ ⎢-10 -10 -10⎥ ⎣-10 -10 -10⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations) ============================================= T3 = T1[0:2, 0:2] + T2 T3: ⎡-10 -10⎤ ⎣ -9 -9⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Incr) ¶
Incrementing a tensor is also a function option provided by the package
var T1, T2, T3, Incr, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) Incr = New(WithBacking([]float64{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3)) T3, _ = T1.Sub(T2, WithIncr(Incr)) fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 - T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3) // Operations on sliced tensor is also allowed. Note that your Incr tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) Incr = New(WithBacking([]float64{100, 100, 100, 100}), WithShape(2, 2)) T3, _ = V.Sub(T2, WithIncr(Incr)) fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 + T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output: Incr tensor passed in ====================== Incr += T1 - T2 Incr == T3: true T3: ⎡90 90 90⎤ ⎢90 90 90⎥ ⎣90 90 90⎦ Incr tensor passed in (sliced tensor) ====================================== Incr += T1 + T2 Incr == T3: true T3: ⎡90 90⎤ ⎣91 91⎦
Example (Reuse) ¶
An optional reuse tensor can also be specified with the WithReuse function option
var T1, V, T2, Reuse, T3 *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) T3, _ = T1.Sub(T2, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // You can also use it on operations on sliced tensors - note your reuse tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) Reuse = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.Sub(T2, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output: Reuse tensor passed in ====================== T3 == Reuse: true T3: ⎡-10 -10 -10⎤ ⎢-10 -10 -10⎥ ⎣-10 -10 -10⎦ Reuse tensor passed in (sliced tensor) ====================================== T3 == Reuse: true T3: ⎡-10 -10⎤ ⎣ -9 -9⎦
Example (Reuse_operand) ¶
An optional reuse tensor can also be specified with the WithReuse function option. Passing in an operand would not cause a problem.
var T1, T2, T3 *Dense T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Sub(T2, WithReuse(T1)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == T1: %t\nT3:\n%v\n", T3 == T1, T3) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Sub(T2, WithReuse(T2)) fmt.Printf("Reuse tensor passed in\n======================\nT3 == T2: %t\nT3:\n%v\n", T3 == T2, T3)
Output: Reuse tensor passed in ====================== T3 == T1: true T3: ⎡-10 -10 -10⎤ ⎢-10 -10 -10⎥ ⎣-10 -10 -10⎦ Reuse tensor passed in ====================== T3 == T2: true T3: ⎡-10 -10 -10⎤ ⎢-10 -10 -10⎥ ⎣-10 -10 -10⎦
Example (Unsafe) ¶
To perform unsafe operations, use the `UseUnsafe` function option
var T1, T2, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3)) T3, _ = T1.Sub(T2, UseUnsafe()) fmt.Printf("Unsafe Operation\n================\nT3 = T1 - T2\nT1 == T3: %t\nT1:\n%v", T1 == T3, T1) T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2)) V.Sub(T2, UseUnsafe()) // unsafe overwrites the data in T1 fmt.Printf("Unsafe Operation on sliced Tensors\n==================================\nV = T1[0:2, 0:2] + T2\nV:\n%v\n", V) fmt.Printf("Naturally, T1 is mutated too:\n%v", T1)
Output: Unsafe Operation ================ T3 = T1 - T2 T1 == T3: true T1: ⎡-10 -10 -10⎤ ⎢-10 -10 -10⎥ ⎣-10 -10 -10⎦ Unsafe Operation on sliced Tensors ================================== V = T1[0:2, 0:2] + T2 V: ⎡-10 -10⎤ ⎣ -9 -9⎦ Naturally, T1 is mutated too: ⎡-10 -10 2⎤ ⎢ -9 -9 5⎥ ⎣ 6 7 8⎦
func (*Dense) SubScalar ¶
func (t *Dense) SubScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)
SubScalar performs t - other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
Example (Basic) ¶
By default, arithmetic operations are safe
var T1, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) T3, _ = T1.SubScalar(float32(5), true) fmt.Printf("Default operation is safe (tensor is left operand)\n==========================\nT3 = T1 - 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T3, _ = T1.SubScalar(float32(5), false) fmt.Printf("Default operation is safe (tensor is right operand)\n==========================\nT3 = 5 - T1\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(1, 3)) V = sliced.(*Dense) T3, _ = V.SubScalar(float32(5), true) fmt.Printf("Default operation is safe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 1:3] + 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(1, 3)) V = sliced.(*Dense) T3, _ = V.SubScalar(float32(5), false) fmt.Printf("Default operation is safe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 - T1[:, 1:3]\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output: Default operation is safe (tensor is left operand) ========================== T3 = T1 - 5 T3: ⎡-5 -4 -3⎤ ⎢-2 -1 0⎥ ⎣ 1 2 3⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (tensor is right operand) ========================== T3 = 5 - T1 T3: ⎡ 5 4 3⎤ ⎢ 2 1 0⎥ ⎣-1 -2 -3⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations - tensor is left operand) ============================================= T3 = T1[:, 1:3] + 5 T3: ⎡-4 -3⎤ ⎢-1 0⎥ ⎣ 2 3⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦ Default operation is safe (sliced operations - tensor is right operand) ============================================= T3 = 5 - T1[:, 1:3] T3: ⎡ 4 3⎤ ⎢ 1 0⎥ ⎣-2 -3⎦ T1 is unchanged: ⎡0 1 2⎤ ⎢3 4 5⎥ ⎣6 7 8⎦
Example (Incr) ¶
Incrementing a tensor is also a function option provided by the package
var T1, T3, Incr, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) Incr = New(WithBacking([]float32{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3)) T3, _ = T1.SubScalar(float32(5), true, WithIncr(Incr)) fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 - T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3) // Operations on sliced tensor is also allowed. Note that your Incr tensor has to be the same shape as the result T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) Incr = New(WithBacking([]float32{100, 100, 100, 100}), WithShape(2, 2)) T3, _ = V.SubScalar(float32(5), true, WithIncr(Incr)) fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 - T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output: Incr tensor passed in ====================== Incr += T1 - T2 Incr == T3: true T3: ⎡ 95 96 97⎤ ⎢ 98 99 100⎥ ⎣101 102 103⎦ Incr tensor passed in (sliced tensor) ====================================== Incr += T1 - T2 Incr == T3: true T3: ⎡95 96⎤ ⎣98 99⎦
Example (Reuse) ¶
Reuse tensors may be used, with the WithReuse() function option.
var T1, V, Reuse, T3 *Dense var sliced Tensor // Tensor is left operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3)) T3, _ = T1.SubScalar(float32(5), true, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (Tensor is left operand)\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // Tensor is right operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3)) T3, _ = T1.SubScalar(float32(5), false, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (Tensor is right operand)\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // Tensor is left operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.SubScalar(float32(5), true, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3) // Tensor is left operand T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2)) V = sliced.(*Dense) Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result T3, _ = V.SubScalar(float32(5), false, WithReuse(Reuse)) fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output: Reuse tensor passed in (Tensor is left operand) ====================== T3 == Reuse: true T3: ⎡-5 -4 -3⎤ ⎢-2 -1 0⎥ ⎣ 1 2 3⎦ Reuse tensor passed in (Tensor is right operand) ====================== T3 == Reuse: true T3: ⎡ 5 4 3⎤ ⎢ 2 1 0⎥ ⎣-1 -2 -3⎦ Reuse tensor passed in (sliced tensor - Tensor is left operand) ====================================== T3 == Reuse: true T3: ⎡-5 -4⎤ ⎣-2 -1⎦ Reuse tensor passed in (sliced tensor - Tensor is left operand) ====================================== T3 == Reuse: true T3: ⎡5 4⎤ ⎣2 1⎦
Example (Unsafe) ¶
var T1, T3, V *Dense var sliced Tensor T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) T3, _ = T1.SubScalar(float32(5), true, UseUnsafe()) fmt.Printf("Operation is unsafe (tensor is left operand)\n==========================\nT3 = T1 - 5\nT3:\n%v\nT3 == T1: %t\nT1 is changed:\n%v\n", T3, T3 == T1, T1) T3, _ = T1.SubScalar(float32(5), false, UseUnsafe()) fmt.Printf("Operation is unsafe (tensor is right operand)\n==========================\nT3 = 5 - T1\nT3:\n%v\nT3 == T1: %t\nT1 is changed:\n%v\n", T3, T3 == T1, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(0, 2)) V = sliced.(*Dense) T3, _ = V.SubScalar(float32(5), true, UseUnsafe()) fmt.Printf("Operation is unsafe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 0:2] + 5\nT3:\n%v\nsliced == T3: %t\nT1 is changed:\n%v\n", T3, sliced == T3, T1) T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3)) sliced, _ = T1.Slice(nil, makeRS(0, 2)) V = sliced.(*Dense) T3, _ = V.SubScalar(float32(5), false, UseUnsafe()) fmt.Printf("Operation is unsafe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 - T1[:, 0:2]\nT3:\n%v\nsliced == T3: %t\nT1 is changed:\n%v\n", T3, sliced == T3, T1)
Output: Operation is unsafe (tensor is left operand) ========================== T3 = T1 - 5 T3: ⎡-5 -4 -3⎤ ⎢-2 -1 0⎥ ⎣ 1 2 3⎦ T3 == T1: true T1 is changed: ⎡-5 -4 -3⎤ ⎢-2 -1 0⎥ ⎣ 1 2 3⎦ Operation is unsafe (tensor is right operand) ========================== T3 = 5 - T1 T3: ⎡10 9 8⎤ ⎢ 7 6 5⎥ ⎣ 4 3 2⎦ T3 == T1: true T1 is changed: ⎡10 9 8⎤ ⎢ 7 6 5⎥ ⎣ 4 3 2⎦ Operation is unsafe (sliced operations - tensor is left operand) ============================================= T3 = T1[:, 0:2] + 5 T3: ⎡-5 -4⎤ ⎢-2 -1⎥ ⎣ 1 2⎦ sliced == T3: true T1 is changed: ⎡-5 -4 2⎤ ⎢-2 -1 5⎥ ⎣ 1 2 8⎦ Operation is unsafe (sliced operations - tensor is right operand) ============================================= T3 = 5 - T1[:, 0:2] T3: ⎡ 5 4⎤ ⎢ 2 1⎥ ⎣-1 -2⎦ sliced == T3: true T1 is changed: ⎡ 5 4 2⎤ ⎢ 2 1 5⎥ ⎣-1 -2 8⎦
func (*Dense) T ¶
T performs a thunked transpose. It doesn't actually do anything, except store extra information about the post-transposed shapes and strides. Usually this is more than enough, as BLAS will handle the rest of the transpose
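For instance, a minimal sketch of T() followed by Transpose() (the shape and values here are purely illustrative; only methods documented on this page are used):

M := New(WithShape(2, 3), WithBacking(Range(Float64, 0, 6)))
M.T()                  // thunked: no data moves, but the shape is now reported as (3, 2)
fmt.Println(M.Shape())
M.Transpose()          // materializes the pending transpose by actually moving the data
fmt.Println(M.Data())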
func (*Dense) TensorMul ¶
TensorMul is for multiplying Tensors with more than 2 dimensions.
The algorithm is conceptually simple (but tricky to get right):
- Transpose and reshape the Tensors in such a way that both t and other are 2D matrices
- Use DGEMM to multiply them
- Reshape the results to be the new expected result
This function is a Go implementation of Numpy's tensordot method. It simplifies a lot of what Numpy does.
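As an illustrative sketch of the tensordot-style contraction described above (the shapes are arbitrary; under tensordot semantics the result should keep the uncontracted axes of the receiver first, then those of the other tensor):

a := New(WithShape(3, 4, 5), WithBacking(Range(Float64, 0, 60)))
b := New(WithShape(4, 3, 2), WithBacking(Range(Float64, 0, 24)))
// Contract axes 1 and 0 of a (sizes 4 and 3) against axes 0 and 1 of b (sizes 4 and 3).
c, err := a.TensorMul(b, []int{1, 0}, []int{0, 1})
if err != nil {
	fmt.Println(err)
	return
}
fmt.Println(c.Shape()) // expected: (5, 2)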
func (*Dense) Trace ¶
Trace returns the trace of the matrix (i.e. the sum of the diagonal elements). It only works for matrices
func (*Dense) Transpose ¶
Transpose() actually transposes the data. This is a generalized version of the inplace matrix transposition algorithm from Wikipedia: https://en.wikipedia.org/wiki/In-place_matrix_transposition
func (*Dense) UT ¶
func (t *Dense) UT()
UT is a quick way to untranspose a currently transposed *Dense. The reason for having this is quite simply illustrated by this problem:
T = New(WithShape(2,3,4)) T.T(1,2,0)
To untranspose that, we'd need to apply a transpose of (2,0,1). This means having to keep track and calculate the transposes. Instead, here's a helpful convenience function to instantly untranspose any previous transposes.
Nothing will happen if there was no previous transpose
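A short sketch of the problem above and its UT() escape hatch (the axes are chosen only for illustration):

T := New(WithShape(2, 3, 4), WithBacking(Range(Float64, 0, 24)))
T.T(1, 2, 0)           // thunked transpose; the shape is now reported as (3, 4, 2)
fmt.Println(T.Shape())
T.UT()                 // undoes the pending transpose without having to work out (2, 0, 1)
fmt.Println(T.Shape()) // back to (2, 3, 4)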
func (*Dense) Uintptr ¶
func (a *Dense) Uintptr() uintptr
Uintptr returns the pointer of the first value of the slab
func (*Dense) Vstack ¶
Vstack stacks other tensors rowwise (vertical stacking). Vertical stacking requires all involved Tensors to have at least 2 dimensions
Example ¶
var T, T1, T2, T3 *Dense var err error T = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2)) T1 = New(WithBacking([]float64{1000, 2000}), WithShape(1, 2)) // Simple example if T2, err = T.Vstack(T1); err == nil { fmt.Printf("T.Vstack(T1):\n%v\n", T2) } else { fmt.Printf("%+v", err) } // You can stack more than one, as long as all the tensors have the same shape T3 = T1.Clone().(*Dense) if T2, err = T.Vstack(T1, T3); err == nil { fmt.Printf("T.Vstack(T1, T3):\n%v\n", T2) } else { fmt.Printf("====\nerr %v\n%v\n===\n", err, T3.Shape()) } // Let's look at failure conditions // All tensors must be at least 2D T.Reshape(4) if _, err = T.Vstack(T1); err != nil { fmt.Printf("Vstacking (4) with (1, 2): %v\n", err) } if _, err = T1.Vstack(T); err != nil { fmt.Printf("Vstacking (1, 2) with (4): %v\n", err) }
Output: T.Vstack(T1): ⎡ 0 1⎤ ⎢ 2 3⎥ ⎣1000 2000⎦ T.Vstack(T1, T3): ⎡ 0 1⎤ ⎢ 2 3⎥ ⎢1000 2000⎥ ⎣1000 2000⎦ Vstacking (4) with (1, 2): Tensor has to be at least 2 dimensions Vstacking (1, 2) with (4): Tensor has to be at least 2 dimensions
func (*Dense) WriteCSV ¶
WriteCSV writes the *Dense to a CSV. It accepts an optional format string ("%v", "%f", etc.), which controls what is written to the CSV. If the tensor is masked, invalid values are replaced by the default fill value.
func (*Dense) WriteNpy ¶
WriteNpy writes the *Dense as a numpy-compatible serialized file.
The format is very well documented here: http://docs.scipy.org/doc/numpy/neps/npy-format.html
Gorgonia specifically uses Version 1.0, as 65535 bytes should be more than enough for the headers. The values are written in little endian order, because let's face it - 90% of the world's computers are running on x86+ processors.
This method does not close the writer. Closing (if needed) is deferred to the caller. If the tensor is masked, invalid values are replaced by the default fill value.
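Since the method only needs an io.Writer (see the Tensor interface below), a minimal round-trip sketch through a bytes.Buffer might look like the following; the assumption here is that a zero-valued *Dense can be populated by the matching ReadNpy:

var buf bytes.Buffer
T := New(WithShape(2, 3), WithBacking(Range(Float64, 0, 6)))
if err := T.WriteNpy(&buf); err != nil {
	fmt.Println(err)
	return
}
T2 := new(Dense)
if err := T2.ReadNpy(&buf); err != nil { // ReadNpy is the deserializing counterpart
	fmt.Println(err)
	return
}
fmt.Println(T2.Shape()) // expected to report (2, 3)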
type DenseStacker ¶
type DenseStacker interface {
StackDense(t DenseTensor, axis int, others ...DenseTensor) (retVal DenseTensor, err error)
}
DenseStacker is any engine that can stack DenseTensors along an axis. This is a specialization of Stacker.
type DenseTensor ¶
type DenseTensor interface { Tensor Info() *AP IsMatrix() bool IsVector() bool IsRowVec() bool IsColVec() bool // operations Inner(other Tensor) (retVal interface{}, err error) MatMul(other Tensor, opts ...FuncOpt) (retVal *Dense, err error) MatVecMul(other Tensor, opts ...FuncOpt) (retVal *Dense, err error) TensorMul(other Tensor, axesA, axesB []int) (retVal *Dense, err error) // contains filtered or unexported methods }
DenseTensor is the interface for any Dense tensor.
type Densor ¶
type Densor interface {
Dense() *Dense
}
A Densor is any type that can return a *Dense
type Diager ¶
Diager is any engine that can return a tensor that only contains the diagonal values of the input
type Diver ¶
type Diver interface { // Div performs a / b Div(a, b Tensor, opts ...FuncOpt) (Tensor, error) // DivScalar divides a scalar from/to the tensor. leftTensor indicates if the tensor is the left operand. // Whether or not the input tensor is clobbered is left to the implementation DivScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) }
Diver is any engine that can perform elementwise division.
type Dtype ¶
Dtype represents a data type of a Tensor. Concretely it's implemented as an embedded reflect.Type which allows for easy reflection operations. It also implements hm.Type, for type inference in Gorgonia
func (Dtype) FreeTypeVar ¶
func (dt Dtype) FreeTypeVar() hm.TypeVarSet
type ElEqer ¶
type ElEqer interface { ElEq(a, b Tensor, opts ...FuncOpt) (Tensor, error) EqScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) ElNe(a, b Tensor, opts ...FuncOpt) (Tensor, error) NeScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) }
ElEqer is any engine that can perform the elementwise equality comparison operation.
type Engine ¶
type Engine interface { AllocAccessible() bool // AllocAccessible returns true if the engine returns Go-accessible memory pointers Alloc(size int64) (Memory, error) // Alloc allocates memory Free(mem Memory, size int64) error // Free frees memory Memset(mem Memory, val interface{}) error // Memset - duh Memclr(mem Memory) // Memclr - duh Memcpy(dst, src Memory) error // Memcpy - duh Accessible(mem Memory) (Memory, error) // Accessible returns Go-accessible memory pointers, or errors if it cannot be done WorksWith(order DataOrder) bool // WorksWith returns true if the data order can be directly worked with }
Engine is a representation of an execution engine. While different execution engines can have different capabilities, all execution engines must be able to allocate and free memory
type Eq ¶
type Eq interface {
Eq(interface{}) bool
}
Eq is any type where you can perform an equality test
type Exper ¶
Exper is any engine that can perform elementwise natural exponentiation on the values in a Tensor.
type FMAer ¶
type FMAer interface { FMA(a, x, y Tensor) (Tensor, error) FMAScalar(a Tensor, x interface{}, y Tensor) (Tensor, error) }
FMAer is any engine that can perform fused multiply add functions: A * X + Y. Also known as Axpy.
type FlatIterator ¶
type FlatIterator struct { *AP // contains filtered or unexported fields }
FlatIterator is an iterator that iterates over Tensors according to the data's layout. It utilizes the *AP of a Tensor to determine what the next index is. This data structure is similar to Numpy's flatiter, with some standard Go based restrictions of course (such as, not allowing negative indices)
func FlatIteratorFromDense ¶
func FlatIteratorFromDense(tt DenseTensor) *FlatIterator
FlatIteratorFromDense creates a new FlatIterator from a dense tensor
func (*FlatIterator) Chan ¶
func (it *FlatIterator) Chan() (retVal chan int)
Chan returns a channel of ints. This is useful for iterating multiple Tensors at the same time.
func (*FlatIterator) Coord ¶
func (it *FlatIterator) Coord() []int
Coord returns the next coordinate. When Next() is called, the coordinates are updated AFTER the Next() returned. See example for more details.
The returned coordinates are mutable. Changing any values in the return value will change the state of the iterator
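A sketch of one common iteration idiom (the loop stops on the first error, which includes the end-of-iteration condition; note that, per the remark above, Coord() reports the coordinate the iterator has already advanced to):

T := New(WithShape(2, 3), WithBacking(Range(Float64, 0, 6)))
it := FlatIteratorFromDense(T)
for i, err := it.Next(); err == nil; i, err = it.Next() {
	fmt.Println(i, it.Coord()) // flat index, then the next coordinate
}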
func (*FlatIterator) Done ¶
func (it *FlatIterator) Done() bool
Done checks whether iterators are done
func (*FlatIterator) Next ¶
func (it *FlatIterator) Next() (int, error)
Next returns the index of the current coordinate.
func (*FlatIterator) NextInvalid ¶
func (it *FlatIterator) NextInvalid() (int, int, error)
NextInvalid returns the index of the current coordinate. It is identical to Next for FlatIterator, but also returns the number of increments needed to get to the next invalid element (1, or -1 in the reverse case). Like NextValid, this method's purpose is to maintain consistency with the masked iterator, for which the step between invalid elements can be anywhere from 0 to the tensor's length
func (*FlatIterator) NextValid ¶
func (it *FlatIterator) NextValid() (int, int, error)
NextValid returns the index of the current coordinate. It is identical to Next for FlatIterator, but also returns the number of increments needed to get to the next element (1, or -1 in the reverse case). This is to maintain consistency with the masked iterator, for which the step between valid elements can be more than 1
func (*FlatIterator) NextValidity ¶
func (it *FlatIterator) NextValidity() (int, bool, error)
NextValidity returns the index of the current coordinate, and whether or not it's valid. Identical to Next()
func (*FlatIterator) SetForward ¶
func (it *FlatIterator) SetForward()
SetForward initializes iterator to run forwards
func (*FlatIterator) SetReverse ¶
func (it *FlatIterator) SetReverse()
SetReverse initializes iterator to run backwards
type FlatMaskedIterator ¶
type FlatMaskedIterator struct { *FlatIterator // contains filtered or unexported fields }
FlatMaskedIterator is an iterator that iterates over simple masked Tensors. It is used when the mask stride is identical to data stride with the exception of trailing zeros, in which case the data index is always a perfect integer multiple of the mask index
func FlatMaskedIteratorFromDense ¶
func FlatMaskedIteratorFromDense(tt MaskedTensor) *FlatMaskedIterator
FlatMaskedIteratorFromDense creates a new FlatMaskedIterator from dense tensor
func (*FlatMaskedIterator) NextInvalid ¶
func (it *FlatMaskedIterator) NextInvalid() (int, int, error)
NextInvalid returns the index of the next invalid element as well as the number of increments to get to next invalid element
func (*FlatMaskedIterator) NextValid ¶
func (it *FlatMaskedIterator) NextValid() (int, int, error)
NextValid returns the index of the next valid element, as well as the number of increments to get to next element
func (*FlatMaskedIterator) NextValidity ¶
func (it *FlatMaskedIterator) NextValidity() (int, bool, error)
type FlatSparseIterator ¶
type FlatSparseIterator struct { *CS // contains filtered or unexported fields }
FlatSparseIterator is an iterator that works in much the same way as FlatIterator, except for sparse tensors
func NewFlatSparseIterator ¶
func NewFlatSparseIterator(t *CS) *FlatSparseIterator
func (*FlatSparseIterator) Coord ¶
func (it *FlatSparseIterator) Coord() []int
func (FlatSparseIterator) Data ¶
func (a FlatSparseIterator) Data() interface{}
Data returns the representation of a slice.
func (*FlatSparseIterator) Done ¶
func (it *FlatSparseIterator) Done() bool
func (FlatSparseIterator) Get ¶
func (a FlatSparseIterator) Get(i int) interface{}
Get returns the ith element of the underlying array of the *Dense tensor.
func (FlatSparseIterator) Memset ¶
func (a FlatSparseIterator) Memset(x interface{}) error
Memset sets all values in the array.
func (*FlatSparseIterator) Next ¶
func (it *FlatSparseIterator) Next() (int, error)
func (*FlatSparseIterator) NextInvalid ¶
func (it *FlatSparseIterator) NextInvalid() (int, int, error)
func (*FlatSparseIterator) NextValidity ¶
func (it *FlatSparseIterator) NextValidity() (int, bool, error)
func (*FlatSparseIterator) Reset ¶
func (it *FlatSparseIterator) Reset()
func (FlatSparseIterator) Set ¶
func (a FlatSparseIterator) Set(i int, x interface{})
Set sets the value of the underlying array at the index i.
func (*FlatSparseIterator) SetForward ¶
func (it *FlatSparseIterator) SetForward()
func (*FlatSparseIterator) SetReverse ¶
func (it *FlatSparseIterator) SetReverse()
func (*FlatSparseIterator) Start ¶
func (it *FlatSparseIterator) Start() (int, error)
type Float32Engine ¶
type Float32Engine struct {
StdEng
}
Float32Engine is an execution engine that is optimized to only work with float32s. It assumes all data are float32s.
Use this engine only as a form of optimization. You should probably be using the basic default engine for most cases.
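A sketch of opting into this engine at construction time, assuming the WithEngine construction option mentioned under StdEng below and a zero-valued Float32Engine:

a := New(WithEngine(Float32Engine{}), WithShape(2, 2), WithBacking([]float32{1, 2, 3, 4}))
b := New(WithEngine(Float32Engine{}), WithShape(2, 2), WithBacking([]float32{5, 6, 7, 8}))
c, err := a.Add(b) // arithmetic now dispatches to the float32-only code paths
if err != nil {
	fmt.Println(err)
	return
}
fmt.Printf("%v", c)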
func (Float32Engine) Add ¶
Add performs a + b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
type Float64Engine ¶
type Float64Engine struct {
StdEng
}
Float64Engine is an execution engine that is optimized to only work with float64s. It assumes all data are float64s.
Use this engine only as a form of optimization. You should probably be using the basic default engine for most cases.
func (Float64Engine) Add ¶
Add performs a + b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
type FuncOpt ¶
type FuncOpt func(*OpOpt)
FuncOpt is an optional parameter used when calling Tensor functions.
func As ¶
As makes sure that the return Tensor is of the type specified. Currently it only works for FromMat64
func AsSameType ¶
func AsSameType() FuncOpt
AsSameType makes sure that the return Tensor is the same type as input Tensors.
func UseSafe ¶
func UseSafe() FuncOpt
UseSafe ensures that the operation is a safe operation (copies data, does not clobber). This is the default option for most methods and functions
type Gteer ¶
type Gteer interface { Gte(a, b Tensor, opts ...FuncOpt) (Tensor, error) GteScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) }
Gteer is any engine that can perform the Gte operation.
type Gter ¶
type Gter interface { Gt(a, b Tensor, opts ...FuncOpt) (Tensor, error) GtScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) }
Gter is any engine that can perform the Gt operation.
type InfChecker ¶
InfChecker checks that the tensor contains an Inf. Errors are to be returned if the concept of Inf does not apply to the data type. Other errors may also occur. See specific implementations for details
type InnerProder ¶
type InnerProder interface {
Inner(a, b Tensor) (interface{}, error) // Inner always returns a scalar value
}
InnerProder is any engine that can perform inner product multiplication
type InnerProderF32 ¶
InnerProderF32 is an optimization for float32 - results are returned as float32.
type InnerProderF64 ¶
InnerProderF64 is an optimization for float64 - results are returned as float64
type Iterator ¶
type Iterator interface { // Start returns the first index Start() (int, error) // Next returns the next index. Next is defined as the next value in the coordinates // For example: let x be a (5,5) matrix that is row-major. Current index is for the coordinate (3,3). // Next() returns the index of (3,4). // // If there is no underlying data store for (3,4) - say for example, the matrix is a sparse matrix, it return an error. // If however, there is an underlying data store for (3,4), but it's not valid (for example, masked tensors), it will not return an error. // // Second example: let x be a (5,5) matrix that is col-major. Current index is for coordinate (3,3). // Next() returns the index of (4,3). Next() (int, error) // NextValidity is like Next, but returns the validity of the value at the index as well. NextValidity() (int, bool, error) // NextValid returns the next valid index, as well as a skip count. NextValid() (int, int, error) // NextInvalid returns the next invalid index, as well as a skip count. NextInvalid() (int, int, error) // Reset resets the iterator Reset() // SetReverse tells the iterator to iterate in reverse SetReverse() // SetForward tells the iterator to iterate forwards SetForward() // Coord returns the coordinates Coord() []int // Done returns true when the iterator is done iterating. Done() bool // Shape returns the shape of the multidimensional tensor it's iterating on. Shape() Shape }
Iterator is the generic iterator interface. It's used to iterate across multi-dimensional slices, no matter the underlying data arrangement
func IteratorFromDense ¶
func IteratorFromDense(tts ...DenseTensor) Iterator
IteratorFromDense creates a new Iterator from a list of dense tensors
func NewIterator ¶
NewIterator creates a new Iterator from an AP. The type of iterator depends on the number of APs passed, and whether or not they are masked
type Lteer ¶
type Lteer interface { Lte(a, b Tensor, opts ...FuncOpt) (Tensor, error) LteScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) }
Lteer is any engine that can perform the Lte operation.
type Lter ¶
type Lter interface { Lt(a, b Tensor, opts ...FuncOpt) (Tensor, error) LtScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) }
Lter is any engine that can perform the Lt operation.
type MaskedTensor ¶
type MaskedTensor interface { DenseTensor IsMasked() bool SetMask([]bool) Mask() []bool }
type MatVecMuler ¶
MatVecMuler is any engine that can perform matrix vector multiplication
type MathError ¶
type MathError interface {
Indices() []int
}
MathError is an error that occurs in an Array. It lists the indices for which an error has happened
type MaxBetweener ¶ added in v0.9.21
type MaxBetweener interface { MaxBetween(a, b Tensor, opts ...FuncOpt) (Tensor, error) MaxBetweenScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) }
MaxBetweener is any engine that can perform an elementwise max-between.
type MemSetter ¶
type MemSetter interface {
Memset(interface{}) error
}
A MemSetter is any type that can set itself to a value.
type Memory ¶
Memory is a representation of the memory of a value.
The main reason for requiring both Uintptr() and Pointer() methods is that, while Go currently does not have a compacting garbage collector, the docs of `unsafe` state:
Even if a uintptr holds the address of some object, the garbage collector will not update that uintptr's value if the object moves, nor will that uintptr keep the object from being reclaimed.
type MemoryFlag ¶
type MemoryFlag byte
MemoryFlag is a flag representing the use possibilities of Memory
const ( // NativelyInaccessible indicates that the data in the memory cannot be accessed by Go code. NativelyInaccessible MemoryFlag = 1 << iota // ManuallyManaged indicates that the memory is managed by something else. Any Tensor with // manually managed memory will not be returned to the pool. ManuallyManaged // IsOverallocated indicates that the memory for a given tensor is overallocated (i.e. the size-in-use is smaller than the size allocated) IsOverallocated )
func MakeMemoryFlag ¶
func MakeMemoryFlag(fs ...MemoryFlag) (retVal MemoryFlag)
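The flags are bit flags, so presumably MakeMemoryFlag simply ORs its arguments together; a tiny illustrative check (the bitwise test below is an assumption about the representation, not a documented API):

f := MakeMemoryFlag(NativelyInaccessible, ManuallyManaged)
fmt.Println(f&ManuallyManaged != 0)  // true
fmt.Println(f&IsOverallocated != 0)  // false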
type MinBetweener ¶ added in v0.9.21
type MinBetweener interface { MinBetween(a, b Tensor, opts ...FuncOpt) (Tensor, error) MinBetweenScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) }
MinBetweener is any engine that can perform an elementwise min-between.
type Moder ¶
type Moder interface { // Mod performs a % b Mod(a, b Tensor, opts ...FuncOpt) (Tensor, error) // ModScalar performs a % b where one of the operands is scalar. leftTensor indicates if the tensor is the left operand. // Whether or not the input tensor is clobbered is left to the implementation ModScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) }
Moder is any engine that can perform elementwise Mod()
type Muler ¶
type Muler interface { // Mul performs a * b Mul(a, b Tensor, opts ...FuncOpt) (Tensor, error) // MulScalar multiplies a scalar to the tensor. leftTensor indicates if the tensor is the left operand. // Whether or not the input tensor is clobbered is left to the implementation MulScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) }
Muler is any engine that can perform elementwise multiplication. For matrix multiplication, an engine should implement MatMul() or MatVecMul() or Inner()
type MultIterator ¶
type MultIterator struct { *AP // Uses AP of the largest tensor in list // contains filtered or unexported fields }
MultIterator is an iterator that iterates over multiple tensors, including masked tensors.
It utilizes the *AP of a Tensor to determine what the next index is.
This data structure is similar to Numpy's flatiter, with some standard Go based restrictions of course (such as, not allowing negative indices)
func MultIteratorFromDense ¶
func MultIteratorFromDense(tts ...DenseTensor) *MultIterator
MultIteratorFromDense creates a new MultIterator from a list of dense tensors
func NewMultIterator ¶
func NewMultIterator(aps ...*AP) *MultIterator
NewMultIterator creates a new MultIterator from a list of APs
func (*MultIterator) Coord ¶
func (it *MultIterator) Coord() []int
Coord returns the next coordinate. When Next() is called, the coordinates are updated AFTER the Next() returned. See example for more details.
func (*MultIterator) Done ¶
func (it *MultIterator) Done() bool
Done checks whether iterators are done
func (*MultIterator) LastIndex ¶
func (it *MultIterator) LastIndex(j int) int
LastIndex returns index of requested iterator
func (*MultIterator) Next ¶
func (it *MultIterator) Next() (int, error)
Next returns the index of the next coordinate
func (*MultIterator) NextInvalid ¶
func (it *MultIterator) NextInvalid() (int, int, error)
NextInvalid returns the index of the next invalid coordinate
func (*MultIterator) NextValid ¶
func (it *MultIterator) NextValid() (int, int, error)
NextValid returns the index of the next valid coordinate
func (*MultIterator) NextValidity ¶
func (it *MultIterator) NextValidity() (int, bool, error)
func (*MultIterator) SetForward ¶
func (it *MultIterator) SetForward()
SetForward initializes iterator to run forward
func (*MultIterator) SetReverse ¶
func (it *MultIterator) SetReverse()
SetReverse initializes iterator to run backward
type NaNChecker ¶
NaNChecker checks that the tensor contains a NaN. Errors are to be returned if the concept of NaN does not apply to the data type. Other errors may also occur. See specific implementations for details
type NoOpError ¶
type NoOpError interface {
NoOp() bool
}
NoOpError is useful for operations that are no-ops.
type NonStdEngine ¶
type NonStdEngine interface {
NonStdAlloc() // noop
}
NonStdEngine is any engine that does not allocate using the default built-in allocator
type NormOrder ¶
type NormOrder float64
NormOrder represents the order of the norm. Ideally, we'd only represent norms with a uint/byte. But there are norm types that are outside numerical types, such as the nuclear norm and the Frobenius norm. So it is internally represented by a float. If Go could use NaN and Inf as consts, it would have been best. Instead, we use constructors. Both Nuclear and Frobenius norm types are represented as NaNs
The use of NaN and Inf as "special" norm types leads to the need for the IsInf(), IsFrobenius() and IsNuclear() methods
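A small sketch of the constructors together with the predicates documented below:

fro := FrobeniusNorm()
un := UnorderedNorm()
fmt.Println(fro.IsFrobenius()) // true
fmt.Println(un.IsUnordered())  // true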
func FrobeniusNorm ¶
func FrobeniusNorm() NormOrder
func NegInfNorm ¶
func NegInfNorm() NormOrder
func NuclearNorm ¶
func NuclearNorm() NormOrder
func UnorderedNorm ¶
func UnorderedNorm() NormOrder
func (NormOrder) IsFrobenius ¶
IsFrobenius returns true if the NormOrder is a Frobenius norm
func (NormOrder) IsUnordered ¶
IsUnordered returns true if the NormOrder is not an ordered norm
type Oner ¶
type Oner interface {
One()
}
An Oner is any type that can set itself to the equivalent of one. It's used to implement the arrays
type OpOpt ¶
type OpOpt struct {
// contains filtered or unexported fields
}
OpOpt is the set of options used to call ops
func ParseFuncOpts ¶
ParseFuncOpts parses a list of FuncOpt into a single unified method call structure.
func (*OpOpt) As ¶
As returns the dtype of the return value of the method call. For example:
a.Lt(b, As(Bool))
indicates that the result of the `Lt()` should be a Tensor of Bool.
Another example:
a.Add(b, As(Int))
indicates that the result of `Add()` should be converted to a Tensor of Int. Note that this function is not yet supported in most operations.
type OptimizedReducer ¶
type OptimizedReducer interface {
OptimizedReduce(a Tensor, axis int, firstFn, lastFn, defaultFn, defaultValue interface{}, opts ...FuncOpt) (Tensor, error)
}
OptimizedReducer is any engine that can perform a reduction function with optimizations for the first dimension, last dimension and dimensions in between.
type OuterProder ¶
OuterProder is any engine that can perform outer product (Kronecker) multiplication
type Power ¶
type Power interface { // Pow performs a ^ b Pow(a, b Tensor, opts ...FuncOpt) (Tensor, error) // PowScalar exponentiates a scalar from/to the tensor. leftTensor indicates if the tensor is the left operand. // Whether or not the input tensor is clobbered is left to the implementation PowScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) }
Power is any engine that can perform elementwise Pow()
type Reducer ¶
type Reducer interface {
Reduce(fn interface{}, a Tensor, axis int, defaultValue interface{}, opts ...FuncOpt) (Tensor, error)
}
Reducer is any engine that can perform a reduction function.
type Repeater ¶
type Repeater interface { Repeat(t Tensor, axis int, repeats ...int) (Tensor, error) RepeatReuse(t Tensor, reuse Tensor, axis int, repeats ...int) (Tensor, error) }
Repeater is any engine that can repeat values along the given axis.
type ScalarRep ¶
type ScalarRep interface { IsScalar() bool ScalarValue() interface{} }
ScalarRep is any Tensor that can represent a scalar
type Shape ¶
type Shape []int
Shape represents the dimensions of a Tensor. A (2,3) matrix has a shape of (2,3) - 2 rows, 3 columns. Likewise, a shape of (2,3,4) means a Tensor has 3 dimensions: 2 layers, 3 rows, 4 columns.
Vectors are of particular note. This package defines a shape of (x, 1) as a column vector and a (1, x) as a row vector. Row vectors and column vectors are matrices as well. It is important to note that row and column vectors and vanilla vectors are comparable under some circumstances
func ScalarShape ¶
func ScalarShape() Shape
ScalarShape represents a scalar. It has no dimensions, no sizes
func (Shape) CalcStrides ¶
CalcStrides calculates the default strides for a shape
func (Shape) CalcStridesColMajor ¶
CalcStridesColMajor is like CalcStrides, but assumes a col major layout
func (Shape) CalcStridesWithMask ¶
CalcStridesWithMask is similar to CalcStrides, except that it has an argument, masks. It is used to mask out given dimensions during calculation of stride
func (Shape) DimSize ¶
DimSize returns the size of the dimension wanted.
This method implements the DimSizer interface in Gorgonia.
func (Shape) Eq ¶
Eq indicates if a shape is equal with another. There is a soft concept of equality when it comes to vectors.
If s is a column vector and other is a vanilla vector, they're considered equal if the size of the column dimension is the same as the vector size; if s is a row vector and other is a vanilla vector, they're considered equal if the size of the row dimension is the same as the vector size
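For example, a sketch of the soft equality rules stated above:

col := Shape{4, 1} // column vector
row := Shape{1, 4} // row vector
vec := Shape{4}    // vanilla vector
fmt.Println(col.Eq(vec)) // true: the column dimension matches the vector size
fmt.Println(row.Eq(vec)) // true: the row dimension matches the vector size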
func (Shape) IsMatrix ¶
IsMatrix returns true if it's a matrix. This is mostly a convenience method. RowVec and ColVecs are also considered matrices
func (Shape) IsScalarEquiv ¶
IsScalarEquiv returns true if the access pattern indicates it's a scalar-like value
func (Shape) IsVector ¶
IsVector returns whether the access pattern falls into one of three possible definitions of vectors:
a vanilla vector (not a row or a col), a column vector, or a row vector
func (Shape) IsVectorLike ¶ added in v0.9.12
IsVectorLike returns true when the shape looks like a vector e.g. a number that is surrounded by 1s:
(1, 1, ... 1, 10, 1, 1... 1)
func (Shape) Repeat ¶
func (s Shape) Repeat(axis int, repeats ...int) (newShape Shape, finalRepeats []int, size int, err error)
Repeat returns the expected new shape given the repetition parameters.
type Slice ¶
A Slice represents a slicing operation for a Tensor.
type SoftMaxer ¶ added in v0.9.22
type SoftMaxer interface { LogSoftMax(x Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error) LogSoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error) SoftMax(x Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error) SoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error) }
type Sparse ¶
type Sparse interface { Tensor Densor NonZeroes() int // NonZeroes returns the number of nonzero values }
Sparse is a sparse tensor.
type SparseTensor ¶
type StdEng ¶
StdEng is the default execution engine that comes with the tensors. To use other execution engines, use the WithEngine construction option.
func (StdEng) Add ¶
Add performs a + b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
func (StdEng) AddScalar ¶
func (e StdEng) AddScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
AddScalar performs t + s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
func (StdEng) AllocAccessible ¶
func (StdEng) Div ¶
Div performs a ÷ b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
func (StdEng) DivScalar ¶
func (e StdEng) DivScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
DivScalar performs t ÷ s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
func (StdEng) ElEq ¶
ElEq performs a == b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
func (StdEng) ElNe ¶
ElNe performs a ≠ b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
func (StdEng) Gt ¶
Gt performs a > b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
func (StdEng) GtScalar ¶
func (e StdEng) GtScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
GtScalar performs t > s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
func (StdEng) Gte ¶
Gte performs a ≥ b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
func (StdEng) GteScalar ¶
func (e StdEng) GteScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
GteScalar performs t ≥ s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
func (StdEng) Inner ¶
Inner is a thin layer over BLAS's D/Sdot. It returns a scalar value, wrapped in an interface{}, which is not quite nice.
func (StdEng) LogSoftMax ¶ added in v0.9.22
LogSoftMax performs softmax but in log space. This provides some amount of numerical stabilization. Conceptually it is the same as performing a logarithm after applying the softmax function. Currently it expects the tensor to be a Dense tensor. Please make a pull request to support sparse tensors.
func (StdEng) LogSoftMaxB ¶ added in v0.9.22
func (e StdEng) LogSoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
LogSoftMaxB computes the gradient of the input `x`, given the `output = LogSoftmax(x)` and its associated gradient. Currently it expects the tensor to be a Dense tensor. Please make a pull request to support sparse tensors.
func (StdEng) Lt ¶
Lt performs a < b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
func (StdEng) LtScalar ¶
func (e StdEng) LtScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
LtScalar performs t < s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
func (StdEng) Lte ¶
Lte performs a ≤ b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
func (StdEng) LteScalar ¶
func (e StdEng) LteScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
LteScalar performs t ≤ s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
func (StdEng) MatMul ¶
MatMul is a thin layer over DGEMM. DGEMM computes:
C = αA * B + βC
To prevent needless zeroing out of the slice, we just set β to 0
func (StdEng) MatVecMul ¶
MatVecMul is a thin layer over BLAS' DGEMV Because DGEMV computes:
y = αA * x + βy
we set beta to 0, so we don't have to manually zero out the reused/retval tensor data
func (StdEng) MaxBetween ¶ added in v0.9.21
func (StdEng) MaxBetweenScalar ¶ added in v0.9.21
func (StdEng) MinBetween ¶ added in v0.9.21
func (StdEng) MinBetweenScalar ¶ added in v0.9.21
func (StdEng) Mod ¶
Mod performs a % b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
func (StdEng) ModScalar ¶
func (e StdEng) ModScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
ModScalar performs t % s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
func (StdEng) Mul ¶
Mul performs a × b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
func (StdEng) MulScalar ¶
func (e StdEng) MulScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
MulScalar performs t × s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
func (StdEng) OptimizedReduce ¶
func (StdEng) Pow ¶
Pow performs a ^ b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
func (StdEng) PowScalar ¶
func (e StdEng) PowScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
PowScalar performs t ^ s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
func (StdEng) RepeatReuse ¶
RepeatReuse is like Repeat, but with a provided reuse Tensor. The reuseTensor must be of the same type as the input t.
func (StdEng) SelectByIndices ¶ added in v0.9.15
func (e StdEng) SelectByIndices(a, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
SelectByIndices selects the values at the indices listed in the `indices` tensor.
Currently SelectByIndices only supports Dense tensors that do not require the use of iterators. Please make a pull request to support tensors that require the use of an iterator to traverse data.
func (StdEng) SelectByIndicesB ¶ added in v0.9.15
func (e StdEng) SelectByIndicesB(input, outGrad, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
SelectByIndicesB computes the gradient of the result of `SelectByIndices`.
Currently SelectByIndicesB only supports Dense tensors that do not require the use of iterators. Please make a pull request to support tensors that require the use of an iterator to traverse data.
func (StdEng) SoftMax ¶ added in v0.9.22
SoftMax performs the softmax operation on the given tensor. Currently it expects the tensor to be a Dense tensor. Please make a pull request to support sparse tensors.
The softmax function is defined as :
σ(x)_i = e^(x_i) / Σ_j e^(x_j)
func (StdEng) SoftMaxB ¶ added in v0.9.22
SoftMaxB computes gradient of the input `x`, given the `output = SoftMax(x)` and its associated gradient. Currently it expects the tensor to be a Dense tensor. Please make a pull request to support sparse tensors.
func (StdEng) StackDense ¶
func (e StdEng) StackDense(t DenseTensor, axis int, others ...DenseTensor) (retVal DenseTensor, err error)
func (StdEng) Sub ¶
Sub performs a - b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
func (StdEng) SubScalar ¶
func (e StdEng) SubScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)
SubScalar performs t - s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)
type Suber ¶
type Suber interface { // Sub performs a - b Sub(a, b Tensor, opts ...FuncOpt) (Tensor, error) // SubScalar subtracts a scalar from/to the tensor. leftTensor indicates if the tensor is the left operand. // Whether or not the input tensor is clobbered is left to the implementation SubScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error) }
Suber is any engine that can perform elementwise subtraction.
type Tensor ¶
type Tensor interface { // info about the ndarray Shape() Shape Strides() []int Dtype() Dtype Dims() int Size() int DataSize() int // Data access related RequiresIterator() bool Iterator() Iterator DataOrder() DataOrder // ops Slicer At(...int) (interface{}, error) SetAt(v interface{}, coord ...int) error Reshape(...int) error T(axes ...int) error UT() Transpose() error // Transpose actually moves the data Apply(fn interface{}, opts ...FuncOpt) (Tensor, error) // data related interface Zeroer MemSetter Dataer Eq Cloner // type overloading methods IsScalar() bool ScalarValue() interface{} // engine/memory related stuff // all Tensors should be able to be expressed of as a slab of memory // Note: the size of each element can be acquired by T.Dtype().Size() Memory // Tensors all implement Memory Engine() Engine // Engine can be nil IsNativelyAccessible() bool // Can Go access the memory IsManuallyManaged() bool // Must Go manage the memory // formatters fmt.Formatter fmt.Stringer // all Tensors are serializable to these formats WriteNpy(io.Writer) error ReadNpy(io.Reader) error gob.GobEncoder gob.GobDecoder // contains filtered or unexported methods }
Tensor represents a variety of n-dimensional arrays. The most commonly used tensor is the Dense tensor. It can be used to represent a vector, matrix, 3D matrix and n-dimensional tensors.
func Add ¶
Add performs elementwise addition on the Tensor(s). These operations are supported:
Add(*Dense, scalar) Add(scalar, *Dense) Add(*Dense, *Dense)
If the Unsafe flag is passed in, the data of the first tensor will be overwritten
func Argmax ¶
Argmax finds the index of the max value along the axis provided
Example ¶
T := New(WithBacking([]float64{0, 100, 200, 3}), WithShape(2, 2)) fmt.Printf("T:\n%v\n", T) // argmax along the x-axis am, _ := Argmax(T, 0) fmt.Printf("Argmax: %v\n", am) fmt.Printf("Argmax is %T of %v", am, am.Dtype())
Output: T: ⎡ 0 100⎤ ⎣200 3⎦ Argmax: [1 0] Argmax is *tensor.Dense of int
Example (Sliced) ¶
T := New(WithBacking([]float64{0, 100, 200, 3}), WithShape(2, 2)) fmt.Printf("T:\n%v\n", T) // slice creates a view V, _ := T.Slice(nil, S(1)) // argmax along the x-axis am, _ := Argmax(V, 0) fmt.Printf("Argmax: %v\n", am) fmt.Printf("Argmax is %T of %v", am, am.Dtype())
Output: T: ⎡ 0 100⎤ ⎣200 3⎦ Argmax: 0 Argmax is *tensor.Dense of int
func Argmin ¶
Argmin finds the index of the min value along the axis provided
Example ¶
T := New(WithBacking([]float64{0, 100, 200, 3}), WithShape(2, 2)) fmt.Printf("T:\n%v\n", T) // argmax along the x-axis am, _ := Argmin(T, 0) fmt.Printf("Argmin: %v\n", am) fmt.Printf("Argmin is %T of %v", am, am.Dtype())
Output: T: ⎡ 0 100⎤ ⎣200 3⎦ Argmin: [0 1] Argmin is *tensor.Dense of int
func ByIndices ¶ added in v0.9.15
ByIndices allows for selection of values of `a` by the indices listed in the `indices` tensor. The `indices` tensor has to be a vector-like tensor of ints.
Example ¶
a := New(WithShape(2, 2), WithBacking([]float64{ 100, 200, 300, 400, })) indices := New(WithBacking([]int{1, 1, 1, 0, 1})) b, err := ByIndices(a, indices, 0) // we select rows 1, 1, 1, 0, 1 if err != nil { fmt.Println(err) return } fmt.Printf("a:\n%v\nindices: %v\nb:\n%v\n", a, indices, b)
Output: a: ⎡100 200⎤ ⎣300 400⎦ indices: [1 1 1 0 1] b: ⎡300 400⎤ ⎢300 400⎥ ⎢300 400⎥ ⎢100 200⎥ ⎣300 400⎦
func ByIndicesB ¶ added in v0.9.15
ByIndicesB is the backpropagation of ByIndices.
Example ¶
a := New(WithShape(2, 2), WithBacking([]float64{ 100, 200, 300, 400, })) indices := New(WithBacking([]int{1, 1, 1, 0, 1})) b, err := ByIndices(a, indices, 0) // we select rows 1, 1, 1, 0, 1 if err != nil { fmt.Println(err) return } outGrad := b.Clone().(*Dense) outGrad.Memset(1.0) grad, err := ByIndicesB(a, outGrad, indices, 0) if err != nil { fmt.Println(err) return } fmt.Printf("a:\n%v\nindices: %v\nb:\n%v\ngrad:\n%v", a, indices, b, grad)
Output: a: ⎡100 200⎤ ⎣300 400⎦ indices: [1 1 1 0 1] b: ⎡300 400⎤ ⎢300 400⎥ ⎢300 400⎥ ⎢100 200⎥ ⎣300 400⎦ grad: ⎡1 1⎤ ⎣4 4⎦
func Concat ¶
Concat concatenates a list of Tensors. At the moment the operation only supports Tensors of the same type (*Dense can only be concatenated with a bunch of *Dense, CSCs can only be concatenated with a bunch of CSC, etc)
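A usage sketch, assuming Concat takes the axis first, followed by the tensors to be joined:

a := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
b := New(WithShape(2, 2), WithBacking([]float64{5, 6, 7, 8}))
c, err := Concat(0, a, b) // stack along axis 0, which should yield a (4, 2) matrix
if err != nil {
	fmt.Println(err)
	return
}
fmt.Printf("%v", c)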
func Div ¶
Div performs elementwise division on the Tensor(s). These operations are supported:
Div(*Dense, scalar) Div(scalar, *Dense) Div(*Dense, *Dense)
If the Unsafe flag is passed in, the data of the first tensor will be overwritten
func Dot ¶
Dot is a highly opinionated API for performing dot product operations on two *Denses, a and b. This function is opinionated with regard to the vector operations because of how it treats operations with vectors. Vectors in this package come in two flavours - column or row vectors. Column vectors have shape (x, 1), while row vectors have shape (1, x).
As such, it is easy to assume that performing a linalg operation on vectors would follow the same rules (i.e shapes have to be aligned for things to work). For the most part in this package, this is true. This function is one of the few notable exceptions.
Here I give three specific examples of how the expectations of vector operations will differ.
Given two vectors, a, b with shapes (4, 1) and (4, 1), Dot() will perform an inner product as if the shapes were (1, 4) and (4, 1). This will result in a scalar value.
Given matrix A and vector b with shapes (2, 4) and (1, 4), Dot() will perform a matrix-vector multiplication as if the shapes were (2, 4) and (4, 1). This will result in a column vector with shape (2, 1).
Given vector a and matrix B with shapes (3, 1) and (3, 2), Dot() will perform a matrix-vector multiplication as if it were Bᵀ * a.
The main reason why this opinionated route was taken was due to the author's familiarity with NumPy, and general laziness in translating existing machine learning algorithms to fit the API of the package.
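A sketch of the first case listed above, two (4, 1) column vectors collapsing to a scalar (the assumption here is that Dot returns the result wrapped in a Tensor):

a := New(WithShape(4, 1), WithBacking([]float64{1, 2, 3, 4}))
b := New(WithShape(4, 1), WithBacking([]float64{1, 2, 3, 4}))
c, err := Dot(a, b) // inner product: 1 + 4 + 9 + 16 = 30
if err != nil {
	fmt.Println(err)
	return
}
fmt.Printf("%v", c)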
func ElEq ¶
ElEq performs an elementwise equality comparison (a == b). a and b can either be float64 or *Dense. It returns the same Tensor type as its input.
If both operands are *Dense, shape is checked first. Even though the underlying data may have the same size (say (2,2) vs (4,1)), if they have different shapes, it will error out.
func ElNe ¶
ElNe performs an elementwise inequality comparison (a != b). a and b can either be float64 or *Dense. It returns the same Tensor type as its input.
If both operands are *Dense, shape is checked first. Even though the underlying data may have the same size (say (2,2) vs (4,1)), if they have different shapes, it will error out.
func Gt ¶
Gt performs an elementwise greater-than comparison (a > b). a and b can either be float64 or *Dense. It returns the same Tensor type as its input.
If both operands are *Dense, shape is checked first. Even though the underlying data may have the same size (say (2,2) vs (4,1)), if they have different shapes, it will error out.
func Gte ¶
Gte performs an elementwise greater-than-or-equal comparison (a >= b). a and b can either be float64 or *Dense. It returns the same Tensor type as its input.
If both operands are *Dense, shape is checked first. Even though the underlying data may have the same size (say (2,2) vs (4,1)), if they have different shapes, it will error out.
func LogSoftMax ¶ added in v0.9.22
LogSoftMax applies log softmax to the given tensor.
func LogSoftMaxB ¶ added in v0.9.22
LogSoftMaxB applies the log softmax backwards operation
func Lt ¶
Lt performs an elementwise less-than comparison (a < b). a and b can either be float64 or *Dense. It returns the same Tensor type as its input.
If both operands are *Dense, shape is checked first. Even though the underlying data may have the same size (say (2,2) vs (4,1)), if they have different shapes, it will error out.
func Lte ¶
Lte performs an elementwise less-than-or-equal comparison (a <= b). a and b can either be float64 or *Dense. It returns the same Tensor type as its input.
If both operands are *Dense, shape is checked first. Even though the underlying data may have the same size (say (2,2) vs (4,1)), if they have different shapes, it will error out.
func MatVecMul ¶
MatVecMul performs matrix-vector multiplication between two Tensors. `a` is expected to be a matrix, and `b` is expected to be a vector.
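A minimal sketch, assuming MatVecMul takes the matrix and vector operands directly like the rest of the API:

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	// A (2, 3) matrix and a vector with 3 elements.
	A := tensor.New(tensor.WithShape(2, 3), tensor.WithBacking([]float64{1, 2, 3, 4, 5, 6}))
	b := tensor.New(tensor.WithShape(3), tensor.WithBacking([]float64{1, 0, 1}))

	v, err := tensor.MatVecMul(A, b)
	if err != nil {
		fmt.Printf("Err: %v\n", err)
		return
	}
	fmt.Printf("A·b:\n%v", v) // a vector with 2 elements: [4 10]
}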
func Materialize ¶
Materialize takes a View and copies out the data into a new allocation.
func MaxBetween ¶ added in v0.9.21
func MinBetween ¶ added in v0.9.21
func Mod ¶
Mod performs elementwise modulo on the Tensor(s). These operations are supported:
Mod(*Dense, scalar) Mod(scalar, *Dense) Mod(*Dense, *Dense)
If the Unsafe flag is passed in, the data of the first tensor will be overwritten
func Mul ¶
Mul performs elementwise multiplication on the Tensor(s). These operations are supported:
Mul(*Dense, scalar) Mul(scalar, *Dense) Mul(*Dense, *Dense)
If the Unsafe flag is passed in, the data of the first tensor will be overwritten
func Outer ¶
Outer performs the outer product of two vector Tensors. Both arguments to the function are expected to be vectors.
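A minimal sketch, assuming Outer takes the two vector operands directly like the rest of the API:

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	a := tensor.New(tensor.WithShape(3), tensor.WithBacking([]float64{1, 2, 3}))
	b := tensor.New(tensor.WithShape(2), tensor.WithBacking([]float64{10, 20}))

	// The outer product of a 3-element vector and a 2-element vector is a (3, 2) matrix.
	o, err := tensor.Outer(a, b)
	if err != nil {
		fmt.Printf("Err: %v\n", err)
		return
	}
	fmt.Printf("a ⊗ b:\n%v", o)
}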
func Pow ¶
Pow performs elementwise exponentiation on the Tensor(s). These operations are supported:
Pow(*Dense, scalar) Pow(scalar, *Dense) Pow(*Dense, *Dense)
If the Unsafe flag is passed in, the data of the first tensor will be overwritten
func Repeat ¶
Repeat repeats a Tensor along the given axis, with the given number of repeats for each element along that axis.
Example (UncommonUses) ¶
T := New(WithBacking([]int{1, 2, 3, 4, 5, 6}), WithShape(2, 3)) fmt.Printf("T:\n%v", T) fmt.Println("Axis 0 has 2 elements. So we will need to write the number of times each element is to be repeated") fmt.Println("Here, Repeat(T, 0, 3, 2) results in this:") T1, err := Repeat(T, 0, 3, 2) if err != nil { fmt.Printf("Err %v", err) } fmt.Printf("%v", T1) fmt.Println("Observe the 0th element ([1 2 3]) has been repeated 3 times, and the 1st element ([4 5 6]) has been repeated twice") fmt.Println("") fmt.Println("We can also repeat on Axis 1. Now along Axis 1 there are 3 elements: ([1 4], [2 5], [3 6])") fmt.Println("So we have to specify how many times to repeat each element.") fmt.Println("Repeat(T, 1, 2, 3, 2) yields the following result:") T1, err = Repeat(T, 1, 2, 3, 2) if err != nil { fmt.Printf("Err %v", err) } fmt.Printf("%v", T1) fmt.Println("Once again, observe that the 1st element ([2 5]) has been repeated 3 times, while the rest have been repeated twice") /* // TODO break this out to another example T1, err = Repeat(T, AllAxes, 2, 3, 2, 2, 2, 2) if err != nil { fmt.Printf("Err %v", err) } fmt.Printf("%#v", T1) */
Output: T: ⎡1 2 3⎤ ⎣4 5 6⎦ Axis 0 has 2 elements. So we will need to write the number of times each element is to be repeated Here, Repeat(T, 0, 3, 2) results in this: ⎡1 2 3⎤ ⎢1 2 3⎥ ⎢1 2 3⎥ ⎢4 5 6⎥ ⎣4 5 6⎦ Observe the 0th element ([1 2 3]) has been repeated 3 times, and the 1st element ([4 5 6]) has been repeated twice We can also repeat on Axis 1. Now along Axis 1 there are 3 elements: ([1 4], [2 5], [3 6]) So we have to specify how many times to repeat each element. Repeat(T, 1, 2, 3, 2) yields the following result: ⎡1 1 2 2 2 3 3⎤ ⎣4 4 5 5 5 6 6⎦ Once again, observe that the 1st element ([2 5]) has been repeated 3 times, while the rest have been repeated twice
func RepeatReuse ¶
RepeatReuse repeats a Tensor along the given axis with the given number of repeats, and puts the result in the provided reuse tensor. If the reuse tensor is not correctly sized, an error is returned, but the result is still valid.
Example ¶
var T, T1 *Dense T = New(WithBacking([]float64{1, 2, 3, 4}), WithShape(1, 4)) T1 = New(Of(Float64), WithShape(3, 4)) var T2 Tensor var err error if T2, err = RepeatReuse(T, T1, 0, 3); err != nil { fmt.Printf("Err %v", err) } fmt.Printf("RepeatReuse(T, T1):\n%v", T2) fmt.Printf("T1 == T2: %t\n", T1 == T2) // But if your reuse is wrongly shaped, an error occurs T1 = New(Of(Float64), WithShape(1, 4)) // too small if _, err = RepeatReuse(T, T1, 0, 3); err != nil { fmt.Printf("Expected Error: %v\n", err) }
Output: RepeatReuse(T, T1): ⎡1 2 3 4⎤ ⎢1 2 3 4⎥ ⎣1 2 3 4⎦ T1 == T2: true Expected Error: Reuse shape is (1, 4). Expected shape is (3, 4)
func Stack ¶
Stack stacks a list of Tensors. At the moment the operation only supports Tensors of the same type (*Dense can only be stacked with other *Dense, etc.)
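A rough sketch, assuming a Stack(axis, t, others...) call form analogous to Concat, where stacking introduces a new axis at the given position:

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	a := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4}))
	b := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{5, 6, 7, 8}))

	// Stacking two (2, 2) matrices along axis 0 yields a (2, 2, 2) tensor.
	s, err := tensor.Stack(0, a, b)
	if err != nil {
		fmt.Printf("Err: %v\n", err)
		return
	}
	fmt.Printf("Stacked shape: %v\n%v", s.Shape(), s)
}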
func Sub ¶
Sub performs elementwise subtraction on the Tensor(s). These operations are supported:
Sub(*Dense, scalar) Sub(scalar, *Dense) Sub(*Dense, *Dense)
If the Unsafe flag is passed in, the data of the first tensor will be overwritten
func Sum ¶
Sum sums a Tensor along the given axes
Example ¶
T := New(WithBacking([]float64{0, 1, 2, 3}), WithShape(2, 2)) fmt.Printf("T:\n%v\n", T) // sum along axis 0 summed, _ := Sum(T, 0) fmt.Printf("Summed:\n%v\n", summed) // to keep dims, simply reshape summed.Reshape(1, 2) fmt.Printf("Summed (Kept Dims - Shape: %v):\n%v\n\n", summed.Shape(), summed) // summing along multiple axes summed, _ = Sum(T, 1, 0) fmt.Printf("Summed along (1, 0): %v", summed)
Output: T: ⎡0 1⎤ ⎣2 3⎦ Summed: [2 4] Summed (Kept Dims - Shape: (1, 2)): R[2 4] Summed along (1, 0): 6
Example (Sliced) ¶
T := New(WithBacking([]float64{0, 1, 2, 3}), WithShape(2, 2)) fmt.Printf("T:\n%v\n", T) V, _ := T.Slice(nil, S(1)) fmt.Printf("V:\n%v\n", V) Σ, _ := Sum(V) fmt.Printf("Σ: %v", Σ)
Output: T: ⎡0 1⎤ ⎣2 3⎦ V: [1 3] Σ: 4
func T ¶
T safely transposes a Tensor. It returns a tensor that is not a view of the input tensor - rather, the data is all copied.
Example ¶
// Usual example of 2D matrix being transposed: M := New(WithBacking([]int{1, 2, 3, 4, 5, 6}), WithShape(2, 3)) M2, err := T(M) if err != nil { fmt.Printf("Err: %v\n", err) } fmt.Printf("M:\n%v\nM2:\n%v\n", M, M2) // T accepts optional parameters describing the permutation of axes. // In a 2D case, there are only two options: (0, 1) or (1, 0). // The latter is default if no parameters are passed in. // The former is a no-op as rearranging a matrix so that the 0th axis becomes the 0th axis // and the first axis becomes the first axis is not going to do anything. // // However, note that M3 is a different result. M3, err := T(M, 0, 1) if err != nil { fmt.Printf("Err: %v\n", err) } fmt.Printf("M3:\n%v\nM == M3: %t", M3, M == M3)
Output: M: ⎡1 2 3⎤ ⎣4 5 6⎦ M2: ⎡1 4⎤ ⎢2 5⎥ ⎣3 6⎦ M3: ⎡1 2 3⎤ ⎣4 5 6⎦ M == M3: false
Example (Scalarlike) ¶
// Be aware when dealing with scalarlike tensors // scalar/scalarlikes have no effect when calling T() // but the result is put into a new tensor S := New(WithBacking([]float32{3.14}), WithShape()) S2, err := T(S) if err != nil { fmt.Printf("Err %v", err) } fmt.Printf("S: %v S2 %v S == S2: %t\n", S, S2, S == S2) // however do note that scalars and scalarlikes are not the same thing. // for example, consider this: _, err = T(S, 1, 0) fmt.Printf("error when the axes are more than the shape's dims: %v\n", err) // but if you have a tensor that is a scalar-like: S.Reshape(1, 1) S2, err = T(S, 1, 0) if err != nil { fmt.Printf("Err: %v\n", err) } fmt.Printf("S:\n%v\nS2:\n%v\nS == S2: %t\n", S, S2, S == S2)
Output: S: 3.14 S2 3.14 S == S2: false error when the axes are more than the shape's dims: Dimension mismatch. Expected 0, got 2 S: [[3.14]] S2: [[3.14]] S == S2: false
func Transpose ¶
Transpose performs transposition of a tensor according to its axes.
Example (Extension) ¶
package main import ( "fmt" "gorgonia.org/tensor" ) // LongStruct is a type that is an arbitrarily long struct type LongStruct struct { a, b, c, d, e uint64 } // Format implements fmt.Formatter for easier-to-read output of data func (ls LongStruct) Format(s fmt.State, c rune) { fmt.Fprintf(s, "{a: %d, b: %d, c: %d, d: %d, e: %d}", ls.a, ls.b, ls.c, ls.d, ls.e) } func main() { // For documentation if you're reading this on godoc: // // type LongStruct struct { // a, b, c, d, e uint64 // } T := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]LongStruct{ LongStruct{0, 0, 0, 0, 0}, LongStruct{1, 1, 1, 1, 1}, LongStruct{2, 2, 2, 2, 2}, LongStruct{3, 3, 3, 3, 3}, }), ) fmt.Printf("Before:\n%v\n", T) retVal, _ := tensor.Transpose(T) // an alternative would be to use T.T(); T.Transpose() fmt.Printf("After:\n%v\n", retVal) }
Output: Before: ⎡{a: 0, b: 0, c: 0, d: 0, e: 0} {a: 1, b: 1, c: 1, d: 1, e: 1}⎤ ⎣{a: 2, b: 2, c: 2, d: 2, e: 2} {a: 3, b: 3, c: 3, d: 3, e: 3}⎦ After: ⎡{a: 0, b: 0, c: 0, d: 0, e: 0} {a: 2, b: 2, c: 2, d: 2, e: 2}⎤ ⎣{a: 1, b: 1, c: 1, d: 1, e: 1} {a: 3, b: 3, c: 3, d: 3, e: 3}⎦
type Tracer ¶
Tracer is any engine that can return the trace (aka the sum of the diagonal elements).
type Transposer ¶
Transposer is any engine that can perform an unsafe transpose of a tensor.
type View ¶
View is any Tensor that can provide a view on memory.
Example ¶
// Slicing creates a "view" on the original tensor T := New(WithBacking(Range(Int, 0, 16)), WithShape(4, 4)) fmt.Printf("T:\n%v\n", T) V, _ := T.Slice(makeRS(1, 3), makeRS(1, 3)) fmt.Printf("V:\n%v\n", V) // Now we modify V's 0th value V.(*Dense).Set(0, 1000) fmt.Printf("V[0] = 1000:\n%v\n", V) fmt.Printf("T is also mutated:\n%v\n", T) // Now we materialize the views fmt.Printf("V is Materializable: %v\n", V.IsMaterializable()) T2 := V.Materialize() fmt.Printf("T2 == V:\n%v\n", T2) // Once materialized, it is decoupled from the original tensor T2.(*Dense).Set(0, 999) fmt.Printf("T2 is mutated:\n%v\nBut T is not mutated:\n%v\nNeither is V:\n%v", T2, T, V)
Output: T: ⎡ 0 1 2 3⎤ ⎢ 4 5 6 7⎥ ⎢ 8 9 10 11⎥ ⎣12 13 14 15⎦ V: ⎡ 5 6⎤ ⎣ 9 10⎦ V[0] = 1000: ⎡1000 6⎤ ⎣ 9 10⎦ T is also mutated: ⎡ 0 1 2 3⎤ ⎢ 4 1000 6 7⎥ ⎢ 8 9 10 11⎥ ⎣ 12 13 14 15⎦ V is Materializable: true T2 == V: ⎡1000 6⎤ ⎣ 9 10⎦ T2 is mutated: ⎡999 6⎤ ⎣ 9 10⎦ But T is not mutated: ⎡ 0 1 2 3⎤ ⎢ 4 1000 6 7⎥ ⎢ 8 9 10 11⎥ ⎣ 12 13 14 15⎦ Neither is V: ⎡1000 6⎤ ⎣ 9 10⎦
Notes ¶
Bugs ¶
Reading CSVs with a different number of columns per row is not handled yet.
Source Files ¶
- ap.go
- api_arith.go
- api_cmp.go
- api_matop.go
- api_minmax.go
- api_reduction.go
- api_unary.go
- api_utils.go
- array.go
- array_getset.go
- bitmap.go
- blas.go
- collections.go
- consopt.go
- defaultengine.go
- defaultengine_argmethods.go
- defaultengine_arith.go
- defaultengine_cmp.go
- defaultengine_linalg.go
- defaultengine_mapreduce.go
- defaultengine_matop_misc.go
- defaultengine_matop_stack.go
- defaultengine_matop_transpose.go
- defaultengine_minmax.go
- defaultengine_misc.go
- defaultengine_prep.go
- defaultengine_selbyidx.go
- defaultengine_softmax.go
- defaultengine_unary.go
- defaultenginefloat32.go
- defaultenginefloat64.go
- dense.go
- dense_argmethods.go
- dense_arith.go
- dense_assign.go
- dense_cmp.go
- dense_compat.go
- dense_format.go
- dense_generated.go
- dense_io.go
- dense_linalg.go
- dense_mapreduce.go
- dense_mask_filling.go
- dense_mask_inspection.go
- dense_maskcmp_methods.go
- dense_matop.go
- dense_matop_memmove.go
- dense_norms.go
- dense_reduction_methods.go
- dense_views.go
- engine.go
- errors.go
- flags.go
- generic_utils.go
- interfaces.go
- iterator.go
- iterator_mult.go
- iterator_utils.go
- mathutils.go
- perf.go
- release.go
- shape.go
- slice.go
- sparse.go
- sparse_io.go
- tensor.go
- types.go
- unsafe.go
- utils.go
Directories ¶
Path | Synopsis
---|---
internal |
serialization | package serialization provides the data structures for serialization
serialization/pb | Package pb is a generated protocol buffer package.
native | package native is a utility package for gorgonia.org/tensor.