tensor

package module
v0.9.24
Published: Jul 18, 2022 License: Apache-2.0 Imports: 37 Imported by: 457

README

Package tensor

Package tensor provides efficient, generic (by some definitions of generic) n-dimensional arrays in Go. It also includes functions and methods that are commonly used in arithmetic, comparison and linear algebra operations.

The main purpose of this package is to support the operations required by Gorgonia.

Introduction

In the data analysis world, Numpy and Matlab currently reign supreme. Both tools rely heavily on having performant n-dimensional arrays, or tensors. There is an obvious need for multidimensional arrays in Go.

While slices are cool, a large majority of scientific and numeric computing work relies heavily on matrices (two-dimensional arrays), three-dimensional arrays and so on. In Go, the typical way of getting multidimensional arrays is to use something like [][]T. Applications that are more math-heavy may opt to use the very excellent Gonum matrix package. What then if we want to go beyond having a float64 matrix? What if we wanted a 3-dimensional float32 array?

It stands to reason, then, that there should be a data structure that handles these things. The tensor package fits in that niche.

Basic Idea: Tensor

A tensor is a multidimensional array. It's like a slice, but works in multiple dimensions.

With slices, there are usage patterns that are repeated enough that warrant abstraction - append, len, cap, range are abstractions used to manipulate and query slices. Additionally slicing operations (a[:1] for example) are also abstractions provided by the language. Andrew Gerrand wrote a very good write up on Go's slice usage and internals.

Tensors come with their own set of usage patterns and abstractions. Most of these have analogues in slices, enumerated below (do note that certain slice operations will have more than one tensor analogue - this is due to the number of options available). A short sketch of the iterator-based range analogue follows the table:

Slice Operation     | Tensor Operation
len(a)              | T.Shape()
cap(a)              | T.DataSize()
a[:]                | T.Slice(...)
a[0]                | T.At(x,y)
append(a, ...)      | T.Stack(...), T.Concat(...)
copy(dest, src)     | T.CopyTo(dest), tensor.Copy(dest, src)
for _, v := range a | for i, err := iterator.Next(); err == nil; i, err = iterator.Next()
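
For example, the iterator-based analogue of range looks like the following - a minimal sketch, assuming the same context as the other snippets in this README (i.e. the package's identifiers are directly accessible):

T := New(WithShape(2, 3), WithBacking(Range(Float64, 0, 6)))
it := IteratorFromDense(T)
for i, err := it.Start(); err == nil; i, err = it.Next() {
	// i is the index into the flat backing array for the current element
	fmt.Printf("i: %d, value: %v\n", i, T.Data().([]float64)[i])
}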

Some operations on a tensor do not have direct analogues to slice operations. However, they stem from the same idea, and can be considered a superset of all operations common to slices. They're enumerated below, followed by a short sketch:

Tensor Operation                  | Basic idea in slices
T.Strides()                       | The stride of a slice will always be one element
T.Dims()                          | The dimensions of a slice will always be one
T.Size()                          | The size of a slice will always be its length
T.Dtype()                         | The type of a slice is always known at compile time
T.Reshape()                       | Given the shape of a slice is static, you can't really reshape a slice
T.T(...) / T.Transpose() / T.UT() | No equivalent with slices
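
A short sketch of these queries; the values noted in the comments are assumptions that follow from the descriptions above:

t := New(WithShape(2, 3, 4), WithBacking(Range(Float32, 0, 24)))
fmt.Println(t.Shape())   // (2, 3, 4)
fmt.Println(t.Dims())    // 3
fmt.Println(t.Size())    // 24
fmt.Println(t.Dtype())   // float32
fmt.Println(t.Strides()) // [12 4 1] - row-major strides
if err := t.Reshape(4, 6); err != nil {
	// a reshape keeps the same 24 elements; it fails if the sizes don't match
}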

The Types of Tensors

As of the current revision of this package, only dense tensors are supported. Support for sparse matrices (in the form of a sparse column matrix and a dictionary-of-keys matrix) will be coming shortly.

Dense Tensors

The *Dense tensor is the primary tensor and is represented by a single flat array, regardless of dimensions. See the Design of *Dense section for more information. It can hold any data type.
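
A quick way to see the flat backing array - a minimal sketch, where Data() simply returns the backing slice:

m := New(WithShape(2, 3), WithBacking([]float64{1, 2, 3, 4, 5, 6}))
fmt.Println(m.Data()) // [1 2 3 4 5 6] - one flat []float64, regardless of the (2, 3) shape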

Compressed Sparse Column Matrix

Documentation Coming soon

Compressed Sparse Row Matrix

Documentation Coming soon

Usage

To install: go get -u "gorgonia.org/tensor"

Creating a matrix with package tensor is easy:

// Creating a (2,2) matrix of int:
a := New(WithShape(2, 2), WithBacking([]int{1, 2, 3, 4}))
fmt.Printf("a:\n%v\n", a)

// Output:
// a:
// ⎡1  2⎤
// ⎣3  4⎦
//

Creating a 3-Tensor is just as easy - just provide the correct shape and you're good to go:

// Creating a (2,3,4) 3-Tensor of float32
b := New(WithBacking(Range(Float32, 0, 24)), WithShape(2, 3, 4))
fmt.Printf("b:\n%1.1f\n", b)

// Output:
// b:
// ⎡ 0.0   1.0   2.0   3.0⎤
// ⎢ 4.0   5.0   6.0   7.0⎥
// ⎣ 8.0   9.0  10.0  11.0⎦
//
// ⎡12.0  13.0  14.0  15.0⎤
// ⎢16.0  17.0  18.0  19.0⎥
// ⎣20.0  21.0  22.0  23.0⎦

Accessing and setting data is fairly easy. Dimensions are 0-indexed, so if you come from an R background, suck it up like I did. Be warned: this is the inefficient way if you want to do batch access/setting:

// Accessing data:
b := New(WithBacking(Range(Float32, 0, 24)), WithShape(2, 3, 4))
x, _ := b.At(0, 1, 2)
fmt.Printf("x: %v\n", x)

// Setting data
b.SetAt(float32(1000), 0, 1, 2)
fmt.Printf("b:\n%v", b)

// Output:
// x: 6
// b:
// ⎡   0     1     2     3⎤
// ⎢   4     5  1000     7⎥
// ⎣   8     9    10    11⎦

// ⎡  12    13    14    15⎤
// ⎢  16    17    18    19⎥
// ⎣  20    21    22    23⎦

Bear in mind that you must pass in data of the correct type. This example will cause a panic:

// Accessing data:
b := New(WithBacking(Range(Float32, 0, 24)), WithShape(2, 3, 4))
x, _ := b.At(0, 1, 2)
fmt.Printf("x: %v\n", x)

// Setting data
b.SetAt(1000, 0, 1, 2)
fmt.Printf("b:\n%v", b)

There is a whole laundry list of methods and functions available at the godoc page.

Design of *Dense

The design of the *Dense tensor is quite simple in concept. However, let's start with something more familiar. This is a visual representation of a slice in Go (taken from rsc's excellent blog post on Go data structures):

[image: Go slice internals]

The data structure for *Dense is similar, but a lot more complex. Much of the complexity comes from the need to do accounting work on the data structure as well as preserving references to memory locations. This is how the *Dense is defined:

type Dense struct {
	*AP
	array
	e Engine

	// other fields elided for simplicity's sake
}

And here's a visual representation of the *Dense.

[image: *Dense internals]

*Dense draws its inspiration from Go's slice. Underlying it all is a flat array, and access to elements is controlled by *AP. Where a Go slice is able to store its metadata in a 3-word structure (obviating the need to allocate memory), a *Dense unfortunately needs to allocate some memory. The majority of the data is stored in the *AP structure, which contains metadata such as shape, stride, and methods for accessing the array.

*Dense embeds an array (not to be confused with Go's array), which is an abstracted data structure that looks like this:

type array struct {
	storage.Header
	t Dtype
	v interface{}
}

*storage.Header is the same structure as reflect.SliceHeader, except it stores an unsafe.Pointer instead of a uintptr. This is done so that eventually, when more tests are done to determine how the garbage collector marks data, the v field may be removed.

The storage.Header field of the array (and hence *Dense) is there to provide a quick and easy way to translate back into a slice for operations that use familiar slice semantics, which many of the operations depend upon.

By default, *Dense operations try to use the language's builtin slice operations by casting the *storage.Header field into a slice. However, to accommodate a larger set of types, the *Dense operations fall back to using pointer arithmetic to iterate through the data for types with non-primitive kinds (yes, you CAN do pointer arithmetic in Go. It's slow and unsafe). The result is slower operations for types with non-primitive kinds.
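
To make the split between the *AP metadata and the flat backing array concrete, here is a small sketch. The values in the comments are what the description above implies (shape and strides live in the AP; the backing array does not move on a transpose until it is materialized):

a := New(WithShape(2, 3), WithBacking(Range(Int, 0, 6)))
fmt.Println(a.Shape(), a.Strides()) // (2, 3) [3 1]
a.T()                               // transpose: only the AP metadata changes
fmt.Println(a.Shape(), a.Strides()) // (3, 2) [1 3]
fmt.Println(a.Data())               // [0 1 2 3 4 5] - the backing array is untouched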

Memory Allocation

New() functions as expected - it returns a *Dense pointing to an array of zeroed memory. How the underlying array is allocated depends on which ConsOpt is passed in. With New(), ConsOpts are used to determine the exact nature of the *Dense. It's a bit icky (I'd have preferred everything to have been known statically at compile time), but it works. Let's look at some examples:

x := New(Of(Float64), WithShape(2,2)) // works
y := New(WithShape(2,2)) // panics
z := New(WithBacking([]int{1,2,3,4})) // works

The following will happen:

  • Line 1 works: it will allocate a float64 array of size 4.
  • Line 2 will cause a panic, because the function doesn't know what type to allocate - it only knows that it has to allocate an array of something of size 4.
  • Line 3 will NOT fail, because the array has already been allocated (the *Dense reuses the same backing array as the slice passed in). Its shape will be set to (4). A quick check follows this list.
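
A quick check of what line 3 produces - a sketch; the expected values in the comments follow from the explanation above:

z := New(WithBacking([]int{1, 2, 3, 4}))
fmt.Println(z.Shape()) // (4) - inferred from the length of the backing slice
fmt.Println(z.Dtype()) // int - inferred from the element type of the backing slice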

Alternatively you may also pass in an Engine. If that's the case then the allocation will use the Alloc method of the Engine instead:

x := New(Of(Float64), WithEngine(myEngine), WithShape(2,2))

The above call will use myEngine to allocate memory instead. This is useful in cases where you may want to manually manage your memory.

Other failed designs

The alternative designs can be seen in the ALTERNATIVE DESIGNS document

Generic Features

Example:


x := New(WithBacking([]string{"hello", "world", "hello", "world"}), WithShape(2,2))
x = New(WithBacking([]int{1,2,3,4}), WithShape(2,2))

The above code will not cause a compile error, because the structure holding the underlying array (of strings and then of ints) is a *Dense.

One could argue that this sidesteps the compiler's type checking system, deferring it to runtime (which a number of people consider dangerous). However, tools are being developed to type-check these things, and until Go supports type-checked generics, unfortunately this will be the way it has to be.

Currently, the tensor package supports a limited form of genericity - limited to a tensor of any primitive type.

How This Package is Developed

Much of the code in this package is generated. The code that generates it is in the directory genlib2. genlib2 requires the goimports binary to be available in $PATH.

Tests

Tests require Python with Numpy installed. You can select which Python interpreter is used by setting the environment variable PYTHON_COMMAND accordingly. The default value is python.

Things Knowingly Untested For

  • complex64 and complex128 are excluded from the quick check generation process (Issue #11)

TODO

  • Identity optimizations for op
  • Zero value optimizations
  • Fix Random() - super dodgy

How To Get Support

The best way to get support right now is to open a ticket on GitHub.

Contributing

Obviously, since you are most probably reading this on GitHub, GitHub will form the major part of the workflow for contributing to this package.

See also: CONTRIBUTING.md

Contributors and Significant Contributors

All contributions are welcome. However, there is a new class of contributor, called Significant Contributors.

A Significant Contributor is one who has shown deep understanding of how the library works and/or its environs. Here are examples of what constitutes a Significant Contribution:

  • Wrote significant amounts of documentation pertaining to why/the mechanics of particular functions/methods and how the different parts affect one another
  • Wrote code, and tests around the more intricately connected parts of Gorgonia
  • Wrote code and tests, and have at least 5 pull requests accepted
  • Provided expert analysis on parts of the package (for example, you may be a floating point operations expert who optimized one function)
  • Answered at least 10 support questions.

The Significant Contributors list will be updated once a month (if anyone even uses Gorgonia, that is).

Licence

Gorgonia and the tensor package are licenced under a variant of Apache 2.0. It's for all intents and purposes the same as the Apache 2.0 Licence, with the exception that you cannot commercially profit directly from the package unless you're a Significant Contributor (for example, by providing commercial support for the package). It's perfectly fine to profit directly from a derivative of Gorgonia (for example, if you use Gorgonia as a library in your product).

Everyone is still allowed to use Gorgonia for commercial purposes (example: using it in a software for your business).

These are the packages and libraries that inspired Gorgonia, or from which code was adapted, in the process of writing it (the Go packages that were used have already been declared above):

Source | How it's Used                                                                                             | Licence
Numpy  | Inspired large portions. Directly adapted algorithms for a few methods (explicitly labelled in the docs) | MIT/BSD-like. Numpy Licence

Documentation

Overview

Package tensor provides efficient, generic n-dimensional arrays in Go. It also includes functions and methods that are commonly used in arithmetic, comparison and linear algebra operations.

Example (AsDenseDiag)

The AsDenseDiag construction option creates a dense diagonal matrix from the input, either a slice or a tensor. The resulting shape is automatically inferred from the input vector.

This is like Numpy's `diag()` function, except not stupid. Numpy's `diag()` has been a cause of errors because applying it twice returns the original input:

>>> np.diag(np.diag(np.array([1,2,3])))
array([1,2,3])

T := New(WithShape(3), WithBacking([]int{1, 2, 3}))
T1 := New(AsDenseDiag(T))
fmt.Printf("T1:\n%v", T1)

T2 := New(AsDenseDiag([]float64{3.14, 6.28, 11111}))
fmt.Printf("T2:\n%v", T2)
Output:

T1:
⎡1  0  0⎤
⎢0  2  0⎥
⎣0  0  3⎦
T2:
⎡ 3.14      0      0⎤
⎢    0   6.28      0⎥
⎣    0      0  11111⎦
Example (AsFortran)

The AsFortran construction option is a bit finicky.

// Here the data is passed in and directly used without changing the underlying data
T0 := New(WithShape(2, 3), WithBacking([]float64{0, 1, 2, 3, 4, 5}), AsFortran(nil))
fmt.Printf("T0:\n%vData: %v\n\n", T0, T0.Data())

// Here the data is passed into the AsFortran construction option, and it assumes that the data is already in
// row-major form. Therefore a transpose will be performed.
T1 := New(WithShape(2, 3), AsFortran([]float64{0, 1, 2, 3, 4, 5}))
fmt.Printf("T1:\n%vData: %v\n\n", T1, T1.Data())

// Further example of how AsFortran works:
orig := New(WithShape(2, 3), WithBacking([]float64{0, 1, 2, 3, 4, 5}))
T2 := New(WithShape(2, 3), AsFortran(orig))
fmt.Printf("Original\n%vData: %v\n", orig, orig.Data())
fmt.Printf("T2:\n%vData: %v\n", T2, T2.Data())
Output:

T0:
⎡0  2  4⎤
⎣1  3  5⎦
Data: [0 1 2 3 4 5]

T1:
⎡0  1  2⎤
⎣3  4  5⎦
Data: [0 3 1 4 2 5]

Original
⎡0  1  2⎤
⎣3  4  5⎦
Data: [0 1 2 3 4 5]
T2:
⎡0  1  2⎤
⎣3  4  5⎦
Data: [0 3 1 4 2 5]
Example (Basics)

This example showcases the very basics of the package.

// Create a (2, 2)-Matrix of integers
a := New(WithShape(2, 2), WithBacking([]int{1, 2, 3, 4}))
fmt.Printf("a:\n%v\n", a)

// Create a (2, 3, 4)-tensor of float32s
b := New(WithBacking(Range(Float32, 0, 24)), WithShape(2, 3, 4))
fmt.Printf("b:\n%1.1f", b)

// Accessing data
x, _ := b.At(0, 1, 2) // in Numpy syntax: b[0,1,2]
fmt.Printf("x: %1.1f\n\n", x)

// Setting data
b.SetAt(float32(1000), 0, 1, 2)
fmt.Printf("b:\n%v", b)
Output:

a:
⎡1  2⎤
⎣3  4⎦

b:
⎡ 0.0   1.0   2.0   3.0⎤
⎢ 4.0   5.0   6.0   7.0⎥
⎣ 8.0   9.0  10.0  11.0⎦

⎡12.0  13.0  14.0  15.0⎤
⎢16.0  17.0  18.0  19.0⎥
⎣20.0  21.0  22.0  23.0⎦

x: 6.0

b:
⎡   0     1     2     3⎤
⎢   4     5  1000     7⎥
⎣   8     9    10    11⎦

⎡  12    13    14    15⎤
⎢  16    17    18    19⎥
⎣  20    21    22    23⎦
Example (DifferingDataOrders)

This example showcases interactions between different data orders

T0 := New(WithShape(2, 3), WithBacking(Range(Int, 0, 6)))                 // Create a (2, 3)-matrix with the standard row-major backing
T1 := New(WithShape(2, 3), WithBacking(Range(Int, 0, 6)), AsFortran(nil)) // Create a (2, 3)-matrix with a col-major backing
T2, _ := Add(T0, T1)
fmt.Printf("T0:\n%vT1:\n%vT2:\n%vT2 Data Order: %v\n\n", T0, T1, T2, T2.DataOrder())

// the result's data order is highly dependent on the order of operation. It will take after the first operand
T0 = New(WithShape(2, 3), WithBacking(Range(Int, 1, 7)), AsFortran(nil)) // Create a (2, 3)-matrix with a col-major backing
T1 = New(WithShape(2, 3), WithBacking(Range(Int, 1, 7)))                 // Create a (2, 3)-matrix with the standard row-major backing
T2, _ = Add(T0, T1)
fmt.Printf("T0:\n%vT1:\n%vT2:\n%vT2 Data Order: %v\n\n", T0, T1, T2, T2.DataOrder())

reuse := New(WithShape(2, 3), WithBacking([]int{1000, 1000, 1000, 1000, 1000, 1000}))
fmt.Printf("reuse Data Order: %v\n", reuse.DataOrder())
T2, _ = Add(T0, T1, WithReuse(reuse))
fmt.Printf("T2:\n%vT2 Data Order: %v\n\n", T2, T2.DataOrder())
Output:

T0:
⎡0  1  2⎤
⎣3  4  5⎦
T1:
⎡0  2  4⎤
⎣1  3  5⎦
T2:
⎡ 0   3   6⎤
⎣ 4   7  10⎦
T2 Data Order: Contiguous, RowMajor

T0:
⎡1  3  5⎤
⎣2  4  6⎦
T1:
⎡1  2  3⎤
⎣4  5  6⎦
T2:
⎡ 2   5   8⎤
⎣ 6   9  12⎦
T2 Data Order: Contiguous, ColMajor

reuse Data Order: Contiguous, RowMajor
T2:
⎡ 2   5   8⎤
⎣ 6   9  12⎦
T2 Data Order: Contiguous, ColMajor
Example (Extension)
package main

import (
	//"errors"
	"fmt"
	"reflect"

	"github.com/pkg/errors"
	"gorgonia.org/tensor"
)

// In this example, we want to create and handle a tensor of *MyType

// First, define MyType

// MyType is defined
type MyType struct {
	x, y int
}

func (T MyType) Format(s fmt.State, c rune) { fmt.Fprintf(s, "(%d, %d)", T.x, T.y) }

// MyDtype is the Dtype of MyType. This value is populated in the init() function below
var MyDtype tensor.Dtype

// MyEngine supports additions of MyType, as well as other Dtypes
type MyEngine struct {
	tensor.StdEng
}

// For simplicity's sake, we'd only want to handle MyType-MyType or MyType-Int interactions
// Also, we only expect Dense tensors
// You're of course free to define your own rules

// Add adds two tensors
func (e MyEngine) Add(a, b tensor.Tensor, opts ...tensor.FuncOpt) (retVal tensor.Tensor, err error) {
	switch a.Dtype() {
	case MyDtype:
		switch b.Dtype() {
		case MyDtype:
			data := a.Data().([]*MyType)
			datb := b.Data().([]*MyType)
			for i, v := range data {
				v.x += datb[i].x
				v.y += datb[i].y
			}
			return a, nil
		case tensor.Int:
			data := a.Data().([]*MyType)
			datb := b.Data().([]int)
			for i, v := range data {
				v.x += datb[i]
				v.y += datb[i]
			}
			return a, nil
		}
	case tensor.Int:
		switch b.Dtype() {
		case MyDtype:
			data := a.Data().([]int)
			datb := b.Data().([]*MyType)
			for i, v := range datb {
				v.x += data[i]
				v.y += data[i]
			}
		default:
			return e.StdEng.Add(a, b, opts...)
		}
	default:
		return e.StdEng.Add(a, b, opts...)
	}
	return nil, errors.New("Unreachable")
}

func init() {
	MyDtype = tensor.Dtype{reflect.TypeOf(&MyType{})}
}

func main() {
	T := tensor.New(tensor.WithEngine(MyEngine{}),
		tensor.WithShape(2, 2),
		tensor.WithBacking([]*MyType{
			&MyType{0, 0}, &MyType{0, 1},
			&MyType{1, 0}, &MyType{1, 1},
		}))
	ones := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]int{1, 1, 1, 1}), tensor.WithEngine(MyEngine{}))
	T2, _ := T.Add(ones)

	fmt.Printf("T:\n%+v", T)
	fmt.Printf("T2:\n%+v", T2)

}
Output:

T:
Matrix (2, 2) [2 1]
⎡(1, 1)  (1, 2)⎤
⎣(2, 1)  (2, 2)⎦
T2:
Matrix (2, 2) [2 1]
⎡(1, 1)  (1, 2)⎤
⎣(2, 1)  (2, 2)⎦
Example (IteratorRowmajor)

This is an example of how to use `IteratorFromDense` from a row-major Dense tensor

T := New(WithShape(2, 3), WithBacking([]float64{0, 1, 2, 3, 4, 5}))
it := IteratorFromDense(T)
fmt.Printf("T:\n%v\n", T)

for i, err := it.Start(); err == nil; i, err = it.Next() {
	fmt.Printf("i: %d, coord: %v\n", i, it.Coord())
}
Output:

T:
⎡0  1  2⎤
⎣3  4  5⎦

i: 0, coord: [0 1]
i: 1, coord: [0 2]
i: 2, coord: [1 0]
i: 3, coord: [1 1]
i: 4, coord: [1 2]
i: 5, coord: [0 0]
Example (IteratorcolMajor)

This is an example of using `IteratorFromDense` on a col-major Dense tensor. More importantly this example shows the order of the iteration.

T := New(WithShape(2, 3), WithBacking([]float64{0, 1, 2, 3, 4, 5}), AsFortran(nil))
it := IteratorFromDense(T)
fmt.Printf("T:\n%v\n", T)

for i, err := it.Start(); err == nil; i, err = it.Next() {
	fmt.Printf("i: %d, coord: %v\n", i, it.Coord())
}
Output:

T:
⎡0  2  4⎤
⎣1  3  5⎦

i: 0, coord: [0 1]
i: 2, coord: [0 2]
i: 4, coord: [1 0]
i: 1, coord: [1 1]
i: 3, coord: [1 2]
i: 5, coord: [0 0]
Example (Sparse_advanced)
xs := []int{1, 2, 6, 8}
ys := []int{1, 2, 1, 6}
vals := []int16{3, 1, 4, 1}

S := CSCFromCoord(Shape{9, 7}, xs, ys, vals)
T := New(WithShape(9, 7), Of(Int16))     // dense
Reuse := New(WithShape(9, 7), Of(Int16)) // reuse must be a *Dense because the result will always be a dense
Result, _ := Add(S, T, WithReuse(Reuse))
fmt.Printf("Operations involving sparse tensors also do take the usual function options like Reuse:\n%+#s\nResult == Reuse: %t", Result, Result == Reuse)
Output:

Operations involving sparse tensors also do take the usual function options like Reuse:
Matrix (9, 7) [7 1]
⎡0  0  0  0  0  0  0⎤
⎢0  3  0  0  0  0  0⎥
⎢0  0  1  0  0  0  0⎥
⎢0  0  0  0  0  0  0⎥
⎢0  0  0  0  0  0  0⎥
⎢0  0  0  0  0  0  0⎥
⎢0  4  0  0  0  0  0⎥
⎢0  0  0  0  0  0  0⎥
⎣0  0  0  0  0  0  1⎦

Result == Reuse: true
Example (Sparse_basics)
xs := []int{1, 2, 6, 8}
ys := []int{1, 2, 1, 6}
vals := []float32{3, 1, 4, 1}

S := CSCFromCoord(Shape{9, 7}, xs, ys, vals)
T := New(WithShape(9, 7), Of(Float32)) // dense

Result, _ := Add(S, T)
fmt.Printf("When adding a sparse tensor to a dense tensor, the result is of %T:\n=============================================================================\n%+#s\n", Result, Result)
Result, _ = Add(T, S)
fmt.Printf("And vice versa - %T\n=========================\n%+#s\n", Result, Result)
Output:

When adding a sparse tensor to a dense tensor, the result is of *tensor.Dense:
=============================================================================
Matrix (9, 7) [7 1]
⎡0  0  0  0  0  0  0⎤
⎢0  3  0  0  0  0  0⎥
⎢0  0  1  0  0  0  0⎥
⎢0  0  0  0  0  0  0⎥
⎢0  0  0  0  0  0  0⎥
⎢0  0  0  0  0  0  0⎥
⎢0  4  0  0  0  0  0⎥
⎢0  0  0  0  0  0  0⎥
⎣0  0  0  0  0  0  1⎦

And vice versa - *tensor.Dense
=========================
Matrix (9, 7) [7 1]
⎡0  0  0  0  0  0  0⎤
⎢0  3  0  0  0  0  0⎥
⎢0  0  1  0  0  0  0⎥
⎢0  0  0  0  0  0  0⎥
⎢0  0  0  0  0  0  0⎥
⎢0  0  0  0  0  0  0⎥
⎢0  4  0  0  0  0  0⎥
⎢0  0  0  0  0  0  0⎥
⎣0  0  0  0  0  0  1⎦
Example (StackExtension)
package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

// LongStruct is a type that is an arbitrarily long struct
type LongStruct struct {
	a, b, c, d, e uint64
}

// Format implements fmt.Formatter for easier-to-read output of data
func (ls LongStruct) Format(s fmt.State, c rune) {
	fmt.Fprintf(s, "{a: %d, b: %d, c: %d, d: %d, e: %d}", ls.a, ls.b, ls.c, ls.d, ls.e)
}

type s int

func (ss s) Start() int { return int(ss) }
func (ss s) End() int   { return int(ss) + 1 }
func (ss s) Step() int  { return 1 }

func main() {
	// For documentation if you're reading this on godoc:
	//
	// type LongStruct struct {
	// a, b, c, d, e uint64
	// }

	T := tensor.New(tensor.WithShape(2, 2),
		tensor.WithBacking([]LongStruct{
			LongStruct{0, 0, 0, 0, 0},
			LongStruct{1, 1, 1, 1, 1},
			LongStruct{2, 2, 2, 2, 2},
			LongStruct{3, 3, 3, 3, 3},
		}),
	)
	S, _ := T.Slice(nil, s(1)) // s is a type that implements tensor.Slice
	T2 := tensor.New(tensor.WithShape(2, 2),
		tensor.WithBacking([]LongStruct{
			LongStruct{10, 10, 10, 10, 10},
			LongStruct{11, 11, 11, 11, 11},
			LongStruct{12, 12, 12, 12, 12},
			LongStruct{13, 13, 13, 13, 13},
		}),
	)
	S2, _ := T2.Slice(nil, s(0))

	// an alternative would be something like this
	// T3, _ := S.(*tensor.Dense).Stack(1, S2.(*tensor.Dense))
	T3, _ := tensor.Stack(1, S, S2)
	fmt.Printf("Stacked:\n%v", T3)

}
Output:

Stacked:
⎡     {a: 1, b: 1, c: 1, d: 1, e: 1}  {a: 10, b: 10, c: 10, d: 10, e: 10}⎤
⎣     {a: 3, b: 3, c: 3, d: 3, e: 3}  {a: 12, b: 12, c: 12, d: 12, e: 12}⎦
Example (Sum_Sliced)
T := New(WithShape(4, 4), WithBacking([]int{
	1, 2, 3, 4,
	5, 6, 7, 8,
	1, 2, 3, 4,
	5, 6, 7, 8,
}))
s, _ := T.Slice(S(1, 3), S(1, 3))
sum, _ := Sum(s)

fmt.Printf("T:\n%v\nsliced:\n%v\nSum: %v", T, s, sum)
Output:

T:
⎡1  2  3  4⎤
⎢5  6  7  8⎥
⎢1  2  3  4⎥
⎣5  6  7  8⎦

sliced:
⎡6  7⎤
⎣2  3⎦

Sum: 18

Index

Examples

Constants

View Source
const AllAxes int = -1
View Source
const DEBUG = false
View Source
const (
	PoolSize = 4096
)

Variables

View Source
var (
	Bool       = Dtype{reflect.TypeOf(true)}
	Int        = Dtype{reflect.TypeOf(int(1))}
	Int8       = Dtype{reflect.TypeOf(int8(1))}
	Int16      = Dtype{reflect.TypeOf(int16(1))}
	Int32      = Dtype{reflect.TypeOf(int32(1))}
	Int64      = Dtype{reflect.TypeOf(int64(1))}
	Uint       = Dtype{reflect.TypeOf(uint(1))}
	Uint8      = Dtype{reflect.TypeOf(uint8(1))}
	Uint16     = Dtype{reflect.TypeOf(uint16(1))}
	Uint32     = Dtype{reflect.TypeOf(uint32(1))}
	Uint64     = Dtype{reflect.TypeOf(uint64(1))}
	Float32    = Dtype{reflect.TypeOf(float32(1))}
	Float64    = Dtype{reflect.TypeOf(float64(1))}
	Complex64  = Dtype{reflect.TypeOf(complex64(1))}
	Complex128 = Dtype{reflect.TypeOf(complex128(1))}
	String     = Dtype{reflect.TypeOf("")}

	// aliases
	Byte = Uint8

	// extras
	Uintptr       = Dtype{reflect.TypeOf(uintptr(0))}
	UnsafePointer = Dtype{reflect.TypeOf(unsafe.Pointer(&Uintptr))}
)

oh how nice it'd be if I could make them immutable

View Source
var TABCOUNT uint32 = 0

Functions

func BorrowBools

func BorrowBools(size int) []bool

BorrowBools borrows a slice of bools from the pool. USE WITH CAUTION.

func BorrowInts

func BorrowInts(size int) []int

BorrowInts borrows a slice of ints from the pool. USE WITH CAUTION.

func BroadcastStrides deprecated

func BroadcastStrides(destShape, srcShape Shape, destStrides, srcStrides []int) (retVal []int, err error)

BroadcastStrides handles broadcasting from different shapes.

Deprecated: this function will be unexported

func CheckSlice

func CheckSlice(s Slice, size int) error

CheckSlice checks a slice to see if it's sane

func Copy

func Copy(dst, src Tensor) error

Copy copies a tensor to another. For *Dense views, only the relevant slots are copied.
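
A minimal sketch of Copy; the destination must already have a compatible backing, and the expected result in the comment is an assumption based on the description above:

src := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
dst := New(Of(Float64), WithShape(2, 2))
if err := Copy(dst, src); err != nil {
	// handle the error (e.g. mismatched sizes)
}
fmt.Println(dst.Data()) // [1 2 3 4]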

func DontUsePool

func DontUsePool()

DontUsePool makes sure the functions don't use the tensor pool provided. This is useful as certain applications don't lend themselves well to use of the pool. Examples of such applications would be one where many tensors of wildly different sizes are created all the time.

func Inner

func Inner(a, b Tensor) (retVal interface{}, err error)

Inner finds the inner product of two vector Tensors. Both arguments to the function are expected to be vectors.

func IsMonotonicInts

func IsMonotonicInts(a []int) (monotonic bool, incr1 bool)

IsMonotonicInts returns true if the slice of ints is monotonically increasing. It also returns true for incr1 if every successive element increases by exactly 1.

func Itol

func Itol(i int, shape Shape, strides []int) (coords []int, err error)

Itol is Index to Location.

func Ltoi

func Ltoi(shape Shape, strides []int, coords ...int) (at int, err error)

Ltoi is Location to Index. Provide a shape, strides, and a list of integers as coordinates, and it returns the index at which the element is located.
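
A small worked sketch of Itol and Ltoi for a row-major (2, 3) matrix; the arithmetic in the comments assumes the usual row-major strides:

shape := Shape{2, 3}
strides := []int{3, 1}               // row-major strides for a (2, 3) matrix
at, _ := Ltoi(shape, strides, 1, 2)  // 1*3 + 2*1 = 5
coords, _ := Itol(5, shape, strides) // back to [1 2]
fmt.Println(at, coords)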

func MaskedReduce

func MaskedReduce(t *Dense, retType Dtype, fn maskedReduceFn, axis ...int) interface{}

MaskedReduce applies a reduction function of type maskedReduceFn to mask, and returns either an int, or another array

func MaxInt

func MaxInt(a, b int) int

MaxInt returns the highest between two ints. If both are the same, it returns the first

func MaxInts

func MaxInts(is ...int) (retVal int)

MaxInts returns the max of a slice of ints.

func MinInt

func MinInt(a, b int) int

MinInt returns the lowest between two ints. If both are the same it returns the first

func ProdInts

func ProdInts(a []int) (retVal int)

ProdInts returns the product of the elements of an int slice

func Random

func Random(dt Dtype, size int) interface{}

Random creates an array of random numbers of the given type. For complex Dtypes, the imaginary component will be 0.

This function is only useful in cases where the randomness is not vital.

func Range

func Range(dt Dtype, start, end int) interface{}

Range creates a ranged array of a given type. It panics if the Dtype is not supported or does not represent a naturally orderable type (strings, pointers, etc). Do note that the range algorithm is very simple, and simply does increments or decrements of 1. This means that for floating point types you cannot create a range with a 0.1 increment step, and for complex number types the imaginary part will always be 0i.
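
A short sketch of Range; the commented results are what the description above implies (steps of exactly 1, end exclusive):

fmt.Println(Range(Int, 0, 5))     // [0 1 2 3 4] (an []int)
fmt.Println(Range(Float64, 0, 4)) // [0 1 2 3] (a []float64)
fmt.Println(Range(Int, 5, 0))     // [5 4 3 2 1] - decrements of 1 (an []int)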

func Register

func Register(a Dtype)

Register registers a new Dtype

func RegisterEq

func RegisterEq(a Dtype)

RegisterEq registers a dtype as a type that can be compared for equality

func RegisterFloat

func RegisterFloat(a Dtype)

func RegisterNumber

func RegisterNumber(a Dtype)

RegisterNumber is a function required to register a new numerical Dtype. This package provides the following Dtypes:

Int
Int8
Int16
Int32
Int64
Uint
Uint8
Uint16
Uint32
Uint64
Float32
Float64
Complex64
Complex128

If a Dtype that is registered already exists on the list, it will not be added to the list.

func RegisterOrd

func RegisterOrd(a Dtype)

RegisterOrd registers a dtype as a type that can be ordered

func ReturnBools

func ReturnBools(is []bool)

ReturnBools returns a slice from the pool. USE WITH CAUTION.

func ReturnInts

func ReturnInts(is []int)

ReturnInts returns a slice from the pool. USE WITH CAUTION.

func ReturnTensor

func ReturnTensor(t Tensor)

ReturnTensor returns a Tensor to its respective pool. USE WITH CAUTION

func SampleIndex

func SampleIndex(in interface{}) int

SampleIndex samples a slice or a Tensor. TODO: tidy this up.

func SliceDetails

func SliceDetails(s Slice, size int) (start, end, step int, err error)

SliceDetails is a function that takes a slice and spits out its details. The whole reason for this is to handle the nil Slice, which is this: a[:]

func SortIndex

func SortIndex(in interface{}) (out []int)

SortIndex is similar to numpy's argsort TODO: tidy this up

func SumInts

func SumInts(a []int) (retVal int)

SumInts sums a slice of ints

func ToMat64

func ToMat64(t *Dense, opts ...FuncOpt) (retVal *mat.Dense, err error)

ToMat64 converts a *Dense to a *mat.Dense. All the values are converted into float64s. This function will only convert matrices. Anything *Dense with dimensions larger than 2 will cause an error.
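
A minimal sketch, assuming gonum.org/v1/gonum/mat is imported as mat (which the signature above implies):

t := New(WithShape(2, 2), WithBacking([]int{1, 2, 3, 4}))
m, err := ToMat64(t) // m is a *mat.Dense; the int values are converted to float64
if err != nil {
	// only matrices (2 dimensions) can be converted
}
_ = m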

func TransposeIndex

func TransposeIndex(i int, oldShape, pattern, oldStrides, newStrides []int) int

TransposeIndex returns the new index given the old index

func UnsafePermute

func UnsafePermute(pattern []int, xs ...[]int) (err error)

func UntransposeIndex

func UntransposeIndex(i int, oldShape, pattern, oldStrides, newStrides []int) int

UntransposeIndex returns the old index given the new index

func Use

func Use(b BLAS)

Use defines which BLAS implementation gorgonia should use. The default is Gonum's Native. These are the other options:

Use(blastoise.Implementation())
Use(cubone.Implementation())
Use(cgo.Implementation)

Note the differences in the brackets. The blastoise and cubone ones are functions.

func UsePool

func UsePool()

UsePool enables the use of a pool of *Tensors as provided in the package. This is the default option

Types

type AP

type AP struct {
	Δ Triangle
	// contains filtered or unexported fields
}

An AP is an access pattern. It tells the various ndarrays how to access their data through the use of strides. Through the AP, there are several definitions of things; most notably there are two very specific "special cases" (a short sketch follows the list below):

Scalar has Dims() of 0.
	- (1)
Scalarlikes are higher order tensors, but each with a size of 1. The Dims() are not 0.
	- (1, 1)
	- (1, 1, 1)
	- (1, 1, 1, 1), etc
Vector has Dims() of 1, but its shape can take several forms:
	- (x, 1)
	- (1, x)
	- (x)
Matrix has Dims() of 2. This is the most basic form. The len(shape) has to be equal to 2 as well
ndarray has Dims() of n.
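
A short sketch of these cases using *Dense (which embeds an AP); the commented results are assumptions that follow from the definitions above:

s := New(FromScalar(3.14))
v := New(WithShape(3), WithBacking([]float64{1, 2, 3}))
m := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
fmt.Println(s.Dims(), s.IsScalar()) // 0 true
fmt.Println(v.Dims(), v.IsVector()) // 1 true
fmt.Println(m.Dims(), m.IsMatrix()) // 2 true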

func MakeAP

func MakeAP(shape Shape, strides []int, o DataOrder, Δ Triangle) AP

MakeAP creates an AP, given the shape and strides.

func (*AP) C

func (ap *AP) C() bool

C returns true if the access pattern describes a C-contiguous array

func (*AP) Clone

func (ap *AP) Clone() (retVal AP)

Clone clones the *AP. Clearly. It returns AP

func (*AP) CloneTo

func (ap *AP) CloneTo(dest *AP)

func (*AP) DataOrder

func (ap *AP) DataOrder() DataOrder

DataOrder returns the data order of the AP.

func (*AP) Dims

func (ap *AP) Dims() int

Dims returns the dimensions of the shape in the AP

func (*AP) F

func (ap *AP) F() bool

F returns true if the access pattern describes a Fortran-contiguous array

func (*AP) Format

func (ap *AP) Format(state fmt.State, c rune)

Format implements fmt.Formatter

func (*AP) Init

func (ap *AP) Init(shape Shape, strides []int)

Init initializes an already created AP with a shape and strides. It will panic if the AP is nil.

func (*AP) IsColVec

func (ap *AP) IsColVec() bool

IsColVec returns true when the access pattern has the shape (x, 1)

func (*AP) IsMatrix

func (ap *AP) IsMatrix() bool

IsMatrix returns true if it's a matrix. This is mostly a convenience method. RowVec and ColVecs are also considered matrices

func (*AP) IsRowVec

func (ap *AP) IsRowVec() bool

IsRowVec returns true when the access pattern has the shape (1, x)

func (*AP) IsScalar

func (ap *AP) IsScalar() bool

IsScalar returns true if the access pattern indicates it's a scalar value.

func (*AP) IsScalarEquiv added in v0.9.15

func (ap *AP) IsScalarEquiv() bool

IsScalarEquiv returns true if the access pattern is equivalent to a scalar shape.

func (*AP) IsVector

func (ap *AP) IsVector() bool

IsVector returns whether the access pattern falls into one of three possible definitions of vectors:

vanilla vector (not a row or a col)
column vector
row vector

func (*AP) IsVectorLike added in v0.9.12

func (ap *AP) IsVectorLike() bool

IsVectorLike returns true if the shape is vector-like (i.e. the shape only has one dim that is a non-1).

func (*AP) IsZero

func (ap *AP) IsZero() bool

IsZero tells us if the AP has zero size

func (*AP) S

func (ap *AP) S(size int, slices ...Slice) (newAP AP, ndStart, ndEnd int, err error)

S returns the metadata of the sliced tensor.

func (*AP) SetShape

func (ap *AP) SetShape(s ...int)

SetShape is for very specific times when modifying the AP is necessary, such as reshaping and doing I/O related stuff

Caveats:

- SetShape will recalculate the strides.

- If the AP is locked, nothing will happen

func (*AP) Shape

func (ap *AP) Shape() Shape

Shape returns the shape of the AP

func (*AP) Size

func (ap *AP) Size() int

Size returns the expected array size of the shape

func (*AP) Strides

func (ap *AP) Strides() []int

Strides returns the strides of the AP

func (*AP) String

func (ap *AP) String() string

String implements fmt.Stringer and runtime.Stringer

func (*AP) T

func (ap *AP) T(axes ...int) (retVal AP, a []int, err error)

T returns the transposed metadata based on the given input

type Abser

type Abser interface {
	Abs(a Tensor, opts ...FuncOpt) (Tensor, error)
}

Abser is any engine that can perform Abs on the values of a Tensor.

type Adder

type Adder interface {
	// Add performs a + b
	Add(a, b Tensor, opts ...FuncOpt) (Tensor, error)

	// AddScalar adds a scalar to the tensor. leftTensor indicates if the tensor is the left operand.
	// Whether or not the input tensor is clobbered is left to the implementation
	AddScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

Adder is any engine that can perform elementwise addition.

type Argmaxer

type Argmaxer interface {
	Argmax(t Tensor, axis int) (Tensor, error)
}

Argmaxer is any engine that can find the indices of the maximum values along an axis. By convention the returned Tensor has Dtype of Int.

type Argminer

type Argminer interface {
	Argmin(t Tensor, axis int) (Tensor, error)
}

Argminer is any engine that can find the indices of the minimum values along an axis. By convention the returned Tensor has Dtype of Int.

type BLAS

type BLAS interface {
	blas.Float32
	blas.Float64
	blas.Complex64
	blas.Complex128
}

BLAS represents all the possible implementations of BLAS. The default is Gonum's Native

func WhichBLAS

func WhichBLAS() BLAS

WhichBLAS returns the BLAS that gorgonia uses.

type BitMap

type BitMap struct {
	// contains filtered or unexported fields
}

BitMap is a very simple bitmap. It only supports Set, IsSet and Clear methods. It's mostly used for tracking which element has been set

func NewBitMap

func NewBitMap(size int) *BitMap

NewBitMap creates a new BitMap.

func (*BitMap) Clear

func (bm *BitMap) Clear(i int)

Clear clears the ith bit. It panics if i is greater or equal to the defined max

func (*BitMap) IsSet

func (bm *BitMap) IsSet(i int) bool

IsSet returns true if the ith bit is set. It panics if the i is greater or equal to the defined max

func (*BitMap) Set

func (bm *BitMap) Set(i int)

Set sets the ith bit of the bit map to 1. It panics if i is greater or equal to the defined max

type Boolable

type Boolable interface {
	Zeroer
	Oner
}

Boolable is any type that has a zero value and a one value, and is able to set itself to either

type ByIndiceser added in v0.9.15

type ByIndiceser interface {
	SelectByIndices(a, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
	SelectByIndicesB(input, outGrad, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
}

ByIndiceser allows for values in tensor `a` to be selected by the indices listed in the `indices` tensor.

type CS

type CS struct {
	// contains filtered or unexported fields
}

CS is a compressed sparse data structure. It can be used to represent both CSC and CSR sparse matrices. Refer to the individual creation functions for more information.

func CSCFromCoord

func CSCFromCoord(shape Shape, xs, ys []int, data interface{}) *CS

CSCFromCoord creates a new Compressed Sparse Column matrix given the coordinates. The data has to be a slice or it panics.

func CSRFromCoord

func CSRFromCoord(shape Shape, xs, ys []int, data interface{}) *CS

CSRFromCoord creates a new Compressed Sparse Row matrix given the coordinates. The data has to be a slice or it panics.

func NewCSC

func NewCSC(indices, indptr []int, data interface{}, opts ...ConsOpt) *CS

NewCSC creates a new Compressed Sparse Column matrix. The data has to be a slice, or it panics.

func NewCSR

func NewCSR(indices, indptr []int, data interface{}, opts ...ConsOpt) *CS

NewCSR creates a new Compressed Sparse Row matrix. The data has to be a slice or it panics.

func (*CS) Apply

func (t *CS) Apply(fn interface{}, opts ...FuncOpt) (Tensor, error)

func (*CS) AsCSC

func (t *CS) AsCSC()

func (*CS) AsCSR

func (t *CS) AsCSR()

func (*CS) At

func (t *CS) At(coord ...int) (interface{}, error)

func (*CS) Cap added in v0.9.15

func (a *CS) Cap() int

func (*CS) Clone

func (t *CS) Clone() interface{}

func (CS) Data

func (a CS) Data() interface{}

Data returns the representation of a slice.

func (*CS) DataOrder

func (t *CS) DataOrder() DataOrder

func (*CS) DataSize

func (t *CS) DataSize() int

func (*CS) Dense

func (t *CS) Dense() *Dense

Dense creates a Dense tensor from the compressed one.

func (*CS) Dims

func (t *CS) Dims() int

func (*CS) Dtype

func (t *CS) Dtype() Dtype

func (*CS) Engine

func (t *CS) Engine() Engine

func (*CS) Eq

func (t *CS) Eq(other interface{}) bool

func (*CS) Format

func (t *CS) Format(s fmt.State, c rune)

func (*CS) Get

func (a *CS) Get(i int) interface{}

Get returns the ith element of the underlying array of the *CS tensor.

func (*CS) GobDecode

func (t *CS) GobDecode(p []byte) (err error)

func (*CS) GobEncode

func (t *CS) GobEncode() (p []byte, err error)

func (*CS) Indices

func (t *CS) Indices() []int

func (*CS) Indptr

func (t *CS) Indptr() []int

func (*CS) IsManuallyManaged

func (t *CS) IsManuallyManaged() bool

func (*CS) IsNativelyAccessible

func (t *CS) IsNativelyAccessible() bool

func (*CS) IsScalar

func (t *CS) IsScalar() bool

func (*CS) Iterator

func (t *CS) Iterator() Iterator

func (*CS) Len added in v0.9.15

func (a *CS) Len() int

func (*CS) MemSize

func (t *CS) MemSize() uintptr

func (*CS) Memset

func (a *CS) Memset(x interface{}) error

Memset sets all values in the array.

func (*CS) NonZeroes

func (t *CS) NonZeroes() int

NonZeroes returns the nonzeroes. In academic literature this is often written as NNZ.

func (*CS) ReadNpy

func (t *CS) ReadNpy(r io.Reader) error

func (*CS) RequiresIterator

func (t *CS) RequiresIterator() bool

func (*CS) Reshape

func (t *CS) Reshape(...int) error

func (*CS) ScalarValue

func (t *CS) ScalarValue() interface{}

func (*CS) Set

func (a *CS) Set(i int, x interface{})

Set sets the value of the underlying array at the index i.

func (*CS) SetAt

func (t *CS) SetAt(v interface{}, coord ...int) error

func (*CS) Shape

func (t *CS) Shape() Shape

func (*CS) Size

func (t *CS) Size() int

func (*CS) Slice

func (t *CS) Slice(...Slice) (View, error)

func (*CS) Strides

func (t *CS) Strides() []int

func (*CS) String

func (t *CS) String() string

func (*CS) T

func (t *CS) T(axes ...int) error

T transposes the matrix. Concretely, it just changes a bit - the state goes from CSC to CSR, and vice versa.

func (*CS) Transpose

func (t *CS) Transpose() error

Transpose is a no-op. The data does not move

func (*CS) UT

func (t *CS) UT()

UT untransposes the CS

func (*CS) Uintptr

func (t *CS) Uintptr() uintptr

func (*CS) WriteNpy

func (t *CS) WriteNpy(w io.Writer) error

func (CS) Zero

func (a CS) Zero()

Zero zeroes out the underlying array of the *CS tensor.

type Cbrter

type Cbrter interface {
	Cbrt(a Tensor, opt ...FuncOpt) (Tensor, error)
}

Cbrter is any engine that can perform cube root on the values in a Tensor.

type Clamper

type Clamper interface {
	Clamp(a Tensor, min, max interface{}, opts ...FuncOpt) (Tensor, error)
}

Clamper is any engine that can clamp the values in a tensor to between min and max.

type Cloner

type Cloner interface {
	Clone() interface{}
}

Cloner is any type that can clone itself

type Concater

type Concater interface {
	Concat(t Tensor, axis int, others ...Tensor) (Tensor, error)
}

Concater is any engine that can concatenate multiple Tensors together

type ConsOpt

type ConsOpt func(Tensor)

ConsOpt is a tensor construction option.

func AsDenseDiag

func AsDenseDiag(backing interface{}) ConsOpt

func AsFortran

func AsFortran(backing interface{}, argMask ...[]bool) ConsOpt

AsFortran creates a *Dense with a col-major layout. If the optional backing argument is passed, the backing is assumed to be C-order (row major), and it will be transposed before being used.

func FromMemory

func FromMemory(ptr uintptr, memsize uintptr) ConsOpt

FromMemory is a construction option for creating a *Dense (for now) from a memory location. This is a useful option for super large tensors that don't fit into memory - the user may need to `mmap` a file for the tensor.

Bear in mind that at the current stage of the ConsOpt design, the order of the ConsOpt is important. FromMemory requires the *Dense's Dtype be set already. This would fail (and panic):

New(FromMemory(ptr, size), Of(Float64))

This would not:

New(Of(Float64), FromMemory(ptr, size))

This behaviour of requiring the ConsOpts to be in order might be changed in the future.

Memory must be manually managed by the caller. Tensors created with this construction option will not be returned to any pool - rather, all references to the pointers will be null'd. Use with caution.
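
A hypothetical sketch of the mmap use case on a Unix-like system (the file name and element size are made up for illustration; errors are elided for brevity; note that Of(Float64) must come before FromMemory, as described above):

f, _ := os.OpenFile("data.f64", os.O_RDWR, 0644) // hypothetical file of float64s
fi, _ := f.Stat()
buf, _ := syscall.Mmap(int(f.Fd()), 0, int(fi.Size()), syscall.PROT_READ|syscall.PROT_WRITE, syscall.MAP_SHARED)
ptr := uintptr(unsafe.Pointer(&buf[0]))
T := New(Of(Float64), WithShape(int(fi.Size())/8), FromMemory(ptr, uintptr(len(buf))))
// ... use T; the memory is manually managed by the caller ...
syscall.Munmap(buf)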

func FromScalar

func FromScalar(x interface{}, argMask ...[]bool) ConsOpt

FromScalar is a construction option for representing a scalar value as a Tensor

func Of

func Of(a Dtype) ConsOpt

Of is a construction option for a Tensor.

func WithBacking

func WithBacking(x interface{}, argMask ...[]bool) ConsOpt

WithBacking is a construction option for a Tensor. Use it as such:

backing := []float64{1,2,3,4}
t := New(WithBacking(backing))

It can be used with other construction options like WithShape

func WithEngine

func WithEngine(e Engine) ConsOpt

WithEngine is a construction option that would cause a Tensor to be linked with an execution engine.

func WithMask

func WithMask(x interface{}) ConsOpt

WithMask is a construction option for a Tensor. Use it as such:

mask := []bool{true,true,false,false}
t := New(WithBacking(backing), WithMask(mask))

It can be used with other construction options like WithShape. The supplied mask can be any type. If non-boolean, then the tensor mask is set to true wherever a non-zero value is obtained.

func WithShape

func WithShape(dims ...int) ConsOpt

WithShape is a construction option for a Tensor. It creates the ndarray in the required shape.

type Cuber

type Cuber interface {
	Cube(a Tensor, opts ...FuncOpt) (Tensor, error)
}

Cuber is any engine that can cube the values elementwise in a Tensor.

type DataOrder

type DataOrder byte

DataOrder is a flag that indicates the order of data. The default DataOrder (0) is what this package uses by default.

const (
	// ColMajor indicates that the data is stored in a col-major way.
	// A data can only be stored in either ColMajor(1) or RowMajor(0).
	// The way the DataOrder was designed causes the default to be RowMajor
	ColMajor DataOrder = 1 << iota
	// NonContiguous indicates that the data is not contiguous.
	// A data can either be Contiguous (0) or NonContiguous (2).
	// The way DataOrder was designed causes the default to be Contiguous.
	NonContiguous

	// Transposed indicates that the data has been transposed
	Transposed
)

func MakeDataOrder

func MakeDataOrder(fs ...DataOrder) (retVal DataOrder)

MakeDataOrder makes a data order. Typical examples:

MakeDataOrder(DataOrder(0))            // Row Major, contiguous
MakeDataOrder(NonContiguous)           // Row Major, non-contiguous
MakeDataOrder(ColMajor)                // Col Major, contiguous
MakeDataOrder(ColMajor, NonContiguous) // what it says on the tin

func (DataOrder) HasSameOrder

func (f DataOrder) HasSameOrder(other DataOrder) bool

HasSameOrder returns true if both data orders are the same (either both are ColMajor or both are RowMajor)

func (DataOrder) IsColMajor

func (f DataOrder) IsColMajor() bool

IsColMajor returns true if the data order describes a col-major data

func (DataOrder) IsContiguous

func (f DataOrder) IsContiguous() bool

IsContiguous returns true if the data order describes a contiguous data.

func (DataOrder) IsNotContiguous

func (f DataOrder) IsNotContiguous() bool

IsNotContiguous returns true if the data order describes a noncontiguous data.

func (DataOrder) IsRowMajor

func (f DataOrder) IsRowMajor() bool

IsRowMajor returns true if the data order describes a row-major data

func (DataOrder) IsTransposed

func (f DataOrder) IsTransposed() bool

IsTransposed returns true if the data order describes data that has been transposed (but not moved)

func (DataOrder) String

func (f DataOrder) String() string

type Dataer

type Dataer interface {
	Data() interface{}
}

Dataer is any type that returns the data in its original form (typically a Go slice of something)

type Dense

type Dense struct {
	AP
	// contains filtered or unexported fields
}

Dense represents a dense tensor - this is the most common form of tensor. It can be used to represent vectors, matrices, etc.

func FromArrowArray added in v0.9.11

func FromArrowArray(a arrowArray.Interface) *Dense

FromArrowArray converts an "arrow/array".Interface into a Tensor of matching DataType.

func FromArrowTensor added in v0.9.11

func FromArrowTensor(a arrowTensor.Interface) *Dense

FromArrowTensor converts an "arrow/tensor".Interface into a Tensor of matching DataType.

func FromMat64

func FromMat64(m *mat.Dense, opts ...FuncOpt) *Dense

FromMat64 converts a *mat.Dense into a *Dense.

func I

func I(dt Dtype, r, c, k int) *Dense

I creates an identity matrix (usually a square matrix) with 1s along the diagonal and zeroes elsewhere, like so:

Matrix(4,4)
⎡1  0  0  0⎤
⎢0  1  0  0⎥
⎢0  0  1  0⎥
⎣0  0  0  1⎦

While technically an identity matrix is a square matrix, in an attempt to keep feature parity with Numpy, the I() function allows you to create non-square matrices, as well as specify an index at which the diagonal starts.

For example:

T = I(Float64, 4, 4, 1)

Yields:

⎡0  1  0  0⎤
⎢0  0  1  0⎥
⎢0  0  0  1⎥
⎣0  0  0  0⎦

The index k can also be a negative number:

T = I(Float64, 4, 4, -1)

Yields:

⎡0  0  0  0⎤
⎢1  0  0  0⎥
⎢0  1  0  0⎥
⎣0  0  1  0⎦

func New

func New(opts ...ConsOpt) *Dense

New creates a new Dense Tensor. For sparse arrays use their relevant construction function

func NewDense

func NewDense(dt Dtype, shape Shape, opts ...ConsOpt) *Dense

NewDense creates a new *Dense. It tries its best to get from the tensor pool.

func Ones

func Ones(dt Dtype, shape ...int) *Dense

Ones creates a *Dense of the provided shape and type, filled with ones

func (*Dense) Add

func (t *Dense) Add(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)

Add performs t + other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

Example (Basic)

By default, arithmetic operations are safe

var T1, T2, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Add(T2)
fmt.Printf("Default operation is safe\n==========================\nT3 = T1 + T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
T3, _ = V.Add(T2)
fmt.Printf("Default operation is safe (sliced operations)\n=============================================\nT3 = T1[0:2, 0:2] + T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output:

Default operation is safe
==========================
T3 = T1 + T2
T3:
⎡10  12  14⎤
⎢16  18  20⎥
⎣22  24  26⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations)
=============================================
T3 = T1[0:2, 0:2] + T2
T3:
⎡10  12⎤
⎣15  17⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Incr)

Incrementing a tensor is also a function option provided by the package

var T1, T2, T3, Incr, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
Incr = New(WithBacking([]float64{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3))
T3, _ = T1.Add(T2, WithIncr(Incr))
fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 + T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)

// Operations on sliced tensors are also allowed. Note that your Incr tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
Incr = New(WithBacking([]float64{100, 100, 100, 100}), WithShape(2, 2))
T3, _ = V.Add(T2, WithIncr(Incr))
fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 + T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output:

Incr tensor passed in
======================
Incr += T1 + T2
Incr == T3: true
T3:
⎡110  112  114⎤
⎢116  118  120⎥
⎣122  124  126⎦

Incr tensor passed in (sliced tensor)
======================================
Incr += T1 + T2
Incr == T3: true
T3:
⎡110  112⎤
⎣115  117⎦
Example (Reuse)

An optional reuse tensor can also be specified with the WithReuse function option

var T1, V, T2, Reuse, T3 *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3))
T3, _ = T1.Add(T2, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// You can also use it on operations on sliced tensors - note your reuse tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
Reuse = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.Add(T2, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output:

Reuse tensor passed in
======================
T3 == Reuse: true
T3:
⎡10  12  14⎤
⎢16  18  20⎥
⎣22  24  26⎦

Reuse tensor passed in (sliced tensor)
======================================
T3 == Reuse: true
T3:
⎡10  12⎤
⎣15  17⎦
Example (Reuse_operand)

An optional reuse tensor can also be specified with the WithReuse function option. Passing in an operand would not cause a problem.

var T1, T2, T3 *Dense

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Add(T2, WithReuse(T1))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == T1: %t\nT3:\n%v\n", T3 == T1, T3)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Add(T2, WithReuse(T2))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == T2: %t\nT3:\n%v\n", T3 == T2, T3)
Output:

Reuse tensor passed in
======================
T3 == T1: true
T3:
⎡10  12  14⎤
⎢16  18  20⎥
⎣22  24  26⎦

Reuse tensor passed in
======================
T3 == T2: true
T3:
⎡10  12  14⎤
⎢16  18  20⎥
⎣22  24  26⎦
Example (Unsafe)

To perform unsafe operations, use the `UseUnsafe` function option

var T1, T2, T3, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Add(T2, UseUnsafe())
fmt.Printf("Unsafe Operation\n================\nT3 = T1 + T2\nT1 == T3: %t\nT1:\n%v", T1 == T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))

V.Add(T2, UseUnsafe()) // unsafe overwrites the data in T1
fmt.Printf("Unsafe Operation on sliced Tensors\n==================================\nV = T1[0:2, 0:2] + T2\nV:\n%v\n", V)
fmt.Printf("Naturally, T1 is mutated too:\n%v", T1)
Output:

Unsafe Operation
================
T3 = T1 + T2
T1 == T3: true
T1:
⎡10  12  14⎤
⎢16  18  20⎥
⎣22  24  26⎦
Unsafe Operation on sliced Tensors
==================================
V = T1[0:2, 0:2] + T2
V:
⎡10  12⎤
⎣15  17⎦

Naturally, T1 is mutated too:
⎡10  12   2⎤
⎢15  17   5⎥
⎣ 6   7   8⎦

func (*Dense) AddScalar

func (t *Dense) AddScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)

AddScalar performs t + other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

Example (Basic)

By default, arithmetic operations are safe

var T1, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
T3, _ = T1.AddScalar(float32(5), true)
fmt.Printf("Default operation is safe (tensor is left operand)\n==========================\nT3 = T1 + 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T3, _ = T1.AddScalar(float32(5), false)
fmt.Printf("Default operation is safe (tensor is right operand)\n==========================\nT3 = 5 + T1\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(1, 3))
V = sliced.(*Dense)
T3, _ = V.AddScalar(float32(5), true)
fmt.Printf("Default operation is safe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 1:3] + 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(1, 3))
V = sliced.(*Dense)
T3, _ = V.AddScalar(float32(5), false)
fmt.Printf("Default operation is safe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 + T1[:, 1:3]\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output:

Default operation is safe (tensor is left operand)
==========================
T3 = T1 + 5
T3:
⎡ 5   6   7⎤
⎢ 8   9  10⎥
⎣11  12  13⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (tensor is right operand)
==========================
T3 = 5 + T1
T3:
⎡ 5   6   7⎤
⎢ 8   9  10⎥
⎣11  12  13⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations - tensor is left operand)
=============================================
T3 = T1[:, 1:3] + 5
T3:
⎡ 6   7⎤
⎢ 9  10⎥
⎣12  13⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations - tensor is right operand)
=============================================
T3 = 5 + T1[:, 1:3]
T3:
⎡ 6   7⎤
⎢ 9  10⎥
⎣12  13⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Incr)

Incrementing a tensor is also a function option provided by the package

var T1, T3, Incr, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
Incr = New(WithBacking([]float32{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3))
T3, _ = T1.AddScalar(float32(5), true, WithIncr(Incr))
fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 + T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)

// Operations on sliced tensors are also allowed. Note that your Incr tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
Incr = New(WithBacking([]float32{100, 100, 100, 100}), WithShape(2, 2))
T3, _ = V.AddScalar(float32(5), true, WithIncr(Incr))
fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 + T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output:

Incr tensor passed in
======================
Incr += T1 + 5
Incr == T3: true
T3:
⎡105  106  107⎤
⎢108  109  110⎥
⎣111  112  113⎦

Incr tensor passed in (sliced tensor)
======================================
Incr += T1 + 5
Incr == T3: true
T3:
⎡105  106⎤
⎣108  109⎦
Example (Reuse)

Reuse tensors may be used with the WithReuse() function option.

var T1, V, Reuse, T3 *Dense
var sliced Tensor

// Tensor is left operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3))
T3, _ = T1.AddScalar(float32(5), true, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (Tensor is left operand)\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// Tensor is right operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3))
T3, _ = T1.AddScalar(float32(5), false, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (Tensor is right operand)\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// Tensor is left operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.AddScalar(float32(5), true, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// Tensor is right operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.AddScalar(float32(5), false, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output:

Reuse tensor passed in (Tensor is left operand)
======================
T3 == Reuse: true
T3:
⎡ 5   6   7⎤
⎢ 8   9  10⎥
⎣11  12  13⎦

Reuse tensor passed in (Tensor is right operand)
======================
T3 == Reuse: true
T3:
⎡ 5   6   7⎤
⎢ 8   9  10⎥
⎣11  12  13⎦

Reuse tensor passed in (sliced tensor - Tensor is left operand)
======================================
T3 == Reuse: true
T3:
⎡5  6⎤
⎣8  9⎦

Reuse tensor passed in (sliced tensor - Tensor is right operand)
======================================
T3 == Reuse: true
T3:
⎡5  6⎤
⎣8  9⎦
Example (Unsafe)
var T1, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
T3, _ = T1.AddScalar(float32(5), true, UseUnsafe())
fmt.Printf("Operation is unsafe (tensor is left operand)\n==========================\nT3 = T1 + 5\nT3:\n%v\nT3 == T1: %t\nT1 is changed:\n%v\n", T3, T3 == T1, T1)

T3, _ = T1.AddScalar(float32(5), false, UseUnsafe())
fmt.Printf("Operation is unsafe (tensor is right operand)\n==========================\nT3 = 5 + T1\nT3:\n%v\nT3 == T1: %t\nT1 is changed:\n%v\n", T3, T3 == T1, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(0, 2))
V = sliced.(*Dense)
T3, _ = V.AddScalar(float32(5), true, UseUnsafe())
fmt.Printf("Operation is unsafe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 0:2] + 5\nT3:\n%v\nsliced == T3: %t\nT1 is changed:\n%v\n", T3, sliced == T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(0, 2))
V = sliced.(*Dense)
T3, _ = V.AddScalar(float32(5), false, UseUnsafe())
fmt.Printf("Operation is unsafe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 + T1[:, 0:2]\nT3:\n%v\nsliced == T3: %t\nT1 is changed:\n%v\n", T3, sliced == T3, T1)
Output:

Operation is unsafe (tensor is left operand)
==========================
T3 = T1 + 5
T3:
⎡ 5   6   7⎤
⎢ 8   9  10⎥
⎣11  12  13⎦

T3 == T1: true
T1 is changed:
⎡ 5   6   7⎤
⎢ 8   9  10⎥
⎣11  12  13⎦

Operation is unsafe (tensor is right operand)
==========================
T3 = 5 + T1
T3:
⎡10  11  12⎤
⎢13  14  15⎥
⎣16  17  18⎦

T3 == T1: true
T1 is changed:
⎡10  11  12⎤
⎢13  14  15⎥
⎣16  17  18⎦

Operation is unsafe (sliced operations - tensor is left operand)
=============================================
T3 = T1[:, 0:2] + 5
T3:
⎡ 5   6⎤
⎢ 8   9⎥
⎣11  12⎦

sliced == T3: true
T1 is changed:
⎡ 5   6   2⎤
⎢ 8   9   5⎥
⎣11  12   8⎦

Operation is unsafe (sliced operations - tensor is right operand)
=============================================
T3 = 5 + T1[:, 0:2]
T3:
⎡ 5   6⎤
⎢ 8   9⎥
⎣11  12⎦

sliced == T3: true
T1 is changed:
⎡ 5   6   2⎤
⎢ 8   9   5⎥
⎣11  12   8⎦

func (*Dense) Apply

func (t *Dense) Apply(fn interface{}, opts ...FuncOpt) (retVal Tensor, err error)

Apply applies a function to all the values in the tensor.

Example
package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	a := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4}))
	cube := func(a float64) float64 { return a * a * a }

	b, err := a.Apply(cube)
	if err != nil {
		fmt.Printf("b is an error %v", err)
	}
	fmt.Printf("a and b are the same object - %t\n", a.Eq(b))
	fmt.Printf("a is unmutated\n%v\n", a)

	c, err := a.Apply(cube, tensor.WithReuse(a))
	if err != nil {
		fmt.Printf("c is an error %v\n", err)
	}
	fmt.Printf("a and c are the same object - %t\n", a.Eq(c))

	fmt.Printf("a is now mutated\n%v\n", a)
}
Output:

a and b are the same object - false
a is unmutated
⎡1  2⎤
⎣3  4⎦

a and c are the same object - true
a is now mutated
⎡ 1   8⎤
⎣27  64⎦

func (*Dense) Argmax

func (t *Dense) Argmax(axis int) (retVal *Dense, err error)

Argmax finds the index of the max value along the axis provided

func (*Dense) Argmin

func (t *Dense) Argmin(axis int) (retVal *Dense, err error)

Argmin finds the index of the min value along the axis provided
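
A minimal sketch (not one of the package's own examples) of Argmax and Argmin on a small matrix. The index values in the comments follow from the data used; the exact printed layout of the returned index tensors is not asserted here.

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	// A 2×3 matrix:
	// ⎡1  5  2⎤
	// ⎣7  0  4⎦
	T := tensor.New(tensor.WithShape(2, 3), tensor.WithBacking([]float64{1, 5, 2, 7, 0, 4}))

	rowArgmax, err := T.Argmax(1) // index of the max value within each row
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(rowArgmax) // for this data, the row-wise argmax indices are 1 and 0

	colArgmin, err := T.Argmin(0) // index of the min value within each column
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(colArgmin) // for this data, the column-wise argmin indices are 0, 1 and 0
}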

func (*Dense) At

func (t *Dense) At(coords ...int) (interface{}, error)

At returns the value at the given coordinate
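
A minimal sketch of reading a single element with At. The points being illustrated are the coordinate order (row, then column, matching the shape) and the type assertion needed on the interface{} return value.

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	T := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4}))

	v, err := T.At(1, 0) // row 1, column 0
	if err != nil {
		fmt.Println(err)
		return
	}
	f := v.(float64) // At returns an interface{}; assert to the tensor's Dtype
	fmt.Println(f)   // 3 for this backing
}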

func (*Dense) Cap added in v0.9.15

func (a *Dense) Cap() int

func (*Dense) Clone

func (t *Dense) Clone() interface{}

Clone clones a *Dense. It creates a copy of the data, and a new underlying array is allocated.
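
Because Clone returns an interface{}, a type assertion back to *Dense is needed before the clone can be used as a tensor. A minimal sketch (the Set call is assumed here to be the flat element setter that pairs with Get):

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	a := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4}))

	b := a.Clone().(*tensor.Dense) // Clone returns interface{}; assert back to *Dense

	// The clone has its own backing array, so mutating it leaves a untouched.
	b.Set(0, float64(100)) // assumed: Set(i, x) writes the ith element of the backing array
	fmt.Println(a.Data())  // [1 2 3 4]
	fmt.Println(b.Data())  // [100 2 3 4]
}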

func (*Dense) ClumpMasked

func (t *Dense) ClumpMasked() []Slice

ClumpMasked returns a list of slices corresponding to the masked clumps of a 1-D array. Added to match numpy function names.

func (*Dense) ClumpUnmasked

func (t *Dense) ClumpUnmasked() []Slice

ClumpUnmasked returns a list of slices corresponding to the unmasked clumps of a 1-D array. Added to match numpy function names.

func (*Dense) Concat

func (t *Dense) Concat(axis int, Ts ...*Dense) (retVal *Dense, err error)

Concat concatenates the other tensors along the given axis. It is like Numpy's concatenate() function.
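
A minimal sketch of concatenating two matrices along axis 0; with the Numpy-like semantics described above, joining two (2, 2) matrices row-wise yields a (4, 2) result.

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	a := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{0, 1, 2, 3}))
	b := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{4, 5, 6, 7}))

	c, err := a.Concat(0, b) // concatenate along axis 0: the rows of b follow the rows of a
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(c.Shape()) // (4, 2)
	fmt.Printf("%v", c)
}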

func (*Dense) CopyTo

func (t *Dense) CopyTo(other *Dense) error

CopyTo copies the underlying data to the destination *Dense. The original data is untouched. Note: CopyTo doesn't care about the metadata of the destination *Dense. Take for example:

T = New(Of(Float64), WithShape(6))
T2 = New(Of(Float64), WithShape(2, 3))
err = T.CopyTo(T2) // err == nil

The only time that this will fail is if the underlying sizes are different

func (*Dense) Data

func (t *Dense) Data() interface{}

Data returns the underlying array. If the *Dense represents a scalar value, the scalar value is returned instead

Example

Data shows how the shape of the *Dense actually affects the return value of .Data().

T := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
fmt.Printf("Basics:\n======\nAny kind of arrays: %v\n", T.Data())

fmt.Printf("\nScalar-like\n===========\n")
T = New(WithShape(), FromScalar(3.14))
fmt.Printf("WithShape(), FromScalar: %v\n", T.Data())

T = New(WithShape(), WithBacking([]float64{3.14}))
fmt.Printf("WithShape(), With a slice of 1 as backing: %v\n", T.Data())

T = New(WithShape(1), FromScalar(3.14))
fmt.Printf("WithShape(1), With an initial scalar: %v\n", T.Data())

T = New(WithShape(1, 1), WithBacking([]float64{3.14}))
fmt.Printf("WithShape(1, 1), With an initial scalar: %v\n", T.Data())

T = New(WithShape(1, 1), FromScalar(3.14))
fmt.Printf("WithShape(1, 1), With an initial scalar: %v\n", T.Data())

T.Reshape()
fmt.Printf("After reshaping to (): %v\n", T.Data())
Output:

Basics:
======
Any kind of arrays: [1 2 3 4]

Scalar-like
===========
WithShape(), FromScalar: 3.14
WithShape(), With a slice of 1 as backing: 3.14
WithShape(1), With an initial scalar: [3.14]
WithShape(1, 1), With an initial scalar: [3.14]
WithShape(1, 1), With an initial scalar: [3.14]
After reshaping to (): 3.14

func (*Dense) DataSize

func (t *Dense) DataSize() int

DataSize returns the size of the underlying array. Typically t.DataSize() == t.Shape().TotalSize()

func (*Dense) Div

func (t *Dense) Div(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)

Div performs t ÷ other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

Example (Basic)

By default, arithmetic operations are safe

var T1, T2, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Div(T2)
fmt.Printf("Default operation is safe\n==========================\nT3 = T1 ÷ T2\nT3:\n%1.1v\nT1 is unchanged:\n%1.1v\n", T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
T3, _ = V.Div(T2)
fmt.Printf("Default operation is safe (sliced operations)\n=============================================\nT3 = T1[0:2, 0:2] ÷ T2\nT3:\n%1.1v\nT1 is unchanged:\n%1.1v\n", T3, T1)
Output:

Default operation is safe
==========================
T3 = T1 ÷ T2
T3:
⎡   0  0.09   0.2⎤
⎢ 0.2   0.3   0.3⎥
⎣ 0.4   0.4   0.4⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations)
=============================================
T3 = T1[0:2, 0:2] ÷ T2
T3:
⎡   0  0.09⎤
⎣ 0.2   0.3⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Incr)

Incrementing a tensor is also a function option provided by the package

var T1, T2, T3, Incr, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
Incr = New(WithBacking([]float64{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3))
T3, _ = T1.Div(T2, WithIncr(Incr))
fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 ÷ T2\nIncr == T3: %t\nT3:\n%1.5v\n", Incr == T3, T3)

// Operations on sliced tensors are also allowed. Note that your Incr tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
Incr = New(WithBacking([]float64{100, 100, 100, 100}), WithShape(2, 2))
T3, _ = V.Div(T2, WithIncr(Incr))
fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 ÷ T2\nIncr == T3: %t\nT3:\n%1.5v\n", Incr == T3, T3)
Output:

Incr tensor passed in
======================
Incr += T1 ÷ T2
Incr == T3: true
T3:
⎡   100  100.09  100.17⎤
⎢100.23  100.29  100.33⎥
⎣100.38  100.41  100.44⎦

Incr tensor passed in (sliced tensor)
======================================
Incr += T1 ÷ T2
Incr == T3: true
T3:
⎡   100  100.09⎤
⎣100.25  100.31⎦
Example (Reuse)

An optional reuse tensor can also be specified with the WithReuse function option

var T1, V, T2, Reuse, T3 *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3))
T3, _ = T1.Div(T2, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == Reuse: %t\nT3:\n%1.1v\n", T3 == Reuse, T3)

// You can also use it on operations on sliced tensors - note your reuse tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
Reuse = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.Div(T2, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor)\n======================================\nT3 == Reuse: %t\nT3:\n%1.1v", T3 == Reuse, T3)
Output:

Reuse tensor passed in
======================
T3 == Reuse: true
T3:
⎡   0  0.09   0.2⎤
⎢ 0.2   0.3   0.3⎥
⎣ 0.4   0.4   0.4⎦

Reuse tensor passed in (sliced tensor)
======================================
T3 == Reuse: true
T3:
⎡   0  0.09⎤
⎣ 0.2   0.3⎦
Example (Reuse_operand)

An optional reuse tensor can also be specified with the WithReuse function option. Passing in one of the operands as the reuse tensor does not cause a problem.

var T1, T2, T3 *Dense

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Div(T2, WithReuse(T1))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == T1: %t\nT3:\n%1.1v\n", T3 == T1, T3)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Div(T2, WithReuse(T2))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == T2: %t\nT3:\n%1.1v\n", T3 == T2, T3)
Output:

Reuse tensor passed in
======================
T3 == T1: true
T3:
⎡   0  0.09   0.2⎤
⎢ 0.2   0.3   0.3⎥
⎣ 0.4   0.4   0.4⎦

Reuse tensor passed in
======================
T3 == T2: true
T3:
⎡   0  0.09   0.2⎤
⎢ 0.2   0.3   0.3⎥
⎣ 0.4   0.4   0.4⎦
Example (Unsafe)

To perform unsafe operations, use the `UseUnsafe` function option

var T1, T2, T3, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Div(T2, UseUnsafe())
fmt.Printf("Unsafe Operation\n================\nT3 = T1 ÷ T2\nT1 == T3: %t\nT1:\n%1.1v", T1 == T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))

V.Div(T2, UseUnsafe()) // unsafe overwrites the data in T1
fmt.Printf("Unsafe Operation on sliced Tensors\n==================================\nV = T1[0:2, 0:2] ÷ T2\nV:\n%1.1v\n", V)
fmt.Printf("Naturally, T1 is mutated too:\n%1.1v", T1)
Output:

Unsafe Operation
================
T3 = T1 ÷ T2
T1 == T3: true
T1:
⎡   0  0.09   0.2⎤
⎢ 0.2   0.3   0.3⎥
⎣ 0.4   0.4   0.4⎦
Unsafe Operation on sliced Tensors
==================================
V = T1[0:2, 0:2] ÷ T2
V:
⎡   0  0.09⎤
⎣ 0.2   0.3⎦

Naturally, T1 is mutated too:
⎡   0  0.09     2⎤
⎢ 0.2   0.3     5⎥
⎣   6     7     8⎦

func (*Dense) DivScalar

func (t *Dense) DivScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)

DivScalar performs t ÷ other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

Example (Basic)

By default, arithmetic operations are safe

var T1, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
T3, _ = T1.DivScalar(float32(5), true)
fmt.Printf("Default operation is safe (tensor is left operand)\n==========================\nT3 = T1 / 5\nT3:\n%1.1v\nT1 is unchanged:\n%1.1v\n", T3, T1)

T3, _ = T1.DivScalar(float32(5), false)
fmt.Printf("Default operation is safe (tensor is right operand)\n==========================\nT3 = 5 / T1\nT3:\n%1.1v\nT1 is unchanged:\n%1.1v\n", T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(1, 3))
V = sliced.(*Dense)
T3, _ = V.DivScalar(float32(5), true)
fmt.Printf("Default operation is safe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 1:3] + 5\nT3:\n%1.1v\nT1 is unchanged:\n%1.1v\n", T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(1, 3))
V = sliced.(*Dense)
T3, _ = V.DivScalar(float32(5), false)
fmt.Printf("Default operation is safe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 / T1[:, 1:3]\nT3:\n%1.1v\nT1 is unchanged:\n%1.1v\n", T3, T1)
Output:

Default operation is safe (tensor is left operand)
==========================
T3 = T1 / 5
T3:
⎡  0  0.2  0.4⎤
⎢0.6  0.8    1⎥
⎣  1    1    2⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (tensor is right operand)
==========================
T3 = 5 / T1
T3:
⎡+Inf     5     2⎤
⎢   2     1     1⎥
⎣ 0.8   0.7   0.6⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations - tensor is left operand)
=============================================
T3 = T1[:, 1:3] / 5
T3:
⎡0.2  0.4⎤
⎢0.8    1⎥
⎣  1    2⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations - tensor is right operand)
=============================================
T3 = 5 / T1[:, 1:3]
T3:
⎡  5    2⎤
⎢  1    1⎥
⎣0.7  0.6⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Incr)

Incrementing a tensor is also a function option provided by the package

var T1, T3, Incr, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
Incr = New(WithBacking([]float32{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3))
T3, _ = T1.DivScalar(float32(5), true, WithIncr(Incr))
fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 / T2\nIncr == T3: %t\nT3:\n%3.1v\n", Incr == T3, T3)

// Operations on sliced tensors are also allowed. Note that your Incr tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
Incr = New(WithBacking([]float32{100, 100, 100, 100}), WithShape(2, 2))
T3, _ = V.DivScalar(float32(5), true, WithIncr(Incr))
fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 / T2\nIncr == T3: %t\nT3:\n%3.1v\n", Incr == T3, T3)
Output:

Incr tensor passed in
======================
Incr += T1 / 5
Incr == T3: true
T3:
⎡1e+02  1e+02  1e+02⎤
⎢1e+02  1e+02  1e+02⎥
⎣1e+02  1e+02  1e+02⎦

Incr tensor passed in (sliced tensor)
======================================
Incr += T1 / 5
Incr == T3: true
T3:
⎡1e+02  1e+02⎤
⎣1e+02  1e+02⎦
Example (Reuse)

Reuse tensors may be used with the WithReuse() function option.

var T1, V, Reuse, T3 *Dense
var sliced Tensor

// Tensor is left operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3))
T3, _ = T1.DivScalar(float32(5), true, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (Tensor is left operand)\n======================\nT3 == Reuse: %t\nT3:\n%1.1v\n", T3 == Reuse, T3)

// Tensor is right operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3))
T3, _ = T1.DivScalar(float32(5), false, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (Tensor is right operand)\n======================\nT3 == Reuse: %t\nT3:\n%1.1v\n", T3 == Reuse, T3)

// Tensor is left operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.DivScalar(float32(5), true, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%1.1v\n", T3 == Reuse, T3)

// Tensor is right operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.DivScalar(float32(5), false, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%1.1v", T3 == Reuse, T3)
Output:

Reuse tensor passed in (Tensor is left operand)
======================
T3 == Reuse: true
T3:
⎡  0  0.2  0.4⎤
⎢0.6  0.8    1⎥
⎣  1    1    2⎦

Reuse tensor passed in (Tensor is right operand)
======================
T3 == Reuse: true
T3:
⎡+Inf     5     2⎤
⎢   2     1     1⎥
⎣ 0.8   0.7   0.6⎦

Reuse tensor passed in (sliced tensor - Tensor is left operand)
======================================
T3 == Reuse: true
T3:
⎡  0  0.2⎤
⎣0.6  0.8⎦

Reuse tensor passed in (sliced tensor - Tensor is right operand)
======================================
T3 == Reuse: true
T3:
⎡+Inf     5⎤
⎣   2     1⎦
Example (Unsafe)
var T1, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
T3, _ = T1.DivScalar(float32(5), true, UseUnsafe())
fmt.Printf("Operation is unsafe (tensor is left operand)\n==========================\nT3 = T1 / 5\nT3:\n%1.1v\nT3 == T1: %t\nT1 is changed:\n%1.1v\n", T3, T3 == T1, T1)

T3, _ = T1.DivScalar(float32(5), false, UseUnsafe())
fmt.Printf("Operation is unsafe (tensor is right operand)\n==========================\nT3 = 5 / T1\nT3:\n%1.1v\nT3 == T1: %t\nT1 is changed:\n%1.1v\n", T3, T3 == T1, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(0, 2))
V = sliced.(*Dense)
T3, _ = V.DivScalar(float32(5), true, UseUnsafe())
fmt.Printf("Operation is unsafe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 0:2] + 5\nT3:\n%1.1v\nsliced == T3: %t\nT1 is changed:\n%1.1v\n", T3, sliced == T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(0, 2))
V = sliced.(*Dense)
T3, _ = V.DivScalar(float32(5), false, UseUnsafe())
fmt.Printf("Operation is unsafe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 / T1[:, 0:2]\nT3:\n%1.1v\nsliced == T3: %t\nT1 is changed:\n%1.1v\n", T3, sliced == T3, T1)
Output:

Operation is unsafe (tensor is left operand)
==========================
T3 = T1 / 5
T3:
⎡  0  0.2  0.4⎤
⎢0.6  0.8    1⎥
⎣  1    1    2⎦

T3 == T1: true
T1 is changed:
⎡  0  0.2  0.4⎤
⎢0.6  0.8    1⎥
⎣  1    1    2⎦

Operation is unsafe (tensor is right operand)
==========================
T3 = 5 / T1
T3:
⎡ +Inf  2e+01  1e+01⎤
⎢    8      6      5⎥
⎣    4      4      3⎦

T3 == T1: true
T1 is changed:
⎡ +Inf  2e+01  1e+01⎤
⎢    8      6      5⎥
⎣    4      4      3⎦

Operation is unsafe (sliced operations - tensor is left operand)
=============================================
T3 = T1[:, 0:2] / 5
T3:
⎡  0  0.2⎤
⎢0.6  0.8⎥
⎣  1    1⎦

sliced == T3: true
T1 is changed:
⎡  0  0.2    2⎤
⎢0.6  0.8    5⎥
⎣  1    1    8⎦

Operation is unsafe (sliced operations - tensor is right operand)
=============================================
T3 = 5 / T1[:, 0:2]
T3:
⎡+Inf     5⎤
⎢   2     1⎥
⎣ 0.8   0.7⎦

sliced == T3: true
T1 is changed:
⎡+Inf     5     2⎤
⎢   2     1     5⎥
⎣ 0.8   0.7     8⎦

func (*Dense) Dtype

func (t *Dense) Dtype() Dtype

Dtype returns the data type of the *Dense tensor.

func (*Dense) ElEq

func (t *Dense) ElEq(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)

ElEq performs t == other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

Example (Basic)

Comparison functions return a Tensor of bool by default. To return the same type, simply pass in the AsSameType function option

var T1, T2, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3, _ = T1.ElEq(T2)
fmt.Println("Basic operations are safe\n=========================\nT3 = T1 == T2")
fmt.Printf("T3:\n%v\n", T3)

// To return the same type, use the AsSameType function option
T3, _ = T1.ElEq(T2, AsSameType())
fmt.Println("Returning same type\n===================")
fmt.Printf("T3 (Returns Same Type):\n%v\n", T3)

// Sliced tensors are safe too
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
T3, _ = V.ElEq(T2)
fmt.Printf("Safe slicing\n============\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)

// Similarly for tensors that return the same type
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
T3, _ = V.ElEq(T2, AsSameType()) // AsSameType returns a tensor of the same type
fmt.Printf("Safe slicing (Same type)\n========================\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)
Output:

Basic operations are safe
=========================
T3 = T1 == T2
T3:
⎡true  true  true⎤
⎢true  true  true⎥
⎣true  true  true⎦

Returning same type
===================
T3 (Returns Same Type):
⎡1  1  1⎤
⎢1  1  1⎥
⎣1  1  1⎦

Safe slicing
============
T3:
⎡false  false⎤
⎣ true   true⎦

T1 remains unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Safe slicing (Same type)
========================
T3:
⎡0  0⎤
⎣1  1⎦

T1 remains unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Reuse)

The WithReuse function option can be used to pass in reuse tensors. Be sure to also pass in the AsSameType() function option; otherwise the reuse tensor is expected to be a tensor of bools.

var T1, T2, T3, V *Dense
var sliced Tensor
// The reuse tensor is a Tensor of bools...
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3 = New(WithBacking([]bool{
	true, false, true,
	false, true, false,
	true, false, true}), WithShape(3, 3))
T1.ElEq(T2, WithReuse(T3)) // note that AsSameType is not used here
fmt.Printf("Default behaviour: Reuse tensor is expected to be of Bools\n==========================================================\nT3:\n%v\n", T3)

// If you want to use a Reuse tensor of the same type, then be sure to also pass in the AsSameType() flag
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3 = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) // The reuse tensor is a Tensor of Float64...
T1.ElEq(T2, WithReuse(T3), AsSameType())                         // AsSameType is used to return float64s
fmt.Printf("Reuse With Same Type\n=====================\nT3:\n%v\n", T3)

// Slicing is similar:
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T3 = New(WithBacking([]bool{true, true, true, true}), WithShape(2, 2))
V.ElEq(T2, WithReuse(T3))
fmt.Printf("Reuse on sliced tensors\n======================\nT3\n%v\n", T3)

// Again, bear in mind same types
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T3 = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2))
V.ElEq(T2, WithReuse(T3), AsSameType())
fmt.Printf("Reuse on sliced tensors (same type)\n=================================\nT3\n%v\n", T3)
Output:

Default behaviour: Reuse tensor is expected to be of Bools
==========================================================
T3:
⎡true  true  true⎤
⎢true  true  true⎥
⎣true  true  true⎦

Reuse With Same Type
=====================
T3:
⎡1  1  1⎤
⎢1  1  1⎥
⎣1  1  1⎦

Reuse on sliced tensors
======================
T3
⎡ true   true⎤
⎣false  false⎦

Reuse on sliced tensors (same type)
=================================
T3
⎡1  1⎤
⎣0  0⎦
Example (Unsafe)

If the UseUnsafe function option is passed into the call, the operation is assumed to return the same type, and the receiving tensor is overwritten in place.

var T1, T2, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T1.ElEq(T2, UseUnsafe())
fmt.Printf("Unsafe operation\n================\nT1:\n%v\n", T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
V.ElEq(T2, UseUnsafe())
fmt.Printf("Unsafe operation, with a sliced Tensor\n======================================\nT1:\n%v", T1)
Output:

Unsafe operation
================
T1:
⎡1  1  1⎤
⎢1  1  1⎥
⎣1  1  1⎦

Unsafe operation, with a sliced Tensor
======================================
T1:
⎡0  0  2⎤
⎢1  1  5⎥
⎣6  7  8⎦

func (*Dense) ElEqScalar

func (t *Dense) ElEqScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)

ElEqScalar performs t == other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
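
ElEqScalar has no example here, so the following is a minimal sketch. The scalar is passed as a float64 to match the tensor's Dtype, and the default return value is a tensor of bools (pass AsSameType() to get a float64 tensor of 0s and 1s instead).

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	T := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{0, 1, 2, 1}))

	mask, err := T.ElEqScalar(float64(1), true) // true: T is the left operand
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%v", mask) // a bool tensor marking where T equals 1
}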

func (*Dense) ElNe

func (t *Dense) ElNe(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)

ElNe performs t ≠ other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

Example (Basic)

Comparison functions return a Tensor of bool by default. To return the same type, simply pass in the AsSameType function option

var T1, T2, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3, _ = T1.ElNe(T2)
fmt.Println("Basic operations are safe\n=========================\nT3 = T1 != T2")
fmt.Printf("T3:\n%v\n", T3)

// To return the same type, use the AsSameType function option
T3, _ = T1.ElNe(T2, AsSameType())
fmt.Println("Returning same type\n===================")
fmt.Printf("T3 (Returns Same Type):\n%v\n", T3)

// Sliced tensors are safe too
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
T3, _ = V.ElNe(T2)
fmt.Printf("Safe slicing\n============\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)

// Similarly for tensors that return the same type
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
T3, _ = V.ElNe(T2, AsSameType()) // AsSameType returns a tensor of the same type
fmt.Printf("Safe slicing (Same type)\n========================\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)
Output:

Basic operations are safe
=========================
T3 = T1 != T2
T3:
⎡false  false  false⎤
⎢false  false  false⎥
⎣false  false  false⎦

Returning same type
===================
T3 (Returns Same Type):
⎡0  0  0⎤
⎢0  0  0⎥
⎣0  0  0⎦

Safe slicing
============
T3:
⎡ true   true⎤
⎣false  false⎦

T1 remains unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Safe slicing (Same type)
========================
T3:
⎡1  1⎤
⎣0  0⎦

T1 remains unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Reuse)

The WithReuse function option can be used to pass in reuse tensors. Be sure to also pass in the AsSameType() function option; otherwise the reuse tensor is expected to be a tensor of bools.

var T1, T2, T3, V *Dense
var sliced Tensor
// The reuse tensor is a Tensor of bools...
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3 = New(WithBacking([]bool{
	true, false, true,
	false, true, false,
	true, false, true}), WithShape(3, 3))
T1.ElNe(T2, WithReuse(T3)) // note that AsSameType is not used here
fmt.Printf("Default behaviour: Reuse tensor is expected to be of Bools\n==========================================================\nT3:\n%v\n", T3)

// If you want to use a Reuse tensor of the same type, then be sure to also pass in the AsSameType() flag
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3 = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) // The reuse tensor is a Tensor of Float64...
T1.ElNe(T2, WithReuse(T3), AsSameType())                         // AsSameType is used to return float64s
fmt.Printf("Reuse With Same Type\n=====================\nT3:\n%v\n", T3)

// Slicing is similar:
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T3 = New(WithBacking([]bool{true, true, true, true}), WithShape(2, 2))
V.ElNe(T2, WithReuse(T3))
fmt.Printf("Reuse on sliced tensors\n======================\nT3\n%v\n", T3)

// Again, bear in mind same types
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T3 = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2))
V.ElNe(T2, WithReuse(T3), AsSameType())
fmt.Printf("Reuse on sliced tensors (same type)\n=================================\nT3\n%v\n", T3)
Output:

Default behaviour: Reuse tensor is expected to be of Bools
==========================================================
T3:
⎡false  false  false⎤
⎢false  false  false⎥
⎣false  false  false⎦

Reuse With Same Type
=====================
T3:
⎡0  0  0⎤
⎢0  0  0⎥
⎣0  0  0⎦

Reuse on sliced tensors
======================
T3
⎡false  false⎤
⎣ true   true⎦

Reuse on sliced tensors (same type)
=================================
T3
⎡0  0⎤
⎣1  1⎦
Example (Unsafe)

If the UseUnsafe function option is passed into the call, the operation is assumed to return the same type, and the receiving tensor is overwritten in place.

var T1, T2, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T1.ElNe(T2, UseUnsafe())
fmt.Printf("Unsafe operation\n================\nT1:\n%v\n", T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
V.ElNe(T2, UseUnsafe())
fmt.Printf("Unsafe operation, with a sliced Tensor\n======================================\nT1:\n%v", T1)
Output:

Unsafe operation
================
T1:
⎡0  0  0⎤
⎢0  0  0⎥
⎣0  0  0⎦

Unsafe operation, with a sliced Tensor
======================================
T1:
⎡1  1  2⎤
⎢0  0  5⎥
⎣6  7  8⎦

func (*Dense) ElNeScalar

func (t *Dense) ElNeScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)

ElNeScalar performs t ≠ other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

func (*Dense) Engine

func (t *Dense) Engine() Engine

Engine returns the execution engine associated with this Tensor

func (*Dense) Eq

func (t *Dense) Eq(other interface{}) bool

Eq checks that any two things are equal. If the shapes are the same but the strides are not, they will still be considered equal.
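
A minimal sketch contrasting Eq (value equality) with Go's == (pointer identity):

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	a := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4}))
	b := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4}))

	fmt.Println(a.Eq(b)) // true: same shape, Dtype and data
	fmt.Println(a == b)  // false: two distinct *Dense values
}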

func (*Dense) FBDecode

func (t *Dense) FBDecode(buf []byte) error

FBDecode decodes a byteslice from a flatbuffer table into a *Dense

func (*Dense) FBEncode

func (t *Dense) FBEncode() ([]byte, error)

FBEncode encodes to a byte slice using flatbuffers.

Only natively accessible data can be encoded.
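
A minimal round-trip sketch for FBEncode and FBDecode. It assumes that a zero-valued *Dense is a valid target for FBDecode; adjust if your use requires a pre-shaped destination.

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	a := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4}))

	buf, err := a.FBEncode() // serialize to a flatbuffers-encoded byte slice
	if err != nil {
		fmt.Println(err)
		return
	}

	b := new(tensor.Dense) // assumed: an empty *Dense can be decoded into
	if err := b.FBDecode(buf); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(a.Eq(b)) // expected to be true after the round trip
}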

func (*Dense) FillValue

func (t *Dense) FillValue() interface{}

FillValue returns the value used to fill the invalid entries of a masked array

func (*Dense) Filled

func (t *Dense) Filled(val ...interface{}) (interface{}, error)

Filled returns a tensor with masked data replaced by default fill value, or by optional passed value

func (*Dense) FilledInplace

func (t *Dense) FilledInplace(val ...interface{}) (interface{}, error)

FilledInplace replaces masked data with default fill value, or by optional passed value

func (*Dense) FlatMaskedContiguous

func (t *Dense) FlatMaskedContiguous() []Slice

FlatMaskedContiguous is used to find contiguous masked data in a masked array. Applies to a flattened version of the array. Returns: A sorted sequence of slices (start index, end index).

func (*Dense) FlatMaskedEdges

func (t *Dense) FlatMaskedEdges() (int, int)

FlatMaskedEdges is used to find the indices of the first and last masked values. Applies to a flattened version of the array. Returns: A pair of ints. -1 if all values are unmasked.

func (*Dense) FlatNotMaskedContiguous

func (t *Dense) FlatNotMaskedContiguous() []Slice

FlatNotMaskedContiguous is used to find contiguous unmasked data in a masked array. Applies to a flattened version of the array. Returns: A sorted sequence of slices (start index, end index).

func (*Dense) FlatNotMaskedEdges

func (t *Dense) FlatNotMaskedEdges() (int, int)

FlatNotMaskedEdges is used to find the indices of the first and last unmasked values. Applies to a flattened version of the array. Returns: A pair of ints. -1 if all values are masked.

func (*Dense) Format

func (t *Dense) Format(s fmt.State, c rune)

Format implements fmt.Formatter. Formatting can be controlled with verbs and flags. All default Go verbs are supported and work as expected. By default, only 8 columns and rows are printed (the first and the last 4 columns and rows; the middle columns and rows are elided). Special flags are:

'-' for printing a flat array of values
'+' for printing extra metadata before printing the tensor (it prints shape, stride and type, which are useful for debugging)
'#' for printing the full tensor - there are no elisions. Overrides the 's' verb

Special care also needs to be taken with the verb 's' - it prints a super compressed version of the tensor, printing only 4 columns and 4 rows.
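
A minimal sketch exercising the verbs and flags listed above:

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	T := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4}))

	fmt.Printf("%v\n\n", T)  // default: matrix layout
	fmt.Printf("%+v\n\n", T) // '+': shape, stride and type metadata are printed first
	fmt.Printf("%-v\n\n", T) // '-': flat list of values
	fmt.Printf("%#v\n", T)   // '#': full tensor, no elision
}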

func (*Dense) Get

func (a *Dense) Get(i int) interface{}

Get returns the ith element of the underlying array of the *Dense tensor.
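
Get indexes the underlying (flat) array directly, ignoring the shape. A minimal sketch:

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	T := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4}))

	// Get takes a flat index into the backing array, not (row, column) coordinates.
	fmt.Println(T.Get(2)) // 3
}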

func (*Dense) GobDecode

func (t *Dense) GobDecode(p []byte) (err error)

GobDecode implements gob.GobDecoder

func (*Dense) GobEncode

func (t *Dense) GobEncode() (p []byte, err error)

GobEncode implements gob.GobEncoder
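
Because *Dense implements gob.GobEncoder and gob.GobDecoder, it can be passed to encoding/gob directly. A minimal round-trip sketch:

package main

import (
	"bytes"
	"encoding/gob"
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	a := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4}))

	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(a); err != nil {
		fmt.Println(err)
		return
	}

	b := new(tensor.Dense)
	if err := gob.NewDecoder(&buf).Decode(b); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(a.Eq(b)) // expected to be true after the round trip
}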

func (*Dense) Gt

func (t *Dense) Gt(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)

Gt performs t > other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

Example (Basic)

Comparison functions return a Tensor of bool by default. To return the same type, simply pass in the AsSameType function option

var T1, T2, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3, _ = T1.Gt(T2)
fmt.Println("Basic operations are safe\n=========================\nT3 = T1 > T2")
fmt.Printf("T3:\n%v\n", T3)

// To return the same type, use the AsSameType function option
T3, _ = T1.Gt(T2, AsSameType())
fmt.Println("Returning same type\n===================")
fmt.Printf("T3 (Returns Same Type):\n%v\n", T3)

// Sliced tensors are safe too
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
T3, _ = V.Gt(T2)
fmt.Printf("Safe slicing\n============\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)

// Similarly for tensors that return the same type
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
T3, _ = V.Gt(T2, AsSameType()) // AsSameType returns a tensor of the same type
fmt.Printf("Safe slicing (Same type)\n========================\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)
Output:

Basic operations are safe
=========================
T3 = T1 > T2
T3:
⎡false  false  false⎤
⎢false  false  false⎥
⎣false  false  false⎦

Returning same type
===================
T3 (Returns Same Type):
⎡0  0  0⎤
⎢0  0  0⎥
⎣0  0  0⎦

Safe slicing
============
T3:
⎡false  false⎤
⎣false  false⎦

T1 remains unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Safe slicing (Same type)
========================
T3:
⎡0  0⎤
⎣0  0⎦

T1 remains unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Reuse)

The WithReuse function option can be used to pass in reuse tensors. Be sure to also pass in the AsSameType() function option; otherwise the reuse tensor is expected to be a tensor of bools.

var T1, T2, T3, V *Dense
var sliced Tensor
// The reuse tensor is a Tensor of bools...
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3 = New(WithBacking([]bool{
	true, false, true,
	false, true, false,
	true, false, true}), WithShape(3, 3))
T1.Gt(T2, WithReuse(T3)) // note that AsSameType is not used here
fmt.Printf("Default behaviour: Reuse tensor is expected to be of Bools\n==========================================================\nT3:\n%v\n", T3)

// If you want to use a Reuse tensor of the same type, then be sure to also pass in the AsSameType() flag
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3 = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) // The reuse tensor is a Tensor of Float64...
T1.Gt(T2, WithReuse(T3), AsSameType())                           // AsSameType is used to return float64s
fmt.Printf("Reuse With Same Type\n=====================\nT3:\n%v\n", T3)

// Slicing is similar:
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T3 = New(WithBacking([]bool{true, true, true, true}), WithShape(2, 2))
V.Gt(T2, WithReuse(T3))
fmt.Printf("Reuse on sliced tensors\n======================\nT3\n%v\n", T3)

// Again, bear in mind same types
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T3 = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2))
V.Gt(T2, WithReuse(T3), AsSameType())
fmt.Printf("Reuse on sliced tensors (same type)\n=================================\nT3\n%v\n", T3)
Output:

Default behaviour: Reuse tensor is expected to be of Bools
==========================================================
T3:
⎡false  false  false⎤
⎢false  false  false⎥
⎣false  false  false⎦

Reuse With Same Type
=====================
T3:
⎡0  0  0⎤
⎢0  0  0⎥
⎣0  0  0⎦

Reuse on sliced tensors
======================
T3
⎡false  false⎤
⎣ true   true⎦

Reuse on sliced tensors (same type)
=================================
T3
⎡0  0⎤
⎣1  1⎦
Example (Unsafe)

If the UseUnsafe function option is passed into the call, the operation is assumed to return the same type, and the receiving tensor is overwritten in place.

var T1, T2, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T1.Gt(T2, UseUnsafe())
fmt.Printf("Unsafe operation\n================\nT1:\n%v\n", T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
V.Gt(T2, UseUnsafe())
fmt.Printf("Unsafe operation, with a sliced Tensor\n======================================\nT1:\n%v", T1)
Output:

Unsafe operation
================
T1:
⎡0  0  0⎤
⎢0  0  0⎥
⎣0  0  0⎦

Unsafe operation, with a sliced Tensor
======================================
T1:
⎡0  0  2⎤
⎢0  0  5⎥
⎣6  7  8⎦

func (*Dense) GtScalar

func (t *Dense) GtScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)

GtScalar performs t > other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.
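
GtScalar has no example here; a minimal sketch follows, showing that the leftTensor flag matters because > is not symmetric. As with the other comparison functions, the default return value is a tensor of bools.

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	T := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{0, 1, 2, 3}))

	gt, err := T.GtScalar(float64(1), true) // T > 1
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("T > 1:\n%v\n", gt)

	flipped, err := T.GtScalar(float64(1), false) // 1 > T
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("1 > T:\n%v", flipped)
}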

func (*Dense) Gte

func (t *Dense) Gte(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)

Gte performs t ≥ other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

Example (Basic)

Comparison functions return a Tensor of bool by default. To return the same type, simply pass in the AsSameType function option

var T1, T2, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3, _ = T1.Gte(T2)
fmt.Println("Basic operations are safe\n=========================\nT3 = T1 >= T2")
fmt.Printf("T3:\n%v\n", T3)

// To return the same type, use the AsSameType function option
T3, _ = T1.Gte(T2, AsSameType())
fmt.Println("Returning same type\n===================")
fmt.Printf("T3 (Returns Same Type):\n%v\n", T3)

// Sliced tensors are safe too
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
T3, _ = V.Gte(T2)
fmt.Printf("Safe slicing\n============\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)

// Similarly for tensors that return the same type
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
T3, _ = V.Gte(T2, AsSameType()) // AsSameType returns a tensor of the same type
fmt.Printf("Safe slicing (Same type)\n========================\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)
Output:

Basic operations are safe
=========================
T3 = T1 >= T2
T3:
⎡true  true  true⎤
⎢true  true  true⎥
⎣true  true  true⎦

Returning same type
===================
T3 (Returns Same Type):
⎡1  1  1⎤
⎢1  1  1⎥
⎣1  1  1⎦

Safe slicing
============
T3:
⎡false  false⎤
⎣ true   true⎦

T1 remains unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Safe slicing (Same type)
========================
T3:
⎡0  0⎤
⎣1  1⎦

T1 remains unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Reuse)

The WithReuse function option can be used to pass in reuse tensors. Be sure to also pass in the AsSameType() function option; otherwise the reuse tensor is expected to be a tensor of bools.

var T1, T2, T3, V *Dense
var sliced Tensor
// The reuse tensor is a Tensor of bools...
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3 = New(WithBacking([]bool{
	true, false, true,
	false, true, false,
	true, false, true}), WithShape(3, 3))
T1.Gte(T2, WithReuse(T3)) // note that AsSameType is not used here
fmt.Printf("Default behaviour: Reuse tensor is expected to be of Bools\n==========================================================\nT3:\n%v\n", T3)

// If you want to use a Reuse tensor of the same type, then be sure to also pass in the AsSameType() flag
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3 = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) // The reuse tensor is a Tensor of Float64...
T1.Gte(T2, WithReuse(T3), AsSameType())                          // AsSameType is used to return float64s
fmt.Printf("Reuse With Same Type\n=====================\nT3:\n%v\n", T3)

// Slicing is similar:
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T3 = New(WithBacking([]bool{true, true, true, true}), WithShape(2, 2))
V.Gte(T2, WithReuse(T3))
fmt.Printf("Reuse on sliced tensors\n======================\nT3\n%v\n", T3)

// Again, bear in mind same types
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T3 = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2))
V.Gte(T2, WithReuse(T3), AsSameType())
fmt.Printf("Reuse on sliced tensors (same type)\n=================================\nT3\n%v\n", T3)
Output:

Default behaviour: Reuse tensor is expected to be of Bools
==========================================================
T3:
⎡true  true  true⎤
⎢true  true  true⎥
⎣true  true  true⎦

Reuse With Same Type
=====================
T3:
⎡1  1  1⎤
⎢1  1  1⎥
⎣1  1  1⎦

Reuse on sliced tensors
======================
T3
⎡true  true⎤
⎣true  true⎦

Reuse on sliced tensors (same type)
=================================
T3
⎡1  1⎤
⎣1  1⎦
Example (Unsafe)

If the UseUnsafe function option is passed into the call, the operation is assumed to return the same type, and the receiving tensor is overwritten in place.

var T1, T2, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T1.Gte(T2, UseUnsafe())
fmt.Printf("Unsafe operation\n================\nT1:\n%v\n", T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
V.Gte(T2, UseUnsafe())
fmt.Printf("Unsafe operation, with a sliced Tensor\n======================================\nT1:\n%v", T1)
Output:

Unsafe operation
================
T1:
⎡1  1  1⎤
⎢1  1  1⎥
⎣1  1  1⎦

Unsafe operation, with a sliced Tensor
======================================
T1:
⎡0  0  2⎤
⎢1  1  5⎥
⎣6  7  8⎦

func (*Dense) GteScalar

func (t *Dense) GteScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)

GteScalar performs t ≥ other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

func (*Dense) HardenMask

func (t *Dense) HardenMask() bool

HardenMask forces the mask to be hard. If the mask is hard, then true mask values cannot be unset.

func (*Dense) Hstack

func (t *Dense) Hstack(others ...*Dense) (*Dense, error)

Hstack stacks other tensors columnwise (horizontal stacking)

Example
var T, T1, T2, T3 *Dense
var err error
T = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T1 = New(WithBacking([]float64{1000, 2000}), WithShape(2, 1))

// Simple example
if T2, err = T.Hstack(T1); err == nil {
	fmt.Printf("T.Hstack(T1):\n%v\n", T2)
}

// This fails, because they are not the same shape
T1.Reshape(2)
if _, err = T.Hstack(T1); err != nil {
	fmt.Printf("Error: %v\n\n", err)
}

// You can stack more than one, as long as all the tensors have the same shape
T1.Reshape(2, 1)
T3 = T1.Clone().(*Dense)
if T2, err = T.Hstack(T1, T3); err == nil {
	fmt.Printf("T.Hstack(T1, T3):\n%v\n", T2)
}

// Compatible shapes can be stacked
T1 = New(Of(Float64), WithShape(2, 3))
if T2, err = T.Hstack(T1); err == nil {
	fmt.Printf("Hstacking (2,2) with (2,3):\n%v\n", T2)
}

// Special attention to vectors - vectors can only be stacked with vectors
T = New(WithBacking([]float64{1000, 2000}))
T1 = New(WithBacking([]float64{0, 1}), WithShape(1, 2))
if _, err = T.Hstack(T1); err != nil {
	fmt.Printf("Hstacking (2) with (1,2): %v\n", err)
}

// Now let's look at failure conditions, or unhandled situations

// Incompatible shapes cannot be stacked
T1.Reshape(3, 2)
if _, err = T.Hstack(T1); err != nil {
	fmt.Printf("Hstacking (2,2) with (3,2): %v\n", err)
}

// Obviously you can't stack a scalar onto tensors (or the other way around)
T1 = New(FromScalar(1.0))
if _, err = T.Hstack(T1); err != nil {
	fmt.Printf("Hstacking a scalar onto a tensor: %v\n", err)
}
if _, err = T1.Hstack(T); err != nil {
	fmt.Printf("Hstacking a tensor onto a scalar: %v\n", err)
}
Output:

T.Hstack(T1):
⎡   0     1  1000⎤
⎣   2     3  2000⎦

Error: Failed to perform Concat: Unable to find new shape that results from concatenation: Dimension mismatch. Expected 2, got 1

T.Hstack(T1, T3):
⎡   0     1  1000  1000⎤
⎣   2     3  2000  2000⎦

Hstacking (2,2) with (2,3):
⎡0  1  0  0  0⎤
⎣2  3  0  0  0⎦

Hstacking (2) with (1,2): Failed to perform Concat: Unable to find new shape that results from concatenation: Dimension mismatch. Expected 1, got 2
Hstacking (2,2) with (3,2): Failed to perform Concat: Unable to find new shape that results from concatenation: Dimension mismatch. Expected 1, got 2
Hstacking a scalar onto a tensor: Tensor has to be at least 1 dimensions
Hstacking a tensor onto a scalar: Tensor has to be at least 1 dimensions

func (*Dense) Info

func (t *Dense) Info() *AP

Info returns the access pattern which explains how the data in the underlying array is accessed. This is mostly used for debugging.

func (*Dense) Inner

func (t *Dense) Inner(other Tensor) (retVal interface{}, err error)

Inner performs a dot product on two vectors. If t or other are not vectors, it will return an error.
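
A minimal sketch of a dot product between two vectors; the result comes back as an interface{} holding a value of the tensors' Dtype.

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	a := tensor.New(tensor.WithShape(3), tensor.WithBacking([]float64{1, 2, 3}))
	b := tensor.New(tensor.WithShape(3), tensor.WithBacking([]float64{4, 5, 6}))

	res, err := a.Inner(b)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(res.(float64)) // 1*4 + 2*5 + 3*6 = 32
}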

func (*Dense) IsManuallyManaged

func (t *Dense) IsManuallyManaged() bool

IsManuallyManaged returns true if the memory associated with this *Dense is manually managed (by the user)

func (*Dense) IsMasked

func (t *Dense) IsMasked() bool

IsMasked indicates whether tensor is masked

func (*Dense) IsMaterializable

func (t *Dense) IsMaterializable() bool

IsMaterializable indicates if the Tensor is materializable - that is, if it has gone through some transforms or slicing.

func (*Dense) IsNativelyAccessible

func (t *Dense) IsNativelyAccessible() bool

IsNativelyAccessible checks if the pointers are accessible by Go

func (*Dense) IsView

func (t *Dense) IsView() bool

IsView indicates if the Tensor is a view of another (typically from slicing)

func (*Dense) Iterator

func (t *Dense) Iterator() Iterator
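
Iterator has no doc comment. The sketch below assumes the usual iterator idiom for this package: Next returns successive flat indices into the underlying data, and returns an error once the iterator is exhausted.

package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

func main() {
	T := tensor.New(tensor.WithShape(2, 2), tensor.WithBacking([]float64{1, 2, 3, 4}))

	it := T.Iterator()
	for i, err := it.Next(); err == nil; i, err = it.Next() {
		fmt.Println(T.Get(i)) // i is a flat index into the backing array
	}
}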

func (*Dense) Len added in v0.9.15

func (a *Dense) Len() int

func (*Dense) Lt

func (t *Dense) Lt(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)

Lt performs t < other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

Example (Basic)

Comparison functions return a Tensor of bool by default. To return the same type, simply pass in the AsSameType function option

var T1, T2, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3, _ = T1.Lt(T2)
fmt.Println("Basic operations are safe\n=========================\nT3 = T1 < T2")
fmt.Printf("T3:\n%v\n", T3)

// To return the same type, use the AsSameType function option
T3, _ = T1.Lt(T2, AsSameType())
fmt.Println("Returning same type\n===================")
fmt.Printf("T3 (Returns Same Type):\n%v\n", T3)

// Sliced tensors are safe too
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
T3, _ = V.Lt(T2)
fmt.Printf("Safe slicing\n============\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)

// Similarly for tensors that return the same type
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
T3, _ = V.Lt(T2, AsSameType()) // AsSameType returns a tensor of the same type
fmt.Printf("Safe slicing (Same type)\n========================\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)
Output:

Basic operations are safe
=========================
T3 = T1 < T2
T3:
⎡false  false  false⎤
⎢false  false  false⎥
⎣false  false  false⎦

Returning same type
===================
T3 (Returns Same Type):
⎡0  0  0⎤
⎢0  0  0⎥
⎣0  0  0⎦

Safe slicing
============
T3:
⎡ true   true⎤
⎣false  false⎦

T1 remains unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Safe slicing (Same type)
========================
T3:
⎡1  1⎤
⎣0  0⎦

T1 remains unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Reuse)

The WithReuse function option can be used to pass in reuse tensors. Be sure to also use the AsSameType() function option, or the results may not be what you expect.

var T1, T2, T3, V *Dense
var sliced Tensor
// The reuse tensor is a Tensor of bools...
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3 = New(WithBacking([]bool{
	true, false, true,
	false, true, false,
	true, false, true}), WithShape(3, 3))
T1.Lt(T2, WithReuse(T3)) // note that AsSameType is not used here
fmt.Printf("Default behaviour: Reuse tensor is expected to be of Bools\n==========================================================\nT3:\n%v\n", T3)

// If you want to use a Reuse tensor of the same type, then be sure to also pass in the AsSameType() flag
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3 = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) // The reuse tensor is a Tensor of Float64...
T1.Lt(T2, WithReuse(T3), AsSameType())                           // AsSameType is used to return float64s
fmt.Printf("Reuse With Same Type\n=====================\nT3:\n%v\n", T3)

// Slicing is similar:
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T3 = New(WithBacking([]bool{true, true, true, true}), WithShape(2, 2))
V.Lt(T2, WithReuse(T3))
fmt.Printf("Reuse on sliced tensors\n======================\nT3\n%v\n", T3)

// Again, bear in mind same types
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T3 = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2))
V.Lt(T2, WithReuse(T3), AsSameType())
fmt.Printf("Reuse on sliced tensors (same type)\n=================================\nT3\n%v\n", T3)
Output:

Default behaviour: Reuse tensor is expected to be of Bools
==========================================================
T3:
⎡false  false  false⎤
⎢false  false  false⎥
⎣false  false  false⎦

Reuse With Same Type
=====================
T3:
⎡0  0  0⎤
⎢0  0  0⎥
⎣0  0  0⎦

Reuse on sliced tensors
======================
T3
⎡false  false⎤
⎣false  false⎦

Reuse on sliced tensors (same type)
=================================
T3
⎡0  0⎤
⎣0  0⎦
Example (Unsafe)

If the UseUnsafe function option is passed into the call, the operation is assumed to return a tensor of the same type

var T1, T2, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T1.Lt(T2, UseUnsafe())
fmt.Printf("Unsafe operation\n================\nT1:\n%v\n", T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
V.Lt(T2, UseUnsafe())
fmt.Printf("Unsafe operation, with a sliced Tensor\n======================================\nT1:\n%v", T1)
Output:

Unsafe operation
================
T1:
⎡0  0  0⎤
⎢0  0  0⎥
⎣0  0  0⎦

Unsafe operation, with a sliced Tensor
======================================
T1:
⎡1  1  2⎤
⎢0  0  5⎥
⎣6  7  8⎦

func (*Dense) LtScalar

func (t *Dense) LtScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)

LtScalar performs t < other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

func (*Dense) Lte

func (t *Dense) Lte(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)

Lte performs t ≤ other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

Example (Basic)
LTE

Comparison functions return a Tensor of bool by default. To return the same type, simply pass in the AsSameType function option

var T1, T2, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3, _ = T1.Lte(T2)
fmt.Println("Basic operations are safe\n=========================\nT3 = T1 <= T2")
fmt.Printf("T3:\n%v\n", T3)

// To return the same type, use the AsSameType function option
T3, _ = T1.Lte(T2, AsSameType())
fmt.Println("Returning same type\n===================")
fmt.Printf("T3 (Returns Same Type):\n%v\n", T3)

// Sliced tensors are safe too
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
T3, _ = V.Lte(T2)
fmt.Printf("Safe slicing\n============\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)

// Similarly for tensors that return the same type
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
T3, _ = V.Lte(T2, AsSameType()) // AsSameType returns a tensor of the same type
fmt.Printf("Safe slicing (Same type)\n========================\nT3:\n%v\nT1 remains unchanged:\n%v\n", T3, T1)
Output:

Basic operations are safe
=========================
T3 = T1 <= T2
T3:
⎡true  true  true⎤
⎢true  true  true⎥
⎣true  true  true⎦

Returning same type
===================
T3 (Returns Same Type):
⎡1  1  1⎤
⎢1  1  1⎥
⎣1  1  1⎦

Safe slicing
============
T3:
⎡true  true⎤
⎣true  true⎦

T1 remains unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Safe slicing (Same type)
========================
T3:
⎡1  1⎤
⎣1  1⎦

T1 remains unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Reuse)

The WithReuse function option can be used to pass in reuse tensors. Be sure to also use the AsSameType() function option, or the results may not be what you expect.

var T1, T2, T3, V *Dense
var sliced Tensor
// The reuse tensor is a Tensor of bools...
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3 = New(WithBacking([]bool{
	true, false, true,
	false, true, false,
	true, false, true}), WithShape(3, 3))
T1.Lte(T2, WithReuse(T3)) // note that AsSameType is not used here
fmt.Printf("Default behaviour: Reuse tensor is expected to be of Bools\n==========================================================\nT3:\n%v\n", T3)

// If you want to use a Reuse tensor of the same type, then be sure to also pass in the AsSameType() flag
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T3 = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3)) // The reuse tensor is a Tensor of Float64...
T1.Lte(T2, WithReuse(T3), AsSameType())                          // AsSameType is used to return float64s
fmt.Printf("Reuse With Same Type\n=====================\nT3:\n%v\n", T3)

// Slicing is similar:
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T3 = New(WithBacking([]bool{true, true, true, true}), WithShape(2, 2))
V.Lte(T2, WithReuse(T3))
fmt.Printf("Reuse on sliced tensors\n======================\nT3\n%v\n", T3)

// Again, bear in mind same types
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T3 = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2))
V.Lte(T2, WithReuse(T3), AsSameType())
fmt.Printf("Reuse on sliced tensors (same type)\n=================================\nT3\n%v\n", T3)
Output:

Default behaviour: Reuse tensor is expected to be of Bools
==========================================================
T3:
⎡true  true  true⎤
⎢true  true  true⎥
⎣true  true  true⎦

Reuse With Same Type
=====================
T3:
⎡1  1  1⎤
⎢1  1  1⎥
⎣1  1  1⎦

Reuse on sliced tensors
======================
T3
⎡ true   true⎤
⎣false  false⎦

Reuse on sliced tensors (same type)
=================================
T3
⎡1  1⎤
⎣0  0⎦
Example (Unsafe)

If the UseUnsafe function option is passed into the call, the operation is assumed to return a tensor of the same type

var T1, T2, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T1.Lte(T2, UseUnsafe())
fmt.Printf("Unsafe operation\n================\nT1:\n%v\n", T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 1, 5)), WithShape(2, 2))
V.Lte(T2, UseUnsafe())
fmt.Printf("Unsafe operation, with a sliced Tensor\n======================================\nT1:\n%v", T1)
Output:

Unsafe operation
================
T1:
⎡1  1  1⎤
⎢1  1  1⎥
⎣1  1  1⎦

Unsafe operation, with a sliced Tensor
======================================
T1:
⎡1  1  2⎤
⎢1  1  5⎥
⎣6  7  8⎦

func (*Dense) LteScalar

func (t *Dense) LteScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)

LteScalar performs t ≤ other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

func (*Dense) Mask

func (t *Dense) Mask() []bool

func (*Dense) MaskAt

func (t *Dense) MaskAt(coords ...int) (bool, error)

MaskAt returns the value of the mask at a given coordinate. It returns false (valid) if the tensor is not masked.

func (*Dense) MaskFromDense

func (t *Dense) MaskFromDense(tts ...*Dense)

MaskFromDense adds a mask slice to the tensor by XORing the dense arguments' masks

func (*Dense) MaskFromSlice

func (t *Dense) MaskFromSlice(x interface{})

MaskFromSlice makes a mask from the supplied slice

func (*Dense) MaskedAll

func (t *Dense) MaskedAll(axis ...int) interface{}

MaskedAll returns true if all mask elements evaluate to true. If the tensor is not masked, it returns false. Note that this is not the same as numpy's, which looks at the data elements and not at the mask; it is instead equivalent to numpy's ma.getmask(t).all(axis).

func (*Dense) MaskedAny

func (t *Dense) MaskedAny(axis ...int) interface{}

MaskedAny returns true if any mask elements evaluate to true. If the tensor is not masked, it returns false. Note that this is not the same as numpy's, which looks at the data elements and not at the mask; it is instead equivalent to numpy's ma.getmask(t).any(axis).

func (*Dense) MaskedCount

func (t *Dense) MaskedCount(axis ...int) interface{}

MaskedCount counts the masked elements of the array (optionally along the given axis). It returns -1 if the axis is out of bounds.

func (*Dense) MaskedEqual

func (t *Dense) MaskedEqual(val1 interface{}) (err error)

MaskedEqual sets the mask to true where the corresponding data is equal to val. Any values must be the same type as the tensor.
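
A minimal sketch of how the Masked* family might be used together (not one of the package's own examples; the expected values in the comments are assumptions):

T := New(WithBacking([]float64{0, 1, 2, 3, 4, 5}), WithShape(2, 3))
if err := T.MaskedEqual(3.0); err != nil {
	panic(err)
}
fmt.Println(T.IsMasked())    // expected: true
fmt.Println(T.MaskedCount()) // expected: 1 (only the element equal to 3 is masked)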

func (*Dense) MaskedGreater

func (t *Dense) MaskedGreater(val1 interface{}) (err error)

MaskedGreater sets the mask to true where the corresponding data is greater than val. Any values must be the same type as the tensor.

func (*Dense) MaskedGreaterEqual

func (t *Dense) MaskedGreaterEqual(val1 interface{}) (err error)

MaskedGreaterEqual sets the mask to true where the corresponding data is greater than or equal to val. Any values must be the same type as the tensor.

func (*Dense) MaskedInside

func (t *Dense) MaskedInside(val1 interface{}, val2 interface{}) (err error)

MaskedInside sets the mask to true where the corresponding data is inside the range of val1 and val2. Any values must be the same type as the tensor.

func (*Dense) MaskedLess

func (t *Dense) MaskedLess(val1 interface{}) (err error)

MaskedLess sets the mask to true where the corresponding data is less than val. Any values must be the same type as the tensor.

func (*Dense) MaskedLessEqual

func (t *Dense) MaskedLessEqual(val1 interface{}) (err error)

MaskedLessEqual sets the mask to true where the corresponding data is less than or equal to val. Any values must be the same type as the tensor.

func (*Dense) MaskedNotEqual

func (t *Dense) MaskedNotEqual(val1 interface{}) (err error)

MaskedNotEqual sets the mask to true where the corresponding data is not equal to val. Any values must be the same type as the tensor.

func (*Dense) MaskedOutside

func (t *Dense) MaskedOutside(val1 interface{}, val2 interface{}) (err error)

MaskedOutside sets the mask to true where the corresponding data is outside the range of val1 and val2. Any values must be the same type as the tensor.

func (*Dense) MaskedValues

func (t *Dense) MaskedValues(val1 interface{}, val2 interface{}, val3 ...interface{}) (err error)

MaskedValues sets the mask to true where the corresponding data is equal to val. Any values must be the same type as the tensor.

func (*Dense) MatMul

func (t *Dense) MatMul(other Tensor, opts ...FuncOpt) (retVal *Dense, err error)

MatMul is the basic matrix multiplication that you learned in high school. It takes an optional reuse ndarray, where the ndarray is reused as the result. If that isn't passed in, a new ndarray will be created instead.

Example
handleErr := func(err error) {
	if err != nil {
		panic(err)
	}
}

T0 := New(WithShape(10, 15), WithBacking(Range(Float64, 0, 150)))
T1 := New(WithShape(15, 10), WithBacking(Range(Float64, 150, 0)))
T2, err := MatMul(T0, T1)
handleErr(err)

fmt.Printf("T2:\n%v", T2)
Output:

T2:
⎡  5600    5495    5390    5285  ...   4970    4865    4760    4655⎤
⎢ 23600   23270   22940   22610  ...  21620   21290   20960   20630⎥
⎢ 41600   41045   40490   39935  ...  38270   37715   37160   36605⎥
⎢ 59600   58820   58040   57260  ...  54920   54140   53360   52580⎥
.
.
.
⎢113600  112145  110690  109235  ... 104870  103415  101960  100505⎥
⎢131600  129920  128240  126560  ... 121520  119840  118160  116480⎥
⎢149600  147695  145790  143885  ... 138170  136265  134360  132455⎥
⎣167600  165470  163340  161210  ... 154820  152690  150560  148430⎦
Example (Sliced)
//ASPIRATIONAL TODO: incX and incY of different sizes
handleErr := func(err error) {
	if err != nil {
		panic(err)
	}
}

T0 := New(WithShape(10, 15), WithBacking(Range(Float64, 0, 150)))
T1 := New(WithShape(15, 10), WithBacking(Range(Float64, 150, 0)))
T2, err := MatMul(T0, T1)
handleErr(err)

fmt.Printf("T2:\n%v", T2)

// Slice T0 to take only a (3, 2) slice of the upper-left quadrant
// T3 := T0[0:3, 0:2]
T3, err := T0.Slice(makeRS(0, 3), makeRS(0, 2))
handleErr(err)
fmt.Printf("T3:\n%v", T3)

T4, err := T1.Slice(makeRS(13, 15), makeRS(8, 10))
handleErr(err)
fmt.Printf("T4:\n%v", T4)

T5, err := T3.(*Dense).MatMul(T4)
handleErr(err)
fmt.Printf("T3xT4:\n%v", T5)

// Outputz:
// T2:
// ⎡  5600    5495    5390    5285  ...   4970    4865    4760    4655⎤
// ⎢ 23600   23270   22940   22610  ...  21620   21290   20960   20630⎥
// ⎢ 41600   41045   40490   39935  ...  38270   37715   37160   36605⎥
// ⎢ 59600   58820   58040   57260  ...  54920   54140   53360   52580⎥
// .
// .
// .
// ⎢113600  112145  110690  109235  ... 104870  103415  101960  100505⎥
// ⎢131600  129920  128240  126560  ... 121520  119840  118160  116480⎥
// ⎢149600  147695  145790  143885  ... 138170  136265  134360  132455⎥
// ⎣167600  165470  163340  161210  ... 154820  152690  150560  148430⎦
// T3:
// ⎡ 0   1⎤
// ⎢15  16⎥
// ⎣30  31⎦
// T4:
// ⎡12  11⎤
// ⎣ 2   1⎦
// T3xT4:
// ⎡  2    1⎤
// ⎢212  181⎥
// ⎣422  361⎦
Output:

func (*Dense) MatVecMul

func (t *Dense) MatVecMul(other Tensor, opts ...FuncOpt) (retVal *Dense, err error)

MatVecMul performs a matrix-vector multiplication.

Example
handleErr := func(err error) {
	if err != nil {
		panic(err)
	}
}

T0 := New(WithShape(2, 3), WithBacking(Range(Float64, 1, 7)))
T1 := New(WithShape(3), WithBacking(Range(Float64, 0, 3)))
T2, err := T0.MatVecMul(T1)
handleErr(err)

fmt.Printf("T2:\n%v\n", T2)
Output:

T2:
[ 8  17]
Example (RowMajorSliced)
// ASPIRATIONAL TODO: IncX and incY of differing values

handleErr := func(err error) {
	if err != nil {
		panic(err)
	}
}

T0 := New(WithShape(10, 12), WithBacking(Range(Float64, 1, 121)))
T1 := New(WithShape(3, 3), WithBacking(Range(Float64, 1, 10)))
T2, err := T0.Slice(makeRS(1, 3), makeRS(3, 6))
handleErr(err)
T3, err := T1.Slice(nil, makeRS(1, 2))
handleErr(err)

// the + formatting option is used here because, after this particular slice, the result will be a vector
fmt.Printf("T2:\n%+v", T2)
fmt.Printf("T3:\n%+v\n", T3)

// here we print the underlying slice of T3 just to show that it's actually a much larger slice
fmt.Printf("Underlying Slice: %v\n", T3.Data())

T4, err := T2.(*Dense).MatVecMul(T3)
handleErr(err)

fmt.Printf("T4:\n%v\n", T4)

// Outputz:
// T2:
// Matrix (2, 3) [10 1]
// ⎡14  15  16⎤
// ⎣24  25  26⎦
// T3:
// Vector (3) [3]
// [2  5  8]
// Underlying Slice: [2 3 4 5 6 7 8]
// T4:
// [261  441]
Output:

func (*Dense) Materialize

func (t *Dense) Materialize() Tensor

Materialize takes a view, copies its data and puts it in a new *Tensor.
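
A minimal sketch (not one of the package's own examples), assuming Narrow returns a view that shares memory with the original, so that Materialize yields an independent copy:

T := New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
view, err := T.Narrow(0, 0, 2) // a (2, 3) view sharing T's data
if err != nil {
	panic(err)
}
V := view.(*Dense)
M := V.Materialize().(*Dense) // M gets its own copy of the data
M.Set(0, 1000.0)              // mutating M should no longer affect T
fmt.Printf("T:\n%v\n", T)     // T is expected to still hold 0..8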

func (*Dense) Max

func (t *Dense) Max(along ...int) (retVal *Dense, err error)

func (*Dense) MemSize

func (a *Dense) MemSize() uintptr

MemSize returns how big the slice is in bytes

func (*Dense) Memset

func (t *Dense) Memset(x interface{}) error

Memset sets all the values in the *Dense tensor.
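
A minimal sketch (not one of the package's own examples):

T := New(Of(Float64), WithShape(2, 2))
if err := T.Memset(3.14); err != nil {
	panic(err)
}
fmt.Printf("T:\n%v\n", T) // all four elements are expected to be 3.14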

func (*Dense) Min

func (t *Dense) Min(along ...int) (retVal *Dense, err error)

func (*Dense) Mod

func (t *Dense) Mod(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)

Mod performs t % other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

Example (Basic)

By default, arithmetic operations are safe

var T1, T2, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Mod(T2)
fmt.Printf("Default operation is safe\n==========================\nT3 = T1 %% T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
T3, _ = V.Mod(T2)
fmt.Printf("Default operation is safe (sliced operations)\n=============================================\nT3 = T1[0:2, 0:2] %% T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output:

Default operation is safe
==========================
T3 = T1 % T2
T3:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations)
=============================================
T3 = T1[0:2, 0:2] % T2
T3:
⎡0  1⎤
⎣3  4⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Incr)

Incrementing a tensor is also a function option provided by the package

var T1, T2, T3, Incr, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
Incr = New(WithBacking([]float64{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3))
T3, _ = T1.Mod(T2, WithIncr(Incr))
fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 %% T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)

// Operations on sliced tensors are also allowed. Note that your Incr tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
Incr = New(WithBacking([]float64{100, 100, 100, 100}), WithShape(2, 2))
T3, _ = V.Mod(T2, WithIncr(Incr))
fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 %% T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output:

Incr tensor passed in
======================
Incr += T1 % T2
Incr == T3: true
T3:
⎡100  101  102⎤
⎢103  104  105⎥
⎣106  107  108⎦

Incr tensor passed in (sliced tensor)
======================================
Incr += T1 % T2
Incr == T3: true
T3:
⎡100  101⎤
⎣103  104⎦
Example (Reuse)

An optional reuse tensor can also be specified with the WithReuse function option

var T1, V, T2, Reuse, T3 *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3))
T3, _ = T1.Mod(T2, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// You can also use it on operations on sliced tensors - note your reuse tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
Reuse = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.Mod(T2, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output:

Reuse tensor passed in
======================
T3 == Reuse: true
T3:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Reuse tensor passed in (sliced tensor)
======================================
T3 == Reuse: true
T3:
⎡0  1⎤
⎣3  4⎦
Example (Unsafe)

To perform unsafe operations, use the `UseUnsafe` function option

var T1, T2, T3, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Mod(T2, UseUnsafe())
fmt.Printf("Unsafe Operation\n================\nT3 = T1 %% T2\nT1 == T3: %t\nT1:\n%v\n", T1 == T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))

V.Mod(T2, UseUnsafe()) // unsafe overwrites the data in T1
fmt.Printf("Unsafe Operation on sliced Tensors\n==================================\nV = T1[0:2, 0:2] %% T2\nV:\n%v\n", V)
fmt.Printf("Naturally, T1 is mutated too:\n%v", T1)
Output:

Unsafe Operation
================
T3 = T1 % T2
T1 == T3: true
T1:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Unsafe Operation on sliced Tensors
==================================
V = T1[0:2, 0:2] % T2
V:
⎡0  1⎤
⎣3  4⎦

Naturally, T1 is mutated too:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

func (*Dense) ModScalar

func (t *Dense) ModScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)

ModScalar performs t % other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

Example (Basic)

By default, arithmetic operations are safe

var T1, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
T3, _ = T1.ModScalar(float32(5), true)
fmt.Printf("Default operation is safe (tensor is left operand)\n==========================\nT3 = T1 %% 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T3, _ = T1.ModScalar(float32(5), false)
fmt.Printf("Default operation is safe (tensor is right operand)\n==========================\nT3 = 5 %% T1\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T3, _ = V.ModScalar(float32(5), true)
fmt.Printf("Default operation is safe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[0:2, 0:2] %% 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T3, _ = V.ModScalar(float32(5), false)
fmt.Printf("Default operation is safe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 %% T1[0:2, 0:2]\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output:

Default operation is safe (tensor is left operand)
==========================
T3 = T1 % 5
T3:
⎡0  1  2⎤
⎢3  4  0⎥
⎣1  2  3⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (tensor is right operand)
==========================
T3 = 5 % T1
T3:
⎡NaN    0    1⎤
⎢  2    1    0⎥
⎣  5    5    5⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations - tensor is left operand)
=============================================
T3 = T1[0:2, 0:2] % 5
T3:
⎡0  1⎤
⎣3  4⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations - tensor is right operand)
=============================================
T3 = 5 % T1[0:2, 0:2]
T3:
⎡NaN    0⎤
⎣  2    1⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

func (*Dense) Mul

func (t *Dense) Mul(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)

Mul performs t × other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

Example (Basic)

By default, arithmetic operations are safe

var T1, T2, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Mul(T2)
fmt.Printf("Default operation is safe\n==========================\nT3 = T1 × T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
T3, _ = V.Mul(T2)
fmt.Printf("Default operation is safe (sliced operations)\n=============================================\nT3 = T1[0:2, 0:2] × T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output:

Default operation is safe
==========================
T3 = T1 × T2
T3:
⎡  0   11   24⎤
⎢ 39   56   75⎥
⎣ 96  119  144⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations)
=============================================
T3 = T1[0:2, 0:2] × T2
T3:
⎡ 0  11⎤
⎣36  52⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Incr)

Incrementing a tensor is also a function option provided by the package

var T1, T2, T3, Incr, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
Incr = New(WithBacking([]float64{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3))
T3, _ = T1.Mul(T2, WithIncr(Incr))
fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 × T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)

// Operations on sliced tensors are also allowed. Note that your Incr tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
Incr = New(WithBacking([]float64{100, 100, 100, 100}), WithShape(2, 2))
T3, _ = V.Mul(T2, WithIncr(Incr))
fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 × T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output:

Incr tensor passed in
======================
Incr += T1 × T2
Incr == T3: true
T3:
⎡100  111  124⎤
⎢139  156  175⎥
⎣196  219  244⎦

Incr tensor passed in (sliced tensor)
======================================
Incr += T1 × T2
Incr == T3: true
T3:
⎡100  111⎤
⎣136  152⎦
Example (Reuse)

An optional reuse tensor can also be specified with the WithReuse function option

var T1, V, T2, Reuse, T3 *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3))
T3, _ = T1.Mul(T2, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// You can also use it on operations on sliced tensors - note your reuse tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
Reuse = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.Mul(T2, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output:

Reuse tensor passed in
======================
T3 == Reuse: true
T3:
⎡  0   11   24⎤
⎢ 39   56   75⎥
⎣ 96  119  144⎦

Reuse tensor passed in (sliced tensor)
======================================
T3 == Reuse: true
T3:
⎡ 0  11⎤
⎣36  52⎦
Example (Reuse_operand)

An optional reuse tensor can also be specified with the WithReuse function option. Passing in one of the operands as the reuse tensor does not cause a problem.

var T1, T2, T3 *Dense

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Mul(T2, WithReuse(T1))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == T1: %t\nT3:\n%v\n", T3 == T1, T3)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Mul(T2, WithReuse(T2))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == T2: %t\nT3:\n%v\n", T3 == T2, T3)
Output:

Reuse tensor passed in
======================
T3 == T1: true
T3:
⎡  0   11   24⎤
⎢ 39   56   75⎥
⎣ 96  119  144⎦

Reuse tensor passed in
======================
T3 == T2: true
T3:
⎡  0   11   24⎤
⎢ 39   56   75⎥
⎣ 96  119  144⎦
Example (Unsafe)

To perform unsafe operations, use the `UseUnsafe` function option

var T1, T2, T3, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Mul(T2, UseUnsafe())
fmt.Printf("Unsafe Operation\n================\nT3 = T1 × T2\nT1 == T3: %t\nT1:\n%v", T1 == T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))

V.Mul(T2, UseUnsafe()) // unsafe overwrites the data in T1
fmt.Printf("Unsafe Operation on sliced Tensors\n==================================\nV = T1[0:2, 0:2] × T2\nV:\n%v\n", V)
fmt.Printf("Naturally, T1 is mutated too:\n%v", T1)
Output:

Unsafe Operation
================
T3 = T1 × T2
T1 == T3: true
T1:
⎡  0   11   24⎤
⎢ 39   56   75⎥
⎣ 96  119  144⎦
Unsafe Operation on sliced Tensors
==================================
V = T1[0:2, 0:2] × T2
V:
⎡ 0  11⎤
⎣36  52⎦

Naturally, T1 is mutated too:
⎡ 0  11   2⎤
⎢36  52   5⎥
⎣ 6   7   8⎦

func (*Dense) MulScalar

func (t *Dense) MulScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)

MulScalar performs t × other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

Example (Basic)

By default, arithmetic operations are safe

var T1, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
T3, _ = T1.MulScalar(float32(5), true)
fmt.Printf("Default operation is safe (tensor is left operand)\n==========================\nT3 = T1 * 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T3, _ = T1.MulScalar(float32(5), false)
fmt.Printf("Default operation is safe (tensor is right operand)\n==========================\nT3 = 5 * T1\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(1, 3))
V = sliced.(*Dense)
T3, _ = V.MulScalar(float32(5), true)
fmt.Printf("Default operation is safe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 1:3] + 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(1, 3))
V = sliced.(*Dense)
T3, _ = V.MulScalar(float32(5), false)
fmt.Printf("Default operation is safe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 * T1[:, 1:3]\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output:

Default operation is safe (tensor is left operand)
==========================
T3 = T1 * 5
T3:
⎡ 0   5  10⎤
⎢15  20  25⎥
⎣30  35  40⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (tensor is right operand)
==========================
T3 = 5 * T1
T3:
⎡ 0   5  10⎤
⎢15  20  25⎥
⎣30  35  40⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations - tensor is left operand)
=============================================
T3 = T1[:, 1:3] * 5
T3:
⎡ 5  10⎤
⎢20  25⎥
⎣35  40⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations - tensor is right operand)
=============================================
T3 = 5 * T1[:, 1:3]
T3:
⎡ 5  10⎤
⎢20  25⎥
⎣35  40⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Incr)

Incrementing a tensor is also a function option provided by the package

var T1, T3, Incr, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
Incr = New(WithBacking([]float32{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3))
T3, _ = T1.MulScalar(float32(5), true, WithIncr(Incr))
fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 * T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)

// Operations on sliced tensors are also allowed. Note that your Incr tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
Incr = New(WithBacking([]float32{100, 100, 100, 100}), WithShape(2, 2))
T3, _ = V.MulScalar(float32(5), true, WithIncr(Incr))
fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 * T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output:

Incr tensor passed in
======================
Incr += T1 * 5
Incr == T3: true
T3:
⎡100  105  110⎤
⎢115  120  125⎥
⎣130  135  140⎦

Incr tensor passed in (sliced tensor)
======================================
Incr += T1 * 5
Incr == T3: true
T3:
⎡100  105⎤
⎣115  120⎦
Example (Reuse)

Reuse tensors may be used, with the WithReuse() function option.

var T1, V, Reuse, T3 *Dense
var sliced Tensor

// Tensor is left operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3))
T3, _ = T1.MulScalar(float32(5), true, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (Tensor is left operand)\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// Tensor is right operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3))
T3, _ = T1.MulScalar(float32(5), false, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (Tensor is right operand)\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// Tensor is left operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.MulScalar(float32(5), true, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// Tensor is left operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.MulScalar(float32(5), false, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output:

Reuse tensor passed in (Tensor is left operand)
======================
T3 == Reuse: true
T3:
⎡ 0   5  10⎤
⎢15  20  25⎥
⎣30  35  40⎦

Reuse tensor passed in (Tensor is right operand)
======================
T3 == Reuse: true
T3:
⎡ 0   5  10⎤
⎢15  20  25⎥
⎣30  35  40⎦

Reuse tensor passed in (sliced tensor - Tensor is left operand)
======================================
T3 == Reuse: true
T3:
⎡ 0   5⎤
⎣15  20⎦

Reuse tensor passed in (sliced tensor - Tensor is left operand)
======================================
T3 == Reuse: true
T3:
⎡ 0   5⎤
⎣15  20⎦
Example (Unsafe)
var T1, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
T3, _ = T1.MulScalar(float32(5), true, UseUnsafe())
fmt.Printf("Operation is unsafe (tensor is left operand)\n==========================\nT3 = T1 * 5\nT3:\n%v\nT3 == T1: %t\nT1 is changed:\n%v\n", T3, T3 == T1, T1)

T3, _ = T1.MulScalar(float32(5), false, UseUnsafe())
fmt.Printf("Operation is unsafe (tensor is right operand)\n==========================\nT3 = 5 * T1\nT3:\n%v\nT3 == T1: %t\nT1 is changed:\n%v\n", T3, T3 == T1, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(0, 2))
V = sliced.(*Dense)
T3, _ = V.MulScalar(float32(5), true, UseUnsafe())
fmt.Printf("Operation is unsafe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 0:2] + 5\nT3:\n%v\nsliced == T3: %t\nT1 is changed:\n%v\n", T3, sliced == T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(0, 2))
V = sliced.(*Dense)
T3, _ = V.MulScalar(float32(5), false, UseUnsafe())
fmt.Printf("Operation is unsafe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 * T1[:, 0:2]\nT3:\n%v\nsliced == T3: %t\nT1 is changed:\n%v\n", T3, sliced == T3, T1)
Output:

Operation is unsafe (tensor is left operand)
==========================
T3 = T1 * 5
T3:
⎡ 0   5  10⎤
⎢15  20  25⎥
⎣30  35  40⎦

T3 == T1: true
T1 is changed:
⎡ 0   5  10⎤
⎢15  20  25⎥
⎣30  35  40⎦

Operation is unsafe (tensor is right operand)
==========================
T3 = 5 * T1
T3:
⎡  0   25   50⎤
⎢ 75  100  125⎥
⎣150  175  200⎦

T3 == T1: true
T1 is changed:
⎡  0   25   50⎤
⎢ 75  100  125⎥
⎣150  175  200⎦

Operation is unsafe (sliced operations - tensor is left operand)
=============================================
T3 = T1[:, 0:2] * 5
T3:
⎡ 0   5⎤
⎢15  20⎥
⎣30  35⎦

sliced == T3: true
T1 is changed:
⎡ 0   5   2⎤
⎢15  20   5⎥
⎣30  35   8⎦

Operation is unsafe (sliced operations - tensor is right operand)
=============================================
T3 = 5 * T1[:, 0:2]
T3:
⎡ 0   5⎤
⎢15  20⎥
⎣30  35⎦

sliced == T3: true
T1 is changed:
⎡ 0   5   2⎤
⎢15  20   5⎥
⎣30  35   8⎦

func (*Dense) Narrow added in v0.9.22

func (t *Dense) Narrow(dim, start, length int) (View, error)

Narrow narrows the tensor along the given dimension, returning a view of length elements starting at start.
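
A minimal sketch (not one of the package's own examples), assuming Narrow follows the usual (dim, start, length) convention and returns a view of the selected range:

T := New(WithBacking(Range(Float64, 0, 12)), WithShape(4, 3))
V, err := T.Narrow(0, 1, 2) // rows 1 and 2 of T, as a view
if err != nil {
	panic(err)
}
fmt.Printf("V:\n%v\n", V)
// expected:
// ⎡3  4  5⎤
// ⎣6  7  8⎦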

func (*Dense) NonMaskedCount

func (t *Dense) NonMaskedCount(axis ...int) interface{}

NonMaskedCount counts the non-masked elements of the array (optionally along the given axis). It returns -1 if the axis is out of bounds.

func (*Dense) Norm

func (t *Dense) Norm(ord NormOrder, axes ...int) (retVal *Dense, err error)

Norm returns the p-ordered norm of the *Dense, given the axes.

This implementation is directly adapted from Numpy, which is licenced under a BSD-like licence, and can be found here: https://docs.scipy.org/doc/numpy-1.9.1/license.html

func (*Dense) Outer

func (t *Dense) Outer(other Tensor, opts ...FuncOpt) (retVal *Dense, err error)

Outer finds the outer product of two vectors
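
A minimal sketch (not one of the package's own examples); the expected values are assumptions based on the standard outer product:

a := New(WithBacking([]float64{1, 2, 3}))
b := New(WithBacking([]float64{4, 5}))
O, err := a.Outer(b)
if err != nil {
	panic(err)
}
fmt.Printf("O:\n%v\n", O)
// expected (3, 2):
// ⎡ 4   5⎤
// ⎢ 8  10⎥
// ⎣12  15⎦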

func (*Dense) PBDecode

func (t *Dense) PBDecode(buf []byte) error

PBDecode unmarshals a protobuf byte slice into a *Dense.

func (*Dense) PBEncode

func (t *Dense) PBEncode() ([]byte, error)

PBEncode encodes the Dense into a protobuf byte slice.
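
A minimal round-trip sketch (not one of the package's own examples), assuming a zero-value *Dense is an acceptable target for PBDecode:

T := New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
buf, err := T.PBEncode()
if err != nil {
	panic(err)
}
T2 := new(Dense)
if err := T2.PBDecode(buf); err != nil {
	panic(err)
}
fmt.Printf("T2:\n%v\n", T2) // T2 is expected to equal T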

func (*Dense) Pow

func (t *Dense) Pow(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)

Pow performs t ^ other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

Example (Basic)

By default, arithmetic operations are safe

var T1, T2, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Pow(T2)
fmt.Printf("Default operation is safe\n==========================\nT3 = T1 ^ T2\nT3:\n%1.1v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
T3, _ = V.Pow(T2)
fmt.Printf("Default operation is safe (sliced operations)\n=============================================\nT3 = T1[0:2, 0:2] ^ T2\nT3:\n%1.1v\nT1 is unchanged:\n%v\n", T3, T1)
Output:

Default operation is safe
==========================
T3 = T1 ^ T2
T3:
⎡    0      1  4e+03⎤
⎢2e+06  3e+08  3e+10⎥
⎣3e+12  2e+14  2e+16⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations)
=============================================
T3 = T1[0:2, 0:2] ^ T2
T3:
⎡    0      1⎤
⎣5e+05  7e+07⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Incr)

Incrementing a tensor is also a function option provided by the package

var T1, T2, T3, Incr, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
Incr = New(WithBacking([]float64{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3))
T3, _ = T1.Pow(T2, WithIncr(Incr))
fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 ^ T2\nIncr == T3: %t\nT3:\n%1.5v\n", Incr == T3, T3)

// Operations on sliced tensors are also allowed. Note that your Incr tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
Incr = New(WithBacking([]float64{100, 100, 100, 100}), WithShape(2, 2))
T3, _ = V.Pow(T2, WithIncr(Incr))
fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 ^ T2\nIncr == T3: %t\nT3:\n%1.5v\n", Incr == T3, T3)
Output:

Incr tensor passed in
======================
Incr += T1 ^ T2
Incr == T3: true
T3:
⎡       100         101        4196⎤
⎢1.5944e+06  2.6844e+08  3.0518e+10⎥
⎣2.8211e+12  2.3263e+14  1.8014e+16⎦

Incr tensor passed in (sliced tensor)
======================================
Incr += T1 ^ T2
Incr == T3: true
T3:
⎡       100         101⎤
⎣5.3154e+05  6.7109e+07⎦
Example (Reuse)

An optional reuse tensor can also be specified with the WithReuse function option

var T1, V, T2, Reuse, T3 *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3))
T3, _ = T1.Pow(T2, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == Reuse: %t\nT3:\n%1.1v\n", T3 == Reuse, T3)

// You can also use it on operations on sliced tensors - note your reuse tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
Reuse = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.Pow(T2, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor)\n======================================\nT3 == Reuse: %t\nT3:\n%1.v", T3 == Reuse, T3)
Output:

Reuse tensor passed in
======================
T3 == Reuse: true
T3:
⎡    0      1  4e+03⎤
⎢2e+06  3e+08  3e+10⎥
⎣3e+12  2e+14  2e+16⎦

Reuse tensor passed in (sliced tensor)
======================================
T3 == Reuse: true
T3:
⎡    0      1⎤
⎣5e+05  7e+07⎦
Example (Unsafe)

To perform unsafe operations, use the `UseUnsafe` function option

var T1, T2, T3, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Pow(T2, UseUnsafe())
fmt.Printf("Unsafe Operation\n================\nT3 = T1 ^ T2\nT1 == T3: %t\nT1:\n%1.1v\n", T1 == T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))

V.Pow(T2, UseUnsafe()) // unsafe overwrites the data in T1
fmt.Printf("Unsafe Operation on sliced Tensors\n==================================\nV = T1[0:2, 0:2] ^ T2\nV:\n%1.1v\n", V)
fmt.Printf("Naturally, T1 is mutated too:\n%1.1v", T1)
Output:

Unsafe Operation
================
T3 = T1 ^ T2
T1 == T3: true
T1:
⎡    0      1  4e+03⎤
⎢2e+06  3e+08  3e+10⎥
⎣3e+12  2e+14  2e+16⎦

Unsafe Operation on sliced Tensors
==================================
V = T1[0:2, 0:2] ^ T2
V:
⎡    0      1⎤
⎣5e+05  7e+07⎦

Naturally, T1 is mutated too:
⎡    0      1      2⎤
⎢5e+05  7e+07      5⎥
⎣    6      7      8⎦

func (*Dense) PowScalar

func (t *Dense) PowScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)

PowScalar performs t ^ other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

Example (Basic)

By default, arithmetic operations are safe

var T1, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
T3, _ = T1.PowScalar(float32(5), true)
fmt.Printf("Default operation is safe (tensor is left operand)\n==========================\nT3 = T1 ^ 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T3, _ = T1.PowScalar(float32(5), false)
fmt.Printf("Default operation is safe (tensor is right operand)\n==========================\nT3 = 5 ^ T1\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T3, _ = V.PowScalar(float32(5), true)
fmt.Printf("Default operation is safe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[0:2, 0:2] ^ 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T3, _ = V.PowScalar(float32(5), false)
fmt.Printf("Default operation is safe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 ^ T1[0:2, 0:2]\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output:

func (*Dense) ReadCSV

func (t *Dense) ReadCSV(r io.Reader, opts ...FuncOpt) (err error)

ReadCSV reads a CSV into a *Dense. It will overwrite the underlying data.

BUG(chewxy): reading CSV doesn't handle CSVs with different columns per row yet.

func (*Dense) ReadNpy

func (t *Dense) ReadNpy(r io.Reader) (err error)

ReadNpy reads NumPy formatted files into a *Dense

func (*Dense) Reduce

func (t *Dense) Reduce(fn interface{}, axis int, defaultValue interface{}) (retVal *Dense, err error)

Reduce applies a reduction function and reduces the values along the given axis.
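
A minimal sketch (not one of the package's own examples), assuming Reduce accepts a func(float64, float64) float64 for a Float64 tensor:

T := New(WithBacking(Range(Float64, 0, 6)), WithShape(2, 3))
sum := func(a, b float64) float64 { return a + b }
R, err := T.Reduce(sum, 0, 0.0) // reduce along axis 0, with 0.0 as the default value
if err != nil {
	panic(err)
}
fmt.Printf("R:\n%v\n", R) // expected column sums: [3  5  7]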

func (*Dense) Repeat

func (t *Dense) Repeat(axis int, repeats ...int) (retVal Tensor, err error)

Repeat is like Numpy's repeat. It repeats the elements of an array. The repeats param defines how many times each element in the axis is repeated. Just like NumPy, the repeats param is broadcasted to fit the size of the given axis.
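
A minimal sketch (not one of the package's own examples); the expected layout is an assumption based on the numpy behaviour described above:

T := New(WithBacking([]float64{1, 2, 3, 4}), WithShape(2, 2))
R, err := T.Repeat(0, 2) // each row is repeated twice along axis 0
if err != nil {
	panic(err)
}
fmt.Printf("R:\n%v\n", R)
// expected (4, 2):
// ⎡1  2⎤
// ⎢1  2⎥
// ⎢3  4⎥
// ⎣3  4⎦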

func (*Dense) RequiresIterator

func (t *Dense) RequiresIterator() bool

RequiresIterator indicates if an iterator is required to read the data in *Dense in the correct fashion

func (*Dense) ResetMask

func (t *Dense) ResetMask(val ...bool) error

ResetMask fills the mask with either false, or the provided boolean value

func (*Dense) Reshape

func (t *Dense) Reshape(dims ...int) error

Reshape reshapes a *Dense. If the tensor needs to be materialized (because it is a view or a transpose), it will be materialized before the reshape happens
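
A minimal sketch (not one of the package's own examples):

T := New(WithBacking(Range(Float64, 0, 6)), WithShape(2, 3))
if err := T.Reshape(3, 2); err != nil {
	panic(err)
}
fmt.Printf("T:\n%v\n", T)
// expected:
// ⎡0  1⎤
// ⎢2  3⎥
// ⎣4  5⎦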

func (*Dense) RollAxis

func (t *Dense) RollAxis(axis, start int, safe bool) (retVal *Dense, err error)

RollAxis rolls the axis backwards until it lies in the given position.

This method was adapted from Numpy's Rollaxis. The licence for Numpy is a BSD-like licence and can be found here: https://github.com/numpy/numpy/blob/master/LICENSE.txt

As a result of being adapted from Numpy, the quirks are also adapted. A good guide reducing the confusion around rollaxis can be found here: http://stackoverflow.com/questions/29891583/reason-why-numpy-rollaxis-is-so-confusing (see answer by hpaulj)

func (*Dense) SVD

func (t *Dense) SVD(uv, full bool) (s, u, v *Dense, err error)

SVD does the Singular Value Decomposition for the *Dense.

It works by temporarily converting the *Dense into a gonum/mat64 matrix and using Gonum's SVD function to perform the SVD. In the future, when gonum/lapack fully supports float32, we'll look into rewriting this

Example
T := New(
	WithShape(4, 5),
	WithBacking([]float64{1, 0, 0, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0}),
)
_, u, _, _ := T.SVD(true, true)
uT := u.Clone().(*Dense)
uT.T()
eye, err := u.MatMul(uT)
fmt.Println(eye)
fmt.Println(err)
Output:

⎡1  0  0  0⎤
⎢0  1  0  0⎥
⎢0  0  1  0⎥
⎣0  0  0  1⎦

<nil>

func (*Dense) SafeT

func (t *Dense) SafeT(axes ...int) (retVal *Dense, err error)

SafeT is exactly like T(), except it returns a new *Dense. The data is also copied over, unmoved.

func (*Dense) ScalarValue

func (t *Dense) ScalarValue() interface{}

ScalarValue returns the scalar value of a *Tensor, IF and ONLY IF it's a Tensor representation of a scalar value. This is required because operations like a (vec · vec) would return a scalar value. I didn't want to return interface{} for all the API methods, so the next best solution is to wrap the scalar value in a *Tensor
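
A minimal sketch (not one of the package's own examples):

T := New(FromScalar(3.14))
fmt.Println(T.ScalarValue()) // expected: 3.14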

func (*Dense) Set

func (a *Dense) Set(i int, x interface{})

Set sets the value of the underlying array at the index i.

func (*Dense) SetAt

func (t *Dense) SetAt(v interface{}, coords ...int) error

SetAt sets the value at the given coordinate
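
A minimal sketch (not one of the package's own examples):

T := New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
if err := T.SetAt(1000.0, 1, 1); err != nil {
	panic(err)
}
fmt.Printf("T:\n%v\n", T) // the centre element (row 1, col 1) is expected to be 1000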

func (*Dense) SetMask

func (t *Dense) SetMask(mask []bool)

func (*Dense) SetMaskAt

func (t *Dense) SetMaskAt(v bool, coords ...int) error

SetMaskAt sets the mask value at the given coordinate

func (*Dense) SetMaskAtIndex

func (t *Dense) SetMaskAtIndex(v bool, i int) error

SetMaskAtIndex sets the value of the mask at a given index

func (*Dense) ShallowClone

func (t *Dense) ShallowClone() *Dense

ShallowClone clones the *Dense without making a copy of the underlying array
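
A minimal sketch (not one of the package's own examples), assuming the clone shares the original's backing array:

T := New(WithBacking([]float64{1, 2, 3, 4}), WithShape(2, 2))
C := T.ShallowClone()
C.Set(0, 1000.0) // C shares T's backing array, so T sees the change too
fmt.Printf("T:\n%v\n", T)
// expected:
// ⎡1000     2⎤
// ⎣   3     4⎦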

func (*Dense) Slice

func (t *Dense) Slice(slices ...Slice) (retVal View, err error)

Slice performs slicing on the *Dense Tensor. It returns a view which shares the same underlying memory as the original *Dense.

Given:

T = NewTensor(WithShape(2,2), WithBacking(RangeFloat64(0,4)))
V, _ := T.Slice(nil, singleSlice(1)) // T[:, 1]

Any modification to the values in V will be reflected in T as well.

The method treats <nil> as equivalent to a colon slice. T.Slice(nil) is equivalent to T[:] in Numpy syntax

Example
var T Tensor
T = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
fmt.Printf("T:\n%v\n", T)

// T[0:2, 0:2]
T, _ = T.Slice(makeRS(0, 2), makeRS(0, 2)) // makeRS is an unexported function that creates a Slice.
fmt.Printf("T[0:2, 0:2]:\n%v\n", T)

// T[:, 1]
T, _ = T.(Slicer).Slice(nil, ss(1)) // ss is unexported
fmt.Printf("T[:, 1]:\n%v\n", T)
Output:

T:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

T[0:2, 0:2]:
⎡0  1⎤
⎣3  4⎦

T[:, 1]:
[1  4]
Example (OneDimension)

Slicing works on one dimensional arrays too:

var T Tensor
T = New(WithBacking(Range(Float64, 0, 9)))
fmt.Printf("T:\n%v\n\n", T)

T, _ = T.Slice(makeRS(0, 5))
fmt.Printf("T[0:5]:\n%v\n", T)
Output:

T:
[0  1  2  3  ... 5  6  7  8]

T[0:5]:
[0  1  2  3  4]
Example (ViewMutation)

Any modifications to the sliced value modify the original tensor as well

var T, V Tensor
T = New(WithBacking(Range(Int, 0, 16)), WithShape(4, 4))
fmt.Printf("T:\n%v\n", T)
V, _ = T.Slice(makeRS(1, 3), makeRS(1, 3))
fmt.Printf("V:\n%v\n", V)

// Now we modify V's 0th value
V.(*Dense).Set(0, 1000)
fmt.Printf("V[0] = 1000:\n%v\n", V)
fmt.Printf("T is also mutated:\n%v", T)
Output:

T:
⎡ 0   1   2   3⎤
⎢ 4   5   6   7⎥
⎢ 8   9  10  11⎥
⎣12  13  14  15⎦

V:
⎡ 5   6⎤
⎣ 9  10⎦

V[0] = 1000:
⎡1000     6⎤
⎣   9    10⎦

T is also mutated:
⎡   0     1     2     3⎤
⎢   4  1000     6     7⎥
⎢   8     9    10    11⎥
⎣  12    13    14    15⎦

func (*Dense) SliceInto

func (t *Dense) SliceInto(view *Dense, slices ...Slice) (retVal View, err error)

SliceInto is a convenience method. It does NOT copy the values - it simply updates the AP of the view. The underlying data is the same. This method will override ALL the metadata in view.

func (*Dense) SoftenMask

func (t *Dense) SoftenMask() bool

SoftenMask forces the mask to be soft

func (*Dense) Stack

func (t *Dense) Stack(axis int, others ...*Dense) (retVal *Dense, err error)

Stack stacks the other tensors along the axis specified. It is like Numpy's stack function.
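
A minimal sketch (not one of the package's own examples); the expected shape is an assumption based on numpy's stack, which adds a new axis (unlike Concat):

T1 := New(WithBacking([]float64{0, 1, 2, 3}), WithShape(2, 2))
T2 := New(WithBacking([]float64{10, 11, 12, 13}), WithShape(2, 2))
T3, err := T1.Stack(0, T2)
if err != nil {
	panic(err)
}
fmt.Println(T3.Shape()) // expected: (2, 2, 2)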

func (*Dense) Sub

func (t *Dense) Sub(other *Dense, opts ...FuncOpt) (retVal *Dense, err error)

Sub performs t - other elementwise. Both t and other must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

Example (Basic)

By default, arithmetic operations are safe

var T1, T2, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Sub(T2)
fmt.Printf("Default operation is safe\n==========================\nT3 = T1 - T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
T3, _ = V.Sub(T2)
fmt.Printf("Default operation is safe (sliced operations)\n=============================================\nT3 = T1[0:2, 0:2] + T2\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output:

Default operation is safe
==========================
T3 = T1 - T2
T3:
⎡-10  -10  -10⎤
⎢-10  -10  -10⎥
⎣-10  -10  -10⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations)
=============================================
T3 = T1[0:2, 0:2] - T2
T3:
⎡-10  -10⎤
⎣ -9   -9⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Incr)

Incrementing a tensor is also a function option provided by the package

var T1, T2, T3, Incr, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
Incr = New(WithBacking([]float64{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3))
T3, _ = T1.Sub(T2, WithIncr(Incr))
fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 - T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)

// Operations on sliced tensors are also allowed. Note that your Incr tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
Incr = New(WithBacking([]float64{100, 100, 100, 100}), WithShape(2, 2))
T3, _ = V.Sub(T2, WithIncr(Incr))
fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 + T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output:

Incr tensor passed in
======================
Incr += T1 - T2
Incr == T3: true
T3:
⎡90  90  90⎤
⎢90  90  90⎥
⎣90  90  90⎦

Incr tensor passed in (sliced tensor)
======================================
Incr += T1[0:2, 0:2] - T2
Incr == T3: true
T3:
⎡90  90⎤
⎣91  91⎦
Example (Reuse)

An optional reuse tensor can also be specified with the WithReuse function option

var T1, V, T2, Reuse, T3 *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float64, 100, 109)), WithShape(3, 3))
T3, _ = T1.Sub(T2, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// You can also use it on operations on sliced tensors - note your reuse tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))
Reuse = New(WithBacking(Range(Float64, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.Sub(T2, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output:

Reuse tensor passed in
======================
T3 == Reuse: true
T3:
⎡-10  -10  -10⎤
⎢-10  -10  -10⎥
⎣-10  -10  -10⎦

Reuse tensor passed in (sliced tensor)
======================================
T3 == Reuse: true
T3:
⎡-10  -10⎤
⎣ -9   -9⎦
Example (Reuse_operand)

An optional reuse tensor can also be specified with the WithReuse function option. Passing in an operand would not cause a problem.

var T1, T2, T3 *Dense

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Sub(T2, WithReuse(T1))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == T1: %t\nT3:\n%v\n", T3 == T1, T3)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Sub(T2, WithReuse(T2))
fmt.Printf("Reuse tensor passed in\n======================\nT3 == T2: %t\nT3:\n%v\n", T3 == T2, T3)
Output:

Reuse tensor passed in
======================
T3 == T1: true
T3:
⎡-10  -10  -10⎤
⎢-10  -10  -10⎥
⎣-10  -10  -10⎦

Reuse tensor passed in
======================
T3 == T2: true
T3:
⎡-10  -10  -10⎤
⎢-10  -10  -10⎥
⎣-10  -10  -10⎦
Example (Unsafe)

To perform unsafe operations, use the `UseUnsafe` function option

var T1, T2, T3, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
T2 = New(WithBacking(Range(Float64, 10, 19)), WithShape(3, 3))
T3, _ = T1.Sub(T2, UseUnsafe())
fmt.Printf("Unsafe Operation\n================\nT3 = T1 - T2\nT1 == T3: %t\nT1:\n%v", T1 == T3, T1)

T1 = New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
T2 = New(WithBacking(Range(Float64, 10, 14)), WithShape(2, 2))

V.Sub(T2, UseUnsafe()) // unsafe overwrites the data in T1
fmt.Printf("Unsafe Operation on sliced Tensors\n==================================\nV = T1[0:2, 0:2] + T2\nV:\n%v\n", V)
fmt.Printf("Naturally, T1 is mutated too:\n%v", T1)
Output:

Unsafe Operation
================
T3 = T1 - T2
T1 == T3: true
T1:
⎡-10  -10  -10⎤
⎢-10  -10  -10⎥
⎣-10  -10  -10⎦
Unsafe Operation on sliced Tensors
==================================
V = T1[0:2, 0:2] - T2
V:
⎡-10  -10⎤
⎣ -9   -9⎦

Naturally, T1 is mutated too:
⎡-10  -10    2⎤
⎢ -9   -9    5⎥
⎣  6    7    8⎦

func (*Dense) SubScalar

func (t *Dense) SubScalar(other interface{}, leftTensor bool, opts ...FuncOpt) (retVal *Dense, err error)

SubScalar performs t - other elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in other. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

Example (Basic)

By default, arithmetic operations are safe

var T1, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
T3, _ = T1.SubScalar(float32(5), true)
fmt.Printf("Default operation is safe (tensor is left operand)\n==========================\nT3 = T1 - 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T3, _ = T1.SubScalar(float32(5), false)
fmt.Printf("Default operation is safe (tensor is right operand)\n==========================\nT3 = 5 - T1\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(1, 3))
V = sliced.(*Dense)
T3, _ = V.SubScalar(float32(5), true)
fmt.Printf("Default operation is safe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 1:3] + 5\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(1, 3))
V = sliced.(*Dense)
T3, _ = V.SubScalar(float32(5), false)
fmt.Printf("Default operation is safe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 - T1[:, 1:3]\nT3:\n%v\nT1 is unchanged:\n%v\n", T3, T1)
Output:

Default operation is safe (tensor is left operand)
==========================
T3 = T1 - 5
T3:
⎡-5  -4  -3⎤
⎢-2  -1   0⎥
⎣ 1   2   3⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (tensor is right operand)
==========================
T3 = 5 - T1
T3:
⎡ 5   4   3⎤
⎢ 2   1   0⎥
⎣-1  -2  -3⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations - tensor is left operand)
=============================================
T3 = T1[:, 1:3] - 5
T3:
⎡-4  -3⎤
⎢-1   0⎥
⎣ 2   3⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦

Default operation is safe (sliced operations - tensor is right operand)
=============================================
T3 = 5 - T1[:, 1:3]
T3:
⎡ 4   3⎤
⎢ 1   0⎥
⎣-2  -3⎦

T1 is unchanged:
⎡0  1  2⎤
⎢3  4  5⎥
⎣6  7  8⎦
Example (Incr)

Incrementing a tensor is also a function option provided by the package

var T1, T3, Incr, V *Dense
var sliced Tensor

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
Incr = New(WithBacking([]float32{100, 100, 100, 100, 100, 100, 100, 100, 100}), WithShape(3, 3))
T3, _ = T1.SubScalar(float32(5), true, WithIncr(Incr))
fmt.Printf("Incr tensor passed in\n======================\nIncr += T1 - T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)

// Operations on sliced tensors are also allowed. Note that your Incr tensor has to be the same shape as the result
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
Incr = New(WithBacking([]float32{100, 100, 100, 100}), WithShape(2, 2))
T3, _ = V.SubScalar(float32(5), true, WithIncr(Incr))
fmt.Printf("Incr tensor passed in (sliced tensor)\n======================================\nIncr += T1 - T2\nIncr == T3: %t\nT3:\n%v\n", Incr == T3, T3)
Output:

Incr tensor passed in
======================
Incr += T1 - 5
Incr == T3: true
T3:
⎡ 95   96   97⎤
⎢ 98   99  100⎥
⎣101  102  103⎦

Incr tensor passed in (sliced tensor)
======================================
Incr += T1[0:2, 0:2] - 5
Incr == T3: true
T3:
⎡95  96⎤
⎣98  99⎦
Example (Reuse)

Reuse tensors may be used, with the WithReuse() function option.

var T1, V, Reuse, T3 *Dense
var sliced Tensor

// Tensor is left operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3))
T3, _ = T1.SubScalar(float32(5), true, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (Tensor is left operand)\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// Tensor is right operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
Reuse = New(WithBacking(Range(Float32, 100, 109)), WithShape(3, 3))
T3, _ = T1.SubScalar(float32(5), false, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (Tensor is right operand)\n======================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// Tensor is left operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.SubScalar(float32(5), true, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%v\n", T3 == Reuse, T3)

// Tensor is right operand
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(makeRS(0, 2), makeRS(0, 2))
V = sliced.(*Dense)
Reuse = New(WithBacking(Range(Float32, 100, 104)), WithShape(2, 2)) // same shape as result
T3, _ = V.SubScalar(float32(5), false, WithReuse(Reuse))
fmt.Printf("Reuse tensor passed in (sliced tensor - Tensor is left operand)\n======================================\nT3 == Reuse: %t\nT3:\n%v", T3 == Reuse, T3)
Output:

Reuse tensor passed in (Tensor is left operand)
======================
T3 == Reuse: true
T3:
⎡-5  -4  -3⎤
⎢-2  -1   0⎥
⎣ 1   2   3⎦

Reuse tensor passed in (Tensor is right operand)
======================
T3 == Reuse: true
T3:
⎡ 5   4   3⎤
⎢ 2   1   0⎥
⎣-1  -2  -3⎦

Reuse tensor passed in (sliced tensor - Tensor is left operand)
======================================
T3 == Reuse: true
T3:
⎡-5  -4⎤
⎣-2  -1⎦

Reuse tensor passed in (sliced tensor - Tensor is right operand)
======================================
T3 == Reuse: true
T3:
⎡5  4⎤
⎣2  1⎦
Example (Unsafe)
var T1, T3, V *Dense
var sliced Tensor
T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
T3, _ = T1.SubScalar(float32(5), true, UseUnsafe())
fmt.Printf("Operation is unsafe (tensor is left operand)\n==========================\nT3 = T1 - 5\nT3:\n%v\nT3 == T1: %t\nT1 is changed:\n%v\n", T3, T3 == T1, T1)

T3, _ = T1.SubScalar(float32(5), false, UseUnsafe())
fmt.Printf("Operation is unsafe (tensor is right operand)\n==========================\nT3 = 5 - T1\nT3:\n%v\nT3 == T1: %t\nT1 is changed:\n%v\n", T3, T3 == T1, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(0, 2))
V = sliced.(*Dense)
T3, _ = V.SubScalar(float32(5), true, UseUnsafe())
fmt.Printf("Operation is unsafe (sliced operations - tensor is left operand)\n=============================================\nT3 = T1[:, 0:2] + 5\nT3:\n%v\nsliced == T3: %t\nT1 is changed:\n%v\n", T3, sliced == T3, T1)

T1 = New(WithBacking(Range(Float32, 0, 9)), WithShape(3, 3))
sliced, _ = T1.Slice(nil, makeRS(0, 2))
V = sliced.(*Dense)
T3, _ = V.SubScalar(float32(5), false, UseUnsafe())
fmt.Printf("Operation is unsafe (sliced operations - tensor is right operand)\n=============================================\nT3 = 5 - T1[:, 0:2]\nT3:\n%v\nsliced == T3: %t\nT1 is changed:\n%v\n", T3, sliced == T3, T1)
Output:

Operation is unsafe (tensor is left operand)
==========================
T3 = T1 - 5
T3:
⎡-5  -4  -3⎤
⎢-2  -1   0⎥
⎣ 1   2   3⎦

T3 == T1: true
T1 is changed:
⎡-5  -4  -3⎤
⎢-2  -1   0⎥
⎣ 1   2   3⎦

Operation is unsafe (tensor is right operand)
==========================
T3 = 5 - T1
T3:
⎡10   9   8⎤
⎢ 7   6   5⎥
⎣ 4   3   2⎦

T3 == T1: true
T1 is changed:
⎡10   9   8⎤
⎢ 7   6   5⎥
⎣ 4   3   2⎦

Operation is unsafe (sliced operations - tensor is left operand)
=============================================
T3 = T1[:, 0:2] - 5
T3:
⎡-5  -4⎤
⎢-2  -1⎥
⎣ 1   2⎦

sliced == T3: true
T1 is changed:
⎡-5  -4   2⎤
⎢-2  -1   5⎥
⎣ 1   2   8⎦

Operation is unsafe (sliced operations - tensor is right operand)
=============================================
T3 = 5 - T1[:, 0:2]
T3:
⎡ 5   4⎤
⎢ 2   1⎥
⎣-1  -2⎦

sliced == T3: true
T1 is changed:
⎡ 5   4   2⎤
⎢ 2   1   5⎥
⎣-1  -2   8⎦

func (*Dense) Sum

func (t *Dense) Sum(along ...int) (retVal *Dense, err error)

func (*Dense) T

func (t *Dense) T(axes ...int) (err error)

T performs a thunked transpose. It doesn't actually do anything, except store extra information about the post-transposed shapes and strides. Usually this is more than enough, as BLAS will handle the rest of the transpose
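
A rough sketch of the thunking behaviour: T() only flips the shape and stride metadata, while Transpose() (documented below) actually rearranges the data.

M := New(WithBacking(Range(Float64, 0, 6)), WithShape(2, 3))
_ = M.T()              // thunked: only metadata changes
fmt.Println(M.Shape()) // (3, 2)
_ = M.Transpose()      // materializes the transpose by moving the data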

func (*Dense) TensorMul

func (t *Dense) TensorMul(other Tensor, axesA, axesB []int) (retVal *Dense, err error)

TensorMul is for multiplying Tensors with more than 2 dimensions.

The algorithm is conceptually simple (but tricky to get right):

  1. Transpose and reshape the Tensors in such a way that both t and other are 2D matrices
  2. Use DGEMM to multiply them
  3. Reshape the results to be the new expected result

This function is a Go implementation of Numpy's tensordot method. It simplifies a lot of what Numpy does.
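
A sketch of how the axes arguments pair up (the shapes below are illustrative, chosen to match the classic tensordot example): axesA and axesB name the dimensions of t and other that are contracted against each other; the remaining dimensions form the result.

A := New(WithShape(3, 4, 5), WithBacking(Range(Float64, 0, 60)))
B := New(WithShape(4, 3, 2), WithBacking(Range(Float64, 0, 24)))
C, err := A.TensorMul(B, []int{1, 0}, []int{0, 1})
if err == nil {
	fmt.Println(C.Shape()) // (5, 2), as with np.tensordot(a, b, axes=([1, 0], [0, 1]))
}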

func (*Dense) Trace

func (t *Dense) Trace() (retVal interface{}, err error)

Trace returns the trace of the matrix (i.e. the sum of the diagonal elements). It only works for matrices
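
A quick sketch: for a 3×3 matrix backed by the values 0..8, the trace is 0 + 4 + 8. The result comes back as an interface{} holding the tensor's element type.

M := New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
tr, err := M.Trace()
if err == nil {
	fmt.Println(tr) // 12 (a float64 wrapped in an interface{})
}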

func (*Dense) Transpose

func (t *Dense) Transpose() error

Transpose() actually transposes the data. This is a generalized version of the inplace matrix transposition algorithm from Wikipedia: https://en.wikipedia.org/wiki/In-place_matrix_transposition

func (*Dense) UT

func (t *Dense) UT()

UT is a quick way to untranspose a currently transposed *Dense. The reason for having this is quite simply illustrated by this problem:

T = NewTensor(WithShape(2,3,4))
T.T(1,2,0)

To untranspose that, we'd need to apply a transpose of (2,0,1). This means having to keep track of and calculate the transposes. Instead, here's a helpful convenience function to instantly untranspose any previous transposes.

Nothing will happen if there was no previous transpose
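
A brief sketch of the convenience this buys, reusing the shape from the snippet above:

T := New(WithShape(2, 3, 4), WithBacking(Range(Float64, 0, 24)))
_ = T.T(1, 2, 0) // shape is now (3, 4, 2)
T.UT()           // back to (2, 3, 4), without working out the inverse permutation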

func (*Dense) Uintptr

func (a *Dense) Uintptr() uintptr

Uintptr returns the pointer of the first value of the slab

func (*Dense) Vstack

func (t *Dense) Vstack(others ...*Dense) (*Dense, error)

Vstack stacks other tensors rowwise (vertical stacking). Vertical stacking requires all involved Tensors to have at least 2 dimensions

Example
var T, T1, T2, T3 *Dense
var err error

T = New(WithBacking(Range(Float64, 0, 4)), WithShape(2, 2))
T1 = New(WithBacking([]float64{1000, 2000}), WithShape(1, 2))

// Simple example
if T2, err = T.Vstack(T1); err == nil {
	fmt.Printf("T.Vstack(T1):\n%v\n", T2)
} else {
	fmt.Printf("%+v", err)
}

// You can stack more than one, as long as all the tensors have the same shape
T3 = T1.Clone().(*Dense)
if T2, err = T.Vstack(T1, T3); err == nil {
	fmt.Printf("T.Vstack(T1, T3):\n%v\n", T2)
} else {
	fmt.Printf("====\nerr %v\n%v\n===\n", err, T3.Shape())
}

// Let's look at failure conditions
// All tensors must be at least 2D
T.Reshape(4)
if _, err = T.Vstack(T1); err != nil {
	fmt.Printf("Vstacking (4) with (1, 2): %v\n", err)
}
if _, err = T1.Vstack(T); err != nil {
	fmt.Printf("Vstacking (1, 2) with (4): %v\n", err)
}
Output:

T.Vstack(T1):
⎡   0     1⎤
⎢   2     3⎥
⎣1000  2000⎦

T.Vstack(T1, T3):
⎡   0     1⎤
⎢   2     3⎥
⎢1000  2000⎥
⎣1000  2000⎦

Vstacking (4) with (1, 2): Tensor has to be at least 2 dimensions
Vstacking (1, 2) with (4): Tensor has to be at least 2 dimensions

func (*Dense) WriteCSV

func (t *Dense) WriteCSV(w io.Writer, formats ...string) (err error)

WriteCSV writes the *Dense to a CSV. It accepts an optional string formatting ("%v", "%f", etc...), which controls what is written to the CSV. If tensor is masked, invalid values are replaced by the default fill value.
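
A minimal sketch, writing into an in-memory buffer (the standard library's bytes package is assumed to be imported):

var buf bytes.Buffer
T := New(WithBacking(Range(Float64, 0, 6)), WithShape(2, 3))
if err := T.WriteCSV(&buf, "%.2f"); err == nil {
	fmt.Print(buf.String()) // two CSV rows of three formatted values each
}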

func (*Dense) WriteNpy

func (t *Dense) WriteNpy(w io.Writer) (err error)

WriteNpy writes the *Dense as a numpy-compatible serialized file.

The format is very well documented here: http://docs.scipy.org/doc/numpy/neps/npy-format.html

Gorgonia specifically uses Version 1.0, as 65535 bytes should be more than enough for the headers. The values are written in little endian order, because let's face it - 90% of the world's computers are running on x86+ processors.

This method does not close the writer. Closing (if needed) is deferred to the caller. If the tensor is masked, invalid values are replaced by the default fill value.
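
A minimal sketch, again writing into an in-memory buffer (bytes is assumed to be imported); the payload can then be saved to a .npy file and read back with numpy.load:

var buf bytes.Buffer
T := New(WithBacking(Range(Float64, 0, 6)), WithShape(2, 3))
if err := T.WriteNpy(&buf); err != nil {
	fmt.Println(err)
}
// buf now holds a version 1.0 .npy serialization of T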

func (*Dense) Zero

func (t *Dense) Zero()

type DenseStacker

type DenseStacker interface {
	StackDense(t DenseTensor, axis int, others ...DenseTensor) (retVal DenseTensor, err error)
}

DenseStacker is any engine that can stack DenseTensors along an axis. This is a specialization of Stacker.

type DenseTensor

type DenseTensor interface {
	Tensor
	Info() *AP

	IsMatrix() bool
	IsVector() bool
	IsRowVec() bool
	IsColVec() bool

	// operations
	Inner(other Tensor) (retVal interface{}, err error)
	MatMul(other Tensor, opts ...FuncOpt) (retVal *Dense, err error)
	MatVecMul(other Tensor, opts ...FuncOpt) (retVal *Dense, err error)
	TensorMul(other Tensor, axesA, axesB []int) (retVal *Dense, err error)
	// contains filtered or unexported methods
}

DenseTensor is the interface for any Dense tensor.

type Densor

type Densor interface {
	Dense() *Dense
}

A Densor is any type that can return a *Dense

type Diager

type Diager interface {
	Diag(a Tensor) (Tensor, error)
}

Diager is any engine that can return a tensor that only contains the diagonal values of the input

type Diver

type Diver interface {
	// Div performs a / b
	Div(a, b Tensor, opts ...FuncOpt) (Tensor, error)

	// DivScalar divides a scalar from/to the tensor. leftTensor indicates if the tensor is the left operand.
	// Whether or not the input tensor is clobbered is left to the implementation
	DivScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

Diver is any engine that can perform elementwise division.

type Dotter

type Dotter interface {
	Dot(a, b Tensor, opts ...FuncOpt) (Tensor, error)
}

Dotter is used to implement sparse matrices

type Dtype

type Dtype struct {
	reflect.Type
}

Dtype represents a data type of a Tensor. Concretely it's implemented as an embedded reflect.Type which allows for easy reflection operations. It also implements hm.Type, for type inference in Gorgonia

func (Dtype) Apply

func (dt Dtype) Apply(hm.Subs) hm.Substitutable

func (Dtype) Eq

func (dt Dtype) Eq(other hm.Type) bool

func (Dtype) Format

func (dt Dtype) Format(s fmt.State, c rune)

func (Dtype) FreeTypeVar

func (dt Dtype) FreeTypeVar() hm.TypeVarSet

func (Dtype) Normalize

func (dt Dtype) Normalize(k, v hm.TypeVarSet) (hm.Type, error)

func (Dtype) Types

func (dt Dtype) Types() hm.Types

type Dtyper

type Dtyper interface {
	Dtype() Dtype
}

Dtyper is any type that has a Dtype

type ElEqer

type ElEqer interface {
	ElEq(a, b Tensor, opts ...FuncOpt) (Tensor, error)
	EqScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)

	ElNe(a, b Tensor, opts ...FuncOpt) (Tensor, error)
	NeScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

ElEqer is any engine that can perform the elementwise equality comparison operation.

type Engine

type Engine interface {
	AllocAccessible() bool                    // AllocAccessible returns true if the engine returns Go-accessible memory pointers
	Alloc(size int64) (Memory, error)         // Alloc allocates memory
	Free(mem Memory, size int64) error        // Free frees memory
	Memset(mem Memory, val interface{}) error // Memset sets all values in the memory to val
	Memclr(mem Memory)                        // Memclr zeroes the memory
	Memcpy(dst, src Memory) error             // Memcpy copies src to dst
	Accessible(mem Memory) (Memory, error)    // Accessible returns Go-accessible memory pointers, or errors if it cannot be done
	WorksWith(order DataOrder) bool           // WorksWith returns true if the data order can be directly worked with
}

Engine is a representation of an execution engine. While different execution engines can have different capabilities, all execution engines must be able to allocate and free memory

type Eq

type Eq interface {
	Eq(interface{}) bool
}

Eq is any type where you can perform an equality test

type Exper

type Exper interface {
	Exp(a Tensor, opts ...FuncOpt) (Tensor, error)
}

Exper is any engine that can perform elementwise natural exponentiation on the values in a Tensor.

type FMAer

type FMAer interface {
	FMA(a, x, y Tensor) (Tensor, error)
	FMAScalar(a Tensor, x interface{}, y Tensor) (Tensor, error)
}

FMAer is any engine that can perform fused multiply add functions: A * X + Y. Also known as Axpy.

type FlatIterator

type FlatIterator struct {
	*AP
	// contains filtered or unexported fields
}

FlatIterator is an iterator that iterates over Tensors according to the data's layout. It utilizes the *AP of a Tensor to determine what the next index is. This data structure is similar to Numpy's flatiter, with some standard Go based restrictions of course (such as, not allowing negative indices)

func FlatIteratorFromDense

func FlatIteratorFromDense(tt DenseTensor) *FlatIterator

FlatIteratorFromDense creates a new FlatIterator from a dense tensor
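
A small sketch of the basic iteration idiom:

T := New(WithBacking(Range(Float64, 0, 6)), WithShape(2, 3))
it := FlatIteratorFromDense(T)
for i, err := it.Next(); err == nil; i, err = it.Next() {
	fmt.Println(i) // 0 through 5 for this contiguous, row-major tensor
}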

func (*FlatIterator) Chan

func (it *FlatIterator) Chan() (retVal chan int)

Chan returns a channel of ints. This is useful for iterating multiple Tensors at the same time.

func (*FlatIterator) Coord

func (it *FlatIterator) Coord() []int

Coord returns the next coordinate. When Next() is called, the coordinates are updated AFTER the Next() returned. See example for more details.

The returned coordinates are mutable. Changing any values in the return value will change the state of the iterator

func (*FlatIterator) Done

func (it *FlatIterator) Done() bool

Done checks whether iterators are done

func (*FlatIterator) Next

func (it *FlatIterator) Next() (int, error)

Next returns the index of the current coordinate.

func (*FlatIterator) NextInvalid

func (it *FlatIterator) NextInvalid() (int, int, error)

NextInvalid returns the index of the current coordinate. It is identical to Next for FlatIterator, but also returns the number of increments needed to reach the next invalid element (1, or -1 in the reverse case). Like NextValid, this method's purpose is to maintain consistency with the masked iterator, for which the step between invalid elements can be anywhere from 0 to the tensor's length

func (*FlatIterator) NextValid

func (it *FlatIterator) NextValid() (int, int, error)

NextValid returns the index of the current coordinate. It is identical to Next for FlatIterator, but also returns the number of increments needed to reach the next element (1, or -1 in the reverse case). This is to maintain consistency with the masked iterator, for which the step between valid elements can be more than 1

func (*FlatIterator) NextValidity

func (it *FlatIterator) NextValidity() (int, bool, error)

NextValidity returns the index of the current coordinate and whether or not it's valid. It is otherwise identical to Next()

func (*FlatIterator) Reset

func (it *FlatIterator) Reset()

Reset resets the iterator state.

func (*FlatIterator) SetForward

func (it *FlatIterator) SetForward()

SetForward initializes iterator to run forwards

func (*FlatIterator) SetReverse

func (it *FlatIterator) SetReverse()

SetReverse initializes iterator to run backwards

func (*FlatIterator) Slice

func (it *FlatIterator) Slice(sli Slice) (retVal []int, err error)

Slice is a convenience method that iterates the iterator and returns the flat indices selected by the given Slice

func (*FlatIterator) Start

func (it *FlatIterator) Start() (int, error)

Start begins iteration

type FlatMaskedIterator

type FlatMaskedIterator struct {
	*FlatIterator
	// contains filtered or unexported fields
}

FlatMaskedIterator is an iterator that iterates over simple masked Tensors. It is used when the mask stride is identical to data stride with the exception of trailing zeros, in which case the data index is always a perfect integer multiple of the mask index

func FlatMaskedIteratorFromDense

func FlatMaskedIteratorFromDense(tt MaskedTensor) *FlatMaskedIterator

FlatMaskedIteratorFromDense creates a new FlatMaskedIterator from dense tensor

func (*FlatMaskedIterator) NextInvalid

func (it *FlatMaskedIterator) NextInvalid() (int, int, error)

NextInvalid returns the index of the next invalid element as well as the number of increments to get to next invalid element

func (*FlatMaskedIterator) NextValid

func (it *FlatMaskedIterator) NextValid() (int, int, error)

NextValid returns the index of the next valid element, as well as the number of increments to get to next element

func (*FlatMaskedIterator) NextValidity

func (it *FlatMaskedIterator) NextValidity() (int, bool, error)

type FlatSparseIterator

type FlatSparseIterator struct {
	*CS
	// contains filtered or unexported fields
}

FlatSparseIterator is an iterator that works very much in the same way as flatiterator, except for sparse tensors

func NewFlatSparseIterator

func NewFlatSparseIterator(t *CS) *FlatSparseIterator

func (FlatSparseIterator) Cap added in v0.9.15

func (a FlatSparseIterator) Cap() int

func (*FlatSparseIterator) Coord

func (it *FlatSparseIterator) Coord() []int

func (FlatSparseIterator) Data

func (a FlatSparseIterator) Data() interface{}

Data returns the representation of a slice.

func (*FlatSparseIterator) Done

func (it *FlatSparseIterator) Done() bool

func (FlatSparseIterator) Get

func (a FlatSparseIterator) Get(i int) interface{}

Get returns the ith element of the underlying array of the *Dense tensor.

func (FlatSparseIterator) Len added in v0.9.15

func (a FlatSparseIterator) Len() int

func (FlatSparseIterator) Memset

func (a FlatSparseIterator) Memset(x interface{}) error

Memset sets all values in the array.

func (*FlatSparseIterator) Next

func (it *FlatSparseIterator) Next() (int, error)

func (*FlatSparseIterator) NextInvalid

func (it *FlatSparseIterator) NextInvalid() (int, int, error)

func (*FlatSparseIterator) NextValid

func (it *FlatSparseIterator) NextValid() (int, int, error)

func (*FlatSparseIterator) NextValidity

func (it *FlatSparseIterator) NextValidity() (int, bool, error)

func (*FlatSparseIterator) Reset

func (it *FlatSparseIterator) Reset()

func (FlatSparseIterator) Set

func (a FlatSparseIterator) Set(i int, x interface{})

Set sets the value of the underlying array at the index i.

func (*FlatSparseIterator) SetForward

func (it *FlatSparseIterator) SetForward()

func (*FlatSparseIterator) SetReverse

func (it *FlatSparseIterator) SetReverse()

func (*FlatSparseIterator) Start

func (it *FlatSparseIterator) Start() (int, error)

func (FlatSparseIterator) Zero

func (a FlatSparseIterator) Zero()

Zero zeroes out the underlying array of the *Dense tensor.

type Float32Engine

type Float32Engine struct {
	StdEng
}

Float32Engine is an execution engine that is optimized to work only with float32s. It assumes all data are float32s.

Use this engine only as a form of optimization. You should probably be using the basic default engine for most cases.

func (Float32Engine) Add

func (e Float32Engine) Add(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Add performs a + b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (Float32Engine) FMA

func (e Float32Engine) FMA(a, x, y Tensor) (retVal Tensor, err error)

func (Float32Engine) FMAScalar

func (e Float32Engine) FMAScalar(a Tensor, x interface{}, y Tensor) (retVal Tensor, err error)

func (Float32Engine) Inner

func (e Float32Engine) Inner(a, b Tensor) (retVal float32, err error)

type Float64Engine

type Float64Engine struct {
	StdEng
}

Float64Engine is an execution engine that is optimized to work only with float64s. It assumes all data are float64s.

Use this engine only as a form of optimization. You should probably be using the basic default engine for most cases.

func (Float64Engine) Add

func (e Float64Engine) Add(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Add performs a + b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (Float64Engine) FMA

func (e Float64Engine) FMA(a, x, y Tensor) (retVal Tensor, err error)

func (Float64Engine) FMAScalar

func (e Float64Engine) FMAScalar(a Tensor, x interface{}, y Tensor) (retVal Tensor, err error)

func (Float64Engine) Inner

func (e Float64Engine) Inner(a, b Tensor) (retVal float64, err error)

type FuncOpt

type FuncOpt func(*OpOpt)

FuncOpts are optional parameters for calling Tensor functions.

func As

func As(t Dtype) FuncOpt

As makes sure that the return Tensor is of the type specified. Currently it only works for FromMat64

func AsSameType

func AsSameType() FuncOpt

AsSameType makes sure that the return Tensor is the same type as input Tensors.

func UseSafe

func UseSafe() FuncOpt

UseSafe ensures that the operation is a safe operation (copies data, does not clobber). This is the default option for most methods and functions

func UseUnsafe

func UseUnsafe() FuncOpt

UseUnsafe ensures that the operation is an unsafe operation - data will be clobbered, and operations are performed in place

func WithIncr

func WithIncr(incr Tensor) FuncOpt

WithIncr passes in a Tensor to be incremented.

func WithReuse

func WithReuse(reuse Tensor) FuncOpt

WithReuse passes in a Tensor to be reused.

type Gteer

type Gteer interface {
	Gte(a, b Tensor, opts ...FuncOpt) (Tensor, error)
	GteScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

Gteer is any engine that can perform the Gte operation.

type Gter

type Gter interface {
	Gt(a, b Tensor, opts ...FuncOpt) (Tensor, error)
	GtScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

Gter is any engine that can perform the Gt operation.

type InfChecker

type InfChecker interface {
	HasInf(t Tensor) (bool, error)
}

InfChecker checks that the tensor contains an Inf. Errors are to be returned if the concept of Inf does not apply to the data type. Other errors may also occur. See specific implementations for details

type InnerProder

type InnerProder interface {
	Inner(a, b Tensor) (interface{}, error) // Inner always returns a scalar value
}

InnerProder is any engine that can perform inner product multiplication

type InnerProderF32

type InnerProderF32 interface {
	Inner(a, b Tensor) (float32, error)
}

InnerProderF32 is an optimization for float32 - results are returned as float32.

type InnerProderF64

type InnerProderF64 interface {
	Inner(a, b Tensor) (float64, error)
}

InnerProderF64 is an optimization for float64 - results are returned as float64

type InvSqrter

type InvSqrter interface {
	InvSqrt(a Tensor, opts ...FuncOpt) (Tensor, error)
}

InvSqrter is any engine that can perform 1/sqrt(x) on the values of a Tensor.

type Inver

type Inver interface {
	Inv(a Tensor, opts ...FuncOpt) (Tensor, error)
}

Inver is any engine that can perform 1/x for each element in the Tensor.

type Iterator

type Iterator interface {
	// Start returns the first index
	Start() (int, error)

	// Next returns the next index. Next is defined as the next value in the coordinates
	// For example: let x be a (5,5) matrix that is row-major. Current index is for the coordinate (3,3).
	// Next() returns the index of (3,4).
	//
	// If there is no underlying data store for (3,4) - say for example, the matrix is a sparse matrix, it return an error.
	// If however, there is an underlying data store for (3,4), but it's not valid (for example, masked tensors), it will not return an error.
	//
	// Second example: let x be a (5,5) matrix that is col-major. Current index is for coordinate (3,3).
	// Next() returns the index of (4,3).
	Next() (int, error)

	// NextValidity is like Next, but returns the validity of the value at the index as well.
	NextValidity() (int, bool, error)

	// NextValid returns the next valid index, as well as a skip count.
	NextValid() (int, int, error)

	// NextInvalid returns the next invalid index, as well as a skip count.
	NextInvalid() (int, int, error)

	// Reset resets the iterator
	Reset()

	// SetReverse tells the iterator to iterate in reverse
	SetReverse()

	// SetForward tells the iterator to iterate forwards
	SetForward()

	// Coord returns the coordinates
	Coord() []int

	// Done returns true when the iterator is done iterating.
	Done() bool

	// Shape returns the shape of the multidimensional tensor it's iterating on.
	Shape() Shape
}

Iterator is the generic iterator interface. It's used to iterate across multi-dimensional slices, no matter the underlying data arrangement

func IteratorFromDense

func IteratorFromDense(tts ...DenseTensor) Iterator

IteratorFromDense creates a new Iterator from a list of dense tensors

func NewIterator

func NewIterator(aps ...*AP) Iterator

NewIterator creates a new Iterator from an ap. The type of iterator depends on number of aps passed, and whether they are masked or not

type Kinder

type Kinder interface {
	Kind() reflect.Kind
}

Kinder. Bueno.

type Log10er

type Log10er interface {
	Log10(a Tensor, opt ...FuncOpt) (Tensor, error)
}

Log10er is any engine that can perform base-10 logarithm on the values in a Tensor.

type Log2er

type Log2er interface {
	Log2(a Tensor, opt ...FuncOpt) (Tensor, error)
}

Log2er is any engine that can perform base-2 logarithm on the values in a Tensor.

type Loger

type Loger interface {
	Log(a Tensor, opt ...FuncOpt) (Tensor, error)
}

Loger is any engine that can perform natural log on the values in a Tensor.

type Lteer

type Lteer interface {
	Lte(a, b Tensor, opts ...FuncOpt) (Tensor, error)
	LteScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

Lteer is any engine that can perform the Lte operation.

type Lter

type Lter interface {
	Lt(a, b Tensor, opts ...FuncOpt) (Tensor, error)
	LtScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

Lter is any engine that can perform the Lt operation.

type Mapper

type Mapper interface {
	Map(fn interface{}, a Tensor, opts ...FuncOpt) (Tensor, error)
}

Mapper is any engine that can map a function onto the values of a tensor.

type MaskedTensor

type MaskedTensor interface {
	DenseTensor
	IsMasked() bool
	SetMask([]bool)
	Mask() []bool
}

type MatMuler

type MatMuler interface {
	MatMul(a, b, preallocated Tensor) error
}

MatMuler is any engine that can perform matrix multiplication

type MatVecMuler

type MatVecMuler interface {
	MatVecMul(a, b, preallocated Tensor) error
}

MatVecMuler is any engine that can perform matrix vector multiplication

type MathError

type MathError interface {
	Indices() []int
}

MathError is an error that occurs in an Array. It lists the indices for which an error has happened

type MaxBetweener added in v0.9.21

type MaxBetweener interface {
	MaxBetween(a, b Tensor, opts ...FuncOpt) (Tensor, error)

	MaxBetweenScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

MaxBetweener is any engine that can perform an elementwise max-between.

type Maxer

type Maxer interface {
	Max(a Tensor, along ...int) (Tensor, error)
}

Maxer is any engine that can find the maximum value along an axis of a Tensor.

type MemSetter

type MemSetter interface {
	Memset(interface{}) error
}

A MemSetter is any type that can set itself to a value.

type Memory

type Memory interface {
	Uintptr() uintptr
	MemSize() uintptr
}

Memory is a representation of memory of the value.

The main reason for requiring the Uintptr() method is that, even though Go currently does not have a compacting garbage collector, the documentation of `unsafe` warns:

Even if a uintptr holds the address of some object, the garbage collector will not update that uintptr's value if the object moves,
nor will that uintptr keep the object from being reclaimed.

type MemoryFlag

type MemoryFlag byte

MemoryFlag is a flag representing the use possibilities of Memory

const (
	// NativelyInaccessible indicates that the data in the memory cannot be accessed by Go code.
	NativelyInaccessible MemoryFlag = 1 << iota
	// ManuallyManaged indicates that the memory is managed by something else. Any Tensor with
	// manually managed memory will not be returned to the pool.
	ManuallyManaged
	// IsOverallocated indicates that the memory for a given tensor is overallocated (i.e. the size-in-use is smaller than the size allocated)
	IsOverallocated
)

func MakeMemoryFlag

func MakeMemoryFlag(fs ...MemoryFlag) (retVal MemoryFlag)

type MinBetweener added in v0.9.21

type MinBetweener interface {
	MinBetween(a, b Tensor, opts ...FuncOpt) (Tensor, error)

	MinBetweenScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

MinBetweener is any engine that can perform an elementwise min-between.

type Miner

type Miner interface {
	Min(a Tensor, along ...int) (Tensor, error)
}

Miner is any engine that can find the minimum value along an axis of a Tensor.

type Moder

type Moder interface {
	// Mod performs a % b
	Mod(a, b Tensor, opts ...FuncOpt) (Tensor, error)

	// ModScalar performs a % b where one of the operands is scalar. leftTensor indicates if the tensor is the left operand.
	// Whether or not the input tensor is clobbered is left to the implementation
	ModScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

Moder is any engine that can perform elementwise Mod()

type Muler

type Muler interface {
	// Mul performs a * b
	Mul(a, b Tensor, opts ...FuncOpt) (Tensor, error)

	// MulScalar multiplies a scalar to the tensor. leftTensor indicates if the tensor is the left operand.
	// Whether or not the input tensor is clobbered is left to the implementation
	MulScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

Muler is any engine that can perform elementwise multiplication. For matrix multiplication, an engine should implement MatMul() or MatVecMul() or Inner()

type MultIterator

type MultIterator struct {
	*AP // Uses AP of the largest tensor in list
	// contains filtered or unexported fields
}

MultIterator is an iterator that iterates over multiple tensors, including masked tensors.

It utilizes the *AP of a Tensor to determine what the next index is.

This data structure is similar to Numpy's flatiter, with some standard Go based restrictions of course (such as, not allowing negative indices)

func MultIteratorFromDense

func MultIteratorFromDense(tts ...DenseTensor) *MultIterator

MultIteratorFromDense creates a new MultIterator from a list of dense tensors

func NewMultIterator

func NewMultIterator(aps ...*AP) *MultIterator

NewMultIterator creates a new MultIterator from a list of APs

func (*MultIterator) Coord

func (it *MultIterator) Coord() []int

Coord returns the next coordinate. When Next() is called, the coordinates are updated AFTER the Next() returned. See example for more details.

func (*MultIterator) Done

func (it *MultIterator) Done() bool

Done checks whether iterators are done

func (*MultIterator) LastIndex

func (it *MultIterator) LastIndex(j int) int

LastIndex returns the index of the requested iterator

func (*MultIterator) Next

func (it *MultIterator) Next() (int, error)

Next returns the index of the next coordinate

func (*MultIterator) NextInvalid

func (it *MultIterator) NextInvalid() (int, int, error)

NextInvalid returns the index of the next invalid coordinate

func (*MultIterator) NextValid

func (it *MultIterator) NextValid() (int, int, error)

NextValid returns the index of the next valid coordinate

func (*MultIterator) NextValidity

func (it *MultIterator) NextValidity() (int, bool, error)

func (*MultIterator) Reset

func (it *MultIterator) Reset()

Reset resets the iterator state.

func (*MultIterator) SetForward

func (it *MultIterator) SetForward()

SetForward initializes iterator to run forward

func (*MultIterator) SetReverse

func (it *MultIterator) SetReverse()

SetReverse initializes iterator to run backward

func (*MultIterator) Start

func (it *MultIterator) Start() (int, error)

Start begins iteration

type NaNChecker

type NaNChecker interface {
	HasNaN(t Tensor) (bool, error)
}

NaNChecker checks that the tensor contains a NaN. Errors are to be returned if the concept of NaN does not apply to the data type. Other errors may also occur. See specific implementations for details

type Neger

type Neger interface {
	Neg(a Tensor, opts ...FuncOpt) (Tensor, error)
}

Neger is any engine that can negate the sign of the values in the tensor.

type NoOpError

type NoOpError interface {
	NoOp() bool
}

NoOpError is useful for operations that have no op.

type NonStdEngine

type NonStdEngine interface {
	NonStdAlloc() // noop
}

NonStdEngine is any engine that does not allocate using the default built-in allocator

type NormOrder

type NormOrder float64

NormOrder represents the order of the norm. Ideally, we'd only represent norms with a uint/byte. But there are norm types that fall outside numerical types, such as the nuclear norm and the Frobenius norm, so it is internally represented by a float. If Go could use NaN and Inf as consts it would have been best; instead, we use constructors. Both the Nuclear and Frobenius norm types are represented as NaNs

The use of NaN and Inf as "special" norm types leads to the need for the IsInf(), IsFrobenius() and IsNuclear() methods
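
A short sketch of the constructor-plus-predicate pattern described above:

fro := FrobeniusNorm()
fmt.Println(fro.IsFrobenius()) // true: Frobenius is represented by a NaN sentinel
fmt.Println(Norm(2).Valid())   // true: integral orders are valid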

func FrobeniusNorm

func FrobeniusNorm() NormOrder

func InfNorm

func InfNorm() NormOrder

func NegInfNorm

func NegInfNorm() NormOrder

func Norm

func Norm(ord int) NormOrder

func NuclearNorm

func NuclearNorm() NormOrder

func UnorderedNorm

func UnorderedNorm() NormOrder

func (NormOrder) IsFrobenius

func (n NormOrder) IsFrobenius() bool

IsFrobenius returns true if the NormOrder is a Frobenius norm

func (NormOrder) IsInf

func (n NormOrder) IsInf(sign int) bool

func (NormOrder) IsNuclear

func (n NormOrder) IsNuclear() bool

IsNuclear returns true if the NormOrder is a nuclear norm

func (NormOrder) IsUnordered

func (n NormOrder) IsUnordered() bool

IsUnordered returns true if the NormOrder is not an ordered norm

func (NormOrder) String

func (n NormOrder) String() string

func (NormOrder) Valid

func (n NormOrder) Valid() bool

Valid() is a helper method that determines if the norm order is valid. A valid norm order is one where the fraction component is 0

type Oner

type Oner interface {
	One()
}

An Oner is any type that can set itself to the equivalent of one. It's used to implement the arrays

type OpOpt

type OpOpt struct {
	// contains filtered or unexported fields
}

OpOpt are the options used to call ops

func ParseFuncOpts

func ParseFuncOpts(opts ...FuncOpt) *OpOpt

ParseFuncOpts parses a list of FuncOpt into a single unified method call structure.
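
A rough sketch of how an engine implementation might consume the parsed options (the reuse tensor here is illustrative):

reuse := New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
fo := ParseFuncOpts(WithReuse(reuse), UseUnsafe())
fmt.Println(fo.Reuse() == reuse) // true: the reuse tensor is recovered from the options
fmt.Println(fo.Safe())           // false, because UseUnsafe() was passed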

func (*OpOpt) As

func (fo *OpOpt) As() Dtype

As returns the dtype of the return value of the method call. For example:

a.Lt(b, As(Bool))

indicates that the result of the `Lt()` should be a Tensor of Bool.

Another example:

a.Add(b, As(Int))

indicates that the result of `Add()` should be converted to a Tensor of Int. Note that this function is not yet supported in most operations.

func (*OpOpt) Incr

func (fo *OpOpt) Incr() Tensor

Incr returns the tensor to be incremented in the call. Can be nil.

func (*OpOpt) IncrReuse

func (fo *OpOpt) IncrReuse() (Tensor, bool)

IncrReuse returns whether a reuse tensor is to be used as the incr Tensor

func (*OpOpt) Reuse

func (fo *OpOpt) Reuse() Tensor

Reuse returns the tensor to be reused in the call. Can be nil.

func (*OpOpt) Safe

func (fo *OpOpt) Safe() bool

Safe signals if the op is to be done safely

func (*OpOpt) Same

func (fo *OpOpt) Same() bool

Same signals if the op is to return the same type as its inputs

type OptimizedReducer

type OptimizedReducer interface {
	OptimizedReduce(a Tensor, axis int, firstFn, lastFn, defaultFn, defaultValue interface{}, opts ...FuncOpt) (Tensor, error)
}

OptimizedReducer is any engine that can perform a reduction function with optimizations for the first dimension, last dimension and dimensions in between.

type OuterProder

type OuterProder interface {
	Outer(a, b, preallocated Tensor) error
}

OuterProder is any engine that can perform outer product (Kronecker) multiplication

type Power

type Power interface {
	// Pow performs a ^ b
	Pow(a, b Tensor, opts ...FuncOpt) (Tensor, error)

	// PowScalar exponentiates a scalar from/to the tensor. leftTensor indicates if the tensor is the left operand.
	// Whether or not the input tensor is clobbered is left to the implementation
	PowScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

Power is any engine that can perform elementwise Pow()

type Proder

type Proder interface {
	Prod(a Tensor, along ...int) (Tensor, error)
}

Proder is any engine that can perform product along an axis of a Tensor.

type Reducer

type Reducer interface {
	Reduce(fn interface{}, a Tensor, axis int, defaultValue interface{}, opts ...FuncOpt) (Tensor, error)
}

Reducer is any engine that can perform a reduction function.

type Repeater

type Repeater interface {
	Repeat(t Tensor, axis int, repeats ...int) (Tensor, error)
	RepeatReuse(t Tensor, reuse Tensor, axis int, repeats ...int) (Tensor, error)
}

Repeater is any engine that can repeat values along the given axis.

type SVDer

type SVDer interface {
	SVD(a Tensor, uv, full bool) (s, u, v Tensor, err error)
}

SVDer is any engine that can perform SVD

type ScalarRep

type ScalarRep interface {
	IsScalar() bool
	ScalarValue() interface{}
}

ScalarRep is any Tensor that can represent a scalar

type Shape

type Shape []int

Shape represents the dimensions of a Tensor. A (2,3) matrix has a shape of (2,3) - 2 rows, 3 columns. Likewise, a shape of (2,3,4) means a Tensor has 3 dimensions: 2 layers, 3 rows, 4 columns.

Vectors are of particular note. This package defines a shape of (x, 1) as a column vector and a (1, x) as a row vector. Row vectors and column vectors are matrices as well. It is important to note that row and column vectors and vanilla vectors are comparable under some circumstances
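
A few quick illustrations using the methods documented below:

s := Shape{2, 3, 4}
fmt.Println(s.Dims())               // 3
fmt.Println(s.TotalSize())          // 24
fmt.Println(Shape{1, 5}.IsRowVec()) // true
fmt.Println(Shape{5, 1}.IsColVec()) // true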

func ScalarShape

func ScalarShape() Shape

ScalarShape represents a scalar. It has no dimensions, no sizes

func (Shape) CalcStrides

func (s Shape) CalcStrides() []int

CalcStrides calculates the default strides for a shape

func (Shape) CalcStridesColMajor

func (s Shape) CalcStridesColMajor() []int

CalcStridesColMajor is like CalcStrides, but assumes a col major layout

func (Shape) CalcStridesWithMask

func (s Shape) CalcStridesWithMask(mask []bool) []int

CalcStridesWithMask is similar to CalcStrides, except that it has an argument, masks. It is used to mask out given dimensions during calculation of stride

func (Shape) Clone

func (s Shape) Clone() Shape

Clone clones a shape.

func (Shape) Concat

func (s Shape) Concat(axis int, ss ...Shape) (newShape Shape, err error)

Concat returns the expected new shape given the concatenation parameters

func (Shape) DimSize

func (s Shape) DimSize(d int) (size int, err error)

DimSize returns the size of the dimension wanted.

This method implements the DimSizer interface in Gorgonia.

func (Shape) Dims

func (s Shape) Dims() int

Dims returns the number of dimensions in the shape

func (Shape) Eq

func (s Shape) Eq(other Shape) bool

Eq indicates if a shape is equal with another. There is a soft concept of equality when it comes to vectors.

If s is a column vector and other is a vanilla vector, they're considered equal if the size of the column dimension is the same as the vector size; if s is a row vector and other is a vanilla vector, they're considered equal if the size of the row dimension is the same as the vector size
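
A brief sketch of the soft equality described above:

col := Shape{3, 1} // column vector
vec := Shape{3}    // vanilla vector
fmt.Println(col.Eq(vec))                 // true: the column dimension matches the vector size
fmt.Println(Shape{2, 3}.Eq(Shape{3, 2})) // false: same total size, different dimensions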

func (Shape) Format

func (s Shape) Format(st fmt.State, r rune)

Format implements fmt.Formatter, and formats a shape nicely

func (Shape) IsColVec

func (s Shape) IsColVec() bool

IsColVec returns true when the access pattern has the shape (x, 1)

func (Shape) IsMatrix

func (s Shape) IsMatrix() bool

IsMatrix returns true if it's a matrix. This is mostly a convenience method. RowVec and ColVecs are also considered matrices

func (Shape) IsRowVec

func (s Shape) IsRowVec() bool

IsRowVec returns true when the access pattern has the shape (1, x)

func (Shape) IsScalar

func (s Shape) IsScalar() bool

IsScalar returns true if the access pattern indicates it's a scalar value

func (Shape) IsScalarEquiv

func (s Shape) IsScalarEquiv() bool

IsScalarEquiv returns true if the access pattern indicates it's a scalar-like value

func (Shape) IsVector

func (s Shape) IsVector() bool

IsVector returns whether the access pattern falls into one of three possible definitions of vectors:

vanilla vector (not a row or a col)
column vector
row vector

func (Shape) IsVectorLike added in v0.9.12

func (s Shape) IsVectorLike() bool

IsVectorLike returns true when the shape looks like a vector e.g. a number that is surrounded by 1s:

(1, 1, ... 1, 10, 1, 1... 1)

func (Shape) Repeat

func (s Shape) Repeat(axis int, repeats ...int) (newShape Shape, finalRepeats []int, size int, err error)

Repeat returns the expected new shape given the repetition parameters.

func (Shape) S

func (s Shape) S(slices ...Slice) (retVal Shape, err error)

S gives the new shape after a shape has been sliced. It's repeated from the AP S() method mainly because there are other functions in Gorgonia that use only the shape

func (Shape) TotalSize

func (s Shape) TotalSize() int

TotalSize returns the number of elements expected in a Tensor of a certain shape

type Signer

type Signer interface {
	Sign(a Tensor, opts ...FuncOpt) (Tensor, error)
}

Signer is any engine that can perform a sign function on the values of a Tensor.

type Slice

type Slice interface {
	Start() int
	End() int
	Step() int
}

A Slice represents a slicing operation for a Tensor.

func S added in v0.9.22

func S(start int, opt ...int) Slice

S creates a Slice. end is optional; it should be passed in as the first of the optional parameters. step is also optional; it should be passed in as the second of the optional parameters.

The default end is start+1. The default step is 1, unless end == start+1, in which case it defaults to 0
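
A short sketch of the defaults in use, together with the Slice method documented earlier:

T := New(WithBacking(Range(Float64, 0, 9)), WithShape(3, 3))
row, _ := T.Slice(S(1))               // roughly T[1]: end defaults to 2, step defaults to 0
sub, _ := T.Slice(S(0, 2), S(0, 2))   // roughly T[0:2, 0:2]
fmt.Println(row.Shape(), sub.Shape()) // (3) (2, 2)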

type Slicer

type Slicer interface {
	Slice(...Slice) (View, error)
}

Slicer is any tensor that can slice

type SoftMaxer added in v0.9.22

type SoftMaxer interface {
	LogSoftMax(x Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
	LogSoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

	SoftMax(x Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
	SoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)
}

type Sparse

type Sparse interface {
	Tensor
	Densor
	NonZeroes() int // NonZeroes returns the number of nonzero values
}

Sparse is a sparse tensor.

type SparseTensor

type SparseTensor interface {
	Sparse
	AsCSC()
	AsCSR()
	Indices() []int
	Indptr() []int
}

type Sqrter

type Sqrter interface {
	Sqrt(a Tensor, opt ...FuncOpt) (Tensor, error)
}

Sqrter is any engine that can perform square root on the values in a Tensor.

type Squarer

type Squarer interface {
	Square(a Tensor, opts ...FuncOpt) (Tensor, error)
}

Squarer is any engine that can square the values elementwise in a Tensor.

type Stacker

type Stacker interface {
	Stack(t Tensor, axis int, others ...Tensor) (Tensor, error)
}

Stacker is any engine that can stack multiple Tensors along an axis

type StdEng

type StdEng struct {
	execution.E
}

StdEng is the default execution engine that comes with the tensors. To use other execution engines, use the WithEngine construction option.
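
A minimal sketch of passing a non-default engine at construction time (Float32Engine is documented above; any Engine implementation can be supplied the same way):

T := New(WithEngine(Float32Engine{}), WithShape(2, 2), WithBacking([]float32{1, 2, 3, 4}))
fmt.Println(T.Dtype()) // float32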

func (StdEng) Abs

func (e StdEng) Abs(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Accessible

func (e StdEng) Accessible(mem Memory) (Memory, error)

func (StdEng) Add

func (e StdEng) Add(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Add performs a + b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (StdEng) AddScalar

func (e StdEng) AddScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

AddScalar performs t + s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (StdEng) Alloc

func (e StdEng) Alloc(size int64) (Memory, error)

func (StdEng) AllocAccessible

func (e StdEng) AllocAccessible() bool

func (StdEng) Argmax

func (e StdEng) Argmax(t Tensor, axis int) (retVal Tensor, err error)

func (StdEng) Argmin

func (e StdEng) Argmin(t Tensor, axis int) (retVal Tensor, err error)

func (StdEng) Cbrt

func (e StdEng) Cbrt(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Clamp

func (e StdEng) Clamp(a Tensor, min, max interface{}, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Concat

func (e StdEng) Concat(t Tensor, axis int, others ...Tensor) (retVal Tensor, err error)

Concat concatenates the tensors along the given axis

func (StdEng) Cube

func (e StdEng) Cube(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Diag

func (e StdEng) Diag(t Tensor) (retVal Tensor, err error)

Diag returns a tensor that contains only the diagonal values of the input

func (StdEng) Div

func (e StdEng) Div(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Div performs a ÷ b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (StdEng) DivScalar

func (e StdEng) DivScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

DivScalar performs t ÷ s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (StdEng) Dot

func (e StdEng) Dot(x, y Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) ElEq

func (e StdEng) ElEq(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

ElEq performs a == b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

func (StdEng) ElNe

func (e StdEng) ElNe(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

ElNe performs a ≠ b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

func (StdEng) EqScalar

func (e StdEng) EqScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Exp

func (e StdEng) Exp(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) FMA

func (e StdEng) FMA(a, x, y Tensor) (Tensor, error)

func (StdEng) FMAScalar

func (e StdEng) FMAScalar(a Tensor, x interface{}, y Tensor) (Tensor, error)

func (StdEng) Free

func (e StdEng) Free(mem Memory, size int64) error

func (StdEng) Gt

func (e StdEng) Gt(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Gt performs a > b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse have to have the same Dtype as the return value's Dtype.

func (StdEng) GtScalar

func (e StdEng) GtScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

GtScalar performs t > s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse must have the same Dtype as the return value's Dtype.

func (StdEng) Gte

func (e StdEng) Gte(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Gte performs a ≥ b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse must have the same Dtype as the return value's Dtype.

func (StdEng) GteScalar

func (e StdEng) GteScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

GteScalar performs t ≥ s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse must have the same Dtype as the return value's Dtype.

func (StdEng) Inner

func (e StdEng) Inner(a, b Tensor) (retVal interface{}, err error)

Inner is a thin layer over BLAS's D/Sdot. It returns a scalar value, wrapped in an interface{}, which is not quite nice.

func (StdEng) Inv

func (e StdEng) Inv(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) InvSqrt

func (e StdEng) InvSqrt(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Log

func (e StdEng) Log(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Log10

func (e StdEng) Log10(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Log2

func (e StdEng) Log2(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) LogSoftMax added in v0.9.22

func (e StdEng) LogSoftMax(x Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

LogSoftMax performs softmax but in log space. This provides some amount of numerical stabilization. Conceptually it is the same as performing a logarithm after applying the softmax function. Currently it expects the tensor to be a Dense tensor. Please make a pull request to support sparse tensors.
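As a rough sketch of the relationship described above, the package-level LogSoftMax, SoftMax and Log functions (documented further down this page) can be compared directly. The snippet below follows the unqualified style of the other examples on this page; the two results are expected to agree only up to floating-point error, and the stability claim in the comment paraphrases the doc above rather than a measured result.

x := New(WithShape(2, 3), WithBacking([]float64{1, 2, 3, 4, 5, 6}))

// Log-softmax along axis 1, computed directly.
lsm, _ := LogSoftMax(x, 1)

// The conceptually equivalent two-step version: softmax first, then log.
// The fused LogSoftMax is the route intended to be more numerically
// stable for inputs with large magnitudes.
sm, _ := SoftMax(x, 1)
naive, _ := Log(sm)

// The two results should agree to within floating-point error.
fmt.Printf("LogSoftMax:\n%v\nLog(SoftMax):\n%v\n", lsm, naive)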

func (StdEng) LogSoftMaxB added in v0.9.22

func (e StdEng) LogSoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

LogSoftMaxB computes the gradient of the input `x`, given the `output = LogSoftmax(x)` and its associated gradient. Currently it expects the tensor to be a Dense tensor. Please make a pull request to support sparse tensors.

func (StdEng) Lt

func (e StdEng) Lt(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Lt performs a < b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse must have the same Dtype as the return value's Dtype.

func (StdEng) LtScalar

func (e StdEng) LtScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

LtScalar performs t < s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse must have the same Dtype as the return value's Dtype.

func (StdEng) Lte

func (e StdEng) Lte(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Lte performs a ≤ b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse must have the same Dtype as the return value's Dtype.

func (StdEng) LteScalar

func (e StdEng) LteScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

LteScalar performs t ≤ s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), AsSameType(), WithReuse(). UseUnsafe() will ensure that the same type is returned. Tensors used in WithReuse must have the same Dtype as the return value's Dtype.

func (StdEng) Map

func (e StdEng) Map(fn interface{}, a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) MatMul

func (e StdEng) MatMul(a, b, prealloc Tensor) (err error)

MatMul is a thin layer over DGEMM. DGEMM computes:

C = αA * B + βC

To prevent needless zeroing out of the slice, we just set β to 0.

func (StdEng) MatVecMul

func (e StdEng) MatVecMul(a, b, prealloc Tensor) (err error)

MatVecMul is a thin layer over BLAS' DGEMV. Because DGEMV computes:

y = αA * x + βy

we set β to 0 so that we don't have to manually zero out the reused/returned tensor's data.

func (StdEng) Max

func (e StdEng) Max(a Tensor, along ...int) (retVal Tensor, err error)

func (StdEng) MaxBetween added in v0.9.21

func (e StdEng) MaxBetween(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) MaxBetweenScalar added in v0.9.21

func (e StdEng) MaxBetweenScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Memclr

func (e StdEng) Memclr(mem Memory)

func (StdEng) Memcpy

func (e StdEng) Memcpy(dst, src Memory) error

func (StdEng) Memset

func (e StdEng) Memset(mem Memory, val interface{}) error

func (StdEng) Min

func (e StdEng) Min(a Tensor, along ...int) (retVal Tensor, err error)

func (StdEng) MinBetween added in v0.9.21

func (e StdEng) MinBetween(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) MinBetweenScalar added in v0.9.21

func (e StdEng) MinBetweenScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Mod

func (e StdEng) Mod(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Mod performs a % b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (StdEng) ModScalar

func (e StdEng) ModScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

ModScalar performs t % s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (StdEng) Mul

func (e StdEng) Mul(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Mul performs a × b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (StdEng) MulScalar

func (e StdEng) MulScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

MulScalar performs t × s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (StdEng) NeScalar

func (e StdEng) NeScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Neg

func (e StdEng) Neg(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) OptimizedReduce

func (e StdEng) OptimizedReduce(a Tensor, axis int, firstFn, lastFn, defaultFn, defaultValue interface{}, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Outer

func (e StdEng) Outer(a, b, prealloc Tensor) (err error)

Outer is a thin wrapper over S/Dger

func (StdEng) Pow

func (e StdEng) Pow(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Pow performs a ^ b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (StdEng) PowScalar

func (e StdEng) PowScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

PowScalar performs t ^ s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (StdEng) Reduce

func (e StdEng) Reduce(fn interface{}, a Tensor, axis int, defaultValue interface{}, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Repeat

func (e StdEng) Repeat(t Tensor, axis int, repeats ...int) (Tensor, error)

Repeat repeats the tensor along the given axis for the given number of repeats.

func (StdEng) RepeatReuse

func (e StdEng) RepeatReuse(t Tensor, reuse Tensor, axis int, repeats ...int) (Tensor, error)

RepeatReuse is like Repeat, but with a provided reuse Tensor. The reuse tensor must be of the same type as the input t.

func (StdEng) SVD

func (e StdEng) SVD(a Tensor, uv, full bool) (s, u, v Tensor, err error)

TODO: make it take DenseTensor

func (StdEng) SelectByIndices added in v0.9.15

func (e StdEng) SelectByIndices(a, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

SelectByIndices selects values from `a` at the indices given in the `indices` tensor.

Currently SelectByIndices only supports Dense tensors that do not require the use of iterators. Please make a pull request to support tensors that require the use of an iterator to traverse data.

func (StdEng) SelectByIndicesB added in v0.9.15

func (e StdEng) SelectByIndicesB(input, outGrad, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

SelectByIndicesB computes the gradient of the result of `SelectByIndices`.

Currently SelectByIndicesB only supports Dense tensors that do not require the use of iterators. Please make a pull request to support tensors that require the use of an iterator to traverse data.

func (StdEng) Sign

func (e StdEng) Sign(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) SoftMax added in v0.9.22

func (e StdEng) SoftMax(x Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

SoftMax performs the softmax operation on the given tensor. Currently it expects the tensor to be a Dense tensor. Please make a pull request to support sparse tensors.

The softmax function is defined as:

σ(x)_i = exp(x_i) / Σ_j exp(x_j)
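To make the definition concrete, here is a small hand-worked sketch using the package-level SoftMax function documented later on this page. The values in the comments simply follow the formula above and are approximate; they are not verified program output.

x := New(WithBacking([]float64{1, 2, 3}))
sm, _ := SoftMax(x, 0)
// By the definition: exp(1), exp(2), exp(3) ≈ 2.718, 7.389, 20.086,
// whose sum is ≈ 30.193, so the result is approximately
// [0.090  0.245  0.665] and its entries sum to 1.
fmt.Println(sm)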

func (StdEng) SoftMaxB added in v0.9.22

func (e StdEng) SoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

SoftMaxB computes the gradient of the input `x`, given the `output = SoftMax(x)` and its associated gradient. Currently it expects the tensor to be a Dense tensor. Please make a pull request to support sparse tensors.

func (StdEng) Sqrt

func (e StdEng) Sqrt(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Square

func (e StdEng) Square(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) StackDense

func (e StdEng) StackDense(t DenseTensor, axis int, others ...DenseTensor) (retVal DenseTensor, err error)

func (StdEng) Sub

func (e StdEng) Sub(a Tensor, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Sub performs a - b elementwise. Both a and b must have the same shape. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (StdEng) SubScalar

func (e StdEng) SubScalar(t Tensor, s interface{}, leftTensor bool, opts ...FuncOpt) (retVal Tensor, err error)

SubScalar performs t - s elementwise. The leftTensor parameter indicates if the tensor is the left operand. Only scalar types are accepted in s. Acceptable FuncOpts are: UseUnsafe(), WithReuse(T), WithIncr(T)

func (StdEng) Sum

func (e StdEng) Sum(a Tensor, along ...int) (retVal Tensor, err error)

func (StdEng) Tanh

func (e StdEng) Tanh(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func (StdEng) Trace

func (e StdEng) Trace(t Tensor) (retVal interface{}, err error)

Trace returns the trace of a matrix (i.e. the sum of the diagonal elements). If the Tensor provided is not a matrix, it will return an error

func (StdEng) Transpose

func (e StdEng) Transpose(a Tensor, expStrides []int) error

func (StdEng) WorksWith

func (e StdEng) WorksWith(order DataOrder) bool

type Suber

type Suber interface {
	// Sub performs a - b
	Sub(a, b Tensor, opts ...FuncOpt) (Tensor, error)

	// SubScalar subtracts a scalar from/to the tensor. leftTensor indicates if the tensor is the left operand.
	// Whether or not the input tensor is clobbered is left to the implementation
	SubScalar(a Tensor, b interface{}, leftTensor bool, opts ...FuncOpt) (Tensor, error)
}

Suber is any engine that can perform elementwise subtraction.

type Sumer

type Sumer interface {
	Sum(a Tensor, along ...int) (Tensor, error)
}

Sumer is any engine that can perform summation along an axis of a Tensor.

type Tanher

type Tanher interface {
	Tanh(a Tensor, opts ...FuncOpt) (Tensor, error)
}

Tanher is any engine that can perform elementwise Tanh on the values in a Tensor.

type Tensor

type Tensor interface {
	// info about the ndarray
	Shape() Shape
	Strides() []int
	Dtype() Dtype
	Dims() int
	Size() int
	DataSize() int

	// Data access related
	RequiresIterator() bool
	Iterator() Iterator
	DataOrder() DataOrder

	// ops
	Slicer
	At(...int) (interface{}, error)
	SetAt(v interface{}, coord ...int) error
	Reshape(...int) error
	T(axes ...int) error
	UT()
	Transpose() error // Transpose actually moves the data
	Apply(fn interface{}, opts ...FuncOpt) (Tensor, error)

	// data related interface
	Zeroer
	MemSetter
	Dataer
	Eq
	Cloner

	// type overloading methods
	IsScalar() bool
	ScalarValue() interface{}

	// engine/memory related stuff
	// all Tensors should be able to be expressed of as a slab of memory
	// Note: the size of each element can be acquired by T.Dtype().Size()
	Memory                      // Tensors all implement Memory
	Engine() Engine             // Engine can be nil
	IsNativelyAccessible() bool // Can Go access the memory
	IsManuallyManaged() bool    // Must Go manage the memory

	// formatters
	fmt.Formatter
	fmt.Stringer

	// all Tensors are serializable to these formats
	WriteNpy(io.Writer) error
	ReadNpy(io.Reader) error
	gob.GobEncoder
	gob.GobDecoder
	// contains filtered or unexported methods
}

Tensor represents a variety of n-dimensional arrays. The most commonly used tensor is the Dense tensor. It can be used to represent a vector, matrix, 3D matrix and n-dimensional tensors.
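Because the interface contains unexported methods, user code typically consumes Tensor rather than implements it. The hypothetical helper below (describe is not part of the package) is a minimal sketch of writing engine- and dtype-agnostic code against the interface, using only methods listed above.

// describe prints some metadata about any Tensor, plus its first element
// when the tensor is a matrix.
func describe(t Tensor) {
	fmt.Printf("dtype=%v dims=%d shape=%v size=%d\n",
		t.Dtype(), t.Dims(), t.Shape(), t.Size())

	// At takes one coordinate per dimension and returns an error rather
	// than panicking on out-of-range access.
	if t.Dims() == 2 {
		if v, err := t.At(0, 0); err == nil {
			fmt.Printf("t[0,0] = %v\n", v)
		}
	}
}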

func Abs

func Abs(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func Add

func Add(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

Add performs elementwise addition on the Tensor(s). These operations are supported:

Add(*Dense, scalar)
Add(scalar, *Dense)
Add(*Dense, *Dense)

If the Unsafe flag is passed in, the data of the first tensor will be overwritten
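A minimal sketch of the three supported forms and of the Unsafe flag, written in the unqualified style of the examples on this page; the values in the comments are what the arithmetic implies, not verified output.

a := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
b := New(WithShape(2, 2), WithBacking([]float64{10, 20, 30, 40}))

sum, _ := Add(a, b)      // Add(*Dense, *Dense): a new tensor holding [11 22 33 44]
bumped, _ := Add(1.0, a) // Add(scalar, *Dense): a new tensor holding [2 3 4 5]

// With UseUnsafe(), the data of the first tensor (a) is overwritten
// with the result instead of allocating a new tensor.
unsafeSum, _ := Add(a, b, UseUnsafe())

fmt.Println(sum, bumped, unsafeSum, a)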

func Argmax

func Argmax(t Tensor, axis int) (retVal Tensor, err error)

Argmax finds the index of the max value along the axis provided

Example
T := New(WithBacking([]float64{0, 100, 200, 3}), WithShape(2, 2))
fmt.Printf("T:\n%v\n", T)

// argmax along the x-axis
am, _ := Argmax(T, 0)
fmt.Printf("Argmax: %v\n", am)
fmt.Printf("Argmax is %T of %v", am, am.Dtype())
Output:

T:
⎡  0  100⎤
⎣200    3⎦

Argmax: [1  0]
Argmax is *tensor.Dense of int
Example (Sliced)
T := New(WithBacking([]float64{0, 100, 200, 3}), WithShape(2, 2))
fmt.Printf("T:\n%v\n", T)

// slice  creates a view
V, _ := T.Slice(nil, S(1))

// argmax along the x-axis
am, _ := Argmax(V, 0)
fmt.Printf("Argmax: %v\n", am)
fmt.Printf("Argmax is %T of %v", am, am.Dtype())
Output:

T:
⎡  0  100⎤
⎣200    3⎦

Argmax: 0
Argmax is *tensor.Dense of int

func Argmin

func Argmin(t Tensor, axis int) (retVal Tensor, err error)

Argmin finds the index of the min value along the axis provided

Example
T := New(WithBacking([]float64{0, 100, 200, 3}), WithShape(2, 2))
fmt.Printf("T:\n%v\n", T)

// argmin along the x-axis
am, _ := Argmin(T, 0)
fmt.Printf("Argmin: %v\n", am)
fmt.Printf("Argmin is %T of %v", am, am.Dtype())
Output:

T:
⎡  0  100⎤
⎣200    3⎦

Argmin: [0  1]
Argmin is *tensor.Dense of int

func ByIndices added in v0.9.15

func ByIndices(a, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

ByIndices allows for selection of values of `a` by the indices listed in the `indices` tensor. The `indices` tensor has to be a vector-like tensor of ints.

Example
a := New(WithShape(2, 2), WithBacking([]float64{
	100, 200,
	300, 400,
}))
indices := New(WithBacking([]int{1, 1, 1, 0, 1}))
b, err := ByIndices(a, indices, 0) // we select rows 1, 1, 1, 0, 1
if err != nil {
	fmt.Println(err)
	return
}

fmt.Printf("a:\n%v\nindices: %v\nb:\n%v\n", a, indices, b)
Output:

a:
⎡100  200⎤
⎣300  400⎦

indices: [1  1  1  0  1]
b:
⎡300  400⎤
⎢300  400⎥
⎢300  400⎥
⎢100  200⎥
⎣300  400⎦

func ByIndicesB added in v0.9.15

func ByIndicesB(a, b, indices Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

ByIndicesB is the backpropagation of ByIndices.

Example
a := New(WithShape(2, 2), WithBacking([]float64{
	100, 200,
	300, 400,
}))
indices := New(WithBacking([]int{1, 1, 1, 0, 1}))
b, err := ByIndices(a, indices, 0) // we select rows 1, 1, 1, 0, 1
if err != nil {
	fmt.Println(err)
	return
}

outGrad := b.Clone().(*Dense)
outGrad.Memset(1.0)

grad, err := ByIndicesB(a, outGrad, indices, 0)
if err != nil {
	fmt.Println(err)
	return
}

fmt.Printf("a:\n%v\nindices: %v\nb:\n%v\ngrad:\n%v", a, indices, b, grad)
Output:

a:
⎡100  200⎤
⎣300  400⎦

indices: [1  1  1  0  1]
b:
⎡300  400⎤
⎢300  400⎥
⎢300  400⎥
⎢100  200⎥
⎣300  400⎦

grad:
⎡1  1⎤
⎣4  4⎦

func Cbrt

func Cbrt(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func Clamp

func Clamp(a Tensor, min interface{}, max interface{}, opts ...FuncOpt) (retVal Tensor, err error)

func Concat

func Concat(axis int, t Tensor, others ...Tensor) (retVal Tensor, err error)

Concat concatenates a list of Tensors. At the moment the operation only supports Tensors of the same type (*Dense can only be concatenated with a bunch of *Dense, CSCs can only be concatenated with a bunch of CSC, etc)
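A minimal sketch of concatenating two matrices along each axis; the shapes in the comments follow the usual concatenation rules rather than verified output.

a := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
b := New(WithShape(2, 2), WithBacking([]float64{5, 6, 7, 8}))

rows, _ := Concat(0, a, b) // shape (4, 2): b's rows appended below a's
cols, _ := Concat(1, a, b) // shape (2, 4): b's columns appended to the right of a's

fmt.Printf("rows: %v\ncols: %v\n", rows.Shape(), cols.Shape())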

func Contract

func Contract(a, b Tensor, aAxes, bAxes []int) (retVal Tensor, err error)

Contract performs a contraction of given tensors along given axes
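Assuming the usual tensordot-style semantics (the listed axes of a are contracted against the listed axes of b), a sketch looks like the following; the (3, 5) shape in the comment is what those semantics imply, not verified output.

a := New(WithShape(3, 4), WithBacking(Range(Float64, 0, 12)))
b := New(WithShape(4, 5), WithBacking(Range(Float64, 0, 20)))

// Contract a's axis 1 against b's axis 0. With tensordot-style semantics
// this is equivalent to a matrix product, so the result should have
// shape (3, 5).
c, err := Contract(a, b, []int{1}, []int{0})
if err != nil {
	fmt.Println(err)
	return
}
fmt.Println(c.Shape())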

func Cube

func Cube(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func Diag

func Diag(t Tensor) (retVal Tensor, err error)

func Div

func Div(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

Div performs elementwise division on the Tensor(s). These operations are supported:

Div(*Dense, scalar)
Div(scalar, *Dense)
Div(*Dense, *Dense)

If the Unsafe flag is passed in, the data of the first tensor will be overwritten

func Dot

func Dot(x, y Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Dot is a highly opinionated API for performing dot product operations on two *Denses, a and b. It is opinionated chiefly in how it treats operations involving vectors. Vectors in this package come in two flavours - column or row vectors. Column vectors have shape (x, 1), while row vectors have shape (1, x).

As such, it is easy to assume that performing a linalg operation on vectors would follow the same rules (i.e. shapes have to be aligned for things to work). For the most part in this package, this is true. This function is one of the few notable exceptions.

Here I give three specific examples of how the expectations of vector operations will differ.

Given two vectors, a, b with shapes (4, 1) and (4, 1), Dot() will perform an inner product as if the shapes were (1, 4) and (4, 1). This will result in a scalar value
Given matrix A and vector b with shapes (2, 4) and (1, 4), Dot() will perform a matrix-vector multiplication as if the shapes were (2,4) and (4,1). This will result in a column vector with shape (2,1)
Given vector a and matrix B with shapes (3, 1) and (3, 2), Dot() will perform a matrix-vector multiplication as if it were Bᵀ * a

The main reason why this opinionated route was taken was due to the author's familiarity with NumPy, and general laziness in translating existing machine learning algorithms to fit the API of the package.
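The first two cases above, sketched in code; the values in the comments follow from ordinary linear algebra and are not verified output.

// Case 1: two column vectors of shape (4, 1) -> an inner product.
a := New(WithShape(4, 1), WithBacking([]float64{1, 2, 3, 4}))
b := New(WithShape(4, 1), WithBacking([]float64{1, 1, 1, 1}))
ip, _ := Dot(a, b)
fmt.Println(ip) // a scalar result: 1+2+3+4 = 10

// Case 2: a (2, 4) matrix and a (1, 4) row vector -> a matrix-vector
// product, treated as if the vector had shape (4, 1). The result is a
// column vector of shape (2, 1) holding [1 2].
A := New(WithShape(2, 4), WithBacking([]float64{
	1, 0, 0, 0,
	0, 1, 0, 0,
}))
row := New(WithShape(1, 4), WithBacking([]float64{1, 2, 3, 4}))
mv, _ := Dot(A, row)
fmt.Println(mv, mv.Shape())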

func ElEq

func ElEq(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

ElEq performs an elementwise equality comparison (a == b). a and b can either be float64 or *Dense. It returns the same Tensor type as its input.

If both operands are *Dense, shape is checked first. Even though the underlying data may have the same size (say (2,2) vs (4,1)), if they have different shapes, it will error out.
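A minimal sketch; the default boolean result and the effect of AsSameType() (listed under the StdEng comparison methods above) are the assumed behaviour, not verified output.

a := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
b := New(WithShape(2, 2), WithBacking([]float64{1, 0, 3, 0}))

eq, _ := ElEq(a, b)
fmt.Println(eq) // true where a and b agree, false elsewhere

// With AsSameType(), the result is expected to use the inputs' Dtype
// (1s and 0s as float64) rather than bools.
eqF, _ := ElEq(a, b, AsSameType())
fmt.Println(eqF)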

func ElNe

func ElNe(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

ElNe performs an elementwise inequality comparison (a != b). a and b can either be float64 or *Dense. It returns the same Tensor type as its input.

If both operands are *Dense, shape is checked first. Even though the underlying data may have the same size (say (2,2) vs (4,1)), if they have different shapes, it will error out.

func Exp

func Exp(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func FMA

func FMA(a Tensor, x interface{}, y Tensor) (retVal Tensor, err error)

FMA performs Y = A * X + Y.
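A minimal sketch of the fused multiply-add. x may be a Tensor or a scalar (it is an interface{} parameter), and the values in the comment simply follow the formula Y = A * X + Y.

a := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
x := New(WithShape(2, 2), WithBacking([]float64{10, 10, 10, 10}))
y := New(WithShape(2, 2), WithBacking([]float64{1, 1, 1, 1}))

// Y = A * X + Y elementwise, so the expected values are [11 21 31 41].
ret, err := FMA(a, x, y)
if err != nil {
	fmt.Println(err)
	return
}
fmt.Println(ret)

// x may also be a plain scalar, e.g. FMA(a, 2.0, y) for Y = 2A + Y.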

func Gt

func Gt(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

Gt performs an elementwise greater-than comparison (a > b). a and b can either be float64 or *Dense. It returns the same Tensor type as its input.

If both operands are *Dense, shape is checked first. Even though the underlying data may have the same size (say (2,2) vs (4,1)), if they have different shapes, it will error out.

func Gte

func Gte(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

Gte performs an elementwise greater-than-or-equal comparison (a >= b). a and b can either be float64 or *Dense. It returns the same Tensor type as its input.

If both operands are *Dense, shape is checked first. Even though the underlying data may have the same size (say (2,2) vs (4,1)), if they have different shapes, it will error out.

func Inv

func Inv(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func InvSqrt

func InvSqrt(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func Log

func Log(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func Log10

func Log10(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func Log2

func Log2(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func LogSoftMax added in v0.9.22

func LogSoftMax(x Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

LogSoftMax applies log softmax to the given tensor.

func LogSoftMaxB added in v0.9.22

func LogSoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

LogSoftMaxB applies the backward (gradient) operation of log softmax.

func Lt

func Lt(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

Lt performs an elementwise less-than comparison (a < b). a and b can either be float64 or *Dense. It returns the same Tensor type as its input.

If both operands are *Dense, shape is checked first. Even though the underlying data may have the same size (say (2,2) vs (4,1)), if they have different shapes, it will error out.

func Lte

func Lte(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

Lte performs an elementwise less-than-or-equal comparison (a <= b). a and b can either be float64 or *Dense. It returns the same Tensor type as its input.

If both operands are *Dense, shape is checked first. Even though the underlying data may have the same size (say (2,2) vs (4,1)), if they have different shapes, it will error out.

func MatMul

func MatMul(a, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

MatMul performs matrix-matrix multiplication between two Tensors
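A minimal sketch; the values in the comment are the ordinary matrix product, not verified output.

A := New(WithShape(2, 3), WithBacking([]float64{
	1, 2, 3,
	4, 5, 6,
}))
B := New(WithShape(3, 2), WithBacking([]float64{
	1, 0,
	0, 1,
	1, 1,
}))

// (2, 3) × (3, 2) → (2, 2); the product is [[4 5] [10 11]].
C, _ := MatMul(A, B)
fmt.Println(C)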

func MatVecMul

func MatVecMul(a, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

MatVecMul performs matrix-vector multiplication between two Tensors. `a` is expected to be a matrix, and `b` is expected to be a vector
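A minimal sketch; the expected result is just the ordinary matrix-vector product.

A := New(WithShape(2, 3), WithBacking([]float64{
	1, 2, 3,
	4, 5, 6,
}))
v := New(WithBacking([]float64{1, 1, 1})) // a vector of shape (3)

// Each entry of the result is the dot product of a row of A with v,
// so the expected values are [6 15].
y, _ := MatVecMul(A, v)
fmt.Println(y)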

func Materialize

func Materialize(t Tensor) Tensor

Materialize takes a View and copies out the data into a new allocation.

func MaxBetween added in v0.9.21

func MaxBetween(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

func MinBetween added in v0.9.21

func MinBetween(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

func Mod

func Mod(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

Mod performs elementwise modulo on the Tensor(s). These operations are supported:

Mod(*Dense, scalar)
Mod(scalar, *Dense)
Mod(*Dense, *Dense)

If the Unsafe flag is passed in, the data of the first tensor will be overwritten

func Mul

func Mul(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

Mul performs elementwise multiplication on the Tensor(s). These operations are supported:

Mul(*Dense, scalar)
Mul(scalar, *Dense)
Mul(*Dense, *Dense)

If the Unsafe flag is passed in, the data of the first tensor will be overwritten
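A minimal sketch of the WithReuse(T) and WithIncr(T) FuncOpts mentioned in the StdEng docs above, as applied to Mul. The behaviour described in the comments (write-into-reuse, accumulate-into-incr) is the assumed meaning of those options, and the values are what the arithmetic implies.

a := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
b := New(WithShape(2, 2), WithBacking([]float64{2, 2, 2, 2}))

// WithReuse(T): the result is written into the preallocated reuse tensor
// instead of a fresh allocation. Expected values: [2 4 6 8].
reuse := New(Of(Float64), WithShape(2, 2))
prod, _ := Mul(a, b, WithReuse(reuse))
fmt.Println(prod)

// WithIncr(T): the result is accumulated into incr, i.e. incr += a × b.
// Expected values: [102 104 106 108].
incr := New(WithShape(2, 2), WithBacking([]float64{100, 100, 100, 100}))
accum, _ := Mul(a, b, WithIncr(incr))
fmt.Println(accum)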

func Neg

func Neg(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func Outer

func Outer(a, b Tensor, opts ...FuncOpt) (retVal Tensor, err error)

Outer performs the outer product of two vector Tensors. Both arguments to the function are expected to be vectors.
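A minimal sketch; the outer product of a length-3 vector and a length-2 vector is a (3, 2) matrix with entries a[i]·b[j], so the comment shows the implied values rather than verified output.

a := New(WithBacking([]float64{1, 2, 3}))
b := New(WithBacking([]float64{10, 20}))

// Expected result (a (3, 2) matrix):
// [[10 20]
//  [20 40]
//  [30 60]]
op, _ := Outer(a, b)
fmt.Println(op, op.Shape())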

func Pow

func Pow(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

Pow performs elementwise exponentiation on the Tensor(s). These operations are supported:

Pow(*Dense, scalar)
Pow(scalar, *Dense)
Pow(*Dense, *Dense)

If the Unsafe flag is passed in, the data of the first tensor will be overwritten

func Repeat

func Repeat(t Tensor, axis int, repeats ...int) (retVal Tensor, err error)

Repeat repeats a Tensor along the given axis for the given number of repeats.

Example (UncommonUses)
T := New(WithBacking([]int{1, 2, 3, 4, 5, 6}), WithShape(2, 3))
fmt.Printf("T:\n%v", T)

fmt.Println("Axis 0 has 2 elements. So we will need to write the number of times each element is to be repeated")
fmt.Println("Here, Repeat(T, 0, 3, 2) results in this:")
T1, err := Repeat(T, 0, 3, 2)
if err != nil {
	fmt.Printf("Err %v", err)
}
fmt.Printf("%v", T1)
fmt.Println("Observe the 0th element ([1 2 3]) has been repeated 3 times, and the 1st element ([4 5 6]) has been repeated twice")
fmt.Println("")

fmt.Println("We can also repeat on Axis 1. Now along Axis 1 there are 3 elements: ([1 4], [2 5], [3 6])")
fmt.Println("So we have to specify how many times to repeat each element.")
fmt.Println("Repeat(T, 1, 2, 3, 2) yields the following result:")
T1, err = Repeat(T, 1, 2, 3, 2)
if err != nil {
	fmt.Printf("Err %v", err)
}
fmt.Printf("%v", T1)
fmt.Println("Once again, observe that the 1st element ([2 5]) has been repeated 3 times, while the rest have been repeated twice")
/*
   // TODO break this out to another example
   	T1, err = Repeat(T, AllAxes, 2, 3, 2, 2, 2, 2)
   	if err != nil {
   		fmt.Printf("Err %v", err)
   	}
   	fmt.Printf("%#v", T1)
*/
Output:

T:
⎡1  2  3⎤
⎣4  5  6⎦
Axis 0 has 2 elements. So we will need to write the number of times each element is to be repeated
Here, Repeat(T, 0, 3, 2) results in this:
⎡1  2  3⎤
⎢1  2  3⎥
⎢1  2  3⎥
⎢4  5  6⎥
⎣4  5  6⎦
Observe the 0th element ([1 2 3]) has been repeated 3 times, and the 1st element ([4 5 6]) has been repeated twice

We can also repeat on Axis 1. Now along Axis 1 there are 3 elements: ([1 4], [2 5], [3 6])
So we have to specify how many times to repeat each element.
Repeat(T, 1, 2, 3, 2) yields the following result:
⎡1  1  2  2  2  3  3⎤
⎣4  4  5  5  5  6  6⎦
Once again, observe that the 1st element ([2 5]) has been repeated 3 times, while the rest have been repeated twice

func RepeatReuse

func RepeatReuse(t, reuse Tensor, axis int, repeats ...int) (retval Tensor, err error)

RepeatReuse repeats a Tensor along the given axis for the given number of repeats, and puts the result in the provided reuse tensor. If the reuse tensor is not correctly sized, an error is returned, but the results will still be valid.

Example
var T, T1 *Dense
T = New(WithBacking([]float64{1, 2, 3, 4}), WithShape(1, 4))
T1 = New(Of(Float64), WithShape(3, 4))

var T2 Tensor
var err error
if T2, err = RepeatReuse(T, T1, 0, 3); err != nil {
	fmt.Printf("Err %v", err)
}
fmt.Printf("RepeatReuse(T, T1):\n%v", T2)
fmt.Printf("T1 == T2: %t\n", T1 == T2)

// But if your reuse is wrongly shaped, an error occurs
T1 = New(Of(Float64), WithShape(1, 4)) // too small
if _, err = RepeatReuse(T, T1, 0, 3); err != nil {
	fmt.Printf("Expected Error: %v\n", err)
}
Output:

RepeatReuse(T, T1):
⎡1  2  3  4⎤
⎢1  2  3  4⎥
⎣1  2  3  4⎦
T1 == T2: true
Expected Error: Reuse shape is (1, 4). Expected shape is (3, 4)

func Sign

func Sign(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func SoftMax added in v0.9.22

func SoftMax(x Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

SoftMax applies softmax to the given tensor.

func SoftMaxB added in v0.9.22

func SoftMaxB(output, grad Tensor, axis int, opts ...FuncOpt) (retVal Tensor, err error)

SoftMaxB applies the softmax backward operation.

func Sqrt

func Sqrt(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func Square

func Square(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func Stack

func Stack(axis int, t Tensor, others ...Tensor) (retVal Tensor, err error)

Stack stacks a list of other Tensors. At the moment the operation only supports Tensors of the same type. (*Dense can only be stacked with *Dense... etc)
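A minimal sketch, assuming that (unlike Concat) Stack introduces a new axis at the given position, so two (2, 2) matrices stacked along axis 0 become a (2, 2, 2) tensor.

a := New(WithShape(2, 2), WithBacking([]float64{1, 2, 3, 4}))
b := New(WithShape(2, 2), WithBacking([]float64{5, 6, 7, 8}))

s, err := Stack(0, a, b)
if err != nil {
	fmt.Println(err)
	return
}
fmt.Println(s.Shape()) // expected: (2, 2, 2)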

func Sub

func Sub(a, b interface{}, opts ...FuncOpt) (retVal Tensor, err error)

Sub performs elementwise subtraction on the Tensor(s). These operations are supported:

Sub(*Dense, scalar)
Sub(scalar, *Dense)
Sub(*Dense, *Dense)

If the Unsafe flag is passed in, the data of the first tensor will be overwritten

func Sum

func Sum(t Tensor, along ...int) (retVal Tensor, err error)

Sum sums a Tensor along the given axes

Example
T := New(WithBacking([]float64{0, 1, 2, 3}), WithShape(2, 2))
fmt.Printf("T:\n%v\n", T)

// sum along axis 0
summed, _ := Sum(T, 0)
fmt.Printf("Summed:\n%v\n", summed)

// to keep dims, simply reshape
summed.Reshape(1, 2)
fmt.Printf("Summed (Kept Dims - Shape: %v):\n%v\n\n", summed.Shape(), summed)

// summing along multiple axes
summed, _ = Sum(T, 1, 0)
fmt.Printf("Summed along (1, 0): %v", summed)
Output:

T:
⎡0  1⎤
⎣2  3⎦

Summed:
[2  4]
Summed (Kept Dims - Shape: (1, 2)):
R[2  4]

Summed along (1, 0): 6
Example (Sliced)
T := New(WithBacking([]float64{0, 1, 2, 3}), WithShape(2, 2))
fmt.Printf("T:\n%v\n", T)

V, _ := T.Slice(nil, S(1))
fmt.Printf("V:\n%v\n", V)

Σ, _ := Sum(V)
fmt.Printf("Σ: %v", Σ)
Output:

T:
⎡0  1⎤
⎣2  3⎦

V:
[1  3]
Σ: 4

func T

func T(t Tensor, axes ...int) (retVal Tensor, err error)

T safely transposes a Tensor. It returns a tensor that is not a view of the input tensor - rather, the data is all copied.

Example
// Usual example of 2D matrix being transposed:
M := New(WithBacking([]int{1, 2, 3, 4, 5, 6}), WithShape(2, 3))
M2, err := T(M)
if err != nil {
	fmt.Printf("Err: %v\n", err)
}
fmt.Printf("M:\n%v\nM2:\n%v\n", M, M2)

// T accepts optional parameters describing the permutation of axes.
// In a 2D case, there are only two options: (0, 1) or (1, 0).
// The latter is default if no parameters are passed in.
// The former is a no-op as rearranging a matrix so that the 0th axis becomes the 0th axis
// and the first axis becomes the first axis is not going to do anything.
//
// However, note that M3 is still a distinct tensor from M, even though its values are unchanged.
M3, err := T(M, 0, 1)
if err != nil {
	fmt.Printf("Err: %v\n", err)
}
fmt.Printf("M3:\n%v\nM == M3: %t", M3, M == M3)
Output:

M:
⎡1  2  3⎤
⎣4  5  6⎦

M2:
⎡1  4⎤
⎢2  5⎥
⎣3  6⎦

M3:
⎡1  2  3⎤
⎣4  5  6⎦

M == M3: false
Example (Scalarlike)
// Be aware when dealing with scalar-like tensors:
// calling T() on a scalar/scalar-like has no effect on the data,
// but the result is put into a new tensor
S := New(WithBacking([]float32{3.14}), WithShape())
S2, err := T(S)
if err != nil {
	fmt.Printf("Err %v", err)
}
fmt.Printf("S: %v S2 %v S == S2: %t\n", S, S2, S == S2)

// however do note that scalars and scalarlikes are not the same thing.
// for example, consider this:
_, err = T(S, 1, 0)
fmt.Printf("error when the axes are more than the shape's dims: %v\n", err)

// but if you have a tensor that is a scalar-like:
S.Reshape(1, 1)
S2, err = T(S, 1, 0)
if err != nil {
	fmt.Printf("Err: %v\n", err)
}
fmt.Printf("S:\n%v\nS2:\n%v\nS == S2: %t\n", S, S2, S == S2)
Output:

S: 3.14 S2 3.14 S == S2: false
error when the axes are more than the shape's dims: Dimension mismatch. Expected 0, got 2
S:
[[3.14]]
S2:
[[3.14]]
S == S2: false

func Tanh

func Tanh(a Tensor, opts ...FuncOpt) (retVal Tensor, err error)

func Transpose

func Transpose(t Tensor, axes ...int) (retVal Tensor, err error)

Transpose performs transposition of a tensor according to its axes.

Example (Extension)
package main

import (
	"fmt"

	"gorgonia.org/tensor"
)

// LongStruct is a type that is an arbitrarily long struct
type LongStruct struct {
	a, b, c, d, e uint64
}

// Format implements fmt.Formatter for easier-to-read output of data
func (ls LongStruct) Format(s fmt.State, c rune) {
	fmt.Fprintf(s, "{a: %d, b: %d, c: %d, d: %d, e: %d}", ls.a, ls.b, ls.c, ls.d, ls.e)
}

func main() {
	// For documentation if you're reading this on godoc:
	//
	// type LongStruct struct {
	// 		a, b, c, d, e uint64
	// }

	T := tensor.New(tensor.WithShape(2, 2),
		tensor.WithBacking([]LongStruct{
			LongStruct{0, 0, 0, 0, 0},
			LongStruct{1, 1, 1, 1, 1},
			LongStruct{2, 2, 2, 2, 2},
			LongStruct{3, 3, 3, 3, 3},
		}),
	)

	fmt.Printf("Before:\n%v\n", T)
	retVal, _ := tensor.Transpose(T) // an alternative would be to use T.T(); T.Transpose()
	fmt.Printf("After:\n%v\n", retVal)

}
Output:

Before:
⎡{a: 0, b: 0, c: 0, d: 0, e: 0}  {a: 1, b: 1, c: 1, d: 1, e: 1}⎤
⎣{a: 2, b: 2, c: 2, d: 2, e: 2}  {a: 3, b: 3, c: 3, d: 3, e: 3}⎦

After:
⎡{a: 0, b: 0, c: 0, d: 0, e: 0}  {a: 2, b: 2, c: 2, d: 2, e: 2}⎤
⎣{a: 1, b: 1, c: 1, d: 1, e: 1}  {a: 3, b: 3, c: 3, d: 3, e: 3}⎦

type Tracer

type Tracer interface {
	Trace(a Tensor) (interface{}, error)
}

Tracer is any engine that can return the trace (aka the sum of the diagonal elements).

type Transposer

type Transposer interface {
	Transpose(t Tensor, expStrides []int) error
}

Transposer is any engine that can perform an unsafe transpose of a tensor.

type Triangle

type Triangle byte

Triangle is a flag representing the "triangle"ness of a matrix

const (
	NotTriangle Triangle = iota
	Upper
	Lower
	Symmetric
)

type View

type View interface {
	Tensor
	IsView() bool
	IsMaterializable() bool
	Materialize() Tensor
}

View is any Tensor that can provide a view on memory

Example
// Slicing creates a "view" on the original tensor
T := New(WithBacking(Range(Int, 0, 16)), WithShape(4, 4))
fmt.Printf("T:\n%v\n", T)
V, _ := T.Slice(makeRS(1, 3), makeRS(1, 3))
fmt.Printf("V:\n%v\n", V)

// Now we modify V's 0th value
V.(*Dense).Set(0, 1000)
fmt.Printf("V[0] = 1000:\n%v\n", V)
fmt.Printf("T is also mutated:\n%v\n", T)

// Now we materialize the views
fmt.Printf("V is Materializable: %v\n", V.IsMaterializable())
T2 := V.Materialize()
fmt.Printf("T2 == V:\n%v\n", T2)

// Once materialized, it is decoupled from the original tensor
T2.(*Dense).Set(0, 999)
fmt.Printf("T2 is mutated:\n%v\nBut T is not mutated:\n%v\nNeither is V:\n%v", T2, T, V)
Output:

T:
⎡ 0   1   2   3⎤
⎢ 4   5   6   7⎥
⎢ 8   9  10  11⎥
⎣12  13  14  15⎦

V:
⎡ 5   6⎤
⎣ 9  10⎦

V[0] = 1000:
⎡1000     6⎤
⎣   9    10⎦

T is also mutated:
⎡   0     1     2     3⎤
⎢   4  1000     6     7⎥
⎢   8     9    10    11⎥
⎣  12    13    14    15⎦

V is Materializable: true
T2 == V:
⎡1000     6⎤
⎣   9    10⎦

T2 is mutated:
⎡999    6⎤
⎣  9   10⎦

But T is not mutated:
⎡   0     1     2     3⎤
⎢   4  1000     6     7⎥
⎢   8     9    10    11⎥
⎣  12    13    14    15⎦

Neither is V:
⎡1000     6⎤
⎣   9    10⎦

func Narrow added in v0.9.22

func Narrow(t Tensor, dim, start, length int) (View, error)

Narrow narrows the tensor.
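A minimal sketch, assuming Narrow behaves like the narrow operation in other tensor libraries: it returns a view of `t` restricted along dimension `dim` to the `length` indices starting at `start`.

T := New(WithBacking(Range(Int, 0, 16)), WithShape(4, 4))

// A view of 2 rows starting at row 1 (rows 1 and 2), assuming the
// restriction covers [start, start+length). Expected shape: (2, 4).
V, err := Narrow(T, 0, 1, 2)
if err != nil {
	fmt.Println(err)
	return
}
fmt.Println(V.Shape())

// Being a View, V shares memory with T, so mutating V mutates T
// (see the View example above).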

type Zeroer

type Zeroer interface {
	Zero()
}

A Zeroer is any type that can set itself to the zeroth value. It's used to implement the arrays.

Notes

Bugs

  • reading CSV doesn't handle CSVs with different columns per row yet.

Directories

Path	Synopsis
internal
	serialization	package serialization provides the data structures for serialization
	serialization/pb	Package pb is a generated protocol buffer package.
native	package native is a utility package for gorgonia.org/tensor.
