Documentation ¶
Index ¶
- func Abs(x mat.Tensor) mat.Tensor
- func Add(x1 mat.Tensor, x2 mat.Tensor) mat.Tensor
- func AddScalar(x1, x2 mat.Tensor) mat.Tensor
- func Affine(b, w1, x1 mat.Tensor, wxPairs ...mat.Tensor) mat.Tensor
- func AppendRows(x mat.Tensor, vs ...mat.Tensor) mat.Tensor
- func At(x mat.Tensor, indices ...int) mat.Tensor
- func Backward(xs ...mat.Tensor) error
- func BiAffine(w, u, v, b, x1, x2 mat.Tensor) mat.Tensor
- func BiLinear(w, x1, x2 mat.Tensor) mat.Tensor
- func CELU(x, alpha mat.Tensor) mat.Tensor
- func ColView(x mat.Tensor, column int) mat.Tensor
- func ColViews(x mat.Tensor) []mat.Tensor
- func Concat(xs ...mat.Tensor) mat.Tensor
- func Copy(x mat.Tensor) mat.Tensor
- func Cos(x mat.Tensor) mat.Tensor
- func Div(x1, x2 mat.Tensor) mat.Tensor
- func DivScalar(x1, x2 mat.Tensor) mat.Tensor
- func Dot(x1, x2 mat.Tensor) mat.Tensor
- func Dropout(x mat.Tensor, p float64) mat.Tensor
- func DropoutFunc(p float64) func(x mat.Tensor) mat.Tensor
- func ELU(x, alpha mat.Tensor) mat.Tensor
- func Exp(x mat.Tensor) mat.Tensor
- func Flatten(x mat.Tensor) mat.Tensor
- func GELU(x mat.Tensor) mat.Tensor
- func HardSigmoid(x mat.Tensor) mat.Tensor
- func HardTanh(x mat.Tensor) mat.Tensor
- func LeakyReLU(x, alpha mat.Tensor) mat.Tensor
- func Log(x mat.Tensor) mat.Tensor
- func LogSoftmax(x mat.Tensor) mat.Tensor
- func LogSumExp(xs ...mat.Tensor) mat.Tensor
- func ManualSeed(seed uint64) *rand.LockedRand
- func Map(mapping func(mat.Tensor) mat.Tensor, xs []mat.Tensor) []mat.Tensor
- func Map2(mapping func(a mat.Tensor, b mat.Tensor) mat.Tensor, xs1 []mat.Tensor, ...) []mat.Tensor
- func Max(x1, x2 mat.Tensor) mat.Tensor
- func MaxPooling(x mat.Tensor, rows, columns int) mat.Tensor
- func Maximum(xs []mat.Tensor) mat.Tensor
- func Mean(xs []mat.Tensor) mat.Tensor
- func Min(x1, x2 mat.Tensor) mat.Tensor
- func Minimum(xs []mat.Tensor) mat.Tensor
- func Mish(x mat.Tensor) mat.Tensor
- func Mul(x1, x2 mat.Tensor) mat.Tensor
- func MulT(x1, x2 mat.Tensor) mat.Tensor
- func Neg(x mat.Tensor) mat.Tensor
- func Pad(xs []mat.Tensor, seqLen int, padding func(i int) mat.Tensor) []mat.Tensor
- func PositiveELU(x mat.Tensor) mat.Tensor
- func Pow(x mat.Tensor, power float64) mat.Tensor
- func Prod(x1, x2 mat.Tensor) mat.Tensor
- func ProdScalar(x1, x2 mat.Tensor) mat.Tensor
- func Rand() *rand.LockedRand
- func ReLU(x mat.Tensor) mat.Tensor
- func Reciprocal(x mat.Tensor) mat.Tensor
- func ReduceMax(x mat.Tensor) mat.Tensor
- func ReduceMean(x mat.Tensor) mat.Tensor
- func ReduceSum(x mat.Tensor) mat.Tensor
- func Reshape(x mat.Tensor, rows, columns int) mat.Tensor
- func ReverseSub(x1, x2 mat.Tensor) mat.Tensor
- func ReverseSubOne(x mat.Tensor) mat.Tensor
- func RotateR(x mat.Tensor, i int) mat.Tensor
- func RowView(x mat.Tensor, row int) mat.Tensor
- func RowViews(x mat.Tensor) []mat.Tensor
- func SELU(x, alpha mat.Tensor, scale mat.Tensor) mat.Tensor
- func ScalarMax(xs []mat.Tensor) mat.Tensor
- func Seed() *rand.LockedRand
- func SeparateMatrix(x mat.Tensor) [][]mat.Tensor
- func SeparateVec(x mat.Tensor) []mat.Tensor
- func SetForceSyncExecution(enable bool)
- func SiLU(x mat.Tensor) mat.Tensor
- func Sigmoid(x mat.Tensor) mat.Tensor
- func Sin(x mat.Tensor) mat.Tensor
- func Slice(x mat.Tensor, fromRow, fromCol, toRow, toCol int) mat.Tensor
- func SoftPlus(x, beta, threshold mat.Tensor) mat.Tensor
- func SoftShrink(x, lambda mat.Tensor) mat.Tensor
- func Softmax(x mat.Tensor) mat.Tensor
- func Softsign(x mat.Tensor) mat.Tensor
- func SparseMax(x mat.Tensor) mat.Tensor
- func SparseMaxLoss(x mat.Tensor) mat.Tensor
- func SplitVec(x mat.Tensor, chunks int) []mat.Tensor
- func Sqrt(x mat.Tensor) mat.Tensor
- func Square(x mat.Tensor) mat.Tensor
- func Stack(xs ...mat.Tensor) mat.Tensor
- func StopGrad(t mat.Tensor) mat.Tensor
- func Sub(x1, x2 mat.Tensor) mat.Tensor
- func SubScalar(x1, x2 mat.Tensor) mat.Tensor
- func Sum(xs ...mat.Tensor) mat.Tensor
- func Swish(x mat.Tensor) mat.Tensor
- func SwishB(x, beta mat.Tensor) mat.Tensor
- func T(x mat.Tensor) mat.Tensor
- func Tan(x mat.Tensor) mat.Tensor
- func Tanh(x mat.Tensor) mat.Tensor
- func Threshold(x, threshold, k mat.Tensor) mat.Tensor
- type AutoGradFunction
- type GradientBlocker
- type Operator
- func (o *Operator) AccGrad(grad mat.Tensor)
- func (o *Operator) At(indices ...int) mat.Tensor
- func (o *Operator) Data() float.Slice
- func (o *Operator) Dims() int
- func (o *Operator) Grad() mat.Tensor
- func (o *Operator) HasGrad() bool
- func (o *Operator) Item() float.Float
- func (o *Operator) Operands() []mat.Tensor
- func (o *Operator) RequiresGrad() bool
- func (o *Operator) Run(async ...bool) *Operator
- func (o *Operator) SetAt(m mat.Tensor, indices ...int)
- func (o *Operator) Shape() []int
- func (o *Operator) Size() int
- func (o *Operator) Value() mat.Tensor
- func (o *Operator) ZeroGrad()
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func Add ¶
Add returns a new operator node as a result of the gradfn.Add function. As a special case, the first node may be nil. This helps keep the code as concise as possible, e.g. during accumulation.
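A minimal sketch of the accumulation pattern (assuming the ag and mat packages are imported, e.g. from github.com/nlpodyssey/spago — paths assumed; tensor construction is omitted):

func sumAll(xs []mat.Tensor) mat.Tensor {
	var sum mat.Tensor // nil on the first iteration: the special case above
	for _, x := range xs {
		sum = ag.Add(sum, x)
	}
	return sum
}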
func AddScalar ¶
AddScalar returns a new operator node as a result of the gradfn.AddScalar function.
func AppendRows ¶
AppendRows returns a new operator node as a result of the gradfn.AppendRows function.
func Backward ¶
Backward initiates back-propagation from the input tensors.
The function operates according to the following mutually exclusive rules:
- If a tensor already has gradients (likely assigned externally via node.AccGrad()), those gradients are used.
- If a tensor does not have gradients assigned and is a scalar, the output gradient is assigned automatically by taking the derivative of the tensor with respect to itself (dy/dy = 1).
- If a tensor does not have gradients assigned and is not a scalar, the function returns an error.
During the back-propagation process, the gradients of all tensors, except for the given tensors, are accumulated into the existing gradients. Unless you intend this, ensure that all tensors have zero gradients before calling Backward.
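A minimal sketch of a training step under these rules (hypothetical names; the scalar loss lets Backward seed the gradient automatically):

func step(w, x, target mat.Tensor) error {
	y := ag.Mul(w, x)                                   // forward pass builds the graph
	loss := ag.ReduceMean(ag.Square(ag.Sub(y, target))) // scalar loss, no gradient assigned
	return ag.Backward(loss)                            // seeds dL/dL = 1 and propagates
}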
func ColViews ¶
ColViews calls ColView for each column of x, returning a new slice of column-view Nodes.
func Copy ¶ added in v1.1.0
Copy returns a new operator node as a result of the gradfn.Copy function.
func DivScalar ¶
DivScalar returns a new operator node as a result of the gradfn.DivScalar function.
func Dropout ¶
Dropout returns a new operator node as a result of the gradfn.Dropout function. If the dropout probability is zero, the operator will not be created, so the input itself is returned directly.
func DropoutFunc ¶
DropoutFunc returns a function to create a Dropout operator working with the given dropout probability.
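A sketch of configuring dropout once and reusing it (hypothetical names; per Dropout above, p = 0 yields a function that returns its input directly):

drop := ag.DropoutFunc(0.2)                 // p = 0.2 during training
h := drop(ag.ReLU(ag.Add(ag.Mul(w, x), b))) // dropout applied to the activation
// At inference time, use ag.DropoutFunc(0) to leave inputs unchanged.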
func HardSigmoid ¶
HardSigmoid returns a new operator node as a result of the `HardSigmoid` function.
func LeakyReLU ¶
LeakyReLU returns a new operator node as a result of the gradfn.LeakyReLU function.
func LogSoftmax ¶
LogSoftmax returns a new operator node as a result of Log(Softmax(x)).
func LogSumExp ¶
LogSumExp computes the log of the sum of exponentials of the input elements (the "log-sum-exp trick"). When a single input is given, it must be a vector; otherwise, the calculation is performed over the list of scalars.
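The two call modes, sketched (v, a, b and c are hypothetical tensors):

lse := ag.LogSumExp(v)        // single input: v must be a vector
lse2 := ag.LogSumExp(a, b, c) // multiple inputs: a list of scalars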
func ManualSeed ¶
func ManualSeed(seed uint64) *rand.LockedRand
ManualSeed sets the seed for generating random numbers.
func Map ¶
Map returns a transformed version of xs with all its components modified according to the mapping function. It is useful for applying an operator to a sequence of nodes. Keep in mind that this function has some overhead because of the callback, though it is insignificant compared to the mathematical computations.
func Map2 ¶
func Map2(mapping func(a mat.Tensor, b mat.Tensor) mat.Tensor, xs1 []mat.Tensor, xs2 []mat.Tensor) []mat.Tensor
Map2 applies a mapping function (which must take two arguments) pairwise to the items of the two node slices. It panics if one slice is shorter than the other.
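Both helpers combine naturally with the unary and binary operators above; a sketch with hypothetical slices:

activated := ag.Map(ag.Tanh, hiddenStates) // Tanh applied to each node
sums := ag.Map2(ag.Add, xs1, xs2)          // pairwise Add; panics on length mismatch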
func MaxPooling ¶
MaxPooling returns a new operator node as a result of the gradfn.MaxPooling function.
func PositiveELU ¶
PositiveELU returns a new operator node as a result of ELU(x) + 1.
func ProdScalar ¶
ProdScalar returns a new operator node as a result of the gradfn.ProdScalar function.
func Rand ¶ added in v1.1.0
func Rand() *rand.LockedRand
Rand returns the global random number generator.
func Reciprocal ¶
Reciprocal returns a new operator node as a result of the `Reciprocal` function.
func ReduceMax ¶
ReduceMax returns a new operator node as a result of the gradfn.ReduceMax function.
func ReduceMean ¶
ReduceMean returns a new operator node as a result of the gradfn.ReduceMean function.
func ReduceSum ¶
ReduceSum returns a new operator node as a result of the gradfn.ReduceSum function.
func ReverseSub ¶
ReverseSub returns a new operator node as a result of the gradfn.ReverseSub function.
func ReverseSubOne ¶ added in v1.1.0
ReverseSubOne returns a new operator node as a result of applying reverse subtraction with 1.0 to the input using the gradfn.ReverseSub function.
func RotateR ¶
RotateR performs the right circular shift. `i` is the number of places by which the elements are shifted.
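For example (a sketch; the result follows from the right-circular-shift semantics stated above):

// If x holds [a b c d e], RotateR(x, 2) is expected to yield [d e a b c].
shifted := ag.RotateR(x, 2)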
func ScalarMax ¶
ScalarMax returns a new operator node as a result of the gradfn.ScalarMax function.
func Seed ¶
func Seed() *rand.LockedRand
Seed sets the seed for generating random numbers to the current time (converted to uint64).
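For reproducible runs (e.g. when using Dropout), set a fixed seed once at startup; a sketch:

r := ag.ManualSeed(42) // deterministic random source from now on
_ = r                  // the returned *rand.LockedRand can also be used directly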
func SeparateMatrix ¶
SeparateMatrix returns a matrix of Node(s), represented as a slice of slices, containing the elements extracted from the input. The dimensions of the resulting matrix are the same as those of the input.
func SeparateVec ¶
SeparateVec returns a slice of Node(s) containing the elements extracted from the input. The size of the resulting slice equals the number of input elements. You can think of this method as the inverse of the ag.Concat operator.
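A round-trip sketch using the inverse relationship with Concat noted above (v is a hypothetical vector):

parts := ag.SeparateVec(v)  // one scalar node per element of v
back := ag.Concat(parts...) // reassembles a vector equal to v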
func SetForceSyncExecution ¶ added in v1.1.0
func SetForceSyncExecution(enable bool)
SetForceSyncExecution enables or disables the forcing of synchronous execution for all operators. When enabled, the operators will run synchronously, regardless of the "async" flag in the Run() function. This setting can be particularly useful for debugging.
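Typical debugging usage, a sketch:

ag.SetForceSyncExecution(true) // every operator now runs synchronously
defer ag.SetForceSyncExecution(false)
// ... code under investigation: failures are easier to trace without
// the asynchronous scheduling.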
func SoftShrink ¶
SoftShrink returns a new operator node as a result of the gradfn.SoftShrink function.
func SparseMax ¶
SparseMax returns a new operator node as a result of the gradfn.SparseMax function.
func SparseMaxLoss ¶
SparseMaxLoss returns a new operator node as a result of the gradfn.SparseMaxLoss function.
func StopGrad ¶
StopGrad creates a new GradientBlocker that stops the accumulated gradients from flowing through the wrapped Node.
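A sketch of freezing part of a computation (hypothetical names):

frozen := ag.StopGrad(pretrained) // gradients stop flowing here
y := ag.Mul(w, frozen)            // after Backward, only w accumulates gradients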
func SubScalar ¶
SubScalar returns a new operator node as a result of the gradfn.SubScalar function.
func Sum ¶
Sum returns the element-wise sum of the input tensors. It panics if the input is empty.
Types ¶
type AutoGradFunction ¶ added in v1.1.0
type AutoGradFunction interface {
	// Forward computes the output of the function.
	Forward() (mat.Tensor, error)

	// Backward computes the backward pass given the gradient of the output.
	Backward(gy mat.Tensor) error

	// Operands returns the list of operands.
	Operands() []mat.Tensor
}
AutoGradFunction represents a function with automatic differentiation features. It's used to define a new operator.
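A minimal sketch of an implementation: the identity function y = x, whose backward pass forwards the output gradient unchanged. It assumes gradient methods such as RequiresGrad and AccGrad are available on mat.Tensor, as the Operator and GradientBlocker methods below suggest.

type identity struct{ x mat.Tensor }

func (f *identity) Forward() (mat.Tensor, error) { return f.x, nil }

func (f *identity) Backward(gy mat.Tensor) error {
	if f.x.RequiresGrad() {
		f.x.AccGrad(gy) // dy/dx = 1: pass the gradient through
	}
	return nil
}

func (f *identity) Operands() []mat.Tensor { return []mat.Tensor{f.x} }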
type GradientBlocker ¶ added in v1.1.0
GradientBlocker embeds any tensor implementation, disabling gradient handling and blocking gradient accumulation.
func (*GradientBlocker) AccGrad ¶ added in v1.1.0
func (r *GradientBlocker) AccGrad(_ mat.Tensor)
AccGrad has no effects on a GradientBlocker Node.
func (*GradientBlocker) Grad ¶ added in v1.1.0
func (r *GradientBlocker) Grad() mat.Tensor
Grad always returns nil on a GradientBlocker Node.
func (*GradientBlocker) HasGrad ¶ added in v1.1.0
func (r *GradientBlocker) HasGrad() bool
HasGrad always returns false on a GradientBlocker Node.
func (*GradientBlocker) RequiresGrad ¶ added in v1.1.0
func (r *GradientBlocker) RequiresGrad() bool
RequiresGrad always returns false on a GradientBlocker Node.
func (*GradientBlocker) ZeroGrad ¶ added in v1.1.0
func (r *GradientBlocker) ZeroGrad()
ZeroGrad has no effects on a GradientBlocker Node.
type Operator ¶
type Operator struct {
// contains filtered or unexported fields
}
Operator is a type of node. It's used to represent a function with automatic differentiation features.
func NewOperator ¶
func NewOperator(f AutoGradFunction) *Operator
NewOperator creates a new operator with the given AutoGradFunction. Note that the operator's Value() can only be accessed after calling the Run() function.
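Usage sketch, reusing the identity function from the AutoGradFunction example above:

op := ag.NewOperator(&identity{x: x})
op.Run()        // forward pass; Value() is valid from here on
y := op.Value()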
func (*Operator) At ¶ added in v1.1.0
At returns the value at the given indices. It panics if the given indices are out of range.
func (*Operator) RequiresGrad ¶
RequiresGrad returns true if the node requires gradients.
func (*Operator) Run ¶ added in v1.1.0
Run starts the execution of the operator, performing the forward pass. If the optional async argument is set to true, the forward pass will be executed in a separate goroutine. The function returns a pointer to the Operator, allowing for method chaining.
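Chaining with asynchronous execution, a sketch (it is assumed here that Value() synchronizes with a pending asynchronous forward pass):

op := ag.NewOperator(&identity{x: x}).Run(true) // forward runs in a goroutine
// ... build other parts of the graph concurrently ...
y := op.Value() // assumed to block until the forward pass completes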
func (*Operator) SetAt ¶ added in v1.1.0
SetAt sets the value at the given indices. It panics if the given indices are out of range.