nn

package
v0.9.1
Published: Oct 30, 2023 License: Apache-2.0 Imports: 10 Imported by: 31

Documentation

Index

Constants

View Source
const SEP = "."

SEP is the separator used between path elements in tensor names.

Variables

This section is empty.

Functions

func BCELoss added in v0.3.14

func BCELoss(logits, target *ts.Tensor, opts ...LossFnOption) *ts.Tensor

BCELoss calculates the binary cross-entropy loss.

- logits: tensor of shape [B, C, H, W] corresponding to the raw output of the model.
- target: ground-truth tensor of shape [B, 1, H, W].
- posWeight: scalar representing the weight attributed to the positive class. This is especially useful for an imbalanced dataset.
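
A minimal usage sketch (the gotch import paths and the ts.MustRandn/ts.MustZeros factories are assumptions; only the BCELoss signature and the WithLossFnPosWeight option come from this package):

	package main

	import (
		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
		"github.com/sugarme/gotch/ts"
	)

	func main() {
		// Hypothetical shapes: B=2, C=1, H=W=4.
		logits := ts.MustRandn([]int64{2, 1, 4, 4}, gotch.Float, gotch.CPU)
		target := ts.MustZeros([]int64{2, 1, 4, 4}, gotch.Float, gotch.CPU)

		// Up-weight the positive class, e.g. for an imbalanced dataset.
		loss := nn.BCELoss(logits, target, nn.WithLossFnPosWeight(2))
		loss.Print()
	}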

func BatchAccuracyForLogits

func BatchAccuracyForLogits(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)

BatchAccuracyForLogits calculates the average accuracy over test batches.

NOTE: PyTorch uses `NoGradGuard`, a thread-local guard that sets a global flag checked by the backend whenever an op is performed on a variable. The guard saves the current status and sets it to false in its constructor, then restores the saved status in its destructor. That makes it similar to a `with torch.no_grad():` block in Python. This approach does not work in Go. There are two ways around it: one is to freeze the VarStore, the other is to manually set autograd on the `loss` tensor, i.e., `loss = loss.MustSetRequiresGrad(true)`.
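
A sketch of the first workaround, freezing the VarStore around evaluation (Freeze/Unfreeze are assumed from the VarStore API, which is documented elsewhere on this page):

	package example

	import (
		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
		"github.com/sugarme/gotch/ts"
	)

	// evalAccuracy freezes the VarStore so variables are not tracked while
	// the test batches are forwarded, then unfreezes it for further training.
	func evalAccuracy(vs *nn.VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device) float64 {
		vs.Freeze()
		defer vs.Unfreeze()
		return nn.BatchAccuracyForLogits(vs, m, xs, ys, d, 64)
	}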

func BatchAccuracyForLogitsIdx

func BatchAccuracyForLogitsIdx(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)

BatchAccuracyForLogitsIdx is an alternative to BatchAccuracyForLogits for calculating the accuracy of a specified batch using the module weights. It uses tensor indexing instead of Iter2.

func BatchAccuracyForLogitsOld added in v0.3.0

func BatchAccuracyForLogitsOld(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)

func CalculateFans added in v0.8.0

func CalculateFans(shape []int64) (fanIn, fanOut int64, err error)

CalculateFans calculates fan-in and fan-out based on tensor shape.

func CrossEntropyLoss added in v0.3.14

func CrossEntropyLoss(logits, target *ts.Tensor, opts ...LossFnOption) *ts.Tensor

CrossEntropyLoss calculates the cross-entropy loss. Ref. https://github.com/pytorch/pytorch/blob/15be189f0de4addf4f68d18022500f67617ab05d/torch/nn/functional.py#L2012

- logits: tensor of shape [B, C, H, W] corresponding to the raw output of the model.
- target: ground-truth tensor of shape [B, 1, H, W].
- posWeight: scalar representing the weight attributed to the positive class. This is especially useful for an imbalanced dataset.

func MSELoss added in v0.8.0

func MSELoss(logits, labels *ts.Tensor, reductionOpt ...int64) *ts.Tensor

MSELoss calculates the mean squared error (MSE) loss.

- reductionOpt: either 0 ("none"), 1 ("mean"), or 2 ("sum"). Default=1 ("mean").
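
For example (tensor factories assumed, as in the BCELoss sketch above):

	package main

	import (
		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
		"github.com/sugarme/gotch/ts"
	)

	func main() {
		pred := ts.MustRandn([]int64{8, 1}, gotch.Float, gotch.CPU)
		gold := ts.MustZeros([]int64{8, 1}, gotch.Float, gotch.CPU)

		mean := nn.MSELoss(pred, gold)   // default reduction: 1 ("mean")
		sum := nn.MSELoss(pred, gold, 2) // 2 = "sum"
		mean.Print()
		sum.Print()
	}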

func NewBuffer added in v0.6.2

func NewBuffer(path *Path, name string, x *ts.Tensor, persistentOpt ...bool) *ts.Tensor

NewBuffer creates new buffer.

A buffer differs from a Parameter in that its requiresGrad is always false.

- `o.Persistent` param. Default=true. If `true`, the buffer variable will be saved when `nn.VarStore.Save()` is called.

Ref. - https://github.com/pytorch/pytorch/blob/f71eede85a69caed637008e331f5ac5f5b7717ae/torch/nn/modules/module.py#L275 - https://discuss.pytorch.org/t/what-is-the-difference-between-register-buffer-and-register-parameter-of-nn-module/32723/2

func NewConstInit

func NewConstInit(v float64) constInit

func NewGlorotNInit

func NewGlorotNInit() glorotNInit

func NewKaimingUniformInit

func NewKaimingUniformInit(opts ...KaimingOption) *kaimingUniformInit

func NewParameter added in v0.6.0

func NewParameter(path *Path, name string, x *ts.Tensor, requireGradOpt ...bool) *ts.Tensor

NewParameter creates a kind of tensor that is considered as a module parameter. Ref. https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html
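
A sketch contrasting NewParameter and NewBuffer (NewVarStore/Root and the tensor factories are assumptions from the rest of the library):

	package main

	import (
		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
		"github.com/sugarme/gotch/ts"
	)

	func main() {
		vs := nn.NewVarStore(gotch.CPU)
		root := vs.Root()

		// Trainable parameter: requiresGrad defaults to true.
		w := nn.NewParameter(root, "weight", ts.MustRandn([]int64{4, 3}, gotch.Float, gotch.CPU))

		// Buffer: requiresGrad is always false; persistent by default, so it
		// is saved by VarStore.Save(). Pass false to keep it out of the file.
		rm := nn.NewBuffer(root, "running_mean", ts.MustZeros([]int64{4}, gotch.Float, gotch.CPU))

		_, _ = w, rm
	}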

func NewRandnInit

func NewRandnInit(mean, stdev float64) randnInit

func NewUniformInit

func NewUniformInit(lo, up float64) uniformInit

func WithUint8

func WithUint8(n uint8) func() uint8

WithUint8 returns a uint8 value option.

func XavierUniform_ added in v0.8.0

func XavierUniform_(x *ts.Tensor, gainOpt ...float64)

XavierUniform_ fills the input tensor (in place) with values according to the method described in the paper `Understanding the difficulty of training deep feedforward neural networks`, using a uniform distribution.

Also known as Glorot initialization.

Paper: https://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf Pytorch implementation: https://github.com/pytorch/pytorch/blob/df50f91571891ec3f87977a2bdd4a2b609d70afc/torch/nn/init.py#L310
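
A sketch of the in-place call (tensor factory assumed):

	package main

	import (
		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
		"github.com/sugarme/gotch/ts"
	)

	func main() {
		w := ts.MustZeros([]int64{256, 128}, gotch.Float, gotch.CPU)
		nn.XavierUniform_(w)          // default gain = 1.0
		nn.XavierUniform_(w, 5.0/3.0) // e.g. PyTorch's tanh gain
		w.Print()
	}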

Types

type AdamConfig

type AdamConfig struct {
	Beta1 float64
	Beta2 float64
	Wd    float64
}

func DefaultAdamConfig

func DefaultAdamConfig() *AdamConfig

DefaultAdamConfig creates AdamConfig with default values

func NewAdamConfig

func NewAdamConfig(beta1, beta2, wd float64) *AdamConfig

NewAdamConfig creates AdamConfig with specified values

func (*AdamConfig) Build

func (c *AdamConfig) Build(vs *VarStore, lr float64) (*Optimizer, error)
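
A typical build-and-step sketch (NewVarStore/Root and the tensor factories are assumptions; NewLinear and MSELoss are documented elsewhere on this page):

	package main

	import (
		"log"

		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
		"github.com/sugarme/gotch/ts"
	)

	func main() {
		vs := nn.NewVarStore(gotch.CPU)
		model := nn.NewLinear(vs.Root(), 3, 1, nn.DefaultLinearConfig())

		opt, err := nn.DefaultAdamConfig().Build(vs, 1e-3)
		if err != nil {
			log.Fatal(err)
		}

		x := ts.MustRandn([]int64{4, 3}, gotch.Float, gotch.CPU)
		y := ts.MustZeros([]int64{4, 1}, gotch.Float, gotch.CPU)

		for i := 0; i < 100; i++ {
			loss := nn.MSELoss(model.Forward(x), y)
			// Backward pass + optimization step in one call.
			if err := opt.BackwardStep(loss); err != nil {
				log.Fatal(err)
			}
		}
	}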

type AdamWConfig added in v0.3.11

type AdamWConfig struct {
	Beta1 float64
	Beta2 float64
	Wd    float64
}

func DefaultAdamWConfig added in v0.3.11

func DefaultAdamWConfig() *AdamWConfig

DefaultAdamWConfig creates AdamWConfig with default values

func NewAdamWConfig added in v0.3.11

func NewAdamWConfig(beta1, beta2, wd float64) *AdamWConfig

NewAdamWConfig creates AdamWConfig with specified values

func (*AdamWConfig) Build added in v0.3.11

func (c *AdamWConfig) Build(vs *VarStore, lr float64) (*Optimizer, error)

Build builds AdamW optimizer

type AddOpt added in v0.6.2

type AddOpt func(*AddOpts)

func WithPersistent added in v0.6.2

func WithPersistent(v bool) AddOpt

func WithVarType added in v0.6.2

func WithVarType(v string) AddOpt

type AddOpts added in v0.6.2

type AddOpts struct {
	VarType    string
	Persistent bool
}

type BatchNorm

type BatchNorm struct {
	RunningMean *ts.Tensor
	RunningVar  *ts.Tensor
	Ws          *ts.Tensor
	Bs          *ts.Tensor
	Nd          uint
	// contains filtered or unexported fields
}

A batch-normalization layer.

func BatchNorm1D

func BatchNorm1D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm

Applies Batch Normalization over a three-dimensional input.

The input shape is assumed to be (N, C, L). Normalization is performed over the first batch dimension N.

func BatchNorm2D

func BatchNorm2D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm

Applies Batch Normalization over a four-dimensional input.

The input shape is assumed to be (N, C, H, W). Normalization is performed over the first batch dimension N.

func BatchNorm3D

func BatchNorm3D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm

Applies Batch Normalization over a five-dimensional input.

The input shape is assumed to be (N, C, D, H, W). Normalization is performed over the first batch dimension N.

func NewBatchNorm

func NewBatchNorm(vs *Path, nd uint, outDim int64, config *BatchNormConfig) *BatchNorm

NewBatchNorm creates a new BatchNorm layer

func (*BatchNorm) Forward added in v0.6.0

func (bn *BatchNorm) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

Forward forwards inputs through the module. NOTE: this runs in training mode by default (training=true) and will update the BatchNorm running statistics. Wrap the module with tensor.NoGrad(), or call ForwardT with train=false, when running the model in inference mode.
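
A sketch of both inference options (setup as in the other examples; ts.NoGrad is assumed from the ts package, which the NOTE refers to as tensor.NoGrad):

	package main

	import (
		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
		"github.com/sugarme/gotch/ts"
	)

	func main() {
		vs := nn.NewVarStore(gotch.CPU)
		bn := nn.BatchNorm2D(vs.Root(), 16, nn.DefaultBatchNormConfig())
		x := ts.MustRandn([]int64{8, 16, 4, 4}, gotch.Float, gotch.CPU)

		// Explicit eval mode.
		out := bn.ForwardT(x, false)

		// Or keep Forward but disable gradient tracking, per the NOTE above.
		var out2 *ts.Tensor
		ts.NoGrad(func() {
			out2 = bn.Forward(x)
		})
		_, _ = out, out2
	}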

func (*BatchNorm) ForwardT

func (bn *BatchNorm) ForwardT(xs *ts.Tensor, train bool) (retVal *ts.Tensor)

type BatchNormConfig

type BatchNormConfig struct {
	CudnnEnable bool
	Eps         float64
	Momentum    float64
	WsInit      Init
	BsInit      Init
}

Batch-normalization config.

func DefaultBatchNormConfig

func DefaultBatchNormConfig() *BatchNormConfig

type ClipOpt added in v0.6.2

type ClipOpt func(*ClipOpts)

func WithErrorIfNonFinite added in v0.6.2

func WithErrorIfNonFinite(v bool) ClipOpt

func WithNormType added in v0.6.2

func WithNormType(v float64) ClipOpt

type ClipOpts added in v0.6.2

type ClipOpts struct {
	NormType         float64
	ErrorIfNonFinite bool
}

type Conv

type Conv interface{}

func NewConv

func NewConv(vs *Path, inDim, outDim int64, ksizes []int64, config interface{}) Conv

NewConv is a generic builder to build Conv1D, Conv2D, Conv3D. It returns an interface Conv which might need a type assertion for further use.
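
For example, passing a 2D config should yield a *Conv2D behind the interface (that the concrete type is *Conv2D is an assumption based on NewConv2D's return type):

	package main

	import (
		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
	)

	func main() {
		vs := nn.NewVarStore(gotch.CPU)

		c := nn.NewConv(vs.Root(), 3, 16, []int64{3, 3}, nn.DefaultConv2DConfig())
		conv2d, ok := c.(*nn.Conv2D) // type assertion before further use
		if !ok {
			panic("expected *nn.Conv2D")
		}
		_ = conv2d
	}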

type Conv1D

type Conv1D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *Conv1DConfig
}

Conv1D is convolution 1D struct.

func NewConv1D

func NewConv1D(vs *Path, inDim, outDim, k int64, cfg *Conv1DConfig) *Conv1D

NewConv1D creates Conv1D struct.

func (*Conv1D) Forward

func (c *Conv1D) Forward(xs *ts.Tensor) *ts.Tensor

func (*Conv1D) ForwardT

func (c *Conv1D) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

type Conv1DConfig

type Conv1DConfig struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	Groups   int64
	Bias     bool
	WsInit   Init
	BsInit   Init
}

Conv1DConfig is configuration struct for convolution 1D.

func DefaultConv1DConfig

func DefaultConv1DConfig() *Conv1DConfig

DefaultConv1DConfig creates a default Conv1DConfig.

func NewConv1DConfig added in v0.4.2

func NewConv1DConfig(opts ...Conv1DConfigOpt) *Conv1DConfig

NewConv1DConfig creates Conv1DConfig.

type Conv1DConfigOpt added in v0.4.2

type Conv1DConfigOpt func(*Conv1DConfig)

Conv1DConfigOpt is option for Conv1DConfig.

func WithBias1D added in v0.4.2

func WithBias1D(val bool) Conv1DConfigOpt

WithBias1D adds bias 1D option.

func WithBsInit1D added in v0.4.2

func WithBsInit1D(val Init) Conv1DConfigOpt

WithBsInit1D adds BsInit 1D option.

func WithDilation1D added in v0.4.2

func WithDilation1D(val int64) Conv1DConfigOpt

WithDilation1D adds dilation 1D option.

func WithGroup1D added in v0.4.2

func WithGroup1D(val int64) Conv1DConfigOpt

func WithPadding1D added in v0.4.2

func WithPadding1D(val int64) Conv1DConfigOpt

WithPadding1D adds padding 1D option.

func WithStride1D added in v0.4.2

func WithStride1D(val int64) Conv1DConfigOpt

WithStride1D adds stride 1D option.

func WithWsInit1D added in v0.4.2

func WithWsInit1D(val Init) Conv1DConfigOpt

WithWsInit1D adds WsInit 1D option.

type Conv2D

type Conv2D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *Conv2DConfig
}

Conv2D is convolution 2D struct.

func NewConv2D

func NewConv2D(vs *Path, inDim, outDim int64, k int64, cfg *Conv2DConfig) *Conv2D

NewConv2D creates new Conv2D.

func (*Conv2D) Forward

func (c *Conv2D) Forward(xs *ts.Tensor) *ts.Tensor

func (*Conv2D) ForwardT

func (c *Conv2D) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

type Conv2DConfig

type Conv2DConfig struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	Groups   int64
	Bias     bool
	WsInit   Init
	BsInit   Init
}

Conv2DConfig is configuration for convolution 2D.

func DefaultConv2DConfig

func DefaultConv2DConfig() *Conv2DConfig

DefaultConv2DConfig creates a default Conv2DConfig.

func NewConv2DConfig added in v0.4.2

func NewConv2DConfig(opts ...Conv2DConfigOpt) *Conv2DConfig

NewConv2DConfig creates Conv2DConfig.

type Conv2DConfigOpt added in v0.4.2

type Conv2DConfigOpt func(*Conv2DConfig)

Conv2DConfigOpt is option type for Conv2DConfig.

func WithBias2D added in v0.4.2

func WithBias2D(val bool) Conv2DConfigOpt

WithBias2D adds bias 2D option.

func WithBsInit2D added in v0.4.2

func WithBsInit2D(val Init) Conv2DConfigOpt

WithBsInit2D adds BsInit 2D option.

func WithDilation2D added in v0.4.2

func WithDilation2D(val int64) Conv2DConfigOpt

WithDilation2D adds dilation 2D option.

func WithGroup2D added in v0.4.2

func WithGroup2D(val int64) Conv2DConfigOpt

WithGroup2D adds group 2D option.

func WithPadding2D added in v0.4.2

func WithPadding2D(val int64) Conv2DConfigOpt

WithPadding2D adds padding 2D option.

func WithStride2D added in v0.4.2

func WithStride2D(val int64) Conv2DConfigOpt

WithStride2D adds stride 2D option.

func WithWsInit2D added in v0.4.2

func WithWsInit2D(val Init) Conv2DConfigOpt

WithWsInit2D adds WsInit 2D option.

type Conv3D

type Conv3D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *Conv3DConfig
}

Conv3D is convolution 3D struct.

func NewConv3D

func NewConv3D(vs *Path, inDim, outDim, k int64, cfg *Conv3DConfig) *Conv3D

NewConv3D creates new Conv3D struct.

func (*Conv3D) Forward

func (c *Conv3D) Forward(xs *ts.Tensor) *ts.Tensor

func (*Conv3D) ForwardT

func (c *Conv3D) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

type Conv3DConfig

type Conv3DConfig struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	Groups   int64
	Bias     bool
	WsInit   Init
	BsInit   Init
}

Conv3DConfig is configuration struct for convolution 3D.

func DefaultConv3DConfig added in v0.4.5

func DefaultConv3DConfig() *Conv3DConfig

DefaultConv3DConfig creates a default Conv3DConfig.

func NewConv3DConfig added in v0.4.5

func NewConv3DConfig(opts ...Conv3DConfigOpt) *Conv3DConfig

NewConv3DConfig creates Conv3DConfig.

type Conv3DConfigOpt added in v0.4.5

type Conv3DConfigOpt func(*Conv3DConfig)

Conv3DConfigOpt is option type for Conv3DConfig.

func WithBias3D added in v0.4.5

func WithBias3D(val bool) Conv3DConfigOpt

WithBias3D adds bias 3D option.

func WithBsInit3D added in v0.4.5

func WithBsInit3D(val Init) Conv3DConfigOpt

WithBsInit3D adds BsInit 3D option.

func WithDilation3D added in v0.4.5

func WithDilation3D(val int64) Conv3DConfigOpt

WithDilation3D adds dilation 3D option.

func WithGroup3D added in v0.4.5

func WithGroup3D(val int64) Conv3DConfigOpt

WithGroup3D adds group 3D option.

func WithPadding3D added in v0.4.5

func WithPadding3D(val int64) Conv3DConfigOpt

WithPadding3D adds padding 3D option.

func WithStride3D added in v0.4.5

func WithStride3D(val int64) Conv3DConfigOpt

WithStride3D adds stride 3D option.

func WithWsInit3D added in v0.4.5

func WithWsInit3D(val Init) Conv3DConfigOpt

WithWsInit3D adds WsInit 3D option.

type ConvTranspose1D

type ConvTranspose1D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *ConvTranspose1DConfig
}

func NewConvTranspose1D

func NewConvTranspose1D(vs *Path, inDim, outDim int64, ksizes []int64, cfg *ConvTranspose1DConfig) *ConvTranspose1D

func (*ConvTranspose1D) Forward

func (c *ConvTranspose1D) Forward(xs *ts.Tensor) *ts.Tensor

type ConvTranspose1DConfig

type ConvTranspose1DConfig struct {
	Stride        []int64
	Padding       []int64
	OutputPadding []int64
	Dilation      []int64
	Groups        int64
	Bias          bool
	WsInit        Init
	BsInit        Init
}

func DefaultConvTranspose1DConfig

func DefaultConvTranspose1DConfig() *ConvTranspose1DConfig

DefaultConvTranspose1DConfig creates a default ConvTranspose1DConfig.

type ConvTranspose2D

type ConvTranspose2D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *ConvTranspose2DConfig
}

func NewConvTranspose2D

func NewConvTranspose2D(vs *Path, inDim, outDim int64, ksizes []int64, cfg *ConvTranspose2DConfig) *ConvTranspose2D

func (*ConvTranspose2D) Forward

func (c *ConvTranspose2D) Forward(xs *ts.Tensor) *ts.Tensor

type ConvTranspose2DConfig

type ConvTranspose2DConfig struct {
	Stride        []int64
	Padding       []int64
	OutputPadding []int64
	Dilation      []int64
	Groups        int64
	Bias          bool
	WsInit        Init
	BsInit        Init
}

type ConvTranspose3D

type ConvTranspose3D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *ConvTranspose3DConfig
}

func NewConvTranspose3D

func NewConvTranspose3D(vs *Path, inDim, outDim int64, ksizes []int64, cfg *ConvTranspose3DConfig) *ConvTranspose3D

func (*ConvTranspose3D) Forward

func (c *ConvTranspose3D) Forward(xs *ts.Tensor) *ts.Tensor

type ConvTranspose3DConfig

type ConvTranspose3DConfig struct {
	Stride        []int64
	Padding       []int64
	OutputPadding []int64
	Dilation      []int64
	Groups        int64
	Bias          bool
	WsInit        Init
	BsInit        Init
}

type CosineAnnealingLR added in v0.3.10

type CosineAnnealingLR struct {
	// contains filtered or unexported fields
}

CosineAnnealingLR sets the learning rate of each optimizer parameter group using a cosine annealing schedule, where eta_max is set to the initial learning rate and T_cur is the number of epochs since the last restart in SGDR (Stochastic Gradient Descent with Warm Restarts).

NOTE: this implements only the cosine annealing part of SGDR, not the restarts. Ref. - https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.CosineAnnealingLR - https://arxiv.org/abs/1608.03983

func NewCosineAnnealingLR added in v0.3.10

func NewCosineAnnealingLR(opt *Optimizer, tmax int, etaMin float64) *CosineAnnealingLR

NewCosineAnnealingLR creates a new CosineAnnealingLR.
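
A sketch wiring the scheduler to an optimizer and stepping once per epoch (setup as in the AdamConfig example above):

	package main

	import (
		"log"

		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
	)

	func main() {
		vs := nn.NewVarStore(gotch.CPU)
		_ = nn.NewLinear(vs.Root(), 3, 1, nn.DefaultLinearConfig())

		opt, err := nn.DefaultAdamConfig().Build(vs, 0.1)
		if err != nil {
			log.Fatal(err)
		}

		// Anneal from 0.1 toward etaMin=0.001 over tmax=10 epochs.
		sched := nn.NewCosineAnnealingLR(opt, 10, 0.001).Build()
		for epoch := 0; epoch < 10; epoch++ {
			// ... train one epoch ...
			sched.Step()
		}
	}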

func (*CosineAnnealingLR) Build added in v0.3.10

func (ca *CosineAnnealingLR) Build() *LRScheduler

Build implements scheduler interface.

func (*CosineAnnealingLR) SetLRs added in v0.3.10

func (ca *CosineAnnealingLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type CosineAnnealingWarmRestarts added in v0.3.11

type CosineAnnealingWarmRestarts struct {
	// contains filtered or unexported fields
}

CosineAnnealingWarmRestarts sets the learning rate of each parameter group using a cosine annealing schedule.

Source: Stochastic Gradient Descent with Warm Restarts: https://arxiv.org/abs/1608.03983

func NewCosineAnnealingWarmRestarts added in v0.3.11

func NewCosineAnnealingWarmRestarts(opt *Optimizer, t0 int, opts ...CosineAnnealingWarmRestartsOption) *CosineAnnealingWarmRestarts

func (*CosineAnnealingWarmRestarts) Build added in v0.3.11

func (s *CosineAnnealingWarmRestarts) Build() *LRScheduler

Build implements scheduler interface.

func (*CosineAnnealingWarmRestarts) SetLRs added in v0.3.11

func (s *CosineAnnealingWarmRestarts) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

NOTE. scheduler.Step(epoch) could be called after every batch update

type CosineAnnealingWarmRestartsOption added in v0.3.11

type CosineAnnealingWarmRestartsOption func(*CosineAnnealingWarmRestartsOptions)

func WithCosineAnnealingLastEpoch added in v0.3.11

func WithCosineAnnealingLastEpoch(v int) CosineAnnealingWarmRestartsOption

func WithEtaMin added in v0.3.11

func WithEtaMin(v float64) CosineAnnealingWarmRestartsOption

func WithTMult added in v0.3.11

func WithTMult(v int) CosineAnnealingWarmRestartsOption

type CosineAnnealingWarmRestartsOptions added in v0.3.11

type CosineAnnealingWarmRestartsOptions struct {
	TMult     int
	EtaMin    float64
	LastEpoch int
}

type CyclicLR added in v0.3.11

type CyclicLR struct {
	// contains filtered or unexported fields
}

CyclicLR sets the learning rate of each parameter group according to the cyclical learning rate policy (CLR). The policy cycles the learning rate between two boundaries with a constant frequency, as detailed in the paper `Cyclical Learning Rates for Training Neural Networks`. The distance between the two boundaries can be scaled on a per-iteration or per-cycle basis.

Cyclical learning rate policy changes the learning rate after every batch. `Step()` should be called after a batch has been used for training. This class has three built-in policies, as put forth in the paper:

- "triangular": a basic triangular cycle without amplitude scaling.
- "triangular2": a basic triangular cycle that scales initial amplitude by half each cycle.
- "exp_range": a cycle that scales initial amplitude by gamma^(cycle iterations) at each cycle iteration.

Source: - Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186 - bckenstler/CLR: https://github.com/bckenstler/CLR

func NewCyclicLR added in v0.3.11

func NewCyclicLR(opt *Optimizer, baseLRs, maxLRs []float64, opts ...CyclicOption) *CyclicLR
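
A sketch using the "triangular2" policy, stepping after every batch as described above (one base/max learning rate per parameter group):

	package example

	import "github.com/sugarme/gotch/nn"

	// cyclicTrain wires a cyclic policy onto an existing optimizer.
	func cyclicTrain(opt *nn.Optimizer, nBatches int) {
		cyc := nn.NewCyclicLR(opt,
			[]float64{1e-4}, []float64{1e-2},
			nn.WithCyclicMode("triangular2"),
			nn.WithCyclicStepSizeUp(2000),
		).Build()

		for i := 0; i < nBatches; i++ {
			// ... forward/backward/optimizer step for one batch ...
			cyc.Step()
		}
	}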

func (*CyclicLR) Build added in v0.3.11

func (cyc *CyclicLR) Build() *LRScheduler

Build implements scheduler interface.

func (*CyclicLR) SetLRs added in v0.3.11

func (cyc *CyclicLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

It calculates the learning rate at a given batch index; this function treats `lastEpoch` as the last batch index. NOTE: if `cycleMomentum` is `true`, this function has a side effect of updating the optimizer's momentum.

type CyclicOption added in v0.3.11

type CyclicOption func(*CyclicOptions)

func WithCyclicBaseMomentum added in v0.3.11

func WithCyclicBaseMomentum(v float64) CyclicOption

func WithCyclicCycleMomentum added in v0.3.11

func WithCyclicCycleMomentum(v bool) CyclicOption

func WithCyclicGamma added in v0.3.11

func WithCyclicGamma(v float64) CyclicOption

func WithCyclicLastEpoch added in v0.3.11

func WithCyclicLastEpoch(v int) CyclicOption

func WithCyclicMaxMomentum added in v0.3.11

func WithCyclicMaxMomentum(v float64) CyclicOption

func WithCyclicMode added in v0.3.11

func WithCyclicMode(v string) CyclicOption

func WithCyclicScaleFn added in v0.3.11

func WithCyclicScaleFn(v func(x float64) float64) CyclicOption

func WithCyclicScaleMode added in v0.3.11

func WithCyclicScaleMode(v string) CyclicOption

func WithCyclicStepSizeDown added in v0.3.11

func WithCyclicStepSizeDown(v int) CyclicOption

func WithCyclicStepSizeUp added in v0.3.11

func WithCyclicStepSizeUp(v int) CyclicOption

type CyclicOptions added in v0.3.11

type CyclicOptions struct {
	StepSizeUp    int                     // 2000
	StepSizeDown  int                     // -1
	Mode          string                  // "triangular"
	Gamma         float64                 // 1.0
	ScaleFn       func(x float64) float64 // nil
	ScaleMode     string                  // "cycle"
	CycleMomentum bool                    // true
	BaseMomentum  float64                 // 0.8
	MaxMomentum   float64                 // 0.9
	LastEpoch     int                     // -1
}

type Dropout added in v0.6.0

type Dropout struct {
	// contains filtered or unexported fields
}

Dropout represents a neural network dropout layer.

func NewDropout added in v0.6.0

func NewDropout(p float64) *Dropout

NewDropout creates a new Dropout layer

func (*Dropout) ForwardT added in v0.6.0

func (d *Dropout) ForwardT(input *ts.Tensor, train bool) (retVal *ts.Tensor)

ForwardT implements ModuleT for Dropout layer.

type Embedding

type Embedding struct {
	Ws *ts.Tensor
	// contains filtered or unexported fields
}

An embedding layer.

An embedding layer acts as a simple lookup table that stores embeddings. This is commonly used to store word embeddings.

func NewEmbedding

func NewEmbedding(vs *Path, numEmbeddings int64, embeddingDim int64, config *EmbeddingConfig) *Embedding

NewEmbedding creates a new Embedding

func (*Embedding) Forward

func (e *Embedding) Forward(xs *ts.Tensor) *ts.Tensor

Forward implements Module interface for Embedding

func (*Embedding) ForwardT

func (e *Embedding) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

ForwardT implements ModuleT interface for Embedding

type EmbeddingConfig

type EmbeddingConfig struct {
	Sparse          bool
	ScaleGradByFreq bool
	WsInit          Init
	PaddingIdx      int64
}

Configuration option for an embedding layer.

func DefaultEmbeddingConfig

func DefaultEmbeddingConfig() *EmbeddingConfig

type Entry

type Entry struct {
	// contains filtered or unexported fields
}

Entry holds an entry corresponding to a given name in Path.

func (*Entry) MustOrKaimingUniform added in v0.6.2

func (e *Entry) MustOrKaimingUniform(dims []int64, opts ...AddOpt) *ts.Tensor

MustOrKaimingUniform returns the existing entry if found, otherwise creates a new variable.

func (*Entry) MustOrOnes added in v0.6.2

func (e *Entry) MustOrOnes(dims []int64, opts ...AddOpt) *ts.Tensor

MustOrOnes returns the existing entry if found, otherwise creates a new variable.

func (*Entry) MustOrOnesNoTrain added in v0.6.2

func (e *Entry) MustOrOnesNoTrain(dims []int64, opts ...AddOpt) *ts.Tensor

MustOrOnesNoTrain returns the existing entry if found, otherwise creates a new variable.

func (*Entry) MustOrRandn added in v0.6.2

func (e *Entry) MustOrRandn(dims []int64, mean, stdev float64, opts ...AddOpt) *ts.Tensor

MustOrRandn returns the existing entry if found, otherwise creates a new variable.

func (*Entry) MustOrRandnStandard added in v0.6.2

func (e *Entry) MustOrRandnStandard(dims []int64, opts ...AddOpt) *ts.Tensor

MustOrRandnStandard returns the existing entry if found, otherwise creates a new variable.

func (*Entry) MustOrUniform added in v0.6.2

func (e *Entry) MustOrUniform(dims []int64, lo, up float64, opts ...AddOpt) *ts.Tensor

MustOrUniform returns the existing entry if found, otherwise creates a new variable.

func (*Entry) MustOrVar added in v0.6.2

func (e *Entry) MustOrVar(dims []int64, init Init, opts ...AddOpt) *ts.Tensor

MustOrVar returns the existing entry if found, otherwise creates a new variable. It panics if error.

func (*Entry) MustOrVarCopy added in v0.6.2

func (e *Entry) MustOrVarCopy(tensor *ts.Tensor) *ts.Tensor

MustOrVarCopy returns the existing entry if found, otherwise creates a new variable.

func (*Entry) MustOrZeros added in v0.6.2

func (e *Entry) MustOrZeros(dims []int64, opts ...AddOpt) *ts.Tensor

MustOrZeros returns the existing entry if found, otherwise creates a new variable.

func (*Entry) MustOrZerosNoTrain added in v0.6.2

func (e *Entry) MustOrZerosNoTrain(dims []int64, opts ...AddOpt) *ts.Tensor

MustOrZerosNoTrain returns the existing entry if found, otherwise creates a new variable.

func (*Entry) OrKaimingUniform

func (e *Entry) OrKaimingUniform(dims []int64, opts ...AddOpt) (*ts.Tensor, error)

OrKaimingUniform returns the existing entry if found, otherwise creates a new variable.

func (*Entry) OrOnes

func (e *Entry) OrOnes(dims []int64, opts ...AddOpt) (*ts.Tensor, error)

OrOnes returns the existing entry if found, otherwise creates a new variable.

func (*Entry) OrOnesNoTrain

func (e *Entry) OrOnesNoTrain(dims []int64, opts ...AddOpt) (*ts.Tensor, error)

OrOnesNoTrain returns the existing entry if found, otherwise creates a new variable.

func (*Entry) OrRandn

func (e *Entry) OrRandn(dims []int64, mean, stdev float64, opts ...AddOpt) (*ts.Tensor, error)

OrRandn returns the existing entry if found, otherwise creates a new variable.

func (*Entry) OrRandnStandard

func (e *Entry) OrRandnStandard(dims []int64, opts ...AddOpt) (*ts.Tensor, error)

OrRandnStandard returns the existing entry if found, otherwise creates a new variable.

func (*Entry) OrUniform

func (e *Entry) OrUniform(dims []int64, lo, up float64, opts ...AddOpt) (*ts.Tensor, error)

OrUniform returns the existing entry if found, otherwise creates a new variable.

func (*Entry) OrVar

func (e *Entry) OrVar(dims []int64, init Init, opts ...AddOpt) (*ts.Tensor, error)

OrVar returns the existing entry if found, otherwise creates a new variable.

If this entry name matches the name of a variable stored in the var store, the corresponding tensor is returned. Otherwise a new variable is added to the var store under the entry name, and it is initialized according to the init parameter.
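
For example (VarStore setup assumed, as in the earlier sketches):

	package main

	import (
		"log"

		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
	)

	func main() {
		vs := nn.NewVarStore(gotch.CPU)
		root := vs.Root()

		// First call creates "gamma" initialized with ones; a second call
		// with the same name returns the stored tensor instead.
		gamma, err := root.Entry("gamma").OrOnes([]int64{256})
		if err != nil {
			log.Fatal(err)
		}
		same, err := root.Entry("gamma").OrOnes([]int64{256})
		if err != nil {
			log.Fatal(err)
		}
		_, _ = gamma, same
	}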

func (*Entry) OrVarCopy

func (e *Entry) OrVarCopy(tensor *ts.Tensor) (*ts.Tensor, error)

OrVarCopy returns the existing entry if found, otherwise creates a new variable.

func (*Entry) OrZeros

func (e *Entry) OrZeros(dims []int64, opts ...AddOpt) (*ts.Tensor, error)

OrZeros returns the existing entry if found, otherwise creates a new variable.

func (*Entry) OrZerosNoTrain

func (e *Entry) OrZerosNoTrain(dims []int64, opts ...AddOpt) (*ts.Tensor, error)

OrZerosNoTrain returns the existing entry if found, otherwise creates a new variable.

type ExponentialLR added in v0.3.10

type ExponentialLR struct {
	// contains filtered or unexported fields
}

ExponentialLR decays the learning rate of each optimizer parameter group by gamma every epoch.

func NewExponentialLR added in v0.3.10

func NewExponentialLR(opt *Optimizer, gamma float64) *ExponentialLR

NewExponentialLR creates a new ExponentialLR.

func (*ExponentialLR) Build added in v0.3.10

func (e *ExponentialLR) Build() *LRScheduler

Build implements scheduler interface.

func (*ExponentialLR) SetLRs added in v0.3.10

func (e *ExponentialLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type ForwardTWith

type ForwardTWith func(*ts.Tensor, bool) *ts.Tensor

func (ForwardTWith) ForwardT

func (fw ForwardTWith) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

type ForwardWith

type ForwardWith func(*ts.Tensor) *ts.Tensor

ForwardWith is a handler function to implement Module interface for any (anonymous) function it wraps.

Ref. https://stackoverflow.com/a/42182987 NOTE: Specifically, `ForwardWith` is used to wrap an anonymous function as the input parameter of the Sequential `AddFn` method.

func (ForwardWith) Forward

func (fw ForwardWith) Forward(xs *ts.Tensor) *ts.Tensor

type Func

type Func struct {
	// contains filtered or unexported fields
}

func NewFunc

func NewFunc(fn func(*ts.Tensor) *ts.Tensor) (retVal Func)

func (Func) Forward

func (fn Func) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

Forward implements the Module interface for Func.

func (Func) ForwardT

func (fn Func) ForwardT(xs *ts.Tensor, train bool) (retVal *ts.Tensor)

ForwardT implements ModuleT for Func object as well.

NOTE: train param will not be used.

type FuncT

type FuncT struct {
	// contains filtered or unexported fields
}

func NewFuncT

func NewFuncT(fn func(*ts.Tensor, bool) *ts.Tensor) (retVal FuncT)

func (FuncT) ForwardT

func (fn FuncT) ForwardT(xs *ts.Tensor, train bool) (retVal *ts.Tensor)

ForwardT implements the ModuleT interface for FuncT.

type GRU

type GRU struct {
	// contains filtered or unexported fields
}

A Gated Recurrent Unit (GRU) layer.

https://en.wikipedia.org/wiki/Gated_recurrent_unit

func NewGRU

func NewGRU(vs *Path, inDim, hiddenDim int64, cfg *RNNConfig) (retVal *GRU)

NewGRU creates a new GRU layer.

func (*GRU) Seq

func (g *GRU) Seq(input *ts.Tensor) (*ts.Tensor, State)

func (*GRU) SeqInit

func (g *GRU) SeqInit(input *ts.Tensor, inState State) (*ts.Tensor, State)

func (*GRU) Step

func (g *GRU) Step(input *ts.Tensor, inState State) State

func (*GRU) ZeroState

func (g *GRU) ZeroState(batchDim int64) State

type GRUState

type GRUState struct {
	Tensor *ts.Tensor
}

GRUState is a GRU state. It contains a single tensor.

func (*GRUState) Value

func (gs *GRUState) Value() *ts.Tensor

type Identity added in v0.6.0

type Identity struct{}

func NewIdentity added in v0.6.0

func NewIdentity() *Identity

func (*Identity) Forward added in v0.6.0

func (m *Identity) Forward(x *ts.Tensor) *ts.Tensor

type Init

type Init interface {
	// creates a new tensor with specified initiation
	InitTensor(dims []int64, device gotch.Device, dtypeOpt ...gotch.DType) (retVal *ts.Tensor)

	// re-initializes (in-place) an existing tensor with the specified initiation
	Set(tensor *ts.Tensor)
}

type KaimingOption added in v0.8.0

type KaimingOption func(*KaimingOptions)

func WithKaimingMode added in v0.8.0

func WithKaimingMode(v string) KaimingOption

func WithKaimingNegativeSlope added in v0.8.0

func WithKaimingNegativeSlope(v float64) KaimingOption

func WithKaimingNonLinearity added in v0.8.0

func WithKaimingNonLinearity(v string) KaimingOption

type KaimingOptions added in v0.8.0

type KaimingOptions struct {
	NegativeSlope float64
	Mode          string
	NonLinearity  string
}

KaimingOptions defines options for Kaiming uniform initialization.

func DefaultKaimingOptions added in v0.8.0

func DefaultKaimingOptions() *KaimingOptions

func NewKaimingOptions added in v0.8.0

func NewKaimingOptions(opts ...KaimingOption) *KaimingOptions

type LRScheduler added in v0.3.10

type LRScheduler struct {
	// contains filtered or unexported fields
}

LRScheduler is a scheduler to update optimizer learning rates.

func NewLRScheduler added in v0.4.2

func NewLRScheduler(s scheduler) *LRScheduler

func (*LRScheduler) Step added in v0.3.10

func (s *LRScheduler) Step(opts ...SchedulerOption)

Step updates optimizer learning rate.

type LSTM

type LSTM struct {
	// contains filtered or unexported fields
}

A Long Short-Term Memory (LSTM) layer.

https://en.wikipedia.org/wiki/Long_short-term_memory

func NewLSTM

func NewLSTM(vs *Path, inDim, hiddenDim int64, cfg *RNNConfig) *LSTM

NewLSTM creates an LSTM layer.
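
A sketch of running a batch of sequences through the layer (VarStore setup and tensor factory assumed; per the RNN interface, the input is [batch_size, seq_len, features]):

	package main

	import (
		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
		"github.com/sugarme/gotch/ts"
	)

	func main() {
		vs := nn.NewVarStore(gotch.CPU)
		lstm := nn.NewLSTM(vs.Root(), 10, 20, nn.DefaultRNNConfig())

		// batch=4, seqLen=7, features=10.
		x := ts.MustRandn([]int64{4, 7, 10}, gotch.Float, gotch.CPU)

		// Seq starts from ZeroState and returns the per-step outputs
		// plus the final state.
		out, state := lstm.Seq(x)
		_, _ = out, state
	}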

func (*LSTM) Seq

func (l *LSTM) Seq(input *ts.Tensor) (*ts.Tensor, State)

func (*LSTM) SeqInit

func (l *LSTM) SeqInit(input *ts.Tensor, inState State) (*ts.Tensor, State)

func (*LSTM) Step

func (l *LSTM) Step(input *ts.Tensor, inState State) State

func (*LSTM) ZeroState

func (l *LSTM) ZeroState(batchDim int64) State

type LSTMState

type LSTMState struct {
	Tensor1 *ts.Tensor
	Tensor2 *ts.Tensor
}

LSTMState is the state for an LSTM network; it contains two tensors.

func (*LSTMState) C

func (ls *LSTMState) C() *ts.Tensor

The cell state vector.

func (*LSTMState) H

func (ls *LSTMState) H() *ts.Tensor

The hidden state vector, which is also the output of the LSTM.

type LambdaFn added in v0.3.10

type LambdaFn func(in interface{}) float64

type LambdaLR added in v0.3.10

type LambdaLR struct {
	// contains filtered or unexported fields
}

LambdaLR calculates a new learning rate for each parameter group by applying a Lambda function to the corresponding INITIAL learning rate.

func NewLambdaLR added in v0.3.10

func NewLambdaLR(opt *Optimizer, ldFns []LambdaFn) *LambdaLR

NewLambdaLR creates a new LambdaLR.

func (*LambdaLR) Build added in v0.3.10

func (l *LambdaLR) Build() *LRScheduler

Build implements scheduler interface.

func (*LambdaLR) SetLRs added in v0.3.10

func (l *LambdaLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type LayerNorm

type LayerNorm struct {
	Config          *LayerNormConfig
	Ws              *ts.Tensor // optional
	Bs              *ts.Tensor // optional
	NormalizedShape []int64
}

A layer-normalization layer.

func NewLayerNorm

func NewLayerNorm(vs *Path, normalizedShape []int64, config *LayerNormConfig) *LayerNorm

func (*LayerNorm) Forward

func (ln *LayerNorm) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

type LayerNormConfig

type LayerNormConfig struct {
	CudnnEnable       bool
	Eps               float64
	ElementwiseAffine bool
	WsInit            Init
	BsInit            Init
	WsName            string // Default="weight", can change to e.g., "gamma"
	BsName            string // Default="bias", can change to e.g., "beta"
}

Layer-normalization config.

func DefaultLayerNormConfig

func DefaultLayerNormConfig() *LayerNormConfig

type Linear

type Linear struct {
	Ws *ts.Tensor
	Bs *ts.Tensor
}

Linear is a linear fully-connected layer

func NewLinear

func NewLinear(vs *Path, inDim, outDim int64, c *LinearConfig) *Linear

NewLinear creates a new linear layer: y = x*w^T + b

- inDim: input dimension (x) [input features - columns]
- outDim: output dimension (y) [output features - columns]

NOTE: w will have shape {outDim, inDim}; b will have shape {outDim}.

func (*Linear) Forward

func (l *Linear) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

Forward proceeds the input through the linear layer.

NOTE:
- It assumes the input is a 2-dimensional matrix. For the matrix multiplication to work, the input must have the same number of columns as the `Ws` weights matrix (both equal `inDim`), since `Ws` is transposed before being multiplied with the input.
- The input should have shape {batch size, input features} (i.e., {batchSize, inDim}). The number of input features is `inDim` and the number of output features is `outDim` in the `LinearConfig` struct.

Example:

inDim := 3
outDim := 2
batchSize := 4
weights: 2x3
[ 1 1 1
	1 1 1 ]

input node: 4x3 (batchSize x inDim)
[ 1 1 1
  1 1 1
  1 1 1
	1 1 1 ]
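
A runnable version of the illustration (VarStore setup and tensor factory assumed):

	package main

	import (
		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
		"github.com/sugarme/gotch/ts"
	)

	func main() {
		vs := nn.NewVarStore(gotch.CPU)

		// inDim=3, outDim=2 as above.
		l := nn.NewLinear(vs.Root(), 3, 2, nn.DefaultLinearConfig())

		// Input {batchSize, inDim} = {4, 3}; output is {4, 2}.
		x := ts.MustOnes([]int64{4, 3}, gotch.Float, gotch.CPU)
		l.Forward(x).Print()
	}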

func (*Linear) ForwardT

func (l *Linear) ForwardT(xs *ts.Tensor, train bool) (retVal *ts.Tensor)

ForwardT implements ModuleT interface for Linear layer.

NOTE: train param will not be used.

type LinearConfig

type LinearConfig struct {
	WsInit Init // initial weights
	BsInit Init // optional initial bias
	Bias   bool
}

LinearConfig is a configuration for a linear layer

func DefaultLinearConfig

func DefaultLinearConfig() *LinearConfig

DefaultLinearConfig creates a default LinearConfig, with weights initialized using KaimingUniform and Bias set to true.

type LossFnOption added in v0.3.14

type LossFnOption func(*lossFnOptions)

func WithLossFnIgnoreIndex added in v0.3.14

func WithLossFnIgnoreIndex(val int64) LossFnOption

func WithLossFnPosWeight added in v0.3.14

func WithLossFnPosWeight(val int64) LossFnOption

func WithLossFnReduction added in v0.3.14

func WithLossFnReduction(val int64) LossFnOption

func WithLossFnWeights added in v0.3.14

func WithLossFnWeights(vals []float64) LossFnOption

type MaxPool2D added in v0.6.0

type MaxPool2D struct {
	Kernel   []int64
	Stride   []int64
	Padding  []int64
	Dilation []int64
	CeilMode bool
}

func NewMaxPool2D added in v0.6.0

func NewMaxPool2D(kernelSize []int64, opts ...MaxPool2DOpt) *MaxPool2D

func (*MaxPool2D) Forward added in v0.6.0

func (m *MaxPool2D) Forward(x *ts.Tensor) *ts.Tensor

type MaxPool2DOpt added in v0.6.0

type MaxPool2DOpt func(*MaxPool2DOpts)

func OptCeilModeMp2D added in v0.6.0

func OptCeilModeMp2D(v bool) MaxPool2DOpt

func OptDilationMp2D added in v0.6.0

func OptDilationMp2D(v []int64) MaxPool2DOpt

func OptPaddingMp2D added in v0.6.0

func OptPaddingMp2D(v []int64) MaxPool2DOpt

func OptStrideMp2D added in v0.6.0

func OptStrideMp2D(v []int64) MaxPool2DOpt

type MaxPool2DOpts added in v0.6.0

type MaxPool2DOpts struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	CeilMode bool
}

func DefaultMaxPool2DOpts added in v0.6.0

func DefaultMaxPool2DOpts() *MaxPool2DOpts

type MultiStepLR added in v0.3.10

type MultiStepLR struct {
	// contains filtered or unexported fields
}

MultiStepLR decays the learning rate of each optimizer parameter group by gamma once the number of epochs reaches one of the milestones.

NOTE. Such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.

func NewMultiStepLR added in v0.3.10

func NewMultiStepLR(opt *Optimizer, milestones []int, gamma float64) *MultiStepLR

NewMultiStepLR creates a new MultiStepLR.

func (*MultiStepLR) Build added in v0.3.10

func (ms *MultiStepLR) Build() *LRScheduler

Build implements scheduler interface.

func (*MultiStepLR) SetLRs added in v0.3.10

func (ms *MultiStepLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type MultiplicativeLR added in v0.3.10

type MultiplicativeLR struct {
	// contains filtered or unexported fields
}

MultiplicativeLR calculates new learning rates for each optimizer parameter group by applying the corresponding Lambda function to the CURRENT learning rate.

func NewMultiplicativeLR added in v0.3.10

func NewMultiplicativeLR(opt *Optimizer, ldFns []LambdaFn) *MultiplicativeLR

NewMultiplicativeLR creates a new MultiplicativeLR.

func (*MultiplicativeLR) Build added in v0.3.10

func (m *MultiplicativeLR) Build() *LRScheduler

Build implements scheduler interface.

func (*MultiplicativeLR) SetLRs added in v0.3.10

func (m *MultiplicativeLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type OneCycleLR added in v0.3.11

type OneCycleLR struct {
	// contains filtered or unexported fields
}

OneCycleLR sets the learning rate of each parameter group according to the 1cycle learning rate policy. The 1cycle policy anneals the learning rate from an initial learning rate to some maximum learning rate and then from that maximum learning rate to some minimum learning rate much lower than the initial learning rate.

This policy was initially described in the paper `Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates`. The 1cycle learning rate policy changes the learning rate after every batch. `Step()` should be called after a batch has been used for training. This scheduler is not chainable.

Note also that the total number of steps in the cycle can be determined in one of two ways (listed in order of precedence):

- A value for total_steps is explicitly provided.
- A number of epochs (epochs) and a number of steps per epoch (steps_per_epoch) are provided. In this case, the total number of steps is inferred by total_steps = epochs * steps_per_epoch.

You must either provide a value for total_steps or provide values for both epochs and steps_per_epoch.

Source: Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates https://arxiv.org/abs/1708.07120

func NewOneCycleLR added in v0.3.11

func NewOneCycleLR(opt *Optimizer, maxLR float64, opts ...OneCycleOption) *OneCycleLR
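
A sketch providing epochs and steps per epoch, from which the total steps are inferred (optimizer setup assumed, as in the earlier examples):

	package example

	import "github.com/sugarme/gotch/nn"

	// oneCycle attaches a 1cycle policy; total steps = 5 * 100 here.
	// Step() must be called after every batch, since the policy changes
	// the learning rate per batch.
	func oneCycle(opt *nn.Optimizer) *nn.LRScheduler {
		return nn.NewOneCycleLR(opt, 0.01,
			nn.WithOneCycleEpochs(5),
			nn.WithOneCycleStepsPerEpoch(100),
		).Build()
	}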

func (*OneCycleLR) Build added in v0.3.11

func (oc *OneCycleLR) Build() *LRScheduler

func (*OneCycleLR) SetLRs added in v0.3.11

func (oc *OneCycleLR) SetLRs(opts ...SchedulerOption)

type OneCycleOption added in v0.3.11

type OneCycleOption func(*OneCycleOptions)

func WithOneCycleAnnealStrategy added in v0.3.11

func WithOneCycleAnnealStrategy(v string) OneCycleOption

func WithOneCycleBaseMomentum added in v0.3.11

func WithOneCycleBaseMomentum(v float64) OneCycleOption

func WithOneCycleCycleMomentum added in v0.3.11

func WithOneCycleCycleMomentum(v bool) OneCycleOption

func WithOneCycleDivFactor added in v0.3.11

func WithOneCycleDivFactor(v float64) OneCycleOption

func WithOneCycleEpochs added in v0.3.11

func WithOneCycleEpochs(v int) OneCycleOption

func WithOneCycleFinalDivFactor added in v0.3.11

func WithOneCycleFinalDivFactor(v float64) OneCycleOption

func WithOneCycleLastEpoch added in v0.3.11

func WithOneCycleLastEpoch(v int) OneCycleOption

func WithOneCycleMaxMomentum added in v0.3.11

func WithOneCycleMaxMomentum(v float64) OneCycleOption

func WithOneCyclePctStart added in v0.3.11

func WithOneCyclePctStart(v float64) OneCycleOption

func WithOneCycleStepsPerEpoch added in v0.3.11

func WithOneCycleStepsPerEpoch(v int) OneCycleOption

func WithOneCycleTotalSteps added in v0.3.11

func WithOneCycleTotalSteps(v int) OneCycleOption

type OneCycleOptions added in v0.3.11

type OneCycleOptions struct {
	TotalSteps     int
	Epochs         int
	StepsPerEpoch  int
	PctStart       float64
	AnnealStrategy string
	CycleMomentum  bool
	BaseMomentum   float64
	MaxMomentum    float64
	DivFactor      float64
	FinalDivFactor float64
	LastEpoch      int
}

type Optimizer

type Optimizer struct {
	// contains filtered or unexported fields
}

Optimizer is a struct object to run gradient descent.

func (*Optimizer) AddParamGroup added in v0.3.10

func (opt *Optimizer) AddParamGroup(tensors []*ts.Tensor)

func (*Optimizer) BackwardStep

func (opt *Optimizer) BackwardStep(loss *ts.Tensor) error

BackwardStep applies a backward pass, updates the gradients, and performs an optimization step.

func (*Optimizer) BackwardStepClip

func (opt *Optimizer) BackwardStepClip(loss *ts.Tensor, max float64) error

BackwardStepClip applies a backward pass, updates the gradients, and performs an optimization step.

The gradients are clipped based on `max` before being applied.

func (*Optimizer) BackwardStepClipNorm added in v0.3.10

func (opt *Optimizer) BackwardStepClipNorm(loss *ts.Tensor, max float64, opts ...ClipOpt) error

BackwardStepClipNorm applies a backward pass, updates the gradients, and performs an optimization step.

The gradient L2 norm is clipped based on `max`.

func (*Optimizer) ClipGradNorm added in v0.3.10

func (opt *Optimizer) ClipGradNorm(max float64, opts ...ClipOpt) error

ClipGradNorm clips the gradient L2 norm over all trainable parameters.

The norm is computed over all gradients together, as if they were concatenated into a single vector.

Args:
- max: max norm of the gradients
- o.NormType: type of the used p-norm; can be "inf" for infinity norm. Default=2.0
- o.ErrorIfNonFinite: if true, returns an error if the total norm of the gradients from the parameters is "nan", "inf" or "-inf". Default=false

Unlike the PyTorch reference, which returns the total norm of the parameters (viewed as a single vector), this method returns only an error. Ref. https://github.com/pytorch/pytorch/blob/cb4aeff7d8e4c70bb638cf159878c5204d0cc2da/torch/nn/utils/clip_grad.py#L59
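
A sketch of an explicit backward/clip/step sequence (loss.MustBackward is assumed from the ts package; BackwardStepClipNorm above bundles the same steps):

	package example

	import (
		"log"

		"github.com/sugarme/gotch/nn"
		"github.com/sugarme/gotch/ts"
	)

	// clippedStep backpropagates loss, clips the global grad norm, then steps.
	func clippedStep(opt *nn.Optimizer, loss *ts.Tensor) {
		opt.MustZeroGrad()
		loss.MustBackward()
		err := opt.ClipGradNorm(5.0, nn.WithNormType(2.0), nn.WithErrorIfNonFinite(true))
		if err != nil {
			log.Fatal(err)
		}
		opt.MustStep()
	}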

func (*Optimizer) ClipGradValue

func (opt *Optimizer) ClipGradValue(max float64)

ClipGradValue clips gradient values at some specified maximum value.

func (*Optimizer) GetLRs added in v0.3.10

func (opt *Optimizer) GetLRs() []float64

func (*Optimizer) MustBackwardStep added in v0.6.2

func (opt *Optimizer) MustBackwardStep(loss *ts.Tensor)

MustBackwardStep applies a backward pass, updates the gradients, and performs an optimization step.

func (*Optimizer) MustBackwardStepClip added in v0.6.2

func (opt *Optimizer) MustBackwardStepClip(loss *ts.Tensor, max float64)

MustBackwardStepClip applies a backward pass, updates the gradients, and performs an optimization step.

The gradients are clipped based on `max` before being applied.

func (*Optimizer) MustBackwardStepClipNorm added in v0.6.2

func (opt *Optimizer) MustBackwardStepClipNorm(loss *ts.Tensor, max float64, opts ...ClipOpt)

MustBackwardStepClipNorm applies a backward pass, updates the gradients, and performs an optimization step.

The gradient L2 norm is clipped based on `max`.

func (*Optimizer) MustStep added in v0.6.2

func (opt *Optimizer) MustStep()

MustStep performs an optimization step, updating the tracked tensors based on their gradients.

func (*Optimizer) MustZeroGrad added in v0.6.2

func (opt *Optimizer) MustZeroGrad()

MustZeroGrad zeroes the gradient for the tensors tracked by this optimizer.

func (*Optimizer) ParamGroupNum added in v0.3.10

func (opt *Optimizer) ParamGroupNum() int

func (*Optimizer) ResetStepCount added in v0.3.10

func (opt *Optimizer) ResetStepCount()

ResetStepCount sets the step count to zero.

func (*Optimizer) SetLR

func (opt *Optimizer) SetLR(lr float64)

SetLR sets the optimizer learning rate.

NOTE. it sets a SINGLE value of learning rate for all parameter groups. Most of the time, there's one parameter group.

func (*Optimizer) SetLRs added in v0.3.10

func (opt *Optimizer) SetLRs(lrs []float64)

SetLRs sets learning rates for ALL parameter groups respectively.

func (*Optimizer) SetMomentum

func (opt *Optimizer) SetMomentum(m float64)

SetMomentum sets the optimizer momentum.

func (*Optimizer) Step

func (opt *Optimizer) Step() error

Step performs an optimization step, updating the tracked tensors based on their gradients.

func (*Optimizer) StepCount added in v0.3.10

func (opt *Optimizer) StepCount() int

StepCount gets the current step count.

func (*Optimizer) ZeroGrad

func (opt *Optimizer) ZeroGrad() error

ZeroGrad zeroes the gradient for the tensors tracked by this optimizer.

type OptimizerConfig

type OptimizerConfig interface {

	// Build builds an optimizer with the specified learning rate handling variables stored in `vs`.
	//
	// NOTE: Build is a 'default' method. It can be called by wrapping
	// 'DefaultBuild' function
	// E.g. AdamOptimizerConfig struct has a method to fulfill the `Build` method of
	// OptimizerConfig by wrapping `DefaultBuild` like
	// (config AdamOptimizerConfig) Build(vs VarStore, lr float64) (retVal Optimizer, err error){
	//		return defaultBuild(config, vs, lr)
	// }
	Build(vs *VarStore, lr float64) (*Optimizer, error)
	// contains filtered or unexported methods
}

OptimizerConfig defines Optimizer configurations. These configs can be used to build an optimizer.

type Path

type Path struct {
	// contains filtered or unexported fields
}

Path is a variable store with an associated path used for naming variables.

func (*Path) Add added in v0.3.7

func (p *Path) Add(name string, x *ts.Tensor, trainable bool, opts ...AddOpt) (*ts.Tensor, error)

Add adds a tensor to a given path.

Args:
- name: intended name of the variable in the VarStore (if duplicated, a number suffix will be added)
- x: tensor holding values to keep in the VarStore
- trainable: marks whether the tensor is trainable
- o.VarType: variable type, i.e., either "parameter" or "buffer"
- o.Persistent: whether to save this variable when `VarStore.Save()` is called (only applied to the `buffer` type)

Returns a reference to the tensor stored in the VarStore, and an error if one occurred.
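
For example, sub-paths compose the stored variable name using SEP (VarStore setup, the tensor factory, and MustSize are assumptions outside this listing):

	package main

	import (
		"fmt"

		"github.com/sugarme/gotch"
		"github.com/sugarme/gotch/nn"
		"github.com/sugarme/gotch/ts"
	)

	func main() {
		vs := nn.NewVarStore(gotch.CPU)

		// The variable below is stored under the name "block1.weight".
		block1 := vs.Root().Sub("block1")
		w, err := block1.Add("weight",
			ts.MustZeros([]int64{16, 8}, gotch.Float, gotch.CPU),
			true, // trainable
			nn.WithVarType("parameter"))
		if err != nil {
			panic(err)
		}
		fmt.Println(w.MustSize()) // [16 8]
	}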

func (*Path) Device

func (p *Path) Device() gotch.Device

Device gets the device where the VarStore variables are stored.

func (*Path) Entry

func (p *Path) Entry(name string) *Entry

Entry gets the entry corresponding to a given name for in-place manipulation.

func (*Path) Get

func (p *Path) Get(name string) (*ts.Tensor, error)

Get gets a reference to tensor corresponding to a given name if present.

func (*Path) KaimingUniform

func (p *Path) KaimingUniform(name string, dims []int64, opts ...AddOpt) (*ts.Tensor, error)

KaimingUniform creates a new variable initialized randomly with kaiming uniform.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a uniform distribution whose bounds follow Kaiming initialization.

func (*Path) MustAdd added in v0.6.2

func (p *Path) MustAdd(name string, x *ts.Tensor, trainable bool, opts ...AddOpt) *ts.Tensor

MustAdd adds a tensor to a given path.

Args:
- name: intended name of the variable in the VarStore (if duplicated, a number suffix will be added)
- x: tensor holding values to keep in the VarStore
- trainable: marks whether the tensor is trainable
- o.VarType: variable type, i.e., either "parameter" or "buffer"
- o.Persistent: whether to save this variable when `VarStore.Save()` is called (only applied to the `buffer` type)

Returns a reference to the tensor stored in the VarStore.

func (*Path) MustGet added in v0.6.2

func (p *Path) MustGet(name string) *ts.Tensor

MustGet gets a reference to a tensor corresponding to a given name if present. It panics if error occurred.

func (*Path) MustKaimingUniform added in v0.6.2

func (p *Path) MustKaimingUniform(name string, dims []int64, opts ...AddOpt) *ts.Tensor

MustKaimingUniform creates a new variable initialized randomly with kaiming uniform. It panics if an error occurred.

func (*Path) MustNewVar added in v0.6.2

func (p *Path) MustNewVar(name string, dims []int64, ini Init, opts ...AddOpt) *ts.Tensor

MustNewVar creates a new variable. It panics on error.

func (*Path) MustOnes added in v0.6.2

func (p *Path) MustOnes(name string, dims []int64, opts ...AddOpt) *ts.Tensor

MustOnes creates a new variable initialized with ones. It panics if error occurred.

func (*Path) MustOnesNoTrain added in v0.6.2

func (p *Path) MustOnesNoTrain(name string, dims []int64, opts ...AddOpt) *ts.Tensor

MustOnesNoTrain creates a new variable initialized with ones.

The new variable is named according to the name parameter and has the specified shape. The variable will not be trainable so gradients will not be tracked. The variable uses a float tensor initialized with ones.

func (*Path) MustRandn added in v0.6.2

func (p *Path) MustRandn(name string, dims []int64, mean float64, stdev float64, opts ...AddOpt) *ts.Tensor

MustRandn creates a new variable initialized randomly with normal distribution. It panics if error occurred.

func (*Path) MustRandnStandard added in v0.6.2

func (p *Path) MustRandnStandard(name string, dims []int64, opts ...AddOpt) *ts.Tensor

MustRandnStandard creates a new variable initialized randomly with normal distribution. It panics if error occurred.

func (*Path) MustRemove added in v0.7.0

func (p *Path) MustRemove(name string)

MustRemove removes a variable from `VarStore`

func (*Path) MustUniform added in v0.6.2

func (p *Path) MustUniform(name string, dims []int64, lo, up float64, opts ...AddOpt) *ts.Tensor

MustUniform creates a new variable initialized randomly with uniform distribution. It panics if error occurred.

func (*Path) MustVarCopy added in v0.6.2

func (p *Path) MustVarCopy(name string, t *ts.Tensor) *ts.Tensor

MustVarCopy creates a new variable initialized by copying an existing tensor.

func (*Path) MustZeros added in v0.6.2

func (p *Path) MustZeros(name string, dims []int64, opts ...AddOpt) *ts.Tensor

MustZeros creates a new variable with zero values. It panics on error.

func (*Path) MustZerosNoTrain added in v0.6.2

func (p *Path) MustZerosNoTrain(name string, dims []int64, opts ...AddOpt) *ts.Tensor

MustZerosNoTrain creates a new variable initialized with zeros.

The new variable is named according to the name parameter and has the specified shape. The variable will not be trainable so gradients will not be tracked. The variable uses a float tensor initialized with zeros.

func (*Path) NewVar

func (p *Path) NewVar(name string, dims []int64, ini Init, opts ...AddOpt) (*ts.Tensor, error)

NewVar creates a new variable.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized as per the related argument.

func (*Path) Ones

func (p *Path) Ones(name string, dims []int64, opts ...AddOpt) (*ts.Tensor, error)

Ones creates a new variable initialized with ones.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized with ones.

func (*Path) OnesNoTrain

func (p *Path) OnesNoTrain(name string, dims []int64, opts ...AddOpt) (*ts.Tensor, error)

OnesNoTrain creates a new variable initialized with ones.

The new variable is named according to the name parameter and has the specified shape. The variable will not be trainable so gradients will not be tracked. The variable uses a float tensor initialized with ones.

func (*Path) Paths added in v0.6.0

func (p *Path) Paths() []string

Paths returns all sub-paths of the current path.

func (*Path) Randn

func (p *Path) Randn(name string, dims []int64, mean float64, stdev float64, opts ...AddOpt) (*ts.Tensor, error)

Randn creates a new variable initialized randomly with normal distribution.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a normal distribution with the specified mean and standard deviation.

func (*Path) RandnStandard

func (p *Path) RandnStandard(name string, dims []int64, opts ...AddOpt) (*ts.Tensor, error)

RandnStandard creates a new variable initialized randomly with normal distribution.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a standard normal distribution.

func (*Path) Remove added in v0.7.0

func (p *Path) Remove(name string) error

Remove removes a variable from `VarStore`

func (*Path) SetGroup added in v0.3.10

func (p *Path) SetGroup(g uint)

func (*Path) Sub

func (p *Path) Sub(str string) *Path

Sub gets a sub-path of the given path.

func (*Path) ToBFloat16 added in v0.8.0

func (p *Path) ToBFloat16()

ToBFloat16 converts all variables in the current path and sub-paths to the `BFloat16` dtype.

func (*Path) ToDType added in v0.6.2

func (p *Path) ToDType(dtype gotch.DType)

ToDType casts all variables in this path and its sub-paths to the specified dtype.

NOTE. this method should be used for floating-point conversion, i.e., "gotch.Float", "gotch.Half", "gotch.BFloat16", "gotch.Double".

func (*Path) ToDevice added in v0.8.0

func (p *Path) ToDevice(device gotch.Device)

func (*Path) ToDouble added in v0.6.2

func (p *Path) ToDouble()

ToDouble casts all variables in current path and subpaths to `Double` precision dtype.

func (*Path) ToFloat added in v0.6.2

func (p *Path) ToFloat(floatDTypeOpt ...gotch.DType)

ToFloat casts all variables in current path and subpaths to `Float` precision.

func (*Path) ToHalf added in v0.6.2

func (p *Path) ToHalf()

ToHalf casts all variables in current path and subpaths to `Half` precision dtype.

func (*Path) Uniform

func (p *Path) Uniform(name string, dims []int64, lo, up float64, opts ...AddOpt) (*ts.Tensor, error)

Uniform creates a new variable initialized randomly with uniform distribution.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a uniform distribution between the specified bounds.

func (*Path) VarCopy

func (p *Path) VarCopy(name string, t *ts.Tensor) (*ts.Tensor, error)

VarCopy creates a new variable initialized by copying an existing tensor.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized by copying some given tensor.

func (*Path) Zeros

func (p *Path) Zeros(name string, dims []int64, opts ...AddOpt) (*ts.Tensor, error)

Zeros creates a new variable initialized with zeros.

The new variable is named according to the name parameter and has the specified shape. The variable is trainable; its gradient will be tracked. The variable uses a float tensor initialized with zeros.

func (*Path) ZerosNoTrain

func (p *Path) ZerosNoTrain(name string, dims []int64, opts ...AddOpt) (*ts.Tensor, error)

ZerosNoTrain creates a new variable initialized with zeros.

The new variable is named according to the name parameter and has the specified shape. The variable will not be trainable, so gradients will not be tracked. The variable uses a float tensor initialized with zeros.

type RMSPropConfig

type RMSPropConfig struct {
	Alpha    float64
	Eps      float64
	Wd       float64
	Momentum float64
	Centered bool
}

func DefaultRMSPropConfig

func DefaultRMSPropConfig() *RMSPropConfig

DefaultRMSPropConfig creates RMSPropConfig with default values

func NewRMSPropConfig

func NewRMSPropConfig(alpha, eps, wd, momentum float64, centered bool) *RMSPropConfig

NewRMSPropConfig creates RMSPropConfig with specified values

func (*RMSPropConfig) Build

func (c *RMSPropConfig) Build(vs *VarStore, lr float64) (*Optimizer, error)
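
Build constructs an Optimizer from this config over the trainable variables of the given VarStore. A minimal sketch (the learning rate is a placeholder):

	// vs is an *nn.VarStore holding the model's trainable variables.
	opt, err := nn.DefaultRMSPropConfig().Build(vs, 1e-3)
	if err != nil {
		log.Fatal(err)
	}
	_ = opt // step the optimizer during training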

type RNN

type RNN interface {

	// A zero state from which the recurrent network is usually initialized.
	ZeroState(batchDim int64) State

	// Applies a single step of the recurrent network.
	//
	// The input should have dimensions [batch_size, features].
	Step(input *ts.Tensor, inState State) State

	// Applies multiple steps of the recurrent network.
	//
	// The input should have dimensions [batch_size, seq_len, features].
	// The initial state is the result of applying zero_state.
	Seq(input *ts.Tensor) (*ts.Tensor, State)

	// Applies multiple steps of the recurrent network.
	//
	// The input should have dimensions [batch_size, seq_len, features].
	SeqInit(input *ts.Tensor, inState State) (*ts.Tensor, State)
}

type RNNConfig

type RNNConfig struct {
	HasBiases     bool
	NumLayers     int64
	Dropout       float64
	Train         bool
	Bidirectional bool
	BatchFirst    bool
}

RNNConfig is the configuration for the GRU and LSTM layers; both layer types share the same config.

func DefaultRNNConfig

func DefaultRNNConfig() *RNNConfig

DefaultRNNConfig creates a default RNN configuration.
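
For illustration, a two-layer bidirectional setup; nn.NewLSTM and its signature are assumptions about this package's LSTM constructor:

	cfg := nn.DefaultRNNConfig()
	cfg.NumLayers = 2
	cfg.Bidirectional = true
	cfg.Dropout = 0.2
	// Assumed constructor: NewLSTM(path, inputDim, hiddenDim, cfg).
	lstm := nn.NewLSTM(vs.Root(), 128, 256, cfg)
	// input is a *ts.Tensor of shape [batch_size, seq_len, 128].
	out, state := lstm.Seq(input)
	_, _ = out, state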

type ReduceLROnPlateau added in v0.3.11

type ReduceLROnPlateau struct {
	// contains filtered or unexported fields
}

ReduceLROnPlateau reduces the learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This scheduler reads a metric quantity and, if no improvement is seen for a 'patience' number of epochs, reduces the learning rate.

func NewReduceLROnPlateau added in v0.3.11

func NewReduceLROnPlateau(opt *Optimizer, opts ...ReduceLROnPlateauOption) *ReduceLROnPlateau

func (*ReduceLROnPlateau) Build added in v0.3.11

func (s *ReduceLROnPlateau) Build() *LRScheduler

Build implements scheduler interface.

func (*ReduceLROnPlateau) SetLRs added in v0.3.11

func (s *ReduceLROnPlateau) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.

type ReduceLROnPlateauOption added in v0.3.11

type ReduceLROnPlateauOption func(*ReduceLROnPlateauOptions)

func WithReduceOnPlateauCooldown added in v0.3.11

func WithReduceOnPlateauCooldown(cooldown int) ReduceLROnPlateauOption

func WithReduceOnPlateauEps added in v0.3.11

func WithReduceOnPlateauEps(eps float64) ReduceLROnPlateauOption

func WithReduceOnPlateauFactor added in v0.3.11

func WithReduceOnPlateauFactor(factor float64) ReduceLROnPlateauOption

func WithReduceOnPlateauMinLRs added in v0.3.11

func WithReduceOnPlateauMinLRs(minLRs []float64) ReduceLROnPlateauOption

func WithReduceOnPlateauMode added in v0.3.11

func WithReduceOnPlateauMode(mode string) ReduceLROnPlateauOption

func WithReduceOnPlateauPatience added in v0.3.11

func WithReduceOnPlateauPatience(patience int) ReduceLROnPlateauOption

func WithReduceOnPlateauThreshold added in v0.3.11

func WithReduceOnPlateauThreshold(threshold float64) ReduceLROnPlateauOption

func WithReduceOnPlateauThresholdMode added in v0.3.11

func WithReduceOnPlateauThresholdMode(thresholdMode string) ReduceLROnPlateauOption

func WithReduceOnPlateauVerbose added in v0.3.11

func WithReduceOnPlateauVerbose(verbose bool) ReduceLROnPlateauOption

type ReduceLROnPlateauOptions added in v0.3.11

type ReduceLROnPlateauOptions struct {
	Mode          string
	Factor        float64
	Patience      int
	Verbose       bool
	Threshold     float64
	ThresholdMode string
	MinLRs        []float64
	Cooldown      int
	Eps           float64
}
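
Putting the options together, a sketch of a validation-driven schedule (opt, nepochs, and validate are placeholders):

	sched := nn.NewReduceLROnPlateau(opt,
		nn.WithReduceOnPlateauMode("min"),
		nn.WithReduceOnPlateauFactor(0.1),
		nn.WithReduceOnPlateauPatience(10),
	)
	for epoch := 0; epoch < nepochs; epoch++ {
		valLoss := validate() // placeholder: returns float64
		// Feed the metric in; the LR is reduced after `patience`
		// epochs without improvement.
		sched.SetLRs(nn.WithLoss(valLoss))
	}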

type SGDConfig

type SGDConfig struct {
	Momentum  float64
	Dampening float64
	Wd        float64
	Nesterov  bool
}

SGDConfig holds parameters for building the SGD (Stochastic Gradient Descent) optimizer.

func DefaultSGDConfig

func DefaultSGDConfig() *SGDConfig

DefaultSGDConfig creates SGDConfig with default values.

func NewSGDConfig

func NewSGDConfig(momentum, dampening, wd float64, nesterov bool) *SGDConfig

NewSGDConfig creates the configuration for an SGD optimizer with the specified values

func (*SGDConfig) Build

func (c *SGDConfig) Build(vs *VarStore, lr float64) (*Optimizer, error)
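
A minimal training-step sketch; Optimizer.BackwardStep is an assumption about this package's optimizer API, and vs, logits, and target are placeholders:

	cfg := nn.NewSGDConfig(0.9, 0.0, 1e-4, true) // momentum, dampening, wd, nesterov
	opt, err := cfg.Build(vs, 0.01)
	if err != nil {
		log.Fatal(err)
	}
	loss := nn.CrossEntropyLoss(logits, target)
	// Assumed helper: zeroes grads, runs backward, applies one step.
	opt.BackwardStep(loss)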

type SchedulerOption added in v0.3.11

type SchedulerOption func(*SchedulerOptions)

func WithLastEpoch added in v0.3.11

func WithLastEpoch(epoch int) SchedulerOption

func WithLoss added in v0.3.11

func WithLoss(loss float64) SchedulerOption

type SchedulerOptions added in v0.3.11

type SchedulerOptions struct {
	// Metrics   map[string]interface{}
	Loss      float64 // Usually metrics is loss value
	LastEpoch int
}

func DefaultSchedulerOptions added in v0.4.3

func DefaultSchedulerOptions() *SchedulerOptions

type Sequential

type Sequential struct {
	// contains filtered or unexported fields
}

Sequential is a layer (container) that combines multiple other layers.

func Seq

func Seq() *Sequential

Seq creates a new empty sequential layer

func (*Sequential) Add

func (s *Sequential) Add(l ts.Module)

Add appends a layer after all the current layers.

func (*Sequential) AddFn

func (s *Sequential) AddFn(fn ts.Module)

AddFn appends a closure after all the current layers.

NOTE: fn should have the signature `func(t ts.Tensor) ts.Tensor`; it implements the Module interface.

func (*Sequential) Forward

func (s *Sequential) Forward(xs *ts.Tensor) (retVal *ts.Tensor)

Forward implements Module interface for Sequential

func (*Sequential) ForwardAll

func (s *Sequential) ForwardAll(xs *ts.Tensor, opts ...uint8) (retVal []*ts.Tensor)

ForwardAll applies the forward pass and returns the output for each layer.

func (*Sequential) IsEmpty

func (s *Sequential) IsEmpty() (retVal bool)

IsEmpty returns true if this layer does not have any sub-layers.

func (*Sequential) Len

func (s *Sequential) Len() (retVal int64)

Len returns the number of sub-layers embedded in this layer
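
A sketch of composing a small MLP; nn.NewLinear, nn.DefaultLinearConfig, nn.NewFunc, and the tensor method names are assumptions about constructors defined elsewhere in this package:

	// vs is an *nn.VarStore; input is a *ts.Tensor of shape [batch_size, 784].
	seq := nn.Seq()
	seq.Add(nn.NewLinear(vs.Root(), 784, 128, nn.DefaultLinearConfig()))
	seq.AddFn(nn.NewFunc(func(xs *ts.Tensor) *ts.Tensor {
		return xs.MustRelu(false)
	}))
	seq.Add(nn.NewLinear(vs.Root(), 128, 10, nn.DefaultLinearConfig()))
	out := seq.Forward(input)
	_ = out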

type SequentialT

type SequentialT struct {
	// contains filtered or unexported fields
}

SequentialT is a sequential layer combining new layers with support for a training mode.

func SeqT

func SeqT() *SequentialT

SeqT creates a new empty sequential layer.

func (*SequentialT) Add

func (s *SequentialT) Add(l ts.ModuleT)

Add appends a layer after all the current layers.

func (*SequentialT) AddFn

func (s *SequentialT) AddFn(fn ts.ModuleT)

AddFn appends a closure after all the current layers.

NOTE: fn should have the signature `func(t ts.Tensor) ts.Tensor`; it implements the ModuleT interface.

func (*SequentialT) AddFnT

func (s *SequentialT) AddFnT(fn ts.ModuleT)

AddFnT appends a closure after all the current layers.

NOTE: fn should have the signature `func(t ts.Tensor, train bool) ts.Tensor`; it implements the ModuleT interface.

func (*SequentialT) ForwardAllT

func (s *SequentialT) ForwardAllT(xs *ts.Tensor, train bool, opts ...uint8) (retVal []*ts.Tensor)

ForwardAllT applies the forward pass and returns the output for each layer.

func (*SequentialT) ForwardT

func (s *SequentialT) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor

func (*SequentialT) IsEmpty

func (s *SequentialT) IsEmpty() (retVal bool)

IsEmpty returns true if this layer does not have any sub-layers.

func (*SequentialT) Len

func (s *SequentialT) Len() (retVal int64)

Len returns the number of sub-layers embedded in this layer
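
A sketch mirroring Sequential but threading the train flag through each layer; nn.NewFuncT and the dropout helper are assumptions:

	// vs is an *nn.VarStore; input is a *ts.Tensor of shape [batch_size, 784].
	seqT := nn.SeqT()
	seqT.Add(nn.NewLinear(vs.Root(), 784, 128, nn.DefaultLinearConfig()))
	seqT.AddFnT(nn.NewFuncT(func(xs *ts.Tensor, train bool) *ts.Tensor {
		// Assumed dropout helper; active only when train=true.
		return ts.MustDropout(xs, 0.5, train)
	}))
	logits := seqT.ForwardT(input, true) // train=true enables dropout
	_ = logits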

type State

type State interface{}

type StepLR added in v0.3.10

type StepLR struct {
	// contains filtered or unexported fields
}

StepLR decays the learning rate of each optimizer parameter group by gamma every stepSize epochs.

NOTE. Such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.

func NewStepLR added in v0.3.10

func NewStepLR(opt *Optimizer, stepSize int, gamma float64) *StepLR

NewStepLR creates a new StepLR.

func (*StepLR) Build added in v0.3.10

func (s *StepLR) Build() *LRScheduler

Build implements scheduler interface.

func (*StepLR) SetLRs added in v0.3.10

func (s *StepLR) SetLRs(opts ...SchedulerOption)

SetLRs implements scheduler interface.
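
A sketch of epoch-wise decay; driving the schedule through SetLRs with WithLastEpoch is an assumption about the scheduler's intended use, and opt, nepochs, and trainOneEpoch are placeholders:

	sched := nn.NewStepLR(opt, 30, 0.1) // decay LRs by 10x every 30 epochs
	for epoch := 0; epoch < nepochs; epoch++ {
		trainOneEpoch() // placeholder
		sched.SetLRs(nn.WithLastEpoch(epoch))
	}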

type TrainableCModule added in v0.3.7

type TrainableCModule struct {
	Inner *ts.CModule
}

TrainableCModule is a trainable version of a JIT PyTorch module.

These modules can be created via the TorchScript Python API. See: https://pytorch.org/docs/stable/jit.html

func TrainableCModuleLoad added in v0.3.7

func TrainableCModuleLoad(p *Path, file string) (*TrainableCModule, error)

TrainableCModuleLoad loads a saved PyTorch JIT module from a file and adds its tensors (weights) to the `VarStore` so that the module can be trained.

func TrainableCModuleLoadData added in v0.3.7

func TrainableCModuleLoadData(p *Path, stream io.Reader) (*TrainableCModule, error)

func (*TrainableCModule) ForwardT added in v0.3.7

func (m *TrainableCModule) ForwardT(x *ts.Tensor, train bool) *ts.Tensor

ForwardT implements ModuleT for TrainableCModule. NOTE: the train parameter is not used.

func (*TrainableCModule) Save added in v0.3.7

func (m *TrainableCModule) Save(file string) error

Save saves TrainableCModule to specified file.

func (*TrainableCModule) SetEval added in v0.3.7

func (m *TrainableCModule) SetEval()

SetEval sets the TrainableCModule to inference (eval) mode

func (*TrainableCModule) SetTrain added in v0.3.7

func (m *TrainableCModule) SetTrain()

SetTrain sets the TrainableCModule to train mode
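
A hedged end-to-end sketch; "model.pt", "finetuned.pt", and the elided training loop are placeholders:

	vs := nn.NewVarStore(gotch.CudaIfAvailable())
	m, err := nn.TrainableCModuleLoad(vs.Root(), "model.pt")
	if err != nil {
		log.Fatal(err)
	}
	m.SetTrain()
	// ... forward with m.ForwardT(x, true), compute a loss,
	// and step an optimizer built over vs ...
	m.SetEval()
	if err := m.Save("finetuned.pt"); err != nil {
		log.Fatal(err)
	}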

type Var added in v0.3.10

type Var struct {
	Tensor    *ts.Tensor
	Group     uint   // optimizer parameter group
	Type      string // can be "parameter" or "buffer"
	Trainable bool   // marks whether this variable is trainable; for the "buffer" type it is always `false`
	Persitent bool   // only applies to the "buffer" type; all parameters are persistent (saved when VarStore.Save() is called)
}

type VarStore

type VarStore struct {
	sync.Mutex
	// contains filtered or unexported fields
}

VarStore is used to store variables used by one or multiple layers. It specifies a SINGLE device where all variables are stored.

func NewVarStore

func NewVarStore(device gotch.Device) *VarStore

NewVarStore creates a new variable store located on the specified device

func (*VarStore) Copy

func (vs *VarStore) Copy(src *VarStore) error

Copy copies variable values from a source VarStore to this VarStore.

All the variables in this var store have to exist with the same name in the source var store, otherwise an error is returned.

func (*VarStore) Destroy added in v0.8.0

func (vs *VarStore) Destroy()

Destroy deletes all tensors in the VarStore and sets it to nil.

func (*VarStore) Device

func (vs *VarStore) Device() gotch.Device

Device returns device for this VarStore.

func (*VarStore) Freeze

func (vs *VarStore) Freeze() error

Freeze freezes this VarStore.

Gradients for the variables in this store are not tracked anymore.

func (*VarStore) IsEmpty

func (vs *VarStore) IsEmpty() bool

IsEmpty returns true if no tensors are currently kept in this VarStore.

func (*VarStore) Len

func (vs *VarStore) Len() int

Len returns the number of tensors currently kept in this VarStore.

func (*VarStore) Load

func (vs *VarStore) Load(filepath string) error

Load loads VarStore variable values from a file.

NOTE: Weight values for all the tensors currently stored in the VarStore get loaded from the given file. The set of variables stored in the VarStore is not changed; only the values of these tensors are modified. An error is returned if the name of a loaded tensor cannot be found among the VarStore's named tensors.

func (*VarStore) LoadPartial

func (vs *VarStore) LoadPartial(filepath string) ([]string, error)

LoadPartial loads the VarStore variable values from a file if it exists.

Weight values for the tensors currently stored in the var-store get loaded from the given file. If a variable in the var-store is not present in the given file, it is skipped and its values are not updated. This method should be used if pre-trained weights are available for only part of the model. Note that the set of variables stored in the var-store is not changed; only the values of these tensors are modified.

Returns a string slice containing the names of the missing variables.

func (*VarStore) LoadWeights added in v0.6.2

func (vs *VarStore) LoadWeights(namedTensors []ts.NamedTensor) error

LoadWeights loads pretrained weights to VarStore.

func (*VarStore) LoadWeightsPartial added in v0.6.2

func (vs *VarStore) LoadWeightsPartial(namedTensors []ts.NamedTensor) ([]string, error)

LoadWeightsPartial loads pretrained weights from the given named tensors to the VarStore, skipping any that are missing.

Weight values for the tensors currently stored in the var-store get loaded from the given named tensors. If a variable in the var-store is not present in the given tensors, it is skipped and its values are not updated. This method should be used if pre-trained weights are available for only part of the model. Note that the set of variables stored in the var-store is not changed; only the values of these tensors are modified.

Returns a string slice containing the names of the missing variables.

func (*VarStore) Root

func (vs *VarStore) Root() *Path

Root gets the root path for this VarStore.

NOTE: Variables are named and organized using paths. This function returns the top level path for the var store and can be combined with '/' to create sub-paths.
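
For example, nested sub-paths produce variable names joined with SEP ("."):

	vs := nn.NewVarStore(gotch.CPU)
	convPath := vs.Root().Sub("block1").Sub("conv")
	// Registered in the VarStore under the name "block1.conv.weight".
	w, err := convPath.Randn("weight", []int64{64, 3, 3, 3}, 0.0, 0.01)
	if err != nil {
		log.Fatal(err)
	}
	_ = w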

func (*VarStore) Save

func (vs *VarStore) Save(filepath string) error

Save saves the VarStore variable values to a file.

NOTE: Weight values for all the tensors currently stored in the var-store get saved to the given file.
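
A sketch of a save/load round trip; the file name is a placeholder, and the second store must already define the same variable names:

	// vs is an *nn.VarStore holding the trained model.
	if err := vs.Save("weights.gt"); err != nil {
		log.Fatal(err)
	}
	vs2 := nn.NewVarStore(gotch.CPU)
	// ... rebuild the same model structure under vs2.Root() ...
	if err := vs2.Load("weights.gt"); err != nil {
		log.Fatal(err)
	}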

func (*VarStore) Summary added in v0.6.0

func (vs *VarStore) Summary()

Summary prints a simple list of all named variables with their shapes.

func (*VarStore) ToBFloat16 added in v0.8.0

func (vs *VarStore) ToBFloat16()

ToBFloat16 casts all float-like variables in VarStore to `BFloat16` dtype.

NOTE. float-like includes `Half`, `Float` and `Double` dtype.

func (*VarStore) ToDType added in v0.6.2

func (vs *VarStore) ToDType(dtype gotch.DType)

ToDType casts all variables in VarStore to specified DType.

NOTE. Only float-like types (Half, BFloat16, Float, Double) are guaranteed to be convertible.

func (*VarStore) ToDevice added in v0.8.0

func (vs *VarStore) ToDevice(device gotch.Device)

func (*VarStore) ToDouble added in v0.6.2

func (vs *VarStore) ToDouble()

ToDouble casts all float-like variables in VarStore to `Double` dtype.

NOTE. float-like includes `Half`, `Float` and `Double` dtype.

func (*VarStore) ToFloat added in v0.6.2

func (vs *VarStore) ToFloat()

ToFloat casts all float-like variables in VarStore to `Float` dtype.

NOTE. float-like includes `Half`, `BFloat16`, `Float` and `Double` dtype.

func (*VarStore) ToHalf added in v0.6.2

func (vs *VarStore) ToHalf()

ToHalf casts all float-like variables in VarStore to `Half` dtype.

NOTE. float-like includes `Half`, `Float` and `Double` dtype.

func (*VarStore) TrainableVariables

func (vs *VarStore) TrainableVariables() []*ts.Tensor

TrainableVariables returns references to all trainable variables kept in the VarStore.

func (*VarStore) Unfreeze

func (vs *VarStore) Unfreeze() error

Unfreeze unfreezes a VarStore.

Gradients for the variables in this store are tracked again.

func (*VarStore) Variables

func (vs *VarStore) Variables() map[string]ts.Tensor

Variables returns references to all variables and their names as a map[variable_name]Tensor

NOTE. The returned map includes all variables of both "parameter" and "buffer" type.
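
A small sketch of inspecting the store; MustSize is an assumption about the ts.Tensor API, and the fmt import is elided:

	for name, v := range vs.Variables() {
		fmt.Printf("%s: %v\n", name, v.MustSize())
	}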
