Documentation ¶
Index ¶
- Constants
- func BatchAccuracyForLogits(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)
- func BatchAccuracyForLogitsIdx(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)
- func NewConstInit(v float64) constInit
- func NewGlorotNInit() glorotNInit
- func NewKaimingUniformInit() kaimingUniformInit
- func NewRandnInit(mean, stdev float64) randnInit
- func NewUniformInit(lo, up float64) uniformInit
- func WithUint8(n uint8) func() uint8
- type AdamConfig
- type BatchNorm
- func BatchNorm1D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm
- func BatchNorm2D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm
- func BatchNorm3D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm
- func NewBatchNorm(vs *Path, nd uint, outDim int64, config *BatchNormConfig) *BatchNorm
- type BatchNormConfig
- type Conv
- type Conv1D
- type Conv1DConfig
- type Conv2D
- type Conv2DConfig
- type Conv3D
- type Conv3DConfig
- type ConvTranspose1D
- type ConvTranspose1DConfig
- type ConvTranspose2D
- type ConvTranspose2DConfig
- type ConvTranspose3D
- type ConvTranspose3DConfig
- type Embedding
- type EmbeddingConfig
- type Entry
- func (e *Entry) OrKaimingUniform(dims []int64) (retVal *ts.Tensor)
- func (e *Entry) OrOnes(dims []int64) (retVal *ts.Tensor)
- func (e *Entry) OrOnesNoTrain(dims []int64) (retVal *ts.Tensor)
- func (e *Entry) OrRandn(dims []int64, mean, stdev float64) (retVal *ts.Tensor)
- func (e *Entry) OrRandnStandard(dims []int64) (retVal *ts.Tensor)
- func (e *Entry) OrUniform(dims []int64, lo, up float64) (retVal *ts.Tensor)
- func (e *Entry) OrVar(dims []int64, init Init) (retVal *ts.Tensor)
- func (e *Entry) OrVarCopy(tensor *ts.Tensor) (retVal *ts.Tensor)
- func (e *Entry) OrZeros(dims []int64) (retVal *ts.Tensor)
- func (e *Entry) OrZerosNoTrain(dims []int64) (retVal *ts.Tensor)
- type ForwardTWith
- type ForwardWith
- type Func
- type FuncT
- type GRU
- type GRUState
- type Init
- type LSTM
- type LSTMState
- type LayerNorm
- type LayerNormConfig
- type Linear
- type LinearConfig
- type Optimizer
- func (opt *Optimizer) BackwardStep(loss *ts.Tensor)
- func (opt *Optimizer) BackwardStepClip(loss *ts.Tensor, max float64)
- func (opt *Optimizer) ClipGradValue(max float64)
- func (opt *Optimizer) SetLR(lr float64)
- func (opt *Optimizer) SetMomentum(m float64)
- func (opt *Optimizer) Step()
- func (opt *Optimizer) ZeroGrad()
- type OptimizerConfig
- type Path
- func (p *Path) Device() gotch.Device
- func (p *Path) Entry(name string) *Entry
- func (p *Path) Get(name string) (retVal *ts.Tensor, err error)
- func (p *Path) KaimingUniform(name string, dims []int64) (retVal *ts.Tensor)
- func (p *Path) NewVar(name string, dims []int64, ini Init) (retVal *ts.Tensor)
- func (p *Path) Ones(name string, dims []int64) (retVal *ts.Tensor)
- func (p *Path) OnesNoTrain(name string, dims []int64) (retVal *ts.Tensor)
- func (p *Path) Randn(name string, dims []int64, mean float64, stdev float64) (retVal *ts.Tensor)
- func (p *Path) RandnStandard(name string, dims []int64) (retVal *ts.Tensor)
- func (p *Path) Sub(str string) *Path
- func (p *Path) Uniform(name string, dims []int64, lo, up float64) (retVal *ts.Tensor)
- func (p *Path) VarCopy(name string, t *ts.Tensor) (retVal *ts.Tensor)
- func (p *Path) Zeros(name string, dims []int64) (retVal *ts.Tensor)
- func (p *Path) ZerosNoTrain(name string, dims []int64) (retVal *ts.Tensor)
- type RMSPropConfig
- type RNN
- type RNNConfig
- type SGDConfig
- type Sequential
- func (s *Sequential) Add(l ts.Module)
- func (s *Sequential) AddFn(fn ts.Module)
- func (s *Sequential) Forward(xs *ts.Tensor) (retVal *ts.Tensor)
- func (s *Sequential) ForwardAll(xs *ts.Tensor, opts ...uint8) (retVal []ts.Tensor)
- func (s *Sequential) IsEmpty() (retVal bool)
- func (s *Sequential) Len() (retVal int64)
- type SequentialT
- func (s *SequentialT) Add(l ts.ModuleT)
- func (s *SequentialT) AddFn(fn ts.ModuleT)
- func (s *SequentialT) AddFnT(fn ts.ModuleT)
- func (s *SequentialT) ForwardAllT(xs *ts.Tensor, train bool, opts ...uint8) (retVal []ts.Tensor)
- func (s *SequentialT) ForwardT(xs *ts.Tensor, train bool) *ts.Tensor
- func (s *SequentialT) IsEmpty() (retVal bool)
- func (s *SequentialT) Len() (retVal int64)
- type State
- type VarStore
- func (vs *VarStore) Copy(src VarStore) (err error)
- func (vs *VarStore) Device() gotch.Device
- func (vs *VarStore) Freeze()
- func (vs *VarStore) IsEmpty() (retVal bool)
- func (vs *VarStore) Len() (retVal int)
- func (vs *VarStore) Load(filepath string) error
- func (vs *VarStore) LoadPartial(filepath string) (retVal []string, err error)
- func (vs *VarStore) Root() *Path
- func (vs *VarStore) Save(filepath string) error
- func (vs *VarStore) TrainableVariables() (retVal []ts.Tensor)
- func (vs *VarStore) Unfreeze()
- func (vs *VarStore) Variables() (retVal map[string]ts.Tensor)
- type Variables
Constants ¶
const SEP = "."
SEP is a separator to separate path elements in the tensor names.
Variables ¶
This section is empty.
Functions ¶
func BatchAccuracyForLogits ¶
func BatchAccuracyForLogits(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)
BatchAccuracyForLogits calculates average accuracy of test batches.
NOTE: PyTorch uses `NoGradGuard`, a thread-local guard that sets a global flag checked by the backend whenever an op is run on a variable. The guard saves the current status and sets it to false in its constructor, then restores the saved status in its destructor, making it similar to a `with torch.no_grad():` block in Python. This does not seem to work in Go. There are two ways to get around it: one is to freeze the VarStore, the other is to manually set autograd on the `loss` tensor, i.e. `loss = loss.MustSetRequiresGrad(true)`.
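For example, a minimal evaluation sketch using the freeze workaround (the `model`, `testImages`, `testLabels`, and `device` values are assumed to exist and are not part of this package):

	// Stop gradient tracking for all variables in the store while evaluating.
	vs.Freeze()
	acc := nn.BatchAccuracyForLogits(vs, model, testImages, testLabels, device, 1024)
	vs.Unfreeze()
	fmt.Printf("test accuracy: %.4f\n", acc)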
func BatchAccuracyForLogitsIdx ¶
func BatchAccuracyForLogitsIdx(vs *VarStore, m ts.ModuleT, xs, ys *ts.Tensor, d gotch.Device, batchSize int) (retVal float64)
BatchAccuracyForLogitsIdx is an alternative to BatchAccuracyForLogits for calculating the accuracy of a specified batch using the module weights. It uses tensor indexing instead of Iter2.
func NewConstInit ¶
func NewConstInit(v float64) constInit
func NewGlorotNInit ¶
func NewGlorotNInit() glorotNInit
func NewKaimingUniformInit ¶
func NewKaimingUniformInit() kaimingUniformInit
func NewRandnInit ¶
func NewRandnInit(mean, stdev float64) randnInit
func NewUniformInit ¶
func NewUniformInit(lo, up float64) uniformInit
Types ¶
type AdamConfig ¶
func DefaultAdamConfig ¶
func DefaultAdamConfig() *AdamConfig
DefaultAdamConfig creates AdamConfig with default values
func NewAdamConfig ¶
func NewAdamConfig(beta1, beta2, wd float64) *AdamConfig
NewAdamConfig creates AdamConfig with specified values
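A short sketch of creating a config (the values are illustrative, not recommendations):

	// beta1, beta2, weight decay
	cfg := nn.NewAdamConfig(0.9, 0.999, 1e-4)
	// cfg can then be turned into an *Optimizer via OptimizerConfig.Build (see below).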
type BatchNorm ¶
type BatchNorm struct {
	RunningMean *ts.Tensor
	RunningVar  *ts.Tensor
	Ws          *ts.Tensor
	Bs          *ts.Tensor
	Nd          uint
	// contains filtered or unexported fields
}
A batch-normalization layer.
func BatchNorm1D ¶
func BatchNorm1D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm
Applies Batch Normalization over a three-dimensional input.
The input shape is assumed to be (N, C, L). Normalization is performed over the first (batch) dimension N.
func BatchNorm2D ¶
func BatchNorm2D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm
Applies Batch Normalization over a four-dimensional input.
The input shape is assumed to be (N, C, H, W). Normalization is performed over the first (batch) dimension N.
func BatchNorm3D ¶
func BatchNorm3D(vs *Path, outDim int64, config *BatchNormConfig) *BatchNorm
Applies Batch Normalization over a five-dimensional input.
The input shape is assumed to be (N, C, D, H, W). Normalization is performed over the first (batch) dimension N.
func NewBatchNorm ¶
func NewBatchNorm(vs *Path, nd uint, outDim int64, config *BatchNormConfig) *BatchNorm
NewBatchNorm creates a new BatchNorm layer
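A minimal construction sketch; it assumes BatchNorm implements ts.ModuleT with a ForwardT method, and that tensors can be created with ts.MustOnes (both assumptions, not confirmed by this page):

	vs := nn.NewVarStore(gotch.CPU)
	bn := nn.BatchNorm1D(vs.Root().Sub("bn"), 16, nn.DefaultBatchNormConfig())
	x := ts.MustOnes([]int64{8, 16, 100}, gotch.Float, gotch.CPU) // (N, C, L)
	y := bn.ForwardT(x, true) // train=true also updates the running statistics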
type BatchNormConfig ¶
type BatchNormConfig struct {
	CudnnEnable bool
	Eps         float64
	Momentum    float64
	WsInit      Init
	BsInit      Init
}
Batch-normalization config.
func DefaultBatchNormConfig ¶
func DefaultBatchNormConfig() *BatchNormConfig
type Conv1DConfig ¶
type Conv1DConfig struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	Groups   int64
	Bias     bool
	WsInit   Init
	BsInit   Init
}
func DefaultConv1DConfig ¶
func DefaultConv1DConfig() *Conv1DConfig
DefaultConv1DConfig creates a default Conv1DConfig
type Conv2DConfig ¶
type Conv2DConfig struct {
	Stride   []int64
	Padding  []int64
	Dilation []int64
	Groups   int64
	Bias     bool
	WsInit   Init
	BsInit   Init
}
func DefaultConv2DConfig ¶
func DefaultConv2DConfig() *Conv2DConfig
DefaultConv2DConfig creates a default Conv2DConfig
type Conv3DConfig ¶
type ConvTranspose1D ¶
type ConvTranspose1D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *ConvTranspose1DConfig
}
func NewConvTranspose1D ¶
func NewConvTranspose1D(vs *Path, inDim, outDim int64, ksizes []int64, cfg *ConvTranspose1DConfig) *ConvTranspose1D
type ConvTranspose1DConfig ¶
type ConvTranspose1DConfig struct {
	Stride        []int64
	Padding       []int64
	OutputPadding []int64
	Dilation      []int64
	Groups        int64
	Bias          bool
	WsInit        Init
	BsInit        Init
}
func DefaultConvTranspose1DConfig ¶
func DefaultConvTranspose1DConfig() *ConvTranspose1DConfig
DefaultConvTranspose1DConfig creates a default ConvTranspose1DConfig
type ConvTranspose2D ¶
type ConvTranspose2D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *ConvTranspose2DConfig
}
func NewConvTranspose2D ¶
func NewConvTranspose2D(vs *Path, inDim, outDim int64, ksizes []int64, cfg *ConvTranspose2DConfig) *ConvTranspose2D
type ConvTranspose2DConfig ¶
type ConvTranspose3D ¶
type ConvTranspose3D struct {
	Ws     *ts.Tensor
	Bs     *ts.Tensor // optional
	Config *ConvTranspose3DConfig
}
func NewConvTranspose3D ¶
func NewConvTranspose3D(vs *Path, inDim, outDim int64, ksizes []int64, cfg *ConvTranspose3DConfig) *ConvTranspose3D
type ConvTranspose3DConfig ¶
type Embedding ¶
An embedding layer.
An embedding layer acts as a simple lookup table that stores embeddings. This is commonly used to store word embeddings.
func NewEmbedding ¶
func NewEmbedding(vs *Path, numEmbeddings int64, embeddingDim int64, config *EmbeddingConfig) *Embedding
NewEmbedding creates a new Embedding
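A minimal sketch; it assumes Embedding implements ts.Module and that index tensors can be built with ts.MustOfSlice (both assumptions):

	emb := nn.NewEmbedding(vs.Root().Sub("emb"), 10000, 128, nn.DefaultEmbeddingConfig())
	idx := ts.MustOfSlice([]int64{1, 4, 7}) // three token ids
	vecs := emb.Forward(idx)                // expected shape: {3, 128}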
type EmbeddingConfig ¶
Configuration option for an embedding layer.
func DefaultEmbeddingConfig ¶
func DefaultEmbeddingConfig() *EmbeddingConfig
type Entry ¶
type Entry struct {
// contains filtered or unexported fields
}
Entry holds an entry corresponding to a given name in Path.
func (*Entry) OrKaimingUniform ¶
OrKaimingUniform returns the existing entry if it exists, otherwise creates a new variable.
func (*Entry) OrOnesNoTrain ¶
OrOnesNoTrain returns the existing entry if it exists, otherwise creates a new variable.
func (*Entry) OrRandnStandard ¶
OrRandnStandard returns the existing entry if it exists, otherwise creates a new variable.
func (*Entry) OrVar ¶
OrVar returns the existing entry if it exists, otherwise creates a new variable.
If this entry name matches the name of a variable stored in the var store, the corresponding tensor is returned. Otherwise a new variable is added to the var store under the entry name and is initialized according to the init parameter.
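For example (a sketch; the variable name "weight" is arbitrary):

	p := vs.Root()
	// The first call creates the variable and stores it under "weight".
	w1 := p.Entry("weight").OrZeros([]int64{10, 5})
	// A second call with the same name returns the stored tensor; the
	// initializer is ignored.
	w2 := p.Entry("weight").OrZeros([]int64{10, 5})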
type ForwardTWith ¶
type ForwardWith ¶
ForwardWith is a handler function used to implement the Module interface for any (anonymous) function it wraps.
Ref: https://stackoverflow.com/a/42182987
NOTE: Specifically, `ForwardWith` is used to wrap an anonymous function as the input parameter of the `AddFn` Sequential method.
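A sketch of wrapping a closure, assuming ForwardWith is declared as a function type over func(*ts.Tensor) *ts.Tensor:

	relu := nn.ForwardWith(func(xs *ts.Tensor) *ts.Tensor {
		return xs.MustRelu(false)
	})
	// relu now satisfies ts.Module and can be passed to (*Sequential).AddFn.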
type Func ¶
type Func struct {
// contains filtered or unexported fields
}
type GRU ¶
type GRU struct {
// contains filtered or unexported fields
}
A Gated Recurrent Unit (GRU) layer.
type LSTM ¶
type LSTM struct {
// contains filtered or unexported fields
}
A Long Short-Term Memory (LSTM) layer.
type LayerNorm ¶
type LayerNorm struct {
	Config          *LayerNormConfig
	Ws              *ts.Tensor // optional
	Bs              *ts.Tensor // optional
	NormalizedShape []int64
}
A layer-normalization layer.
func NewLayerNorm ¶
func NewLayerNorm(vs Path, normalizedShape []int64, config *LayerNormConfig) *LayerNorm
type LayerNormConfig ¶
type LayerNormConfig struct {
	CudnnEnable       bool
	Eps               float64
	ElementwiseAffine bool
	WsInit            Init
	BsInit            Init
}
Layer-normalization config.
func DefaultLayerNormConfig ¶
func DefaultLayerNormConfig() *LayerNormConfig
type Linear ¶
Linear is a linear fully-connected layer
func NewLinear ¶
func NewLinear(vs *Path, inDim, outDim int64, c *LinearConfig) *Linear
NewLinear creates a new linear layer: y = x*wT + b
- inDim: input dimension (x) [number of input features, i.e. columns]
- outDim: output dimension (y) [number of output features, i.e. columns]
NOTE: w will have shape {outDim, inDim}; b will have shape {outDim}.
func (*Linear) Forward ¶
Forward passes the input node through the linear layer.
NOTE:
- It assumes the input node has 2 dimensions (a matrix). For the matrix multiplication to work, the input node must have the same number of columns as the `Ws` weights matrix has columns (both are `inDim`), because the weights matrix is transposed before being multiplied with the input node.
- The input node should have shape {batchSize, inDim}. The number of input features is `inDim` and the number of output features is `outDim`, as set in the `LinearConfig` struct.
Example:
	inDim := 3
	outDim := 2
	batchSize := 4

	weights (2x3):
	[ 1 1 1
	  1 1 1 ]

	input node (4x3):
	[ 1 1 1
	  1 1 1
	  1 1 1
	  1 1 1 ]
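The example above as a runnable sketch (ts.MustOnes and its argument order are assumptions, not confirmed by this page):

	vs := nn.NewVarStore(gotch.CPU)
	lin := nn.NewLinear(vs.Root(), 3, 2, nn.DefaultLinearConfig())
	x := ts.MustOnes([]int64{4, 3}, gotch.Float, gotch.CPU) // {batchSize, inDim}
	y := lin.Forward(x)                                     // shape {4, 2}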
type LinearConfig ¶
type LinearConfig struct {
	WsInit Init // initial weights
	BsInit Init // optional initial bias
	Bias   bool
}
LinearConfig is a configuration for a linear layer
func DefaultLinearConfig ¶
func DefaultLinearConfig() *LinearConfig
DefaultLinearConfig creates a default LinearConfig with weights initialized using KaimingUniform and Bias set to true
type Optimizer ¶
type Optimizer struct {
// contains filtered or unexported fields
}
Optimizer is a struct object to run gradient descent.
func (*Optimizer) BackwardStep ¶
BackwardStep applies a backward pass, updates the gradients, and performs an optimization step.
func (*Optimizer) BackwardStepClip ¶
BackwardStepClip applies a backward pass, updates the gradients, and performs an optimization step.
The gradients are clipped based on `max` before being applied.
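A typical training-step sketch (the `model`, `batchImages`, `batchLabels` values and the CrossEntropyForLogits convenience method are assumptions, not part of this package):

	logits := model.ForwardT(batchImages, true)
	loss := logits.CrossEntropyForLogits(batchLabels)
	opt.BackwardStepClip(loss, 0.5) // clip gradients at 0.5 before stepping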
func (*Optimizer) ClipGradValue ¶
ClipGradValue clips gradient values at some specified maximum value.
func (*Optimizer) SetMomentum ¶
SetMomentum sets the optimizer momentum.
type OptimizerConfig ¶
type OptimizerConfig interface {
	// Build builds an optimizer with the specified learning rate handling
	// variables stored in `vs`.
	//
	// NOTE: Build is a 'default' method. It can be implemented by wrapping the
	// 'DefaultBuild' function. E.g. the AdamOptimizerConfig struct fulfills the
	// `Build` method of OptimizerConfig by wrapping `DefaultBuild` like so:
	//
	//	(config AdamOptimizerConfig) Build(vs VarStore, lr float64) (retVal Optimizer, err error) {
	//		return defaultBuild(config, vs, lr)
	//	}
	Build(vs *VarStore, lr float64) (*Optimizer, error)
	// contains filtered or unexported methods
}
OptimizerConfig defines Optimizer configurations. These configs can be used to build an optimizer.
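For example, building an Adam optimizer from its config (a sketch):

	vs := nn.NewVarStore(gotch.CPU)
	opt, err := nn.DefaultAdamConfig().Build(vs, 1e-3)
	if err != nil {
		log.Fatal(err)
	}
	opt.SetLR(5e-4) // the learning rate can be changed later, e.g. for a schedule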
type Path ¶
type Path struct {
// contains filtered or unexported fields
}
Path is a variable store with an associated path for variable naming.
func (*Path) KaimingUniform ¶
KaimingUniform creates a new variable initialized randomly with kaiming uniform.
The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a uniform distribution whose bounds follow Kaiming initialization.
func (*Path) NewVar ¶
NewVar creates a new variable.
The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized as per the related argument.
func (*Path) Ones ¶
Ones creates a new variable initialized with ones.
The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized with ones.
func (*Path) OnesNoTrain ¶
OnesNoTrain creates a new variable initialized with ones.
The new variable is named according to the name parameter and has the specified shape. The variable will not be trainable so gradients will not be tracked. The variable uses a float tensor initialized with ones.
func (*Path) Randn ¶
Randn creates a new variable initialized randomly with normal distribution.
The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a normal distribution with the specified mean and standard deviation.
func (*Path) RandnStandard ¶
RandnStandard creates a new variable initialized randomly with normal distribution.
The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a standard normal distribution.
func (*Path) Uniform ¶
Uniform creates a new variable initialized randomly with uniform distribution.
The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized randomly using a uniform distribution between the specified bounds.
func (*Path) VarCopy ¶
VarCopy creates a new variable initialized by copying an existing tensor.
The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized by copying some given tensor.
func (*Path) Zeros ¶
Zeros creates a new variable initialized with zeros.
The new variable is named according to the name parameter and has the specified shape. The variable is trainable, its gradient will be tracked. The variable uses a float tensor initialized with zeros.
func (*Path) ZerosNoTrain ¶
ZerosNoTrain creates a new variable initialized with zeros.
The new variable is named according to the name parameter and has the specified shape. The variable will not be trainable so gradients will not be tracked. The variable uses a float tensor initialized with zeros.
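A sketch of creating named variables under a sub-path:

	p := vs.Root().Sub("encoder") // variables are named "encoder.<name>"
	w := p.KaimingUniform("weight", []int64{64, 32})
	b := p.Zeros("bias", []int64{64})
	rm := p.ZerosNoTrain("running_mean", []int64{64}) // gradient is not tracked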
type RMSPropConfig ¶
func DefaultRMSPropConfig ¶
func DefaultRMSPropConfig() *RMSPropConfig
DefaultRMSPropConfig creates RMSPropConfig with default values
func NewRMSPropConfig ¶
func NewRMSPropConfig(alpha, eps, wd, momentum float64, centered bool) *RMSPropConfig
NewRMSPropConfig creates RMSPropConfig with specified values
type RNN ¶
type RNN interface {
	// A zero state from which the recurrent network is usually initialized.
	ZeroState(batchDim int64) State

	// Applies a single step of the recurrent network.
	//
	// The input should have dimensions [batch_size, features].
	Step(input *ts.Tensor, inState State) State

	// Applies multiple steps of the recurrent network.
	//
	// The input should have dimensions [batch_size, seq_len, features].
	// The initial state is the result of applying zero_state.
	Seq(input *ts.Tensor) (*ts.Tensor, State)

	// Applies multiple steps of the recurrent network.
	//
	// The input should have dimensions [batch_size, seq_len, features].
	SeqInit(input *ts.Tensor, inState State) (*ts.Tensor, State)
}
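A usage sketch; the NewLSTM constructor and its argument order (inDim, hiddenDim) are assumptions modeled on the other constructors on this page, as is ts.MustZeros:

	lstm := nn.NewLSTM(vs.Root(), 32, 64, nn.DefaultRNNConfig())
	input := ts.MustZeros([]int64{8, 10, 32}, gotch.Float, gotch.CPU) // [batch_size, seq_len, features]
	output, state := lstm.Seq(input)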
type RNNConfig ¶
type RNNConfig struct {
	HasBiases     bool
	NumLayers     int64
	Dropout       float64
	Train         bool
	Bidirectional bool
	BatchFirst    bool
}
Configuration for the GRU and LSTM layers. Both layers share the same config.
func DefaultRNNConfig ¶
func DefaultRNNConfig() *RNNConfig
DefaultRNNConfig creates a default RNN configuration
type SGDConfig ¶
SGDConfig holds parameters for building the SGD (Stochastic Gradient Descent) optimizer.
func DefaultSGDConfig ¶
func DefaultSGDConfig() *SGDConfig
DefaultSGDConfig creates SGDConfig with default values.
func NewSGDConfig ¶
NewSGDConfig creates the configuration for an SGD optimizer with the specified values
type Sequential ¶
type Sequential struct {
// contains filtered or unexported fields
}
Sequential is a layer (container) that combines multiple other layers.
func (*Sequential) Add ¶
func (s *Sequential) Add(l ts.Module)
Add appends a layer after all the current layers.
func (*Sequential) AddFn ¶
func (s *Sequential) AddFn(fn ts.Module)
AddFn appends a closure after all the current layers.
NOTE: fn should have the signature `func(t ts.Tensor) ts.Tensor` so that it implements the Module interface
func (*Sequential) Forward ¶
func (s *Sequential) Forward(xs *ts.Tensor) (retVal *ts.Tensor)
Forward implements Module interface for Sequential
func (*Sequential) ForwardAll ¶
ForwardAll applies the forward pass and returns the output for each layer.
func (*Sequential) IsEmpty ¶
func (s *Sequential) IsEmpty() (retVal bool)
IsEmpty returns true if this layer does not have any sub-layers.
func (*Sequential) Len ¶
func (s *Sequential) Len() (retVal int64)
Len returns the number of sub-layers embedded in this layer
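A sketch of assembling a small network (the Seq constructor and ts.MustOnes are assumptions, not confirmed by this page):

	vs := nn.NewVarStore(gotch.CPU)
	seq := nn.Seq()
	seq.Add(nn.NewLinear(vs.Root().Sub("l1"), 784, 128, nn.DefaultLinearConfig()))
	seq.AddFn(nn.ForwardWith(func(xs *ts.Tensor) *ts.Tensor {
		return xs.MustRelu(false)
	}))
	seq.Add(nn.NewLinear(vs.Root().Sub("l2"), 128, 10, nn.DefaultLinearConfig()))
	out := seq.Forward(ts.MustOnes([]int64{1, 784}, gotch.Float, gotch.CPU))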
type SequentialT ¶
type SequentialT struct {
// contains filtered or unexported fields
}
SequentialT is a sequential layer combining new layers with support for a training mode.
func (*SequentialT) Add ¶
func (s *SequentialT) Add(l ts.ModuleT)
Add appends a layer after all the current layers.
func (*SequentialT) AddFn ¶
func (s *SequentialT) AddFn(fn ts.ModuleT)
AddFn appends a closure after all the current layers.
NOTE: fn should have the signature `func(t ts.Tensor) ts.Tensor` so that it implements the Module interface
func (*SequentialT) AddFnT ¶
func (s *SequentialT) AddFnT(fn ts.ModuleT)
AddFnT appends a closure after all the current layers.
NOTE: fn should have the signature `func(t ts.Tensor, train bool) ts.Tensor` so that it implements the ModuleT interface
func (*SequentialT) ForwardAllT ¶
ForwardAllT applies the forward pass and returns the output for each layer.
func (*SequentialT) IsEmpty ¶
func (s *SequentialT) IsEmpty() (retVal bool)
IsEmpty returns true if this layer does not have any sub-layers.
func (*SequentialT) Len ¶
func (s *SequentialT) Len() (retVal int64)
Len returns the number of sub-layers embedded in this layer
type VarStore ¶
type VarStore struct {
	Vars Variables
	// contains filtered or unexported fields
}
VarStore is used to store variables used by one or multiple layers. It specifies a SINGLE device where all variables are stored.
func NewVarStore ¶
NewVarStore creates a new variable store located on the specified device
func (*VarStore) Copy ¶
Copy copies variable values from a source var store to this var store.
All the variables in this var store have to exist with the same name in the source var store, otherwise an error is returned.
func (*VarStore) Freeze ¶
func (vs *VarStore) Freeze()
Freeze freezes a var store.
Gradients for the variables in this store are not tracked anymore.
func (*VarStore) IsEmpty ¶
IsEmpty returns true if no tensors are currently stored in this var-store
func (*VarStore) Load ¶
Load loads the var-store variable values from a file.
NOTE: Weight values for all the tensors currently stored in the var-store get loaded from the given file. Note that the set of variables stored in the var-store is not changed, only the values for those tensors are modified. An error is thrown if a name among the loaded tensors cannot be found in the var-store's set of named tensors.
func (*VarStore) LoadPartial ¶
LoadPartial loads the var-store variable values from a file if it exists.
Weight values for the tensors currently stored in both the var-store and the given file get loaded from the given file. If a variable in the var-store is not present in the given file, it is skipped and its values are not updated. This method should be used if pre-trained weights for only part of the model are available. Note that the set of variables stored in the var-store is not changed, only the values for these tensors are modified.
Returns a string slice containing the names of the missing variables.
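For example (a sketch; the checkpoint filename is illustrative):

	missing, err := vs.LoadPartial("pretrained.pt")
	if err != nil {
		log.Fatal(err)
	}
	for _, name := range missing {
		fmt.Println("not found in checkpoint:", name)
	}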
func (*VarStore) Root ¶
Root gets the root path for this var-store
NOTE: Variables are named and organized using paths. This function returns the top level path for the var store and can be combined with '/' to create sub-paths.
func (*VarStore) Save ¶
Save saves the var-store variable values to a file
NOTE: Weight values for all the tensors currently stored in the var-store get saved in the given file.
func (*VarStore) TrainableVariables ¶
TrainableVariables returns all trainable variables for this var-store
type Variables ¶
type Variables struct {
	NamedVariables     map[string]*ts.Tensor
	TrainableVariables []ts.Tensor
	// contains filtered or unexported fields
}
Variables represents a collection of tensors.
NOTE: When the variable store is frozen, `trainable` is still set to true; however, the tensor is not set to require gradients.