package decoder

v1.4.31 Published: Sep 27, 2023 License: BSD-3-Clause

README

The decoder package provides standalone decoders that can sample variables from emer network layers and provide a supervised one-layer categorical decoding of what is being represented in those layers. This can provide an important point of reference relative to whatever the network itself is generating, and is especially useful for more self-organizing networks that may not have supervised training at all.

The MPI variants, TrainMPI and BackMPI, share weight changes across MPI nodes via the mpi.Comm field, which the sim must set to the initialized communicator.
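
For example, a minimal sketch of the MPI flow for a Linear decoder (dec is an initialized decoder.Linear; comm, the variable name "ActM", and targets are illustrative assumptions):

	dec.Comm = comm                   // *mpi.Comm initialized by the sim
	dec.Decode("ActM", 0)             // forward pass from layer values
	sse, err := dec.TrainMPI(targets) // shares weight changes across nodes
	if err != nil {
		log.Println(err)
	}
	_ = sse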

SoftMax

The SoftMax decoder, based on the widely-used SoftMax function, is the best choice for 1-hot classification.

Here's the basic API:

  • InitLayer to initialize with number of categories and layer(s) for input.

  • Decode with variable name to record that variable from layers, and decode based on the current state info for that variable. You can also access the full sorted list of category outputs in the Sorted field of the SoftMax object.

  • Train after Decode with index of current ground-truth category value.
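
A hedged sketch of this flow (ly is an emer.Layer obtained from the network; the variable name "ActM", the category count, and trueCat are illustrative):

	sm := &decoder.SoftMax{}
	sm.InitLayer(10, []emer.Layer{ly}) // 10 categories, decode from ly

	// each trial, after the network has run:
	cat := sm.Decode("ActM", 0) // returns the best-guess category index
	sm.Train(trueCat)           // ground-truth category for this trial
	_ = cat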

It is also possible to use the decoder without reference to emer Layers by just calling Init, Forward, Sort, and Train.

A learning rate of about 0.05 works well for large layers, and 0.1 can be used for smaller, less complex cases.
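
A minimal sketch of the standalone usage (nCats, nIn, inputPattern, and trueCat are illustrative):

	sm := &decoder.SoftMax{}
	sm.Init(nCats, nIn) // number of categories, number of inputs
	sm.Lrate = 0.05     // lower rate for a large input layer

	copy(sm.Inputs, inputPattern) // inputPattern: []float32 of length nIn
	sm.Forward()                  // compute category activations
	sm.Sort()                     // Sorted[0] is now the best guess
	best := sm.Sorted[0]
	sm.Train(trueCat) // learn from the ground-truth category index
	_ = best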

Linear

The Linear decoder is the best choice for factorial, independent categories where any number of them might be active at a time. It learns using the delta rule for each output unit. It uses the same API as above, except Train takes a full slice of target values, one per category output, and the results are found in the Units[i].Act variable, which can be copied into a slice using the Output method.
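
A minimal sketch of the standalone Linear usage (names are illustrative; LogisticFunc is one of the activation functions documented below):

	dec := &decoder.Linear{}
	// nOutputs categories, nIn inputs, pool 0, logistic activation
	dec.Init(nOutputs, nIn, 0, decoder.LogisticFunc)

	copy(dec.Inputs, inputPattern) // []float32 of length nIn
	dec.Forward()                  // Units[i].Act now holds decoded outputs

	var acts []float32
	dec.Output(&acts) // copy the activations into a slice

	sse, err := dec.Train(targets) // targets: []float32 of length nOutputs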

Vote

TopVoteInt takes a slice of ints representing votes for which category index was selected (or anything really), and returns the one with the most votes, choosing at random among any ties at the top, along with the number of votes for it.

TopVoteString is the same as TopVoteInt but with string-valued votes.
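
For example:

	votes := []int{2, 0, 2, 1, 2, 0}
	cat, n := decoder.TopVoteInt(votes) // cat == 2, n == 3

	names := []string{"dog", "cat", "dog"}
	top, n2 := decoder.TopVoteString(names) // top == "dog", n2 == 2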

Documentation

Functions

func IdentityFunc added in v1.3.36

func IdentityFunc(x float32) float32
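
IdentityFunc is the identity activation function: it simply returns x. Per the Linear docs below, it is the default activation function.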

func LogisticFunc added in v1.3.36

func LogisticFunc(x float32) float32

LogisticFunc implements the standard logistic function. Its outputs are in the range (0, 1). Also known as Sigmoid. See https://en.wikipedia.org/wiki/Logistic_function.
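
Equivalently, f(x) = 1 / (1 + e^(-x)); a minimal sketch (import "math"; not the package source):

	func logistic(x float32) float32 {
		return 1 / (1 + float32(math.Exp(float64(-x))))
	}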

func TopVoteInt added in v1.1.42

func TopVoteInt(votes []int) (int, int)

TopVoteInt returns the choice with the most votes among a list of votes as integer-valued choices, and also returns the number of votes for that item. In the case of ties, it chooses one at random (otherwise it would have a bias toward the lowest numbered item).

func TopVoteString added in v1.1.42

func TopVoteString(votes []string) (string, int)

TopVoteString returns the choice with the most votes among a list of votes as string-valued choices, and also returns the number of votes for that item. In the case of ties, it chooses one at random (otherwise it would have a bias toward the lowest numbered item).

Types

type ActivationFunc added in v1.3.36

type ActivationFunc func(float32) float32

type Layer added in v1.3.39

type Layer interface {
	Name() string
	UnitValsTensor(tsr etensor.Tensor, varNm string, di int) error
	Shape() *etensor.Shape
}

Layer is the subset of emer.Layer that is used by this code.

type Linear added in v1.3.36

type Linear struct {

	// [def: 0.1] learning rate
	LRate float32 `def:"0.1" desc:"learning rate"`

	// layers to decode
	Layers []Layer `desc:"layers to decode"`

	// unit values -- read this for decoded output
	Units []LinearUnit `desc:"unit values -- read this for decoded output"`

	// number of inputs -- total sizes of layer inputs
	NInputs int `desc:"number of inputs -- total sizes of layer inputs"`

	// number of outputs -- total number of categories to decode
	NOutputs int `desc:"number of outputs -- total number of categories to decode"`

	// input values, copied from layers
	Inputs []float32 `desc:"input values, copied from layers"`

	// [view: -] for holding layer values
	ValsTsrs map[string]*etensor.Float32 `view:"-" desc:"for holding layer values"`

	// synaptic weights: outer loop is units, inner loop is inputs
	Weights etensor.Float32 `desc:"synaptic weights: outer loop is units, inner loop is inputs"`

	// activation function
	ActivationFn ActivationFunc `desc:"activation function"`

	// which pool to use within a layer
	PoolIndex int `desc:"which pool to use within a layer"`

	// [view: -] mpi communicator -- MPI users must set this to their comm -- do direct assignment
	Comm *mpi.Comm `view:"-" desc:"mpi communicator -- MPI users must set this to their comm -- do direct assignment"`

	// delta weight changes: only for MPI mode -- outer loop is units, inner loop is inputs
	MPIDWts etensor.Float32 `desc:"delta weight changes: only for MPI mode -- outer loop is units, inner loop is inputs"`
}

Linear is a linear neural network, which can be configured with a custom activation function. By default it uses the identity function. It learns using the delta rule for each output unit.
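
Per output unit i and input j, the delta rule update amounts to this sketch (field names follow the struct above; the actual implementation may differ in details):

	for i := range dec.Units {
		errI := dec.Units[i].Target - dec.Units[i].Act
		for j, x := range dec.Inputs {
			// Weights: outer loop is units, inner loop is inputs
			wi := i*dec.NInputs + j
			dec.Weights.Values[wi] += dec.LRate * errI * x
		}
	}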

func (*Linear) Back added in v1.3.36

func (dec *Linear) Back() float32

Back computes the backward error propagation pass. Returns the SSE (sum squared error) of the difference between targets and outputs.

func (*Linear) BackMPI added in v1.4.30

func (dec *Linear) BackMPI() float32

BackMPI computes the backward error propagation pass, sharing weight changes across MPI nodes. Returns the SSE (sum squared error) of the difference between targets and outputs.

func (*Linear) Decode added in v1.3.36

func (dec *Linear) Decode(varNm string, di int)

Decode decodes the given variable name from the layers (forward pass). Decoded values are in Units[i].Act -- see also Output to copy them into a []float32. di is a data parallel index, for networks capable of processing input patterns in parallel.

func (*Linear) Forward added in v1.3.36

func (dec *Linear) Forward()

Forward computes the forward pass from the current input.

func (*Linear) Init added in v1.3.36

func (dec *Linear) Init(nOutputs, nInputs int, poolIndex int, activationFn ActivationFunc)

Init initializes the decoder with the given number of categories and number of inputs, along with the pool index and activation function.

func (*Linear) InitLayer added in v1.3.36

func (dec *Linear) InitLayer(nOutputs int, layers []Layer, activationFn ActivationFunc)

InitLayer initializes the decoder with the given number of categories, the layers to decode, and the activation function.

func (*Linear) InitPool added in v1.3.39

func (dec *Linear) InitPool(nOutputs int, layer Layer, poolIndex int, activationFn ActivationFunc)

InitPool initializes the decoder with the given number of categories, a single layer, the index of the pool within that layer to decode, and the activation function.

func (*Linear) Input added in v1.3.36

func (dec *Linear) Input(varNm string, di int)

Input grabs the input from the given variable in the layers. di is a data parallel index, for networks capable of processing input patterns in parallel.

func (*Linear) Output added in v1.3.36

func (dec *Linear) Output(acts *[]float32)

Output returns the resulting decoded output activation values in the given slice, which is automatically resized if not of sufficient size.

func (*Linear) SetTargets added in v1.4.30

func (dec *Linear) SetTargets(targs []float32) error

SetTargets sets the given target correct answers, as []float32 values. Returns and prints an error if the targets are not of sufficient length for NOutputs.

func (*Linear) Train added in v1.3.36

func (dec *Linear) Train(targs []float32) (float32, error)

Train trains the decoder with the given target correct answers, as []float32 values. Returns the SSE (sum squared error) of the difference between targets and outputs. Also returns and prints an error if the targets are not of sufficient length for NOutputs.

func (*Linear) TrainMPI added in v1.4.30

func (dec *Linear) TrainMPI(targs []float32) (float32, error)

TrainMPI trains the decoder with the given target correct answers, as []float32 values. Returns the SSE (sum squared error) of the difference between targets and outputs. Also returns and prints an error if the targets are not of sufficient length for NOutputs. The MPI version uses mpi to synchronize weight changes across parallel nodes.

func (*Linear) ValsTsr added in v1.3.36

func (dec *Linear) ValsTsr(name string) *etensor.Float32

ValsTsr gets the value tensor of the given name, creating it if not yet made.

type LinearUnit added in v1.3.36

type LinearUnit struct {

	// target activation value -- typically 0 or 1 but can be within that range too
	Target float32 `desc:"target activation value -- typically 0 or 1 but can be within that range too"`

	// final activation = sum x * w -- this is the decoded output
	Act float32 `desc:"final activation = sum x * w -- this is the decoded output"`

	// net input = sum x * w
	Net float32 `desc:"net input = sum x * w"`
}

LinearUnit has the variables for a Linear decoder unit.

type SoftMax

type SoftMax struct {

	// [def: 0.1] learning rate
	Lrate float32 `def:"0.1" desc:"learning rate"`

	// layers to decode
	Layers []emer.Layer `desc:"layers to decode"`

	// number of different categories to decode
	NCats int `desc:"number of different categories to decode"`

	// unit values
	Units []SoftMaxUnit `desc:"unit values"`

	// sorted list of indexes into Units, in descending order from strongest to weakest -- i.e., Sorted[0] has the most likely categorization, and its activity is Units[Sorted[0]].Act
	Sorted []int `desc:"sorted list of indexes into Units, in descending order from strongest to weakest -- i.e., Sorted[0] has the most likely categorization, and its activity is Units[Sorted[0]].Act"`

	// number of inputs -- total sizes of layer inputs
	NInputs int `desc:"number of inputs -- total sizes of layer inputs"`

	// input values, copied from layers
	Inputs []float32 `desc:"input values, copied from layers"`

	// current target index of correct category
	Target int `desc:"current target index of correct category"`

	// [view: -] for holding layer values
	ValsTsrs map[string]*etensor.Float32 `view:"-" desc:"for holding layer values"`

	// synaptic weights: outer loop is units, inner loop is inputs
	Weights etensor.Float32 `desc:"synaptic weights: outer loop is units, inner loop is inputs"`

	// [view: -] mpi communicator -- MPI users must set this to their comm -- do direct assignment
	Comm *mpi.Comm `view:"-" desc:"mpi communicator -- MPI users must set this to their comm -- do direct assignment"`

	// delta weight changes: only for MPI mode -- outer loop is units, inner loop is inputs
	MPIDWts etensor.Float32 `desc:"delta weight changes: only for MPI mode -- outer loop is units, inner loop is inputs"`
}

SoftMax is a softmax decoder, which is the best choice for a 1-hot classification using the widely-used SoftMax function: https://en.wikipedia.org/wiki/Softmax_function
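
In terms of the SoftMaxUnit fields documented below, the forward computation amounts to this sketch (import "math"; not the package source, which may differ in numerical details):

	var sum float32
	for i := range sm.Units {
		u := &sm.Units[i]
		// u.Net = sum of x * w, computed from the inputs
		u.Exp = float32(math.Exp(float64(u.Net)))
		sum += u.Exp
	}
	for i := range sm.Units {
		sm.Units[i].Act = sm.Units[i].Exp / sum // e^Net / sum of e^Net
	}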

func (*SoftMax) Back

func (sm *SoftMax) Back()

Back computes the backward error propagation pass.

func (*SoftMax) BackMPI added in v1.4.30

func (sm *SoftMax) BackMPI()

BackMPI computes the backward error propagation pass. The MPI version shares weight changes across nodes.

func (*SoftMax) Decode

func (sm *SoftMax) Decode(varNm string, di int) int

Decode decodes the given variable name from the layers (forward pass). See the Sorted list of indexes for the full decoding output -- i.e., Sorted[0] is the most likely category, which is returned here as a convenience. di is a data parallel index, for networks capable of processing input patterns in parallel.

func (*SoftMax) Forward

func (sm *SoftMax) Forward()

Forward computes the forward pass from the current input.

func (*SoftMax) Init

func (sm *SoftMax) Init(ncats, ninputs int)

Init initializes the decoder with the given number of categories and number of inputs.

func (*SoftMax) InitLayer

func (sm *SoftMax) InitLayer(ncats int, layers []emer.Layer)

InitLayer initializes the decoder with the given number of categories and the layers to decode.

func (*SoftMax) Input

func (sm *SoftMax) Input(varNm string, di int)

Input grabs the input from the given variable in the layers. di is a data parallel index, for networks capable of processing input patterns in parallel.

func (*SoftMax) Load added in v1.4.31

func (sm *SoftMax) Load(path string) error

Load loads the decoder weights from the given file path. If the shape of the decoder does not match the shape of the saved weights, an error is returned.

func (*SoftMax) Save added in v1.4.31

func (sm *SoftMax) Save(path string) error

Save saves the decoder weights to the given file path. If the path ends in .gz, the file will be gzipped.
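
A small round-trip sketch (the file name is illustrative; the .gz suffix triggers gzip on save):

	if err := sm.Save("softmax.wts.gz"); err != nil {
		log.Println(err)
	}
	// later, on a decoder initialized to the same shape:
	if err := sm.Load("softmax.wts.gz"); err != nil {
		log.Println(err)
	}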

func (*SoftMax) Sort

func (sm *SoftMax) Sort()

Sort updates the Sorted indexes of the current Unit category activations, sorted from highest to lowest, i.e., the 0-index value has the strongest decoded output category, 1 the next-strongest, etc.

func (*SoftMax) Train

func (sm *SoftMax) Train(targ int)

Train trains the decoder with the given target correct answer (0..NCats-1).

func (*SoftMax) TrainMPI added in v1.4.30

func (sm *SoftMax) TrainMPI(targ int)

TrainMPI trains the decoder with the given target correct answer (0..NCats-1). The MPI version uses mpi to synchronize weight changes across parallel nodes.

func (*SoftMax) ValsTsr

func (sm *SoftMax) ValsTsr(name string) *etensor.Float32

ValsTsr gets the value tensor of the given name, creating it if not yet made.

type SoftMaxUnit added in v1.1.44

type SoftMaxUnit struct {

	// final activation = e^Ge / sum e^Ge
	Act float32 `desc:"final activation = e^Ge / sum e^Ge"`

	// net input = sum x * w
	Net float32 `desc:"net input = sum x * w"`

	// exp(Net)
	Exp float32 `desc:"exp(Net)"`
}

SoftMaxUnit has the variables for a softmax decoder unit.
