decoder

package v2.0.0-dev0.0.14

Published: Apr 15, 2024 License: BSD-3-Clause Imports: 15

README

The decoder package provides standalone decoders that can sample variables from emer network layers and provide a supervised one-layer categorical decoding of what is being represented in those layers. This can provide an important point of reference relative to whatever the network itself is generating, and is especially useful for more self-organizing networks that may not have supervised training at all.

The MPI versions, TrainMPI and BackMPI, share weight changes across MPI nodes using the Comm field, which the sim must set to the initialized mpi.Comm communicator.
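
For example (a sketch; comm stands for whatever *mpi.Comm the sim created at startup):

	dec.Comm = comm // set by direct assignment, per the Comm field docs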

SoftMax

The SoftMax decoder is the best choice for 1-hot classification, using the SoftMax function.

Here's the basic API:

  • InitLayer to initialize with number of categories and layer(s) for input.

  • Decode with a variable name to record that variable from the layers, and decode based on the current state of that variable. You can also access the full sorted list of category outputs in the Sorted field of the SoftMax object.

  • Train after Decode with index of current ground-truth category value.

It is also possible to use the decoder without reference to emer Layers by just calling Init, Forward, Sort, and Train.

A learning rate of about 0.05 works well for large layers, and 0.1 can be used for smaller, less complex cases.
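
Putting this together, here is a minimal sketch of a typical decoding loop. The output layer (outLay), the variable name "ActM", and the ground-truth index targ are illustrative placeholders, not part of this package:

	dec := decoder.SoftMax{}
	dec.InitLayer(10, []emer.Layer{outLay}) // 10 categories to decode
	dec.Lrate = 0.05                        // ~0.05 for large layers

	// each trial, after the network state is updated:
	best := dec.Decode("ActM", 0) // returns Sorted[0], the top category
	dec.Train(targ)               // targ is the ground-truth category index
	_ = best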

Linear

The Linear decoder is the best choice for factorial, independent categories, where any number of them might be active at a time. It learns using the delta rule for each output unit. It uses the same API as above, except that Train takes a full slice of target values, one per category output, and the results are found in the Units[i].Act variable, which can be copied into a slice using the Output method.
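
For example (a sketch; outLay, "ActM", and targs are hypothetical placeholders, and error handling is elided):

	dec := decoder.Linear{}
	dec.InitLayer(4, []decoder.Layer{outLay}, decoder.LogisticFunc)

	// each trial:
	dec.Decode("ActM", 0) // forward pass; results are in Units[i].Act
	var outs []float32
	dec.Output(&outs) // copy the decoded activations into outs

	// train toward targs, one target value per category output:
	sse, err := dec.Train(targs) // sse = sum squared error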

Vote

TopVoteInt takes a slice of ints representing votes for which category index was selected (or anything really), and returns the one with the most votes, choosing at random for any ties at the top, along with the number of votes for it.

TopVoteString is the same as TopVoteInt but with string-valued votes.
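
For example:

	votes := []int{2, 0, 2, 1, 2}
	cat, n := decoder.TopVoteInt(votes)
	// cat == 2, n == 3 (category 2 received three votes)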

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func IdentityFunc

func IdentityFunc(x float32) float32
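
IdentityFunc implements the identity function, returning x unchanged. It is the default activation function for the Linear decoder.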

func LogisticFunc

func LogisticFunc(x float32) float32

LogisticFunc implements the standard logistic function. Its outputs are in the range (0, 1). Also known as Sigmoid. See https://en.wikipedia.org/wiki/Logistic_function.
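
That is, LogisticFunc(x) = 1 / (1 + exp(-x)).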

func TopVoteInt

func TopVoteInt(votes []int) (int, int)

TopVoteInt returns the choice with the most votes among a list of votes as integer-valued choices, and also returns the number of votes for that item. In the case of ties, it chooses one at random (otherwise it would have a bias toward the lowest numbered item).

func TopVoteString

func TopVoteString(votes []string) (string, int)

TopVoteString returns the choice with the most votes among a list of votes as string-valued choices, and also returns the number of votes for that item. In the case of ties, it chooses one at random (otherwise it would have a bias toward the lowest numbered item).

Types

type ActivationFunc

type ActivationFunc func(float32) float32

type Layer

type Layer interface {
	Name() string
	UnitValuesTensor(tsr etensor.Tensor, varNm string, di int) error
	Shape() *etensor.Shape
}

Layer is the subset of emer.Layer that is used by this code.

type Linear

type Linear struct {

	// learning rate
	LRate float32 `default:"0.1"`

	// layers to decode
	Layers []Layer

	// unit values -- read this for decoded output
	Units []LinearUnit

	// number of inputs -- total sizes of layer inputs
	NInputs int

	// number of outputs -- the number of categories to decode
	NOutputs int

	// input values, copied from layers
	Inputs []float32

	// for holding layer values
	ValuesTsrs map[string]*etensor.Float32 `view:"-"`

	// synaptic weights: outer loop is units, inner loop is inputs
	Weights etensor.Float32

	// activation function
	ActivationFn ActivationFunc

	// which pool to use within a layer
	PoolIndex int

	// mpi communicator -- MPI users must set this to their comm -- do direct assignment
	Comm *mpi.Comm `view:"-"`

	// delta weight changes: only for MPI mode -- outer loop is units, inner loop is inputs
	MPIDWts etensor.Float32
}

Linear is a linear neural network, which can be configured with a custom activation function. By default it will use the identity function. It learns using the delta rule for each output unit.

func (*Linear) Back

func (dec *Linear) Back() float32

Back computes the backward error propagation pass. It returns the SSE (sum of squared errors) of the difference between the targets and outputs.

func (*Linear) BackMPI

func (dec *Linear) BackMPI() float32

BackMPI computes the backward error propagation pass, sharing weight changes across MPI nodes. It returns the SSE (sum of squared errors) of the difference between the targets and outputs.

func (*Linear) Decode

func (dec *Linear) Decode(varNm string, di int)

Decode decodes the given variable name from the layers (forward pass). Decoded values are in Units[i].Act -- see also Output to get them into a []float32. di is a data-parallel index, for networks capable of processing input patterns in parallel.

func (*Linear) Forward

func (dec *Linear) Forward()

Forward computes the forward pass from the input.

func (*Linear) Init

func (dec *Linear) Init(nOutputs, nInputs int, poolIndex int, activationFn ActivationFunc)

Init initializes the decoder with the number of categories, number of inputs, pool index, and activation function.

func (*Linear) InitLayer

func (dec *Linear) InitLayer(nOutputs int, layers []Layer, activationFn ActivationFunc)

InitLayer initializes the decoder with the number of categories, the layers to decode, and the activation function.

func (*Linear) InitPool

func (dec *Linear) InitPool(nOutputs int, layer Layer, poolIndex int, activationFn ActivationFunc)

InitPool initializes the decoder with the number of categories, a single layer, the index of the pool within that layer to decode, and the activation function.

func (*Linear) Input

func (dec *Linear) Input(varNm string, di int)

Input grabs the input from the given variable in the layers. di is a data-parallel index, for networks capable of processing input patterns in parallel.

func (*Linear) Output

func (dec *Linear) Output(acts *[]float32)

Output returns the resulting decoded output activation values into the given slice, which is automatically resized if not of sufficient size.

func (*Linear) SetTargets

func (dec *Linear) SetTargets(targs []float32) error

SetTargets sets the given target correct answers, as []float32 values. It returns and prints an error if targets is not of sufficient length for NOutputs.

func (*Linear) Train

func (dec *Linear) Train(targs []float32) (float32, error)

Train trains the decoder with the given target correct answers, as []float32 values. It returns the SSE (sum of squared errors) of the difference between the targets and outputs, and returns and prints an error if targets is not of sufficient length for NOutputs.

func (*Linear) TrainMPI

func (dec *Linear) TrainMPI(targs []float32) (float32, error)

TrainMPI trains the decoder with the given target correct answers, as []float32 values. It returns the SSE (sum of squared errors) of the difference between the targets and outputs, and returns and prints an error if targets is not of sufficient length for NOutputs. The MPI version uses mpi to synchronize weight changes across parallel nodes.

func (*Linear) ValuesTsr

func (dec *Linear) ValuesTsr(name string) *etensor.Float32

ValuesTsr gets the value tensor of the given name, creating it if not yet made.

type LinearUnit

type LinearUnit struct {

	// target activation value -- typically 0 or 1 but can be within that range too
	Target float32

	// final activation = ActivationFn(Net) -- this is the decoded output
	Act float32

	// net input = sum x * w
	Net float32
}

LinearUnit has variables for Linear decoder unit

type SoftMax

type SoftMax struct {

	// learning rate
	Lrate float32 `default:"0.1"`

	// layers to decode
	Layers []emer.Layer

	// number of different categories to decode
	NCats int

	// unit values
	Units []SoftMaxUnit

	// sorted list of indexes into Units, in descending order from strongest to weakest -- i.e., Sorted[0] has the most likely categorization, and its activity is Units[Sorted[0]].Act
	Sorted []int

	// number of inputs -- total sizes of layer inputs
	NInputs int

	// input values, copied from layers
	Inputs []float32

	// current target index of correct category
	Target int

	// for holding layer values
	ValuesTsrs map[string]*etensor.Float32 `view:"-"`

	// synaptic weights: outer loop is units, inner loop is inputs
	Weights etensor.Float32

	// mpi communicator -- MPI users must set this to their comm -- do direct assignment
	Comm *mpi.Comm `view:"-"`

	// delta weight changes: only for MPI mode -- outer loop is units, inner loop is inputs
	MPIDWts etensor.Float32
}

SoftMax is a softmax decoder, which is the best choice for a 1-hot classification using the widely-used SoftMax function: https://en.wikipedia.org/wiki/Softmax_function

func (*SoftMax) Back

func (sm *SoftMax) Back()

Back computes the backward error propagation pass.

func (*SoftMax) BackMPI

func (sm *SoftMax) BackMPI()

BackMPI computes the backward error propagation pass. The MPI version shares weight changes across nodes.

func (*SoftMax) Decode

func (sm *SoftMax) Decode(varNm string, di int) int

Decode decodes the given variable name from the layers (forward pass). See the Sorted list of indexes for the full decoding output -- i.e., Sorted[0] is the most likely category, which is returned here as a convenience. di is a data-parallel index, for networks capable of processing input patterns in parallel.

func (*SoftMax) Forward

func (sm *SoftMax) Forward()

Forward computes the forward pass from the input.

func (*SoftMax) Init

func (sm *SoftMax) Init(ncats, ninputs int)

Init initializes the decoder with the number of categories and number of inputs.

func (*SoftMax) InitLayer

func (sm *SoftMax) InitLayer(ncats int, layers []emer.Layer)

InitLayer initializes the decoder with the number of categories and the layers to decode.

func (*SoftMax) Input

func (sm *SoftMax) Input(varNm string, di int)

Input grabs the input from the given variable in the layers. di is a data-parallel index, for networks capable of processing input patterns in parallel.

func (*SoftMax) Load

func (sm *SoftMax) Load(path string) error

Load loads the decoder weights from given file path. If the shape of the decoder does not match the shape of the saved weights, an error will be returned.

func (*SoftMax) Save

func (sm *SoftMax) Save(path string) error

Save saves the decoder weights to given file path. If path ends in .gz, it will be gzipped.
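
A hypothetical round trip ("softmax.wts.gz" is a placeholder path; the .gz suffix triggers gzip compression):

	if err := sm.Save("softmax.wts.gz"); err != nil {
		log.Println(err)
	}
	// later, in a decoder initialized to the same shape:
	if err := sm.Load("softmax.wts.gz"); err != nil {
		log.Println(err)
	}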

func (*SoftMax) Sort

func (sm *SoftMax) Sort()

Sort updates the Sorted indexes of the current Unit category activations, sorted from highest to lowest, i.e., Sorted[0] has the strongest decoded output category, Sorted[1] the next strongest, etc.
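
For example, to inspect the top 3 candidates after Decode (a sketch, assuming at least 3 categories):

	for _, ci := range sm.Sorted[:3] {
		fmt.Printf("category %d: act=%g\n", ci, sm.Units[ci].Act)
	}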

func (*SoftMax) Train

func (sm *SoftMax) Train(targ int)

Train trains the decoder with the given target correct answer (0..NCats-1).

func (*SoftMax) TrainMPI

func (sm *SoftMax) TrainMPI(targ int)

TrainMPI trains the decoder with the given target correct answer (0..NCats-1). The MPI version uses mpi to synchronize weight changes across parallel nodes.

func (*SoftMax) ValuesTsr

func (sm *SoftMax) ValuesTsr(name string) *etensor.Float32

ValuesTsr gets the value tensor of the given name, creating it if not yet made.

type SoftMaxUnit

type SoftMaxUnit struct {

	// final activation = e^Net / sum e^Net
	Act float32

	// net input = sum x * w
	Net float32

	// exp(Net)
	Exp float32
}

SoftMaxUnit has variables for softmax decoder unit
