axon

package
v1.7.7
Published: Feb 8, 2023 License: BSD-3-Clause Imports: 56 Imported by: 34

Documentation

Overview

Package axon provides the basic reference axon implementation, with spiking activations and biologically based error-driven learning. Other packages provide deep axon, PVLV, PBWM, etc.

The overall design seeks an "optimal" tradeoff between simplicity, transparency, ability to flexibly recombine and extend elements, and avoiding having to rewrite a bunch of stuff.

The *Stru elements handle the core structural components of the network, and hold emer.* interface pointers to elements such as emer.Layer, which provides a very minimal interface for these elements. Interfaces are automatically pointers, so think of these as generic pointers to your specific Layers etc.

This design means the same *Stru infrastructure can be re-used across different variants of the algorithm. Because we're keeping this infrastructure minimal and algorithm-free it should be much less confusing than dealing with the multiple levels of inheritance in C++ emergent. The actual algorithm-specific code is now fully self-contained, and largely orthogonalized from the infrastructure.

One specific cost of this is the need to cast the emer.* interface pointers into the specific types of interest, when accessing via the *Stru infrastructure.
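
For example, a minimal sketch of this cast (assuming a built *axon.Network in net; "Hidden" is a hypothetical layer name):

// Sketch: cast the generic emer.Layer held by the *Stru infrastructure
// into the axon-specific type.  "Hidden" is a hypothetical layer name.
ly := net.LayerByName("Hidden").(axon.AxonLayer).AsAxon()
_ = ly.Params // axon-specific *LayerParams, not visible through emer.Layer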

The *Params elements contain all the (meta)parameters and associated methods for computing various functions. They are the equivalent of Specs from original emergent, but unlike specs they are local to each place they are used, and styling is used to apply common parameters across multiple layers etc. Params is a more explicit, recognizable name than specs, and it also helps avoid confusion with the rather different nature of the old specs. Pars is shorter but confusable with "Parents", so "Params" is less ambiguous.

Params are organized into four major categories, which are labeled functionally rather than just structurally, to keep things clearer and better organized overall:

* ActParams -- activation params, at the Neuron level (in act.go)
* InhibParams -- inhibition params, at the Layer / Pool level (in inhib.go)
* LearnNeurParams -- learning parameters at the Neuron level (running-averages that drive learning)
* LearnSynParams -- learning parameters at the Synapse level (both in learn.go)

The levels of structure and state are:

* Network
* .Layers
* .Pools: pooled inhibition state -- 1 for the layer plus 1 for each sub-pool (unit group) with inhibition
* .RecvPrjns: receiving projections from other sending layers
* .SendPrjns: sending projections to other receiving layers
* .Neurons: neuron state variables

There are methods on the Network that perform initialization and overall computation, by iterating over layers and calling methods there. This is typically how most users will run their models.
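
For example, a minimal sketch of building and initializing a small network, assuming the emer-style builder methods used by this package (AddLayer2D, ConnectLayers, BidirConnectLayers, Build) and the prjn package for connectivity patterns; layer names, sizes, and types are illustrative:

// Sketch: construct, build, and initialize a small Network.
// Assumed imports: "log", "github.com/emer/axon/axon", "github.com/emer/emergent/prjn".
net := &axon.Network{}
net.InitName(net, "TinyNet")
inp := net.AddLayer2D("Input", 5, 5, axon.InputLayer)
hid := net.AddLayer2D("Hidden", 8, 8, axon.SuperLayer)
out := net.AddLayer2D("Output", 5, 5, axon.TargetLayer)
full := prjn.NewFull()
net.ConnectLayers(inp, hid, full, axon.ForwardPrjn)
net.BidirConnectLayers(hid, out, full)
net.Defaults()
if err := net.Build(); err != nil {
	log.Fatal(err)
}
net.InitWts()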

Parallel computation across multiple CPU cores (threading) is achieved through persistent worker go routines that listen for functions to run on thread-specific channels. Each layer has a designated thread number, so you can experiment with different ways of dividing up the computation. Timing data is kept for per-thread time use -- see TimeReport() on the network.

The Layer methods directly iterate over Neurons, Pools, and Prjns, and there is no finer-grained level of computation (e.g., at the individual Neuron level), except for the *Params methods that directly compute relevant functions. Thus, looking directly at the layer.go code should provide a clear sense of exactly how everything is computed -- you may need to refer to act.go, learn.go etc to see the relevant details, but at least the overall organization should be clear in layer.go.

Computational methods are generally named VarFmVar, to specifically name what variable is being computed from what other input variables, e.g., SpikeFmG computes spiking from conductances G.

The Pools (type Pool, in pool.go) hold state used for computing pooled inhibition, but also are used to hold overall aggregate pooled state variables -- the first element in Pools applies to the layer itself, and subsequent ones are for each sub-pool (4D layers). These pools play the same role as the AxonUnGpState structures in C++ emergent.

Prjns directly support all synapse-level computation, and hold the LearnSynParams and iterate directly over all of their synapses. It is the exact same Prjn object that lives in the RecvPrjns of the receiver-side, and the SendPrjns of the sender-side, and it maintains and coordinates both sides of the state. This clarifies and simplifies a lot of code. There is no separate equivalent of AxonConSpec / AxonConState at the level of connection groups per unit per projection.

The pattern of connectivity between units is specified by the prjn.Pattern interface, and all the different standard options are available in that prjn package. The Pattern code generates a full tensor bitmap of binary 1's and 0's for connected (1's) and not-connected (0's) units, and can use any method to do so. This full lookup-table approach is not the most memory-efficient, but it is fully general and shouldn't be too bad memory-wise overall (fully bit-packed arrays are used, and these bitmaps don't need to be retained once connections have been established). This approach allows patterns to just focus on patterns, and they don't care at all how they are used to allocate actual connections.

Index

Constants

const (
	Version     = "v1.7.7"
	GitCommit   = "c8f33c3"          // the commit JUST BEFORE the release
	VersionDate = "2023-02-08 23:48" // UTC
)
const NeuronVarStart = 5

NeuronVarStart is the starting *field* index (not byte count!) where float32 variables start -- all prior must be 32 bit (uint32, int32). Note: all non-float32 infrastructure variables must be at the start!

const SynapseVarStart = 16

SynapseVarStart is the *byte* offset (4 bytes per 32-bit field) of fields in the Synapse structure where the float32 named variables start. Note: all non-float32 infrastructure variables must be at the start!

Variables

var (
	NeuronLayerVars  = []string{"DA", "ACh", "NE", "Ser", "Gated"}
	NNeuronLayerVars = len(NeuronLayerVars)
)

NeuronLayerVars are layer-level variables that are displayed as if they were neuron-level variables.

var KiT_DAModTypes = kit.Enums.AddEnum(DAModTypesN, kit.NotBitFlag, nil)
var KiT_GPLayerTypes = kit.Enums.AddEnum(GPLayerTypesN, kit.NotBitFlag, nil)
var KiT_Layer = kit.Types.AddType(&Layer{}, LayerProps)
var KiT_LayerTypes = kit.Enums.AddEnum(LayerTypesN, kit.NotBitFlag, nil)
var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
var KiT_Prjn = kit.Types.AddType(&Prjn{}, PrjnProps)
var KiT_PrjnGTypes = kit.Enums.AddEnum(PrjnGTypesN, kit.NotBitFlag, nil)
var KiT_PrjnTypes = kit.Enums.AddEnum(PrjnTypesN, kit.NotBitFlag, nil)
var LayerProps = ki.Props{
	"EnumType:Typ": KiT_LayerTypes,
	"ToolBar": ki.PropSlice{
		{"Defaults", ki.Props{
			"icon": "reset",
			"desc": "return all parameters to their intial default values",
		}},
		{"InitWts", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's weight values according to prjn parameters, for all *sending* projections out of this layer",
		}},
		{"InitActs", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's activation values",
		}},
		{"sep-act", ki.BlankProp{}},
		{"LesionNeurons", ki.Props{
			"icon": "close",
			"desc": "Lesion (set the Off flag) for given proportion of neurons in the layer (number must be 0 -- 1, NOT percent!)",
			"Args": ki.PropSlice{
				{"Proportion", ki.Props{
					"desc": "proportion (0 -- 1) of neurons to lesion",
				}},
			},
		}},
		{"UnLesionNeurons", ki.Props{
			"icon": "reset",
			"desc": "Un-Lesion (reset the Off flag) for all neurons in the layer",
		}},
	},
}
var NetworkProps = ki.Props{
	"ToolBar": ki.PropSlice{
		{"SaveWtsJSON", ki.Props{
			"label": "Save Wts...",
			"icon":  "file-save",
			"desc":  "Save json-formatted weights",
			"Args": ki.PropSlice{
				{"Weights File Name", ki.Props{
					"default-field": "WtsFile",
					"ext":           ".wts,.wts.gz",
				}},
			},
		}},
		{"OpenWtsJSON", ki.Props{
			"label": "Open Wts...",
			"icon":  "file-open",
			"desc":  "Open json-formatted weights",
			"Args": ki.PropSlice{
				{"Weights File Name", ki.Props{
					"default-field": "WtsFile",
					"ext":           ".wts,.wts.gz",
				}},
			},
		}},
		{"sep-file", ki.BlankProp{}},
		{"Build", ki.Props{
			"icon": "update",
			"desc": "build the network's neurons and synapses according to current params",
		}},
		{"InitWts", ki.Props{
			"icon": "update",
			"desc": "initialize the network weight values according to prjn parameters",
		}},
		{"InitActs", ki.Props{
			"icon": "update",
			"desc": "initialize the network activation values",
		}},
		{"sep-act", ki.BlankProp{}},
		{"AddLayer", ki.Props{
			"label": "Add Layer...",
			"icon":  "new",
			"desc":  "add a new layer to network",
			"Args": ki.PropSlice{
				{"Layer Name", ki.Props{}},
				{"Layer Shape", ki.Props{
					"desc": "shape of layer, typically 2D (Y, X) or 4D (Pools Y, Pools X, Units Y, Units X)",
				}},
				{"Layer Type", ki.Props{
					"desc": "type of layer -- used for determining how inputs are applied",
				}},
			},
		}},
		{"ConnectLayerNames", ki.Props{
			"label": "Connect Layers...",
			"icon":  "new",
			"desc":  "add a new connection between layers in the network",
			"Args": ki.PropSlice{
				{"Send Layer Name", ki.Props{}},
				{"Recv Layer Name", ki.Props{}},
				{"Pattern", ki.Props{
					"desc": "pattern to connect with",
				}},
				{"Prjn Type", ki.Props{
					"desc": "type of projection -- direction, or other more specialized factors",
				}},
			},
		}},
		{"AllPrjnScales", ki.Props{
			"icon":        "file-sheet",
			"desc":        "AllPrjnScales returns a listing of all PrjnScale parameters in the Network in all Layers, Recv projections.  These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.",
			"show-return": true,
		}},
	},
}
var NeuronVarProps = map[string]string{
	"GeSyn":     `range:"2"`,
	"Ge":        `range:"2"`,
	"GeM":       `range:"2"`,
	"Vm":        `min:"0" max:"1"`,
	"VmDend":    `min:"0" max:"1"`,
	"ISI":       `auto-scale:"+"`,
	"ISIAvg":    `auto-scale:"+"`,
	"Gi":        `auto-scale:"+"`,
	"Gk":        `auto-scale:"+"`,
	"ActDel":    `auto-scale:"+"`,
	"ActDiff":   `auto-scale:"+"`,
	"RLRate":    `auto-scale:"+"`,
	"AvgPct":    `range:"2"`,
	"TrgAvg":    `range:"2"`,
	"DTrgAvg":   `auto-scale:"+"`,
	"MahpN":     `auto-scale:"+"`,
	"GknaMed":   `auto-scale:"+"`,
	"GknaSlow":  `auto-scale:"+"`,
	"Gnmda":     `auto-scale:"+"`,
	"GnmdaSyn":  `auto-scale:"+"`,
	"GnmdaLrn":  `auto-scale:"+"`,
	"NmdaCa":    `auto-scale:"+"`,
	"GgabaB":    `auto-scale:"+"`,
	"GABAB":     `auto-scale:"+"`,
	"GABABx":    `auto-scale:"+"`,
	"Gvgcc":     `auto-scale:"+"`,
	"VgccCa":    `auto-scale:"+"`,
	"VgccCaInt": `auto-scale:"+"`,
	"Gak":       `auto-scale:"+"`,
	"SSGi":      `auto-scale:"+"`,
	"SSGiDend":  `auto-scale:"+"`,
}
var NeuronVars = []string{}
var NeuronVarsMap map[string]int
var PrjnProps = ki.Props{
	"EnumType:Typ": KiT_PrjnTypes,
}
var SynapseVarProps = map[string]string{
	"DWt":  `auto-scale:"+"`,
	"DSWt": `auto-scale:"+"`,
	"CaM":  `auto-scale:"+"`,
	"CaP":  `auto-scale:"+"`,
	"CaD":  `auto-scale:"+"`,
	"Tr":   `auto-scale:"+"`,
	"DTr":  `auto-scale:"+"`,
}
var SynapseVars = []string{"Wt", "LWt", "SWt", "DWt", "DSWt", "Ca", "CaM", "CaP", "CaD", "Tr", "DTr"}
var SynapseVarsMap map[string]int

Functions

func DecaySynCa added in v1.3.21

func DecaySynCa(sy *Synapse, decay float32)

DecaySynCa decays synaptic calcium by given factor (between trials). Not used by default.

func EnvApplyInputs added in v1.3.36

func EnvApplyInputs(net *Network, ev env.Env)

EnvApplyInputs applies input patterns from given env.Env environment to Input and Target layer types, assuming that env provides State with the same names as the layers. If these assumptions don't fit, use a separate method.
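
For example, a minimal sketch of using this inside a trial loop (ev is assumed to be an env.Env whose State names match the Input / Target layer names):

// Sketch: step an environment and apply its states as external inputs.
ev.Step()
axon.EnvApplyInputs(net, ev)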

func GetRandomNumber added in v1.7.7

func GetRandomNumber(index uint32, counter slrand.Counter, funIdx RandFunIdx) float32

GetRandomNumber returns a random number that depends on the index, counter and function index. We increment the counter after each cycle, so that we get new random numbers. This whole scheme exists to ensure equal results under different multithreading settings.

func InitSynCa added in v1.3.21

func InitSynCa(sy *Synapse)

InitSynCa initializes synaptic calcium state, including CaUpT

func JsonToParams

func JsonToParams(b []byte) string

JsonToParams reformats json output to suitable params display output

func LogAddCaLrnDiagnosticItems added in v1.5.3

func LogAddCaLrnDiagnosticItems(lg *elog.Logs, net *Network, times ...etime.Times)

LogAddCaLrnDiagnosticItems adds standard Axon diagnostic statistics to given logs, across two given time levels, in higher to lower order, e.g., Epoch, Trial. These were useful for the development of the Ca-based "trace" learning rule that directly uses NMDA and VGCC-like spiking Ca.

func LogAddDiagnosticItems added in v1.3.35

func LogAddDiagnosticItems(lg *elog.Logs, layerNames []string, times ...etime.Times)

LogAddDiagnosticItems adds standard Axon diagnostic statistics to given logs, across two given time levels, in higher to lower order, e.g., Epoch, Trial. These are useful for tuning and diagnosing the behavior of the network.

func LogAddExtraDiagnosticItems added in v1.5.8

func LogAddExtraDiagnosticItems(lg *elog.Logs, net *Network, times ...etime.Times)

LogAddExtraDiagnosticItems adds extra Axon diagnostic statistics to given logs, across two given time levels, in higher to lower order, e.g., Epoch, Trial. These are useful for tuning and diagnosing the behavior of the network.

func LogAddLayerGeActAvgItems added in v1.3.35

func LogAddLayerGeActAvgItems(lg *elog.Logs, net *Network, mode etime.Modes, etm etime.Times)

LogAddLayerGeActAvgItems adds Ge and Act average items for Hidden and Target layers for given mode and time (e.g., Test, Cycle). These are useful for monitoring layer activity during testing.

func LogAddPCAItems added in v1.3.35

func LogAddPCAItems(lg *elog.Logs, net *Network, times ...etime.Times)

LogAddPCAItems adds PCA statistics to log for Hidden and Target layers across 3 given time levels, in higher to lower order, e.g., Run, Epoch, Trial. These are useful for diagnosing the behavior of the network.

func LogAddPulvCorSimItems added in v1.7.0

func LogAddPulvCorSimItems(lg *elog.Logs, net *Network, times ...etime.Times)

LogAddPulvCorSimItems adds CorSim stats for Pulv / Pulvinar layers aggregated across three time scales, ordered from higher to lower, e.g., Run, Epoch, Trial.

func LogInputLayer added in v1.7.7

func LogInputLayer(lg *elog.Logs, net *Network)

func LogTestErrors added in v1.3.35

func LogTestErrors(lg *elog.Logs)

LogTestErrors records all errors made across TestTrials, at Test Epoch scope

func LooperResetLogBelow added in v1.3.35

func LooperResetLogBelow(man *looper.Manager, logs *elog.Logs)

LooperResetLogBelow adds a function in OnStart to all stacks and loops to reset the log at the level below each loop -- this is good default behavior.

func LooperSimCycleAndLearn added in v1.3.35

func LooperSimCycleAndLearn(man *looper.Manager, net *Network, ctx *Context, viewupdt *netview.ViewUpdt)

LooperSimCycleAndLearn adds Cycle and DWt, WtFmDWt functions to looper for given network, ctx, and netview update manager

func LooperStdPhases added in v1.3.35

func LooperStdPhases(man *looper.Manager, ctx *Context, net *Network, plusStart, plusEnd int)

LooperStdPhases adds the minus and plus phases of the theta cycle, along with embedded beta phases which just record St1 and St2 activity in this case. plusStart is the start of the plus phase, typically 150, and plusEnd is the end of the plus phase, typically 199. It also resets the state at the start of the trial.

func LooperUpdtNetView added in v1.3.35

func LooperUpdtNetView(man *looper.Manager, viewupdt *netview.ViewUpdt)

LooperUpdtNetView adds netview update calls at each time level

func LooperUpdtPlots added in v1.3.35

func LooperUpdtPlots(man *looper.Manager, gui *egui.GUI)

LooperUpdtPlots adds plot update calls at each time level
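
These Looper* helpers are typically called together when configuring a sim's looper stacks. For example, a sketch in which man, ctx, net, viewupdt, logs, and gui are assumed to be configured elsewhere in the sim:

// Sketch: wire the standard axon phases, learning, logging, and GUI updates
// into an existing looper.Manager.  All arguments are assumed placeholders.
axon.LooperStdPhases(man, ctx, net, 150, 199) // plus phase = cycles 150..199
axon.LooperSimCycleAndLearn(man, net, ctx, viewupdt)
axon.LooperResetLogBelow(man, logs)
axon.LooperUpdtNetView(man, viewupdt)
axon.LooperUpdtPlots(man, gui)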

func NeuronVarIdxByName

func NeuronVarIdxByName(varNm string) (int, error)

NeuronVarIdxByName returns the index of the variable in the Neuron, or error

func PCAStats added in v1.3.35

func PCAStats(net *Network, lg *elog.Logs, stats *estats.Stats)

PCAStats computes PCA statistics on recorded hidden activation patterns from Analyze, Trial log data

func SaveWeights added in v1.3.29

func SaveWeights(net *Network, ctrString, runName string) string

SaveWeights saves network weights to a file named using WeightsFileName information to identify the weights. Only saves on rank 0 when running under MPI. Returns the name of the file saved to, or empty if not saved.

func SaveWeightsIfArgSet added in v1.3.35

func SaveWeightsIfArgSet(net *Network, args *ecmd.Args, ctrString, runName string) string

SaveWeightsIfArgSet saves network weights if the "wts" arg has been set to true, using WeightsFileName information to identify the weights. Only saves on rank 0 when running under MPI. Returns the name of the file saved to, or empty if not saved.

func SetNeuronExtPosNeg added in v1.7.0

func SetNeuronExtPosNeg(ni uint32, nrn *Neuron, val float32)

SetNeuronExtPosNeg sets neuron Ext value based on neuron index, with positive values going into the first unit and negative values rectified to positive in the second unit.

func SigFun

func SigFun(w, gain, off float32) float32

SigFun is the sigmoid function for value w in 0-1 range, with gain and offset params

func SigFun61

func SigFun61(w float32) float32

SigFun61 is the sigmoid function for value w in 0-1 range, with default gain = 6, offset = 1 params

func SigInvFun

func SigInvFun(w, gain, off float32) float32

SigInvFun is the inverse of the sigmoid function

func SigInvFun61

func SigInvFun61(w float32) float32

SigInvFun61 is the inverse of the sigmoid function, with default gain = 6, offset = 1 params
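
For example, a minimal sketch showing that SigFun61 and SigInvFun61 are inverses of each other (the input value is illustrative; assumes "fmt" is imported):

// Sketch: SigFun61 and SigInvFun61 round-trip a value in the 0-1 range.
lin := float32(0.4)
eff := axon.SigFun61(lin)     // sigmoid-transformed value (gain = 6, offset = 1)
back := axon.SigInvFun61(eff) // recovers the original value (~0.4)
fmt.Println(lin, eff, back)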

func SynapseVarByName

func SynapseVarByName(varNm string) (int, error)

SynapseVarByName returns the index of the variable in the Synapse, or error

func ToggleLayersOff added in v1.3.29

func ToggleLayersOff(net *Network, layerNames []string, off bool)

ToggleLayersOff can be used to disable layers in a Network, for example if you are doing an ablation study.
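
For example, a minimal sketch of a lesion test that turns layers off and back on around a testing run (the layer names and runTestTrials function are hypothetical):

// Sketch: temporarily ablate layers for a lesion test.
axon.ToggleLayersOff(net, []string{"CT", "Pulvinar"}, true)  // lesion: turn layers off
runTestTrials()                                              // hypothetical test routine
axon.ToggleLayersOff(net, []string{"CT", "Pulvinar"}, false) // restore: turn them back on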

func WeightsFileName added in v1.3.35

func WeightsFileName(net *Network, ctrString, runName string) string

WeightsFileName returns the default current weights file name, using the train run and epoch counters from looper and the RunName string identifying the tag, parameters, and starting run.

Types

type ActAvgParams

type ActAvgParams struct {
	Nominal   float32     `` /* 745-byte string literal not displayed */
	AdaptGi   slbool.Bool `` /* 349-byte string literal not displayed */
	Offset    float32     `` /* 315-byte string literal not displayed */
	HiTol     float32     `` /* 266-byte string literal not displayed */
	LoTol     float32     `` /* 266-byte string literal not displayed */
	AdaptRate float32     `` /* 263-byte string literal not displayed */
	// contains filtered or unexported fields
}

ActAvgParams represents the nominal average activity levels in the layer and parameters for adapting the computed Gi inhibition levels to maintain average activity within a target range.

func (*ActAvgParams) Adapt added in v1.2.37

func (aa *ActAvgParams) Adapt(gimult *float32, act float32) bool

Adapt adapts the given gi multiplier factor as function of target and actual average activation, given current params.

func (*ActAvgParams) AvgFmAct

func (aa *ActAvgParams) AvgFmAct(avg *float32, act float32, dt float32)

AvgFmAct updates the running-average activation given average activity level in layer

func (*ActAvgParams) Defaults

func (aa *ActAvgParams) Defaults()

func (*ActAvgParams) Update

func (aa *ActAvgParams) Update()

type ActAvgVals added in v1.2.32

type ActAvgVals struct {
	ActMAvg   float32 `` /* 141-byte string literal not displayed */
	ActPAvg   float32 `inactive:"+" desc:"running-average plus-phase activity integrated at Dt.LongAvgTau"`
	AvgMaxGeM float32 `inactive:"+" desc:"running-average max of minus-phase Ge value across the layer integrated at Dt.LongAvgTau"`
	AvgMaxGiM float32 `inactive:"+" desc:"running-average max of minus-phase Gi value across the layer integrated at Dt.LongAvgTau"`
	GiMult    float32 `inactive:"+" desc:"multiplier on inhibition -- adapted to maintain target activity level"`
	// contains filtered or unexported fields
}

ActAvgVals are long-running-average activation levels stored in the LayerVals, for monitoring and adapting inhibition and possibly scaling parameters.

type ActInitParams

type ActInitParams struct {
	Vm     float32 `def:"0.3" desc:"initial membrane potential -- see Erev.L for the resting potential (typically .3)"`
	Act    float32 `def:"0" desc:"initial activation value -- typically 0"`
	GeBase float32 `` /* 268-byte string literal not displayed */
	GiBase float32 `` /* 235-byte string literal not displayed */
	GeVar  float32 `` /* 167-byte string literal not displayed */
	GiVar  float32 `` /* 167-byte string literal not displayed */
	// contains filtered or unexported fields
}

ActInitParams are initial values for key network state variables. Initialized in InitActs called by InitWts, and provides target values for DecayState.

func (*ActInitParams) Defaults

func (ai *ActInitParams) Defaults()

func (*ActInitParams) GetGeBase added in v1.7.7

func (ai *ActInitParams) GetGeBase() float32

GetGeBase returns the baseline Ge value: Ge + rand(GeVar) > 0

func (*ActInitParams) GetGiBase added in v1.7.7

func (ai *ActInitParams) GetGiBase() float32

GetGiBase returns the baseline Gi value: Gi + rand(GiVar) > 0

func (*ActInitParams) Update

func (ai *ActInitParams) Update()

type ActParams

type ActParams struct {
	Spike   SpikeParams       `view:"inline" desc:"Spiking function parameters"`
	Dend    DendParams        `view:"inline" desc:"dendrite-specific parameters"`
	Init    ActInitParams     `` /* 155-byte string literal not displayed */
	Decay   DecayParams       `` /* 233-byte string literal not displayed */
	Dt      DtParams          `view:"inline" desc:"time and rate constants for temporal derivatives / updating of activation state"`
	Gbar    chans.Chans       `view:"inline" desc:"[Defaults: 1, .2, 1, 1] maximal conductances levels for channels"`
	Erev    chans.Chans       `view:"inline" desc:"[Defaults: 1, .3, .25, .1] reversal potentials for each channel"`
	Clamp   ClampParams       `view:"inline" desc:"how external inputs drive neural activations"`
	Noise   SpikeNoiseParams  `view:"inline" desc:"how, where, when, and how much noise to add"`
	VmRange minmax.F32        `` /* 165-byte string literal not displayed */
	Mahp    chans.MahpParams  `` /* 173-byte string literal not displayed */
	Sahp    chans.SahpParams  `` /* 182-byte string literal not displayed */
	KNa     chans.KNaMedSlow  `` /* 220-byte string literal not displayed */
	NMDA    chans.NMDAParams  `` /* 252-byte string literal not displayed */
	GABAB   chans.GABABParams `view:"inline" desc:"GABA-B / GIRK channel parameters"`
	VGCC    chans.VGCCParams  `` /* 159-byte string literal not displayed */
	AK      chans.AKsParams   `` /* 135-byte string literal not displayed */
	SKCa    chans.SKCaParams  `` /* 140-byte string literal not displayed */
	Attn    AttnParams        `view:"inline" desc:"Attentional modulation parameters: how Attn modulates Ge"`
}

axon.ActParams contains all the activation computation params and functions for basic Axon, at the neuron level. This is included in axon.Layer to drive the computation.

func (*ActParams) DecayState

func (ac *ActParams) DecayState(nrn *Neuron, decay, glong float32)

DecayState decays the activation state toward initial values in proportion to given decay parameter. Special case values such as Glong and KNa are also decayed with their separately parameterized values. Called with ac.Decay.Act by Layer during NewState

func (*ActParams) Defaults

func (ac *ActParams) Defaults()

func (*ActParams) GSkCaFmCa added in v1.7.0

func (ac *ActParams) GSkCaFmCa(nrn *Neuron)

GSkCaFmCa updates the SKCa channel if used

func (*ActParams) GeFmSyn added in v1.5.12

func (ac *ActParams) GeFmSyn(ctx *Context, ni uint32, nrn *Neuron, geSyn, geExt float32)

GeFmSyn integrates Ge excitatory conductance from GeSyn. geExt is extra conductance to add to the final Ge value

func (*ActParams) GeNoise added in v1.3.23

func (ac *ActParams) GeNoise(ctx *Context, ni uint32, nrn *Neuron)

GeNoise updates nrn.GeNoise if active

func (*ActParams) GiFmSyn added in v1.5.12

func (ac *ActParams) GiFmSyn(ctx *Context, ni uint32, nrn *Neuron, giSyn float32) float32

GiFmSyn integrates GiSyn inhibitory synaptic conductance from GiRaw value (can add other terms to geRaw prior to calling this)

func (*ActParams) GiNoise added in v1.3.23

func (ac *ActParams) GiNoise(ctx *Context, ni uint32, nrn *Neuron)

GiNoise updates nrn.GiNoise if active

func (*ActParams) GkFmVm added in v1.6.0

func (ac *ActParams) GkFmVm(nrn *Neuron)

GkFmVm updates all the Gk-based conductances: Mahp, KNa, Gak

func (*ActParams) GvgccFmVm added in v1.3.24

func (ac *ActParams) GvgccFmVm(nrn *Neuron)

GvgccFmVm updates all the VGCC voltage-gated calcium channel variables from VmDend

func (*ActParams) InetFmG

func (ac *ActParams) InetFmG(vm, ge, gl, gi, gk float32) float32

InetFmG computes net current from conductances and Vm

func (*ActParams) InitActs

func (ac *ActParams) InitActs(nrn *Neuron)

InitActs initializes activation state in neuron -- called during InitWts but otherwise not automatically called (DecayState is used instead)

func (*ActParams) InitLongActs added in v1.2.66

func (ac *ActParams) InitLongActs(nrn *Neuron)

InitLongActs initializes longer time-scale activation states in neuron (SpkPrv, SpkSt*, ActM, ActP, GeM) Called from InitActs, which is called from InitWts, but otherwise not automatically called (DecayState is used instead)

func (*ActParams) NMDAFmRaw added in v1.3.1

func (ac *ActParams) NMDAFmRaw(nrn *Neuron, geTot float32)

NMDAFmRaw updates all the NMDA variables from total Ge (GeRaw + Ext) and current Vm, Spiking

func (*ActParams) SpikeFmVm added in v1.6.12

func (ac *ActParams) SpikeFmVm(nrn *Neuron)

SpikeFmVm computes Spike from Vm and ISI-based activation

func (*ActParams) Update

func (ac *ActParams) Update()

Update must be called after any changes to parameters

func (*ActParams) VmFmG

func (ac *ActParams) VmFmG(nrn *Neuron)

VmFmG computes membrane potential Vm from conductances Ge, Gi, and Gk.

func (*ActParams) VmFmInet added in v1.2.95

func (ac *ActParams) VmFmInet(vm, dt, inet float32) float32

VmFmInet computes new Vm value from inet, clamping range

func (*ActParams) VmInteg added in v1.2.96

func (ac *ActParams) VmInteg(vm, dt, ge, gl, gi, gk float32, nvm, inet *float32)

VmInteg integrates Vm over VmSteps to obtain a more stable value. Returns the new Vm and inet values.

type AttnParams added in v1.2.85

type AttnParams struct {
	On  slbool.Bool `desc:"is attentional modulation active?"`
	Min float32     `viewif:"On" desc:"minimum act multiplier if attention is 0"`
	// contains filtered or unexported fields
}

AttnParams determine how the Attn modulates Ge

func (*AttnParams) Defaults added in v1.2.85

func (at *AttnParams) Defaults()

func (*AttnParams) ModVal added in v1.2.85

func (at *AttnParams) ModVal(val float32, attn float32) float32

ModVal returns the attn-modulated value -- attn must be between 0 and 1

func (*AttnParams) Update added in v1.2.85

func (at *AttnParams) Update()

type AvgMaxPhases added in v1.7.0

type AvgMaxPhases struct {
	Cycle minmax.AvgMax32 `inactive:"+" view:"inline" desc:"updated every cycle -- this is the source of all subsequent time scales"`
	Minus minmax.AvgMax32 `inactive:"+" view:"inline" desc:"at the end of the minus phase"`
	Plus  minmax.AvgMax32 `inactive:"+" view:"inline" desc:"at the end of the plus phase"`
}

AvgMaxPhases contains the average and maximum values over a Pool of neurons, at different time scales within a standard ThetaCycle of updating. It is much more efficient on the GPU to just grab everything in one pass at the cycle level, and then take snapshots from there. All of the cycle level values are updated at the *start* of the cycle based on values from the prior cycle -- thus are 1 cycle behind in general.

func (*AvgMaxPhases) CycleToMinus added in v1.7.0

func (am *AvgMaxPhases) CycleToMinus()

CycleToMinus grabs current Cycle values into the Minus phase values

func (*AvgMaxPhases) CycleToPlus added in v1.7.0

func (am *AvgMaxPhases) CycleToPlus()

CycleToPlus grabs current Cycle values into the Plus phase values

type AxonLayer

type AxonLayer interface {
	emer.Layer

	// AsAxon returns this layer as a axon.Layer -- so that the AxonLayer
	// interface does not need to include accessors to all the basic stuff
	AsAxon() *Layer

	// LayerType returns the axon-specific LayerTypes type
	LayerType() LayerTypes

	// NeurStartIdx is the starting index in global network slice of neurons for
	// neurons in this layer -- convenience interface method for threading dispatch.
	NeurStartIdx() int

	// SetBuildConfig sets named configuration parameter to given string value
	// to be used in the PostBuild stage -- mainly for layer names that need to be
	// looked up and turned into indexes, after entire network is built.
	SetBuildConfig(param, val string)

	// PostBuild performs special post-Build() configuration steps for specific algorithms,
	// using configuration data from SetBuildConfig during the ConfigNet process.
	PostBuild()

	// InitWts initializes the weight values in the network, i.e., resetting learning
	// Also calls InitActs
	InitWts()

	// InitActAvg initializes the running-average activation values that drive learning.
	InitActAvg()

	// InitActs fully initializes activation state -- only called automatically during InitWts
	InitActs()

	// InitWtSym initializes the weight symmetry -- higher layers copy weights from lower layers
	InitWtSym()

	// InitGScale computes the initial scaling factor for synaptic input conductances G,
	// stored in GScale.Scale, based on sending layer initial activation.
	InitGScale()

	// InitExt initializes external input state -- called prior to apply ext
	InitExt()

	// ApplyExt applies external input in the form of an etensor.Tensor
	// If the layer is a Target or Compare layer type, then it goes in Target
	// otherwise it goes in Ext.
	ApplyExt(ext etensor.Tensor)

	// ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats
	// If the layer is a Target or Compare layer type, then it goes in Target
	// otherwise it goes in Ext
	ApplyExt1D(ext []float64)

	// UpdateExtFlags updates the neuron flags for external input based on current
	// layer Type field -- call this if the Type has changed since the last
	// ApplyExt* method call.
	UpdateExtFlags()

	// IsTarget returns true if this layer is a Target layer.
	// By default, returns true for layers of Type == emer.Target
	// Other Target layers include the TRCLayer in deep predictive learning.
	// It is also used in SynScale to not apply it to target layers.
	// In both cases, Target layers are purely error-driven.
	IsTarget() bool

	// IsInput returns true if this layer is an Input layer.
	// By default, returns true for layers of Type == emer.Input
	// Used to prevent adapting of inhibition or TrgAvg values.
	IsInput() bool

	// RecvPrjns returns the slice of receiving projections for this layer
	RecvPrjns() *AxonPrjns

	// SendPrjns returns the slice of sending projections for this layer
	SendPrjns() *AxonPrjns

	// GatherSpikes integrates G*Raw and G*Syn values for given neuron
	// while integrating the Prjn-level GSyn integrated values.
	// ni is layer-specific index of neuron within its layer.
	GatherSpikes(ctx *Context, ni uint32, nrn *Neuron)

	// GiFmSpikes integrates new inhibitory conductances from Spikes
	// at the layer and pool level
	GiFmSpikes(ctx *Context)

	// PoolGiFmSpikes computes inhibition Gi from Spikes within relevant Pools
	// this is a second pass after GiFmSpikes so that it can also deal with
	// between-layer inhibition.
	PoolGiFmSpikes(ctx *Context)

	// CycleNeuron does one cycle (msec) of updating at the neuron level
	// calls the following via this AxonLay interface:
	// * GInteg
	// * SpikeFmG
	// * PostAct
	CycleNeuron(ctx *Context, ni uint32, nrn *Neuron)

	// GInteg integrates conductances G over time (Ge, NMDA, etc).
	// reads pool Gi values.
	GInteg(ctx *Context, ni uint32, nrn *Neuron, pl *Pool, vals *LayerVals)

	// SpikeFmG computes Vm from Ge, Gi, Gl conductances and then Spike from that
	SpikeFmG(ctx *Context, ni uint32, nrn *Neuron)

	// PostSpike does updates at neuron level after spiking has been computed.
	// This is where special layer types add extra code.
	PostSpike(ctx *Context, ni uint32, nrn *Neuron)

	// SendSpike sends spike to receivers -- last step in Cycle, integrated
	// the next time around.
	// Writes to sending projections for this neuron.
	SendSpike(ctx *Context)

	// CyclePost is called after the standard Cycle update, as a separate
	// network layer loop.
	// This is reserved for any kind of special ad-hoc types that
	// need to do something special after Spiking is finally computed and Sent.
	// It ONLY runs on the CPU, not the GPU -- should update global values
	// in the Context state which are re-sync'd back to GPU,
	// and values in other layers MUST come from LayerVals because
	// this is the only data that is sync'd back from the GPU each cycle.
	// For example, updating a neuromodulatory signal such as dopamine.
	CyclePost(ctx *Context)

	// NewState handles all initialization at start of new input pattern,
	// including computing Ge scaling from running average activation etc.
	// should already have presented the external input to the network at this point.
	NewState(ctx *Context)

	// DecayState decays activation state by given proportion (default is on ly.Params.Act.Init.Decay)
	DecayState(ctx *Context, decay, glong float32)

	// MinusPhase does updating after end of minus phase
	MinusPhase(ctx *Context)

	// PlusPhase does updating after end of plus phase
	PlusPhase(ctx *Context)

	// SpkSt1 saves current activations into SpkSt1
	SpkSt1(ctx *Context)

	// SpkSt2 saves current activations into SpkSt2
	SpkSt2(ctx *Context)

	// CorSimFmActs computes the correlation similarity
	// (centered cosine aka normalized dot product)
	// in activation state between minus and plus phases
	// (1 = identical, 0 = uncorrelated).
	CorSimFmActs()

	// DWtLayer does weight change at the layer level.
	// does NOT call main projection-level DWt method.
	// in base, only calls DTrgAvgFmErr
	DWtLayer(ctx *Context)

	// WtFmDWtLayer does weight update at the layer level.
	// does NOT call main projection-level WtFmDWt method.
	// in base, only calls TrgAvgFmD
	WtFmDWtLayer(ctx *Context)

	// SlowAdapt is the layer-level slow adaptation functions.
	// Calls AdaptInhib and AvgDifFmTrgAvg for Synaptic Scaling.
	// Does NOT call projection-level methods.
	SlowAdapt(ctx *Context)

	// SynFail updates synaptic weight failure only -- normally done as part of DWt
	// and WtFmDWt, but this call can be used during testing to update failing synapses.
	SynFail(ctx *Context)
}

AxonLayer defines the essential algorithmic API for Axon, at the layer level. These are the methods that the axon.Network calls on its layers at each step of processing. Other Layer types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.

All of the structural API is in emer.Layer, which this interface also inherits for convenience.

type AxonNetwork

type AxonNetwork interface {
	emer.Network

	// AsAxon returns this network as a axon.Network -- so that the
	// AxonNetwork interface does not need to include accessors
	// to all the basic stuff
	AsAxon() *Network

	// NewStateImpl handles all initialization at start of new input pattern, including computing
	// input scaling from running average activation etc.
	NewStateImpl(ctx *Context)

	// Cycle handles entire update for one cycle (msec) of neuron activity state.
	CycleImpl(ctx *Context)

	// MinusPhaseImpl does updating after minus phase
	MinusPhaseImpl(ctx *Context)

	// PlusPhaseImpl does updating after plus phase
	PlusPhaseImpl(ctx *Context)

	// DWtImpl computes the weight change (learning) based on current
	// running-average activation values
	DWtImpl(ctx *Context)

	// WtFmDWtImpl updates the weights from delta-weight changes.
	// Also calls SynScale every Interval times
	WtFmDWtImpl(ctx *Context)

	// SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling,
	// GScale conductance scaling, and adapting inhibition
	SlowAdapt(ctx *Context)
}

AxonNetwork defines the essential algorithmic API for Axon, at the network level. These are the methods that the user calls in their Sim code:

* NewState
* Cycle
* NewPhase
* DWt
* WtFmDWt

Because we don't want to have to force the user to use the interface cast in calling these methods, we provide Impl versions here that are the implementations which the user-facing method calls through the interface cast. Specialized algorithms should thus only change the Impl version, which is what is exposed here in this interface.

There is now a strong constraint that all Cycle level computation takes place in one pass at the Layer level, which greatly improves threading efficiency.

All of the structural API is in emer.Network, which this interface also inherits for convenience.
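
For example, a sketch of one theta-cycle trial written directly against these user-facing methods (without the looper helpers), assuming user-facing wrappers (NewState, Cycle, MinusPhase, PlusPhase, DWt, WtFmDWt) matching the Impl signatures above; the 200-cycle trial length and 150-cycle plus-phase start are typical but illustrative, and applyInputs is a hypothetical per-sim function:

// Sketch of one theta-cycle trial using the user-facing Network methods.
// ctx is a shared *axon.Context; applyInputs is a hypothetical sim function.
ctx.NewState(etime.Train)
net.NewState(ctx)
applyInputs(net)
for cyc := 0; cyc < 200; cyc++ {
	net.Cycle(ctx)
	ctx.CycleInc()
	if cyc == 149 { // end of minus phase
		net.MinusPhase(ctx)
		ctx.NewPhase(true) // start plus phase
	}
}
net.PlusPhase(ctx)
net.DWt(ctx)
net.WtFmDWt(ctx)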

type AxonPrjn

type AxonPrjn interface {
	emer.Prjn

	// AsAxon returns this prjn as a axon.Prjn -- so that the AxonPrjn
	// interface does not need to include accessors to all the basic stuff.
	AsAxon() *Prjn

	// PrjnType returns the axon-specific PrjnTypes type
	PrjnType() PrjnTypes

	// InitWts initializes weight values according to Learn.WtInit params
	InitWts()

	// InitWtSym initializes weight symmetry -- is given the reciprocal projection where
	// the Send and Recv layers are reversed.
	InitWtSym(rpj AxonPrjn)

	// InitGBuffs initializes the per-projection synaptic conductance buffers.
	// This is not typically needed (called during InitWts, InitActs)
	// but can be called when needed.  Must be called to completely initialize
	// prior activity, e.g., full Glong clearing.
	InitGBuffs()

	// SendSpike sends a spike from sending neuron index si,
	// to add to buffer on receivers.
	SendSpike(ctx *Context, sendIdx int, nrn *Neuron)

	// RecvSpikes receives spikes from the sending neurons at index sendIdx
	// into the GBuf buffer on the receiver side. The buffer on the receiver side
	// is a ring buffer, which is used for modelling the time delay between
	// sending and receiving spikes.
	RecvSpikes(ctx *Context, recvIdx int)

	// SendSynCa updates synaptic calcium based on spiking, for SynSpkTheta mode.
	// Optimized version only updates at point of spiking.
	// This pass goes through in sending order, filtering on sending spike.
	SendSynCa(ctx *Context)

	// RecvSynCa updates synaptic calcium based on spiking, for SynSpkTheta mode.
	// Optimized version only updates at point of spiking.
	// This pass goes through in recv order, filtering on recv spike.
	RecvSynCa(ctx *Context)

	// DWt computes the weight change (learning) -- on sending projections.
	DWt(ctx *Context)

	// DWtSubMean subtracts the mean from any projections that have SubMean > 0.
	// This is called on *receiving* projections, prior to WtFmDwt.
	DWtSubMean(ctx *Context)

	// WtFmDWt updates the synaptic weight values from delta-weight changes,
	// on sending projections
	WtFmDWt(ctx *Context)

	// SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling,
	// GScale conductance scaling, and adapting inhibition
	SlowAdapt(ctx *Context)

	// SynFail updates synaptic weight failure only -- normally done as part of DWt
	// and WtFmDWt, but this call can be used during testing to update failing synapses.
	SynFail(ctx *Context)
}

AxonPrjn defines the essential algorithmic API for Axon, at the projection level. These are the methods that the axon.Layer calls on its prjns at each step of processing. Other Prjn types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.

All of the structural API is in emer.Prjn, which this interface also inherits for convenience.

type AxonPrjns added in v1.6.14

type AxonPrjns []AxonPrjn

func (*AxonPrjns) Add added in v1.6.14

func (pl *AxonPrjns) Add(p AxonPrjn)

type BLAParams added in v1.7.0

type BLAParams struct {
	NegLRate float32 `` /* 143-byte string literal not displayed */
	// contains filtered or unexported fields
}

BLAParams has parameters for basolateral amygdala. Most of BLA learning is handled by NeuroMod settings for DA and ACh modulation.

func (*BLAParams) Defaults added in v1.7.0

func (bp *BLAParams) Defaults()

func (*BLAParams) Update added in v1.7.0

func (bp *BLAParams) Update()

type BurstParams added in v1.7.0

type BurstParams struct {
	ThrRel float32 `` /* 348-byte string literal not displayed */
	ThrAbs float32 `` /* 241-byte string literal not displayed */
	// contains filtered or unexported fields
}

BurstParams determine how the 5IB Burst activation is computed from CaSpkP integrated spiking values in Super layers -- thresholded.

func (*BurstParams) Defaults added in v1.7.0

func (bp *BurstParams) Defaults()

func (*BurstParams) ThrFmAvgMax added in v1.7.0

func (bp *BurstParams) ThrFmAvgMax(avg, mx float32) float32

ThrFmAvgMax returns threshold from average and maximum values

func (*BurstParams) Update added in v1.7.0

func (bp *BurstParams) Update()

type CTParams added in v1.7.0

type CTParams struct {
	GeGain   float32 `` /* 239-byte string literal not displayed */
	DecayTau float32 `` /* 227-byte string literal not displayed */
	DecayDt  float32 `view:"-" json:"-" xml:"-" desc:"1 / tau"`
	// contains filtered or unexported fields
}

CTParams control the CT corticothalamic neuron special behavior

func (*CTParams) Defaults added in v1.7.0

func (cp *CTParams) Defaults()

func (*CTParams) Update added in v1.7.0

func (cp *CTParams) Update()

type CaLrnParams added in v1.5.1

type CaLrnParams struct {
	Norm      float32           `` /* 188-byte string literal not displayed */
	SpkVGCC   slbool.Bool       `` /* 133-byte string literal not displayed */
	SpkVgccCa float32           `def:"35" desc:"multiplier on spike for computing Ca contribution to CaLrn in SpkVGCC mode"`
	VgccTau   float32           `` /* 268-byte string literal not displayed */
	Dt        kinase.CaDtParams `view:"inline" desc:"time constants for integrating CaLrn across M, P and D cascading levels"`
	VgccDt    float32           `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"`
	NormInv   float32           `view:"-" json:"-" xml:"-" inactive:"+" desc:"= 1 / Norm"`
	// contains filtered or unexported fields
}

CaLrnParams parameterizes the neuron-level calcium signals driving learning: CaLrn = NMDA + VGCC Ca sources, where VGCC can be simulated from spiking or use the more complex and dynamic VGCC channel directly. CaLrn is then integrated in a cascading manner at multiple time scales: CaM (as in calmodulin), CaP (ltP, CaMKII, plus phase), CaD (ltD, DAPK1, minus phase).

func (*CaLrnParams) CaLrn added in v1.5.1

func (np *CaLrnParams) CaLrn(nrn *Neuron)

CaLrn updates the CaLrn value and its cascaded values, based on NMDA, VGCC Ca it first calls VgccCa to update the spike-driven version of that variable, and perform its time-integration.

func (*CaLrnParams) Defaults added in v1.5.1

func (np *CaLrnParams) Defaults()

func (*CaLrnParams) Update added in v1.5.1

func (np *CaLrnParams) Update()

func (*CaLrnParams) VgccCa added in v1.5.1

func (np *CaLrnParams) VgccCa(nrn *Neuron)

VgccCa updates the simulated VGCC calcium from spiking, if that option is selected, and performs time-integration of VgccCa

type CaSpkParams added in v1.5.1

type CaSpkParams struct {
	SpikeG float32 `` /* 464-byte string literal not displayed */
	SynTau float32 `` /* 415-byte string literal not displayed */

	SynDt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"`

	Dt kinase.CaDtParams `` /* 202-byte string literal not displayed */
	// contains filtered or unexported fields
}

CaSpkParams parameterizes the neuron-level spike-driven calcium signals, starting with CaSyn that is integrated at the neuron level and drives synapse-level, pre * post Ca integration, which provides the Tr trace that multiplies error signals, and drives learning directly for Target layers. CaSpk* values are integrated separately at the Neuron level and used for UpdtThr and RLRate as a proxy for the activation (spiking) based learning signal.

func (*CaSpkParams) CaFmSpike added in v1.5.1

func (np *CaSpkParams) CaFmSpike(nrn *Neuron)

CaFmSpike computes CaSpk* and CaSyn calcium signals based on current spike.

func (*CaSpkParams) Defaults added in v1.5.1

func (np *CaSpkParams) Defaults()

func (*CaSpkParams) Update added in v1.5.1

func (np *CaSpkParams) Update()

type ClampParams

type ClampParams struct {
	IsInput  slbool.Bool `inactive:"+" desc:"is this a clamped input layer?  set automatically based on layer type at initialization"`
	IsTarget slbool.Bool `inactive:"+" desc:"is this a target layer?  set automatically based on layer type at initialization"`
	Ge       float32     `def:"0.8,1.5" desc:"amount of Ge driven for clamping -- generally use 0.8 for Target layers, 1.5 for Input layers"`
	Add      slbool.Bool `` /* 207-byte string literal not displayed */
	ErrThr   float32     `def:"0.5" desc:"threshold on neuron Act activity to count as active for computing error relative to target in PctErr method"`
	// contains filtered or unexported fields
}

ClampParams specify how external inputs drive excitatory conductances (like a current clamp) -- either adds or overwrites existing conductances. Noise is added in either case.

func (*ClampParams) Defaults

func (cp *ClampParams) Defaults()

func (*ClampParams) Update

func (cp *ClampParams) Update()

type Context added in v1.7.0

type Context struct {
	Mode        etime.Modes `desc:"current evaluation mode, e.g., Train, Test, etc"`
	Phase       int32       `desc:"phase counter: typically 0-1 for minus-plus but can be more phases for other algorithms"`
	PlusPhase   slbool.Bool `` /* 126-byte string literal not displayed */
	PhaseCycle  int32       `desc:"cycle within current phase -- minus or plus"`
	Cycle       int32       `` /* 156-byte string literal not displayed */
	ThetaCycles int32       `` /* 173-byte string literal not displayed */
	CycleTot    int32       `` /* 260-byte string literal not displayed */
	Time        float32     `desc:"accumulated amount of time the network has been running, in simulation-time (not real world time), in seconds"`
	Testing     slbool.Bool `` /* 179-byte string literal not displayed */
	TimePerCyc  float32     `def:"0.001" desc:"amount of time to increment per cycle"`

	RandCtr  slrand.Counter `` /* 226-byte string literal not displayed */
	NeuroMod NeuroModVals   `` /* 201-byte string literal not displayed */
	// contains filtered or unexported fields
}

Context contains all of the global context state info that is shared across every step of the computation. It is passed around to all relevant computational functions, and is updated on the CPU and synced to the GPU after every cycle. It is the *only* mechanism for communication from CPU to GPU. It contains timing, Testing vs. Training mode, random number context, global neuromodulation, etc.

func NewContext added in v1.7.0

func NewContext() *Context

NewContext returns a new Context struct with default parameters

func (*Context) CycleInc added in v1.7.0

func (tm *Context) CycleInc()

CycleInc increments at the cycle level

func (*Context) Defaults added in v1.7.0

func (tm *Context) Defaults()

Defaults sets default values

func (*Context) NewPhase added in v1.7.0

func (tm *Context) NewPhase(plusPhase bool)

NewPhase resets PhaseCycle = 0 and sets the plus phase as specified

func (*Context) NewState added in v1.7.0

func (tm *Context) NewState(mode etime.Modes)

NewState resets counters at start of new state (trial) of processing. Pass the evaluation mode associated with this new state -- if !Train then Testing will be set to true.

func (*Context) Reset added in v1.7.0

func (tm *Context) Reset()

Reset resets the counters all back to zero
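
For example, a minimal sketch of driving the Context counters across training and testing (assumes the etime package from emergent is imported):

// Sketch: one shared Context drives counters across modes.
ctx := axon.NewContext()
ctx.NewState(etime.Train) // new trial in training mode; Testing stays false
// ... run the training trial, calling ctx.CycleInc() each cycle ...
ctx.NewState(etime.Test)  // non-Train modes set Testing to true
// ... run the test trial ...
ctx.Reset()               // zero all counters, e.g., at the start of a new run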

type CorSimStats added in v1.3.35

type CorSimStats struct {
	Cor float32 `` /* 203-byte string literal not displayed */
	Avg float32 `` /* 138-byte string literal not displayed */
	Var float32 `` /* 139-byte string literal not displayed */
	// contains filtered or unexported fields
}

CorSimStats holds correlation similarity (centered cosine aka normalized dot product) statistics at the layer level

func (*CorSimStats) Init added in v1.3.35

func (cd *CorSimStats) Init()

type DAModTypes added in v1.7.0

type DAModTypes int32

DAModTypes are types of dopamine modulation of neural activity.

const (
	// NoDAMod means there is no effect of dopamine on neural activity
	NoDAMod DAModTypes = iota

	// D1Mod is for neurons that primarily express dopamine D1 receptors,
	// which are excitatory from DA bursts, inhibitory from dips.
	// Cortical neurons can generally use this type, while subcortical
	// populations are more diverse in having both D1 and D2 subtypes.
	D1Mod

	// D2Mod is for neurons that primarily express dopamine D2 receptors,
	// which are excitatory from DA dips, inhibitory from bursts.
	D2Mod

	// D1AbsMod is like D1Mod, except the absolute value of DA is used
	// instead of the signed value.
	// There are a subset of DA neurons that send increased DA for
	// both negative and positive outcomes, targeting frontal neurons.
	D1AbsMod

	DAModTypesN
)

func (*DAModTypes) FromString added in v1.7.0

func (i *DAModTypes) FromString(s string) error

func (DAModTypes) String added in v1.7.0

func (i DAModTypes) String() string

type DecayParams added in v1.2.59

type DecayParams struct {
	Act   float32 `` /* 391-byte string literal not displayed */
	Glong float32 `` /* 332-byte string literal not displayed */
	AHP   float32 `` /* 198-byte string literal not displayed */
	// contains filtered or unexported fields
}

DecayParams control the decay of activation state in the DecayState function called in NewState when a new state is to be processed.

func (*DecayParams) Defaults added in v1.2.59

func (ai *DecayParams) Defaults()

func (*DecayParams) Update added in v1.2.59

func (ai *DecayParams) Update()

type DendParams added in v1.2.95

type DendParams struct {
	GbarExp float32     `` /* 221-byte string literal not displayed */
	GbarR   float32     `` /* 150-byte string literal not displayed */
	SSGi    float32     `` /* 337-byte string literal not displayed */
	HasMod  slbool.Bool `` /* 184-byte string literal not displayed */
	ModGain float32     `` /* 162-byte string literal not displayed */
	// contains filtered or unexported fields
}

DendParams are the parameters for updating dendrite-specific dynamics

func (*DendParams) Defaults added in v1.2.95

func (dp *DendParams) Defaults()

func (*DendParams) Update added in v1.2.95

func (dp *DendParams) Update()

type DtParams

type DtParams struct {
	Integ       float32 `` /* 649-byte string literal not displayed */
	VmTau       float32 `` /* 328-byte string literal not displayed */
	VmDendTau   float32 `` /* 335-byte string literal not displayed */
	VmSteps     int32   `` /* 223-byte string literal not displayed */
	GeTau       float32 `def:"5" min:"1" desc:"time constant for decay of excitatory AMPA receptor conductance."`
	GiTau       float32 `def:"7" min:"1" desc:"time constant for decay of inhibitory GABAa receptor conductance."`
	IntTau      float32 `` /* 393-byte string literal not displayed */
	LongAvgTau  float32 `` /* 336-byte string literal not displayed */
	MaxCycStart int32   `` /* 138-byte string literal not displayed */

	VmDt      float32 `view:"-" json:"-" xml:"-" desc:"nominal rate = Integ / tau"`
	VmDendDt  float32 `view:"-" json:"-" xml:"-" desc:"nominal rate = Integ / tau"`
	DtStep    float32 `view:"-" json:"-" xml:"-" desc:"1 / VmSteps"`
	GeDt      float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"`
	GiDt      float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"`
	IntDt     float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"`
	LongAvgDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"`
}

DtParams are time and rate constants for temporal derivatives in Axon (Vm, G)

func (*DtParams) AvgVarUpdt added in v1.2.45

func (dp *DtParams) AvgVarUpdt(avg, vr *float32, val float32)

AvgVarUpdt updates the average and variance from current value, using LongAvgDt

func (*DtParams) Defaults

func (dp *DtParams) Defaults()

func (*DtParams) GeSynFmRaw added in v1.2.97

func (dp *DtParams) GeSynFmRaw(geSyn, geRaw float32) float32

GeSynFmRaw integrates a synaptic conductance from raw spiking using GeTau

func (*DtParams) GeSynFmRawSteady added in v1.5.12

func (dp *DtParams) GeSynFmRawSteady(geRaw float32) float32

GeSynFmRawSteady returns the steady-state GeSyn that would result from receiving a steady increment of GeRaw every time step = raw * GeTau. dSyn = Raw - dt*Syn; solve for dSyn = 0 to get steady state: dt*Syn = Raw; Syn = Raw / dt = Raw * Tau

func (*DtParams) GiSynFmRaw added in v1.2.97

func (dp *DtParams) GiSynFmRaw(giSyn, giRaw float32) float32

GiSynFmRaw integrates a synaptic conductance from raw spiking using GiTau

func (*DtParams) GiSynFmRawSteady added in v1.5.12

func (dp *DtParams) GiSynFmRawSteady(giRaw float32) float32

GiSynFmRawSteady returns the steady-state GiSyn that would result from receiving a steady increment of GiRaw every time step = raw * GiTau. dSyn = Raw - dt*Syn; solve for dSyn = 0 to get steady state: dt*Syn = Raw; Syn = Raw / dt = Raw * Tau
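
For example, a minimal sketch showing that iterating GeSynFmRaw with a constant raw input converges to the GeSynFmRawSteady value (per the Raw * Tau formula above; the raw value and iteration count are illustrative, and "fmt" is assumed imported):

// Sketch: repeated integration converges to the steady-state value.
dp := axon.DtParams{}
dp.Defaults()
dp.Update()
raw := float32(0.05)
geSyn := float32(0)
for i := 0; i < 100; i++ {
	geSyn = dp.GeSynFmRaw(geSyn, raw)
}
fmt.Println(geSyn, dp.GeSynFmRawSteady(raw)) // both ~= raw * GeTau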

func (*DtParams) Update

func (dp *DtParams) Update()

type GPLayerTypes added in v1.7.0

type GPLayerTypes int32

GPLayerTypes is a GPLayer axon-specific layer type enum.

const (
	// GPeOut is Outer layer of GPe neurons, receiving inhibition from MtxGo
	GPeOut GPLayerTypes = iota

	// GPeIn is Inner layer of GPe neurons, receiving inhibition from GPeOut and MtxNo
	GPeIn

	// GPeTA is arkypallidal layer of GPe neurons, receiving inhibition from GPeIn
	// and projecting inhibition to Mtx
	GPeTA

	// GPi is the inner globus pallidus, functionally equivalent to SNr,
	// receiving from MtxGo and GPeIn, and sending inhibition to VThal
	GPi

	GPLayerTypesN
)

The GPLayer types

func (*GPLayerTypes) FromString added in v1.7.0

func (i *GPLayerTypes) FromString(s string) error

func (GPLayerTypes) MarshalJSON added in v1.7.0

func (ev GPLayerTypes) MarshalJSON() ([]byte, error)

func (GPLayerTypes) String added in v1.7.0

func (i GPLayerTypes) String() string

func (*GPLayerTypes) UnmarshalJSON added in v1.7.0

func (ev *GPLayerTypes) UnmarshalJSON(b []byte) error

type GPParams added in v1.7.0

type GPParams struct {
	GPType GPLayerTypes `viewif:"LayType=GPLayer" view:"inline" desc:"type of GP Layer -- must set during config using SetBuildConfig of GPType."`
	// contains filtered or unexported fields
}

GPLayer represents a globus pallidus layer, including: GPeOut, GPeIn, GPeTA (arkypallidal), and GPi (see GPType for type). Typically just a single unit per Pool representing a given stripe.

func (*GPParams) Defaults added in v1.7.0

func (gp *GPParams) Defaults()

func (*GPParams) Update added in v1.7.0

func (gp *GPParams) Update()

type GScaleVals added in v1.2.37

type GScaleVals struct {
	Scale float32 `` /* 240-byte string literal not displayed */
	Rel   float32 `` /* 159-byte string literal not displayed */
	// contains filtered or unexported fields
}

GScaleVals holds the conductance scaling values. These are computed once at start and remain constant thereafter, and therefore belong on Params and not on PrjnVals.

type InhibParams

type InhibParams struct {
	ActAvg ActAvgParams    `` /* 173-byte string literal not displayed */
	Layer  fsfffb.GiParams `` /* 128-byte string literal not displayed */
	Pool   fsfffb.GiParams `view:"inline" desc:"inhibition across sub-pools of units, for layers with 4D shape"`
}

axon.InhibParams contains all the inhibition computation params and functions for basic Axon. This is included in axon.Layer to support computation. It also includes other misc layer-level params, such as the expected average activation in the layer, which is used for Ge rescaling and potentially for adapting inhibition over time.

func (*InhibParams) Defaults

func (ip *InhibParams) Defaults()

func (*InhibParams) Update

func (ip *InhibParams) Update()

type LRateMod added in v1.6.13

type LRateMod struct {
	On   slbool.Bool `desc:"toggle use of this modulation factor"`
	Base float32     `viewif:"On" min:"0" max:"1" desc:"baseline learning rate -- what you get for correct cases"`

	Range minmax.F32 `` /* 191-byte string literal not displayed */
	// contains filtered or unexported fields
}

LRateMod implements global learning rate modulation, based on a performance-based factor, for example error. Increasing levels of the factor = higher learning rate. This can be added to a Sim and called prior to DWt() to dynamically change lrate based on overall network performance.

func (*LRateMod) Defaults added in v1.6.13

func (lr *LRateMod) Defaults()

func (*LRateMod) LRateMod added in v1.6.13

func (lr *LRateMod) LRateMod(net *Network, fact float32) float32

LRateMod calls LRateMod on the given network, using the computed Mod factor based on the given normalized modulation factor (0 = no error = Base learning rate, 1 = maximum error). Returns the modulation factor applied.

func (*LRateMod) Mod added in v1.6.13

func (lr *LRateMod) Mod(fact float32) float32

Mod returns the learning rate modulation factor as a function of any kind of normalized modulation factor, e.g., an error measure. If fact <= Range.Min, returns Base. If fact >= Range.Max, returns 1. Otherwise, returns a proportional value between Base..1.
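A minimal sketch of this piecewise mapping (the import path and numeric values are assumptions; in a Sim this factor would then scale the learning rate via the network-level LRateMod call above):

	package main

	import (
		"fmt"

		"github.com/emer/axon/axon"
	)

	func main() {
		var lr axon.LRateMod
		lr.Defaults()
		lr.Base = 0.2 // hypothetical baseline multiplier for fully-correct trials

		for _, fact := range []float32{0, 0.25, 0.5, 1} {
			// <= Range.Min -> Base; >= Range.Max -> 1; otherwise proportional in between
			fmt.Printf("fact=%g  mod=%g\n", fact, lr.Mod(fact))
		}
	}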

func (*LRateMod) Update added in v1.6.13

func (lr *LRateMod) Update()

type LRateParams added in v1.6.13

type LRateParams struct {
	Base  float32 `` /* 199-byte string literal not displayed */
	Sched float32 `desc:"scheduled learning rate multiplier, simulating reduction in plasticity over aging"`
	Mod   float32 `desc:"dynamic learning rate modulation due to neuromodulatory or other such factors"`
	Eff   float32 `inactive:"+" desc:"effective actual learning rate multiplier used in computing DWt: Eff = Mod * Sched * Base"`
}

LRateParams manages learning rate parameters

func (*LRateParams) Defaults added in v1.6.13

func (ls *LRateParams) Defaults()

func (*LRateParams) Init added in v1.6.13

func (ls *LRateParams) Init()

Init initializes modulation values back to 1 and updates Eff

func (*LRateParams) Update added in v1.6.13

func (ls *LRateParams) Update()

func (*LRateParams) UpdateEff added in v1.7.0

func (ls *LRateParams) UpdateEff()

type LaySpecialVals added in v1.7.0

type LaySpecialVals struct {
	V1 float32 `inactive:"+" desc:"one value"`
	V2 float32 `inactive:"+" desc:"one value"`
	V3 float32 `inactive:"+" desc:"one value"`
	V4 float32 `inactive:"+" desc:"one value"`
}

LaySpecialVals holds special values used to communicate to other layers based on neural values, used for special algorithms such as RL where some of the computation is done algorithmically.

type Layer

type Layer struct {
	LayerBase
	Params *LayerParams `desc:"all layer-level parameters -- these must remain constant once configured"`
	Vals   *LayerVals   `desc:"layer-level state values that are updated during computation"`
}

axon.Layer implements the basic Axon spiking activation function, and manages learning in the projections.

func (*Layer) AdaptInhib added in v1.2.37

func (ly *Layer) AdaptInhib(ctx *Context)

AdaptInhib adapts inhibition

func (*Layer) AllParams

func (ly *Layer) AllParams() string

AllParams returns a listing of all parameters in the Layer

func (*Layer) AnyGated added in v1.7.0

func (ly *Layer) AnyGated() bool

AnyGated returns true if the layer-level pool Gated flag is true, which indicates whether any of the pools in the layer gated.

func (*Layer) ApplyExt

func (ly *Layer) ApplyExt(ext etensor.Tensor)

ApplyExt applies external input in the form of an etensor.Float32. If the dimensionality of the tensor matches that of the layer, and is 2D or 4D, then each dimension is iterated separately, so any mismatch preserves dimensional structure. Otherwise, the flat 1D view of the tensor is used. If the layer is a Target or Compare layer type, then the input goes in Target; otherwise it goes in Ext.
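For example, a hedged fragment (it assumes an already-built *axon.Layer with a matching 2D shape, and the etensor package from etable; neither is shown in this listing):

	package example

	import (
		"github.com/emer/axon/axon"
		"github.com/emer/etable/etensor"
	)

	// ApplyPattern applies a simple 2D pattern to an input layer.
	// ly is assumed to be a built *axon.Layer with a 2x3 shape (hypothetical).
	func ApplyPattern(ly *axon.Layer) {
		ext := etensor.NewFloat32([]int{2, 3}, nil, nil)
		ext.SetFloat1D(0, 1) // activate the first unit
		ly.InitExt()         // clear any prior external inputs
		ly.ApplyExt(ext)     // goes to Ext (or Target for Target / Compare layers)
	}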

func (*Layer) ApplyExt1D

func (ly *Layer) ApplyExt1D(ext []float64)

ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats. If the layer is a Target or Compare layer type, then the input goes in Target; otherwise it goes in Ext.

func (*Layer) ApplyExt1D32

func (ly *Layer) ApplyExt1D32(ext []float32)

ApplyExt1D32 applies external input in the form of a flat 1-dimensional slice of float32s. If the layer is a Target or Compare layer type, then the input goes in Target; otherwise it goes in Ext.

func (*Layer) ApplyExt1DTsr

func (ly *Layer) ApplyExt1DTsr(ext etensor.Tensor)

ApplyExt1DTsr applies external input using the 1D flat interface into the tensor. If the layer is a Target or Compare layer type, then the input goes in Target; otherwise it goes in Ext.

func (*Layer) ApplyExt2D

func (ly *Layer) ApplyExt2D(ext etensor.Tensor)

ApplyExt2D applies 2D tensor external input

func (*Layer) ApplyExt2Dto4D

func (ly *Layer) ApplyExt2Dto4D(ext etensor.Tensor)

ApplyExt2Dto4D applies 2D tensor external input to a 4D layer

func (*Layer) ApplyExt4D

func (ly *Layer) ApplyExt4D(ext etensor.Tensor)

ApplyExt4D applies 4D tensor external input

func (*Layer) ApplyExtFlags

func (ly *Layer) ApplyExtFlags() (clrmsk, setmsk NeuronFlags, toTarg bool)

ApplyExtFlags gets the clear mask and set mask for updating neuron flags based on layer type, and whether input should be applied to Target (else Ext)

func (*Layer) AsAxon

func (ly *Layer) AsAxon() *Layer

AsAxon returns this layer as an axon.Layer -- all derived layers must redefine this to return the base Layer type, so that the AxonLayer interface does not need to include accessors to all the basic stuff.

func (*Layer) AvgDifFmTrgAvg added in v1.6.0

func (ly *Layer) AvgDifFmTrgAvg()

AvgDifFmTrgAvg updates neuron-level AvgDif values from AvgPct - TrgAvg, which is then used for synaptic scaling of LWt values in the Prjn SynScale.

func (*Layer) AvgGeM added in v1.2.21

func (ly *Layer) AvgGeM(ctx *Context)

AvgGeM computes the average and max GeM stats, updated in MinusPhase

func (*Layer) AvgMaxVarByPool added in v1.6.0

func (ly *Layer) AvgMaxVarByPool(varNm string, poolIdx int) minmax.AvgMax32

AvgMaxVarByPool returns the average and maximum value of given variable for given pool index (0 = entire layer, 1.. are subpools for 4D only). Uses fast index-based variable access.

func (*Layer) BLAPostBuild added in v1.7.0

func (ly *Layer) BLAPostBuild()

func (*Layer) BetweenLayerGi added in v1.7.5

func (ly *Layer) BetweenLayerGi(ctx *Context)

BetweenLayerGi computes inhibition Gi between layers

func (*Layer) BetweenLayerGiMax added in v1.7.5

func (ly *Layer) BetweenLayerGiMax(maxGi float32, net *Network, layIdx int32) float32

BetweenLayerGiMax returns max gi value for input maxGi vs the given layIdx layer

func (*Layer) ClearTargExt added in v1.2.65

func (ly *Layer) ClearTargExt()

ClearTargExt clears external inputs Ext that were set from target values Target. This can be called to simulate alpha cycles within theta cycles, for example.

func (*Layer) CorSimFmActs added in v1.3.35

func (ly *Layer) CorSimFmActs()

CorSimFmActs computes the correlation similarity (centered cosine aka normalized dot product) in activation state between minus and plus phases.

func (*Layer) CostEst

func (ly *Layer) CostEst() (neur, syn, tot int)

CostEst returns the estimated computational cost associated with this layer, separated by neuron-level and synapse-level, in arbitrary units where cost per synapse is 1. Neuron-level computation is more expensive but there are typically many fewer neurons, so in larger networks, synaptic costs tend to dominate. Neuron cost is estimated from TimerReport output for large networks.

func (*Layer) CycleNeuron added in v1.6.0

func (ly *Layer) CycleNeuron(ctx *Context, ni uint32, nrn *Neuron)

CycleNeuron does one cycle (msec) of updating at the neuron level

func (*Layer) CyclePost

func (ly *Layer) CyclePost(ctx *Context)

CyclePost is called after the standard Cycle update, as a separate network layer loop. This is reserved for any kind of special ad-hoc types that need to do something special after Spiking is finally computed and Sent. It ONLY runs on the CPU, not the GPU -- should update global values in the Context state which are re-sync'd back to GPU, and values in other layers MUST come from LayerVals because this is the only data that is sync'd back from the GPU each cycle. For example, updating a neuromodulatory signal such as dopamine.

func (*Layer) DTrgAvgFmErr added in v1.2.32

func (ly *Layer) DTrgAvgFmErr()

DTrgAvgFmErr computes the change in TrgAvg based on the unit-wise error signal. Called by DWtLayer at the layer level.

func (*Layer) DTrgSubMean added in v1.6.0

func (ly *Layer) DTrgSubMean()

DTrgSubMean subtracts the mean from DTrgAvg values. Called by TrgAvgFmD.

func (*Layer) DWtLayer added in v1.6.0

func (ly *Layer) DWtLayer(ctx *Context)

DWtLayer does weight change at the layer level. Does NOT call the main projection-level DWt method. In the base type, this only calls DTrgAvgFmErr.

func (*Layer) DecayCaLrnSpk added in v1.5.1

func (ly *Layer) DecayCaLrnSpk(decay float32)

DecayCaLrnSpk decays neuron-level calcium learning and spiking variables by given factor, which is typically ly.Params.Act.Decay.Glong. Note: this is NOT called by default and is generally not useful, causing variability in these learning factors as a function of the decay parameter that then has impacts on learning rates etc. It is only here for reference or optional testing.

func (*Layer) DecayState

func (ly *Layer) DecayState(ctx *Context, decay, glong float32)

DecayState decays activation state by given proportion (default decay values are ly.Params.Act.Decay.Act, Glong)

func (*Layer) DecayStateLayer added in v1.7.0

func (ly *Layer) DecayStateLayer(ctx *Context, decay, glong float32)

DecayStateLayer does layer-level decay, but not neuron level

func (*Layer) DecayStatePool

func (ly *Layer) DecayStatePool(pool int, decay, glong float32)

DecayStatePool decays activation state by given proportion in given sub-pool index (0 based)

func (*Layer) Defaults

func (ly *Layer) Defaults()

func (*Layer) GInteg added in v1.5.12

func (ly *Layer) GInteg(ctx *Context, ni uint32, nrn *Neuron, pl *Pool, vals *LayerVals)

GInteg integrates conductances G over time (Ge, NMDA, etc). calls SpecialGFmRawSyn, GiInteg

func (*Layer) GPDefaults added in v1.7.0

func (ly *Layer) GPDefaults()

func (*Layer) GPPostBuild added in v1.7.0

func (ly *Layer) GPPostBuild()

func (*Layer) GPiDefaults added in v1.7.0

func (ly *Layer) GPiDefaults()

func (*Layer) GatedFmSpkMax added in v1.7.0

func (ly *Layer) GatedFmSpkMax(thr float32) bool

GatedFmSpkMax updates the Gated state in Pools of the given layer, based on Avg SpkMax being above the given threshold. Returns true if any gated.

func (*Layer) GatherSpikes added in v1.7.2

func (ly *Layer) GatherSpikes(ctx *Context, ni uint32, nrn *Neuron)

GatherSpikes integrates G*Raw and G*Syn values for given recv neuron while integrating the Recv Prjn-level GSyn integrated values. ni is layer-specific index of neuron within its layer.

func (*Layer) GiFmSpikes added in v1.5.12

func (ly *Layer) GiFmSpikes(ctx *Context)

GiFmSpikes gets the Spike, GeRaw and GeExt from neurons in the pools: Spike drives FBsRaw (raw feedback signal), GeRaw drives FFsRaw (aggregate feedforward excitatory spiking input), and GeExt represents extra excitatory input from other sources. It then integrates new inhibitory conductances therefrom, at the layer and pool level. Called separately by Network.CycleImpl on all Layers. Also updates all AvgMax values at the Cycle level.

func (*Layer) HasPoolInhib added in v1.2.79

func (ly *Layer) HasPoolInhib() bool

HasPoolInhib returns true if the layer is using pool-level inhibition (implies 4D too). This is the proper check for using pool-level target average activations, for example.

func (*Layer) InitActAvg

func (ly *Layer) InitActAvg()

InitActAvg initializes the running-average activation values that drive learning, and the longer time-averaging values.

func (*Layer) InitActs

func (ly *Layer) InitActs()

InitActs fully initializes activation state -- only called automatically during InitWts

func (*Layer) InitExt

func (ly *Layer) InitExt()

InitExt initializes external input state -- called prior to applying external inputs.

func (*Layer) InitGScale added in v1.2.37

func (ly *Layer) InitGScale()

InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.

func (*Layer) InitPrjnGBuffs added in v1.5.12

func (ly *Layer) InitPrjnGBuffs()

InitPrjnGBuffs initializes the projection-level conductance buffers and conductance integration values for receiving projections in this layer.

func (*Layer) InitWtSym

func (ly *Layer) InitWtSym()

InitWtSym initializes weight symmetry -- higher layers copy weights from lower layers.

func (*Layer) InitWts

func (ly *Layer) InitWts()

InitWts initializes the weight values in the network, i.e., resetting learning. Also calls InitActs.

func (*Layer) IsInput added in v1.2.32

func (ly *Layer) IsInput() bool

IsInput returns true if this layer is an Input layer. By default, returns true for layers of Type == emer.Input. Used to prevent adapting of inhibition or TrgAvg values.

func (*Layer) IsInputOrTarget added in v1.6.11

func (ly *Layer) IsInputOrTarget() bool

IsInputOrTarget returns true if this layer is either an Input or a Target layer.

func (*Layer) IsLearnTrgAvg added in v1.2.32

func (ly *Layer) IsLearnTrgAvg() bool

func (*Layer) IsTarget

func (ly *Layer) IsTarget() bool

IsTarget returns true if this layer is a Target layer. By default, returns true for layers of Type == emer.Target. Other Target layers include the TRCLayer in deep predictive learning. It is used in SynScale to not apply it to target layers. In both cases, Target layers are purely error-driven.

func (*Layer) LRateMod added in v1.6.13

func (ly *Layer) LRateMod(mod float32)

LRateMod sets the LRate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LRateSched). Updates the effective learning rate factor accordingly.

func (*Layer) LRateSched added in v1.6.13

func (ly *Layer) LRateSched(sched float32)

LRateSched sets the schedule-based learning rate multiplier. See also LRateMod. Updates the effective learning rate factor accordingly.

func (*Layer) LesionNeurons

func (ly *Layer) LesionNeurons(prop float32) int

LesionNeurons lesions (sets the Off flag) the given proportion (0-1) of neurons in the layer, and returns the number of neurons lesioned. Emits an error if prop > 1, as an indication that a percent might have been passed instead.

func (*Layer) LocalistErr2D added in v1.5.3

func (ly *Layer) LocalistErr2D() (err bool, minusIdx, plusIdx int)

LocalistErr2D decodes a 2D layer with Y axis = redundant units, X = localist units, returning the indexes of the maximally activated localist value in the minus and plus phase activities, and whether these are the same or different (err = different).

func (*Layer) LocalistErr4D added in v1.5.3

func (ly *Layer) LocalistErr4D() (err bool, minusIdx, plusIdx int)

LocalistErr4D decodes a 4D layer with each pool representing a localist value. Returns the flat 1D indexes of the max activated localist value in the minus and plus phase activities, and whether these are the same or different (err = different)

func (*Layer) MatrixDefaults added in v1.7.0

func (ly *Layer) MatrixDefaults()

func (*Layer) MatrixGated added in v1.7.0

func (ly *Layer) MatrixGated() bool

MatrixGated is called after std PlusPhase, on CPU, has Pool info downloaded from GPU

func (*Layer) MatrixPostBuild added in v1.7.0

func (ly *Layer) MatrixPostBuild()

func (*Layer) MinusPhase added in v1.2.63

func (ly *Layer) MinusPhase(ctx *Context)

MinusPhase does updating at end of the minus phase

func (*Layer) NewState added in v1.2.63

func (ly *Layer) NewState(ctx *Context)

NewState handles all initialization at start of new input pattern. Should already have presented the external input to the network at this point. Does NOT call InitGScale()

func (*Layer) Object added in v1.7.0

func (ly *Layer) Object() interface{}

Object returns the object with parameters to be set by emer.Params

func (*Layer) PTMaintDefaults added in v1.7.2

func (ly *Layer) PTMaintDefaults()

func (*Layer) PctUnitErr

func (ly *Layer) PctUnitErr() float64

PctUnitErr returns the proportion of units where the thresholded value of Target (Target or Compare types) or ActP does not match that of ActM. If Act > ly.Params.Act.Clamp.ErrThr, effective activity = 1, else 0, which makes the measure robust to noisy activations.

func (*Layer) PlusPhase added in v1.2.63

func (ly *Layer) PlusPhase(ctx *Context)

PlusPhase does updating at end of the plus phase

func (*Layer) PlusPhasePost added in v1.7.0

func (ly *Layer) PlusPhasePost(ctx *Context)

PlusPhasePost does special algorithm processing at the end of the plus phase.

func (*Layer) PoolGiFmSpikes added in v1.5.12

func (ly *Layer) PoolGiFmSpikes(ctx *Context)

PoolGiFmSpikes computes inhibition Gi from Spikes within relevant Pools

func (*Layer) PostBuild added in v1.7.0

func (ly *Layer) PostBuild()

PostBuild performs special post-Build() configuration steps for specific algorithms, using configuration data set in BuildConfig during the ConfigNet process.

func (*Layer) PostSpike added in v1.7.0

func (ly *Layer) PostSpike(ctx *Context, ni uint32, nrn *Neuron)

PostSpike does updates at neuron level after spiking has been computed. This is where special layer types add extra code. It also updates the CaSpkPCyc stats.

func (*Layer) PulvPostBuild added in v1.7.0

func (ly *Layer) PulvPostBuild()

PulvPostBuild does post-Build config of Pulvinar based on BuildConfig options

func (*Layer) PulvinarDriver added in v1.7.0

func (ly *Layer) PulvinarDriver(ni uint32) (drvGe, nonDrvPct float32)

func (*Layer) RSalAChMaxLayAct added in v1.7.0

func (ly *Layer) RSalAChMaxLayAct(maxAct float32, net *Network, layIdx int32) float32

RSalAChMaxLayAct returns the updated maxAct value using LayVals.ActAvg.CaSpkP.Max from given layer index, subject to any relevant RewThr thresholding.

func (*Layer) RSalAChPostBuild added in v1.7.0

func (ly *Layer) RSalAChPostBuild()

func (*Layer) RWDaPostBuild added in v1.7.0

func (ly *Layer) RWDaPostBuild()

RWDaPostBuild does post-Build config

func (*Layer) ReadWtsJSON

func (ly *Layer) ReadWtsJSON(r io.Reader) error

ReadWtsJSON reads the weights from this layer from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one layer only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.

func (*Layer) RecvPrjnVals

func (ly *Layer) RecvPrjnVals(vals *[]float32, varNm string, sendLay emer.Layer, sendIdx1D int, prjnType string) error

RecvPrjnVals fills in values of the given synapse variable name, for the projection from the given sending layer and neuron 1D index, for all receiving neurons in this layer, into the given float32 slice (only resized if not big enough). prjnType is the string representation of the prjn type -- used if non-empty, which is useful when there are multiple projections between two layers. If the receiving neuron is not connected to the given sending layer or neuron, then the value is set to mat32.NaN(). Returns an error on an invalid var name or lack of a recv prjn (vals are always set to NaN on a prjn error).
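A hedged fragment illustrating typical use (the layers are assumed to come from an already-built network; the "Wt" variable name and indexes are illustrative):

	package example

	import (
		"fmt"

		"github.com/emer/axon/axon"
	)

	// ShowRecvWts prints the "Wt" values that ly receives from sending unit 0 of sendLay.
	func ShowRecvWts(ly, sendLay *axon.Layer) {
		var vals []float32
		// empty prjnType matches whatever projection connects the two layers
		if err := ly.RecvPrjnVals(&vals, "Wt", sendLay, 0, ""); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(vals) // NaN entries mark receiving neurons with no connection
	}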

func (*Layer) STNDefaults added in v1.7.0

func (ly *Layer) STNDefaults()

func (*Layer) SendPrjnVals

func (ly *Layer) SendPrjnVals(vals *[]float32, varNm string, recvLay emer.Layer, recvIdx1D int, prjnType string) error

SendPrjnVals fills in values of the given synapse variable name, for the projection to the given receiving layer and neuron 1D index, for all sending neurons in this layer, into the given float32 slice (only resized if not big enough). prjnType is the string representation of the prjn type -- used if non-empty, which is useful when there are multiple projections between two layers. If the sending neuron is not connected to the given receiving layer or neuron, then the value is set to mat32.NaN(). Returns an error on an invalid var name or lack of a prjn (vals are always set to NaN on a prjn error).

func (*Layer) SendSpike

func (ly *Layer) SendSpike(ctx *Context)

SendSpike sends spikes to receivers for all neurons that spiked on the last Cycle step, to be integrated the next time around.

func (*Layer) SetSubMean added in v1.6.11

func (ly *Layer) SetSubMean(trgAvg, prjn float32)

SetSubMean sets the SubMean parameters for this layer and its receiving projections: trgAvg is for Learn.TrgAvgAct.SubMean, and prjn is for the prjns' Learn.Trace.SubMean. In both cases, it is generally best to have both parameters set to 0 at the start of learning.

func (*Layer) SetWts

func (ly *Layer) SetWts(lw *weights.Layer) error

SetWts sets the weights for this layer from weights.Layer decoded values

func (*Layer) SlowAdapt added in v1.2.37

func (ly *Layer) SlowAdapt(ctx *Context)

SlowAdapt is the layer-level slow adaptation functions. Calls AdaptInhib and AvgDifFmTrgAvg for Synaptic Scaling. Does NOT call projection-level methods.

func (*Layer) SpikeFmG added in v1.6.0

func (ly *Layer) SpikeFmG(ctx *Context, ni uint32, nrn *Neuron)

SpikeFmG computes Vm from Ge, Gi, Gl conductances and then Spike from that

func (*Layer) SpkSt1 added in v1.5.10

func (ly *Layer) SpkSt1(ctx *Context)

SpkSt1 saves current activation state in SpkSt1 variables (using CaP)

func (*Layer) SpkSt2 added in v1.5.10

func (ly *Layer) SpkSt2(ctx *Context)

SpkSt2 saves current activation state in SpkSt2 variables (using CaP)

func (*Layer) SynFail added in v1.2.92

func (ly *Layer) SynFail(ctx *Context)

SynFail updates synaptic weight failure only -- normally done as part of DWt and WtFmDWt, but this call can be used during testing to update failing synapses.

func (*Layer) TDDaPostBuild added in v1.7.0

func (ly *Layer) TDDaPostBuild()

TDDaPostBuild does post-Build config

func (*Layer) TDIntegPostBuild added in v1.7.0

func (ly *Layer) TDIntegPostBuild()

TDIntegPostBuild does post-Build config

func (*Layer) TargToExt added in v1.2.65

func (ly *Layer) TargToExt()

TargToExt sets external input Ext from target values Target This is done at end of MinusPhase to allow targets to drive activity in plus phase. This can be called separately to simulate alpha cycles within theta cycles, for example.

func (*Layer) TrgAvgFmD added in v1.2.32

func (ly *Layer) TrgAvgFmD()

TrgAvgFmD updates TrgAvg from DTrgAvg; it is called by WtFmDWtLayer.

func (*Layer) UnLesionNeurons

func (ly *Layer) UnLesionNeurons()

UnLesionNeurons unlesions (clears the Off flag) for all neurons in the layer

func (*Layer) UnitVal

func (ly *Layer) UnitVal(varNm string, idx []int) float32

UnitVal returns value of given variable name on given unit, using shape-based dimensional index

func (*Layer) UnitVal1D

func (ly *Layer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns the value of the given variable index on the given unit, using a 1-dimensional index. Returns NaN on an invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*Layer) UnitVals

func (ly *Layer) UnitVals(vals *[]float32, varNm string) error

UnitVals fills in values of given variable name on unit, for each unit in the layer, into given float32 slice (only resized if not big enough). Returns error on invalid var name.

func (*Layer) UnitValsRepTensor added in v1.3.6

func (ly *Layer) UnitValsRepTensor(tsr etensor.Tensor, varNm string) error

UnitValsRepTensor fills in values of the given variable name on units, for a smaller subset of representative units in the layer, into the given tensor. This is used for computationally intensive stats or displays that work much better with a smaller number of units. The set of representative units is defined by SetRepIdxs -- all units are used if no such subset has been defined. If the tensor is not already big enough to hold the values, it is set to RepShape when a subset is defined; otherwise this calls UnitValsTensor and is identical to that. Returns an error on an invalid var name.
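A hedged fragment (the representative indexes, shape, and etensor usage are illustrative assumptions):

	package example

	import (
		"github.com/emer/axon/axon"
		"github.com/emer/etable/etensor"
	)

	// RepActs fills a tensor with "Act" values for a small representative subset.
	// ly is assumed to be a built layer with at least 4 units; indexes are hypothetical.
	func RepActs(ly *axon.Layer) (*etensor.Float32, error) {
		ly.SetRepIdxsShape([]int{0, 1, 2, 3}, []int{2, 2}) // units 0..3, viewed as 2x2
		tsr := etensor.NewFloat32([]int{2, 2}, nil, nil)
		err := ly.UnitValsRepTensor(tsr, "Act")
		return tsr, err
	}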

func (*Layer) UnitValsTensor

func (ly *Layer) UnitValsTensor(tsr etensor.Tensor, varNm string) error

UnitValsTensor returns values of given variable name on unit for each unit in the layer, as a float32 tensor in same shape as layer units.

func (*Layer) UnitVarIdx

func (ly *Layer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to *this layer's* UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*Layer) UnitVarNames

func (ly *Layer) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this layer

func (*Layer) UnitVarNum

func (ly *Layer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

func (*Layer) UnitVarProps

func (ly *Layer) UnitVarProps() map[string]string

UnitVarProps returns properties for variables

func (*Layer) Update added in v1.7.0

func (ly *Layer) Update()

Update is an interface for generically updating after edits; this should be used only for the values on the struct itself. UpdateParams is used to update all parameters, including Prjns.

func (*Layer) UpdateExtFlags

func (ly *Layer) UpdateExtFlags()

UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.

func (*Layer) UpdateParams

func (ly *Layer) UpdateParams()

UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving projections of this layer. This is not called Update because it is not just about the local values in the struct.

func (*Layer) VThalDefaults added in v1.7.0

func (ly *Layer) VThalDefaults()

func (*Layer) WriteWtsJSON

func (ly *Layer) WriteWtsJSON(w io.Writer, depth int)

WriteWtsJSON writes the weights from this layer from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.

func (*Layer) WtFmDWtLayer added in v1.6.0

func (ly *Layer) WtFmDWtLayer(ctx *Context)

WtFmDWtLayer does weight update at the layer level. Does NOT call the main projection-level WtFmDWt method. In the base type, this only calls TrgAvgFmD.

type LayerBase added in v1.4.5

type LayerBase struct {
	AxonLay     AxonLayer         `` /* 297-byte string literal not displayed */
	Network     emer.Network      `` /* 141-byte string literal not displayed */
	Nm          string            `` /* 151-byte string literal not displayed */
	Cls         string            `desc:"Class is for applying parameter styles, can be space-separated multiple tags"`
	Off         bool              `desc:"inactivate this layer -- allows for easy experimentation"`
	Shp         etensor.Shape     `` /* 219-byte string literal not displayed */
	Typ         emer.LayerType    `` /* 161-byte string literal not displayed */
	Rel         relpos.Rel        `view:"inline" desc:"Spatial relationship to other layer, determines positioning"`
	Ps          mat32.Vec3        `` /* 154-byte string literal not displayed */
	Idx         int               `` /* 278-byte string literal not displayed */
	NeurStIdx   int               `view:"-" inactive:"-" desc:"starting index of neurons for this layer within the global Network list"`
	RepIxs      []int             `` /* 128-byte string literal not displayed */
	RepShp      etensor.Shape     `view:"-" desc:"shape of representative units in the layer -- if RepIxs is empty or .Shp is nil, use overall layer shape"`
	RcvPrjns    AxonPrjns         `desc:"list of receiving projections into this layer from other layers"`
	SndPrjns    AxonPrjns         `desc:"list of sending projections from this layer to other layers"`
	Neurons     []Neuron          `` /* 133-byte string literal not displayed */
	Pools       []Pool            `` /* 234-byte string literal not displayed */
	BuildConfig map[string]string `` /* 523-byte string literal not displayed */
}

LayerBase manages the structural elements of the layer, which are common to any Layer type. The main Layer then can just have the algorithm-specific code.

func (*LayerBase) ApplyParams added in v1.4.5

func (ly *LayerBase) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)

ApplyParams applies the given parameter style Sheet to this layer and its recv projections. Calls UpdateParams on anything set to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and an error if there were any errors.
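A hedged sketch using the emergent params package (the selector and parameter path are illustrative only, not recommendations):

	package example

	import (
		"github.com/emer/axon/axon"
		"github.com/emer/emergent/params"
	)

	// ApplyGi applies a tiny parameter style sheet to one layer and its recv prjns.
	func ApplyGi(ly *axon.Layer) error {
		sheet := &params.Sheet{
			{Sel: "Layer", Desc: "raise layer inhibition slightly",
				Params: params.Params{
					"Layer.Inhib.Layer.Gi": "1.1",
				}},
		}
		applied, err := ly.ApplyParams(sheet, true) // setMsg=true prints each param set
		_ = applied                                 // true if anything was set
		return err
	}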

func (*LayerBase) Build added in v1.7.0

func (ly *LayerBase) Build() error

Build constructs the layer state, including calling Build on the projections

func (*LayerBase) BuildConfigByName added in v1.7.0

func (ly *LayerBase) BuildConfigByName(nm string) (string, error)

BuildConfigByName looks for given BuildConfig option by name, and reports & returns an error if not found.

func (*LayerBase) BuildConfigFindLayer added in v1.7.0

func (ly *LayerBase) BuildConfigFindLayer(nm string, mustName bool) int32

BuildConfigFindLayer looks for a BuildConfig option of the given name and, if found, looks for a layer with the corresponding name. If mustName is true, then an error is logged if the BuildConfig name does not exist. An error is always logged if the layer name is not found. -1 is returned in any case of not found.

func (*LayerBase) BuildPools added in v1.7.0

func (ly *LayerBase) BuildPools(nu int) error

BuildPools builds the inhibitory pool structures -- nu = number of units in the layer

func (*LayerBase) BuildPrjns added in v1.7.0

func (ly *LayerBase) BuildPrjns() error

BuildPrjns builds the projections, recv-side

func (*LayerBase) BuildSubPools added in v1.7.0

func (ly *LayerBase) BuildSubPools()

BuildSubPools initializes neuron start / end indexes for sub-pools

func (*LayerBase) Class added in v1.4.5

func (ly *LayerBase) Class() string

func (*LayerBase) Config added in v1.4.5

func (ly *LayerBase) Config(shape []int, typ emer.LayerType)

Config configures the basic properties of the layer

func (*LayerBase) Idx4DFrom2D added in v1.4.5

func (ly *LayerBase) Idx4DFrom2D(x, y int) ([]int, bool)

func (*LayerBase) Index added in v1.4.5

func (ly *LayerBase) Index() int

func (*LayerBase) InitName added in v1.4.5

func (ly *LayerBase) InitName(lay emer.Layer, name string, net emer.Network)

InitName MUST be called to initialize the layer's pointer to itself as an emer.Layer which enables the proper interface methods to be called. Also sets the name, and the parent network that this layer belongs to (which layers may want to retain).

func (*LayerBase) Is2D added in v1.4.5

func (ly *LayerBase) Is2D() bool

func (*LayerBase) Is4D added in v1.4.5

func (ly *LayerBase) Is4D() bool

func (*LayerBase) IsOff added in v1.4.5

func (ly *LayerBase) IsOff() bool

func (*LayerBase) Label added in v1.4.5

func (ly *LayerBase) Label() string

func (*LayerBase) LayerType added in v1.7.0

func (ly *LayerBase) LayerType() LayerTypes

func (*LayerBase) NPools added in v1.4.5

func (ly *LayerBase) NPools() int

NPools returns the number of unit sub-pools according to the shape parameters. Currently supported for a 4D shape, where the unit pools are the first 2 Y,X dims, and the units within each pool are the last 2 Y,X dims.
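A rough sketch of the dimension convention (calling these methods on a bare Layer outside a built Network, purely for illustration):

	package main

	import (
		"fmt"

		"github.com/emer/axon/axon"
	)

	func main() {
		ly := &axon.Layer{}
		ly.SetShape([]int{3, 4, 5, 5}) // 3x4 pools, each containing 5x5 units
		fmt.Println(ly.NPools())       // 3 * 4 = 12 sub-pools from the first two dims
	}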

func (*LayerBase) NRecvPrjns added in v1.4.5

func (ly *LayerBase) NRecvPrjns() int

func (*LayerBase) NSendPrjns added in v1.4.5

func (ly *LayerBase) NSendPrjns() int

func (*LayerBase) Name added in v1.4.5

func (ly *LayerBase) Name() string

func (*LayerBase) NeurStartIdx added in v1.6.0

func (ly *LayerBase) NeurStartIdx() int

func (*LayerBase) NonDefaultParams added in v1.4.5

func (ly *LayerBase) NonDefaultParams() string

NonDefaultParams returns a listing of all parameters in the Layer that are not at their default values -- useful for setting param styles etc.

func (*LayerBase) Pool added in v1.7.0

func (ly *LayerBase) Pool(idx int) *Pool

Pool returns pool at given index

func (*LayerBase) PoolTry added in v1.7.0

func (ly *LayerBase) PoolTry(idx int) (*Pool, error)

PoolTry returns pool at given index, returns error if index is out of range

func (*LayerBase) Pos added in v1.4.5

func (ly *LayerBase) Pos() mat32.Vec3

func (*LayerBase) RecipToRecvPrjn added in v1.7.2

func (ly *LayerBase) RecipToRecvPrjn(rpj emer.Prjn) (emer.Prjn, bool)

RecipToRecvPrjn finds the reciprocal projection to the given recv projection within the ly layer, i.e., where ly is instead the *sending* layer to the same other layer B that is the sender of the rpj projection we're receiving from.

ly = A, other layer = B:

rpj: R=A <- S=B
spj: S=A -> R=B

Returns false if not found.

func (*LayerBase) RecipToSendPrjn added in v1.4.5

func (ly *LayerBase) RecipToSendPrjn(spj emer.Prjn) (emer.Prjn, bool)

RecipToSendPrjn finds the reciprocal projection to the given sending projection within the ly layer, i.e., where ly is instead the *receiving* layer from the same other layer B that is the receiver of the spj projection we're sending to.

ly = A, other layer = B:

spj: S=A -> R=B
rpj: R=A <- S=B

Returns false if not found.

func (*LayerBase) RecvNameTry added in v1.7.0

func (ly *LayerBase) RecvNameTry(receiver string) (emer.Prjn, error)

func (*LayerBase) RecvNameTypeTry added in v1.7.0

func (ly *LayerBase) RecvNameTypeTry(receiver, typ string) (emer.Prjn, error)

func (*LayerBase) RecvPrjn added in v1.4.5

func (ly *LayerBase) RecvPrjn(idx int) emer.Prjn

func (*LayerBase) RecvPrjns added in v1.4.5

func (ly *LayerBase) RecvPrjns() *AxonPrjns

func (*LayerBase) RelPos added in v1.4.5

func (ly *LayerBase) RelPos() relpos.Rel

func (*LayerBase) RepIdxs added in v1.4.5

func (ly *LayerBase) RepIdxs() []int

func (*LayerBase) RepShape added in v1.4.8

func (ly *LayerBase) RepShape() *etensor.Shape

RepShape returns the shape to use for representative units

func (*LayerBase) SendNameTry added in v1.7.0

func (ly *LayerBase) SendNameTry(sender string) (emer.Prjn, error)

func (*LayerBase) SendNameTypeTry added in v1.7.0

func (ly *LayerBase) SendNameTypeTry(sender, typ string) (emer.Prjn, error)

func (*LayerBase) SendPrjn added in v1.4.5

func (ly *LayerBase) SendPrjn(idx int) emer.Prjn

func (*LayerBase) SendPrjns added in v1.4.5

func (ly *LayerBase) SendPrjns() *AxonPrjns

func (*LayerBase) SetBuildConfig added in v1.7.0

func (ly *LayerBase) SetBuildConfig(param, val string)

SetBuildConfig sets named configuration parameter to given string value to be used in the PostBuild stage -- mainly for layer names that need to be looked up and turned into indexes, after entire network is built.
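A brief fragment illustrating the pattern (the option names follow the GPParams and MatrixParams docs in this listing; the layers themselves are assumed to exist already):

	package example

	import "github.com/emer/axon/axon"

	// ConfigBG shows the BuildConfig pattern: string options set during config,
	// resolved later in PostBuild once all layers exist.
	func ConfigBG(gpeIn, mtxGo *axon.Layer) {
		gpeIn.SetBuildConfig("GPType", "GPeIn") // resolved into GPParams.GPType
		mtxGo.SetBuildConfig("DAMod", "D1Mod")  // Learn.NeuroMod.DAMod for a Go (D1) Matrix layer
	}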

func (*LayerBase) SetClass added in v1.4.5

func (ly *LayerBase) SetClass(cls string)

func (*LayerBase) SetIndex added in v1.4.5

func (ly *LayerBase) SetIndex(idx int)

func (*LayerBase) SetName added in v1.4.5

func (ly *LayerBase) SetName(nm string)

func (*LayerBase) SetOff added in v1.4.5

func (ly *LayerBase) SetOff(off bool)

func (*LayerBase) SetPos added in v1.4.5

func (ly *LayerBase) SetPos(pos mat32.Vec3)

func (*LayerBase) SetRelPos added in v1.4.5

func (ly *LayerBase) SetRelPos(rel relpos.Rel)

func (*LayerBase) SetRepIdxsShape added in v1.4.8

func (ly *LayerBase) SetRepIdxsShape(idxs, shape []int)

SetRepIdxsShape sets the RepIdxs, and the RepShape from a list of dimension sizes.

func (*LayerBase) SetShape added in v1.4.5

func (ly *LayerBase) SetShape(shape []int)

SetShape sets the layer shape and also uses default dim names

func (*LayerBase) SetThread added in v1.4.5

func (ly *LayerBase) SetThread(thr int)

func (*LayerBase) SetType added in v1.4.5

func (ly *LayerBase) SetType(typ emer.LayerType)

func (*LayerBase) Shape added in v1.4.5

func (ly *LayerBase) Shape() *etensor.Shape

func (*LayerBase) Size added in v1.4.5

func (ly *LayerBase) Size() mat32.Vec2

func (*LayerBase) Thread added in v1.4.5

func (ly *LayerBase) Thread() int

todo: remove from emer.Layer api

func (*LayerBase) Type added in v1.4.5

func (ly *LayerBase) Type() emer.LayerType

func (*LayerBase) TypeName added in v1.4.5

func (ly *LayerBase) TypeName() string

func (*LayerBase) VarRange added in v1.7.0

func (ly *LayerBase) VarRange(varNm string) (min, max float32, err error)

VarRange returns the min / max values for the given variable. todo: support r. s. projection values

type LayerIdxs added in v1.7.0

type LayerIdxs struct {
	PoolSt uint32 `inactive:"+" desc:"start of pools for this layer -- first one is always the layer-wide pool"`
	NeurSt uint32 `inactive:"+" desc:"start of neurons for this layer in global array (same as Layer.NeurStIdx)"`
	NeurN  uint32 `inactive:"+" desc:"number of neurons in layer"`
	RecvSt uint32 `inactive:"+" desc:"start index into RecvPrjns global array"`
	RecvN  uint32 `inactive:"+" desc:"number of recv projections"`
	// contains filtered or unexported fields
}

LayerIdxs contains index access into global arrays for GPU.

type LayerParams added in v1.7.0

type LayerParams struct {
	LayType LayerTypes `` /* 140-byte string literal not displayed */

	Act   ActParams       `view:"add-fields" desc:"Activation parameters and methods for computing activations"`
	Inhib InhibParams     `view:"add-fields" desc:"Inhibition parameters and methods for computing layer-level inhibition"`
	Learn LearnNeurParams `view:"add-fields" desc:"Learning parameters and methods that operate at the neuron level"`

	Burst   BurstParams   `` /* 181-byte string literal not displayed */
	CT      CTParams      `` /* 234-byte string literal not displayed */
	Pulv    PulvParams    `` /* 241-byte string literal not displayed */
	RSalACh RSalAChParams `` /* 173-byte string literal not displayed */
	RWPred  RWPredParams  `` /* 170-byte string literal not displayed */
	RWDa    RWDaParams    `` /* 177-byte string literal not displayed */
	TDInteg TDIntegParams `viewif:"LayType=TDIntegLayer" view:"inline" desc:"parameterizes TD reward integration layer"`
	TDDa    TDDaParams    `` /* 180-byte string literal not displayed */
	BLA     BLAParams     `` /* 166-byte string literal not displayed */
	Matrix  MatrixParams  `` /* 144-byte string literal not displayed */
	GP      GPParams      `viewif:"LayType=GPLayer" view:"inline" desc:"type of GP Layer."`

	Idxs LayerIdxs `view:"-" desc:"recv and send projection array access info"`

	LayInhib1Idx int32 `` /* 143-byte string literal not displayed */
	LayInhib2Idx int32 `` /* 143-byte string literal not displayed */
	LayInhib3Idx int32 `` /* 143-byte string literal not displayed */
	LayInhib4Idx int32 `` /* 144-byte string literal not displayed */
	// contains filtered or unexported fields
}

LayerParams contains all of the layer parameters. These values must remain constant over the course of computation. On the GPU, they are loaded into a uniform.

func (*LayerParams) AllParams added in v1.7.0

func (ly *LayerParams) AllParams() string

AllParams returns a listing of all parameters in the Layer

func (*LayerParams) BLADefaults added in v1.7.0

func (ly *LayerParams) BLADefaults()

func (*LayerParams) CTDefaults added in v1.7.0

func (ly *LayerParams) CTDefaults()

called in Defaults for CT layer type

func (*LayerParams) Defaults added in v1.7.0

func (ly *LayerParams) Defaults()

func (*LayerParams) GFmRawSyn added in v1.7.0

func (ly *LayerParams) GFmRawSyn(ctx *Context, ni uint32, nrn *Neuron)

GFmRawSyn computes overall Ge and GiSyn conductances for neuron from GeRaw and GeSyn values, including NMDA, VGCC, AMPA, and GABA-A channels. drvAct is for Pulvinar layers, activation of driving neuron

func (*LayerParams) GNeuroMod added in v1.7.0

func (ly *LayerParams) GNeuroMod(ctx *Context, ni uint32, nrn *Neuron, vals *LayerVals)

GNeuroMod does neuromodulation of conductances

func (*LayerParams) GatherSpikesInit added in v1.7.2

func (ly *LayerParams) GatherSpikesInit(nrn *Neuron)

GatherSpikesInit initializes G*Raw and G*Syn values for given neuron prior to integration

func (*LayerParams) GeToPool added in v1.7.2

func (ly *LayerParams) GeToPool(ctx *Context, ni uint32, nrn *Neuron, pl *Pool, lpl *Pool, subPool bool)

GeToPool adds Spike, GeRaw and GeExt from each neuron into the Pools

func (*LayerParams) GiInteg added in v1.7.0

func (ly *LayerParams) GiInteg(ctx *Context, ni uint32, nrn *Neuron, pl *Pool, vals *LayerVals)

GiInteg adds Gi values from all sources including SubPool computed inhib and updates GABAB as well

func (*LayerParams) LayPoolGiFmSpikes added in v1.7.0

func (ly *LayerParams) LayPoolGiFmSpikes(ctx *Context, lpl *Pool, vals *LayerVals)

LayPoolGiFmSpikes computes inhibition Gi from Spikes for layer-level pool

func (*LayerParams) MinusPhase added in v1.7.0

func (ly *LayerParams) MinusPhase(ctx *Context, ni uint32, nrn *Neuron, pl *Pool, vals *LayerVals)

MinusPhase does neuron level minus-phase updating

func (*LayerParams) NewState added in v1.7.0

func (ly *LayerParams) NewState(ctx *Context, ni uint32, nrn *Neuron, pl *Pool, vals *LayerVals)

NewState handles all initialization at start of new input pattern. Should already have presented the external input to the network at this point.

func (*LayerParams) PlusPhase added in v1.7.0

func (ly *LayerParams) PlusPhase(ctx *Context, ni uint32, nrn *Neuron, pl *Pool, lpl *Pool, vals *LayerVals)

PlusPhase does neuron level plus-phase updating

func (*LayerParams) PostSpike added in v1.7.0

func (ly *LayerParams) PostSpike(ctx *Context, ni uint32, nrn *Neuron, pl *Pool, vals *LayerVals)

PostSpike does updates at neuron level after spiking has been computed. it is called *after* PostSpikeSpecial. It also updates the CaSpkPCyc stats.

func (*LayerParams) PostSpikeSpecial added in v1.7.0

func (ly *LayerParams) PostSpikeSpecial(ctx *Context, ni uint32, nrn *Neuron, pl *Pool, lpl *Pool, vals *LayerVals)

PostSpikeSpecial does updates at neuron level after spiking has been computed. This is where special layer types add extra code. It also updates the CaSpkPCyc stats.

func (*LayerParams) PulvDefaults added in v1.7.0

func (ly *LayerParams) PulvDefaults()

called in Defaults for Pulvinar layer type

func (*LayerParams) RWDefaults added in v1.7.0

func (ly *LayerParams) RWDefaults()

func (*LayerParams) RWPredDefaults added in v1.7.0

func (ly *LayerParams) RWPredDefaults()

func (*LayerParams) SpecialPostGs added in v1.7.0

func (ly *LayerParams) SpecialPostGs(ctx *Context, ni uint32, nrn *Neuron, saveVal float32)

SpecialPostGs is used for special layer types to do things after the standard updates in GFmRawSyn. It is passed the saveVal from SpecialPreGs

func (*LayerParams) SpecialPreGs added in v1.7.0

func (ly *LayerParams) SpecialPreGs(ctx *Context, ni uint32, nrn *Neuron, drvGe float32, nonDrvPct float32) float32

SpecialPreGs is used for special layer types to do things to the conductance values prior to doing the standard updates in GFmRawSyn. drvGe is for Pulvinar layers, derived from the activation of the driving neuron (see PulvinarDriver).

func (*LayerParams) SpikeFmG added in v1.7.0

func (ly *LayerParams) SpikeFmG(ctx *Context, ni uint32, nrn *Neuron)

SpikeFmG computes Vm from Ge, Gi, Gl conductances and then Spike from that

func (*LayerParams) SubPoolGiFmSpikes added in v1.7.0

func (ly *LayerParams) SubPoolGiFmSpikes(ctx *Context, pl *Pool, lpl *Pool, lyInhib bool, giMult float32)

SubPoolGiFmSpikes computes inhibition Gi from Spikes within a sub-pool. pl is guaranteed not to be the overall layer pool.

func (*LayerParams) TDDefaults added in v1.7.0

func (ly *LayerParams) TDDefaults()

func (*LayerParams) TDPredDefaults added in v1.7.0

func (ly *LayerParams) TDPredDefaults()

func (*LayerParams) Update added in v1.7.0

func (ly *LayerParams) Update()

type LayerTypes added in v1.7.0

type LayerTypes int32

LayerTypes is an axon-specific layer type enum, that encompasses all the different algorithm types supported. Class parameter styles automatically key off of these types. The first entries must be kept synchronized with the emer.LayerType, although we replace Hidden -> Super.

const (
	// Super is a superficial cortical layer (lamina 2-3-4)
	// which does not receive direct input or targets.
	// In more generic models, it should be used as a Hidden layer,
	// and maps onto the Hidden type in emer.LayerType.
	SuperLayer LayerTypes = iota

	// Input is a layer that receives direct external input
	// in its Ext inputs.  Biologically, it can be a primary
	// sensory layer, or a thalamic layer.
	InputLayer

	// Target is a layer that receives direct external target inputs
	// used for driving plus-phase learning.
	// Simple target layers are generally not used in more biological
	// models, which instead use predictive learning via Pulvinar
	// or related mechanisms.
	TargetLayer

	// Compare is a layer that receives external comparison inputs,
	// which drive statistics but do NOT drive activation
	// or learning directly.  It is rarely used in axon.
	CompareLayer

	// CT are layer 6 corticothalamic projecting neurons,
	// which drive "top down" predictions in Pulvinar layers.
	// They maintain information over time via stronger NMDA
	// channels and use maintained prior state information to
	// generate predictions about current states forming on Super
	// layers that then drive PT (5IB) bursting activity, which
	// are the plus-phase drivers of Pulvinar activity.
	CTLayer

	// Pulvinar are thalamic relay cell neurons in the higher-order
	// Pulvinar nucleus of the thalamus, and functionally isomorphic
	// neurons in the MD thalamus, and potentially other areas.
	// These cells alternately reflect predictions driven by CT projections,
	// and actual outcomes driven by 5IB Burst activity from corresponding
	// PT or Super layer neurons that provide strong driving inputs.
	PulvinarLayer

	// TRNLayer is thalamic reticular nucleus layer for inhibitory competition
	// within the thalamus.
	TRNLayer

	// PTMaintLayer implements the pyramidal tract layer 5 intrinsic bursting
	// (5IB) deep neurons, which provide the main output signal from cortex,
	// specifically the subset of PT neurons that are gated by the BG to
	// drive sustained active maintenance, via strong NMDA channels.
	// Set projections from thalamus to be modulatory, and use Act.Dend.ModGain
	// to set extra strength for these inputs, which are only briefly active.
	PTMaintLayer

	// RewLayer represents positive or negative reward values across 2 units,
	// showing spiking rates for each, and Act always represents signed value.
	RewLayer

	// RSalienceAChLayer reads Max layer activity from specified source layer(s)
	// and optionally the global Context.NeuroMod.Rew or RewPred state variables,
	// and updates the global ACh = Max of all as the positively-rectified,
	// non-prediction-discounted reward salience signal.
	// Acetylcholine (ACh) is known to represent something like this signal.
	RSalienceAChLayer

	// RWPredLayer computes reward prediction for a simple Rescorla-Wagner
	// learning dynamic (i.e., PV learning in the PVLV framework).
	// Activity is computed as linear function of excitatory conductance
	// (which can be negative -- there are no constraints).
	// Use with RWPrjn which does simple delta-rule learning on minus-plus.
	RWPredLayer

	// RWDaLayer computes a dopamine (DA) signal based on a simple Rescorla-Wagner
	// learning dynamic (i.e., PV learning in the PVLV framework).
	// It computes difference between r(t) and RWPred values.
	// r(t) is accessed directly from a Rew layer -- if no external input then no
	// DA is computed -- critical for effective use of RW only for PV cases.
	// RWPred prediction is also accessed directly from Rew layer to avoid any issues.
	RWDaLayer

	// TDPredLayer is the temporal differences reward prediction layer.
	// It represents estimated value V(t) in the minus phase, and computes
	// estimated V(t+1) based on its learned weights in plus phase,
	// using the TDPredPrjn projection type for DA modulated learning.
	TDPredLayer

	// TDIntegLayer is the temporal differences reward integration layer.
	// It represents estimated value V(t) from prior time step in the minus phase,
	// and estimated discount * V(t+1) + r(t) in the plus phase.
	// It gets Rew, PrevPred from Context.NeuroMod, and Special
	// LayerVals from TDPredLayer.
	TDIntegLayer

	// TDDaLayer computes a dopamine (DA) signal as the temporal difference (TD)
	// between the TDIntegLayer activations in the minus and plus phase.
	// These are retrieved from Special LayerVals.
	TDDaLayer

	// BLALayer represents a basolateral amygdala layer
	// which learns to associate arbitrary stimuli (CSs)
	// with behaviorally salient outcomes (USs)
	BLALayer

	// CeMLayer represents a central nucleus of the amygdala layer.
	CeMLayer

	// PPTgLayer represents a pedunculopontine tegmental nucleus layer.
	// It subtracts the prior trial's excitatory conductance to
	// compute the temporal derivative over time, with a positive
	// rectification.
	// It also sets Act to the exact difference.
	PPTgLayer

	// MatrixLayer represents the matrisome medium spiny neurons (MSNs)
	// that are the main Go / NoGo gating units in BG.
	// These are strongly modulated by phasic dopamine: D1 = Go, D2 = NoGo.
	MatrixLayer

	// STNLayer represents subthalamic nucleus neurons, with two subtypes:
	// STNp are more strongly driven and get over bursting threshold, driving strong,
	// rapid activation of the KCa channels, causing a long pause in firing, which
	// creates a window during which GPe dynamics resolve Go vs. No balance.
	// STNs are more weakly driven and thus more slowly activate KCa, resulting in
	// a longer period of activation, during which the GPi is inhibited to prevent
	// premature gating based only on MtxGo inhibition -- gating only occurs when
	// GPeIn signal has had a chance to integrate its MtxNo inputs.
	STNLayer

	// GPLayer represents a globus pallidus layer in the BG, including:
	// GPeOut, GPeIn, GPeTA (arkypallidal), and GPi.
	// Typically just a single unit per Pool representing a given stripe.
	GPLayer

	// VThalLayer represents a BG gated thalamic layer,
	// which receives BG gating in the form of an
	// inhibitory projection from GPi.  Located
	// mainly in the Ventral thalamus: VA / VM / VL,
	// and also parts of MD mediodorsal thalamus.
	VThalLayer

	LayerTypesN
)

The layer types

func (*LayerTypes) FromString added in v1.7.0

func (i *LayerTypes) FromString(s string) error

func (LayerTypes) MarshalJSON added in v1.7.0

func (ev LayerTypes) MarshalJSON() ([]byte, error)

func (LayerTypes) String added in v1.7.0

func (i LayerTypes) String() string

func (*LayerTypes) UnmarshalJSON added in v1.7.0

func (ev *LayerTypes) UnmarshalJSON(b []byte) error

type LayerVals added in v1.7.0

type LayerVals struct {
	ActAvg   ActAvgVals     `view:"inline" desc:"running-average activation levels used for Ge scaling and adaptive inhibition"`
	CorSim   CorSimStats    `desc:"correlation (centered cosine aka normalized dot product) similarity between ActM, ActP states"`
	NeuroMod NeuroModVals   `view:"inline" desc:"neuromodulatory values: global to the layer, copied from Context"`
	Special  LaySpecialVals `` /* 282-byte string literal not displayed */
}

LayerVals holds extra layer state that is updated per layer. It is sync'd down from the GPU to the CPU after every Cycle.

type LearnNeurParams

type LearnNeurParams struct {
	CaLrn     CaLrnParams      `` /* 376-byte string literal not displayed */
	CaSpk     CaSpkParams      `` /* 456-byte string literal not displayed */
	LrnNMDA   chans.NMDAParams `` /* 266-byte string literal not displayed */
	TrgAvgAct TrgAvgActParams  `` /* 126-byte string literal not displayed */
	RLRate    RLRateParams     `` /* 184-byte string literal not displayed */
	NeuroMod  NeuroModParams   `` /* 221-byte string literal not displayed */
}

axon.LearnNeurParams manages learning-related parameters at the neuron level. This is mainly the running-average activations that drive learning.

func (*LearnNeurParams) CaFmSpike added in v1.3.5

func (ln *LearnNeurParams) CaFmSpike(nrn *Neuron)

CaFmSpike updates all spike-driven calcium variables, including CaLrn and CaSpk. Computed after new activation for current cycle is updated.

func (*LearnNeurParams) DecayCaLrnSpk added in v1.5.1

func (ln *LearnNeurParams) DecayCaLrnSpk(nrn *Neuron, decay float32)

DecayCaLrnSpk decays neuron-level calcium learning and spiking variables by the given factor. Note: this is NOT called by default and is generally not useful, causing variability in these learning factors as a function of the decay parameter that then has impacts on learning rates etc. It is only here for reference or optional testing.

func (*LearnNeurParams) Defaults

func (ln *LearnNeurParams) Defaults()

func (*LearnNeurParams) InitNeurCa added in v1.3.9

func (ln *LearnNeurParams) InitNeurCa(nrn *Neuron)

InitNeurCa initializes the neuron-level calcium learning and spiking variables. Called by InitWts (at start of learning).

func (*LearnNeurParams) LrnNMDAFmRaw added in v1.3.11

func (ln *LearnNeurParams) LrnNMDAFmRaw(nrn *Neuron, geTot float32)

LrnNMDAFmRaw updates the separate NMDA conductance and calcium values based on GeTot = GeRaw + external ge conductance. These are the variables that drive learning -- can be the same as activation but also can be different for testing learning Ca effects independent of activation effects.

func (*LearnNeurParams) Update

func (ln *LearnNeurParams) Update()

type LearnSynParams

type LearnSynParams struct {
	Learn slbool.Bool `desc:"enable learning for this projection"`

	LRate    LRateParams     `desc:"learning rate parameters, supporting two levels of modulation on top of base learning rate."`
	Trace    TraceParams     `desc:"trace-based learning parameters"`
	KinaseCa kinase.CaParams `view:"inline" desc:"kinase calcium Ca integration parameters"`
	// contains filtered or unexported fields
}

LearnSynParams manages learning-related parameters at the synapse-level.

func (*LearnSynParams) CHLdWt

func (ls *LearnSynParams) CHLdWt(suCaP, suCaD, ruCaP, ruCaD float32) float32

CHLdWt returns the error-driven weight change component for a CHL contrastive hebbian learning rule, optionally using the checkmark temporally eXtended Contrastive Attractor Learning (XCAL) function

func (*LearnSynParams) Defaults

func (ls *LearnSynParams) Defaults()

func (*LearnSynParams) DeltaDWt added in v1.5.1

func (ls *LearnSynParams) DeltaDWt(plus, minus float32) float32

DeltaDWt returns the error-driven weight change component for a simple delta between a minus and plus phase factor, optionally using the checkmark temporally eXtended Contrastive Attractor Learning (XCAL) function
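A minimal sketch of the delta component in isolation (the import path and values are arbitrary assumptions):

	package main

	import (
		"fmt"

		"github.com/emer/axon/axon"
	)

	func main() {
		var ls axon.LearnSynParams
		ls.Defaults()

		plus, minus := float32(0.8), float32(0.5) // arbitrary plus / minus phase factors
		fmt.Println(ls.DeltaDWt(plus, minus))     // error-driven component, before lrate scaling
	}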

func (*LearnSynParams) Update

func (ls *LearnSynParams) Update()

type MatrixParams added in v1.7.0

type MatrixParams struct {
	// GPHasPools     bool        `desc:"do the GP pathways that we drive have separate pools that compete for selecting one out of multiple options in parallel (true) or is it a single big competition for Go vs. No (false).  If true, then Matrix must also have pools "`
	GateThr        float32 `desc:"threshold on layer Avg SpkMax for Matrix Go and VThal layers to count as having gated"`
	NoGoGeLrn      float32 `` /* 288-byte string literal not displayed */
	OtherMatrixIdx int32   `` /* 130-byte string literal not displayed */
	ThalLay1Idx    int32   `` /* 169-byte string literal not displayed */
	ThalLay2Idx    int32   `` /* 169-byte string literal not displayed */
	ThalLay3Idx    int32   `` /* 169-byte string literal not displayed */
	ThalLay4Idx    int32   `` /* 169-byte string literal not displayed */
	ThalLay5Idx    int32   `` /* 169-byte string literal not displayed */
}

MatrixParams has parameters for BG Striatum Matrix MSN layers These are the main Go / NoGo gating units in BG. DA, ACh learning rate modulation is pre-computed on the recv neuron RLRate variable via NeuroMod. Also uses Pool.Gated for InvertNoGate, updated in PlusPhase prior to DWt call. Must set Learn.NeuroMod.DAMod = D1Mod or D2Mod via SetBuildConfig("DAMod").

func (*MatrixParams) Defaults added in v1.7.0

func (mp *MatrixParams) Defaults()

func (*MatrixParams) Update added in v1.7.0

func (mp *MatrixParams) Update()

type MatrixPrjnParams added in v1.7.0

type MatrixPrjnParams struct {
	CurTrlDA    slbool.Bool `` /* 277-byte string literal not displayed */
	NoGateLRate float32     `` /* 272-byte string literal not displayed */
	AChDecay    float32     `` /* 167-byte string literal not displayed */
	UseHasRew   slbool.Bool `` /* 182-byte string literal not displayed */
}

MatrixPrjnParams for trace-based learning in the MatrixPrjn. A trace of synaptic co-activity is formed, and then modulated by dopamine whenever it occurs. This bridges the temporal gap between gating activity and subsequent activity, and is based biologically on synaptic tags. Trace is reset at time of reward based on ACh level from CINs.

func (*MatrixPrjnParams) Defaults added in v1.7.0

func (tp *MatrixPrjnParams) Defaults()

func (*MatrixPrjnParams) TraceDecay added in v1.7.0

func (tp *MatrixPrjnParams) TraceDecay(ctx *Context, ach float32) float32

TraceDecay returns the decay factor as a function of the ACh level and context.

func (*MatrixPrjnParams) Update added in v1.7.0

func (tp *MatrixPrjnParams) Update()

type NetThreads added in v1.6.2

type NetThreads struct {
	Neurons   int `desc:"for basic neuron-level computation -- highly parallel and linear in memory -- should be able to use a lot of threads"`
	SendSpike int `` /* 174-byte string literal not displayed */
	SynCa     int `` /* 142-byte string literal not displayed */
}

NetThreads parameterizes how many goroutines to use for each task

func (*NetThreads) Set added in v1.6.2

func (nt *NetThreads) Set(neurons, sendSpike, synCa int) error

Set sets the number of goroutines manually for each task. This exists mainly for testing; use SetDefaults in normal use, and set GOMAXPROCS=1 to force single-threaded operation.

func (*NetThreads) SetDefaults added in v1.6.2

func (nt *NetThreads) SetDefaults(nNeurons, nPrjns, nLayers int)

SetDefaults uses heuristics to determine the number of goroutines to use for each task: Neurons, SendSpike, SynCa.
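
For illustration, a minimal sketch of overriding the threading configuration manually after Build (which normally configures it via SetDefaults); the configThreads helper name and the thread counts are illustrative assumptions, not recommended settings, and the standard log and fmt imports are assumed:

	func configThreads(net *axon.Network) {
		// manually assign goroutine counts for Neurons, SendSpike, SynCa (illustrative values)
		if err := net.Threads.Set(4, 2, 4); err != nil {
			log.Println(err)
		}
		fmt.Println(net.Threads.String()) // report the resulting thread configuration
	}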

func (*NetThreads) String added in v1.7.1

func (nt *NetThreads) String() string

type Network

type Network struct {
	NetworkBase
	SlowInterval int `` /* 174-byte string literal not displayed */
	SlowCtr      int `inactive:"+" desc:"counter for how long it has been since last SlowAdapt step"`
}

axon.Network has parameters for running a basic rate-coded Axon network

func NewNetwork added in v1.2.94

func NewNetwork(name string) *Network

NewNetwork returns a new axon Network

func (*Network) AddAmygdala added in v1.7.0

func (nt *Network) AddAmygdala(prefix string, neg bool, nUs, unY, unX int, space float32) (blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, cemPos, cemNeg, pptg AxonLayer)

AddAmygdala adds a full amygdala complex including BLA, CeM, and PPTg. Inclusion of negative valence is optional with neg arg -- neg* layers are nil if not included.

func (*Network) AddBG added in v1.7.0

func (nt *Network) AddBG(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, gpeOut, gpeIn, gpeTA, stnp, stns, gpi AxonLayer)

AddBG adds MtxGo, MtxNo, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi layers, with given optional prefix. Only the Matrix has a pool-based 4D shape by default -- use pools for "role"-like elements where matches need to be detected. All GP / STN layers have gpNeur neurons. Appropriate connections are made between layers, using standard styles. space is the spacing between layers (2 typical). A CIN or more widely used RSalienceLayer should be added and project ACh to the MtxGo, No layers.
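
As a rough sketch of how AddBG and AddCINLayer might be combined (the empty prefix, pool and neuron sizes, and spacing are illustrative assumptions only):

	func addBGExample(net *axon.Network) {
		space := float32(2)
		mtxGo, mtxNo, _, _, _, _, _, _ := net.AddBG("", 1, 1, 6, 6, 6, 6, space)
		// CIN provides the ACh signal that modulates Matrix learning around US / CS events
		net.AddCINLayer("CIN", mtxGo.Name(), mtxNo.Name(), space)
	}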

func (*Network) AddBG4D added in v1.7.0

func (nt *Network) AddBG4D(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, gpeOut, gpeIn, gpeTA, stnp, stns, gpi AxonLayer)

AddBG4D adds MtxGo, MtxNo, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi layers, with given optional prefix. This version makes 4D pools throughout the GP layers, with Pools representing separable gating domains. All GP / STN layers have gpNeur neurons. Appropriate PoolOneToOne connections are made between layers, using standard styles. space is the spacing between layers (2 typical). A CIN or more widely used RSalienceLayer should be added and project ACh to the MtxGo, No layers.

func (*Network) AddBLALayers added in v1.7.0

func (nt *Network) AddBLALayers(prefix string, pos bool, nUs, unY, unX int, rel relpos.Relations, space float32) (acq, ext AxonLayer)

AddBLALayers adds two BLA layers, acquisition / extinction / D1 / D2, for positive or negative valence

func (*Network) AddCINLayer added in v1.7.0

func (nt *Network) AddCINLayer(name, mtxGo, mtxNo string, space float32) *Layer

AddCINLayer adds a RSalienceLayer unsigned reward salience coding ACh layer which sends ACh to given Matrix Go and No layers (names), and is default located to the right of the MtxNo layer with given spacing. CIN is a cholinergic interneuron interspersed in the striatum that shows these response properties and modulates learning in the striatum around US and CS events. If other ACh modulation is needed, a global RSalienceLayer can be used.

func (*Network) AddCTLayer2D added in v1.7.0

func (nt *Network) AddCTLayer2D(name string, nNeurY, nNeurX int) *Layer

AddCTLayer2D adds a CT Layer of given size, with given name.

func (*Network) AddCTLayer4D added in v1.7.0

func (nt *Network) AddCTLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddCTLayer4D adds a CT Layer of given size, with given name.

func (*Network) AddClampDaLayer added in v1.7.0

func (nt *Network) AddClampDaLayer(name string) *Layer

AddClampDaLayer adds a ClampDaLayer of given name

func (*Network) AddGPeLayer2D added in v1.7.0

func (nt *Network) AddGPeLayer2D(name string, nNeurY, nNeurX int) *Layer

AddGPeLayer2D adds a GPLayer of given size, with given name. Must set the GPType BuildConfig setting to the appropriate GPLayerType.

func (*Network) AddGPeLayer4D added in v1.7.0

func (nt *Network) AddGPeLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddGPeLayer4D adds a GPLayer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.

func (*Network) AddGPiLayer2D added in v1.7.0

func (nt *Network) AddGPiLayer2D(name string, nNeurY, nNeurX int) *Layer

AddGPiLayer2D adds a GPiLayer of given size, with given name.

func (*Network) AddGPiLayer4D added in v1.7.0

func (nt *Network) AddGPiLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddGPiLayer4D adds a GPiLayer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.

func (*Network) AddInputPulv2D added in v1.7.0

func (nt *Network) AddInputPulv2D(name string, nNeurY, nNeurX int, space float32) (emer.Layer, *Layer)

AddInputPulv2D adds an Input layer and a Pulvinar layer of given size, with given name. The Input layer is set as the Driver of the Pulvinar layer. Both layers have SetClass(name) called to allow shared params.

func (*Network) AddInputPulv4D added in v1.7.0

func (nt *Network) AddInputPulv4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32) (emer.Layer, *Layer)

AddInputPulv4D adds an Input layer and a Pulvinar layer of given size, with given name. The Input layer is set as the Driver of the Pulvinar layer. Both layers have SetClass(name) called to allow shared params.

func (*Network) AddMatrixLayer added in v1.7.0

func (nt *Network) AddMatrixLayer(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, da DAModTypes) *Layer

AddMatrixLayer adds a MatrixLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. da gives the DaReceptor type (D1R = Go, D2R = NoGo)

func (*Network) AddPPTgLayer added in v1.7.0

func (nt *Network) AddPPTgLayer(prefix string, nUs, unY, unX int) AxonLayer

AddPPTgLayer adds a PPTgLayer

func (*Network) AddPTMaintLayer2D added in v1.7.2

func (nt *Network) AddPTMaintLayer2D(name string, nNeurY, nNeurX int) *Layer

AddPTMaintLayer2D adds a PTMaintLayer of given size, with given name.

func (*Network) AddPTMaintLayer4D added in v1.7.2

func (nt *Network) AddPTMaintLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddPTMaintLayer4D adds a PTMaintLayer of given size, with given name.

func (*Network) AddPTMaintThalForSuper added in v1.7.2

func (nt *Network) AddPTMaintThalForSuper(super, ct emer.Layer, suffix string, superToPT, ptSelf, ctToThal prjn.Pattern, space float32) (pt, thal emer.Layer)

AddPTMaintThalForSuper adds a PTMaint pyramidal tract active maintenance layer and a Thalamus layer for given superficial layer (deep.SuperLayer) and associated CT, with given suffix (e.g., MD, VM). PT and Thal have SetClass(super.Name()) called to allow shared params. Projections are made with given classes: SuperToPT, PTSelfMaint, CTtoThal, PTtoThal, ThalToPT. The PT and Thal layers are positioned behind the CT layer.
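
A hedged sketch of adding a PT maintenance + thalamus loop on top of a Super / CT pair; the layer name, sizes, "MD" suffix, and projection patterns are illustrative assumptions (using the emer prjn package for the patterns):

	func addPFCMaint(net *axon.Network) {
		space := float32(2)
		super, ct := net.AddSuperCT2D("PFC", 10, 10, space, prjn.NewOneToOne())
		// superToPT, ptSelf, ctToThal patterns chosen for illustration only
		pt, thal := net.AddPTMaintThalForSuper(super, ct, "MD",
			prjn.NewOneToOne(), prjn.NewFull(), prjn.NewFull(), space)
		_, _ = pt, thal
	}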

func (*Network) AddPulvForSuper added in v1.7.0

func (nt *Network) AddPulvForSuper(super emer.Layer, space float32) emer.Layer

AddPulvForSuper adds a Pulvinar for given superficial layer (SuperLayer) with a P suffix. The Pulv.Driver is set to Super. The Pulv layer needs other CT connections from higher up to predict this layer. Pulvinar is positioned behind the CT layer.

func (*Network) AddPulvLayer2D added in v1.7.0

func (nt *Network) AddPulvLayer2D(name string, nNeurY, nNeurX int) *Layer

AddPulvLayer2D adds a Pulvinar Layer of given size, with given name.

func (*Network) AddPulvLayer4D added in v1.7.0

func (nt *Network) AddPulvLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddPulvLayer4D adds a Pulvinar Layer of given size, with given name.

func (*Network) AddRSalienceAChLayer added in v1.7.0

func (nt *Network) AddRSalienceAChLayer(name string) *Layer

AddRSalienceAChLayer adds an RSalienceAChLayer unsigned reward salience coding ACh layer.

func (*Network) AddRWLayers added in v1.7.0

func (nt *Network) AddRWLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, da AxonLayer)

AddRWLayers adds a simple Rescorla-Wagner (PV only) dopamine system, with a primary Reward layer, an RWPred prediction layer, and a dopamine layer that computes the difference between them. Only generates DA when the Rew layer has external input -- otherwise zero.
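
A minimal sketch of adding the RW dopamine layers; the empty prefix, relpos.Behind placement, and spacing are illustrative assumptions:

	func addRW(net *axon.Network) {
		rew, rp, da := net.AddRWLayers("", relpos.Behind, 2)
		_, _, _ = rew, rp, da // typically connected to / from other layers, e.g., via ConnectToRWPrjn
	}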

func (*Network) AddRewLayer added in v1.7.0

func (nt *Network) AddRewLayer(name string) *Layer

AddRewLayer adds a RewLayer of given name

func (*Network) AddSTNLayer2D added in v1.7.0

func (nt *Network) AddSTNLayer2D(name string, nNeurY, nNeurX int) *Layer

AddSTNLayer2D adds a subthalamic nucleus Layer of given size, with given name.

func (*Network) AddSTNLayer4D added in v1.7.0

func (nt *Network) AddSTNLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddSTNLayer4D adds a subthalamic nucleus Layer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.

func (*Network) AddSuperCT2D added in v1.7.0

func (nt *Network) AddSuperCT2D(name string, shapeY, shapeX int, space float32, pat prjn.Pattern) (super, ct emer.Layer)

AddSuperCT2D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn projection from Super to CT using given projection pattern, and NO Pulv Pulvinar. CT is placed Behind Super.

func (*Network) AddSuperCT4D added in v1.7.0

func (nt *Network) AddSuperCT4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32, pat prjn.Pattern) (super, ct emer.Layer)

AddSuperCT4D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn projection from Super to CT using given projection pattern, and NO Pulv Pulvinar. CT is placed Behind Super.

func (*Network) AddSuperLayer2D added in v1.7.0

func (nt *Network) AddSuperLayer2D(name string, nNeurY, nNeurX int) *Layer

AddSuperLayer2D adds a Super Layer of given size, with given name.

func (*Network) AddSuperLayer4D added in v1.7.0

func (nt *Network) AddSuperLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddSuperLayer4D adds a Super Layer of given size, with given name.

func (*Network) AddTDLayers added in v1.7.0

func (nt *Network) AddTDLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, ri, td AxonLayer)

AddTDLayers adds the standard TD temporal differences layers, generating a DA signal. The projection from Rew to RewInteg is given class TDRewToInteg -- it should have no learning and a fixed weight of 1.

func (*Network) AddThalLayer2D added in v1.7.0

func (nt *Network) AddThalLayer2D(name string, nNeurY, nNeurX int) *Layer

AddThalLayer2D adds a BG gated thalamus (e.g., VA/VL/VM, MD) Layer of given size, with given name. This version has a 2D structure

func (*Network) AddThalLayer4D added in v1.7.0

func (nt *Network) AddThalLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddThalLayer4D adds a BG gated thalamus (e.g., VA/VL/VM, MD) Layer of given size, with given name. This version has a 4D structure, with Pools representing separable gating domains.

func (*Network) AsAxon

func (nt *Network) AsAxon() *Network

func (*Network) ClearTargExt added in v1.2.65

func (nt *Network) ClearTargExt()

ClearTargExt clears external inputs Ext that were set from target values Target. This can be called to simulate alpha cycles within theta cycles, for example.

func (*Network) CollectDWts

func (nt *Network) CollectDWts(dwts *[]float32) bool

CollectDWts writes all of the synaptic DWt values to the given dwts slice, which is allocated to the required size if dwts is nil, in which case the method returns true so that the actual length of dwts can be passed next time around. Used for MPI sharing of weight changes across processors.
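
A sketch of the intended MPI usage pattern, under the assumption that the cross-processor summation step (e.g., an Allreduce) is handled elsewhere; shareDWts and nProcs are illustrative names:

	func shareDWts(net *axon.Network, nProcs int) {
		var dwts []float32
		net.CollectDWts(&dwts) // allocates dwts on the first call, reuses it thereafter
		// ... sum dwts across all processors here (e.g., MPI Allreduce), result back into dwts ...
		net.SetDWts(dwts, nProcs) // navg = number of processors aggregated in dwts
	}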

func (*Network) ConnectCTSelf added in v1.7.0

func (nt *Network) ConnectCTSelf(ly emer.Layer, pat prjn.Pattern) (ctxt, maint emer.Prjn)

ConnectCTSelf adds a Self (Lateral) CTCtxtPrjn projection within a CT layer, in addition to a regular lateral projection, which supports active maintenance. The CTCtxtPrjn has a Class label of CTSelfCtxt, and the regular one is CTSelfMaint

func (*Network) ConnectCtxtToCT added in v1.7.0

func (nt *Network) ConnectCtxtToCT(send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectCtxtToCT adds a CTCtxtPrjn from given sending layer to a CT layer

func (*Network) ConnectPTMaintSelf added in v1.7.2

func (nt *Network) ConnectPTMaintSelf(ly emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectPTMaintSelf adds a Self (Lateral) projection within a PTMaintLayer, which supports active maintenance, with a class of PTSelfMaint

func (*Network) ConnectSuperToCT added in v1.7.0

func (nt *Network) ConnectSuperToCT(send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectSuperToCT adds a CTCtxtPrjn from the given sending Super layer to a CT layer. This automatically sets the FmSuper flag to engage proper defaults. Uses the given projection pattern -- e.g., Full, OneToOne, or PoolOneToOne.

func (*Network) ConnectToBLA added in v1.7.0

func (nt *Network) ConnectToBLA(send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectToBLA adds a BLAPrjn from given sending layer to a BLA layer

func (*Network) ConnectToMatrix added in v1.7.0

func (nt *Network) ConnectToMatrix(send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectToMatrix adds a MatrixPrjn from given sending layer to a matrix layer

func (*Network) ConnectToPulv added in v1.7.0

func (nt *Network) ConnectToPulv(super, ct, pulv emer.Layer, toPulvPat, fmPulvPat prjn.Pattern) (toPulv, toSuper, toCT emer.Prjn)

ConnectToPulv connects Super and CT with the given Pulv: CT -> Pulv has class CTToPulv; the projections from Pulv have type Back and class FmPulv. toPulvPat is the prjn.Pattern for CT -> Pulv, and fmPulvPat is for Pulv -> CT, Super. Typically Pulv is a different shape than Super and CT, so use Full or an appropriate topological pattern.
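
A sketch of wiring a predictive (Pulvinar) loop for a Super / CT pair; the layer name, sizes, and the use of Full patterns are illustrative assumptions:

	func addPredictive(net *axon.Network) {
		space := float32(2)
		super, ct := net.AddSuperCT2D("V1", 10, 10, space, prjn.NewOneToOne())
		pulv := net.AddPulvForSuper(super, space) // sets the Pulv layer's Driver to super
		net.ConnectToPulv(super, ct, pulv, prjn.NewFull(), prjn.NewFull())
	}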

func (*Network) ConnectToRWPrjn added in v1.7.0

func (nt *Network) ConnectToRWPrjn(send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectToRWPrjn adds an RWPrjn from given sending layer to an RWPred layer.

func (*Network) Cycle

func (nt *Network) Cycle(ctx *Context)

Cycle runs one cycle of activation updating. It just calls the CycleImpl method through the AxonNetwork interface, thereby ensuring any specialized algorithm-specific version is called as needed (in general, strongly prefer updating the Layer specific version).

func (*Network) CycleImpl

func (nt *Network) CycleImpl(ctx *Context)

CycleImpl handles entire update for one cycle (msec) of neuron activity

func (*Network) DWt

func (nt *Network) DWt(ctx *Context)

DWt computes the weight change (learning) based on current running-average activation values

func (*Network) DWtImpl

func (nt *Network) DWtImpl(ctx *Context)

DWtImpl computes the weight change (learning) based on current running-average activation values

func (*Network) DecayState

func (nt *Network) DecayState(ctx *Context, decay, glong float32)

DecayState decays activation state by the given proportion, e.g., 1 = decay completely, 0 = decay not at all. glong is a separate decay factor for long-timescale conductances (g). This is called automatically in NewState, but is available here for ad-hoc decay cases.

func (*Network) DecayStateByType added in v1.7.1

func (nt *Network) DecayStateByType(ctx *Context, decay, glong float32, types ...LayerTypes)

DecayStateByType decays activation state for the given layer type(s) by the given proportion, e.g., 1 = decay completely, 0 = decay not at all. glong is a separate decay factor for long-timescale conductances (g).
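
For example, an ad-hoc partial decay between trials might look like the following sketch (the decay proportions are illustrative, not recommended values; ctx is assumed to be the simulation's *Context):

	func partialDecay(net *axon.Network, ctx *axon.Context) {
		// decay activation state by 50% and long-timescale conductances by 20%
		net.DecayState(ctx, 0.5, 0.2)
	}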

func (*Network) Defaults

func (nt *Network) Defaults()

Defaults sets all the default parameters for all layers and projections

func (*Network) InitActs

func (nt *Network) InitActs()

InitActs fully initializes activation state -- not automatically called

func (*Network) InitExt

func (nt *Network) InitExt()

InitExt initializes external input state -- call prior to applying external inputs to layers

func (*Network) InitGScale added in v1.2.92

func (nt *Network) InitGScale()

InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.

func (*Network) InitTopoSWts added in v1.2.75

func (nt *Network) InitTopoSWts()

InitTopoSWts initializes SWt structural weight parameters from prjn types that support topographic weight patterns and have the flags set to support it, including: prjn.PoolTile, prjn.Circle. Call before InitWts if using topographic weights.

func (*Network) InitWts

func (nt *Network) InitWts()

InitWts initializes synaptic weights and all other associated long-term state variables including running-average state values (e.g., layer running average activations etc)

func (*Network) LRateMod added in v1.6.13

func (nt *Network) LRateMod(mod float32)

LRateMod sets the LRate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LRateSched). Updates the effective learning rate factor accordingly.

func (*Network) LRateSched added in v1.6.13

func (nt *Network) LRateSched(sched float32)

LRateSched sets the schedule-based learning rate multiplier. See also LRateMod. Updates the effective learning rate factor accordingly.
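
A sketch of a simple learning rate schedule driven from the training loop; the epoch threshold and multiplier are illustrative assumptions:

	func lrateSchedule(net *axon.Network, epoch int) {
		if epoch == 50 {
			net.LRateSched(0.5) // halve the effective learning rate from epoch 50 onward
		}
	}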

func (*Network) LayersSetOff

func (nt *Network) LayersSetOff(off bool)

LayersSetOff sets the Off flag for all layers to given setting

func (*Network) MinusPhase added in v1.2.63

func (nt *Network) MinusPhase(ctx *Context)

MinusPhase does updating after end of minus phase

func (*Network) MinusPhaseImpl added in v1.2.63

func (nt *Network) MinusPhaseImpl(ctx *Context)

MinusPhaseImpl does updating after end of minus phase

func (*Network) NewLayer

func (nt *Network) NewLayer() emer.Layer

NewLayer returns new layer of proper type

func (*Network) NewPrjn

func (nt *Network) NewPrjn() emer.Prjn

NewPrjn returns new prjn of proper type

func (*Network) NewState added in v1.2.63

func (nt *Network) NewState(ctx *Context)

NewState handles all initialization at start of new input pattern. Should already have presented the external input to the network at this point. Does NOT call InitGScale()
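
To make the overall order of operations concrete, here is a hedged sketch of one theta-cycle trial; the cycle counts are illustrative, and applying external inputs and advancing the Context cycle counter are elided:

	func thetaTrial(net *axon.Network, ctx *axon.Context) {
		net.InitExt()
		// ... apply external inputs to input / target layers here ...
		net.NewState(ctx)
		for cyc := 0; cyc < 150; cyc++ { // minus phase (count illustrative)
			net.Cycle(ctx) // the Context cycle counter would normally also be advanced each cycle
		}
		net.MinusPhase(ctx)
		for cyc := 0; cyc < 50; cyc++ { // plus phase (count illustrative)
			net.Cycle(ctx)
		}
		net.PlusPhase(ctx)
		net.DWt(ctx)
		net.WtFmDWt(ctx)
	}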

func (*Network) NewStateImpl added in v1.2.63

func (nt *Network) NewStateImpl(ctx *Context)

NewStateImpl handles all initialization at start of new input state

func (*Network) PlusPhase added in v1.2.63

func (nt *Network) PlusPhase(ctx *Context)

PlusPhase does updating after end of plus phase

func (*Network) PlusPhaseImpl added in v1.2.63

func (nt *Network) PlusPhaseImpl(ctx *Context)

PlusPhaseImpl does updating after end of plus phase

func (*Network) SetDWts

func (nt *Network) SetDWts(dwts []float32, navg int)

SetDWts sets the DWt weight changes from the given array of floats, which must be of the correct size. navg is the number of processors aggregated in these dwts -- some variables need to be averaged instead of summed (e.g., ActAvg).

func (*Network) SetSubMean added in v1.6.11

func (nt *Network) SetSubMean(trgAvg, prjn float32)

SetSubMean sets the SubMean parameters in all the layers in the network. trgAvg is for Learn.TrgAvgAct.SubMean; prjn is for the prjns' Learn.Trace.SubMean. In both cases, it is generally best to have both parameters set to 0 at the start of learning.
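
A sketch of the suggested usage; the epoch at which mean subtraction is turned back on is an illustrative assumption:

	func setSubMean(net *axon.Network, epoch int) {
		if epoch == 0 {
			net.SetSubMean(0, 0) // no mean subtraction at the start of learning
		} else if epoch == 10 { // illustrative switch point
			net.SetSubMean(1, 1)
		}
	}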

func (*Network) SizeReport

func (nt *Network) SizeReport() string

SizeReport returns a string reporting the size of each layer and projection in the network, and total memory footprint.

func (*Network) SlowAdapt added in v1.2.37

func (nt *Network) SlowAdapt(ctx *Context)

SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling, GScale conductance scaling, and adapting inhibition

func (*Network) SpkSt1 added in v1.5.10

func (nt *Network) SpkSt1(ctx *Context)

SpkSt1 saves current acts into SpkSt1 (using CaSpkP)

func (*Network) SpkSt2 added in v1.5.10

func (nt *Network) SpkSt2(ctx *Context)

SpkSt2 saves current acts into SpkSt2 (using CaSpkP)

func (*Network) SynFail added in v1.2.92

func (nt *Network) SynFail(ctx *Context)

SynFail updates synaptic failure

func (*Network) SynVarNames

func (nt *Network) SynVarNames() []string

SynVarNames returns the names of all the variables on the synapses in this network. Not all projections need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!

func (*Network) SynVarProps

func (nt *Network) SynVarProps() map[string]string

SynVarProps returns properties for variables

func (*Network) TargToExt added in v1.2.65

func (nt *Network) TargToExt()

TargToExt sets external input Ext from target values Target. This is done at the end of MinusPhase to allow targets to drive activity in the plus phase. This can be called separately to simulate alpha cycles within theta cycles, for example.

func (*Network) UnLesionNeurons

func (nt *Network) UnLesionNeurons()

UnLesionNeurons unlesions neurons in all layers in the network. Provides a clean starting point for subsequent lesion experiments.

func (*Network) UnitVarNames

func (nt *Network) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this network. Not all layers need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!

func (*Network) UnitVarProps

func (nt *Network) UnitVarProps() map[string]string

UnitVarProps returns properties for variables

func (*Network) UpdateExtFlags

func (nt *Network) UpdateExtFlags()

UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.

func (*Network) UpdateParams

func (nt *Network) UpdateParams()

UpdateParams updates all the derived parameters if any have changed, for all layers and projections

func (*Network) WtFmDWt

func (nt *Network) WtFmDWt(ctx *Context)

WtFmDWt updates the weights from delta-weight changes. Also calls SynScale every Interval times

func (*Network) WtFmDWtImpl

func (nt *Network) WtFmDWtImpl(ctx *Context)

WtFmDWtImpl updates the weights from delta-weight changes.

type NetworkBase added in v1.4.5

type NetworkBase struct {
	EmerNet       emer.Network          `` /* 274-byte string literal not displayed */
	Nm            string                `desc:"overall name of network -- helps discriminate if there are multiple"`
	WtsFile       string                `desc:"filename of last weights file loaded or saved"`
	LayMap        map[string]emer.Layer `view:"-" desc:"map of name to layers -- layer names must be unique"`
	LayClassMap   map[string][]string   `view:"-" desc:"map of layer classes -- made during Build"`
	MinPos        mat32.Vec3            `view:"-" desc:"minimum display position in network"`
	MaxPos        mat32.Vec3            `view:"-" desc:"maximum display position in network"`
	MetaData      map[string]string     `` /* 194-byte string literal not displayed */
	CPURecvSpikes bool                  `` /* 285-byte string literal not displayed */

	// Implementation level code below:
	MaxDelay uint32      `view:"-" desc:"maximum synaptic delay across any projection in the network -- used for sizing the GBuf accumulation buffer."`
	Layers   emer.Layers `desc:"array of layers, via emer.Layer interface pointer"`
	// todo: could now have concrete list of all Layer objects here
	LayParams  []LayerParams `view:"-" desc:"array of layer parameters, in 1-to-1 correspondence with Layers"`
	LayVals    []LayerVals   `view:"-" desc:"array of layer values, in 1-to-1 correspondence with Layers"`
	Neurons    []Neuron      `view:"-" desc:"entire network's allocation of neurons -- can be operated upon in parallel"`
	Prjns      []AxonPrjn    `view:"-" desc:"[Layers][RecvPrjns] pointers to all projections in the network, via the AxonPrjn interface"`
	PrjnParams []PrjnParams  `view:"-" desc:"[Layers][RecvPrjns] array of projection parameters, in 1-to-1 correspondence with Prjns"`
	Synapses   []Synapse     `view:"-" desc:"[Layers][RecvPrjns][RecvNeurons] entire network's allocation of synapses"`
	PrjnGBuf   []float32     `` /* 147-byte string literal not displayed */
	PrjnGSyns  []float32     `` /* 198-byte string literal not displayed */

	Threads     NetThreads             `desc:"threading config and implementation for CPU"`
	RecFunTimes bool                   `view:"-" desc:"record function timer information"`
	FunTimes    map[string]*timer.Time `view:"-" desc:"timers for each major function (step of processing)"`
	WaitGp      sync.WaitGroup         `view:"-" desc:"network-level wait group for synchronizing threaded layer calls"`
}

NetworkBase manages the basic structural components of a network (layers). The main Network then can just have the algorithm-specific code.

func (*NetworkBase) AddLayer added in v1.4.5

func (nt *NetworkBase) AddLayer(name string, shape []int, typ emer.LayerType) emer.Layer

AddLayer adds a new layer with given name and shape to the network. 2D and 4D layer shapes are generally preferred but not essential -- see AddLayer2D and 4D for convenience methods for those. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each unit group having 4 rows (Y) of 5 (X) units.

func (*NetworkBase) AddLayer2D added in v1.4.5

func (nt *NetworkBase) AddLayer2D(name string, shapeY, shapeX int, typ emer.LayerType) emer.Layer

AddLayer2D adds a new layer with given name and 2D shape to the network. 2D and 4D layer shapes are generally preferred but not essential.

func (*NetworkBase) AddLayer4D added in v1.4.5

func (nt *NetworkBase) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ emer.LayerType) emer.Layer

AddLayer4D adds a new layer with given name and 4D shape to the network. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each pool having 4 rows (Y) of 5 (X) neurons.

func (*NetworkBase) AddLayerInit added in v1.4.5

func (nt *NetworkBase) AddLayerInit(ly emer.Layer, name string, shape []int, typ emer.LayerType)

AddLayerInit is implementation routine that takes a given layer and adds it to the network, and initializes and configures it properly.

func (*NetworkBase) AllParams added in v1.4.5

func (nt *NetworkBase) AllParams() string

AllParams returns a listing of all parameters in the Network.

func (*NetworkBase) AllPrjnScales added in v1.4.5

func (nt *NetworkBase) AllPrjnScales() string

AllPrjnScales returns a listing of all PrjnScale parameters in the Network in all Layers, Recv projections. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.

func (*NetworkBase) ApplyParams added in v1.4.5

func (nt *NetworkBase) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)

ApplyParams applies given parameter style Sheet to layers and prjns in this network. Calls UpdateParams to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and error if there were any errors.
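
A hedged sketch of applying a parameter sheet, assuming the emer params package in which a Sheet is a slice of *params.Sel; the selector and parameter path are illustrative only:

	func applyParams(net *axon.Network) error {
		sheet := params.Sheet{
			&params.Sel{Sel: "Layer", Desc: "illustrative layer-wide setting",
				Params: params.Params{
					"Layer.Inhib.Layer.Gi": "1.1", // example path; consult the actual param docs
				}},
		}
		_, err := net.ApplyParams(&sheet, true) // setMsg = true prints each param as it is set
		return err
	}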

func (*NetworkBase) BidirConnectLayerNames added in v1.4.5

func (nt *NetworkBase) BidirConnectLayerNames(low, high string, pat prjn.Pattern) (lowlay, highlay emer.Layer, fwdpj, backpj emer.Prjn, err error)

BidirConnectLayerNames establishes bidirectional projections between two layers, referenced by name, with low = the lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkBase) BidirConnectLayers added in v1.4.5

func (nt *NetworkBase) BidirConnectLayers(low, high emer.Layer, pat prjn.Pattern) (fwdpj, backpj emer.Prjn)

BidirConnectLayers establishes bidirectional projections between two layers, with low = lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkBase) BidirConnectLayersPy added in v1.4.5

func (nt *NetworkBase) BidirConnectLayersPy(low, high emer.Layer, pat prjn.Pattern)

BidirConnectLayersPy establishes bidirectional projections between two layers, with low = lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build. Py = python version with no return vals.

func (*NetworkBase) Bounds added in v1.4.5

func (nt *NetworkBase) Bounds() (min, max mat32.Vec3)

func (*NetworkBase) BoundsUpdt added in v1.4.5

func (nt *NetworkBase) BoundsUpdt()

BoundsUpdt updates the Min / Max display bounds for 3D display

func (*NetworkBase) Build added in v1.4.5

func (nt *NetworkBase) Build() error

Build constructs the layer and projection state based on the layer shapes and patterns of interconnectivity. Configures threading using heuristics based on final network size.

func (*NetworkBase) BuildPrjnGBuf added in v1.7.2

func (nt *NetworkBase) BuildPrjnGBuf()

BuildPrjnGBuf builds the PrjnGBuf, PrjnGSyns, based on the MaxDelay values in the PrjnParams, which should have been configured by this point. Called by default in InitWts().

func (*NetworkBase) ConnectLayerNames added in v1.4.5

func (nt *NetworkBase) ConnectLayerNames(send, recv string, pat prjn.Pattern, typ emer.PrjnType) (rlay, slay emer.Layer, pj emer.Prjn, err error)

ConnectLayerNames establishes a projection between two layers, referenced by name adding to the recv and send projection lists on each side of the connection. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkBase) ConnectLayers added in v1.4.5

func (nt *NetworkBase) ConnectLayers(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType) emer.Prjn

ConnectLayers establishes a projection between two layers, adding to the recv and send projection lists on each side of the connection. Does not yet actually connect the units within the layers -- that requires Build.
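
Putting the pieces together, a minimal end-to-end configuration sketch (the layer names, sizes, and connectivity are illustrative assumptions; it uses the emer and prjn packages):

	func buildNet() (*axon.Network, error) {
		net := axon.NewNetwork("demo")
		in := net.AddLayer2D("Input", 5, 5, emer.Input)
		hid := net.AddLayer4D("Hidden", 2, 2, 4, 4, emer.Hidden) // 4D pools enable pool-level inhibition
		out := net.AddLayer2D("Output", 5, 5, emer.Target)
		full := prjn.NewFull()
		net.ConnectLayers(in, hid, full, emer.Forward)
		net.BidirConnectLayers(hid, out, full)
		net.Defaults()
		if err := net.Build(); err != nil {
			return nil, err
		}
		net.InitWts()
		return net, nil
	}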

func (*NetworkBase) ConnectLayersPrjn added in v1.4.5

func (nt *NetworkBase) ConnectLayersPrjn(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType, pj emer.Prjn) emer.Prjn

ConnectLayersPrjn makes connection using given projection between two layers, adding given prjn to the recv and send projection lists on each side of the connection. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkBase) DeleteAll added in v1.4.5

func (nt *NetworkBase) DeleteAll()

DeleteAll deletes all layers, prepares network for re-configuring and building

func (*NetworkBase) FunTimerStart added in v1.4.5

func (nt *NetworkBase) FunTimerStart(fun string)

FunTimerStart starts function timer for given function name -- ensures creation of timer

func (*NetworkBase) FunTimerStop added in v1.4.5

func (nt *NetworkBase) FunTimerStop(fun string)

FunTimerStop stops function timer -- timer must already exist

func (*NetworkBase) InitName added in v1.4.5

func (nt *NetworkBase) InitName(net emer.Network, name string)

InitName MUST be called to initialize the network's pointer to itself as an emer.Network which enables the proper interface methods to be called. Also sets the name.

func (*NetworkBase) Label added in v1.4.5

func (nt *NetworkBase) Label() string

func (*NetworkBase) LateralConnectLayer added in v1.4.5

func (nt *NetworkBase) LateralConnectLayer(lay emer.Layer, pat prjn.Pattern) emer.Prjn

LateralConnectLayer establishes a self-projection within given layer. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkBase) LateralConnectLayerPrjn added in v1.4.5

func (nt *NetworkBase) LateralConnectLayerPrjn(lay emer.Layer, pat prjn.Pattern, pj emer.Prjn) emer.Prjn

LateralConnectLayerPrjn makes lateral self-projection using given projection. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkBase) Layer added in v1.4.5

func (nt *NetworkBase) Layer(idx int) emer.Layer

func (*NetworkBase) LayerByName added in v1.4.5

func (nt *NetworkBase) LayerByName(name string) emer.Layer

LayerByName returns a layer by looking it up by name in the layer map (nil if not found). Will create the layer map if it is nil or a different size than layers slice, but otherwise needs to be updated manually.

func (*NetworkBase) LayerByNameTry added in v1.4.5

func (nt *NetworkBase) LayerByNameTry(name string) (emer.Layer, error)

LayerByNameTry returns a layer by looking it up by name -- returns error message if layer is not found

func (*NetworkBase) LayerMapParallel added in v1.6.17

func (nt *NetworkBase) LayerMapParallel(fun func(ly AxonLayer), funame string, nThreads int)

LayerMapParallel applies function of given name to all layers using nThreads go routines if nThreads > 1, otherwise runs sequentially.

func (*NetworkBase) LayerMapSeq added in v1.6.17

func (nt *NetworkBase) LayerMapSeq(fun func(ly AxonLayer), funame string)

LayerMapSeq applies function of given name to all layers sequentially.

func (*NetworkBase) LayersByClass added in v1.4.5

func (nt *NetworkBase) LayersByClass(classes ...string) []string

LayersByClass returns a list of layer names by given class(es). Lists are compiled when network Build() function called. The layer Type is always included as a Class, along with any other space-separated strings specified in Class for parameter styling, etc. If no classes are passed, all layer names in order are returned.

func (*NetworkBase) LayersByType added in v1.7.1

func (nt *NetworkBase) LayersByType(layType ...LayerTypes) []string

LayersByType returns a list of layer names by given layer types. Lists are compiled when the network Build() function is called. The layer Type is always included as a Class, along with any other space-separated strings specified in Class for parameter styling, etc. If no types are passed, all layer names in order are returned.
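
For example, a sketch of iterating over layers that were given a shared class via SetClass during configuration; the class name "PFC" is an illustrative assumption:

	func visitClass(net *axon.Network) {
		for _, nm := range net.LayersByClass("PFC") {
			ly := net.LayerByName(nm)
			_ = ly // e.g., adjust parameters or collect stats per layer here
		}
	}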

func (*NetworkBase) Layout added in v1.4.5

func (nt *NetworkBase) Layout()

Layout computes the 3D layout of layers based on their relative position settings

func (*NetworkBase) MakeLayMap added in v1.4.5

func (nt *NetworkBase) MakeLayMap()

MakeLayMap updates layer map based on current layers

func (*NetworkBase) NLayers added in v1.4.5

func (nt *NetworkBase) NLayers() int

func (*NetworkBase) Name added in v1.4.5

func (nt *NetworkBase) Name() string

emer.Network interface methods:

func (*NetworkBase) NeuronFun added in v1.6.0

func (nt *NetworkBase) NeuronFun(fun func(ly AxonLayer, ni uint32, nrn *Neuron), funame string)

NeuronFun applies function of given name to all neurons, using NetThreads.Neurons number of goroutines.

func (*NetworkBase) NeuronMapParallel added in v1.6.17

func (nt *NetworkBase) NeuronMapParallel(fun func(ly AxonLayer, ni uint32, nrn *Neuron), funame string, nThreads int)

NeuronMapParallel applies function of given name to all neurons using nThreads go routines if nThreads > 1, otherwise runs sequentially.

func (*NetworkBase) NeuronMapSequential added in v1.6.17

func (nt *NetworkBase) NeuronMapSequential(fun func(ly AxonLayer, ni uint32, nrn *Neuron), funame string)

NeuronMapSequential applies function of given name to all neurons sequentially.

func (*NetworkBase) NonDefaultParams added in v1.4.5

func (nt *NetworkBase) NonDefaultParams() string

NonDefaultParams returns a listing of all parameters in the Network that are not at their default values -- useful for setting param styles etc.

func (*NetworkBase) OpenWtsCpp added in v1.4.5

func (nt *NetworkBase) OpenWtsCpp(filename gi.FileName) error

OpenWtsCpp opens network weights (and any other state that adapts with learning) from old C++ emergent format. If filename has .gz extension, then file is gzip uncompressed.

func (*NetworkBase) OpenWtsJSON added in v1.4.5

func (nt *NetworkBase) OpenWtsJSON(filename gi.FileName) error

OpenWtsJSON opens network weights (and any other state that adapts with learning) from a JSON-formatted file. If filename has .gz extension, then file is gzip uncompressed.

func (*NetworkBase) PrjnMapParallel added in v1.6.17

func (nt *NetworkBase) PrjnMapParallel(fun func(prjn AxonPrjn), funame string, nThreads int)

PrjnMapParallel applies function of given name to all projections using nThreads go routines if nThreads > 1, otherwise runs sequentially.

func (*NetworkBase) PrjnMapSeq added in v1.6.17

func (nt *NetworkBase) PrjnMapSeq(fun func(pj AxonPrjn), funame string)

PrjnMapSeq applies function of given name to all projections sequentially.

func (*NetworkBase) ReadWtsCpp added in v1.4.5

func (nt *NetworkBase) ReadWtsCpp(r io.Reader) error

ReadWtsCpp reads the weights from old C++ emergent format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.

func (*NetworkBase) ReadWtsJSON added in v1.4.5

func (nt *NetworkBase) ReadWtsJSON(r io.Reader) error

ReadWtsJSON reads network weights from the receiver-side perspective in a JSON text format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.

func (*NetworkBase) SaveWtsJSON added in v1.4.5

func (nt *NetworkBase) SaveWtsJSON(filename gi.FileName) error

SaveWtsJSON saves network weights (and any other state that adapts with learning) to a JSON-formatted file. If filename has .gz extension, then file is gzip compressed.
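
A sketch of saving and re-loading weights, assuming the gi.FileName string type from the GoGi package; the filename is illustrative:

	func saveAndLoadWts(net *axon.Network) error {
		// the .gz extension triggers gzip compression / decompression
		if err := net.SaveWtsJSON(gi.FileName("trained.wts.gz")); err != nil {
			return err
		}
		return net.OpenWtsJSON(gi.FileName("trained.wts.gz"))
	}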

func (*NetworkBase) SendSpikeFun added in v1.6.2

func (nt *NetworkBase) SendSpikeFun(fun func(ly AxonLayer), funame string)

SendSpikeFun applies function of given name to all layers using as many goroutines as configured in NetThreads.SendSpike

func (*NetworkBase) SetWts added in v1.4.5

func (nt *NetworkBase) SetWts(nw *weights.Network) error

SetWts sets the weights for this network from weights.Network decoded values

func (*NetworkBase) StdVertLayout added in v1.4.5

func (nt *NetworkBase) StdVertLayout()

StdVertLayout arranges layers in a standard vertical (z axis stack) layout, by setting the Rel settings

func (*NetworkBase) SynCaFun added in v1.6.2

func (nt *NetworkBase) SynCaFun(fun func(pj AxonPrjn), funame string)

SynCaFun applies function of given name to all projections, using NetThreads.SynCa number of goroutines.

func (*NetworkBase) ThreadReport added in v1.7.1

func (nt *NetworkBase) ThreadReport()

ThreadReport reports the number of threads used.

func (*NetworkBase) TimerReport added in v1.4.5

func (nt *NetworkBase) TimerReport()

TimerReport reports the amount of time spent in each function, and in each thread

func (*NetworkBase) VarRange added in v1.4.5

func (nt *NetworkBase) VarRange(varNm string) (min, max float32, err error)

VarRange returns the min / max values for the given variable. TODO: support r. and s. projection values.

func (*NetworkBase) WriteWtsJSON added in v1.4.5

func (nt *NetworkBase) WriteWtsJSON(w io.Writer) error

WriteWtsJSON writes the weights from this network from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.

type NeuroModParams added in v1.7.0

type NeuroModParams struct {
	DAMod       DAModTypes `desc:"effects of dopamine modulation on excitatory and inhibitory conductances"`
	DAModGain   float32    `` /* 194-byte string literal not displayed */
	DALRateMod  float32    `` /* 165-byte string literal not displayed */
	AChLRateMod float32    `` /* 168-byte string literal not displayed */
	AChDisInhib float32    `min:"0" def:"0,5" desc:"amount of extra Gi inhibition added in proportion to 1 - ACh level -- makes ACh disinhibitory"`
	BurstGain   float32    `` /* 189-byte string literal not displayed */
	DipGain     float32    `` /* 249-byte string literal not displayed */
	// contains filtered or unexported fields
}

NeuroModParams specifies the effects of neuromodulators on neural activity and learning rate. These can apply to any neuron type, and are applied in the core cycle update equations.

func (*NeuroModParams) Defaults added in v1.7.0

func (nm *NeuroModParams) Defaults()

func (*NeuroModParams) GGain added in v1.7.0

func (nm *NeuroModParams) GGain(da float32) float32

GGain returns the effective Ge and Gi gain factor given the dopamine (DA) +/- burst / dip value (0 = tonic level). The factor is 1 for no modulation, otherwise higher or lower.

func (*NeuroModParams) GiFmACh added in v1.7.0

func (nm *NeuroModParams) GiFmACh(ach float32) float32

GiFmACh returns the amount of extra inhibition to add based on disinhibitory effects of ACh -- no inhibition when ACh = 1, extra when < 1.

func (*NeuroModParams) LRMod added in v1.7.0

func (nm *NeuroModParams) LRMod(da, ach float32) float32

LRMod returns the overall learning rate modulation factor due to neuromodulation from the given dopamine (DA) and ACh inputs. If DALRateMod is > 0 and DAMod == D1Mod or D2Mod, then the sign is a function of the DA.

func (*NeuroModParams) LRModFact added in v1.7.0

func (nm *NeuroModParams) LRModFact(pct, val float32) float32

LRModFact returns learning rate modulation factor for given inputs.

func (*NeuroModParams) Update added in v1.7.0

func (nm *NeuroModParams) Update()

type NeuroModVals added in v1.7.0

type NeuroModVals struct {
	Rew      float32     `` /* 186-byte string literal not displayed */
	HasRew   slbool.Bool `inactive:"+" desc:"must be set to true when a reward is present -- otherwise Rew is ignored"`
	RewPred  float32     `inactive:"+" desc:"reward prediction -- computed by a special reward prediction layer"`
	PrevPred float32     `inactive:"+" desc:"previous time step reward prediction -- e.g., for TDPredLayer"`
	DA       float32     `` /* 279-byte string literal not displayed */
	ACh      float32     `` /* 259-byte string literal not displayed */
	NE       float32     `inactive:"+" desc:"norepinephrine -- not yet in use"`
	Ser      float32     `inactive:"+" desc:"serotonin -- not yet in use"`

	AChRaw float32 `inactive:"+" desc:"raw ACh value used in updating global ACh value by RSalienceAChLayer"`
	// contains filtered or unexported fields
}

NeuroModVals are neuromodulatory values -- they are global to the layer and affect learning rate and other neural activity parameters of neurons.

func (*NeuroModVals) NewState added in v1.7.0

func (nm *NeuroModVals) NewState()

NewState is called by Context.NewState at start of new trial

func (*NeuroModVals) Reset added in v1.7.0

func (nm *NeuroModVals) Reset()

func (*NeuroModVals) SetRew added in v1.7.0

func (nm *NeuroModVals) SetRew(rew float32, hasRew bool)

SetRew is a convenience function for setting the external reward

type Neuron

type Neuron struct {
	Flags    NeuronFlags `desc:"bit flags for binary state variables"`
	NeurIdx  uint32      `desc:"index of this neuron within its owning layer"`
	LayIdx   uint32      `desc:"index of the layer that this neuron belongs to -- needed for neuron-level parallel code."`
	SubPool  uint32      `` /* 214-byte string literal not displayed */
	SubPoolN uint32      `desc:"index in network-wide list of all pools"`

	Spike  float32 `desc:"whether neuron has spiked or not on this cycle (0 or 1)"`
	Spiked float32 `` /* 224-byte string literal not displayed */
	Act    float32 `` /* 402-byte string literal not displayed */
	ActInt float32 `` /* 478-byte string literal not displayed */
	ActM   float32 `` /* 228-byte string literal not displayed */
	ActP   float32 `` /* 229-byte string literal not displayed */
	Ext    float32 `desc:"external input: drives activation of unit from outside influences (e.g., sensory input)"`
	Target float32 `desc:"target value: drives learning to produce this activation value"`

	Ge     float32 `desc:"total excitatory conductance, including all forms of excitation (e.g., NMDA) -- does *not* include Gbar.E"`
	Gi     float32 `desc:"total inhibitory synaptic conductance -- the net inhibitory input to the neuron -- does *not* include Gbar.I"`
	Gk     float32 `` /* 148-byte string literal not displayed */
	Inet   float32 `desc:"net current produced by all channels -- drives update of Vm"`
	Vm     float32 `desc:"membrane potential -- integrates Inet current over time"`
	VmDend float32 `desc:"dendritic membrane potential -- has a slower time constant, is not subject to the VmR reset after spiking"`

	CaSyn   float32 `` /* 459-byte string literal not displayed */
	CaSpkM  float32 `` /* 294-byte string literal not displayed */
	CaSpkP  float32 `` /* 328-byte string literal not displayed */
	CaSpkD  float32 `` /* 325-byte string literal not displayed */
	CaSpkPM float32 `desc:"minus-phase snapshot of the CaSpkP value -- similar to ActM but using a more directly spike-integrated value."`
	CaLrn   float32 `` /* 669-byte string literal not displayed */
	CaM     float32 `` /* 174-byte string literal not displayed */
	CaP     float32 `` /* 192-byte string literal not displayed */
	CaD     float32 `` /* 192-byte string literal not displayed */
	CaDiff  float32 `desc:"difference between CaP - CaD -- this is the error signal that drives error-driven learning."`

	SpkMaxCa float32 `` /* 213-byte string literal not displayed */
	SpkMax   float32 `` /* 235-byte string literal not displayed */
	SpkPrv   float32 `` /* 155-byte string literal not displayed */
	SpkSt1   float32 `` /* 235-byte string literal not displayed */
	SpkSt2   float32 `` /* 236-byte string literal not displayed */
	RLRate   float32 `` /* 191-byte string literal not displayed */

	ActAvg  float32 `` /* 194-byte string literal not displayed */
	AvgPct  float32 `` /* 158-byte string literal not displayed */
	TrgAvg  float32 `` /* 169-byte string literal not displayed */
	DTrgAvg float32 `` /* 164-byte string literal not displayed */
	AvgDif  float32 `` /* 173-byte string literal not displayed */
	Attn    float32 `desc:"Attentional modulation factor, which can be set by special layers such as the TRC -- multiplies Ge"`

	ISI    float32 `desc:"current inter-spike-interval -- counts up since last spike.  Starts at -1 when initialized."`
	ISIAvg float32 `` /* 320-byte string literal not displayed */

	GeNoiseP float32 `` /* 201-byte string literal not displayed */
	GeNoise  float32 `desc:"integrated noise excitatory conductance, added into Ge"`
	GiNoiseP float32 `` /* 201-byte string literal not displayed */
	GiNoise  float32 `desc:"integrated noise inhibitory conductance, added into Gi"`

	GeExt    float32 `desc:"extra excitatory conductance added to Ge -- from Ext input, GeCtxt etc"`
	GeRaw    float32 `desc:"raw excitatory conductance (net input) received from senders = current raw spiking drive"`
	GeSyn    float32 `` /* 214-byte string literal not displayed */
	GeBase   float32 `desc:"baseline level of Ge, added to GeRaw, for intrinsic excitability"`
	GiRaw    float32 `desc:"raw inhibitory conductance (net input) received from senders  = current raw spiking drive"`
	GiSyn    float32 `` /* 293-byte string literal not displayed */
	GiBase   float32 `desc:"baseline level of Gi, added to GiRaw, for intrinsic excitability"`
	GModRaw  float32 `desc:"modulatory conductance, received from GType = ModulatoryG projections"`
	GModSyn  float32 `desc:"modulatory conductance, received from GType = ModulatoryG projections"`
	GeSynMax float32 `desc:"maximum GeSyn value across the ThetaCycle"`
	GeSynPrv float32 `desc:"previous GeSynMax value from the previous ThetaCycle"`

	SSGi     float32 `desc:"SST+ somatostatin positive slow spiking inhibition"`
	SSGiDend float32 `desc:"amount of SST+ somatostatin positive slow spiking inhibition applied to dendritic Vm (VmDend)"`
	Gak      float32 `desc:"conductance of A-type K potassium channels"`

	GeM float32 `` /* 165-byte string literal not displayed */
	GiM float32 `` /* 168-byte string literal not displayed */

	MahpN    float32 `desc:"accumulating voltage-gated gating value for the medium time scale AHP"`
	SahpCa   float32 `desc:"slowly accumulating calcium value that drives the slow AHP"`
	SahpN    float32 `desc:"sAHP gating value"`
	GknaMed  float32 `` /* 131-byte string literal not displayed */
	GknaSlow float32 `` /* 129-byte string literal not displayed */

	GnmdaSyn float32 `desc:"integrated NMDA recv synaptic current -- adds GeRaw and decays with time constant"`
	Gnmda    float32 `` /* 137-byte string literal not displayed */
	GnmdaLrn float32 `` /* 159-byte string literal not displayed */
	NmdaCa   float32 `desc:"NMDA calcium computed from GnmdaLrn, drives learning via CaM"`
	SnmdaO   float32 `` /* 314-byte string literal not displayed */
	SnmdaI   float32 `` /* 255-byte string literal not displayed */

	GgabaB float32 `` /* 127-byte string literal not displayed */
	GABAB  float32 `desc:"GABA-B / GIRK activation -- time-integrated value with rise and decay time constants"`
	GABABx float32 `desc:"GABA-B / GIRK internal drive variable -- gets the raw activation and decays"`

	Gvgcc     float32 `desc:"conductance (via Ca) for VGCC voltage gated calcium channels"`
	VgccM     float32 `desc:"activation gate of VGCC channels"`
	VgccH     float32 `desc:"inactivation gate of VGCC channels"`
	VgccCa    float32 `desc:"instantaneous VGCC calcium flux -- can be driven by spiking or directly from Gvgcc"`
	VgccCaInt float32 `desc:"time-integrated VGCC calcium flux -- this is actually what drives learning"`

	SKCai float32 `` /* 158-byte string literal not displayed */
	SKCaM float32 `desc:"Calcium-gated potassium channel gating factor, driven by SKCai via a Hill equation as in chans.SKPCaParams."`
	Gsk   float32 `desc:"Calcium-gated potassium channel conductance as a function of Gbar * SKCaM."`

	Burst     float32 `desc:"5IB bursting activation value, computed by thresholding regular CaSpkP value in Super superficial layers"`
	BurstPrv  float32 `desc:"previous Burst bursting activation from prior time step -- used for context-based learning"`
	CtxtGe    float32 `desc:"context (temporally delayed) excitatory conductance, driven by deep bursting at end of the plus phase, for CT layers."`
	CtxtGeRaw float32 `` /* 138-byte string literal not displayed */
	// contains filtered or unexported fields
}

axon.Neuron holds all of the neuron (unit) level variables. This is the most basic version, without any optional features. All variables accessible via Unit interface must be float32 and start at the top, in contiguous order

func (*Neuron) ClearFlag

func (nrn *Neuron) ClearFlag(flag NeuronFlags)

func (*Neuron) HasFlag

func (nrn *Neuron) HasFlag(flag NeuronFlags) bool

func (*Neuron) IsOff

func (nrn *Neuron) IsOff() bool

IsOff returns true if the neuron has been turned off (lesioned)

func (*Neuron) SetFlag

func (nrn *Neuron) SetFlag(flag NeuronFlags)

func (*Neuron) VarByIndex

func (nrn *Neuron) VarByIndex(idx int) float32

VarByIndex returns variable using index (0 = first variable in NeuronVars list)

func (*Neuron) VarByName

func (nrn *Neuron) VarByName(varNm string) (float32, error)

VarByName returns variable by name, or error

func (*Neuron) VarNames

func (nrn *Neuron) VarNames() []string

type NeuronFlags added in v1.6.4

type NeuronFlags int32

NeuronFlags are bit-flags encoding relevant binary state for neurons

const (
	// NeuronOff flag indicates that this neuron has been turned off (i.e., lesioned)
	NeuronOff NeuronFlags = 1

	// NeuronHasExt means the neuron has external input in its Ext field
	NeuronHasExt NeuronFlags = 1 << 2

	// NeuronHasTarg means the neuron has external target input in its Target field
	NeuronHasTarg NeuronFlags = 1 << 3

	// NeuronHasCmpr means the neuron has external comparison input in its Target field -- used for computing
	// comparison statistics but does not drive neural activity ever
	NeuronHasCmpr NeuronFlags = 1 << 4
)

The neuron flags

func (NeuronFlags) String added in v1.6.4

func (i NeuronFlags) String() string

type Pool

type Pool struct {
	StIdx, EdIdx   uint32      `inactive:"+" desc:"starting and ending (exclusive) layer-wise indexes for the list of neurons in this pool"`
	StIdxG, EdIdxG uint32      `view:"-" desc:"starting and ending (exclusive) global network-wide indexes for the list of neurons in this pool"`
	LayIdx         uint32      `view:"-" desc:"layer index in global layer list"`
	PoolIdx        uint32      `view:"-" desc:"pool index in global pool list: [Layer][Pool]"`
	LayPoolIdx     uint32      `view:"-" desc:"pool index for layer-wide pool, only if this is not a LayPool"`
	IsLayPool      slbool.Bool `inactive:"+" desc:"is this a layer-wide pool?  if not, it represents a sub-pool of units within a 4D layer"`
	Gated          slbool.Bool `inactive:"+" desc:"for special types where relevant (e.g., MatrixLayer, VThalLayer), indicates if the pool was gated"`

	Inhib  fsfffb.Inhib    `inactive:"+" desc:"fast-slow FFFB inhibition values"`
	AvgMax PoolAvgMax      `desc:"average and max values for relevant variables in this pool, at different time scales"`
	AvgDif minmax.AvgMax32 `inactive:"+" view:"inline" desc:"absolute value of AvgDif differences from actual neuron ActPct relative to TrgAvg"`
	// contains filtered or unexported fields
}

Pool contains computed values for FS-FFFB inhibition, and various other state values for layers and pools (unit groups) that can be subject to inhibition

func (*Pool) Init

func (pl *Pool) Init()

Init is called during InitActs

func (*Pool) NNeurons added in v1.5.12

func (pl *Pool) NNeurons() int

NNeurons returns the number of neurons in the pool: EdIdx - StIdx

type PoolAvgMax added in v1.7.0

type PoolAvgMax struct {
	CaSpkP AvgMaxPhases `` /* 252-byte string literal not displayed */
	CaSpkD AvgMaxPhases `inactive:"+" view:"inline" desc:"avg and maximum CaSpkD longer-term depression / DAPK1 signal in layer"`
	SpkMax AvgMaxPhases `` /* 136-byte string literal not displayed */
	Act    AvgMaxPhases `inactive:"+" view:"inline" desc:"avg and maximum Act firing rate value"`
	Ge     AvgMaxPhases `inactive:"+" view:"inline" desc:"avg and maximum Ge excitatory conductance value"`
	Gi     AvgMaxPhases `inactive:"+" view:"inline" desc:"avg and maximum Gi inhibitory conductance value"`
}

PoolAvgMax contains the average and maximum values over a Pool of neurons for different variables of interest, at Cycle, Minus and Plus phase timescales. All of the cycle level values are updated at the *start* of the cycle based on values from the prior cycle -- thus are 1 cycle behind in general.

func (*PoolAvgMax) CalcAvg added in v1.7.0

func (am *PoolAvgMax) CalcAvg()

CalcAvg does CalcAvg on Cycle level

func (*PoolAvgMax) CycleToMinus added in v1.7.0

func (am *PoolAvgMax) CycleToMinus()

CycleToMinus grabs current Cycle values into the Minus phase values

func (*PoolAvgMax) CycleToPlus added in v1.7.0

func (am *PoolAvgMax) CycleToPlus()

CycleToPlus grabs current Cycle values into the Plus phase values

func (*PoolAvgMax) Init added in v1.7.0

func (am *PoolAvgMax) Init()

Init does Init on Cycle level -- for update start

func (*PoolAvgMax) UpdateVals added in v1.7.0

func (am *PoolAvgMax) UpdateVals(nrn *Neuron, ni int32)

UpdateVals for neuron values
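
Taken together, these methods imply the per-cycle update pattern: Init to reset the Cycle-level accumulators, UpdateVals per neuron, then CalcAvg once all neurons have been accumulated. A minimal sketch of that pattern (the network does this internally during Cycle; helper name is illustrative):

// updatePoolAvgMax recomputes the Cycle-level AvgMax values for a pool
// from the current neuron state, following the documented call order.
func updatePoolAvgMax(pl *axon.Pool, neurons []axon.Neuron) {
	pl.AvgMax.Init() // reset Cycle-level accumulators
	for ni := pl.StIdx; ni < pl.EdIdx; ni++ {
		pl.AvgMax.UpdateVals(&neurons[ni], int32(ni))
	}
	pl.AvgMax.CalcAvg() // finalize averages from the accumulated sums
}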

type Prjn

type Prjn struct {
	PrjnBase
	Params *PrjnParams `desc:"all prjn-level parameters -- these must remain constant once configured"`
}

axon.Prjn is a basic Axon projection with synaptic learning parameters

func (*Prjn) AllParams

func (pj *Prjn) AllParams() string

AllParams returns a listing of all parameters in this projection

func (*Prjn) AsAxon

func (pj *Prjn) AsAxon() *Prjn

AsAxon returns this prjn as an axon.Prjn -- all derived prjns must redefine this to return the base Prjn type, so that the AxonPrjn interface does not need to include accessors to all the basic stuff.

func (*Prjn) Class added in v1.7.0

func (pj *Prjn) Class() string

func (*Prjn) DWt

func (pj *Prjn) DWt(ctx *Context)

DWt computes the weight change (learning), based on synaptically-integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets.

func (*Prjn) DWtSubMean added in v1.2.23

func (pj *Prjn) DWtSubMean(ctx *Context)

DWtSubMean subtracts the mean from any projections that have SubMean > 0. This is called on *receiving* projections, prior to WtFmDWt.

func (*Prjn) Defaults

func (pj *Prjn) Defaults()

func (*Prjn) InitGBuffs added in v1.5.10

func (pj *Prjn) InitGBuffs()

InitGBuffs initializes the per-projection synaptic conductance buffers. This is not typically needed (called during InitWts, InitActs) but can be called when needed. Must be called to completely initialize prior activity, e.g., full Glong clearing.

func (*Prjn) InitWtSym

func (pj *Prjn) InitWtSym(rpjp AxonPrjn)

InitWtSym initializes weight symmetry. Is given the reciprocal projection where the Send and Recv layers are reversed (see LayerBase RecipToRecvPrjn)

func (*Prjn) InitWts

func (pj *Prjn) InitWts()

InitWts initializes weight values according to SWt params, enforcing current constraints.

func (*Prjn) InitWtsSyn

func (pj *Prjn) InitWtsSyn(sy *Synapse, mean, spct float32)

InitWtsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.

func (*Prjn) LRateMod added in v1.6.13

func (pj *Prjn) LRateMod(mod float32)

LRateMod sets the LRate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LRateSched). Updates the effective learning rate factor accordingly.

func (*Prjn) LRateSched added in v1.6.13

func (pj *Prjn) LRateSched(sched float32)

LRateSched sets the schedule-based learning rate multiplier. See also LRateMod. Updates the effective learning rate factor accordingly.
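
LRateMod and LRateSched are separate multipliers into the effective learning rate, so a slow schedule can be combined with faster, trial-level modulation. A minimal sketch of applying a schedule value across a set of built projections (helper name is illustrative):

// applyLRateSched sets the schedule-based learning rate multiplier on the
// given projections, e.g., from training-loop code at epoch boundaries.
func applyLRateSched(prjns []*axon.Prjn, sched float32) {
	for _, pj := range prjns {
		pj.LRateSched(sched) // updates the effective learning rate factor
	}
}

For example, applyLRateSched(prjns, 0.5) halves the effective learning rate relative to its base value.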

func (*Prjn) Object added in v1.7.0

func (pj *Prjn) Object() interface{}

Object returns the object with parameters to be set by emer.Params

func (*Prjn) PrjnType added in v1.7.0

func (pj *Prjn) PrjnType() PrjnTypes

PrjnType returns axon specific cast of pj.Typ prjn type

func (*Prjn) ReadWtsJSON

func (pj *Prjn) ReadWtsJSON(r io.Reader) error

ReadWtsJSON reads the weights from this projection from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one prjn only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.

func (*Prjn) RecvSpikes added in v1.7.2

func (pj *Prjn) RecvSpikes(ctx *Context, recvIdx int)

RecvSpikes receives spikes from sending neurons into the GBuf buffer for the receiving neuron at index recvIdx. The buffer on the receiver side is a ring buffer, which is used for modelling the time delay between sending and receiving spikes.

func (*Prjn) RecvSynCa added in v1.3.18

func (pj *Prjn) RecvSynCa(ctx *Context)

RecvSynCa updates synaptic calcium based on spiking, for SynSpkTheta mode. Optimized version only updates at point of spiking. This pass goes through in recv order, filtering on recv spike. Threading: Can be called concurrently for all prjns, since it updates synapses (which are local to a single prjn).

func (*Prjn) SWtFmWt added in v1.2.45

func (pj *Prjn) SWtFmWt()

SWtFmWt updates structural, slowly-adapting SWt value based on accumulated DSWt values, which are zero-summed with additional soft bounding relative to SWt limits.

func (*Prjn) SWtRescale added in v1.2.45

func (pj *Prjn) SWtRescale()

SWtRescale rescales the SWt values to preserve the target overall mean value, using subtractive normalization.

func (*Prjn) SendSpike

func (pj *Prjn) SendSpike(ctx *Context, sendIdx int, nrn *Neuron)

SendSpike sends a spike from the sending neuron at index sendIdx into the GBuf buffer on the receiver side. The buffer on the receiver side is a ring buffer, which is used for modelling the time delay between sending and receiving spikes.

func (*Prjn) SendSynCa added in v1.3.22

func (pj *Prjn) SendSynCa(ctx *Context)

SendSynCa updates synaptic calcium based on spiking, for SynSpkTheta mode. Optimized version only updates at point of spiking. This pass goes through in sending order, filtering on sending spike. Threading: Can be called concurrently for all prjns, since it updates synapses (which are local to a single prjn).

func (*Prjn) SetSWtsFunc added in v1.2.75

func (pj *Prjn) SetSWtsFunc(swtFun func(si, ri int, send, recv *etensor.Shape) float32)

SetSWtsFunc initializes structural SWt values using given function based on receiving and sending unit indexes.

func (*Prjn) SetSWtsRPool added in v1.2.75

func (pj *Prjn) SetSWtsRPool(swts etensor.Tensor)

SetSWtsRPool initializes SWt structural weight values using given tensor of values which has unique values for each recv neuron within a given pool.

func (*Prjn) SetSynVal

func (pj *Prjn) SetSynVal(varNm string, sidx, ridx int, val float32) error

SetSynVal sets value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes) returns error for access errors.
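
SetSynVal pairs with the SynVal accessor (documented under PrjnBase below) for reading and writing individual synapse variables by name. A minimal sketch, assuming a built projection and the standard library math and fmt packages (helper name is illustrative):

// scaleWt reads the "Wt" value of the synapse from sending unit si to
// receiving unit ri and writes it back scaled by f.
func scaleWt(pj *axon.Prjn, si, ri int, f float32) error {
	wt := pj.SynVal("Wt", si, ri) // NaN if there is no such synapse
	if math.IsNaN(float64(wt)) {
		return fmt.Errorf("no synapse between send %d and recv %d", si, ri)
	}
	return pj.SetSynVal("Wt", si, ri, wt*f)
}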

func (*Prjn) SetWts

func (pj *Prjn) SetWts(pw *weights.Prjn) error

SetWts sets the weights for this projection from weights.Prjn decoded values

func (*Prjn) SetWtsFunc

func (pj *Prjn) SetWtsFunc(wtFun func(si, ri int, send, recv *etensor.Shape) float32)

SetWtsFunc initializes synaptic Wt value using given function based on receiving and sending unit indexes. Strongly suggest calling SWtRescale after.
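
A typical use of SetWtsFunc (and the analogous SetSWtsFunc) is topographic weight initialization from unit indexes or shapes. A minimal sketch, assuming the etensor package from this version's dependencies is imported (path assumed to be github.com/emer/etable/etensor); the distance function is purely illustrative:

// setDistanceWts initializes weights as a decreasing function of the
// difference between flat sending and receiving unit indexes -- a crude
// topographic pattern just to illustrate the callback signature.
func setDistanceWts(pj *axon.Prjn) {
	pj.SetWtsFunc(func(si, ri int, send, recv *etensor.Shape) float32 {
		d := float32(si - ri)
		if d < 0 {
			d = -d
		}
		return 0.8 / (1 + 0.1*d) // stronger weights for nearby indexes
	})
	pj.SWtRescale() // recommended after setting weights directly
}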

func (*Prjn) SlowAdapt added in v1.2.37

func (pj *Prjn) SlowAdapt(ctx *Context)

SlowAdapt does the slow adaptation: SWt learning and SynScale

func (*Prjn) SynFail added in v1.2.92

func (pj *Prjn) SynFail(ctx *Context)

SynFail updates synaptic weight failure only -- normally done as part of DWt and WtFmDWt, but this call can be used during testing to update failing synapses.

func (*Prjn) SynScale added in v1.2.23

func (pj *Prjn) SynScale()

SynScale performs synaptic scaling based on running average activation vs. targets. Layer-level AvgDifFmTrgAvg function must be called first.

func (*Prjn) Update added in v1.7.0

func (pj *Prjn) Update()

Update is interface that does local update of struct vals

func (*Prjn) UpdateParams

func (pj *Prjn) UpdateParams()

UpdateParams updates all params given any changes that might have been made to individual values

func (*Prjn) WriteWtsJSON

func (pj *Prjn) WriteWtsJSON(w io.Writer, depth int)

WriteWtsJSON writes the weights from this projection from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.

func (*Prjn) WtFmDWt

func (pj *Prjn) WtFmDWt(ctx *Context)

WtFmDWt updates the synaptic weight values from delta-weight changes. Called on the *receiving* projections.

type PrjnBase added in v1.4.14

type PrjnBase struct {
	AxonPrj AxonPrjn      `` /* 267-byte string literal not displayed */
	Off     bool          `desc:"inactivate this projection -- allows for easy experimentation"`
	Cls     string        `desc:"Class is for applying parameter styles, can be space-separated multiple tags"`
	Notes   string        `desc:"can record notes about this projection here"`
	Send    emer.Layer    `desc:"sending layer for this projection"`
	Recv    emer.Layer    `` /* 167-byte string literal not displayed */
	Pat     prjn.Pattern  `desc:"pattern of connectivity"`
	Typ     emer.PrjnType `` /* 154-byte string literal not displayed */

	RecvConNAvgMax minmax.AvgMax32 `inactive:"+" view:"inline" desc:"average and maximum number of recv connections in the receiving layer"`
	SendConNAvgMax minmax.AvgMax32 `inactive:"+" view:"inline" desc:"average and maximum number of sending connections in the sending layer"`

	RecvCon    []StartN  `` /* 261-byte string literal not displayed */
	Syns       []Synapse `` /* 239-byte string literal not displayed */
	RecvConIdx []uint32  `` /* 304-byte string literal not displayed */

	SendCon    []StartN `` /* 236-byte string literal not displayed */
	SendSynIdx []uint32 `` /* 236-byte string literal not displayed */
	SendConIdx []uint32 `` /* 394-byte string literal not displayed */

	// spike aggregation values:
	GBuf  []float32 `` /* 232-byte string literal not displayed */
	GSyns []float32 `` /* 245-byte string literal not displayed */
}

PrjnBase contains the basic structural information for specifying a projection of synaptic connections between two layers, and maintaining all the synaptic connection-level data. The exact same struct object is added to the Recv and Send layers, and it manages everything about the connectivity, and methods on the Prjn handle all the relevant computation.

func (*PrjnBase) ApplyParams added in v1.4.14

func (pj *PrjnBase) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)

ApplyParams applies given parameter style Sheet to this projection. Calls UpdateParams if anything was set, to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and error if there were any errors.

func (*PrjnBase) Build added in v1.7.0

func (pj *PrjnBase) Build() error

Build constructs the full connectivity among the layers. Calls Validate and returns error if invalid. Pat.Connect is called to get the pattern of the connection. Then the connection indexes are configured according to that pattern. Does NOT allocate synapses -- these are set by Network from global slice.

func (*PrjnBase) Class added in v1.4.14

func (pj *PrjnBase) Class() string

func (*PrjnBase) Connect added in v1.4.14

func (pj *PrjnBase) Connect(slay, rlay emer.Layer, pat prjn.Pattern, typ emer.PrjnType)

Connect sets the connectivity between two layers and the pattern to use in interconnecting them

func (*PrjnBase) Init added in v1.4.14

func (pj *PrjnBase) Init(prjn emer.Prjn)

Init MUST be called to initialize the prjn's pointer to itself as an emer.Prjn which enables the proper interface methods to be called.

func (*PrjnBase) IsOff added in v1.4.14

func (pj *PrjnBase) IsOff() bool

func (*PrjnBase) Label added in v1.4.14

func (pj *PrjnBase) Label() string

func (*PrjnBase) Name added in v1.4.14

func (pj *PrjnBase) Name() string

func (*PrjnBase) NonDefaultParams added in v1.4.14

func (pj *PrjnBase) NonDefaultParams() string

NonDefaultParams returns a listing of all parameters in this projection that are not at their default values -- useful for setting param styles etc.

func (*PrjnBase) Pattern added in v1.4.14

func (pj *PrjnBase) Pattern() prjn.Pattern

func (*PrjnBase) PrjnTypeName added in v1.4.14

func (pj *PrjnBase) PrjnTypeName() string

func (*PrjnBase) RecvLay added in v1.4.14

func (pj *PrjnBase) RecvLay() emer.Layer

func (*PrjnBase) RecvSyns added in v1.7.2

func (pj *PrjnBase) RecvSyns(ri int) []Synapse

RecvSyns returns the receiving synapses for given receiving unit index within the receiving layer, to be iterated over for processing.

func (*PrjnBase) SendLay added in v1.4.14

func (pj *PrjnBase) SendLay() emer.Layer

func (*PrjnBase) SendSynIdxs added in v1.7.2

func (pj *PrjnBase) SendSynIdxs(si int) []uint32

SendSynIdxs returns the sending synapse indexes for given sending unit index within the sending layer, to be iterated over for processing.
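
RecvSyns and SendSynIdxs support the two standard iteration orders over a projection's synapses: receiver-ordered access yields the synapses directly, while sender-ordered access yields indexes into the Syns slice. A minimal sketch (helper names are illustrative):

// recvWtMean returns the mean weight of synapses received by unit ri.
func recvWtMean(pj *axon.Prjn, ri int) float32 {
	syns := pj.RecvSyns(ri)
	if len(syns) == 0 {
		return 0
	}
	sum := float32(0)
	for i := range syns {
		sum += syns[i].Wt
	}
	return sum / float32(len(syns))
}

// sendWtMean returns the mean weight of synapses sent by unit si, using
// the sender-ordered indexes into the projection's Syns slice.
func sendWtMean(pj *axon.Prjn, si int) float32 {
	idxs := pj.SendSynIdxs(si)
	if len(idxs) == 0 {
		return 0
	}
	sum := float32(0)
	for _, syi := range idxs {
		sum += pj.Syns[syi].Wt
	}
	return sum / float32(len(idxs))
}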

func (*PrjnBase) SetClass added in v1.7.0

func (pj *PrjnBase) SetClass(cls string) emer.Prjn

func (*PrjnBase) SetConStartN added in v1.7.2

func (pj *PrjnBase) SetConStartN(con *[]StartN, avgmax *minmax.AvgMax32, tn *etensor.Int32) uint32

SetConStartN sets the *Con StartN values given n tensor from Pat. Returns total number of connections for this direction.

func (*PrjnBase) SetOff added in v1.4.14

func (pj *PrjnBase) SetOff(off bool)

SetOff individual projection. Careful: Layer.SetOff(true) will reactivate all prjns of that layer, so prjn-level lesioning should always be done last.

func (*PrjnBase) SetPattern added in v1.7.0

func (pj *PrjnBase) SetPattern(pat prjn.Pattern) emer.Prjn

func (*PrjnBase) SetType added in v1.7.0

func (pj *PrjnBase) SetType(typ emer.PrjnType) emer.Prjn

func (*PrjnBase) String added in v1.4.14

func (pj *PrjnBase) String() string

String satisfies fmt.Stringer for prjn

func (*PrjnBase) Syn1DNum added in v1.7.0

func (pj *PrjnBase) Syn1DNum() int

Syn1DNum returns the number of synapses for this prjn as a 1D array. This is the max idx for SynVal1D and the number of vals set by SynVals.

func (*PrjnBase) SynIdx added in v1.7.0

func (pj *PrjnBase) SynIdx(sidx, ridx int) int

SynIdx returns the index of the synapse between given send, recv unit indexes (1D, flat indexes). Returns -1 if synapse not found between these two neurons. Requires searching within connections for sending unit.

func (*PrjnBase) SynVal added in v1.7.0

func (pj *PrjnBase) SynVal(varNm string, sidx, ridx int) float32

SynVal returns value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes). Returns mat32.NaN() for access errors (see SynValTry for error message)

func (*PrjnBase) SynVal1D added in v1.7.0

func (pj *PrjnBase) SynVal1D(varIdx int, synIdx int) float32

SynVal1D returns value of given variable index (from SynVarIdx) on given SynIdx. Returns NaN on invalid index. This is the core synapse var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*PrjnBase) SynVals added in v1.7.0

func (pj *PrjnBase) SynVals(vals *[]float32, varNm string) error

SynVals sets values of given variable name for each synapse, using the natural ordering of the synapses (receiver based for Axon), into given float32 slice (only resized if not big enough). Returns error on invalid var name.
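
SynVals is the bulk counterpart of SynVal, filling a slice with one value per synapse in the projection's natural receiver-based ordering (length = Syn1DNum()). A minimal sketch (helper name is illustrative):

// allWts returns a copy of all "Wt" values for the projection, in the
// receiver-based ordering used by SynVals.
func allWts(pj *axon.Prjn) ([]float32, error) {
	var wts []float32
	if err := pj.SynVals(&wts, "Wt"); err != nil {
		return nil, err
	}
	return wts, nil
}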

func (*PrjnBase) SynVarIdx added in v1.7.0

func (pj *PrjnBase) SynVarIdx(varNm string) (int, error)

SynVarIdx returns the index of given variable within the synapse, according to *this prjn's* SynVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*PrjnBase) SynVarNames added in v1.7.0

func (pj *PrjnBase) SynVarNames() []string

func (*PrjnBase) SynVarNum added in v1.7.0

func (pj *PrjnBase) SynVarNum() int

SynVarNum returns the number of synapse-level variables for this prjn. This is needed for extending indexes in derived types.

func (*PrjnBase) SynVarProps added in v1.7.0

func (pj *PrjnBase) SynVarProps() map[string]string

SynVarProps returns properties for variables

func (*PrjnBase) Type added in v1.4.14

func (pj *PrjnBase) Type() emer.PrjnType

func (*PrjnBase) TypeName added in v1.4.14

func (pj *PrjnBase) TypeName() string

func (*PrjnBase) Validate added in v1.4.14

func (pj *PrjnBase) Validate(logmsg bool) error

Validate tests for non-nil settings for the projection -- returns error message or nil if no problems (and logs them if logmsg = true)

type PrjnGTypes added in v1.7.0

type PrjnGTypes int32

PrjnGTypes represents the conductance (G) effects of a given projection, including excitatory, inhibitory, and modulatory.

const (
	// Excitatory projections drive Ge conductance on receiving neurons,
	// which send to GeRaw and GeSyn neuron variables.
	ExcitatoryG PrjnGTypes = iota

	// Inhibitory projections drive Gi inhibitory conductance,
	// which send to GiRaw and GiSyn neuron variables.
	InhibitoryG

	// Modulatory projections have a multiplicative effect on other inputs,
	// which send to GModRaw and GModSyn neuron variables.
	ModulatoryG

	// Context projections are for inputs to CT layers, which update
	// only at the end of the plus phase, and send to CtxtGe.
	ContextG

	PrjnGTypesN
)

The projection conductance types

func (*PrjnGTypes) FromString added in v1.7.0

func (i *PrjnGTypes) FromString(s string) error

func (PrjnGTypes) MarshalJSON added in v1.7.0

func (ev PrjnGTypes) MarshalJSON() ([]byte, error)

func (PrjnGTypes) String added in v1.7.0

func (i PrjnGTypes) String() string

func (*PrjnGTypes) UnmarshalJSON added in v1.7.0

func (ev *PrjnGTypes) UnmarshalJSON(b []byte) error

type PrjnIdxs added in v1.7.0

type PrjnIdxs struct {
	PrjnIdx   uint32 // index of the projection in global prjn list: [Layer][RecvPrjns]
	RecvLay   uint32 // index of the receiving layer in global list of layers
	RecvLaySt uint32 // starting index of neurons in recv layer -- so we don't need layer to get to neurons
	RecvLayN  uint32 // number of neurons in recv layer
	SendLay   uint32 // index of the sending layer in global list of layers
	SendLaySt uint32 // starting index of neurons in sending layer -- so we don't need layer to get to neurons
	SendLayN  uint32 // number of neurons in send layer
	GBufSt    uint32 // start index into global PrjnGBuf global array: [Layer][RecvPrjns][RecvNeurons][MaxDelay+1]
	GSynSt    uint32 // start index into global PrjnGSyn global array: [Layer][RecvPrjns][RecvNeurons]
	RecvConSt uint32 // start index into global PrjnRecvCon array: [Layer][RecvPrjns][RecvNeurons]
	SynapseSt uint32 // start index into global Synapse array: [Layer][RecvPrjns][Synapses]
	// contains filtered or unexported fields
}

PrjnIdxs contains prjn-level index information into global memory arrays

func (*PrjnIdxs) RecvNIdxToLayIdx added in v1.7.2

func (pi *PrjnIdxs) RecvNIdxToLayIdx(ni uint32) uint32

RecvNIdxToLayIdx converts a neuron's index in the network-level global list of all neurons to the receiving layer-specific index, e.g., for accessing GBuf and GSyn values. It just subtracts RecvLaySt.

func (*PrjnIdxs) SendNIdxToLayIdx added in v1.7.2

func (pi *PrjnIdxs) SendNIdxToLayIdx(ni uint32) uint32

SendNIdxToLayIdx converts a neuron's index in the network-level global list of all neurons to the sending layer-specific index. It just subtracts SendLaySt.

type PrjnParams added in v1.7.0

type PrjnParams struct {
	PrjnType PrjnTypes `` /* 138-byte string literal not displayed */

	Com       SynComParams    `view:"inline" desc:"synaptic communication parameters: delay, probability of failure"`
	PrjnScale PrjnScaleParams `` /* 215-byte string literal not displayed */
	SWt       SWtParams       `` /* 147-byte string literal not displayed */
	Learn     LearnSynParams  `view:"add-fields" desc:"synaptic-level learning parameters for learning in the fast LWt values."`
	GScale    GScaleVals      `view:"inline" desc:"conductance scaling values"`

	RLPred RLPredPrjnParams `` /* 418-byte string literal not displayed */
	Matrix MatrixPrjnParams `` /* 374-byte string literal not displayed */

	Idxs PrjnIdxs `view:"-" desc:"recv and send neuron-level projection index array access info"`
	// contains filtered or unexported fields
}

PrjnParams contains all of the prjn parameters. These values must remain constant over the course of computation. On the GPU, they are loaded into a uniform.

func (*PrjnParams) AllParams added in v1.7.0

func (pj *PrjnParams) AllParams() string

func (*PrjnParams) BLAPrjnDefaults added in v1.7.0

func (pj *PrjnParams) BLAPrjnDefaults()

func (*PrjnParams) CycleSynCaSyn added in v1.7.2

func (pj *PrjnParams) CycleSynCaSyn(ctx *Context, sy *Synapse, sn, rn *Neuron)

CycleSynCaSyn updates synaptic calcium based on spiking, for SynSpkTheta mode. This version updates every cycle and is called per synapse, as used on the GPU.

func (*PrjnParams) DWtSyn added in v1.7.0

func (pj *PrjnParams) DWtSyn(ctx *Context, sy *Synapse, sn, rn *Neuron, layPool, subPool *Pool, isTarget bool)

DWtSyn is the overall entry point for weight change (learning) at given synapse. It selects appropriate function based on projection type. rpl is the receiving layer SubPool

func (*PrjnParams) DWtSynCortex added in v1.7.0

func (pj *PrjnParams) DWtSynCortex(ctx *Context, sy *Synapse, sn, rn *Neuron, layPool, subPool *Pool, isTarget bool)

DWtSynCortex computes the weight change (learning) at given synapse for cortex. Uses synaptically-integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets.

func (*PrjnParams) DWtSynMatrix added in v1.7.0

func (pj *PrjnParams) DWtSynMatrix(ctx *Context, sy *Synapse, sn, rn *Neuron, layPool, subPool *Pool)

DWtSynMatrix computes the weight change (learning) at given synapse, for the MatrixPrjn type.

func (*PrjnParams) DWtSynRWPred added in v1.7.0

func (pj *PrjnParams) DWtSynRWPred(ctx *Context, sy *Synapse, sn, rn *Neuron, layPool, subPool *Pool)

DWtSynRWPred computes the weight change (learning) at given synapse, for the RWPredPrjn type

func (*PrjnParams) DWtSynTDPred added in v1.7.0

func (pj *PrjnParams) DWtSynTDPred(ctx *Context, sy *Synapse, sn, rn *Neuron, layPool, subPool *Pool)

DWtSynTDPred computes the weight change (learning) at given synapse, for the TDRewPredPrjn type

func (*PrjnParams) Defaults added in v1.7.0

func (pj *PrjnParams) Defaults()

func (*PrjnParams) GatherSpikes added in v1.7.2

func (pj *PrjnParams) GatherSpikes(ctx *Context, ly *LayerParams, ni uint32, nrn *Neuron, gRaw float32, gSyn *float32)

GatherSpikes integrates G*Raw and G*Syn values for given neuron from the given Prjn-level GRaw value, first integrating projection-level GSyn value.

func (*PrjnParams) IsExcitatory added in v1.7.0

func (pj *PrjnParams) IsExcitatory() bool

func (*PrjnParams) IsInhib added in v1.7.0

func (pj *PrjnParams) IsInhib() bool

func (*PrjnParams) RLPredPrjnDefaults added in v1.7.0

func (pj *PrjnParams) RLPredPrjnDefaults()

func (*PrjnParams) RecvSynCaSyn added in v1.7.1

func (pj *PrjnParams) RecvSynCaSyn(ctx *Context, sy *Synapse, sn *Neuron, rnCaSyn, updtThr float32)

RecvSynCaSyn updates synaptic calcium based on spiking, for SynSpkTheta mode. Optimized version only updates at point of spiking. This pass goes through in recv order, filtering on recv spike. Threading: Can be called concurrently for all prjns, since it updates synapses (which are local to a single prjn).

func (*PrjnParams) SendSynCaSyn added in v1.7.1

func (pj *PrjnParams) SendSynCaSyn(ctx *Context, sy *Synapse, rn *Neuron, snCaSyn, updtThr float32)

SendSynCaSyn updates synaptic calcium based on spiking, for SynSpkTheta mode. Optimized version only updates at point of spiking. This pass goes through in sending order, filtering on sending spike. Threading: Can be called concurrently for all prjns, since it updates synapses (which are local to a single prjn).

func (*PrjnParams) SynRecvLayIdx added in v1.7.2

func (pj *PrjnParams) SynRecvLayIdx(sy *Synapse) uint32

SynRecvLayIdx converts the Synapse RecvIdx of recv neuron's index in network level global list of all neurons to receiving layer-specific index.

func (*PrjnParams) SynSendLayIdx added in v1.7.2

func (pj *PrjnParams) SynSendLayIdx(sy *Synapse) uint32

SynSendLayIdx converts the Synapse SendIdx of sending neuron's index in network level global list of all neurons to sending layer-specific index.

func (*PrjnParams) Update added in v1.7.0

func (pj *PrjnParams) Update()

func (*PrjnParams) WtFmDWtSyn added in v1.7.0

func (pj *PrjnParams) WtFmDWtSyn(ctx *Context, sy *Synapse)

WtFmDWtSyn is the overall entry point for updating weights from weight changes.

func (*PrjnParams) WtFmDWtSynCortex added in v1.7.0

func (pj *PrjnParams) WtFmDWtSynCortex(ctx *Context, sy *Synapse)

WtFmDWtSynCortex updates weights from dwt changes

func (*PrjnParams) WtFmDWtSynNoLimits added in v1.7.0

func (pj *PrjnParams) WtFmDWtSynNoLimits(ctx *Context, sy *Synapse)

WtFmDWtSynNoLimits -- weight update without limits

type PrjnScaleParams added in v1.2.45

type PrjnScaleParams struct {
	Rel    float32 `` /* 255-byte string literal not displayed */
	Abs    float32 `` /* 334-byte string literal not displayed */
	AvgTau float32 `` /* 340-byte string literal not displayed */

	AvgDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"`
}

PrjnScaleParams are projection scaling parameters: modulates overall strength of projection, using both absolute and relative factors.

func (*PrjnScaleParams) Defaults added in v1.2.45

func (ws *PrjnScaleParams) Defaults()

func (*PrjnScaleParams) FullScale added in v1.2.45

func (ws *PrjnScaleParams) FullScale(savg, snu, ncon float32) float32

FullScale returns full scaling factor, which is product of Abs * Rel * SLayActScale

func (*PrjnScaleParams) SLayActScale added in v1.2.45

func (ws *PrjnScaleParams) SLayActScale(savg, snu, ncon float32) float32

SLayActScale computes the scaling factor based on sending layer activity level (savg), number of units in sending layer (snu), and number of recv connections (ncon). Uses a fixed sem_extra standard-error-of-the-mean (SEM) extra value of 2 to add to the average expected number of active connections to receive, for purposes of computing scaling factors with partial connectivity. For 25% layer activity, binomial SEM = sqrt(p(1-p)) = .43, so 3x = 1.3, so 2 is a reasonable default.
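
FullScale is just the product of the Abs and Rel factors with the activity-dependent SLayActScale term, so the contributions can be inspected separately. A minimal sketch with illustrative numbers (25% sending activity, 100 sending units, 20 receiving connections), assuming the standard library fmt package:

// scaleBreakdown prints the components of the projection scaling factor
// for an example set of sending-layer statistics.
func scaleBreakdown(ws *axon.PrjnScaleParams) {
	savg, snu, ncon := float32(0.25), float32(100), float32(20)
	slay := ws.SLayActScale(savg, snu, ncon)
	full := ws.FullScale(savg, snu, ncon) // == ws.Abs * ws.Rel * slay
	fmt.Println(ws.Abs, ws.Rel, slay, full)
}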

func (*PrjnScaleParams) Update added in v1.2.45

func (ws *PrjnScaleParams) Update()

type PrjnTypes added in v1.7.0

type PrjnTypes int32

PrjnTypes is an axon-specific prjn type enum, that encompasses all the different algorithm types supported. Class parameter styles automatically key off of these types. The first entries must be kept synchronized with the emer.PrjnType.

const (
	// Forward is a feedforward, bottom-up projection from sensory inputs to higher layers
	ForwardPrj PrjnTypes = iota

	// Back is a feedback, top-down projection from higher layers back to lower layers
	BackPrjn

	// Lateral is a lateral projection within the same layer / area
	LateralPrjn

	// Inhib is an inhibitory projection that drives inhibitory
	// synaptic conductances instead of the default excitatory ones.
	InhibPrjn

	// CTCtxt are projections from Superficial layers to CT layers that
	// send Burst activations drive updating of CtxtGe excitatory conductance,
	// at end of plus (51B Bursting) phase.  Biologically, this projection
	// comes from the PT layer 5IB neurons, but it is simpler to use the
	// Super neurons directly, and PT are optional for most network types.
	// These projections also use a special learning rule that
	// takes into account the temporal delays in the activation states.
	// Can also add self context from CT for deeper temporal context.
	CTCtxtPrjn

	// RWPrjn does dopamine-modulated learning for reward prediction:
	// Da * Send.CaSpkP (integrated current spiking activity).
	// Uses RLPredPrjn parameters.
	// Use in RWPredLayer typically to generate reward predictions.
	// If the Da sign is positive, the first recv unit learns fully;
	// for negative, second one learns fully.  Lower lrate applies for
	// opposite cases.  Weights are positive-only.
	RWPrjn

	// TDPredPrjn does dopamine-modulated learning for reward prediction:
	// DWt = Da * Send.SpkPrv (activity on *previous* timestep)
	// Uses RLPredPrjn parameters.
	// Use in TDPredLayer typically to generate reward predictions.
	// If the Da sign is positive, the first recv unit learns fully;
	// for negative, second one learns fully.  Lower lrate applies for
	// opposite cases.  Weights are positive-only.
	TDPredPrjn

	// BLAPrjn implements the PVLV BLA learning rule:
	// dW = Ach * X_t-1 * (Y_t - Y_t-1)
	// The recv delta is across trials, where the US should activate on trial
	// boundary, to enable sufficient time for gating through to OFC, so
	// BLA initially learns based on US present - US absent.
	// It can also learn based on CS onset if there is a prior CS that predicts that.
	BLAPrjn

	// MatrixPrjn supports trace-based learning, where an initial
	// trace of synaptic co-activity is formed, and then modulated
	// by subsequent phasic dopamine & ACh when an outcome occurs.
	// This bridges the temporal gap between gating activity
	// and subsequent outcomes, and is based biologically on synaptic tags.
	// Trace is reset at time of reward based on ACh level (from CINs in biology).
	MatrixPrjn

	PrjnTypesN
)

The projection types

func (*PrjnTypes) FromString added in v1.7.0

func (i *PrjnTypes) FromString(s string) error

func (PrjnTypes) MarshalJSON added in v1.7.0

func (ev PrjnTypes) MarshalJSON() ([]byte, error)

func (PrjnTypes) String added in v1.7.0

func (i PrjnTypes) String() string

func (*PrjnTypes) UnmarshalJSON added in v1.7.0

func (ev *PrjnTypes) UnmarshalJSON(b []byte) error

type PulvParams added in v1.7.0

type PulvParams struct {
	DriveScale   float32 `` /* 145-byte string literal not displayed */
	FullDriveAct float32 `` /* 352-byte string literal not displayed */
	DriveLayIdx  int32   `` /* 132-byte string literal not displayed */
	// contains filtered or unexported fields
}

PulvParams provides parameters for how the plus-phase (outcome) state of Pulvinar thalamic relay cell neurons is computed from the corresponding driver neuron Burst activation (or CaSpkP if not Super)

func (*PulvParams) Defaults added in v1.7.0

func (tp *PulvParams) Defaults()

func (*PulvParams) DriveGe added in v1.7.0

func (tp *PulvParams) DriveGe(act float32) float32

DriveGe returns effective excitatory conductance to use for given driver input Burst activation

func (*PulvParams) NonDrivePct added in v1.7.0

func (tp *PulvParams) NonDrivePct(drvMax float32) float32

NonDrivePct returns the multiplier proportion of the non-driver based Ge to keep around, based on FullDriveAct and the max activity in driver layer.

func (*PulvParams) Update added in v1.7.0

func (tp *PulvParams) Update()

type RLPredPrjnParams added in v1.7.0

type RLPredPrjnParams struct {
	OppSignLRate float32 `desc:"how much to learn on opposite DA sign coding neuron (0..1)"`
	DaTol        float32 `` /* 208-byte string literal not displayed */
	// contains filtered or unexported fields
}

RLPredPrjnParams does dopamine-modulated learning for reward prediction: Da * Send.Act. Used by RWPrjn and TDPredPrjn within corresponding RWPredLayer or TDPredLayer to generate reward predictions based on its incoming weights, using a linear activation function. Has no weight bounds or limits on sign etc.

func (*RLPredPrjnParams) Defaults added in v1.7.0

func (pj *RLPredPrjnParams) Defaults()

func (*RLPredPrjnParams) Update added in v1.7.0

func (pj *RLPredPrjnParams) Update()

type RLRateParams added in v1.6.13

type RLRateParams struct {
	On         slbool.Bool `def:"true" desc:"use learning rate modulation"`
	SigmoidMin float32     `` /* 238-byte string literal not displayed */
	Diff       slbool.Bool `viewif:"On" desc:"modulate learning rate as a function of plus - minus differences"`
	SpkThr     float32     `` /* 129-byte string literal not displayed */
	DiffThr    float32     `viewif:"On" def:"0.02" desc:"threshold on recv neuron error delta, i.e., |CaSpkP - CaSpkD| below which lrate is at Min value"`
	Min        float32     `viewif:"On" def:"0.001" desc:"for Diff component, minimum learning rate value when below DiffThr"`
	// contains filtered or unexported fields
}

RLRateParams are recv neuron learning rate modulation parameters. Has two factors: the derivative of the sigmoid based on CaSpkD activity levels, and based on the phase-wise differences in activity (Diff).

func (*RLRateParams) Defaults added in v1.6.13

func (rl *RLRateParams) Defaults()

func (*RLRateParams) RLRateDiff added in v1.6.13

func (rl *RLRateParams) RLRateDiff(scap, scad float32) float32

RLRateDiff returns the learning rate as a function of difference between CaSpkP and CaSpkD values

func (*RLRateParams) RLRateSigDeriv added in v1.6.13

func (rl *RLRateParams) RLRateSigDeriv(act float32, laymax float32) float32

RLRateSigDeriv returns the sigmoid derivative learning rate factor as a function of spiking activity, with mid-range values having full learning and extreme values a reduced learning rate: deriv = act * (1 - act) The activity should be CaSpkP and the layer maximum is used to normalize that to a 0-1 range.
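
The two RLRate factors are typically multiplied together to modulate a receiving neuron's learning rate: the sigmoid-derivative term based on CaSpkP (normalized by the layer maximum) and the phase-difference term based on CaSpkP vs. CaSpkD. A minimal sketch of that combination, as an illustration only (the layer code applies these internally):

// rlRateMod returns a combined learning-rate modulation factor for a recv
// neuron, from its CaSpkP / CaSpkD values and the layer maximum CaSpkP.
func rlRateMod(rl *axon.RLRateParams, caSpkP, caSpkD, layMaxCaSpkP float32) float32 {
	mod := rl.RLRateSigDeriv(caSpkP, layMaxCaSpkP) // mid-range activity learns most
	mod *= rl.RLRateDiff(caSpkP, caSpkD)           // small phase differences learn least
	return mod
}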

func (*RLRateParams) Update added in v1.6.13

func (rl *RLRateParams) Update()

type RSalAChParams added in v1.7.0

type RSalAChParams struct {
	RewThr     float32     `` /* 182-byte string literal not displayed */
	Rew        slbool.Bool `` /* 153-byte string literal not displayed */
	RewPred    slbool.Bool `desc:"use the global Context.NeuroMod.RewPred value"`
	SrcLay1Idx int32       `` /* 135-byte string literal not displayed */
	SrcLay2Idx int32       `` /* 135-byte string literal not displayed */
	SrcLay3Idx int32       `` /* 135-byte string literal not displayed */
	SrcLay4Idx int32       `` /* 135-byte string literal not displayed */
	SrcLay5Idx int32       `` /* 135-byte string literal not displayed */
}

RSalAChParams computes reward salience as an ACh global neuromodulatory signal, as a function of the MAX activation of its inputs.

func (*RSalAChParams) Defaults added in v1.7.0

func (rp *RSalAChParams) Defaults()

func (*RSalAChParams) Thr added in v1.7.0

func (rp *RSalAChParams) Thr(val float32) float32

Thr applies the RewThr threshold to the given value.

func (*RSalAChParams) Update added in v1.7.0

func (rp *RSalAChParams) Update()

type RWDaParams added in v1.7.0

type RWDaParams struct {
	TonicGe      float32 `desc:"tonic baseline Ge level for DA = 0 -- +/- are between 0 and 2*TonicGe -- just for spiking display of computed DA value"`
	RWPredLayIdx int32   `inactive:"+" desc:"idx of RWPredLayer to get reward prediction from -- set during Build from BuildConfig RWPredLayName"`
	// contains filtered or unexported fields
}

RWDaParams computes a dopamine (DA) signal using simple Rescorla-Wagner learning dynamic (i.e., PV learning in the PVLV framework).

func (*RWDaParams) Defaults added in v1.7.0

func (rp *RWDaParams) Defaults()

func (*RWDaParams) GeFmDA added in v1.7.0

func (rp *RWDaParams) GeFmDA(da float32) float32

GeFmDA returns excitatory conductance from DA dopamine value

func (*RWDaParams) Update added in v1.7.0

func (rp *RWDaParams) Update()

type RWPredParams added in v1.7.0

type RWPredParams struct {
	PredRange minmax.F32 `` /* 180-byte string literal not displayed */
}

RWPredParams parameterizes reward prediction for a simple Rescorla-Wagner learning dynamic (i.e., PV learning in the PVLV framework).

func (*RWPredParams) Defaults added in v1.7.0

func (rp *RWPredParams) Defaults()

func (*RWPredParams) Update added in v1.7.0

func (rp *RWPredParams) Update()

type RandFunIdx added in v1.7.7

type RandFunIdx uint32
const (
	RandFunActPGe RandFunIdx = iota
	RandFunActPGi
	RandFunIdxN
)

We use this enum to store a unique index for each function that requires random number generation. If you add a new function, you need to add a new enum entry here. RandFunIdxN is the total number of random functions. It autoincrements due to iota.
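
Concretely, extending the list means inserting the new entry before RandFunIdxN so that iota keeps the count correct. A hedged illustration of what the extended const block would look like (RandFunMyNewFun is hypothetical):

const (
	RandFunActPGe RandFunIdx = iota
	RandFunActPGi
	RandFunMyNewFun // hypothetical new function requiring its own random stream
	RandFunIdxN     // total number of random functions, updated automatically by iota
)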

type SWtAdaptParams added in v1.2.45

type SWtAdaptParams struct {
	On       slbool.Bool `` /* 137-byte string literal not displayed */
	LRate    float32     `` /* 388-byte string literal not displayed */
	SubMean  float32     `viewif:"On" def:"1" desc:"amount of mean to subtract from SWt delta when updating -- generally best to set to 1"`
	SigGain  float32     `` /* 135-byte string literal not displayed */
	DreamVar float32     `` /* 354-byte string literal not displayed */
	// contains filtered or unexported fields
}

SWtAdaptParams manages adaptation of SWt values

func (*SWtAdaptParams) Defaults added in v1.2.45

func (sp *SWtAdaptParams) Defaults()

func (*SWtAdaptParams) RndVar added in v1.2.55

func (sp *SWtAdaptParams) RndVar() float32

RndVar returns the random variance (zero mean) based on DreamVar param

func (*SWtAdaptParams) Update added in v1.2.45

func (sp *SWtAdaptParams) Update()

type SWtInitParams added in v1.2.45

type SWtInitParams struct {
	SPct float32     `` /* 315-byte string literal not displayed */
	Mean float32     `` /* 199-byte string literal not displayed */
	Var  float32     `def:"0.25" desc:"initial variance in weight values, prior to constraints."`
	Sym  slbool.Bool `` /* 149-byte string literal not displayed */
}

SWtInitParams for initial SWt values

func (*SWtInitParams) Defaults added in v1.2.45

func (sp *SWtInitParams) Defaults()

func (*SWtInitParams) RndVar added in v1.2.45

func (sp *SWtInitParams) RndVar() float32

RndVar returns the random variance in weight value (zero mean) based on Var param

func (*SWtInitParams) Update added in v1.2.45

func (sp *SWtInitParams) Update()

type SWtParams added in v1.2.45

type SWtParams struct {
	Init  SWtInitParams  `view:"inline" desc:"initialization of SWt values"`
	Adapt SWtAdaptParams `view:"inline" desc:"adaptation of SWt values in response to LWt learning"`
	Limit minmax.F32     `def:"{0.2 0.8}" view:"inline" desc:"range limits for SWt values"`
}

SWtParams manages structural, slowly adapting weight values (SWt), in terms of initialization and updating over course of learning. SWts impose initial and slowly adapting constraints on neuron connectivity to encourage differentiation of neuron representations and overall good behavior in terms of not hogging the representational space. The TrgAvg activity constraint is not enforced through SWt -- it needs to be more dynamic and supported by the regular learned weights.

func (*SWtParams) ClipSWt added in v1.2.45

func (sp *SWtParams) ClipSWt(swt float32) float32

ClipSWt returns SWt value clipped to valid range

func (*SWtParams) ClipWt added in v1.2.75

func (sp *SWtParams) ClipWt(wt float32) float32

ClipWt returns Wt value clipped to 0-1 range

func (*SWtParams) Defaults added in v1.2.45

func (sp *SWtParams) Defaults()

func (*SWtParams) InitWtsSyn added in v1.3.5

func (sp *SWtParams) InitWtsSyn(sy *Synapse, mean, spct float32)

InitWtsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.

func (*SWtParams) LWtFmWts added in v1.2.47

func (sp *SWtParams) LWtFmWts(wt, swt float32) float32

LWtFmWts returns linear, learning LWt from wt and swt. LWt is set to reproduce given Wt relative to given SWt base value.

func (*SWtParams) LinFmSigWt added in v1.2.45

func (sp *SWtParams) LinFmSigWt(wt float32) float32

LinFmSigWt returns linear weight from sigmoidal contrast-enhanced weight. wt is centered at 1, and normed in range +/- 1 around that, return value is in 0-1 range, centered at .5

func (*SWtParams) SigFmLinWt added in v1.2.45

func (sp *SWtParams) SigFmLinWt(lw float32) float32

SigFmLinWt returns sigmoidal contrast-enhanced weight from linear weight, centered at 1 and normed in range +/- 1 around that in preparation for multiplying times SWt

func (*SWtParams) Update added in v1.2.45

func (sp *SWtParams) Update()

func (*SWtParams) WtFmDWt added in v1.2.45

func (sp *SWtParams) WtFmDWt(dwt, wt, lwt *float32, swt float32)

WtFmDWt updates the synaptic weights from accumulated weight changes. wt is the sigmoidal contrast-enhanced weight and lwt is the linear weight value.

func (*SWtParams) WtVal added in v1.2.45

func (sp *SWtParams) WtVal(swt, lwt float32) float32

WtVal returns the effective Wt value given the SWt and LWt values
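
The effective weight combines the slowly-adapting SWt with the learned LWt, and LinFmSigWt / SigFmLinWt convert between the linear and contrast-enhanced (sigmoidal) encodings. A minimal sketch showing the round trip and the effective value (helper name is illustrative):

// wtEncodings shows the relationship between the linear and sigmoidal
// weight encodings, and the effective weight from SWt and LWt.
func wtEncodings(sp *axon.SWtParams, sy *axon.Synapse) (lin, roundTrip, eff float32) {
	lin = sp.LinFmSigWt(sy.Wt)     // linear weight from contrast-enhanced Wt
	roundTrip = sp.SigFmLinWt(lin) // back to the sigmoidal form (~ sy.Wt)
	eff = sp.WtVal(sy.SWt, sy.LWt) // effective weight from SWt and learned LWt
	return
}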

type SpikeNoiseParams added in v1.2.94

type SpikeNoiseParams struct {
	On   slbool.Bool `desc:"add noise simulating background spiking levels"`
	GeHz float32     `` /* 163-byte string literal not displayed */
	Ge   float32     `` /* 162-byte string literal not displayed */
	GiHz float32     `` /* 177-byte string literal not displayed */
	Gi   float32     `` /* 162-byte string literal not displayed */

	GeExpInt float32 `view:"-" json:"-" xml:"-" desc:"Exp(-Interval) which is the threshold for GeNoiseP as it is updated"`
	GiExpInt float32 `view:"-" json:"-" xml:"-" desc:"Exp(-Interval) which is the threshold for GiNoiseP as it is updated"`
	// contains filtered or unexported fields
}

SpikeNoiseParams parameterizes background spiking activity impinging on the neuron, simulated using a Poisson spiking process.

func (*SpikeNoiseParams) Defaults added in v1.2.94

func (an *SpikeNoiseParams) Defaults()

func (*SpikeNoiseParams) PGe added in v1.2.94

func (an *SpikeNoiseParams) PGe(ctx *Context, p *float32, ni uint32) float32

PGe updates the GeNoiseP probability by multiplying in a uniform random number [0-1], and returns the Ge contribution from spiking if a spike is triggered

func (*SpikeNoiseParams) PGi added in v1.2.94

func (an *SpikeNoiseParams) PGi(ctx *Context, p *float32, ni uint32) float32

PGi updates the GiNoiseP probability by multiplying in a uniform random number [0-1], and returns the Gi contribution from spiking if a spike is triggered

func (*SpikeNoiseParams) Update added in v1.2.94

func (an *SpikeNoiseParams) Update()

type SpikeParams

type SpikeParams struct {
	Thr      float32     `` /* 152-byte string literal not displayed */
	VmR      float32     `` /* 217-byte string literal not displayed */
	Tr       int32       `` /* 242-byte string literal not displayed */
	RTau     float32     `` /* 285-byte string literal not displayed */
	Exp      slbool.Bool `` /* 274-byte string literal not displayed */
	ExpSlope float32     `` /* 325-byte string literal not displayed */
	ExpThr   float32     `` /* 127-byte string literal not displayed */
	MaxHz    float32     `` /* 182-byte string literal not displayed */
	ISITau   float32     `def:"5" min:"1" desc:"constant for integrating the spiking interval in estimating spiking rate"`
	ISIDt    float32     `view:"-" desc:"rate = 1 / tau"`
	RDt      float32     `view:"-" desc:"rate = 1 / tau"`
	// contains filtered or unexported fields
}

SpikeParams contains spiking activation function params. Implements a basic thresholded Vm model, and optionally the AdEx adaptive exponential function (adapt is KNaAdapt)

func (*SpikeParams) ActFmISI

func (sk *SpikeParams) ActFmISI(isi, timeInc, integ float32) float32

ActFmISI computes rate-code activation from estimated spiking interval

func (*SpikeParams) ActToISI

func (sk *SpikeParams) ActToISI(act, timeInc, integ float32) float32

ActToISI compute spiking interval from a given rate-coded activation, based on time increment (.001 = 1msec default), Act.Dt.Integ
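
ActFmISI and ActToISI are approximate inverses, mapping between a rate-coded activation and an expected inter-spike interval given the time increment and integration factor. A minimal sketch using a 1 msec time increment and an integration factor of 1 (illustrative values; these normally come from the Act.Dt params):

// rateToISIAndBack converts a rate-coded activation to an expected
// inter-spike interval and back again.
func rateToISIAndBack(sk *axon.SpikeParams, act float32) (isi, rate float32) {
	isi = sk.ActToISI(act, 0.001, 1)
	rate = sk.ActFmISI(isi, 0.001, 1)
	return
}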

func (*SpikeParams) AvgFmISI

func (sk *SpikeParams) AvgFmISI(avg *float32, isi float32)

AvgFmISI updates spiking ISI from current isi interval value

func (*SpikeParams) Defaults

func (sk *SpikeParams) Defaults()

func (*SpikeParams) Update

func (sk *SpikeParams) Update()

type StartN added in v1.7.2

type StartN struct {
	Start uint32 `desc:"starting offset"`
	N     uint32 `desc:"number of items -- [Start:Start+N]"`
}

StartN holds a starting offset index and a number of items arranged from Start to Start+N (exclusive). This is not 16-byte padded and is only for use on the CPU side.
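
StartN describes a contiguous [Start : Start+N] range into a flat slice, which is how the per-unit connection and synapse ranges in PrjnBase are stored. A minimal sketch of slicing with it (this is the access pattern underlying methods such as RecvSyns):

// sliceRange returns the sub-slice of syns described by the StartN range.
func sliceRange(sn axon.StartN, syns []axon.Synapse) []axon.Synapse {
	return syns[sn.Start : sn.Start+sn.N]
}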

type SynComParams

type SynComParams struct {
	GType         PrjnGTypes  `desc:"type of conductance (G) communicated by this projection"`
	Delay         uint32      `` /* 405-byte string literal not displayed */
	MaxDelay      uint32      `` /* 286-byte string literal not displayed */
	PFail         float32     `` /* 149-byte string literal not displayed */
	PFailSWt      slbool.Bool `` /* 141-byte string literal not displayed */
	CPURecvSpikes slbool.Bool `` /* 332-byte string literal not displayed */

	DelLen uint32 `view:"-" desc:"delay length = actual length of the GBuf buffer per neuron = Delay+1 -- just for speed"`
	// contains filtered or unexported fields
}

SynComParams are synaptic communication parameters: used in the Prjn parameters. Includes delay and probability of failure, and Inhib for inhibitory connections, and modulatory projections that have multiplicative-like effects.

func (*SynComParams) Defaults

func (sc *SynComParams) Defaults()

func (*SynComParams) Fail

func (sc *SynComParams) Fail(wt *float32, swt float32)

Fail updates failure status of given weight, given SWt value

func (*SynComParams) ReadIdx added in v1.7.2

func (sc *SynComParams) ReadIdx(rnIdx uint32, cycTot int32) uint32

ReadIdx returns index for reading existing spikes from the GBuf buffer, based on the layer-based recv neuron index and the ReadOff offset from the CycleTot.

func (*SynComParams) ReadOff added in v1.7.2

func (sc *SynComParams) ReadOff(cycTot int32) uint32

ReadOff returns offset for reading existing spikes from the GBuf buffer, based on Context CycleTot counter which increments each cycle. This is logically the zero position in the ring buffer.

func (*SynComParams) RingIdx added in v1.7.2

func (sc *SynComParams) RingIdx(i uint32) uint32

RingIdx returns the wrap-around ring index for given raw index, for writing and reading spikes to the GBuf buffer based on the Context.CycleTot counter.

	RN: 0     1     2      <- recv neuron indexes
	DI: 0 1 2 0 1 2 0 1 2  <- delay indexes
	C0: ^ v                <- cycle 0, ring index: ^ = write, v = read
	C1:   ^ v              <- cycle 1, shift over by 1 -- overwrite last read
	C2: v   ^              <- cycle 2: read out value stored on C0 -- index wraps around

func (*SynComParams) Update

func (sc *SynComParams) Update()

func (*SynComParams) WriteIdx added in v1.7.2

func (sc *SynComParams) WriteIdx(rnIdx uint32, cycTot int32) uint32

WriteIdx returns actual index for writing new spikes into the GBuf buffer, based on the layer-based recv neuron index and the WriteOff offset computed from the CycleTot.

func (*SynComParams) WriteIdxOff added in v1.7.2

func (sc *SynComParams) WriteIdxOff(rnIdx, wrOff uint32) uint32

WriteIdxOff returns actual index for writing new spikes into the GBuf buffer, based on the layer-based recv neuron index and the given WriteOff offset.

func (*SynComParams) WriteOff added in v1.7.2

func (sc *SynComParams) WriteOff(cycTot int32) uint32

WriteOff returns offset for writing new spikes into the GBuf buffer, based on Context CycleTot counter which increments each cycle. This is logically the last position in the ring buffer.
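
Writes and reads into the GBuf ring buffer thus use the same neuron-relative indexing with different offsets from CycleTot: new spikes go to the logical end of the ring and delayed spikes are read from the logical start. A minimal sketch of the paired calls (helper name is illustrative):

// gbufIdxs returns the write and read indexes into the GBuf ring buffer
// for a layer-relative recv neuron index and the total cycle counter.
func gbufIdxs(sc *axon.SynComParams, rnIdx uint32, cycTot int32) (write, read uint32) {
	write = sc.WriteIdx(rnIdx, cycTot) // where a newly-sent spike lands
	read = sc.ReadIdx(rnIdx, cycTot)   // where the Delay-old spike is read from
	return
}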

func (*SynComParams) WtFail

func (sc *SynComParams) WtFail(swt float32) bool

WtFail returns true if synapse should fail, as function of SWt value (optionally)

func (*SynComParams) WtFailP

func (sc *SynComParams) WtFailP(swt float32) float32

WtFailP returns probability of weight (synapse) failure given current SWt value

type Synapse

type Synapse struct {
	RecvIdx uint32 `desc:"receiving neuron index in network's global list of neurons"`
	SendIdx uint32 `desc:"sending neuron index in network's global list of neurons"`
	PrjnIdx uint32 `desc:"projection index in global list of projections organized as [Layers][RecvPrjns]"`
	CaUpT   int32  `desc:"time in CycleTot of last updating of Ca values at the synapse level, for optimized synaptic-level Ca integration."`

	Wt   float32 `` /* 282-byte string literal not displayed */
	LWt  float32 `` /* 309-byte string literal not displayed */
	SWt  float32 `` /* 466-byte string literal not displayed */
	DWt  float32 `desc:"delta (change in) synaptic weight, from learning -- updates LWt which then updates Wt."`
	DSWt float32 `desc:"change in SWt slow synaptic weight -- accumulates DWt"`
	Ca   float32 `desc:"Raw calcium signal for Kinase learning: SpikeG * (send.CaSyn * recv.CaSyn)"`
	CaM  float32 `desc:"first stage running average (mean) Ca calcium level (like CaM = calmodulin), feeds into CaP"`
	CaP  float32 `` /* 165-byte string literal not displayed */
	CaD  float32 `` /* 164-byte string literal not displayed */
	Tr   float32 `` /* 158-byte string literal not displayed */
	DTr  float32 `desc:"delta (change in) Tr trace of synaptic activity over time"`
	// contains filtered or unexported fields
}

axon.Synapse holds state for the synaptic connection between neurons

func (*Synapse) SetVarByIndex

func (sy *Synapse) SetVarByIndex(idx int, val float32)

func (*Synapse) SetVarByName

func (sy *Synapse) SetVarByName(varNm string, val float32) error

SetVarByName sets synapse variable to given value

func (*Synapse) VarByIndex

func (sy *Synapse) VarByIndex(idx int) float32

VarByIndex returns variable using index (0 = first variable in SynapseVars list)

func (*Synapse) VarByName

func (sy *Synapse) VarByName(varNm string) (float32, error)

VarByName returns variable by name, or error

func (*Synapse) VarNames

func (sy *Synapse) VarNames() []string

type TDDaParams added in v1.7.0

type TDDaParams struct {
	TonicGe       float32 `desc:"tonic baseline Ge level for DA = 0 -- +/- are between 0 and 2*TonicGe -- just for spiking display of computed DA value"`
	TDIntegLayIdx int32   `inactive:"+" desc:"idx of TDIntegLayer to get reward prediction from -- set during Build from BuildConfig TDIntegLayName"`
	// contains filtered or unexported fields
}

TDDaParams are params for dopamine (DA) signal as the temporal difference (TD) between the TDIntegLayer activations in the minus and plus phase.

func (*TDDaParams) Defaults added in v1.7.0

func (tp *TDDaParams) Defaults()

func (*TDDaParams) GeFmDA added in v1.7.0

func (tp *TDDaParams) GeFmDA(da float32) float32

GeFmDA returns excitatory conductance from DA dopamine value

func (*TDDaParams) Update added in v1.7.0

func (tp *TDDaParams) Update()

type TDIntegParams added in v1.7.0

type TDIntegParams struct {
	Discount     float32 `desc:"discount factor -- how much to discount the future prediction from TDPred"`
	PredGain     float32 `desc:"gain factor on TD rew pred activations"`
	TDPredLayIdx int32   `inactive:"+" desc:"idx of TDPredLayer to get reward prediction from -- set during Build from BuildConfig TDPredLayName"`
	// contains filtered or unexported fields
}

TDIntegParams are params for reward integrator layer

func (*TDIntegParams) Defaults added in v1.7.0

func (tp *TDIntegParams) Defaults()

func (*TDIntegParams) Update added in v1.7.0

func (tp *TDIntegParams) Update()

type TopoInhibParams added in v1.2.85

type TopoInhibParams struct {
	On      slbool.Bool `desc:"use topographic inhibition"`
	Width   int32       `viewif:"On" desc:"half-width of topographic inhibition within layer"`
	Sigma   float32     `viewif:"On" desc:"normalized gaussian sigma as proportion of Width, for gaussian weighting"`
	Wrap    slbool.Bool `viewif:"On" desc:"whether to wrap around layer edges for topographic inhibition"`
	Gi      float32     `viewif:"On" desc:"overall inhibition multiplier for topographic inhibition (generally <= 1)"`
	FF      float32     `` /* 133-byte string literal not displayed */
	FB      float32     `` /* 139-byte string literal not displayed */
	FF0     float32     `` /* 186-byte string literal not displayed */
	WidthWt float32     `inactive:"+" desc:"weight value at width -- to assess the value of Sigma"`
	// contains filtered or unexported fields
}

TopoInhibParams provides for topographic gaussian inhibition integrating over neighborhood. TODO: not currently being used

func (*TopoInhibParams) Defaults added in v1.2.85

func (ti *TopoInhibParams) Defaults()

func (*TopoInhibParams) GiFmGeAct added in v1.2.85

func (ti *TopoInhibParams) GiFmGeAct(ge, act, ff0 float32) float32

func (*TopoInhibParams) Update added in v1.2.85

func (ti *TopoInhibParams) Update()

type TraceParams added in v1.5.1

type TraceParams struct {
	Tau     float32 `` /* 126-byte string literal not displayed */
	SubMean float32 `` /* 409-byte string literal not displayed */
	Dt      float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"`
	// contains filtered or unexported fields
}

TraceParams manages the parameters for the synaptic activity trace used in trace-based learning (integration time constant and mean subtraction)

func (*TraceParams) Defaults added in v1.5.1

func (tp *TraceParams) Defaults()

func (*TraceParams) TrFmCa added in v1.5.1

func (tp *TraceParams) TrFmCa(tr float32, ca float32) float32

TrFmCa returns updated trace factor as function of a synaptic calcium update factor and current trace

func (*TraceParams) Update added in v1.5.1

func (tp *TraceParams) Update()

type TrgAvgActParams added in v1.2.45

type TrgAvgActParams struct {
	On           slbool.Bool `desc:"whether to use target average activity mechanism to scale synaptic weights"`
	ErrLRate     float32     `` /* 255-byte string literal not displayed */
	SynScaleRate float32     `` /* 289-byte string literal not displayed */
	SubMean      float32     `` /* 235-byte string literal not displayed */
	TrgRange     minmax.F32  `` /* 181-byte string literal not displayed */
	Permute      slbool.Bool `` /* 236-byte string literal not displayed */
	Pool         slbool.Bool `` /* 206-byte string literal not displayed */
	// contains filtered or unexported fields
}

TrgAvgActParams govern the target and actual long-term average activity in neurons. The target value is adapted by neuron-wise error and the difference between actual and target, and drives synaptic scaling at a slow timescale (Network.SlowInterval).

func (*TrgAvgActParams) Defaults added in v1.2.45

func (ta *TrgAvgActParams) Defaults()

func (*TrgAvgActParams) Update added in v1.2.45

func (ta *TrgAvgActParams) Update()
