axon

package
v1.4.2
Published: Jun 10, 2022 License: BSD-3-Clause Imports: 55 Imported by: 34

Documentation

Overview

Package axon provides the basic reference axon implementation, using spiking activations and biologically based, error-driven learning. Other packages provide deep axon variants, PVLV, PBWM, etc.

The overall design seeks an "optimal" tradeoff between simplicity, transparency, ability to flexibly recombine and extend elements, and avoiding having to rewrite a bunch of stuff.

The *Stru elements handle the core structural components of the network, and hold emer.* interface pointers to elements such as emer.Layer, which provides a very minimal interface for these elements. Interfaces are automatically pointers, so think of these as generic pointers to your specific Layers etc.

This design means the same *Stru infrastructure can be re-used across different variants of the algorithm. Because we're keeping this infrastructure minimal and algorithm-free it should be much less confusing than dealing with the multiple levels of inheritance in C++ emergent. The actual algorithm-specific code is now fully self-contained, and largely orthogonalized from the infrastructure.

One specific cost of this is the need to cast the emer.* interface pointers into the specific types of interest, when accessing via the *Stru infrastructure.

The *Params elements contain all the (meta)parameters and associated methods for computing various functions. They are the equivalent of Specs from original emergent, but unlike Specs they are local to each place they are used, and styling is used to apply common parameters across multiple layers etc. (illustrated in the sketch after the list below). Params is a more explicit, recognizable name than Specs, and it also helps avoid confusion with the rather different nature of the old Specs. Pars is shorter but confusable with "Parents", so "Params" is less ambiguous.

Params are organized into four major categories, which are more clearly functionally labeled as opposed to just structurally so, to keep things clearer and better organized overall:

* ActParams -- activation params, at the Neuron level (in act.go)
* InhibParams -- inhibition params, at the Layer / Pool level (in inhib.go)
* LearnNeurParams -- learning parameters at the Neuron level (running-averages that drive learning)
* LearnSynParams -- learning parameters at the Synapse level (both in learn.go)
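
As a hedged sketch of the styling approach (this uses the emergent params package; the selectors and values here are illustrative, not recommended settings):

// import "github.com/emer/emergent/params"
var ParamSets = params.Sets{
	{Name: "Base", Desc: "illustrative base parameters", Sheets: params.Sheets{
		"Network": &params.Sheet{
			{Sel: "Layer", Desc: "applies to every layer",
				Params: params.Params{
					"Layer.Inhib.Layer.Gi": "1.1", // illustrative value
					"Layer.Act.Gbar.L":     "0.2", // illustrative value
				}},
			{Sel: "#Output", Desc: "applies only to the layer named Output",
				Params: params.Params{
					"Layer.Inhib.Layer.Gi": "0.9", // illustrative value
				}},
		},
	}},
}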

The levels of structure and state are:

* Network
  * .Layers
    * .Pools: pooled inhibition state -- 1 for the layer plus 1 for each sub-pool (unit group) with inhibition
    * .RecvPrjns: receiving projections from other sending layers
    * .SendPrjns: sending projections to other receiving layers
    * .Neurons: neuron state variables

There are methods on the Network that perform initialization and overall computation, by iterating over layers and calling methods there. This is typically how most users will run their models.
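
For orientation, here is a minimal sketch of typical construction and initialization (assuming the standard emergent helper methods such as InitName, AddLayer2D, and ConnectLayers; the layer names and sizes are illustrative):

package main

import (
	"github.com/emer/axon/axon"
	"github.com/emer/emergent/emer"
	"github.com/emer/emergent/prjn"
)

func main() {
	net := &axon.Network{}
	net.InitName(net, "Demo") // standard emer-style naming (assumed helper)
	inp := net.AddLayer2D("Input", 5, 5, emer.Input)
	hid := net.AddLayer2D("Hidden", 10, 10, emer.Hidden)
	out := net.AddLayer2D("Output", 5, 5, emer.Target)
	net.ConnectLayers(inp, hid, prjn.NewFull(), emer.Forward)
	net.BidirConnectLayers(hid, out, prjn.NewFull())
	net.Defaults()                      // set all default parameters
	if err := net.Build(); err != nil { // allocate neurons, pools, synapses
		panic(err)
	}
	net.InitWts() // initialize weights (resets learning)
}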

Parallel computation across multiple CPU cores (threading) is achieved through persistent worker goroutines that listen for functions to run on thread-specific channels. Each layer has a designated thread number, so you can experiment with different ways of dividing up the computation. Timing data is kept for per-thread time use -- see TimeReport() on the network.
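
A hedged sketch of assigning layers to threads (SetThread on the layer is assumed to be available via the emer.Layer interface; the two-thread split is purely illustrative):

// assignThreads alternates layers between two worker threads.
func assignThreads(net *axon.Network) {
	for i, ly := range net.Layers {
		ly.SetThread(i % 2) // assumed emer.Layer method for thread assignment
	}
	// after running the model, net.TimeReport() summarizes per-thread timing
}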

The Layer methods directly iterate over Neurons, Pools, and Prjns, and there is no finer-grained level of computation (e.g., at the individual Neuron level), except for the *Params methods that directly compute relevant functions. Thus, looking directly at the layer.go code should provide a clear sense of exactly how everything is computed -- you may need to refer to act.go, learn.go etc to see the relevant details, but at least the overall organization should be clear in layer.go.

Computational methods are generally named VarFmVar, to specifically name what variable is being computed from what other input variables, e.g., ActFmG computes activation from conductances G.

The Pools (type Pool, in pool.go) hold state used for computing pooled inhibition, but also are used to hold overall aggregate pooled state variables -- the first element in Pools applies to the layer itself, and subsequent ones are for each sub-pool (4D layers). These pools play the same role as the AxonUnGpState structures in C++ emergent.

Prjns directly support all synapse-level computation, and hold the LearnSynParams and iterate directly over all of their synapses. It is the exact same Prjn object that lives in the RecvPrjns of the receiver-side, and the SendPrjns of the sender-side, and it maintains and coordinates both sides of the state. This clarifies and simplifies a lot of code. There is no separate equivalent of AxonConSpec / AxonConState at the level of connection groups per unit per projection.

The pattern of connectivity between units is specified by the prjn.Pattern interface, and all the different standard options are available in that prjn package. The Pattern code generates a full tensor bitmap of binary 1's and 0's for connected (1's) and not-connected (0's) units, and can use any method to do so. This full lookup-table approach is not the most memory-efficient, but it is fully general and shouldn't be too bad memory-wise overall (fully bit-packed arrays are used, and these bitmaps don't need to be retained once connections have been established). This approach allows patterns to just focus on patterns, and they don't care at all how they are used to allocate actual connections.
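
A small sketch of this division of labor (prjn.NewFull and prjn.NewOneToOne are standard patterns from the prjn package; the wiring here is illustrative):

// connectSketch wires already-created layers using reusable Pattern values.
// The patterns only specify which units connect; Build uses the generated
// bitmaps to allocate synapses and then discards them.
func connectSketch(net *axon.Network, inp, hid, out emer.Layer) {
	full := prjn.NewFull()        // all-to-all: bitmap of all 1's
	one2one := prjn.NewOneToOne() // unit i connects only to unit i
	net.ConnectLayers(inp, hid, full, emer.Forward)
	net.ConnectLayers(hid, out, one2one, emer.Forward)
}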

Index

Constants

const (
	Version     = "v1.4.2"
	GitCommit   = "14f0ed0"          // the commit JUST BEFORE the release
	VersionDate = "2022-06-10 20:53" // UTC
)
const (
	// NMDAPrjn are projections that have strong NMDA channels supporting maintenance
	NMDA emer.PrjnType = emer.PrjnType(emer.PrjnTypeN) + iota
)

The GLong prjn types

const NeuronVarStart = 8

NeuronVarStart is the byte offset of fields in the Neuron structure where the float32 named variables start. Note: all non-float32 infrastructure variables must be at the start!

const SynapseVarStart = 4

SynapseVarStart is the byte offset of fields in the Synapse structure where the float32 named variables start. Note: all non-float32 infrastructure variables must be at the start!

Variables

var KiT_Layer = kit.Types.AddType(&Layer{}, LayerProps)
var KiT_NMDAPrjn = kit.Types.AddType(&NMDAPrjn{}, PrjnProps)
var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
var KiT_NeurFlags = kit.Enums.AddEnum(NeurFlagsN, kit.BitFlag, nil)
var KiT_Prjn = kit.Types.AddType(&Prjn{}, PrjnProps)
var KiT_PrjnType = kit.Enums.AddEnumExt(emer.KiT_PrjnType, PrjnTypeN, kit.NotBitFlag, nil)
var LayerProps = ki.Props{
	"ToolBar": ki.PropSlice{
		{"Defaults", ki.Props{
			"icon": "reset",
			"desc": "return all parameters to their intial default values",
		}},
		{"InitWts", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's weight values according to prjn parameters, for all *sending* projections out of this layer",
		}},
		{"InitActs", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's activation values",
		}},
		{"sep-act", ki.BlankProp{}},
		{"LesionNeurons", ki.Props{
			"icon": "close",
			"desc": "Lesion (set the Off flag) for given proportion of neurons in the layer (number must be 0 -- 1, NOT percent!)",
			"Args": ki.PropSlice{
				{"Proportion", ki.Props{
					"desc": "proportion (0 -- 1) of neurons to lesion",
				}},
			},
		}},
		{"UnLesionNeurons", ki.Props{
			"icon": "reset",
			"desc": "Un-Lesion (reset the Off flag) for all neurons in the layer",
		}},
	},
}
var NetworkProps = ki.Props{
	"ToolBar": ki.PropSlice{
		{"SaveWtsJSON", ki.Props{
			"label": "Save Wts...",
			"icon":  "file-save",
			"desc":  "Save json-formatted weights",
			"Args": ki.PropSlice{
				{"Weights File Name", ki.Props{
					"default-field": "WtsFile",
					"ext":           ".wts,.wts.gz",
				}},
			},
		}},
		{"OpenWtsJSON", ki.Props{
			"label": "Open Wts...",
			"icon":  "file-open",
			"desc":  "Open json-formatted weights",
			"Args": ki.PropSlice{
				{"Weights File Name", ki.Props{
					"default-field": "WtsFile",
					"ext":           ".wts,.wts.gz",
				}},
			},
		}},
		{"sep-file", ki.BlankProp{}},
		{"Build", ki.Props{
			"icon": "update",
			"desc": "build the network's neurons and synapses according to current params",
		}},
		{"InitWts", ki.Props{
			"icon": "update",
			"desc": "initialize the network weight values according to prjn parameters",
		}},
		{"InitActs", ki.Props{
			"icon": "update",
			"desc": "initialize the network activation values",
		}},
		{"sep-act", ki.BlankProp{}},
		{"AddLayer", ki.Props{
			"label": "Add Layer...",
			"icon":  "new",
			"desc":  "add a new layer to network",
			"Args": ki.PropSlice{
				{"Layer Name", ki.Props{}},
				{"Layer Shape", ki.Props{
					"desc": "shape of layer, typically 2D (Y, X) or 4D (Pools Y, Pools X, Units Y, Units X)",
				}},
				{"Layer Type", ki.Props{
					"desc": "type of layer -- used for determining how inputs are applied",
				}},
			},
		}},
		{"ConnectLayerNames", ki.Props{
			"label": "Connect Layers...",
			"icon":  "new",
			"desc":  "add a new connection between layers in the network",
			"Args": ki.PropSlice{
				{"Send Layer Name", ki.Props{}},
				{"Recv Layer Name", ki.Props{}},
				{"Pattern", ki.Props{
					"desc": "pattern to connect with",
				}},
				{"Prjn Type", ki.Props{
					"desc": "type of projection -- direction, or other more specialized factors",
				}},
			},
		}},
		{"AllPrjnScales", ki.Props{
			"icon":        "file-sheet",
			"desc":        "AllPrjnScales returns a listing of all PrjnScale parameters in the Network in all Layers, Recv projections.  These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.",
			"show-return": true,
		}},
	},
}
var NeuronVarProps = map[string]string{
	"GeSyn":    `range:"2"`,
	"Ge":       `range:"2"`,
	"GeM":      `range:"2"`,
	"Vm":       `min:"0" max:"1"`,
	"VmDend":   `min:"0" max:"1"`,
	"ISI":      `auto-scale:"+"`,
	"ISIAvg":   `auto-scale:"+"`,
	"Gi":       `auto-scale:"+"`,
	"Gk":       `auto-scale:"+"`,
	"ActDel":   `auto-scale:"+"`,
	"ActDif":   `auto-scale:"+"`,
	"AvgPct":   `range:"2"`,
	"TrgAvg":   `range:"2"`,
	"DTrgAvg":  `auto-scale:"+"`,
	"GknaFast": `auto-scale:"+"`,
	"GknaMed":  `auto-scale:"+"`,
	"GknaSlow": `auto-scale:"+"`,
	"Gnmda":    `auto-scale:"+"`,
	"GnmdaSyn": `auto-scale:"+"`,
	"RnmdaSyn": `auto-scale:"+"`,
	"GgabaB":   `auto-scale:"+"`,
	"GABAB":    `auto-scale:"+"`,
	"GABABx":   `auto-scale:"+"`,
}
var NeuronVars = []string{}
var NeuronVarsMap map[string]int
var PrjnProps = ki.Props{
	"EnumType:Typ": KiT_PrjnType,
}
var SynapseVarProps = map[string]string{
	"DWt":    `auto-scale:"+"`,
	"DSWt":   `auto-scale:"+"`,
	"TDWt":   `auto-scale:"+"`,
	"CaM":    `auto-scale:"+"`,
	"CaP":    `auto-scale:"+"`,
	"CaD":    `auto-scale:"+"`,
	"CaDMax": `auto-scale:"+"`,
}
var SynapseVars = []string{"Wt", "SWt", "LWt", "DWt", "DSWt", "TDWt", "Ca", "CaM", "CaP", "CaD", "CaDMax"}
var SynapseVarsMap map[string]int

Functions

func AgentApplyInputs added in v1.3.36

func AgentApplyInputs(net *Network, en agent.WorldInterface, layerName string, patfunc func(spec agent.SpaceSpec) etensor.Tensor)

AgentApplyInputs applies input patterns from given environment. It is good practice to have this be a separate method with appropriate args so that it can be used for various different contexts (training, testing, etc).

func AgentSendActionAndStep added in v1.3.37

func AgentSendActionAndStep(net *Network, ev agent.WorldInterface)

AgentSendActionAndStep takes action for this step, using either decoded cortical or reflexive subcortical action from env.

func DecaySynCa added in v1.3.21

func DecaySynCa(sy *Synapse, decay float32)

DecaySynCa decays synaptic calcium by given factor (between trials)

func EnvApplyInputs added in v1.3.36

func EnvApplyInputs(net *Network, ev env.Env)

EnvApplyInputs applies input patterns from given env.Env environment to Input and Target layer types, assuming that env provides State with the same names as the layers. If these assumptions don't fit, use a separate method.
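
When the name-matching assumption does not fit, a hand-rolled version can look like the following sketch (the layer names are illustrative; the AxonLayer cast follows the design described in the Overview):

// applyInputs applies env states to explicitly named layers, casting the
// generic emer.Layer pointer to the axon-specific type.
func applyInputs(net *axon.Network, ev env.Env) {
	lays := []string{"Input", "Output"} // illustrative layer names
	for _, lnm := range lays {
		ly := net.LayerByName(lnm).(axon.AxonLayer).AsAxon()
		ly.InitExt()          // clear any prior external input
		pat := ev.State(lnm)  // assumes the env exposes a state per layer name
		if pat != nil {
			ly.ApplyExt(pat)
		}
	}
}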

func InitSynCa added in v1.3.21

func InitSynCa(sy *Synapse)

InitSynCa initializes synaptic calcium state, including CaUpT

func JsonToParams

func JsonToParams(b []byte) string

JsonToParams reformats json output to suitable params display output

func LogAddDiagnosticItems added in v1.3.35

func LogAddDiagnosticItems(lg *elog.Logs, net *Network, times ...etime.Times)

LogAddDiagnosticItems adds standard Axon diagnostic statistics to given logs, across two given time levels, in higher to lower order, e.g., Epoch, Trial. These are useful for tuning and diagnosing the behavior of the network.

func LogAddLayerActTensorItems added in v1.3.35

func LogAddLayerActTensorItems(lg *elog.Logs, net *Network, mode etime.Modes, etm etime.Times)

LogAddLayerActTensorItems adds Act tensor recording items for Input and Target layers for given mode and time (e.g., Test, Trial)

func LogAddLayerGeActAvgItems added in v1.3.35

func LogAddLayerGeActAvgItems(lg *elog.Logs, net *Network, mode etime.Modes, etm etime.Times)

LogAddLayerGeActAvgItems adds Ge and Act average items for Hidden and Target layers for given mode and time (e.g., Test, Cycle). These are useful for monitoring layer activity during testing.

func LogAddPCAItems added in v1.3.35

func LogAddPCAItems(lg *elog.Logs, net *Network, times ...etime.Times)

LogAddPCAItems adds PCA statistics to log for Hidden and Target layers across 3 given time levels, in higher to lower order, e.g., Run, Epoch, Trial. These are useful for diagnosing the behavior of the network.

func LogTestErrors added in v1.3.35

func LogTestErrors(lg *elog.Logs)

LogTestErrors records all errors made across TestTrials, at Test Epoch scope

func LooperResetLogBelow added in v1.3.35

func LooperResetLogBelow(man *looper.Manager, logs *elog.Logs)

LooperResetLogBelow adds a function in OnStart to all stacks and loops to reset the log at the level below each loop -- this is good default behavior.

func LooperSimCycleAndLearn added in v1.3.35

func LooperSimCycleAndLearn(man *looper.Manager, net *Network, time *Time, viewupdt *netview.ViewUpdt)

LooperSimCycleAndLearn adds Cycle and DWt, WtFmDWt functions to looper for given network, time, and netview update manager

func LooperStdPhases added in v1.3.35

func LooperStdPhases(man *looper.Manager, time *Time, net *Network, plusStart, plusEnd int)

LooperStdPhases adds the minus and plus phases of the theta cycle, along with embedded beta phases, which in this case just record ActSt1 and ActSt2 activity. plusStart is the start of the plus phase, typically 150, and plusEnd is the end of the plus phase, typically 199. Also resets the state at the start of the trial.
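
A hedged sketch of wiring these helpers together (looper.NewManager and axon.NewTime are assumed constructors; the rest follows the signatures listed here):

// configLoops attaches the standard axon phases, learning, and netview
// updating to a looper Manager.
func configLoops(net *axon.Network, viewupdt *netview.ViewUpdt) *looper.Manager {
	man := looper.NewManager() // assumed constructor
	ltime := axon.NewTime()    // assumed constructor for axon.Time
	axon.LooperStdPhases(man, ltime, net, 150, 199) // minus phase 0-149, plus phase 150-199
	axon.LooperSimCycleAndLearn(man, net, ltime, viewupdt)
	axon.LooperUpdtNetView(man, viewupdt)
	return man
}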

func LooperUpdtNetView added in v1.3.35

func LooperUpdtNetView(man *looper.Manager, viewupdt *netview.ViewUpdt)

LooperUpdtNetView adds netview update calls at each time level

func LooperUpdtPlots added in v1.3.35

func LooperUpdtPlots(man *looper.Manager, gui *egui.GUI)

LooperUpdtPlots adds plot update calls at each time level

func NeuronVarIdxByName

func NeuronVarIdxByName(varNm string) (int, error)

NeuronVarIdxByName returns the index of the variable in the Neuron, or error

func PCAStats added in v1.3.35

func PCAStats(net emer.Network, lg *elog.Logs, stats *estats.Stats)

PCAStats computes PCA statistics on recorded hidden activation patterns from Analyze, Trial log data

func SaveWeights added in v1.3.29

func SaveWeights(net *Network, ctrString, runName string)

SaveWeights saves network weights to filename with WeightsFileName information to identify the weights.

func SaveWeightsIfArgSet added in v1.3.35

func SaveWeightsIfArgSet(net *Network, args *ecmd.Args, ctrString, runName string)

SaveWeightsIfArgSet saves network weights if the "wts" arg has been set to true. Uses WeightsFileName information to identify the weights.

func SigFun

func SigFun(w, gain, off float32) float32

SigFun is the sigmoid function for value w in 0-1 range, with gain and offset params

func SigFun61

func SigFun61(w float32) float32

SigFun61 is the sigmoid function for value w in 0-1 range, with default gain = 6, offset = 1 params

func SigInvFun

func SigInvFun(w, gain, off float32) float32

SigInvFun is the inverse of the sigmoid function

func SigInvFun61

func SigInvFun61(w float32) float32

SigInvFun61 is the inverse of the sigmoid function, with default gain = 6, offset = 1 params
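
A quick check that the inverse functions round-trip (within float32 precision):

package main

import (
	"fmt"

	"github.com/emer/axon/axon"
)

func main() {
	w := float32(0.4)
	lw := axon.SigFun61(w)       // contrast-enhanced weight (gain 6, offset 1)
	back := axon.SigInvFun61(lw) // inverse recovers the original linear weight
	fmt.Printf("w=%v sig=%v inv=%v\n", w, lw, back) // back is approximately 0.4
}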

func SynapseVarByName

func SynapseVarByName(varNm string) (int, error)

SynapseVarByName returns the index of the variable in the Synapse, or error

func ToggleLayersOff added in v1.3.29

func ToggleLayersOff(net *Network, layerNames []string, off bool)

ToggleLayersOff can be used to disable layers in a Network, for example if you are doing an ablation study.
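
A small ablation sketch (the layer name is illustrative):

// ablationTest disables a layer, leaves room for testing, then restores it.
func ablationTest(net *axon.Network) {
	lesion := []string{"Hidden2"}            // hypothetical layer to disable
	axon.ToggleLayersOff(net, lesion, true)  // set the Off flag
	// ... run test trials with the layer disabled ...
	axon.ToggleLayersOff(net, lesion, false) // restore
}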

func WeightsFileName added in v1.3.35

func WeightsFileName(net *Network, ctrString, runName string) string

WeightsFileName returns the default current weights file name, using the train run and epoch counters from looper, and the RunName string identifying the tag, parameters, and starting run.

Types

type ActAvgParams

type ActAvgParams struct {
	InhTau    float32 `` /* 249-byte string literal not displayed */
	Init      float32 `` /* 166-byte string literal not displayed */
	AdaptGi   bool    `` /* 126-byte string literal not displayed */
	Targ      float32 `` /* 151-byte string literal not displayed */
	HiTol     float32 `` /* 263-byte string literal not displayed */
	LoTol     float32 `` /* 263-byte string literal not displayed */
	AdaptRate float32 `` /* 182-byte string literal not displayed */

	InhDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"`
}

ActAvgParams represents expected average activity levels in the layer. Used for the running-average computation that is then used for G scaling. Also specifies the time constant for updating the average, and the target value for adapting inhibition in inhib_adapt.

func (*ActAvgParams) Adapt added in v1.2.37

func (aa *ActAvgParams) Adapt(gimult *float32, trg, act float32) bool

Adapt adapts the given gi multiplier factor as function of target and actual average activation, given current params.

func (*ActAvgParams) AvgFmAct

func (aa *ActAvgParams) AvgFmAct(avg *float32, act float32, dt float32)

AvgFmAct updates the running-average activation given average activity level in layer

func (*ActAvgParams) Defaults

func (aa *ActAvgParams) Defaults()

func (*ActAvgParams) Update

func (aa *ActAvgParams) Update()

type ActAvgVals added in v1.2.32

type ActAvgVals struct {
	ActMAvg   float32 `` /* 141-byte string literal not displayed */
	ActPAvg   float32 `inactive:"+" desc:"running-average plus-phase activity integrated at Dt.LongAvgTau"`
	AvgMaxGeM float32 `` /* 203-byte string literal not displayed */
	AvgMaxGiM float32 `` /* 203-byte string literal not displayed */
	GiMult    float32 `inactive:"+" desc:"multiplier on inhibition -- adapted to maintain target activity level"`
}

ActAvgVals are running-average activation levels used for Ge scaling and adaptive inhibition

type ActInitParams

type ActInitParams struct {
	Vm  float32 `def:"0.3" desc:"initial membrane potential -- see Erev.L for the resting potential (typically .3)"`
	Act float32 `def:"0" desc:"initial activation value -- typically 0"`
	Ge  float32 `` /* 268-byte string literal not displayed */
	Gi  float32 `` /* 235-byte string literal not displayed */
}

ActInitParams are initial values for key network state variables. Initialized in InitActs called by InitWts, and provides target values for DecayState.

func (*ActInitParams) Defaults

func (ai *ActInitParams) Defaults()

func (*ActInitParams) Update

func (ai *ActInitParams) Update()

type ActParams

type ActParams struct {
	Spike   SpikeParams       `view:"inline" desc:"Spiking function parameters"`
	Dend    DendParams        `view:"inline" desc:"dendrite-specific parameters"`
	Init    ActInitParams     `` /* 155-byte string literal not displayed */
	Decay   DecayParams       `` /* 233-byte string literal not displayed */
	Dt      DtParams          `view:"inline" desc:"time and rate constants for temporal derivatives / updating of activation state"`
	Gbar    chans.Chans       `view:"inline" desc:"[Defaults: 1, .2, 1, 1] maximal conductances levels for channels"`
	Erev    chans.Chans       `view:"inline" desc:"[Defaults: 1, .3, .25, .1] reversal potentials for each channel"`
	Clamp   ClampParams       `view:"inline" desc:"how external inputs drive neural activations"`
	Noise   SpikeNoiseParams  `view:"inline" desc:"how, where, when, and how much noise to add"`
	VmRange minmax.F32        `` /* 165-byte string literal not displayed */
	KNa     knadapt.Params    `` /* 252-byte string literal not displayed */
	NMDA    chans.NMDAParams  `` /* 252-byte string literal not displayed */
	GABAB   chans.GABABParams `view:"inline" desc:"GABA-B / GIRK channel parameters"`
	VGCC    chans.VGCCParams  `` /* 159-byte string literal not displayed */
	AK      chans.AKsParams   `` /* 135-byte string literal not displayed */
	Attn    AttnParams        `view:"inline" desc:"Attentional modulation parameters: how Attn modulates Ge"`
}

axon.ActParams contains all the activation computation params and functions for basic Axon, at the neuron level. This is included in axon.Layer to drive the computation.

func (*ActParams) ActFmG

func (ac *ActParams) ActFmG(nrn *Neuron)

ActFmG computes Spike from Vm and ISI-based activation

func (*ActParams) DecayState

func (ac *ActParams) DecayState(nrn *Neuron, decay float32)

DecayState decays the activation state toward initial values in proportion to given decay parameter. Special case values such as Glong and KNa are also decayed with their separately parameterized values. Called with ac.Decay.Act by Layer during NewState

func (*ActParams) Defaults

func (ac *ActParams) Defaults()

func (*ActParams) GeFmRaw

func (ac *ActParams) GeFmRaw(nrn *Neuron, geRaw, geExt float32)

GeFmRaw integrates Ge excitatory conductance from the GeRaw value into GeSyn. geExt is extra conductance to add to the final Ge value.

func (*ActParams) GeNoise added in v1.3.23

func (ac *ActParams) GeNoise(nrn *Neuron)

GeNoise updates nrn.GeNoise if active

func (*ActParams) GiFmRaw

func (ac *ActParams) GiFmRaw(nrn *Neuron, giRaw float32)

GiFmRaw integrates GiSyn inhibitory synaptic conductance from the GiRaw value (other terms can be added to giRaw prior to calling this).

func (*ActParams) GiNoise added in v1.3.23

func (ac *ActParams) GiNoise(nrn *Neuron)

GiNoise updates nrn.GiNoise if active

func (*ActParams) GvgccFmVm added in v1.3.24

func (ac *ActParams) GvgccFmVm(nrn *Neuron)

GvgccFmVm updates all the VGCC voltage-gated calcium channel variables from VmDend

func (*ActParams) InetFmG

func (ac *ActParams) InetFmG(vm, ge, gl, gi, gk float32) float32

InetFmG computes net current from conductances and Vm

func (*ActParams) InitActs

func (ac *ActParams) InitActs(nrn *Neuron)

InitActs initializes activation state in neuron -- called during InitWts but otherwise not automatically called (DecayState is used instead)

func (*ActParams) InitLongActs added in v1.2.66

func (ac *ActParams) InitLongActs(nrn *Neuron)

InitLongActs initializes longer time-scale activation states in neuron (ActPrv, ActSt*, ActM, ActP, ActDif) Called from InitActs, which is called from InitWts, but otherwise not automatically called (DecayState is used instead)

func (*ActParams) NMDAFmRaw added in v1.3.1

func (ac *ActParams) NMDAFmRaw(nrn *Neuron, geExt float32)

NMDAFmRaw updates all the NMDA variables from GnmdaRaw and current Vm, Spiking

func (*ActParams) SenderGDecay added in v1.3.1

func (ac *ActParams) SenderGDecay(nrn *Neuron)

SenderGDecay updates Se, Si, Snmda when the neuron has not spiked this time around -- in effect, it decays the sender channels back to the open state.

func (*ActParams) SenderGSpiked added in v1.3.1

func (ac *ActParams) SenderGSpiked(nrn *Neuron)

SenderGSpiked sets Se, Si, Snmda to 0 when the neuron spikes, if doing depletion

func (*ActParams) Update

func (ac *ActParams) Update()

Update must be called after any changes to parameters

func (*ActParams) VmFmG

func (ac *ActParams) VmFmG(nrn *Neuron)

VmFmG computes membrane potential Vm from conductances Ge, Gi, and Gk.

func (*ActParams) VmFmInet added in v1.2.95

func (ac *ActParams) VmFmInet(vm, dt, inet float32) float32

VmFmInet computes new Vm value from inet, clamping range

func (*ActParams) VmInteg added in v1.2.96

func (ac *ActParams) VmInteg(vm, dt, ge, gl, gi, gk float32) (float32, float32)

VmInteg integrates Vm over VmSteps to obtain a more stable value. Returns the new Vm and inet values.

type AttnParams added in v1.2.85

type AttnParams struct {
	On  bool    `desc:"is attentional modulation active?"`
	Min float32 `desc:"minimum act multiplier if attention is 0"`
}

AttnParams determine how the Attn modulates Ge

func (*AttnParams) Defaults added in v1.2.85

func (at *AttnParams) Defaults()

func (*AttnParams) ModVal added in v1.2.85

func (at *AttnParams) ModVal(val float32, attn float32) float32

ModVal returns the attn-modulated value -- attn must be between 0 and 1

func (*AttnParams) Update added in v1.2.85

func (at *AttnParams) Update()

type AxonLayer

type AxonLayer interface {
	emer.Layer

	// AsAxon returns this layer as a axon.Layer -- so that the AxonLayer
	// interface does not need to include accessors to all the basic stuff
	AsAxon() *Layer

	// InitWts initializes the weight values in the network, i.e., resetting learning
	// Also calls InitActs
	InitWts()

	// InitActAvg initializes the running-average activation values that drive learning.
	InitActAvg()

	// InitActs fully initializes activation state -- only called automatically during InitWts
	InitActs()

	// InitWtSym initializes weight symmetry -- higher layers copy weights from lower layers
	InitWtSym()

	// InitGScale computes the initial scaling factor for synaptic input conductances G,
	// stored in GScale.Scale, based on sending layer initial activation.
	InitGScale()

	// InitExt initializes external input state -- called prior to apply ext
	InitExt()

	// ApplyExt applies external input in the form of an etensor.Tensor
	// If the layer is a Target or Compare layer type, then it goes in Targ
	// otherwise it goes in Ext.
	ApplyExt(ext etensor.Tensor)

	// ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats
	// If the layer is a Target or Compare layer type, then it goes in Targ
	// otherwise it goes in Ext
	ApplyExt1D(ext []float64)

	// UpdateExtFlags updates the neuron flags for external input based on current
	// layer Type field -- call this if the Type has changed since the last
	// ApplyExt* method call.
	UpdateExtFlags()

	// IsTarget returns true if this layer is a Target layer.
	// By default, returns true for layers of Type == emer.Target
	// Other Target layers include the TRCLayer in deep predictive learning.
	// It is also used in SynScale to not apply it to target layers.
	// In both cases, Target layers are purely error-driven.
	IsTarget() bool

	// IsInput returns true if this layer is an Input layer.
	// By default, returns true for layers of Type == emer.Input
	// Used to prevent adapting of inhibition or TrgAvg values.
	IsInput() bool

	// NewState handles all initialization at start of new input pattern,
	// including computing Ge scaling from running average activation etc.
	// should already have presented the external input to the network at this point.
	NewState()

	// DecayState decays activation state by given proportion (default is ly.Act.Decay.Act)
	DecayState(decay float32)

	// SendSpike sends spike to receivers
	SendSpike(ltime *Time)

	// GFmInc integrates new synaptic conductances from increments sent during last SendGDelta
	GFmInc(ltime *Time)

	// AvgMaxGe computes the average and max Ge stats, used in inhibition
	AvgMaxGe(ltime *Time)

	// InhibFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools
	InhibFmGeAct(ltime *Time)

	// ActFmG computes rate-code activation from Ge, Gi, Gl conductances
	// and updates learning running-average activations from that Act
	ActFmG(ltime *Time)

	// PostAct does updates after activation (spiking) updated for all neurons,
	// including the running-average activation used in driving inhibition,
	// and synaptic-level calcium updates depending on spiking, NMDA
	PostAct(ltime *Time)

	// SynCa does Kinase learning based on Ca driven from pre-post spiking.
	// Updates Ca, CaM, CaP, CaD cascaded at longer time scales, with CaP
	// representing CaMKII LTP activity and CaD representing DAPK1 LTD activity.
	// Continuous variants do weight updates (DWt), while SynSpkTheta just updates Ca.
	SynCa(ltime *Time)

	// CyclePost is called after the standard Cycle update, as a separate
	// network layer loop.
	// This is reserved for any kind of special ad-hoc types that
	// need to do something special after Act is finally computed.
	// For example, sending a neuromodulatory signal such as dopamine.
	CyclePost(ltime *Time)

	// MinusPhase does updating after end of minus phase
	MinusPhase(ltime *Time)

	// PlusPhase does updating after end of plus phase
	PlusPhase(ltime *Time)

	// ActSt1 saves current activations into ActSt1
	ActSt1(ltime *Time)

	// ActSt2 saves current activations into ActSt2
	ActSt2(ltime *Time)

	// CorSimFmActs computes the correlation similarity
	// (centered cosine aka normalized dot product)
	// in activation state between minus and plus phases
	// (1 = identical, 0 = uncorrelated).
	CorSimFmActs()

	// DWt computes the weight change (learning) -- calls DWt method on sending projections
	DWt(ltime *Time)

	// WtFmDWt updates the weights from delta-weight changes.
	// Computed from receiver perspective, does SubMean.
	WtFmDWt(ltime *Time)

	// SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling,
	// GScale conductance scaling, SWt updating, and adapting inhibition
	SlowAdapt(ltime *Time)

	// SynFail updates synaptic weight failure only -- normally done as part of DWt
	// and WtFmDWt, but this call can be used during testing to update failing synapses.
	SynFail(ltime *Time)
}

AxonLayer defines the essential algorithmic API for Axon, at the layer level. These are the methods that the axon.Network calls on its layers at each step of processing. Other Layer types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.

All of the structural API is in emer.Layer, which this interface also inherits for convenience.
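
For example, a custom layer type can embed axon.Layer and override just one of these methods, inheriting the rest; this is only a sketch, with the type name and behavior illustrative:

// ModLayer inherits the full AxonLayer implementation from axon.Layer and
// overrides only CyclePost.
type ModLayer struct {
	axon.Layer
}

// CyclePost is the hook reserved for special updates after Act is computed,
// e.g., broadcasting a neuromodulatory signal (placeholder here).
func (ly *ModLayer) CyclePost(ltime *axon.Time) {
	// custom post-cycle computation would go here
}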

type AxonNetwork

type AxonNetwork interface {
	emer.Network

	// AsAxon returns this network as a axon.Network -- so that the
	// AxonNetwork interface does not need to include accessors
	// to all the basic stuff
	AsAxon() *Network

	// NewStateImpl handles all initialization at start of new input pattern, including computing
	// input scaling from running average activation etc.
	NewStateImpl()

	// CycleImpl runs one cycle of activation updating:
	// * Sends Ge increments from sending to receiving layers
	// * Average and Max Ge stats
	// * Inhibition based on Ge stats and Act Stats (computed at end of Cycle)
	// * Activation from Ge, Gi, and Gl
	// * Average and Max Act stats
	// This basic version doesn't use the time info, but more specialized types do, and we
	// want to keep a consistent API for end-user code.
	CycleImpl(ltime *Time)

	// CyclePostImpl is called after the standard Cycle update, and calls CyclePost
	// on Layers -- this is reserved for any kind of special ad-hoc types that
	// need to do something special after Act is finally computed.
	// For example, sending a neuromodulatory signal such as dopamine.
	CyclePostImpl(ltime *Time)

	// MinusPhaseImpl does updating after minus phase
	MinusPhaseImpl(ltime *Time)

	// PlusPhaseImpl does updating after plus phase
	PlusPhaseImpl(ltime *Time)

	// DWtImpl computes the weight change (learning) based on current
	// running-average activation values
	DWtImpl(ltime *Time)

	// WtFmDWtImpl updates the weights from delta-weight changes.
	// Also calls SynScale every Interval times
	WtFmDWtImpl(ltime *Time)

	// SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling,
	// GScale conductance scaling, and adapting inhibition
	SlowAdapt(ltime *Time)
}

AxonNetwork defines the essential algorithmic API for Axon, at the network level. These are the methods that the user calls in their Sim code:

* NewState
* Cycle
* MinusPhase, PlusPhase
* DWt
* WtFmDWt

Because we don't want to have to force the user to use the interface cast in calling these methods, we provide Impl versions here that are the implementations, which the user-facing methods call.

Typically most changes in algorithm can be accomplished directly in the Layer or Prjn level, but sometimes (e.g., in deep) additional full-network passes are required.

All of the structural API is in emer.Network, which this interface also inherits for convenience.
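
As a sketch of how the user-facing calls line up over one theta cycle (the method names follow the list above; the cycle counts, CycleInc, and the exact signatures are assumptions):

// thetaCycle runs one illustrative 200-cycle theta cycle with a minus phase
// of 150 cycles and a plus phase of 50 cycles, then does learning.
func thetaCycle(net *axon.Network, ltime *axon.Time) {
	net.NewState() // start of a new input pattern
	for cyc := 0; cyc < 200; cyc++ {
		net.Cycle(ltime)
		ltime.CycleInc() // assumed method for advancing the cycle counter
		if cyc == 149 {
			net.MinusPhase(ltime) // end of minus phase
		}
	}
	net.PlusPhase(ltime)
	net.DWt(ltime)
	net.WtFmDWt(ltime)
}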

type AxonPrjn

type AxonPrjn interface {
	emer.Prjn

	// AsAxon returns this prjn as a axon.Prjn -- so that the AxonPrjn
	// interface does not need to include accessors to all the basic stuff.
	AsAxon() *Prjn

	// InitWts initializes weight values according to Learn.WtInit params
	InitWts()

	// InitWtSym initializes weight symmetry -- is given the reciprocal projection where
	// the Send and Recv layers are reversed.
	InitWtSym(rpj AxonPrjn)

	// InitGBufs initializes the per-projection synaptic conductance buffers.
	// This is not typically needed (called during InitWts, InitActs)
	// but can be called when needed.
	InitGBufs()

	// SendESpike sends an excitatory spike from sending neuron index si,
	// to add to buffer on receivers.
	// Sends proportion of synaptic channels that remain open as function
	// of time since last spike, for Ge and Gnmda channels
	SendESpike(si int, sge, snmda float32)

	// SendISpike sends an inhibitory spike from sending neuron index si,
	// to add to buffer on receivers.  For Prjn = Inhib type.
	// Sends proportion of synaptic channels that remain open as function
	// of time since last spike.
	SendISpike(si int, sgi float32)

	// RecvGInc increments the receiver's synaptic conductances from those of all the projections.
	RecvGInc(ltime *Time)

	// SendSynCa updates synaptic calcium based on spiking, for SynSpkTheta mode.
	// Optimized version only updates at point of spiking.
	// This pass goes through in sending order, filtering on sending spike.
	SendSynCa(ltime *Time)

	// RecvSynCa updates synaptic calcium based on spiking, for SynSpkTheta mode.
	// Optimized version only updates at point of spiking.
	// This pass goes through in recv order, filtering on recv spike.
	RecvSynCa(ltime *Time)

	// SynCaCont does Kinase learning based on Ca driven from pre-post spiking,
	// for SynSpkCont and SynNMDACont learning variants.
	// Updates Ca, CaM, CaP, CaD cascaded at longer time scales, with CaP
	// representing CaMKII LTP activity and CaD representing DAPK1 LTD activity.
	// Within the window of elevated synaptic Ca, CaP - CaD computes a
	// temporary DWt (TDWt) reflecting the balance of CaMKII vs. DAPK1 binding
	// at the NMDA N2B site.  When the synaptic activity has fallen from a
	// local peak (CaDMax) by a threshold amount (CaDMaxPct) then the
	// last TDWt value converts to an actual synaptic change: DWt
	SynCaCont(ltime *Time)

	// DWt computes the weight change (learning) -- on sending projections.
	DWt(ltime *Time)

	// WtFmDWt updates the synaptic weight values from delta-weight changes -- on sending projections
	WtFmDWt(ltime *Time)

	// SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling,
	// GScale conductance scaling, and adapting inhibition
	SlowAdapt(ltime *Time)

	// SynFail updates synaptic weight failure only -- normally done as part of DWt
	// and WtFmDWt, but this call can be used during testing to update failing synapses.
	SynFail(ltime *Time)
}

AxonPrjn defines the essential algorithmic API for Axon, at the projection level. These are the methods that the axon.Layer calls on its prjns at each step of processing. Other Prjn types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.

All of the structural API is in emer.Prjn, which this interface also inherits for convenience.

type ClampParams

type ClampParams struct {
	Ge     float32 `def:"0.6,1" desc:"amount of Ge driven for clamping -- generally use 0.6 for Target layers, 1.0 for Input layers"`
	Add    bool    `` /* 207-byte string literal not displayed */
	ErrThr float32 `def:"0.5" desc:"threshold on neuron Act activity to count as active for computing error relative to target in PctErr method"`
}

ClampParams specify how external inputs drive excitatory conductances (like a current clamp) -- either adds or overwrites existing conductances. Noise is added in either case.

func (*ClampParams) Defaults

func (cp *ClampParams) Defaults()

func (*ClampParams) Update

func (cp *ClampParams) Update()

type CorSimStats added in v1.3.35

type CorSimStats struct {
	Cor float32 `` /* 203-byte string literal not displayed */
	Avg float32 `` /* 137-byte string literal not displayed */
	Var float32 `` /* 139-byte string literal not displayed */
}

CorSimStats holds correlation similarity (centered cosine aka normalized dot product) statistics at the layer level

func (*CorSimStats) Init added in v1.3.35

func (cd *CorSimStats) Init()

type DecayParams added in v1.2.59

type DecayParams struct {
	Act   float32 `` /* 391-byte string literal not displayed */
	Glong float32 `` /* 332-byte string literal not displayed */
	KNa   float32 `` /* 149-byte string literal not displayed */
}

DecayParams control the decay of activation state in the DecayState function called in NewState when a new state is to be processed.

func (*DecayParams) Defaults added in v1.2.59

func (ai *DecayParams) Defaults()

func (*DecayParams) Update added in v1.2.59

func (ai *DecayParams) Update()

type DendParams added in v1.2.95

type DendParams struct {
	GbarExp      float32 `` /* 221-byte string literal not displayed */
	GbarR        float32 `` /* 150-byte string literal not displayed */
	SeiDeplete   bool    `` /* 266-byte string literal not displayed */
	SnmdaDeplete bool    `` /* 316-byte string literal not displayed */
}

DendParams are the parameters for updating dendrite-specific dynamics

func (*DendParams) Defaults added in v1.2.95

func (dp *DendParams) Defaults()

func (*DendParams) Update added in v1.2.95

func (dp *DendParams) Update()

type DtParams

type DtParams struct {
	Integ      float32 `` /* 649-byte string literal not displayed */
	VmTau      float32 `` /* 328-byte string literal not displayed */
	VmDendTau  float32 `` /* 335-byte string literal not displayed */
	VmSteps    int     `` /* 223-byte string literal not displayed */
	GeTau      float32 `def:"5" min:"1" desc:"time constant for decay of excitatory AMPA receptor conductance."`
	GiTau      float32 `def:"7" min:"1" desc:"time constant for decay of inhibitory GABAa receptor conductance."`
	IntTau     float32 `` /* 393-byte string literal not displayed */
	LongAvgTau float32 `` /* 357-byte string literal not displayed */

	VmDt      float32 `view:"-" json:"-" xml:"-" desc:"nominal rate = Integ / tau"`
	VmDendDt  float32 `view:"-" json:"-" xml:"-" desc:"nominal rate = Integ / tau"`
	DtStep    float32 `view:"-" json:"-" xml:"-" desc:"1 / VmSteps"`
	GeDt      float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"`
	GiDt      float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"`
	IntDt     float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"`
	LongAvgDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"`
}

DtParams are time and rate constants for temporal derivatives in Axon (Vm, G)

func (*DtParams) AvgVarUpdt added in v1.2.45

func (dp *DtParams) AvgVarUpdt(avg, vr *float32, val float32)

AvgVarUpdt updates the average and variance from current value, using LongAvgDt

func (*DtParams) Defaults

func (dp *DtParams) Defaults()

func (*DtParams) GeSynFmRaw added in v1.2.97

func (dp *DtParams) GeSynFmRaw(geRaw float32, geSyn *float32, min float32)

GeSynFmRaw updates GeSyn from raw input, decaying with time constant, back to min baseline value

func (*DtParams) GiSynFmRaw added in v1.2.97

func (dp *DtParams) GiSynFmRaw(giRaw float32, giSyn *float32, min float32)

GiSynFmRaw updates GiSyn from raw input, decaying with time constant, back to min baseline value

func (*DtParams) Update

func (dp *DtParams) Update()

type GScaleVals added in v1.2.37

type GScaleVals struct {
	Scale     float32 `` /* 240-byte string literal not displayed */
	Orig      float32 `inactive:"+" desc:"original scaling factor computed based on initial layer activity, without any subsequent adaptation"`
	Rel       float32 `` /* 159-byte string literal not displayed */
	AvgMaxRel float32 `inactive:"+" desc:"actual relative contribution of this projection based on AvgMax values"`
	Avg       float32 `inactive:"+" desc:"average G value on this trial"`
	Max       float32 `inactive:"+" desc:"maximum G value on this trial"`
	AvgAvg    float32 `inactive:"+" desc:"running average of the Avg, integrated at ly.Act.Dt.LongAvgTau"`
	AvgMax    float32 `inactive:"+" desc:"running average of the Max, integrated at ly.Act.Dt.LongAvgTau -- used for computing AvgMaxRel"`
}

GScaleVals holds the conductance scaling and associated values needed for adapting scale

func (*GScaleVals) Init added in v1.2.37

func (gs *GScaleVals) Init()

Init completes the initialization of values based on initially computed ones

type HebbPrjn added in v1.2.42

type HebbPrjn struct {
	Prjn            // access as .Prjn
	IncGain float32 `desc:"gain factor on increases relative to decreases -- lower = lower overall weights"`
}

HebbPrjn is a simple hebbian learning projection, using the CPCA Hebbian rule

func (*HebbPrjn) DWt added in v1.2.42

func (pj *HebbPrjn) DWt(ltime *Time)

DWt computes the hebbian weight change

func (*HebbPrjn) Defaults added in v1.2.42

func (pj *HebbPrjn) Defaults()

func (*HebbPrjn) UpdateParams added in v1.2.42

func (pj *HebbPrjn) UpdateParams()

type InhibMiscParams added in v1.2.89

type InhibMiscParams struct {
	AvgTau   float32 `` /* 134-byte string literal not displayed */
	GiSynThr float32 `` /* 168-byte string literal not displayed */

	AvgDt float32 `inactive:"+" view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"`
}

InhibMiscParams defines parameters for average activation value in pool that drives feedback inhibition in the FFFB inhibition function.

func (*InhibMiscParams) AvgAct added in v1.2.89

func (fb *InhibMiscParams) AvgAct(avg *float32, act float32)

AvgAct updates the average activation from new average act

func (*InhibMiscParams) Defaults added in v1.2.89

func (fb *InhibMiscParams) Defaults()

func (*InhibMiscParams) GiSyn added in v1.2.89

func (fb *InhibMiscParams) GiSyn(gisyn float32) float32

GiSyn computes the effective GiSyn value relative to the threshold

func (*InhibMiscParams) Update added in v1.2.89

func (fb *InhibMiscParams) Update()

type InhibParams

type InhibParams struct {
	Inhib  InhibMiscParams `view:"inline" desc:"misc inhibition computation parameters, including feedback activation "`
	Layer  fffb.Params     `` /* 128-byte string literal not displayed */
	Pool   fffb.Params     `view:"inline" desc:"inhibition across sub-pools of units, for layers with 4D shape"`
	Topo   TopoInhibParams `` /* 136-byte string literal not displayed */
	Self   SelfInhibParams `` /* 161-byte string literal not displayed */
	ActAvg ActAvgParams    `` /* 173-byte string literal not displayed */
}

axon.InhibParams contains all the inhibition computation params and functions for basic Axon. This is included in axon.Layer to support computation. It also includes other misc layer-level params, such as the running-average activation in the layer, which is used for Ge rescaling and potentially for adapting inhibition over time.

func (*InhibParams) Defaults

func (ip *InhibParams) Defaults()

func (*InhibParams) Update

func (ip *InhibParams) Update()

type LayFunChan

type LayFunChan chan func(ly AxonLayer)

LayFunChan is a channel that runs AxonLayer functions

type Layer

type Layer struct {
	LayerStru
	Act     ActParams       `view:"add-fields" desc:"Activation parameters and methods for computing activations"`
	Inhib   InhibParams     `view:"add-fields" desc:"Inhibition parameters and methods for computing layer-level inhibition"`
	Learn   LearnNeurParams `view:"add-fields" desc:"Learning parameters and methods that operate at the neuron level"`
	Neurons []Neuron        `` /* 133-byte string literal not displayed */
	Pools   []Pool          `` /* 234-byte string literal not displayed */
	ActAvg  ActAvgVals      `view:"inline" desc:"running-average activation levels used for Ge scaling and adaptive inhibition"`
	CorSim  CorSimStats     `desc:"correlation (centered cosine aka normalized dot product) similarity between ActM, ActP states"`
}

axon.Layer implements the basic Axon spiking activation function, and manages learning in the projections.

func (*Layer) ActFmG

func (ly *Layer) ActFmG(ltime *Time)

ActFmG computes rate-code activation from Ge, Gi, Gl conductances and updates learning running-average activations from that Act

func (*Layer) ActSt1 added in v1.2.63

func (ly *Layer) ActSt1(ltime *Time)

ActSt1 saves current activation state in ActSt1 variables (using CaP)

func (*Layer) ActSt2 added in v1.2.63

func (ly *Layer) ActSt2(ltime *Time)

ActSt2 saves current activation state in ActSt2 variables (using CaP)

func (*Layer) AdaptInhib added in v1.2.37

func (ly *Layer) AdaptInhib(ltime *Time)

AdaptInhib adapts inhibition

func (*Layer) AllParams

func (ly *Layer) AllParams() string

AllParams returns a listing of all parameters in the Layer

func (*Layer) ApplyExt

func (ly *Layer) ApplyExt(ext etensor.Tensor)

ApplyExt applies external input in the form of an etensor.Float32. If the dimensionality of the tensor matches that of the layer, and is 2D or 4D, then each dimension is iterated separately, so any mismatch preserves dimensional structure. Otherwise, the flat 1D view of the tensor is used. If the layer is a Target or Compare layer type, then it goes in Targ, otherwise it goes in Ext.

func (*Layer) ApplyExt1D

func (ly *Layer) ApplyExt1D(ext []float64)

ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats. If the layer is a Target or Compare layer type, then it goes in Targ, otherwise it goes in Ext.

func (*Layer) ApplyExt1D32

func (ly *Layer) ApplyExt1D32(ext []float32)

ApplyExt1D32 applies external input in the form of a flat 1-dimensional slice of float32s. If the layer is a Target or Compare layer type, then it goes in Targ, otherwise it goes in Ext.

func (*Layer) ApplyExt1DTsr

func (ly *Layer) ApplyExt1DTsr(ext etensor.Tensor)

ApplyExt1DTsr applies external input using a 1D flat interface into the tensor. If the layer is a Target or Compare layer type, then it goes in Targ, otherwise it goes in Ext.

func (*Layer) ApplyExt2D

func (ly *Layer) ApplyExt2D(ext etensor.Tensor)

ApplyExt2D applies 2D tensor external input

func (*Layer) ApplyExt2Dto4D

func (ly *Layer) ApplyExt2Dto4D(ext etensor.Tensor)

ApplyExt2Dto4D applies 2D tensor external input to a 4D layer

func (*Layer) ApplyExt4D

func (ly *Layer) ApplyExt4D(ext etensor.Tensor)

ApplyExt4D applies 4D tensor external input

func (*Layer) ApplyExtFlags

func (ly *Layer) ApplyExtFlags() (clrmsk, setmsk int32, toTarg bool)

ApplyExtFlags gets the clear mask and set mask for updating neuron flags based on layer type, and whether input should be applied to Targ (else Ext)

func (*Layer) AsAxon

func (ly *Layer) AsAxon() *Layer

AsAxon returns this layer as an axon.Layer -- all derived layers must redefine this to return the base Layer type, so that the AxonLayer interface does not need to include accessors to all the basic stuff.

func (*Layer) AvgGeM added in v1.2.21

func (ly *Layer) AvgGeM(ltime *Time)

AvgGeM computes the average and max GeM stats

func (*Layer) AvgMaxAct

func (ly *Layer) AvgMaxAct(ltime *Time)

AvgMaxAct updates the running-average activation used in driving inhibition

func (*Layer) AvgMaxGe

func (ly *Layer) AvgMaxGe(ltime *Time)

AvgMaxGe computes the average and max Ge stats, used in inhibition

func (*Layer) Build

func (ly *Layer) Build() error

Build constructs the layer state, including calling Build on the projections

func (*Layer) BuildPools

func (ly *Layer) BuildPools(nu int) error

BuildPools builds the inhibitory pools structures -- nu = number of units in layer

func (*Layer) BuildPrjns

func (ly *Layer) BuildPrjns() error

BuildPrjns builds the projections, recv-side

func (*Layer) BuildSubPools

func (ly *Layer) BuildSubPools()

BuildSubPools initializes neuron start / end indexes for sub-pools

func (*Layer) ClearTargExt added in v1.2.65

func (ly *Layer) ClearTargExt()

ClearTargExt clears external inputs Ext that were set from target values Targ. This can be called to simulate alpha cycles within theta cycles, for example.

func (*Layer) CorSimFmActs added in v1.3.35

func (ly *Layer) CorSimFmActs()

CorSimFmActs computes the correlation similarity (centered cosine aka normalized dot product) in activation state between minus and plus phases.

func (*Layer) CostEst

func (ly *Layer) CostEst() (neur, syn, tot int)

CostEst returns the estimated computational cost associated with this layer, separated by neuron-level and synapse-level, in arbitrary units where the cost per synapse is 1. Neuron-level computation is more expensive but there are typically many fewer neurons, so in larger networks, synaptic costs tend to dominate. Neuron cost is estimated from TimeReport output for large networks.

func (*Layer) CyclePost

func (ly *Layer) CyclePost(ltime *Time)

CyclePost is called after the standard Cycle update, as a separate network layer loop. This is reserved for any kind of special ad-hoc types that need to do something special after Act is finally computed. For example, sending a neuromodulatory signal such as dopamine.

func (*Layer) DTrgAvgFmErr added in v1.2.32

func (ly *Layer) DTrgAvgFmErr()

DTrgAvgFmErr computes change in TrgAvg based on unit-wise error signal

func (*Layer) DTrgSpkCaPubMean added in v1.3.3

func (ly *Layer) DTrgSpkCaPubMean()

DTrgSpkCaPubMean subtracts the mean from DTrgAvg values. Called by TrgAvgFmD.

func (*Layer) DWt

func (ly *Layer) DWt(ltime *Time)

DWt computes the weight change (learning) -- calls DWt method on sending projections

func (*Layer) DecayState

func (ly *Layer) DecayState(decay float32)

DecayState decays activation state by given proportion (default is ly.Act.Decay.Act). This does *not* call InitGInc -- must call that separately at start of AlphaCyc

func (*Layer) DecayStatePool

func (ly *Layer) DecayStatePool(pool int, decay float32)

DecayStatePool decays activation state by given proportion in given sub-pool index (0 based)

func (*Layer) Defaults

func (ly *Layer) Defaults()

func (*Layer) GFmInc

func (ly *Layer) GFmInc(ltime *Time)

GFmInc integrates new synaptic conductances from increments sent during last Spike

func (*Layer) GFmIncNeur

func (ly *Layer) GFmIncNeur(ltime *Time, nrn *Neuron, geExt float32)

GFmIncNeur is the neuron-level code for GFmInc that integrates overall Ge, Gi values from their G*Raw accumulators. Takes an extra increment to add to geRaw

func (*Layer) GScaleAvgs added in v1.3.28

func (ly *Layer) GScaleAvgs(ltime *Time)

GScaleAvgs updates conductance scale averages

func (*Layer) HasPoolInhib added in v1.2.79

func (ly *Layer) HasPoolInhib() bool

HasPoolInhib returns true if the layer is using pool-level inhibition (implies 4D too). This is the proper check for using pool-level target average activations, for example.

func (*Layer) InhibFmGeAct

func (ly *Layer) InhibFmGeAct(ltime *Time)

InhibFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools

func (*Layer) InhibFmPool

func (ly *Layer) InhibFmPool(ltime *Time)

InhibFmPool computes inhibition Gi from Pool-level aggregated inhibition, including self and syn

func (*Layer) InitActAvg

func (ly *Layer) InitActAvg()

InitActAvg initializes the running-average activation values that drive learning, and the longer time-averaging values.

func (*Layer) InitActs

func (ly *Layer) InitActs()

InitActs fully initializes activation state -- only called automatically during InitWts

func (*Layer) InitExt

func (ly *Layer) InitExt()

InitExt initializes external input state -- called prior to apply ext

func (*Layer) InitGScale added in v1.2.37

func (ly *Layer) InitGScale()

InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.

func (*Layer) InitWtSym

func (ly *Layer) InitWtSym()

InitWtSym initializes the weight symmetry -- higher layers copy weights from lower layers

func (*Layer) InitWts

func (ly *Layer) InitWts()

InitWts initializes the weight values in the network, i.e., resetting learning. Also calls InitActs.

func (*Layer) IsInput added in v1.2.32

func (ly *Layer) IsInput() bool

IsInput returns true if this layer is an Input layer. By default, returns true for layers of Type == emer.Input Used to prevent adapting of inhibition or TrgAvg values.

func (*Layer) IsLearnTrgAvg added in v1.2.32

func (ly *Layer) IsLearnTrgAvg() bool

func (*Layer) IsTarget

func (ly *Layer) IsTarget() bool

IsTarget returns true if this layer is a Target layer. By default, returns true for layers of Type == emer.Target Other Target layers include the TRCLayer in deep predictive learning. It is used in SynScale to not apply it to target layers. In both cases, Target layers are purely error-driven.

func (*Layer) LesionNeurons

func (ly *Layer) LesionNeurons(prop float32) int

LesionNeurons lesions (sets the Off flag) for the given proportion (0-1) of neurons in the layer, and returns the number of neurons lesioned. Emits an error if prop > 1, as an indication that a percent might have been passed.

func (*Layer) LrateMod added in v1.2.60

func (ly *Layer) LrateMod(mod float32)

LrateMod sets the Lrate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LrateSched). Updates the effective learning rate factor accordingly.

func (*Layer) LrateSched added in v1.2.60

func (ly *Layer) LrateSched(sched float32)

LrateSched sets the schedule-based learning rate multiplier. See also LrateMod. Updates the effective learning rate factor accordingly.

func (*Layer) MinusPhase added in v1.2.63

func (ly *Layer) MinusPhase(ltime *Time)

MinusPhase does updating at end of the minus phase

func (*Layer) NewState added in v1.2.63

func (ly *Layer) NewState()

NewState handles all initialization at start of new input pattern. Should already have presented the external input to the network at this point. Does NOT call InitGScale()

func (*Layer) PctUnitErr

func (ly *Layer) PctUnitErr() float64

PctUnitErr returns the proportion of units where the thresholded value of Targ (Target or Compare types) or ActP does not match that of ActM. If Act > ly.Act.Clamp.ErrThr, effective activity = 1, else 0, which is robust to noisy activations.

func (*Layer) PlusPhase added in v1.2.63

func (ly *Layer) PlusPhase(ltime *Time)

PlusPhase does updating at end of the plus phase

func (*Layer) Pool

func (ly *Layer) Pool(idx int) *Pool

Pool returns pool at given index

func (*Layer) PoolInhibFmGeAct

func (ly *Layer) PoolInhibFmGeAct(ltime *Time)

PoolInhibFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools

func (*Layer) PoolTry

func (ly *Layer) PoolTry(idx int) (*Pool, error)

PoolTry returns pool at given index, returns error if index is out of range

func (*Layer) PostAct added in v1.3.20

func (ly *Layer) PostAct(ltime *Time)

PostAct does updates after activation (spiking) updated for all neurons, including the running-average activation used in driving inhibition, and synaptic-level calcium updates depending on spiking, NMDA

func (*Layer) ReadWtsJSON

func (ly *Layer) ReadWtsJSON(r io.Reader) error

ReadWtsJSON reads the weights from this layer from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one layer only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.

func (*Layer) RecvGInc

func (ly *Layer) RecvGInc(ltime *Time)

RecvGInc calls RecvGInc on receiving projections to collect Neuron-level G*Inc values. This is called by GFmInc overall method, but separated out for cases that need to do something different.

func (*Layer) RecvPrjnVals

func (ly *Layer) RecvPrjnVals(vals *[]float32, varNm string, sendLay emer.Layer, sendIdx1D int, prjnType string) error

RecvPrjnVals fills in values of given synapse variable name, for projection into given sending layer and neuron 1D index, for all receiving neurons in this layer, into given float32 slice (only resized if not big enough). prjnType is the string representation of the prjn type -- used if non-empty, useful when there are multiple projections between two layers. Returns error on invalid var name. If the receiving neuron is not connected to the given sending layer or neuron then the value is set to mat32.NaN(). Returns error on invalid var name or lack of recv prjn (vals always set to nan on prjn err).

func (*Layer) SendPrjnVals

func (ly *Layer) SendPrjnVals(vals *[]float32, varNm string, recvLay emer.Layer, recvIdx1D int, prjnType string) error

SendPrjnVals fills in values of given synapse variable name, for projection into given receiving layer and neuron 1D index, for all sending neurons in this layer, into given float32 slice (only resized if not big enough). prjnType is the string representation of the prjn type -- used if non-empty, useful when there are multiple projections between two layers. Returns error on invalid var name. If the sending neuron is not connected to the given receiving layer or neuron then the value is set to mat32.NaN(). Returns error on invalid var name or lack of recv prjn (vals always set to nan on prjn err).
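
For example, to read the "Wt" synaptic variable for every Hidden neuron's connection from Input neuron 0 -- a minimal sketch, with the layer names and the "Wt" variable chosen for illustration:

func prjnWts(net *axon.Network) []float32 {
	in := net.LayerByName("Input")
	hid := net.LayerByName("Hidden").(*axon.Layer)
	var vals []float32
	// empty prjnType matches whatever projection connects the two layers;
	// receivers not connected to Input neuron 0 are set to NaN
	if err := hid.RecvPrjnVals(&vals, "Wt", in, 0, ""); err != nil {
		return nil
	}
	return vals
}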

func (*Layer) SendSpike

func (ly *Layer) SendSpike(ltime *Time)

SendSpike sends spike to receivers

func (*Layer) SetWts

func (ly *Layer) SetWts(lw *weights.Layer) error

SetWts sets the weights for this layer from weights.Layer decoded values

func (*Layer) SlowAdapt added in v1.2.37

func (ly *Layer) SlowAdapt(ltime *Time)

SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling, and adapting inhibition

func (*Layer) SynCa added in v1.3.1

func (ly *Layer) SynCa(ltime *Time)

SynCa does cycle-level synaptic Ca updating for the Kinase learning mechanisms. Updates Ca, CaM, CaP, CaD cascaded at longer time scales, with CaP representing CaMKII LTP activity and CaD representing DAPK1 LTD activity. Continuous variants do weight updates (DWt), while SynSpkTheta just updates Ca.

func (*Layer) SynFail added in v1.2.92

func (ly *Layer) SynFail(ltime *Time)

SynFail updates synaptic weight failure only -- normally done as part of DWt and WtFmDWt, but this call can be used during testing to update failing synapses.

func (*Layer) SynScale added in v1.2.23

func (ly *Layer) SynScale()

SynScale performs synaptic scaling based on running average activation vs. targets

func (*Layer) TargToExt added in v1.2.65

func (ly *Layer) TargToExt()

TargToExt sets external input Ext from target values Targ. This is done at the end of MinusPhase to allow targets to drive activity in the plus phase. It can be called separately to simulate alpha cycles within theta cycles, for example.
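
A minimal sketch of simulating an alpha cycle within a theta cycle at the network level, assuming axon.Time provides a CycleInc counter-advance helper and using arbitrary cycle counts:

func alphaInTheta(net *axon.Network, ltime *axon.Time) {
	net.NewState()
	for cyc := 0; cyc < 50; cyc++ { // minus-phase-like period
		net.Cycle(ltime)
		ltime.CycleInc() // assumed helper that advances the cycle counter
	}
	net.MinusPhase(ltime)
	net.TargToExt() // copy Targ -> Ext so targets drive activity
	for cyc := 0; cyc < 50; cyc++ { // plus-phase-like period
		net.Cycle(ltime)
		ltime.CycleInc()
	}
	net.PlusPhase(ltime)
	net.ClearTargExt() // remove the target-driven external input again
}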

func (*Layer) TopoGi added in v1.2.85

func (ly *Layer) TopoGi(ltime *Time)

TopoGi computes topographic Gi inhibition

func (*Layer) TrgAvgFmD added in v1.2.32

func (ly *Layer) TrgAvgFmD()

TrgAvgFmD updates TrgAvg from DTrgAvg

func (*Layer) UnLesionNeurons

func (ly *Layer) UnLesionNeurons()

UnLesionNeurons unlesions (clears the Off flag) for all neurons in the layer

func (*Layer) UnitVal

func (ly *Layer) UnitVal(varNm string, idx []int) float32

UnitVal returns value of given variable name on given unit, using shape-based dimensional index

func (*Layer) UnitVal1D

func (ly *Layer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*Layer) UnitVals

func (ly *Layer) UnitVals(vals *[]float32, varNm string) error

UnitVals fills in values of given variable name on unit, for each unit in the layer, into given float32 slice (only resized if not big enough). Returns error on invalid var name.

func (*Layer) UnitValsRepTensor added in v1.3.6

func (ly *Layer) UnitValsRepTensor(tsr etensor.Tensor, varNm string) error

UnitValsRepTensor fills in values of given variable name on unit for a smaller subset of representative units in the layer, into given tensor. This is used for computationally intensive stats or displays that work much better with a smaller number of units. The set of representative units are defined by SetRepIdxs -- all units are used if no such subset has been defined. If tensor is not already big enough to hold the values, it is set to a 1D shape to hold all the values if subset is defined, otherwise it calls UnitValsTensor and is identical to that. Returns error on invalid var name.
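
A minimal sketch of recording from a representative subset of units, assuming the etensor package is imported and using arbitrary layer name and unit indexes:

func repActs(net *axon.Network) *etensor.Float32 {
	hid := net.LayerByName("Hidden").(*axon.Layer)
	hid.SetRepIdxs([]int{0, 10, 20, 30}) // 1D indexes of representative units
	tsr := &etensor.Float32{}
	// fills only the representative units; uses all units if none have been set
	if err := hid.UnitValsRepTensor(tsr, "Act"); err != nil {
		return nil
	}
	return tsr
}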

func (*Layer) UnitValsTensor

func (ly *Layer) UnitValsTensor(tsr etensor.Tensor, varNm string) error

UnitValsTensor returns values of given variable name on unit for each unit in the layer, as a float32 tensor in same shape as layer units.

func (*Layer) UnitVarIdx

func (ly *Layer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to *this layer's* UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*Layer) UnitVarNames

func (ly *Layer) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this layer

func (*Layer) UnitVarNum

func (ly *Layer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

func (*Layer) UnitVarProps

func (ly *Layer) UnitVarProps() map[string]string

UnitVarProps returns properties for variables

func (*Layer) UpdateExtFlags

func (ly *Layer) UpdateExtFlags()

UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.

func (*Layer) UpdateParams

func (ly *Layer) UpdateParams()

UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving projections of this layer

func (*Layer) VarRange

func (ly *Layer) VarRange(varNm string) (min, max float32, err error)

VarRange returns the min / max values for the given variable. TODO: support r. / s. projection values.

func (*Layer) WriteWtsJSON

func (ly *Layer) WriteWtsJSON(w io.Writer, depth int)

WriteWtsJSON writes the weights for this layer, from the receiver-side perspective, in a JSON text format. We build in the indentation logic to make it much faster and more efficient.

func (*Layer) WtFmDWt

func (ly *Layer) WtFmDWt(ltime *Time)

WtFmDWt updates the weights from delta-weight changes -- on the sending projections

type LayerStru

type LayerStru struct {
	AxonLay  AxonLayer      `` /* 297-byte string literal not displayed */
	Network  emer.Network   `` /* 141-byte string literal not displayed */
	Nm       string         `` /* 151-byte string literal not displayed */
	Cls      string         `desc:"Class is for applying parameter styles, can be space separated multple tags"`
	Off      bool           `desc:"inactivate this layer -- allows for easy experimentation"`
	Shp      etensor.Shape  `` /* 219-byte string literal not displayed */
	Typ      emer.LayerType `` /* 161-byte string literal not displayed */
	Thr      int            `` /* 216-byte string literal not displayed */
	Rel      relpos.Rel     `view:"inline" desc:"Spatial relationship to other layer, determines positioning"`
	Ps       mat32.Vec3     `` /* 154-byte string literal not displayed */
	Idx      int            `` /* 256-byte string literal not displayed */
	RepIxs   []int          `desc:"indexes of representative units in the layer, for computationally expensive stats or displays"`
	RcvPrjns emer.Prjns     `desc:"list of receiving projections into this layer from other layers"`
	SndPrjns emer.Prjns     `desc:"list of sending projections from this layer to other layers"`
}

axon.LayerStru manages the structural elements of the layer, which are common to any Layer type

func (*LayerStru) ApplyParams

func (ls *LayerStru) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)

ApplyParams applies the given parameter style Sheet to this layer and its recv projections. Calls UpdateParams on anything set to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and error if there were any errors.
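
A minimal sketch of applying a parameter style to a single layer; the params.Sel and params.Params construction is an assumption about the emergent params package, and the parameter path and value are illustrative only:

func applyGi(ly *axon.Layer) error {
	sheet := params.Sheet{
		{Sel: "Layer", Desc: "raise layer-level inhibition",
			Params: params.Params{
				"Layer.Inhib.Layer.Gi": "1.2", // assumed parameter path for illustration
			}},
	}
	_, err := ly.ApplyParams(&sheet, true) // setMsg=true prints each parameter set
	return err
}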

func (*LayerStru) Class

func (ls *LayerStru) Class() string

func (*LayerStru) Config

func (ls *LayerStru) Config(shape []int, typ emer.LayerType)

Config configures the basic properties of the layer

func (*LayerStru) Idx4DFrom2D

func (ls *LayerStru) Idx4DFrom2D(x, y int) ([]int, bool)

func (*LayerStru) Index

func (ls *LayerStru) Index() int

func (*LayerStru) InitName

func (ls *LayerStru) InitName(lay emer.Layer, name string, net emer.Network)

InitName MUST be called to initialize the layer's pointer to itself as an emer.Layer which enables the proper interface methods to be called. Also sets the name, and the parent network that this layer belongs to (which layers may want to retain).

func (*LayerStru) Is2D

func (ls *LayerStru) Is2D() bool

func (*LayerStru) Is4D

func (ls *LayerStru) Is4D() bool

func (*LayerStru) IsOff

func (ls *LayerStru) IsOff() bool

func (*LayerStru) Label

func (ls *LayerStru) Label() string

func (*LayerStru) NPools

func (ls *LayerStru) NPools() int

NPools returns the number of unit sub-pools according to the shape parameters. Currently supported for a 4D shape, where the unit pools are the first 2 (Y, X) dims, and the units within the pools are the last 2 (Y, X) dims.

func (*LayerStru) NRecvPrjns

func (ls *LayerStru) NRecvPrjns() int

func (*LayerStru) NSendPrjns

func (ls *LayerStru) NSendPrjns() int

func (*LayerStru) Name

func (ls *LayerStru) Name() string

func (*LayerStru) NonDefaultParams

func (ls *LayerStru) NonDefaultParams() string

NonDefaultParams returns a listing of all parameters in the Layer that are not at their default values -- useful for setting param styles etc.

func (*LayerStru) Pos

func (ls *LayerStru) Pos() mat32.Vec3

func (*LayerStru) RecipToSendPrjn

func (ls *LayerStru) RecipToSendPrjn(spj emer.Prjn) (emer.Prjn, bool)

RecipToSendPrjn finds the reciprocal projection relative to the given sending projection found within the SendPrjns of this layer. This is then a recv prjn within this layer:

S=A -> R=B recip: R=A <- S=B -- ly = A -- we are the sender of srj and recv of rpj.

Returns false if not found.

func (*LayerStru) RecvPrjn

func (ls *LayerStru) RecvPrjn(idx int) emer.Prjn

func (*LayerStru) RecvPrjns

func (ls *LayerStru) RecvPrjns() *emer.Prjns

func (*LayerStru) RelPos

func (ls *LayerStru) RelPos() relpos.Rel

func (*LayerStru) RepIdxs added in v1.3.6

func (ls *LayerStru) RepIdxs() []int

func (*LayerStru) SendPrjn

func (ls *LayerStru) SendPrjn(idx int) emer.Prjn

func (*LayerStru) SendPrjns

func (ls *LayerStru) SendPrjns() *emer.Prjns

func (*LayerStru) SetClass

func (ls *LayerStru) SetClass(cls string)

func (*LayerStru) SetIndex

func (ls *LayerStru) SetIndex(idx int)

func (*LayerStru) SetName

func (ls *LayerStru) SetName(nm string)

func (*LayerStru) SetOff

func (ls *LayerStru) SetOff(off bool)

func (*LayerStru) SetPos

func (ls *LayerStru) SetPos(pos mat32.Vec3)

func (*LayerStru) SetRelPos

func (ls *LayerStru) SetRelPos(rel relpos.Rel)

func (*LayerStru) SetRepIdxs added in v1.3.6

func (ls *LayerStru) SetRepIdxs(idxs []int)

func (*LayerStru) SetShape

func (ls *LayerStru) SetShape(shape []int)

SetShape sets the layer shape and also uses default dim names

func (*LayerStru) SetThread

func (ls *LayerStru) SetThread(thr int)

func (*LayerStru) SetType

func (ls *LayerStru) SetType(typ emer.LayerType)

func (*LayerStru) Shape

func (ls *LayerStru) Shape() *etensor.Shape

func (*LayerStru) Size

func (ls *LayerStru) Size() mat32.Vec2

func (*LayerStru) Thread

func (ls *LayerStru) Thread() int

func (*LayerStru) Type

func (ls *LayerStru) Type() emer.LayerType

func (*LayerStru) TypeName

func (ls *LayerStru) TypeName() string

type LearnNeurParams

type LearnNeurParams struct {
	NeurCa    NeurCaParams     `view:"inline" desc:"parameters for computing simple spike-driven calcium signaling variables"`
	LrnNMDA   chans.NMDAParams `view:"inline" desc:"Sending neuron NMDA channel parameters, for LrnNMDA values used in SynNMDACa learning rule"`
	TrgAvgAct TrgAvgActParams  `` /* 126-byte string literal not displayed */
	RLrate    RLrateParams     `` /* 184-byte string literal not displayed */
}

axon.LearnNeurParams manages learning-related parameters at the neuron-level. This is mainly the running average activations that drive learning

func (*LearnNeurParams) CaFmSpike added in v1.3.5

func (ln *LearnNeurParams) CaFmSpike(nrn *Neuron)

CaFmSpike updates the simple spike-based calcium signaling vals. Computed after new activation for current cycle is updated.

func (*LearnNeurParams) DecayNeurCa added in v1.3.16

func (ln *LearnNeurParams) DecayNeurCa(nrn *Neuron, decay float32)

DecayNeurCa decays neuron-level calcium by given factor (between trials)

func (*LearnNeurParams) Defaults

func (ln *LearnNeurParams) Defaults()

func (*LearnNeurParams) InitNeurCa added in v1.3.9

func (ln *LearnNeurParams) InitNeurCa(nrn *Neuron)

InitNeurCa initializes the running-average activation values that drive learning. Called by InitWts (at start of learning).

func (*LearnNeurParams) LrnNMDAFmRaw added in v1.3.11

func (ln *LearnNeurParams) LrnNMDAFmRaw(nrn *Neuron, geExt float32)

LrnNMDAFmRaw updates all the learning NMDA variables from GnmdaRaw and current Vm, Spiking

func (*LearnNeurParams) Update

func (ln *LearnNeurParams) Update()

type LearnSynParams

type LearnSynParams struct {
	Learn     bool             `desc:"enable learning for this projection"`
	Lrate     LrateParams      `desc:"learning rate parameters, supporting two levels of modulation on top of base learning rate."`
	KinaseCa  kinase.CaParams  `view:"inline" desc:"kinase calcium Ca integration parameters"`
	KinaseDWt kinase.DWtParams `view:"inline" desc:"kinase weight change parameters"`
	XCal      XCalParams       `view:"inline" desc:"parameters for the XCal learning rule"`
}

LearnSynParams manages learning-related parameters at the synapse-level.

func (*LearnSynParams) CHLdWt

func (ls *LearnSynParams) CHLdWt(suCaP, suCaD, ruCaP, ruCaD float32) float32

CHLdWt returns the error-driven weight change component for the temporally eXtended Contrastive Attractor Learning (XCAL), CHL version

func (*LearnSynParams) CaDMax added in v1.3.17

func (ls *LearnSynParams) CaDMax(sy *Synapse)

CaDMax updates CaDMax from CaD

func (*LearnSynParams) DWtFmTDWt added in v1.3.16

func (ls *LearnSynParams) DWtFmTDWt(sy *Synapse, lr float32) bool

DWtFmTDWt updates the DWt from the TDWt, checking the learning threshold using given aggregate learning rate. Returns true if updated DWt

func (*LearnSynParams) Defaults

func (ls *LearnSynParams) Defaults()

func (*LearnSynParams) KinaseTDWt added in v1.3.20

func (ls *LearnSynParams) KinaseTDWt(sy *Synapse)

KinaseTDWt updates the temporary weight change based on current Synapse Ca values.

func (*LearnSynParams) Update

func (ls *LearnSynParams) Update()

type LrateMod added in v1.2.60

type LrateMod struct {
	On    bool       `desc:"toggle use of this modulation factor"`
	Base  float32    `viewif:"On" min:"0" max:"1" desc:"baseline learning rate -- what you get for correct cases"`
	Range minmax.F32 `` /* 191-byte string literal not displayed */
}

LrateMod implements global learning rate modulation, based on a performance-based factor, for example error. Increasing levels of the factor = higher learning rate. This can be added to a Sim and called prior to DWt() to dynamically change lrate based on overall network performance.

func (*LrateMod) Defaults added in v1.2.60

func (lr *LrateMod) Defaults()

func (*LrateMod) LrateMod added in v1.2.60

func (lr *LrateMod) LrateMod(net *Network, fact float32) float32

LrateMod calls LrateMod on the given network, using the computed Mod factor based on the given normalized modulation factor (0 = no error = Base learning rate, 1 = maximum error). Returns the modulation factor applied.

func (*LrateMod) Mod added in v1.2.60

func (lr *LrateMod) Mod(fact float32) float32

Mod returns the learning rate modulation factor as a function of any kind of normalized modulation factor, e.g., an error measure. If fact <= Range.Min, returns Base. If fact >= Range.Max, returns 1. Otherwise, returns a proportional value between Base and 1.
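
A minimal sketch of using LrateMod from a Sim, called prior to DWt; the normalized error measure (0 = no error, 1 = maximum error) is assumed to be computed elsewhere in the Sim:

func lrateFromErr(lr *axon.LrateMod, net *axon.Network, normErr float32) float32 {
	if !lr.On {
		return 1
	}
	// maps normErr through Base..1 per Range and applies it to all Prjns via net.LrateMod
	return lr.LrateMod(net, normErr)
}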

func (*LrateMod) Update added in v1.2.60

func (lr *LrateMod) Update()

type LrateParams added in v1.2.60

type LrateParams struct {
	Base  float32 `` /* 199-byte string literal not displayed */
	Sched float32 `desc:"scheduled learning rate multiplier, simulating reduction in plasticity over aging"`
	Mod   float32 `desc:"dynamic learning rate modulation due to neuromodulatory or other such factors"`
	Eff   float32 `inactive:"+" desc:"effective actual learning rate multiplier used in computing DWt: Eff = Mod * Sched * Base"`
}

LrateParams manages learning rate parameters

func (*LrateParams) Defaults added in v1.2.60

func (ls *LrateParams) Defaults()

func (*LrateParams) Init added in v1.2.60

func (ls *LrateParams) Init()

Init initializes modulation values back to 1 and updates Eff

func (*LrateParams) Update added in v1.2.60

func (ls *LrateParams) Update()

type NMDAPrjn

type NMDAPrjn struct {
	Prjn // access as .Prjn
}

NMDAPrjn is a projection with NMDA maintenance channels. It marks a projection for special treatment in a MaintLayer which actually does the NMDA computations. Excitatory conductance is aggregated separately for this projection.

func (*NMDAPrjn) PrjnTypeName

func (pj *NMDAPrjn) PrjnTypeName() string

func (*NMDAPrjn) Type

func (pj *NMDAPrjn) Type() emer.PrjnType

func (*NMDAPrjn) UpdateParams

func (pj *NMDAPrjn) UpdateParams()

type Network

type Network struct {
	NetworkStru
	SlowInterval int `` /* 174-byte string literal not displayed */
	SlowCtr      int `inactive:"+" desc:"counter for how long it has been since last SlowAdapt step"`
}

axon.Network has parameters for running a basic rate-coded Axon network

func NewNetwork added in v1.2.94

func NewNetwork(name string) *Network

NewNetwork returns a new axon Network

func (*Network) ActFmG

func (nt *Network) ActFmG(ltime *Time)

ActFmG computes rate-code activation from Ge, Gi, Gl conductances

func (*Network) ActSt1 added in v1.2.63

func (nt *Network) ActSt1(ltime *Time)

ActSt1 saves current acts into ActSt1 (using SpkCaP)

func (*Network) ActSt2 added in v1.2.63

func (nt *Network) ActSt2(ltime *Time)

ActSt2 saves current acts into ActSt2 (using SpkCaP)

func (*Network) AsAxon

func (nt *Network) AsAxon() *Network

func (*Network) AvgMaxGe

func (nt *Network) AvgMaxGe(ltime *Time)

AvgMaxGe computes the average and max Ge stats, used in inhibition

func (*Network) ClearTargExt added in v1.2.65

func (nt *Network) ClearTargExt()

ClearTargExt clears external inputs Ext that were set from target values Targ. This can be called to simulate alpha cycles within theta cycles, for example.

func (*Network) CollectDWts

func (nt *Network) CollectDWts(dwts *[]float32) bool

CollectDWts writes all of the synaptic DWt values to the given dwts slice, allocating it if dwts is nil, in which case the method returns true so that the actual length of dwts can be passed next time around. Used for MPI sharing of weight changes across processors.

func (*Network) Cycle

func (nt *Network) Cycle(ltime *Time)

Cycle runs one cycle of activation updating: * Sends Ge increments from sending to receiving layers * Average and Max Ge stats * Inhibition based on Ge stats and Act stats (computed at end of Cycle) * Activation from Ge, Gi, and Gl * Average and Max Act stats. This basic version doesn't use the time info, but more specialized types do, and we want to keep a consistent API for end-user code.
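
A minimal sketch of one standard trial built around Cycle, assuming external inputs have already been applied to the input layers, that axon.Time provides a CycleInc counter-advance helper, and using typical (but not mandated) cycle counts:

func trialUpdate(net *axon.Network, ltime *axon.Time, train bool) {
	net.NewState()
	for cyc := 0; cyc < 150; cyc++ { // minus phase
		net.Cycle(ltime)
		ltime.CycleInc()
	}
	net.MinusPhase(ltime)
	for cyc := 0; cyc < 50; cyc++ { // plus phase
		net.Cycle(ltime)
		ltime.CycleInc()
	}
	net.PlusPhase(ltime)
	if train {
		net.DWt(ltime)     // compute weight changes
		net.WtFmDWt(ltime) // apply them; also calls SynScale every SlowInterval updates
	}
}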

func (*Network) CycleImpl

func (nt *Network) CycleImpl(ltime *Time)

CycleImpl runs one cycle of activation updating: * Sends Ge increments from sending to receiving layers * Average and Max Ge stats * Inhibition based on Ge stats and Act stats (computed at end of Cycle) * Activation from Ge, Gi, and Gl * Average and Max Act stats. This basic version doesn't use the time info, but more specialized types do, and we want to keep a consistent API for end-user code.

func (*Network) CyclePost

func (nt *Network) CyclePost(ltime *Time)

CyclePost is called after the standard Cycle update, and calls CyclePost on Layers -- this is reserved for any kind of special ad-hoc types that need to do something special after Act is finally computed. For example, sending a neuromodulatory signal such as dopamine.

func (*Network) CyclePostImpl

func (nt *Network) CyclePostImpl(ltime *Time)

CyclePostImpl is called after the standard Cycle update, and calls CyclePost on Layers -- this is reserved for any kind of special ad-hoc types that need to do something special after Act is finally computed. For example, sending a neuromodulatory signal such as dopamine.

func (*Network) DWt

func (nt *Network) DWt(ltime *Time)

DWt computes the weight change (learning) based on current running-average activation values

func (*Network) DWtImpl

func (nt *Network) DWtImpl(ltime *Time)

DWtImpl computes the weight change (learning) based on current running-average activation values

func (*Network) DecayState

func (nt *Network) DecayState(decay float32)

DecayState decays activation state by the given proportion, e.g., 1 = decay completely, and 0 = decay not at all. This is called automatically in NewState, but is available here for ad-hoc decay cases.

func (*Network) Defaults

func (nt *Network) Defaults()

Defaults sets all the default parameters for all layers and projections

func (*Network) InhibFmGeAct

func (nt *Network) InhibFmGeAct(ltime *Time)

InhibFmGeAct computes inhibition Gi from Ge and Act stats within relevant Pools

func (*Network) InitActs

func (nt *Network) InitActs()

InitActs fully initializes activation state -- not automatically called

func (*Network) InitExt

func (nt *Network) InitExt()

InitExt initializes external input state -- call prior to applying external inputs to layers

func (*Network) InitGScale added in v1.2.92

func (nt *Network) InitGScale()

InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.

func (*Network) InitTopoSWts added in v1.2.75

func (nt *Network) InitTopoSWts()

InitTopoSWts initializes SWt structural weight parameters from prjn types that support topographic weight patterns and have the relevant flags set, including prjn.PoolTile and prjn.Circle. Call before InitWts if using topographic weights.
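
A minimal sketch of the required ordering, so that weight initialization respects the topographic structural weights:

func topoInit(net *axon.Network) {
	net.InitTopoSWts() // only affects prjns with topographic patterns and flags set
	net.InitWts()
}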

func (*Network) InitWts

func (nt *Network) InitWts()

InitWts initializes synaptic weights and all other associated long-term state variables including running-average state values (e.g., layer running average activations etc)

func (*Network) LayersSetOff

func (nt *Network) LayersSetOff(off bool)

LayersSetOff sets the Off flag for all layers to given setting

func (*Network) LrateMod added in v1.2.60

func (nt *Network) LrateMod(mod float32)

LrateMod sets the Lrate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LrateSched). Updates the effective learning rate factor accordingly.

func (*Network) LrateSched added in v1.2.60

func (nt *Network) LrateSched(sched float32)

LrateSched sets the schedule-based learning rate multiplier. See also LrateMod. Updates the effective learning rate factor accordingly.
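
A minimal sketch of a learning rate schedule driven by the training epoch; the milestones and multipliers are assumptions, not package defaults:

func lrateSched(net *axon.Network, epoch int) {
	switch epoch {
	case 40:
		net.LrateSched(0.5) // halve the effective learning rate
	case 80:
		net.LrateSched(0.2)
	}
}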

func (*Network) MinusPhase added in v1.2.63

func (nt *Network) MinusPhase(ltime *Time)

MinusPhase does updating after end of minus phase

func (*Network) MinusPhaseImpl added in v1.2.63

func (nt *Network) MinusPhaseImpl(ltime *Time)

MinusPhaseImpl does updating after end of minus phase

func (*Network) NewLayer

func (nt *Network) NewLayer() emer.Layer

NewLayer returns new layer of proper type

func (*Network) NewPrjn

func (nt *Network) NewPrjn() emer.Prjn

NewPrjn returns new prjn of proper type

func (*Network) NewState added in v1.2.63

func (nt *Network) NewState()

NewState handles all initialization at start of new input pattern. Should already have presented the external input to the network at this point. Does NOT call InitGScale()

func (*Network) NewStateImpl added in v1.2.63

func (nt *Network) NewStateImpl()

NewStateImpl handles all initialization at start of new input state

func (*Network) PlusPhase added in v1.2.63

func (nt *Network) PlusPhase(ltime *Time)

PlusPhase does updating after end of plus phase

func (*Network) PlusPhaseImpl added in v1.2.63

func (nt *Network) PlusPhaseImpl(ltime *Time)

PlusPhaseImpl does updating after end of plus phase

func (*Network) PostAct added in v1.3.20

func (nt *Network) PostAct(ltime *Time)

PostAct does updates after activation (spiking) has been updated for all neurons, including the running-average activation used in driving inhibition, and synaptic-level calcium updates that depend on spiking and NMDA.

func (*Network) SendSpike

func (nt *Network) SendSpike(ltime *Time)

SendSpike sends change in activation since last sent, if above threshold, and integrates sent deltas into GeRaw and time-integrated Ge values.

func (*Network) SetDWts

func (nt *Network) SetDWts(dwts []float32, navg int)

SetDWts sets the DWt weight changes from the given array of floats, which must be the correct size. navg is the number of processors aggregated in these dwts -- some variables need to be averaged instead of summed (e.g., ActAvg).
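
A minimal sketch of data-parallel weight-change sharing with CollectDWts / SetDWts; sumAcrossProcs is a hypothetical stand-in for an MPI all-reduce (sum) across processors:

// sumAcrossProcs is a hypothetical placeholder for an MPI all-reduce (sum) over dwts.
func sumAcrossProcs(dwts []float32) {}

func mpiDWt(net *axon.Network, dwts *[]float32, nProcs int, ltime *axon.Time) {
	net.CollectDWts(dwts)      // gathers all synaptic DWts; allocates *dwts if nil
	sumAcrossProcs(*dwts)      // hypothetical: element-wise sum across all processors
	net.SetDWts(*dwts, nProcs) // navg = nProcs; some variables are averaged, not summed
	net.WtFmDWt(ltime)         // then apply the aggregated weight changes
}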

func (*Network) SizeReport

func (nt *Network) SizeReport() string

SizeReport returns a string reporting the size of each layer and projection in the network, and total memory footprint.

func (*Network) SlowAdapt added in v1.2.37

func (nt *Network) SlowAdapt(ltime *Time)

SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling, GScale conductance scaling, and adapting inhibition

func (*Network) SynFail added in v1.2.92

func (nt *Network) SynFail(ltime *Time)

SynFail updates synaptic failure

func (*Network) SynVarNames

func (nt *Network) SynVarNames() []string

SynVarNames returns the names of all the variables on the synapses in this network. Not all projections need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!

func (*Network) SynVarProps

func (nt *Network) SynVarProps() map[string]string

SynVarProps returns properties for variables

func (*Network) TargToExt added in v1.2.65

func (nt *Network) TargToExt()

TargToExt sets external input Ext from target values Targ. This is done at the end of MinusPhase to allow targets to drive activity in the plus phase. It can be called separately to simulate alpha cycles within theta cycles, for example.

func (*Network) ThreadAlloc

func (nt *Network) ThreadAlloc(nThread int) string

ThreadAlloc allocates layers to given number of threads, attempting to evenly divide computation. Returns report of thread allocations and estimated computational cost per thread.

func (*Network) ThreadReport

func (nt *Network) ThreadReport() string

ThreadReport returns report of thread allocations and estimated computational cost per thread.
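
A minimal sketch of allocating layers across threads and inspecting timing afterward; the thread count is an assumption:

func threadSetup(net *axon.Network) {
	report := net.ThreadAlloc(4) // divide layers across 4 threads by estimated cost
	_ = report                   // allocation report with estimated per-thread cost
	// ... run the model ...
	net.TimerReport() // per-function and per-thread timing
}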

func (*Network) UnLesionNeurons

func (nt *Network) UnLesionNeurons()

UnLesionNeurons unlesions neurons in all layers in the network. Provides a clean starting point for subsequent lesion experiments.

func (*Network) UnitVarNames

func (nt *Network) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this network. Not all layers need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!

func (*Network) UnitVarProps

func (nt *Network) UnitVarProps() map[string]string

UnitVarProps returns properties for variables

func (*Network) UpdateExtFlags

func (nt *Network) UpdateExtFlags()

UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.

func (*Network) UpdateParams

func (nt *Network) UpdateParams()

UpdateParams updates all the derived parameters if any have changed, for all layers and projections

func (*Network) WtFmDWt

func (nt *Network) WtFmDWt(ltime *Time)

WtFmDWt updates the weights from delta-weight changes. Also calls SynScale every Interval times

func (*Network) WtFmDWtImpl

func (nt *Network) WtFmDWtImpl(ltime *Time)

WtFmDWtImpl updates the weights from delta-weight changes.

type NetworkStru

type NetworkStru struct {
	EmerNet     emer.Network          `` /* 274-byte string literal not displayed */
	Nm          string                `desc:"overall name of network -- helps discriminate if there are multiple"`
	Layers      emer.Layers           `desc:"list of layers"`
	WtsFile     string                `desc:"filename of last weights file loaded or saved"`
	LayMap      map[string]emer.Layer `view:"-" desc:"map of name to layers -- layer names must be unique"`
	LayClassMap map[string][]string   `view:"-" desc:"map of layer classes -- made during Build"`
	MinPos      mat32.Vec3            `view:"-" desc:"minimum display position in network"`
	MaxPos      mat32.Vec3            `view:"-" desc:"maximum display position in network"`
	MetaData    map[string]string     `` /* 194-byte string literal not displayed */

	NThreads    int                    `` /* 203-byte string literal not displayed */
	LockThreads bool                   `` /* 165-byte string literal not displayed */
	ThrLay      [][]emer.Layer         `` /* 179-byte string literal not displayed */
	ThrChans    []LayFunChan           `view:"-" desc:"layer function channels, per thread"`
	ThrTimes    []timer.Time           `view:"-" desc:"timers for each thread, so you can see how evenly the workload is being distributed"`
	FunTimes    map[string]*timer.Time `view:"-" desc:"timers for each major function (step of processing)"`
	WaitGp      sync.WaitGroup         `view:"-" desc:"network-level wait group for synchronizing threaded layer calls"`
}

axon.NetworkStru holds the basic structural components of a network (layers)

func (*NetworkStru) AddLayer

func (nt *NetworkStru) AddLayer(name string, shape []int, typ emer.LayerType) emer.Layer

AddLayer adds a new layer with given name and shape to the network. 2D and 4D layer shapes are generally preferred but not essential -- see AddLayer2D and 4D for convenience methods for those. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each unit group having 4 rows (Y) of 5 (X) units.

func (*NetworkStru) AddLayer2D

func (nt *NetworkStru) AddLayer2D(name string, shapeY, shapeX int, typ emer.LayerType) emer.Layer

AddLayer2D adds a new layer with given name and 2D shape to the network. 2D and 4D layer shapes are generally preferred but not essential.

func (*NetworkStru) AddLayer4D

func (nt *NetworkStru) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ emer.LayerType) emer.Layer

AddLayer4D adds a new layer with given name and 4D shape to the network. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each pool having 4 rows (Y) of 5 (X) neurons.

func (*NetworkStru) AddLayerInit

func (nt *NetworkStru) AddLayerInit(ly emer.Layer, name string, shape []int, typ emer.LayerType)

AddLayerInit is implementation routine that takes a given layer and adds it to the network, and initializes and configures it properly.

func (*NetworkStru) AllParams

func (nt *NetworkStru) AllParams() string

AllParams returns a listing of all parameters in the Network.

func (*NetworkStru) AllPrjnScales added in v1.2.45

func (nt *NetworkStru) AllPrjnScales() string

AllPrjnScales returns a listing of all PrjnScale parameters in the Network in all Layers, Recv projections. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.

func (*NetworkStru) ApplyParams

func (nt *NetworkStru) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)

ApplyParams applies the given parameter style Sheet to layers and prjns in this network. Calls UpdateParams to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and error if there were any errors.

func (*NetworkStru) BidirConnectLayerNames

func (nt *NetworkStru) BidirConnectLayerNames(low, high string, pat prjn.Pattern) (lowlay, highlay emer.Layer, fwdpj, backpj emer.Prjn, err error)

BidirConnectLayerNames establishes bidirectional projections between two layers, referenced by name, with low = the lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkStru) BidirConnectLayers

func (nt *NetworkStru) BidirConnectLayers(low, high emer.Layer, pat prjn.Pattern) (fwdpj, backpj emer.Prjn)

BidirConnectLayers establishes bidirectional projections between two layers, with low = lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkStru) BidirConnectLayersPy

func (nt *NetworkStru) BidirConnectLayersPy(low, high emer.Layer, pat prjn.Pattern)

BidirConnectLayersPy establishes bidirectional projections between two layers, with low = lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build. Py = python version with no return vals.
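
A minimal sketch of constructing and building a small network; the layer names, sizes, and the use of prjn.NewFull with the emer layer and prjn type constants are illustrative choices, and the import paths reflect the v1 module layout:

package main

import (
	"github.com/emer/axon/axon"
	"github.com/emer/emergent/emer"
	"github.com/emer/emergent/prjn"
)

func main() {
	net := axon.NewNetwork("demo")
	inp := net.AddLayer2D("Input", 5, 5, emer.Input)
	// 4D: 2x2 pools of 4x4 neurons -- outer dims are pools, inner dims are units
	hid := net.AddLayer4D("Hidden", 2, 2, 4, 4, emer.Hidden)
	out := net.AddLayer2D("Output", 5, 5, emer.Target)

	full := prjn.NewFull()
	net.ConnectLayers(inp, hid, full, emer.Forward)
	net.BidirConnectLayers(hid, out, full) // Forward up to Output, Back down to Hidden

	net.Defaults()
	if err := net.Build(); err != nil { // allocates neurons, pools, and synapses
		panic(err)
	}
	net.InitWts()
}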

func (*NetworkStru) Bounds

func (nt *NetworkStru) Bounds() (min, max mat32.Vec3)

func (*NetworkStru) BoundsUpdt

func (nt *NetworkStru) BoundsUpdt()

BoundsUpdt updates the Min / Max display bounds for 3D display

func (*NetworkStru) Build

func (nt *NetworkStru) Build() error

Build constructs the layer and projection state based on the layer shapes and patterns of interconnectivity

func (*NetworkStru) BuildThreads

func (nt *NetworkStru) BuildThreads()

BuildThreads constructs the layer thread allocation based on Thread setting in the layers

func (*NetworkStru) ConnectLayerNames

func (nt *NetworkStru) ConnectLayerNames(send, recv string, pat prjn.Pattern, typ emer.PrjnType) (rlay, slay emer.Layer, pj emer.Prjn, err error)

ConnectLayerNames establishes a projection between two layers, referenced by name adding to the recv and send projection lists on each side of the connection. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkStru) ConnectLayers

func (nt *NetworkStru) ConnectLayers(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType) emer.Prjn

ConnectLayers establishes a projection between two layers, adding to the recv and send projection lists on each side of the connection. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkStru) ConnectLayersPrjn

func (nt *NetworkStru) ConnectLayersPrjn(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType, pj emer.Prjn) emer.Prjn

ConnectLayersPrjn makes connection using given projection between two layers, adding given prjn to the recv and send projection lists on each side of the connection. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkStru) DeleteAll added in v1.2.95

func (nt *NetworkStru) DeleteAll()

DeleteAll deletes all layers, prepares network for re-configuring and building

func (*NetworkStru) FunTimerStart

func (nt *NetworkStru) FunTimerStart(fun string)

FunTimerStart starts function timer for given function name -- ensures creation of timer

func (*NetworkStru) FunTimerStop

func (nt *NetworkStru) FunTimerStop(fun string)

FunTimerStop stops function timer -- timer must already exist

func (*NetworkStru) InitName

func (nt *NetworkStru) InitName(net emer.Network, name string)

InitName MUST be called to initialize the network's pointer to itself as an emer.Network which enables the proper interface methods to be called. Also sets the name.

func (*NetworkStru) Label

func (nt *NetworkStru) Label() string

func (*NetworkStru) LateralConnectLayer

func (nt *NetworkStru) LateralConnectLayer(lay emer.Layer, pat prjn.Pattern) emer.Prjn

LateralConnectLayer establishes a self-projection within given layer. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkStru) LateralConnectLayerPrjn

func (nt *NetworkStru) LateralConnectLayerPrjn(lay emer.Layer, pat prjn.Pattern, pj emer.Prjn) emer.Prjn

LateralConnectLayerPrjn makes lateral self-projection using given projection. Does not yet actually connect the units within the layers -- that requires Build.

func (*NetworkStru) Layer

func (nt *NetworkStru) Layer(idx int) emer.Layer

func (*NetworkStru) LayerByName

func (nt *NetworkStru) LayerByName(name string) emer.Layer

LayerByName returns a layer by looking it up by name in the layer map (nil if not found). Will create the layer map if it is nil or a different size than layers slice, but otherwise needs to be updated manually.

func (*NetworkStru) LayerByNameTry

func (nt *NetworkStru) LayerByNameTry(name string) (emer.Layer, error)

LayerByNameTry returns a layer by looking it up by name -- returns error message if layer is not found

func (*NetworkStru) LayersByClass added in v1.3.4

func (nt *NetworkStru) LayersByClass(classes ...string) []string

LayersByClass returns a list of layer names by given class(es). Lists are compiled when network Build() function called. The layer Type is always included as a Class, along with any other space-separated strings specified in Class for parameter styling, etc. If no classes are passed, all layer names in order are returned.
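
A minimal sketch of iterating over all layers of a given class after Build; the class here is the built-in "Hidden" type name, and the cast assumes the concrete layer type is *axon.Layer:

func hiddenLayers(net *axon.Network) []*axon.Layer {
	var lays []*axon.Layer
	for _, nm := range net.LayersByClass("Hidden") { // type names are always included
		lays = append(lays, net.LayerByName(nm).(*axon.Layer))
	}
	return lays
}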

func (*NetworkStru) Layout

func (nt *NetworkStru) Layout()

Layout computes the 3D layout of layers based on their relative position settings

func (*NetworkStru) MakeLayMap

func (nt *NetworkStru) MakeLayMap()

MakeLayMap updates layer map based on current layers

func (*NetworkStru) NLayers

func (nt *NetworkStru) NLayers() int

func (*NetworkStru) Name

func (nt *NetworkStru) Name() string

emer.Network interface methods:

func (*NetworkStru) NonDefaultParams

func (nt *NetworkStru) NonDefaultParams() string

NonDefaultParams returns a listing of all parameters in the Network that are not at their default values -- useful for setting param styles etc.

func (*NetworkStru) OpenWtsCpp

func (nt *NetworkStru) OpenWtsCpp(filename gi.FileName) error

OpenWtsCpp opens network weights (and any other state that adapts with learning) from old C++ emergent format. If filename has .gz extension, then file is gzip uncompressed.

func (*NetworkStru) OpenWtsJSON

func (nt *NetworkStru) OpenWtsJSON(filename gi.FileName) error

OpenWtsJSON opens network weights (and any other state that adapts with learning) from a JSON-formatted file. If filename has .gz extension, then file is gzip uncompressed.

func (*NetworkStru) ReadWtsCpp

func (nt *NetworkStru) ReadWtsCpp(r io.Reader) error

ReadWtsCpp reads the weights from old C++ emergent format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.

func (*NetworkStru) ReadWtsJSON

func (nt *NetworkStru) ReadWtsJSON(r io.Reader) error

ReadWtsJSON reads network weights from the receiver-side perspective in a JSON text format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.

func (*NetworkStru) SaveWtsJSON

func (nt *NetworkStru) SaveWtsJSON(filename gi.FileName) error

SaveWtsJSON saves network weights (and any other state that adapts with learning) to a JSON-formatted file. If filename has .gz extension, then file is gzip compressed.
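
A minimal sketch of checkpointing weights to a gzip-compressed JSON file and reading them back; the filename is an assumption, and gi.FileName is the goki/gi filename type used by these methods:

func saveLoadWts(net *axon.Network) error {
	if err := net.SaveWtsJSON(gi.FileName("trained.wts.gz")); err != nil { // .gz => gzip
		return err
	}
	return net.OpenWtsJSON(gi.FileName("trained.wts.gz"))
}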

func (*NetworkStru) SetWts

func (nt *NetworkStru) SetWts(nw *weights.Network) error

SetWts sets the weights for this network from weights.Network decoded values

func (*NetworkStru) StartThreads

func (nt *NetworkStru) StartThreads()

StartThreads starts up the computation threads, which monitor the channels for work

func (*NetworkStru) StdVertLayout

func (nt *NetworkStru) StdVertLayout()

StdVertLayout arranges layers in a standard vertical (z axis stack) layout, by setting the Rel settings

func (*NetworkStru) StopThreads

func (nt *NetworkStru) StopThreads()

StopThreads stops the computation threads

func (*NetworkStru) ThrLayFun

func (nt *NetworkStru) ThrLayFun(fun func(ly AxonLayer), funame string)

ThrLayFun calls function on layer, using threaded (go routine worker) computation if NThreads > 1 and otherwise just iterates over layers in the current thread.

func (*NetworkStru) ThrTimerReset

func (nt *NetworkStru) ThrTimerReset()

ThrTimerReset resets the per-thread timers

func (*NetworkStru) ThrWorker

func (nt *NetworkStru) ThrWorker(tt int)

ThrWorker is the worker function run by the worker threads

func (*NetworkStru) TimerReport

func (nt *NetworkStru) TimerReport()

TimerReport reports the amount of time spent in each function, and in each thread

func (*NetworkStru) VarRange

func (nt *NetworkStru) VarRange(varNm string) (min, max float32, err error)

VarRange returns the min / max values for the given variable. TODO: support r. / s. projection values.

func (*NetworkStru) WriteWtsJSON

func (nt *NetworkStru) WriteWtsJSON(w io.Writer) error

WriteWtsJSON writes the weights for this network, from the receiver-side perspective, in a JSON text format. We build in the indentation logic to make it much faster and more efficient.

type NeurCaParams added in v1.3.9

type NeurCaParams struct {
	SpikeG float32 `def:"8" desc:"gain multiplier on spike: how much spike drives CaM value"`
	SynTau float32 `` /* 150-byte string literal not displayed */
	MTau   float32 `` /* 168-byte string literal not displayed */
	PTau   float32 `` /* 314-byte string literal not displayed */
	DTau   float32 `` /* 299-byte string literal not displayed */
	CaMax  float32 `def:"200" desc:"for SynNMDASpk, maximum expected calcium level -- used for normalizing RCa, which then drives learning"`
	CaThr  float32 `def:"0.05" desc:"threshold for overall calcium, post normalization, reflecting Ca buffering"`
	Decay  bool    `def:"false" desc:"if true, decay Ca values along with other longer duration state variables at the ThetaCycle boundary"`

	SynDt   float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"`
	MDt     float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"`
	PDt     float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"`
	DDt     float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"`
	SynSpkG float32 `` /* 166-byte string literal not displayed */
}

NeurCaParams parameterizes the neuron-level spike-triggered calcium signals for the NeurSpkCa version of the Kinase learning rule. Spikes trigger decaying traces of Ca integrated in a cascading fashion at multiple time scales, with P = LTP / plus-phase and D = LTD / minus phase driving key subtraction for error-driven learning rule.

func (*NeurCaParams) CaFmSpike added in v1.3.9

func (np *NeurCaParams) CaFmSpike(nrn *Neuron)

CaFmSpike computes Ca* calcium signals based on current spike, for NeurSpkCa

func (*NeurCaParams) CaNorm added in v1.3.11

func (np *NeurCaParams) CaNorm(ca float32) float32

CaNorm normalizes and thresholds the calcium level according to CaMax, CaThr

func (*NeurCaParams) Defaults added in v1.3.9

func (np *NeurCaParams) Defaults()

func (*NeurCaParams) SynSpkCa added in v1.3.10

func (np *NeurCaParams) SynSpkCa(snCaSyn, rnCaSyn float32) float32

SynSpkCa computes synaptic spiking Ca from send and recv neuron CaSyn vals

func (*NeurCaParams) Update added in v1.3.9

func (np *NeurCaParams) Update()

type NeurFlags

type NeurFlags int32

NeurFlags are bit-flags encoding relevant binary state for neurons

const (
	// NeurOff flag indicates that this neuron has been turned off (i.e., lesioned)
	NeurOff NeurFlags = iota

	// NeurHasExt means the neuron has external input in its Ext field
	NeurHasExt

	// NeurHasTarg means the neuron has external target input in its Targ field
	NeurHasTarg

	// NeurHasCmpr means the neuron has external comparison input in its Targ field -- used for computing
	// comparison statistics but does not drive neural activity ever
	NeurHasCmpr

	NeurFlagsN
)

The neuron flags

func (*NeurFlags) FromString

func (i *NeurFlags) FromString(s string) error

func (NeurFlags) MarshalJSON

func (ev NeurFlags) MarshalJSON() ([]byte, error)

func (NeurFlags) String

func (i NeurFlags) String() string

func (*NeurFlags) UnmarshalJSON

func (ev *NeurFlags) UnmarshalJSON(b []byte) error

type Neuron

type Neuron struct {
	Flags   NeurFlags `desc:"bit flags for binary state variables"`
	SubPool int32     `` /* 214-byte string literal not displayed */
	Spike   float32   `desc:"whether neuron has spiked or not on this cycle (0 or 1)"`
	Act     float32   `` /* 283-byte string literal not displayed */
	GeSyn   float32   `desc:"total excitatory synaptic conductance -- the net excitatory input to the neuron -- does *not* include Gbar.E"`
	Ge      float32   `desc:"total excitatory conductance, including all forms of excitation (e.g., NMDA) -- does *not* include Gbar.E"`
	GiSyn   float32   `` /* 168-byte string literal not displayed */
	Gi      float32   `desc:"total inhibitory synaptic conductance -- the net inhibitory input to the neuron -- does *not* include Gbar.I"`
	Gk      float32   `` /* 148-byte string literal not displayed */
	Inet    float32   `desc:"net current produced by all channels -- drives update of Vm"`
	Vm      float32   `desc:"membrane potential -- integrates Inet current over time"`
	VmDend  float32   `desc:"dendritic membrane potential -- has a slower time constant, is not subject to the VmR reset after spiking"`

	Targ float32 `desc:"target value: drives learning to produce this activation value"`
	Ext  float32 `desc:"external input: drives activation of unit from outside influences (e.g., sensory input)"`

	CaSyn  float32 `desc:"spike-driven calcium trace for synapse-level Ca-driven learning rules: SynSpkCa"`
	CaM    float32 `` /* 191-byte string literal not displayed */
	CaP    float32 `` /* 165-byte string literal not displayed */
	CaD    float32 `` /* 164-byte string literal not displayed */
	PctDWt float32 `desc:"percent of synapses that had DWt updated on the current cycle, for sending-neuron"`

	ActInt float32 `` /* 421-byte string literal not displayed */
	ActSt1 float32 `` /* 235-byte string literal not displayed */
	ActSt2 float32 `` /* 236-byte string literal not displayed */
	ActM   float32 `desc:"the activation state at end of third quarter, which is the traditional posterior-cortical minus phase activation"`
	ActP   float32 `desc:"the activation state at end of fourth quarter, which is the traditional posterior-cortical plus_phase activation"`
	ActDif float32 `` /* 164-byte string literal not displayed */
	ActDel float32 `desc:"delta activation: change in Act from one cycle to next -- can be useful to track where changes are taking place"`
	ActPrv float32 `desc:"the final activation state at end of previous state"`
	RLrate float32 `` /* 142-byte string literal not displayed */

	ActAvg  float32 `` /* 194-byte string literal not displayed */
	AvgPct  float32 `` /* 158-byte string literal not displayed */
	TrgAvg  float32 `` /* 169-byte string literal not displayed */
	DTrgAvg float32 `` /* 164-byte string literal not displayed */
	AvgDif  float32 `` /* 173-byte string literal not displayed */
	Attn    float32 `desc:"Attentional modulation factor, which can be set by special layers such as the TRC -- multiplies Ge"`

	ISI    float32 `desc:"current inter-spike-interval -- counts up since last spike.  Starts at -1 when initialized."`
	ISIAvg float32 `` /* 320-byte string literal not displayed */

	GeNoiseP float32 `` /* 201-byte string literal not displayed */
	GeNoise  float32 `desc:"integrated noise excitatory conductance, added into Ge"`
	GiNoiseP float32 `` /* 201-byte string literal not displayed */
	GiNoise  float32 `desc:"integrated noise inhibitory conductance, added into Gi"`
	GiSelf   float32 `desc:"total amount of self-inhibition -- time-integrated to avoid oscillations"`

	Se    float32 `` /* 242-byte string literal not displayed */
	Si    float32 `` /* 157-byte string literal not displayed */
	Snmda float32 `` /* 289-byte string literal not displayed */

	GeM      float32 `` /* 165-byte string literal not displayed */
	GiM      float32 `` /* 168-byte string literal not displayed */
	GknaFast float32 `` /* 130-byte string literal not displayed */
	GknaMed  float32 `` /* 131-byte string literal not displayed */
	GknaSlow float32 `` /* 129-byte string literal not displayed */
	GgabaB   float32 `` /* 127-byte string literal not displayed */
	GABAB    float32 `desc:"GABA-B / GIRK activation -- time-integrated value with rise and decay time constants"`
	GABABx   float32 `desc:"GABA-B / GIRK internal drive variable -- gets the raw activation and decays"`
	Gvgcc    float32 `desc:"conductance (via Ca) for VGCC voltage gated calcium channels"`
	VgccM    float32 `desc:"activation gate of VGCC channels"`
	VgccH    float32 `desc:"inactivation gate of VGCC channels"`
	VgccCa   float32 `desc:"VGCC calcium flux"`
	Gak      float32 `desc:"conductance of A-type K potassium channels"`

	GnmdaSyn float32 `desc:"integrated NMDA recv synaptic current -- adds GnmdaRaw and decays with time constant"`
	Gnmda    float32 `` /* 137-byte string literal not displayed */
	RnmdaSyn float32 `` /* 133-byte string literal not displayed */
	RCa      float32 `` /* 207-byte string literal not displayed */
	SnmdaO   float32 `` /* 310-byte string literal not displayed */
	SnmdaI   float32 `` /* 255-byte string literal not displayed */

	GeRaw    float32 `` /* 157-byte string literal not displayed */
	GiRaw    float32 `` /* 158-byte string literal not displayed */
	GnmdaRaw float32 `` /* 128-byte string literal not displayed */
}

axon.Neuron holds all of the neuron (unit) level variables. This is the most basic version, without any optional features. All variables accessible via Unit interface must be float32 and start at the top, in contiguous order

func (*Neuron) ClearFlag

func (nrn *Neuron) ClearFlag(flag NeurFlags)

func (*Neuron) ClearMask

func (nrn *Neuron) ClearMask(mask int32)

func (*Neuron) HasFlag

func (nrn *Neuron) HasFlag(flag NeurFlags) bool

func (*Neuron) IsOff

func (nrn *Neuron) IsOff() bool

IsOff returns true if the neuron has been turned off (lesioned)

func (*Neuron) SetFlag

func (nrn *Neuron) SetFlag(flag NeurFlags)

func (*Neuron) SetMask

func (nrn *Neuron) SetMask(mask int32)

func (*Neuron) VarByIndex

func (nrn *Neuron) VarByIndex(idx int) float32

VarByIndex returns variable using index (0 = first variable in NeuronVars list)

func (*Neuron) VarByName

func (nrn *Neuron) VarByName(varNm string) (float32, error)

VarByName returns variable by name, or error

func (*Neuron) VarNames

func (nrn *Neuron) VarNames() []string

type Pool

type Pool struct {
	StIdx, EdIdx int             `desc:"starting and ending (exclusive) indexes for the list of neurons in this pool"`
	Inhib        fffb.Inhib      `desc:"FFFB inhibition computed values, including Ge and Act AvgMax which drive inhibition"`
	ActM         minmax.AvgMax32 `desc:"minus phase average and max Act activation values, for ActAvg updt"`
	ActP         minmax.AvgMax32 `desc:"plus phase average and max Act activation values, for ActAvg updt"`
	GeM          minmax.AvgMax32 `desc:"stats for GeM minus phase averaged Ge values"`
	GiM          minmax.AvgMax32 `desc:"stats for GiM minus phase averaged Gi values"`
	AvgDif       minmax.AvgMax32 `desc:"absolute value of AvgDif differences from actual neuron ActPct relative to TrgAvg"`
}

Pool contains computed values for FFFB inhibition, and various other state values for layers and pools (unit groups) that can be subject to inhibition, including: * average / max stats on Ge and Act that drive inhibition

func (*Pool) Init

func (pl *Pool) Init()

type Prjn

type Prjn struct {
	PrjnStru
	Com       SynComParams    `view:"inline" desc:"synaptic communication parameters: delay, probability of failure"`
	PrjnScale PrjnScaleParams `` /* 194-byte string literal not displayed */
	SWt       SWtParams       `` /* 165-byte string literal not displayed */
	Learn     LearnSynParams  `view:"add-fields" desc:"synaptic-level learning parameters for learning in the fast LWt values."`
	Syns      []Synapse       `desc:"synaptic state values, ordered by the sending layer units which owns them -- one-to-one with SConIdx array"`

	// misc state variables below:
	GScale   GScaleVals  `view:"inline" desc:"conductance scaling values"`
	Gidx     ringidx.FIx `` /* 201-byte string literal not displayed */
	GBuf     []float32   `` /* 182-byte string literal not displayed */
	GnmdaBuf []float32   `` /* 184-byte string literal not displayed */
	AvgDWt   float32     `inactive:"+" desc:"average DWt value across all synapses"`
}

axon.Prjn is a basic Axon projection with synaptic learning parameters

func (*Prjn) AllParams

func (pj *Prjn) AllParams() string

AllParams returns a listing of all parameters in the Layer

func (*Prjn) AsAxon

func (pj *Prjn) AsAxon() *Prjn

AsAxon returns this prjn as a axon.Prjn -- all derived prjns must redefine this to return the base Prjn type, so that the AxonPrjn interface does not need to include accessors to all the basic stuff.

func (*Prjn) Build

func (pj *Prjn) Build() error

Build constructs the full connectivity among the layers as specified in this projection. Calls PrjnStru.BuildStru and then allocates the synaptic values in Syns accordingly.

func (*Prjn) BuildGBufs added in v1.3.1

func (pj *Prjn) BuildGBufs()

BuildGBufs builds GBuf and GnmdaBuf with current Com Delay values, if they are not the correct size

func (*Prjn) DWt

func (pj *Prjn) DWt(ltime *Time)

DWt computes the weight change (learning) -- on sending projections

func (*Prjn) DWtCont added in v1.3.22

func (pj *Prjn) DWtCont(ltime *Time)

DWtCont computes the weight change (learning) for continuous learning variants (SynSpkCont and SynNMDACont), which have already continuously computed DWt from TDWt. Applies post-trial decay to simulate time passage, and checks for whether learning should occur.

func (*Prjn) DWtNeurSpkTheta added in v1.3.22

func (pj *Prjn) DWtNeurSpkTheta(ltime *Time)

DWtNeurSpkTheta computes the weight change (learning) -- on sending projections using the separately-integrated neuron-level spike-driven Ca values, equivalent to the CHL plus - minus temporal derivative with checkmark-based BCM-like XCal learning rule originally derived from Urakubo et al (2008)

func (*Prjn) DWtSynSpkTheta added in v1.3.22

func (pj *Prjn) DWtSynSpkTheta(ltime *Time)

DWtSynSpkTheta computes the weight change (learning) based on synaptically-integrated spiking, for the optimized version computed at the Theta cycle interval.

func (*Prjn) Defaults

func (pj *Prjn) Defaults()

func (*Prjn) InitGBufs added in v1.3.1

func (pj *Prjn) InitGBufs()

InitGBufs initializes the G buffer values to 0 and ensures that the G*Buf buffers are properly allocated

func (*Prjn) InitWtSym

func (pj *Prjn) InitWtSym(rpjp AxonPrjn)

InitWtSym initializes weight symmetry -- is given the reciprocal projection where the Send and Recv layers are reversed.

func (*Prjn) InitWts

func (pj *Prjn) InitWts()

InitWts initializes weight values according to SWt params, enforcing current constraints.

func (*Prjn) InitWtsSyn

func (pj *Prjn) InitWtsSyn(sy *Synapse, mean, spct float32)

InitWtsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.

func (*Prjn) LrateMod added in v1.2.60

func (pj *Prjn) LrateMod(mod float32)

LrateMod sets the Lrate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LrateSched). Updates the effective learning rate factor accordingly.

func (*Prjn) LrateSched added in v1.2.60

func (pj *Prjn) LrateSched(sched float32)

LrateSched sets the schedule-based learning rate multiplier. See also LrateMod. Updates the effective learning rate factor accordingly.
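For reference, the two modulation factors act on the base learning rate independently. A minimal, hedged usage sketch (assumes an already-built *axon.Prjn named pj; the multiplicative combination of base, schedule, and modulation factors is inferred from the descriptions above, not quoted from the implementation):

pj.LrateSched(0.5) // schedule-based factor, e.g., halve lrate partway through training
pj.LrateMod(0.2)   // dynamic modulation factor, e.g., attention or neuromodulation
// the effective learning rate factor is recomputed from base * sched * mod after each call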

func (*Prjn) ReadWtsJSON

func (pj *Prjn) ReadWtsJSON(r io.Reader) error

ReadWtsJSON reads the weights for this projection from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one prjn only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.

func (*Prjn) RecvGInc

func (pj *Prjn) RecvGInc(ltime *Time)

RecvGInc increments the receiver's GeRaw or GiRaw from that of all the projections.

func (*Prjn) RecvGIncNoStats added in v1.2.37

func (pj *Prjn) RecvGIncNoStats()

RecvGIncNoStats is plus-phase version without stats

func (*Prjn) RecvGIncStats added in v1.2.37

func (pj *Prjn) RecvGIncStats()

RecvGIncStats is called every cycle during minus phase, to increment GeRaw or GiRaw, and also collect stats about conductances.

func (*Prjn) RecvSynCa added in v1.3.18

func (pj *Prjn) RecvSynCa(ltime *Time)

RecvSynCa updates synaptic calcium based on spiking, for SynSpkTheta mode. Optimized version only updates at point of spiking. This pass goes through in recv order, filtering on recv spike.

func (*Prjn) SWtFmWt added in v1.2.45

func (pj *Prjn) SWtFmWt()

SWtFmWt updates structural, slowly-adapting SWt value based on accumulated DSWt values, which are zero-summed with additional soft bounding relative to SWt limits.

func (*Prjn) SWtRescale added in v1.2.45

func (pj *Prjn) SWtRescale()

SWtRescale rescales the SWt values to preserve the target overall mean value, using subtractive normalization.

func (*Prjn) SendESpike added in v1.3.1

func (pj *Prjn) SendESpike(si int, sge, snmda float32)

SendESpike sends an excitatory spike from sending neuron index si, to add to buffer on receivers. Sends proportion of synaptic channels that remain open as function of time since last spike, for Ge and Gnmda channels.

func (*Prjn) SendISpike added in v1.3.1

func (pj *Prjn) SendISpike(si int, sgi float32)

SendISpike sends an inhibitory spike from sending neuron index si, to add to buffer on receivers. Sends proportion of synaptic channels that remain open as function of time since last spike.
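A minimal, hedged usage sketch of these spike-sending calls (pj, the sending index si, and the fraction-open values are placeholders here; in the actual cycle loop they come from the sending neuron's channel state):

// assumes: import "github.com/emer/axon/axon"; pj is an existing *axon.Prjn
si := 42                    // hypothetical flat index of a sending neuron that spiked this cycle
pj.SendESpike(si, 0.8, 0.5) // excitatory projection: Ge and Gnmda channel fractions still open
// an inhibitory projection would instead send only the Gi contribution:
// pj.SendISpike(si, 0.8)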

func (*Prjn) SendSynCa added in v1.3.22

func (pj *Prjn) SendSynCa(ltime *Time)

SendSynCa updates synaptic calcium based on spiking, for SynSpkTheta mode. Optimized version only updates at point of spiking. This pass goes through in sending order, filtering on sending spike.

func (*Prjn) SetClass

func (pj *Prjn) SetClass(cls string) emer.Prjn

func (*Prjn) SetPattern

func (pj *Prjn) SetPattern(pat prjn.Pattern) emer.Prjn

func (*Prjn) SetSWtsFunc added in v1.2.75

func (pj *Prjn) SetSWtsFunc(swtFun func(si, ri int, send, recv *etensor.Shape) float32)

SetSWtsFunc initializes structural SWt values using given function based on receiving and sending unit indexes.

func (*Prjn) SetSWtsRPool added in v1.2.75

func (pj *Prjn) SetSWtsRPool(swts etensor.Tensor)

SetSWtsRPool initializes SWt structural weight values using given tensor of values which has unique values for each recv neuron within a given pool.

func (*Prjn) SetSynVal

func (pj *Prjn) SetSynVal(varNm string, sidx, ridx int, val float32) error

SetSynVal sets the value of the given variable name on the synapse between the given send, recv unit indexes (1D, flat indexes). Returns error for access errors.

func (*Prjn) SetType

func (pj *Prjn) SetType(typ emer.PrjnType) emer.Prjn

func (*Prjn) SetWts

func (pj *Prjn) SetWts(pw *weights.Prjn) error

SetWts sets the weights for this projection from weights.Prjn decoded values

func (*Prjn) SetWtsFunc

func (pj *Prjn) SetWtsFunc(wtFun func(si, ri int, send, recv *etensor.Shape) float32)

SetWtsFunc initializes synaptic Wt value using given function based on receiving and sending unit indexes. Strongly suggest calling SWtRescale after.
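A minimal, hedged sketch of function-based weight initialization (assumes 2D layer shapes and an existing *axon.Prjn pj; the gradient function itself is purely illustrative):

// assumes: imports "github.com/goki/mat32" and "github.com/emer/etable/etensor"
pj.SetWtsFunc(func(si, ri int, send, recv *etensor.Shape) float32 {
	sx := float32(si%send.Dim(1)) / float32(send.Dim(1)) // sender's normalized column
	rx := float32(ri%recv.Dim(1)) / float32(recv.Dim(1)) // receiver's normalized column
	return 0.8 - 0.6*mat32.Abs(sx-rx)                    // stronger weights for aligned columns
})
pj.SWtRescale() // as suggested above, restore the target overall mean afterward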

func (*Prjn) SlowAdapt added in v1.2.37

func (pj *Prjn) SlowAdapt(ltime *Time)

SlowAdapt does the slow adaptation: SWt learning and SynScale

func (*Prjn) Syn1DNum added in v1.4.0

func (pj *Prjn) Syn1DNum() int

Syn1DNum returns the number of synapses for this prjn as a 1D array. This is the max idx for SynVal1D and the number of vals set by SynVals.

func (*Prjn) SynCaCont added in v1.3.5

func (pj *Prjn) SynCaCont(ltime *Time)

SynCaCont does Kinase learning based on Ca driven from pre-post spiking, for SynSpkCont and SynNMDACont learning variants. Updates Ca, CaM, CaP, CaD cascaded at longer time scales, with CaP representing CaMKII LTP activity and CaD representing DAPK1 LTD activity. Within the window of elevated synaptic Ca, CaP - CaD computes a temporary DWt (TDWt) reflecting the balance of CaMKII vs. DAPK1 binding at the NMDA N2B site. When the synaptic activity has fallen from a local peak (CaDMax) by a threshold amount (CaDMaxPct) then the last TDWt value converts to an actual synaptic change: DWt
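A hedged sketch of the final conversion step described above, paraphrased from the description rather than copied from the source (caDMaxPct stands in for the corresponding Kinase threshold parameter, and treating the fall-off as a fraction of the peak is an assumption):

// when CaD has fallen from its local peak CaDMax by the threshold amount,
// commit the temporary TDWt as an actual weight change
if sy.CaD < sy.CaDMax*(1-caDMaxPct) {
	sy.DWt += sy.TDWt
	sy.TDWt = 0
}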

func (*Prjn) SynFail added in v1.2.92

func (pj *Prjn) SynFail(ltime *Time)

SynFail updates synaptic weight failure only -- normally done as part of DWt and WtFmDWt, but this call can be used during testing to update failing synapses.

func (*Prjn) SynIdx

func (pj *Prjn) SynIdx(sidx, ridx int) int

SynIdx returns the index of the synapse between given send, recv unit indexes (1D, flat indexes). Returns -1 if synapse not found between these two neurons. Requires searching within connections for receiving unit.

func (*Prjn) SynScale added in v1.2.23

func (pj *Prjn) SynScale()

SynScale performs synaptic scaling based on running average activation vs. targets

func (*Prjn) SynVal

func (pj *Prjn) SynVal(varNm string, sidx, ridx int) float32

SynVal returns value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes). Returns mat32.NaN() for access errors (see SynValTry for error message)
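A minimal, hedged sketch of per-synapse access (unit indexes 3 and 7 are hypothetical; assumes import "math" and an existing *axon.Prjn pj):

wt := pj.SynVal("Wt", 3, 7) // NaN if no synapse connects sending unit 3 to recv unit 7
if !math.IsNaN(float64(wt)) {
	_ = pj.SetSynVal("Wt", 3, 7, 0.6) // returns an error for bad names or indexes
}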

func (*Prjn) SynVal1D

func (pj *Prjn) SynVal1D(varIdx int, synIdx int) float32

SynVal1D returns value of given variable index (from SynVarIdx) on given SynIdx. Returns NaN on invalid index. This is the core synapse var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*Prjn) SynVals

func (pj *Prjn) SynVals(vals *[]float32, varNm string) error

SynVals sets values of given variable name for each synapse, using the natural ordering of the synapses (sender based for Axon), into given float32 slice (only resized if not big enough). Returns error on invalid var name.
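A hedged sketch of bulk access in the sender-based ordering noted above (assumes import "fmt" and an existing *axon.Prjn pj):

var wts []float32
if err := pj.SynVals(&wts, "Wt"); err == nil {
	fmt.Printf("%d synapses, first Wt = %g\n", pj.Syn1DNum(), wts[0])
}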

func (*Prjn) SynVarIdx

func (pj *Prjn) SynVarIdx(varNm string) (int, error)

SynVarIdx returns the index of given variable within the synapse, according to *this prjn's* SynVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*Prjn) SynVarNames

func (pj *Prjn) SynVarNames() []string

func (*Prjn) SynVarNum

func (pj *Prjn) SynVarNum() int

SynVarNum returns the number of synapse-level variables for this prjn. This is needed for extending indexes in derived types.

func (*Prjn) SynVarProps

func (pj *Prjn) SynVarProps() map[string]string

SynVarProps returns properties for variables

func (*Prjn) UpdateParams

func (pj *Prjn) UpdateParams()

UpdateParams updates all params given any changes that might have been made to individual values

func (*Prjn) WriteWtsJSON

func (pj *Prjn) WriteWtsJSON(w io.Writer, depth int)

WriteWtsJSON writes the weights from this projection from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
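A minimal, hedged sketch of round-tripping a single projection's weights through this per-prjn JSON format (assumes imports "bytes" and "log", and *axon.Prjn values with matching connectivity):

func savePrjnWts(pj *axon.Prjn) []byte {
	var buf bytes.Buffer
	pj.WriteWtsJSON(&buf, 0) // depth only controls indentation
	return buf.Bytes()
}

func loadPrjnWts(pj *axon.Prjn, data []byte) {
	if err := pj.ReadWtsJSON(bytes.NewReader(data)); err != nil {
		log.Println(err)
	}
}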

func (*Prjn) WtFmDWt

func (pj *Prjn) WtFmDWt(ltime *Time)

WtFmDWt updates the synaptic weight values from delta-weight changes. Computed in receiving direction, does SubMean subtraction first.

type PrjnScaleParams added in v1.2.45

type PrjnScaleParams struct {
	Rel    float32 `` /* 253-byte string literal not displayed */
	Abs    float32 `` /* 334-byte string literal not displayed */
	AvgTau float32 `` /* 340-byte string literal not displayed */

	AvgDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"`
}

PrjnScaleParams are projection scaling parameters: modulates overall strength of projection, using both absolute and relative factors.

func (*PrjnScaleParams) Defaults added in v1.2.45

func (ws *PrjnScaleParams) Defaults()

func (*PrjnScaleParams) FullScale added in v1.2.45

func (ws *PrjnScaleParams) FullScale(savg, snu, ncon float32) float32

FullScale returns full scaling factor, which is product of Abs * Rel * SLayActScale

func (*PrjnScaleParams) SLayActScale added in v1.2.45

func (ws *PrjnScaleParams) SLayActScale(savg, snu, ncon float32) float32

SLayActScale computes the scaling factor based on sending layer activity level (savg), number of units in the sending layer (snu), and number of recv connections (ncon). Uses a fixed sem_extra standard-error-of-the-mean (SEM) extra value of 2 to add to the average expected number of active connections to receive, for purposes of computing scaling factors with partial connectivity. For 25% layer activity, the binomial SEM = sqrt(p(1-p)) = .43, so 3 x SEM ≈ 1.3, making 2 a reasonable default.
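A hedged paraphrase of that scaling logic as code (a sketch of the description above, not copied from the source; assumes import "github.com/goki/mat32"):

func sLayActScale(savg, snu, ncon float32) float32 {
	ncon = mat32.Max(ncon, 1)
	slayActN := mat32.Max(mat32.Round(savg*snu), 1) // expected number of active units in sending layer
	var expActN float32
	if ncon == snu { // full connectivity: expect all active senders to connect
		expActN = slayActN
	} else { // partial connectivity: expected active received connections + SEM extra of 2
		expActN = mat32.Round(savg*ncon) + 2
		expActN = mat32.Min(expActN, mat32.Min(ncon, slayActN))
		expActN = mat32.Max(expActN, 1)
	}
	return 1 / expActN // scale by 1 / expected number of active incoming connections
}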

func (*PrjnScaleParams) Update added in v1.2.45

func (ws *PrjnScaleParams) Update()

type PrjnStru

type PrjnStru struct {
	AxonPrj     AxonPrjn        `` /* 267-byte string literal not displayed */
	Off         bool            `desc:"inactivate this projection -- allows for easy experimentation"`
	Cls         string          `desc:"Class is for applying parameter styles, can be multiple space-separated tags"`
	Notes       string          `desc:"can record notes about this projection here"`
	Send        emer.Layer      `desc:"sending layer for this projection"`
	Recv        emer.Layer      `` /* 167-byte string literal not displayed */
	Pat         prjn.Pattern    `desc:"pattern of connectivity"`
	Typ         emer.PrjnType   `` /* 154-byte string literal not displayed */
	RConN       []int32         `view:"-" desc:"number of recv connections for each neuron in the receiving layer, as a flat list"`
	RConNAvgMax minmax.AvgMax32 `inactive:"+" desc:"average and maximum number of recv connections in the receiving layer"`
	RConIdxSt   []int32         `view:"-" desc:"starting index into ConIdx list for each neuron in receiving layer -- just a list incremented by ConN"`
	RConIdx     []int32         `` /* 213-byte string literal not displayed */
	RSynIdx     []int32         `` /* 185-byte string literal not displayed */
	SConN       []int32         `view:"-" desc:"number of sending connections for each neuron in the sending layer, as a flat list"`
	SConNAvgMax minmax.AvgMax32 `inactive:"+" desc:"average and maximum number of sending connections in the sending layer"`
	SConIdxSt   []int32         `view:"-" desc:"starting index into ConIdx list for each neuron in sending layer -- just a list incremented by ConN"`
	SConIdx     []int32         `` /* 213-byte string literal not displayed */
}

PrjnStru contains the basic structural information for specifying a projection of synaptic connections between two layers, and maintaining all the synaptic connection-level data. The exact same struct object is added to the Recv and Send layers, and it manages everything about the connectivity, and methods on the Prjn handle all the relevant computation.

func (*PrjnStru) ApplyParams

func (ps *PrjnStru) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)

ApplyParams applies the given parameter style Sheet to this projection. Calls UpdateParams if anything was set, to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and an error if there were any errors.

func (*PrjnStru) BuildStru

func (ps *PrjnStru) BuildStru() error

BuildStru constructs the full connectivity among the layers as specified in this projection. Calls Validate and returns an error if invalid. Pat.Connect is called to get the pattern of the connection. Then the connection indexes are configured according to that pattern.

func (*PrjnStru) Class

func (ps *PrjnStru) Class() string

func (*PrjnStru) Connect

func (ps *PrjnStru) Connect(slay, rlay emer.Layer, pat prjn.Pattern, typ emer.PrjnType)

Connect sets the connectivity between two layers and the pattern to use in interconnecting them

func (*PrjnStru) Init

func (ps *PrjnStru) Init(prjn emer.Prjn)

Init MUST be called to initialize the prjn's pointer to itself as an emer.Prjn which enables the proper interface methods to be called.

func (*PrjnStru) IsOff

func (ps *PrjnStru) IsOff() bool

func (*PrjnStru) Label

func (ps *PrjnStru) Label() string

func (*PrjnStru) Name

func (ps *PrjnStru) Name() string

func (*PrjnStru) NonDefaultParams

func (ps *PrjnStru) NonDefaultParams() string

NonDefaultParams returns a listing of all parameters in the projection that are not at their default values -- useful for setting param styles etc.

func (*PrjnStru) Pattern

func (ps *PrjnStru) Pattern() prjn.Pattern

func (*PrjnStru) PrjnTypeName

func (ps *PrjnStru) PrjnTypeName() string

func (*PrjnStru) RecvLay

func (ps *PrjnStru) RecvLay() emer.Layer

func (*PrjnStru) SendLay

func (ps *PrjnStru) SendLay() emer.Layer

func (*PrjnStru) SetNIdxSt

func (ps *PrjnStru) SetNIdxSt(n *[]int32, avgmax *minmax.AvgMax32, idxst *[]int32, tn *etensor.Int32) int32

SetNIdxSt sets the *ConN and *ConIdxSt values given n tensor from Pat. Returns total number of connections for this direction.

func (*PrjnStru) SetOff

func (ps *PrjnStru) SetOff(off bool)

func (*PrjnStru) String

func (ps *PrjnStru) String() string

String satisfies fmt.Stringer for prjn

func (*PrjnStru) Type

func (ps *PrjnStru) Type() emer.PrjnType

func (*PrjnStru) TypeName

func (ps *PrjnStru) TypeName() string

func (*PrjnStru) Validate

func (ps *PrjnStru) Validate(logmsg bool) error

Validate tests for non-nil settings for the projection -- returns error message or nil if no problems (and logs them if logmsg = true)

type PrjnType

type PrjnType emer.PrjnType

PrjnType has the GLong extensions to the emer.PrjnType types, for gui

const (
	NMDA_ PrjnType = PrjnType(emer.PrjnTypeN) + iota
	PrjnTypeN
)

gui versions

func StringToPrjnType

func StringToPrjnType(s string) (PrjnType, error)

func (PrjnType) String

func (i PrjnType) String() string

type RLrateParams added in v1.2.79

type RLrateParams struct {
	On        bool    `def:"true" desc:"use learning rate modulation"`
	ActThr    float32 `def:"0.1" desc:"threshold on Max(CaP, CaD) below which Min lrate applies -- must be > 0 to prevent div by zero"`
	ActDifThr float32 `def:"0.02" desc:"threshold on recv neuron error delta, i.e., |CaP - CaD| below which lrate is at Min value"`
	Min       float32 `def:"0.001" desc:"minimum learning rate value when below ActDifThr"`
}

RLrateParams are recv neuron learning rate modulation parameters. RLrate is computed as |CaP - CaD| / Max(CaP, CaD), subject to thresholding.

func (*RLrateParams) Defaults added in v1.2.79

func (rl *RLrateParams) Defaults()

func (*RLrateParams) RLrate added in v1.2.79

func (rl *RLrateParams) RLrate(scap, scad float32) float32

RLrate returns the learning rate as a function of CaP and CaD values
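A hedged sketch of that formula with the thresholds applied (paraphrased from the parameter descriptions above, not copied from the source; assumes import "github.com/goki/mat32"):

func rLrate(rl *axon.RLrateParams, scap, scad float32) float32 {
	if !rl.On {
		return 1
	}
	max := mat32.Max(scap, scad)
	if max <= rl.ActThr { // activity too low: use minimum lrate (also avoids div by zero)
		return rl.Min
	}
	dif := mat32.Abs(scap - scad)
	if dif <= rl.ActDifThr { // error delta too small to matter
		return rl.Min
	}
	return dif / max
}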

func (*RLrateParams) Update added in v1.2.79

func (rl *RLrateParams) Update()

type SWtAdaptParams added in v1.2.45

type SWtAdaptParams struct {
	On       bool    `` /* 137-byte string literal not displayed */
	Lrate    float32 `` /* 388-byte string literal not displayed */
	SigGain  float32 `` /* 135-byte string literal not displayed */
	DreamVar float32 `` /* 354-byte string literal not displayed */
}

SWtAdaptParams manages adaptation of SWt values

func (*SWtAdaptParams) Defaults added in v1.2.45

func (sp *SWtAdaptParams) Defaults()

func (*SWtAdaptParams) RndVar added in v1.2.55

func (sp *SWtAdaptParams) RndVar() float32

RndVar returns the random variance (zero mean) based on DreamVar param

func (*SWtAdaptParams) Update added in v1.2.45

func (sp *SWtAdaptParams) Update()

type SWtInitParams added in v1.2.45

type SWtInitParams struct {
	SPct float32 `` /* 315-byte string literal not displayed */
	Mean float32 `` /* 199-byte string literal not displayed */
	Var  float32 `def:"0.25" desc:"initial variance in weight values, prior to constraints."`
	Sym  bool    `` /* 149-byte string literal not displayed */
}

SWtInitParams for initial SWt values

func (*SWtInitParams) Defaults added in v1.2.45

func (sp *SWtInitParams) Defaults()

func (*SWtInitParams) RndVar added in v1.2.45

func (sp *SWtInitParams) RndVar() float32

RndVar returns the random variance in weight value (zero mean) based on Var param

func (*SWtInitParams) Update added in v1.2.45

func (sp *SWtInitParams) Update()

type SWtParams added in v1.2.45

type SWtParams struct {
	Init  SWtInitParams  `view:"inline" desc:"initialization of SWt values"`
	Adapt SWtAdaptParams `view:"inline" desc:"adaptation of SWt values in response to LWt learning"`
	Limit minmax.F32     `def:"{0.2 0.8}" view:"inline" desc:"range limits for SWt values"`
}

SWtParams manages structural, slowly adapting weight values (SWt), in terms of initialization and updating over course of learning. SWts impose initial and slowly adapting constraints on neuron connectivity to encourage differentiation of neuron representations and overall good behavior in terms of not hogging the representational space. The TrgAvg activity constraint is not enforced through SWt -- it needs to be more dynamic and supported by the regular learned weights.

func (*SWtParams) ClipSWt added in v1.2.45

func (sp *SWtParams) ClipSWt(swt float32) float32

ClipSWt returns SWt value clipped to valid range

func (*SWtParams) ClipWt added in v1.2.75

func (sp *SWtParams) ClipWt(wt float32) float32

ClipWt returns Wt value clipped to 0-1 range

func (*SWtParams) Defaults added in v1.2.45

func (sp *SWtParams) Defaults()

func (*SWtParams) InitWtsSyn added in v1.3.5

func (sp *SWtParams) InitWtsSyn(sy *Synapse, mean, spct float32)

InitWtsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.

func (*SWtParams) LWtFmWts added in v1.2.47

func (sp *SWtParams) LWtFmWts(wt, swt float32) float32

LWtFmWts returns linear, learning LWt from wt and swt. LWt is set to reproduce given Wt relative to given SWt base value.

func (*SWtParams) LinFmSigWt added in v1.2.45

func (sp *SWtParams) LinFmSigWt(wt float32) float32

LinFmSigWt returns the linear weight from the sigmoidal contrast-enhanced weight. wt is centered at 1 and normed in a +/- 1 range around that; the return value is in the 0-1 range, centered at .5.

func (*SWtParams) SigFmLinWt added in v1.2.45

func (sp *SWtParams) SigFmLinWt(lw float32) float32

SigFmLinWt returns sigmoidal contrast-enhanced weight from linear weight, centered at 1 and normed in range +/- 1 around that in preparation for multiplying times SWt

func (*SWtParams) Update added in v1.2.45

func (sp *SWtParams) Update()

func (*SWtParams) WtFmDWt added in v1.2.45

func (sp *SWtParams) WtFmDWt(dwt, wt, lwt *float32, swt float32)

WtFmDWt updates the synaptic weights from accumulated weight changes. wt is the sigmoidal contrast-enhanced weight and lwt is the linear weight value.

func (*SWtParams) WtVal added in v1.2.45

func (sp *SWtParams) WtVal(swt, lwt float32) float32

WtVal returns the effective Wt value given the SWt and LWt values
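A minimal, hedged usage sketch tying these methods together (sp and sy are assumed to come from an existing projection, e.g. sp := &pj.SWt and sy := &pj.Syns[0]; only the documented method signatures are used):

sp.WtFmDWt(&sy.DWt, &sy.Wt, &sy.LWt, sy.SWt) // consume DWt: update linear LWt and sigmoidal Wt
wt := sp.WtVal(sy.SWt, sy.LWt)               // effective weight from structural SWt and learned LWt
lwt := sp.LWtFmWts(wt, sy.SWt)               // inverse: the LWt that reproduces this Wt given SWt
_, _ = wt, lwt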

type SelfInhibParams

type SelfInhibParams struct {
	On  bool    `desc:"enable neuron self-inhibition"`
	Gi  float32 `` /* 247-byte string literal not displayed */
	Tau float32 `` /* 379-byte string literal not displayed */
	Dt  float32 `inactive:"+" view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"`
}

SelfInhibParams defines parameters for Neuron self-inhibition: activation of the neuron directly feeds back to produce a proportional additional contribution to Gi.

func (*SelfInhibParams) Defaults

func (si *SelfInhibParams) Defaults()

func (*SelfInhibParams) Inhib

func (si *SelfInhibParams) Inhib(self *float32, act float32)

Inhib updates the self inhibition value based on current unit activation

func (*SelfInhibParams) Update

func (si *SelfInhibParams) Update()

type SpikeNoiseParams added in v1.2.94

type SpikeNoiseParams struct {
	On   bool    `desc:"add noise simulating background spiking levels"`
	GeHz float32 `` /* 151-byte string literal not displayed */
	Ge   float32 `` /* 150-byte string literal not displayed */
	GiHz float32 `` /* 165-byte string literal not displayed */
	Gi   float32 `` /* 150-byte string literal not displayed */

	GeExpInt float32 `view:"-" json:"-" xml:"-" desc:"Exp(-Interval) which is the threshold for GeNoiseP as it is updated"`
	GiExpInt float32 `view:"-" json:"-" xml:"-" desc:"Exp(-Interval) which is the threshold for GiNoiseP as it is updated"`
}

SpikeNoiseParams parameterizes background spiking activity impinging on the neuron, simulated using a Poisson spiking process.

func (*SpikeNoiseParams) Defaults added in v1.2.94

func (an *SpikeNoiseParams) Defaults()

func (*SpikeNoiseParams) PGe added in v1.2.94

func (an *SpikeNoiseParams) PGe(p *float32) float32

PGe updates the GeNoiseP probability by multiplying in a uniform random number in [0-1], and returns the Ge value from spiking if a spike is triggered.

func (*SpikeNoiseParams) PGi added in v1.2.94

func (an *SpikeNoiseParams) PGi(p *float32) float32

PGi updates the GiNoiseP probability by multiplying in a uniform random number in [0-1], and returns the Gi value from spiking if a spike is triggered.
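A hedged sketch of the mechanism implied by the GeExpInt / GiExpInt descriptions: the probability accumulates a product of uniform random numbers, and a noise spike fires once it drops below Exp(-interval), after which the probability resets. The exact interval scaling from GeHz / GiHz is an assumption; assumes import "math/rand":

// poissonSpike returns true when the running probability p falls below the
// Exp(-interval) threshold, then resets p for the next interval.
func poissonSpike(p *float32, expInt float32) bool {
	*p *= rand.Float32()
	if *p <= expInt {
		*p = 1
		return true
	}
	return false
}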

func (*SpikeNoiseParams) Update added in v1.2.94

func (an *SpikeNoiseParams) Update()

type SpikeParams

type SpikeParams struct {
	Thr      float32 `` /* 152-byte string literal not displayed */
	VmR      float32 `` /* 217-byte string literal not displayed */
	Tr       int     `` /* 242-byte string literal not displayed */
	RTau     float32 `` /* 285-byte string literal not displayed */
	Exp      bool    `` /* 274-byte string literal not displayed */
	ExpSlope float32 `` /* 325-byte string literal not displayed */
	ExpThr   float32 `` /* 127-byte string literal not displayed */
	MaxHz    float32 `` /* 182-byte string literal not displayed */
	ISITau   float32 `def:"5" min:"1" desc:"constant for integrating the spiking interval in estimating spiking rate"`
	ISIDt    float32 `view:"-" desc:"rate = 1 / tau"`
	RDt      float32 `view:"-" desc:"rate = 1 / tau"`
}

SpikeParams contains spiking activation function params. Implements a basic thresholded Vm model, and optionally the AdEx adaptive exponential function (adapt is KNaAdapt)

func (*SpikeParams) ActFmISI

func (sk *SpikeParams) ActFmISI(isi, timeInc, integ float32) float32

ActFmISI computes rate-code activation from estimated spiking interval

func (*SpikeParams) ActToISI

func (sk *SpikeParams) ActToISI(act, timeInc, integ float32) float32

ActToISI computes the spiking interval from a given rate-coded activation, based on time increment (.001 = 1msec default), Act.Dt.Integ

func (*SpikeParams) AvgFmISI

func (sk *SpikeParams) AvgFmISI(avg *float32, isi float32)

AvgFmISI updates spiking ISI from current isi interval value

func (*SpikeParams) Defaults

func (sk *SpikeParams) Defaults()

func (*SpikeParams) Update

func (sk *SpikeParams) Update()

type SynComParams

type SynComParams struct {
	Delay    int     `` /* 333-byte string literal not displayed */
	PFail    float32 `` /* 149-byte string literal not displayed */
	PFailSWt bool    `` /* 141-byte string literal not displayed */
}

SynComParams are synaptic communication parameters: delay and probability of failure

func (*SynComParams) Defaults

func (sc *SynComParams) Defaults()

func (*SynComParams) Fail

func (sc *SynComParams) Fail(wt *float32, swt float32)

Fail updates failure status of given weight, given SWt value

func (*SynComParams) Update

func (sc *SynComParams) Update()

func (*SynComParams) WtFail

func (sc *SynComParams) WtFail(swt float32) bool

WtFail returns true if synapse should fail, as function of SWt value (optionally)

func (*SynComParams) WtFailP

func (sc *SynComParams) WtFailP(swt float32) float32

WtFailP returns probability of weight (synapse) failure given current SWt value

type Synapse

type Synapse struct {
	CaUpT  int32   `desc:"time in CycleTot of last updating of Ca values at the synapse level, for optimized synaptic-level Ca integration"`
	Wt     float32 `` /* 206-byte string literal not displayed */
	SWt    float32 `` /* 528-byte string literal not displayed */
	LWt    float32 `` /* 174-byte string literal not displayed */
	DWt    float32 `desc:"change in synaptic weight, from learning"`
	DSWt   float32 `desc:"change in SWt slow synaptic weight -- accumulates DWt"`
	TDWt   float32 `` /* 273-byte string literal not displayed */
	Ca     float32 `desc:"Raw calcium signal for Kinase based learning: send.SnmdaO * recv.RCa"`
	CaM    float32 `desc:"first stage running average (mean) Ca calcium level (like CaM = calmodulin), feeds into CaP"`
	CaP    float32 `` /* 165-byte string literal not displayed */
	CaD    float32 `` /* 164-byte string literal not displayed */
	CaDMax float32 `` /* 135-byte string literal not displayed */
}

axon.Synapse holds state for the synaptic connection between neurons

func (*Synapse) SetVarByIndex

func (sy *Synapse) SetVarByIndex(idx int, val float32)

func (*Synapse) SetVarByName

func (sy *Synapse) SetVarByName(varNm string, val float32) error

SetVarByName sets synapse variable to given value

func (*Synapse) VarByIndex

func (sy *Synapse) VarByIndex(idx int) float32

VarByIndex returns variable using index (0 = first variable in SynapseVars list)

func (*Synapse) VarByName

func (sy *Synapse) VarByName(varNm string) (float32, error)

VarByName returns variable by name, or error

func (*Synapse) VarNames

func (sy *Synapse) VarNames() []string

type Time

type Time struct {
	Phase      int     `desc:"phase counter: typically 0-1 for minus-plus but can be more phases for other algorithms"`
	PlusPhase  bool    `` /* 126-byte string literal not displayed */
	PhaseCycle int     `desc:"cycle within current phase -- minus or plus"`
	Cycle      int     `` /* 156-byte string literal not displayed */
	CycleTot   int     `` /* 151-byte string literal not displayed */
	Time       float32 `desc:"accumulated amount of time the network has been running, in simulation-time (not real world time), in seconds"`
	Mode       string  `desc:"current evaluation mode, e.g., Train, Test, etc"`
	Testing    bool    `` /* 179-byte string literal not displayed */

	TimePerCyc float32 `def:"0.001" desc:"amount of time to increment per cycle"`
}

axon.Time contains all the timing state and parameter information for running a model. Can also include other relevant state context, e.g., Testing vs. Training modes.

func NewTime

func NewTime() *Time

NewTime returns a new Time struct with default parameters

func (*Time) CycleInc

func (tm *Time) CycleInc()

CycleInc increments at the cycle level

func (*Time) Defaults

func (tm *Time) Defaults()

Defaults sets default values

func (*Time) NewPhase added in v1.2.63

func (tm *Time) NewPhase(plusPhase bool)

NewPhase resets PhaseCycle = 0 and sets the plus phase as specified

func (*Time) NewState added in v1.2.63

func (tm *Time) NewState(mode string)

NewState resets counters at the start of a new state (trial) of processing. Pass the evaluation mode associated with this new state -- if it is not Train then Testing will be set to true.

func (*Time) Reset

func (tm *Time) Reset()

Reset resets the counters all back to zero
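A minimal, hedged sketch of driving the Time state through one theta cycle (the 150/50 minus/plus cycle split and the commented network call are illustrative, not prescribed by the package):

ltime := axon.NewTime()
ltime.NewState("Train") // new trial; Testing is set from the mode
ltime.NewPhase(false)   // minus phase
for cyc := 0; cyc < 150; cyc++ {
	// net.Cycle(ltime) would run one cycle of network updating here
	ltime.CycleInc()
}
ltime.NewPhase(true) // plus phase
for cyc := 0; cyc < 50; cyc++ {
	ltime.CycleInc()
}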

type TopoInhibParams added in v1.2.85

type TopoInhibParams struct {
	On      bool    `desc:"use topographic inhibition"`
	Width   int     `viewif:"On" desc:"half-width of topographic inhibition within layer"`
	Sigma   float32 `viewif:"On" desc:"normalized gaussian sigma as proportion of Width, for gaussian weighting"`
	Wrap    bool    `viewif:"On" desc:"use wrap-around for topographic inhibition within layer"`
	Gi      float32 `viewif:"On" desc:"overall inhibition multiplier for topographic inhibition (generally <= 1)"`
	FF      float32 `` /* 133-byte string literal not displayed */
	FB      float32 `` /* 139-byte string literal not displayed */
	FF0     float32 `` /* 186-byte string literal not displayed */
	WidthWt float32 `inactive:"+" desc:"weight value at width -- to assess the value of Sigma"`
}

TopoInhibParams provides for topographic gaussian inhibition integrating over neighborhood.

func (*TopoInhibParams) Defaults added in v1.2.85

func (ti *TopoInhibParams) Defaults()

func (*TopoInhibParams) GiFmGeAct added in v1.2.85

func (ti *TopoInhibParams) GiFmGeAct(ge, act, ff0 float32) float32

func (*TopoInhibParams) Update added in v1.2.85

func (ti *TopoInhibParams) Update()

type TrgAvgActParams added in v1.2.45

type TrgAvgActParams struct {
	On           bool       `desc:"whether to use target average activity mechanism to scale synaptic weights"`
	ErrLrate     float32    `` /* 263-byte string literal not displayed */
	SynScaleRate float32    `` /* 231-byte string literal not displayed */
	TrgRange     minmax.F32 `` /* 185-byte string literal not displayed */
	Permute      bool       `` /* 236-byte string literal not displayed */
	Pool         bool       `` /* 206-byte string literal not displayed */
}

TrgAvgActParams govern the target and actual long-term average activity in neurons. Target value is adapted by unit-wise error and difference in actual vs. target drives synaptic scaling.

func (*TrgAvgActParams) Defaults added in v1.2.45

func (ta *TrgAvgActParams) Defaults()

func (*TrgAvgActParams) Update added in v1.2.45

func (ta *TrgAvgActParams) Update()

type XCalParams

type XCalParams struct {
	On      bool    `desc:"if true, use XCal function -- otherwise just does a direct subtraction"`
	SubMean float32 `` /* 143-byte string literal not displayed */
	PThrMin float32 `` /* 231-byte string literal not displayed */
	DWtThr  float32 `def:"0.0001" desc:"threshold on DWt to be included in SubMean process -- this is *prior* to lrate multiplier"`
	DRev    float32 `` /* 188-byte string literal not displayed */
	DThr    float32 `` /* 139-byte string literal not displayed */
	LrnThr  float32 `` /* 187-byte string literal not displayed */

	DRevRatio float32 `` /* 131-byte string literal not displayed */
}

XCalParams are parameters for the temporally eXtended Contrastive Attractor Learning (XCAL) function, which is the standard learning equation for axon.

func (*XCalParams) DWt

func (xc *XCalParams) DWt(srval, thrP float32) float32

DWt is the XCAL function for weight change -- the "check mark" function
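A hedged sketch of that check mark function in the standard XCAL form: zero below DThr, a reversal segment below DRev * thrP using the precomputed DRevRatio (assumed to be -(1-DRev)/DRev), and a linear difference above it. Paraphrased, not copied from the source:

func xcalDWt(xc *axon.XCalParams, srval, thrP float32) float32 {
	switch {
	case srval < xc.DThr: // below absolute threshold: no change
		return 0
	case srval > thrP*xc.DRev: // above reversal point: change proportional to difference
		return srval - thrP
	default: // below reversal point: negative change via DRevRatio (assumed -(1-DRev)/DRev)
		return srval * xc.DRevRatio
	}
}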

func (*XCalParams) Defaults

func (xc *XCalParams) Defaults()

func (*XCalParams) Update

func (xc *XCalParams) Update()
