Documentation ¶
Overview ¶
Package axon provides the basic reference axon implementation, for rate-coded activations and standard error-driven learning. Other packages provide spiking or deep axon, PVLV, PBWM, etc.
The overall design seeks an "optimal" tradeoff between simplicity, transparency, ability to flexibly recombine and extend elements, and avoiding having to rewrite a bunch of stuff.
The *Base elements (LayerBase, NetworkBase, PrjnBase) handle the core structural components of the network, and hold emer.* interface pointers to elements such as emer.Layer, which provides a very minimal interface for these elements. Interfaces are automatically pointers, so think of these as generic pointers to your specific Layers etc.
This design means the same *Base infrastructure can be reused across different variants of the algorithm. Because this infrastructure is kept minimal and algorithm-free, it should be much less confusing than dealing with the multiple levels of inheritance in C++ emergent. The actual algorithm-specific code is fully self-contained, and largely orthogonalized from the infrastructure.
One specific cost of this is the need to cast the emer.* interface pointers into the specific types of interest when accessing them via the *Base infrastructure.
The *Params elements contain all the (meta)parameters and associated methods for computing various functions. They are the equivalent of Specs from original emergent, but unlike specs they are local to each place they are used, and styling is used to apply common parameters across multiple layers etc. Params is a more explicit, recognizable name than Specs, and it also helps signal that these work differently from the old specs. Pars is shorter but confusable with "Parents", so Params is unambiguous.
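As a hedged illustration of the styling approach, assuming the emergent params package API (params.Sheet as a slice of params.Sel selectors); the selector classes and values here are hypothetical, and ApplyParams is the method listed below on LayerBase / NetworkBase:

// import "github.com/emer/emergent/params"

// demoSheet is a hypothetical styling sheet: the "Layer" selector applies to all
// layers, and ".Hidden" applies only to layers with the Hidden class set.
var demoSheet = params.Sheet{
	{Sel: "Layer", Desc: "all layers",
		Params: params.Params{
			"Layer.Inhib.Layer.Gi": "1.1",
		}},
	{Sel: ".Hidden", Desc: "hidden layers only",
		Params: params.Params{
			"Layer.Act.Decay.Act": "0.2",
		}},
}

// applied to the whole network (setMsg = true logs each change):
// applied, err := net.ApplyParams(&demoSheet, true)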
Params are organized into four major categories, which are labeled functionally rather than just structurally, to keep things clearer and better organized overall:

- ActParams -- activation params, at the Neuron level (in act.go)
- InhibParams -- inhibition params, at the Layer / Pool level (in inhib.go)
- LearnNeurParams -- learning parameters at the Neuron level (running averages that drive learning)
- LearnSynParams -- learning parameters at the Synapse level (both in learn.go)
The levels of structure and state are:

- Network
  - .Layers
    - .Pools: pooled inhibition state -- 1 for the layer plus 1 for each sub-pool (unit group) with inhibition
    - .RecvPrjns: receiving projections from other sending layers
    - .SendPrjns: sending projections to other receiving layers
    - .Neurons: neuron state variables
There are methods on the Network that perform initialization and overall computation, by iterating over layers and calling methods there. This is typically how most users will run their models.
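As a rough sketch of that flow (not a definitive recipe -- most sims use the looper helpers described under LooperStdPhases below), a single trial could be driven directly through these Network methods. The runTrial name, the 200-cycle trial with a 150/50 minus/plus split, and the elided Time-counter updates are assumptions here:

// import "github.com/emer/axon/axon"

func runTrial(net *axon.Network, ctime *axon.Time) {
	net.NewState() // decay / initialize state for the new input pattern
	for cyc := 0; cyc < 200; cyc++ {
		net.Cycle(ctime) // one msec of spiking dynamics
		// ... advance ctime cycle counters here (Time methods elided) ...
		if cyc == 149 {
			net.MinusPhase(ctime) // end of minus phase (cycles 0-149)
		}
	}
	net.PlusPhase(ctime) // end of plus phase (cycles 150-199)
	net.DWt(ctime)       // compute weight changes (learning)
	net.WtFmDWt(ctime)   // apply weight changes
}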
Parallel computation across multiple CPU cores (threading) is achieved through persistent worker goroutines that listen for functions to run on thread-specific channels. Each layer has a designated thread number, so you can experiment with different ways of dividing up the computation. Timing data is kept for per-thread time use -- see TimerReport() on the network.
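A hypothetical sketch of assigning layers to threads, assuming a built *axon.Network net with layers named "HiddenA" and "HiddenB"; whether ThreadsAlloc must be called explicitly after changing assignments (vs. being handled during Build) may depend on the version:

func assignThreads(net *axon.Network) {
	hidA := net.LayerByName("HiddenA").(*axon.Layer)
	hidB := net.LayerByName("HiddenB").(*axon.Layer)
	hidA.SetThread(0)  // default thread
	hidB.SetThread(1)  // run this layer's computation on a second worker goroutine
	net.ThreadsAlloc() // (re)allocate worker goroutines for the current assignments
}

// ... after running the model: net.TimerReport() prints per-function, per-thread timing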
The Layer methods directly iterate over Neurons, Pools, and Prjns, and there is no finer-grained level of computation (e.g., at the individual Neuron level), except for the *Params methods that directly compute relevant functions. Thus, looking directly at the layer.go code should provide a clear sense of exactly how everything is computed -- you may need to refer to act.go, learn.go etc to see the relevant details, but at least the overall organization should be clear in layer.go.
Computational methods are generally named VarFmVar, to specifically name what variable is being computed from what other input variables, e.g., SpikeFmG computes spiking activation from conductances (G).
The Pools (type Pool, in pool.go) hold state used for computing pooled inhibition, but are also used to hold overall aggregate pooled state variables -- the first element in Pools applies to the layer itself, and subsequent ones are for each sub-pool (4D layers). These pools play the same role as the AxonUnGpState structures in C++ emergent.
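A small sketch of inspecting pooled state, assuming a built 4D layer ly (*axon.Layer) and that "Act" is a valid unit variable name:

// import "fmt"

func poolReport(ly *axon.Layer) {
	// pool index 0 aggregates over the whole layer; 1..N are the sub-pools (4D layers)
	am := ly.AvgMaxVarByPool("Act", 0)
	fmt.Printf("layer Act avg=%g max=%g\n", am.Avg, am.Max)
	if ly.HasPoolInhib() {
		sp := ly.AvgMaxVarByPool("Act", 1)
		fmt.Printf("sub-pool 1 Act avg=%g max=%g\n", sp.Avg, sp.Max)
	}
}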
Prjns directly support all synapse-level computation, and hold the LearnSynParams and iterate directly over all of their synapses. It is the exact same Prjn object that lives in the RecvPrjns of the receiver-side, and the SendPrjns of the sender-side, and it maintains and coordinates both sides of the state. This clarifies and simplifies a lot of code. There is no separate equivalent of AxonConSpec / AxonConState at the level of connection groups per unit per projection.
The pattern of connectivity between units is specified by the prjn.Pattern interface, and all the different standard options are available in that prjn package. The Pattern code generates a full tensor bitmap of binary 1's and 0's for connected (1's) and not-connected (0's) units, and can use any method to do so. This full lookup-table approach is not the most memory-efficient, but it is fully general and shouldn't be too bad memory-wise overall (fully bit-packed arrays are used, and these bitmaps don't need to be retained once connections have been established). This approach allows patterns to focus exclusively on patterns, without any concern for how they are used to allocate actual connections.
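Putting these structural pieces together, here is a minimal, hypothetical construction sketch (the layer names, shapes, and "Demo" name are made up; prjn.NewFull and the emer layer / projection type constants come from the emergent packages, and ConnectLayers returns the single Prjn shared by the sender's SendPrjns and the receiver's RecvPrjns):

package main

import (
	"fmt"

	"github.com/emer/axon/axon"
	"github.com/emer/emergent/emer"
	"github.com/emer/emergent/prjn"
)

func main() {
	net := &axon.Network{}
	net.InitName(net, "Demo")
	inp := net.AddLayer2D("Input", 5, 5, emer.Input)
	hid := net.AddLayer4D("Hidden", 2, 2, 4, 4, emer.Hidden)
	out := net.AddLayer2D("Output", 5, 5, emer.Target)

	full := prjn.NewFull()
	net.ConnectLayers(inp, hid, full, emer.Forward) // one shared Prjn on both sides
	net.BidirConnectLayers(hid, out, full)          // forward + back projections

	net.Defaults()
	if err := net.Build(); err != nil {
		panic(err)
	}
	net.InitWts()
	fmt.Println(net.SizeReport())
}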
Index ¶
- Constants
- Variables
- func DecaySynCa(sy *Synapse, decay float32)
- func EnvApplyInputs(net *Network, ev env.Env)
- func InitSynCa(sy *Synapse)
- func JsonToParams(b []byte) string
- func LogAddCaLrnDiagnosticItems(lg *elog.Logs, net *Network, times ...etime.Times)
- func LogAddDiagnosticItems(lg *elog.Logs, net *Network, times ...etime.Times)
- func LogAddExtraDiagnosticItems(lg *elog.Logs, net *Network, times ...etime.Times)
- func LogAddLayerGeActAvgItems(lg *elog.Logs, net *Network, mode etime.Modes, etm etime.Times)
- func LogAddPCAItems(lg *elog.Logs, net *Network, times ...etime.Times)
- func LogTestErrors(lg *elog.Logs)
- func LooperResetLogBelow(man *looper.Manager, logs *elog.Logs)
- func LooperSimCycleAndLearn(man *looper.Manager, net *Network, time *Time, viewupdt *netview.ViewUpdt)
- func LooperStdPhases(man *looper.Manager, time *Time, net *Network, plusStart, plusEnd int)
- func LooperUpdtNetView(man *looper.Manager, viewupdt *netview.ViewUpdt)
- func LooperUpdtPlots(man *looper.Manager, gui *egui.GUI)
- func NeuronVarIdxByName(varNm string) (int, error)
- func PCAStats(net emer.Network, lg *elog.Logs, stats *estats.Stats)
- func SaveWeights(net *Network, ctrString, runName string)
- func SaveWeightsIfArgSet(net *Network, args *ecmd.Args, ctrString, runName string)
- func SigFun(w, gain, off float32) float32
- func SigFun61(w float32) float32
- func SigInvFun(w, gain, off float32) float32
- func SigInvFun61(w float32) float32
- func SynapseVarByName(varNm string) (int, error)
- func ToggleLayersOff(net *Network, layerNames []string, off bool)
- func WeightsFileName(net *Network, ctrString, runName string) string
- type ActAvgParams
- type ActAvgVals
- type ActInitParams
- type ActParams
- func (ac *ActParams) DecayState(nrn *Neuron, decay, glong float32)
- func (ac *ActParams) Defaults()
- func (ac *ActParams) GeFmSyn(nrn *Neuron, geSyn, geExt float32)
- func (ac *ActParams) GeNoise(nrn *Neuron)
- func (ac *ActParams) GiFmSyn(nrn *Neuron, giSyn float32) float32
- func (ac *ActParams) GiNoise(nrn *Neuron)
- func (ac *ActParams) GkFmVm(nrn *Neuron)
- func (ac *ActParams) GvgccFmVm(nrn *Neuron)
- func (ac *ActParams) InetFmG(vm, ge, gl, gi, gk float32) float32
- func (ac *ActParams) InitActs(nrn *Neuron)
- func (ac *ActParams) InitLongActs(nrn *Neuron)
- func (ac *ActParams) NMDAFmRaw(nrn *Neuron, geTot float32)
- func (ac *ActParams) SpikeFmVm(nrn *Neuron)
- func (ac *ActParams) Update()
- func (ac *ActParams) VmFmG(nrn *Neuron)
- func (ac *ActParams) VmFmInet(vm, dt, inet float32) float32
- func (ac *ActParams) VmInteg(vm, dt, ge, gl, gi, gk float32) (float32, float32)
- type AttnParams
- type AxonLayer
- type AxonNetwork
- type AxonPrjn
- type CaLrnParams
- type CaSpkParams
- type ClampParams
- type CorSimStats
- type DecayParams
- type DendParams
- type DtParams
- func (dp *DtParams) AvgVarUpdt(avg, vr *float32, val float32)
- func (dp *DtParams) Defaults()
- func (dp *DtParams) GeSynFmRaw(geSyn, geRaw float32) float32
- func (dp *DtParams) GeSynFmRawSteady(geRaw float32) float32
- func (dp *DtParams) GiSynFmRaw(giSyn, giRaw float32) float32
- func (dp *DtParams) GiSynFmRawSteady(giRaw float32) float32
- func (dp *DtParams) Update()
- type GScaleVals
- type HebbPrjn
- type InhibParams
- type Layer
- func (ly *Layer) AdaptInhib(ctime *Time)
- func (ly *Layer) AllParams() string
- func (ly *Layer) ApplyExt(ext etensor.Tensor)
- func (ly *Layer) ApplyExt1D(ext []float64)
- func (ly *Layer) ApplyExt1D32(ext []float32)
- func (ly *Layer) ApplyExt1DTsr(ext etensor.Tensor)
- func (ly *Layer) ApplyExt2D(ext etensor.Tensor)
- func (ly *Layer) ApplyExt2Dto4D(ext etensor.Tensor)
- func (ly *Layer) ApplyExt4D(ext etensor.Tensor)
- func (ly *Layer) ApplyExtFlags() (clrmsk, setmsk int32, toTarg bool)
- func (ly *Layer) AsAxon() *Layer
- func (ly *Layer) AvgDifFmTrgAvg()
- func (ly *Layer) AvgGeM(ctime *Time)
- func (ly *Layer) AvgMaxVarByPool(varNm string, poolIdx int) minmax.AvgMax32
- func (ly *Layer) Build() error
- func (ly *Layer) BuildPools(nu int) error
- func (ly *Layer) BuildPrjns() error
- func (ly *Layer) BuildSubPools()
- func (ly *Layer) ClearTargExt()
- func (ly *Layer) CorSimFmActs()
- func (ly *Layer) CostEst() (neur, syn, tot int)
- func (ly *Layer) CycleNeuron(ni int, nrn *Neuron, ctime *Time)
- func (ly *Layer) CyclePost(ctime *Time)
- func (ly *Layer) DTrgAvgFmErr()
- func (ly *Layer) DTrgSubMean()
- func (ly *Layer) DWtLayer(ctime *Time)
- func (ly *Layer) DecayCaLrnSpk(decay float32)
- func (ly *Layer) DecayState(decay, glong float32)
- func (ly *Layer) DecayStatePool(pool int, decay, glong float32)
- func (ly *Layer) Defaults()
- func (ly *Layer) GFmRawSyn(ni int, nrn *Neuron, ctime *Time)
- func (ly *Layer) GFmSpikeRaw(ni int, nrn *Neuron, ctime *Time)
- func (ly *Layer) GInteg(ni int, nrn *Neuron, ctime *Time)
- func (ly *Layer) GiFmSpikes(ctime *Time)
- func (ly *Layer) GiInteg(ni int, nrn *Neuron, ctime *Time)
- func (ly *Layer) HasPoolInhib() bool
- func (ly *Layer) InitActAvg()
- func (ly *Layer) InitActs()
- func (ly *Layer) InitExt()
- func (ly *Layer) InitGScale()
- func (ly *Layer) InitPrjnGBuffs()
- func (ly *Layer) InitWtSym()
- func (ly *Layer) InitWts()
- func (ly *Layer) IsInput() bool
- func (ly *Layer) IsInputOrTarget() bool
- func (ly *Layer) IsLearnTrgAvg() bool
- func (ly *Layer) IsTarget() bool
- func (ly *Layer) LesionNeurons(prop float32) int
- func (ly *Layer) LocalistErr2D() (err bool, minusIdx, plusIdx int)
- func (ly *Layer) LocalistErr4D() (err bool, minusIdx, plusIdx int)
- func (ly *Layer) LrateMod(mod float32)
- func (ly *Layer) LrateSched(sched float32)
- func (ly *Layer) MinusPhase(ctime *Time)
- func (ly *Layer) NewState()
- func (ly *Layer) PctUnitErr() float64
- func (ly *Layer) PlusPhase(ctime *Time)
- func (ly *Layer) Pool(idx int) *Pool
- func (ly *Layer) PoolGiFmSpikes(ctime *Time)
- func (ly *Layer) PoolTry(idx int) (*Pool, error)
- func (ly *Layer) PostAct(ni int, nrn *Neuron, ctime *Time)
- func (ly *Layer) ReadWtsJSON(r io.Reader) error
- func (ly *Layer) RecvPrjnVals(vals *[]float32, varNm string, sendLay emer.Layer, sendIdx1D int, ...) error
- func (ly *Layer) SendPrjnVals(vals *[]float32, varNm string, recvLay emer.Layer, recvIdx1D int, ...) error
- func (ly *Layer) SendSpikes(ni int, nrn *Neuron, ctime *Time)
- func (ly *Layer) SetSubMean(trgAvg, prjn float32)
- func (ly *Layer) SetWts(lw *weights.Layer) error
- func (ly *Layer) SlowAdapt(ctime *Time)
- func (ly *Layer) SpikeFmG(ni int, nrn *Neuron, ctime *Time)
- func (ly *Layer) SpkSt1(ctime *Time)
- func (ly *Layer) SpkSt2(ctime *Time)
- func (ly *Layer) SynFail(ctime *Time)
- func (ly *Layer) TargToExt()
- func (ly *Layer) TrgAvgFmD()
- func (ly *Layer) UnLesionNeurons()
- func (ly *Layer) UnitVal(varNm string, idx []int) float32
- func (ly *Layer) UnitVal1D(varIdx int, idx int) float32
- func (ly *Layer) UnitVals(vals *[]float32, varNm string) error
- func (ly *Layer) UnitValsRepTensor(tsr etensor.Tensor, varNm string) error
- func (ly *Layer) UnitValsTensor(tsr etensor.Tensor, varNm string) error
- func (ly *Layer) UnitVarIdx(varNm string) (int, error)
- func (ly *Layer) UnitVarNames() []string
- func (ly *Layer) UnitVarNum() int
- func (ly *Layer) UnitVarProps() map[string]string
- func (ly *Layer) UpdateExtFlags()
- func (ly *Layer) UpdateParams()
- func (ly *Layer) VarRange(varNm string) (min, max float32, err error)
- func (ly *Layer) WriteWtsJSON(w io.Writer, depth int)
- func (ly *Layer) WtFmDWtLayer(ctime *Time)
- type LayerBase
- func (ls *LayerBase) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (ls *LayerBase) Class() string
- func (ls *LayerBase) Config(shape []int, typ emer.LayerType)
- func (ls *LayerBase) Idx4DFrom2D(x, y int) ([]int, bool)
- func (ls *LayerBase) Index() int
- func (ls *LayerBase) InitName(lay emer.Layer, name string, net emer.Network)
- func (ls *LayerBase) Is2D() bool
- func (ls *LayerBase) Is4D() bool
- func (ls *LayerBase) IsOff() bool
- func (ls *LayerBase) Label() string
- func (ls *LayerBase) NPools() int
- func (ls *LayerBase) NRecvPrjns() int
- func (ls *LayerBase) NSendPrjns() int
- func (ls *LayerBase) Name() string
- func (ls *LayerBase) NeurStartIdx() int
- func (ls *LayerBase) NonDefaultParams() string
- func (ls *LayerBase) Pos() mat32.Vec3
- func (ls *LayerBase) RecipToSendPrjn(spj emer.Prjn) (emer.Prjn, bool)
- func (ls *LayerBase) RecvPrjn(idx int) emer.Prjn
- func (ls *LayerBase) RecvPrjns() *emer.Prjns
- func (ls *LayerBase) RelPos() relpos.Rel
- func (ls *LayerBase) RepIdxs() []int
- func (ls *LayerBase) RepShape() *etensor.Shape
- func (ls *LayerBase) SendPrjn(idx int) emer.Prjn
- func (ls *LayerBase) SendPrjns() *emer.Prjns
- func (ls *LayerBase) SetClass(cls string)
- func (ls *LayerBase) SetIndex(idx int)
- func (ls *LayerBase) SetName(nm string)
- func (ls *LayerBase) SetOff(off bool)
- func (ls *LayerBase) SetPos(pos mat32.Vec3)
- func (ls *LayerBase) SetRelPos(rel relpos.Rel)
- func (ls *LayerBase) SetRepIdxsShape(idxs, shape []int)
- func (ls *LayerBase) SetShape(shape []int)
- func (ls *LayerBase) SetThread(thr int)
- func (ls *LayerBase) SetType(typ emer.LayerType)
- func (ls *LayerBase) Shape() *etensor.Shape
- func (ls *LayerBase) Size() mat32.Vec2
- func (ls *LayerBase) Thread() int
- func (ls *LayerBase) Type() emer.LayerType
- func (ls *LayerBase) TypeName() string
- type LearnNeurParams
- func (ln *LearnNeurParams) CaFmSpike(nrn *Neuron)
- func (ln *LearnNeurParams) DecayCaLrnSpk(nrn *Neuron, decay float32)
- func (ln *LearnNeurParams) Defaults()
- func (ln *LearnNeurParams) InitNeurCa(nrn *Neuron)
- func (ln *LearnNeurParams) LrnNMDAFmRaw(nrn *Neuron, geTot float32)
- func (ln *LearnNeurParams) Update()
- type LearnSynParams
- type LrateMod
- type LrateParams
- type NMDAPrjn
- type NetThread
- type NetThreads
- type Network
- func (nt *Network) AsAxon() *Network
- func (nt *Network) ClearTargExt()
- func (nt *Network) CollectDWts(dwts *[]float32) bool
- func (nt *Network) Cycle(ctime *Time)
- func (nt *Network) CycleImpl(ctime *Time)
- func (nt *Network) DWt(ctime *Time)
- func (nt *Network) DWtImpl(ctime *Time)
- func (nt *Network) DecayState(decay, glong float32)
- func (nt *Network) DecayStateByClass(decay, glong float32, class ...string)
- func (nt *Network) Defaults()
- func (nt *Network) InitActs()
- func (nt *Network) InitExt()
- func (nt *Network) InitGScale()
- func (nt *Network) InitTopoSWts()
- func (nt *Network) InitWts()
- func (nt *Network) LayersSetOff(off bool)
- func (nt *Network) LrateMod(mod float32)
- func (nt *Network) LrateSched(sched float32)
- func (nt *Network) MinusPhase(ctime *Time)
- func (nt *Network) MinusPhaseImpl(ctime *Time)
- func (nt *Network) NewLayer() emer.Layer
- func (nt *Network) NewPrjn() emer.Prjn
- func (nt *Network) NewState()
- func (nt *Network) NewStateImpl()
- func (nt *Network) PlusPhase(ctime *Time)
- func (nt *Network) PlusPhaseImpl(ctime *Time)
- func (nt *Network) SetDWts(dwts []float32, navg int)
- func (nt *Network) SetSubMean(trgAvg, prjn float32)
- func (nt *Network) SizeReport() string
- func (nt *Network) SlowAdapt(ctime *Time)
- func (nt *Network) SpkSt1(ctime *Time)
- func (nt *Network) SpkSt2(ctime *Time)
- func (nt *Network) SynFail(ctime *Time)
- func (nt *Network) SynVarNames() []string
- func (nt *Network) SynVarProps() map[string]string
- func (nt *Network) TargToExt()
- func (nt *Network) UnLesionNeurons()
- func (nt *Network) UnitVarNames() []string
- func (nt *Network) UnitVarProps() map[string]string
- func (nt *Network) UpdateExtFlags()
- func (nt *Network) UpdateParams()
- func (nt *Network) WtFmDWt(ctime *Time)
- func (nt *Network) WtFmDWtImpl(ctime *Time)
- type NetworkBase
- func (nt *NetworkBase) AddLayer(name string, shape []int, typ emer.LayerType) emer.Layer
- func (nt *NetworkBase) AddLayer2D(name string, shapeY, shapeX int, typ emer.LayerType) emer.Layer
- func (nt *NetworkBase) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ emer.LayerType) emer.Layer
- func (nt *NetworkBase) AddLayerInit(ly emer.Layer, name string, shape []int, typ emer.LayerType)
- func (nt *NetworkBase) AllParams() string
- func (nt *NetworkBase) AllPrjnScales() string
- func (nt *NetworkBase) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (nt *NetworkBase) BidirConnectLayerNames(low, high string, pat prjn.Pattern) (lowlay, highlay emer.Layer, fwdpj, backpj emer.Prjn, err error)
- func (nt *NetworkBase) BidirConnectLayers(low, high emer.Layer, pat prjn.Pattern) (fwdpj, backpj emer.Prjn)
- func (nt *NetworkBase) BidirConnectLayersPy(low, high emer.Layer, pat prjn.Pattern)
- func (nt *NetworkBase) Bounds() (min, max mat32.Vec3)
- func (nt *NetworkBase) BoundsUpdt()
- func (nt *NetworkBase) Build() error
- func (nt *NetworkBase) ConnectLayerNames(send, recv string, pat prjn.Pattern, typ emer.PrjnType) (rlay, slay emer.Layer, pj emer.Prjn, err error)
- func (nt *NetworkBase) ConnectLayers(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType) emer.Prjn
- func (nt *NetworkBase) ConnectLayersPrjn(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType, pj emer.Prjn) emer.Prjn
- func (nt *NetworkBase) DeleteAll()
- func (nt *NetworkBase) FunTimerStart(fun string)
- func (nt *NetworkBase) FunTimerStop(fun string)
- func (nt *NetworkBase) InitName(net emer.Network, name string)
- func (nt *NetworkBase) Label() string
- func (nt *NetworkBase) LateralConnectLayer(lay emer.Layer, pat prjn.Pattern) emer.Prjn
- func (nt *NetworkBase) LateralConnectLayerPrjn(lay emer.Layer, pat prjn.Pattern, pj emer.Prjn) emer.Prjn
- func (nt *NetworkBase) Layer(idx int) emer.Layer
- func (nt *NetworkBase) LayerByName(name string) emer.Layer
- func (nt *NetworkBase) LayerByNameTry(name string) (emer.Layer, error)
- func (nt *NetworkBase) LayerFun(fun func(ly AxonLayer), funame string, thread, wait bool)
- func (nt *NetworkBase) LayersByClass(classes ...string) []string
- func (nt *NetworkBase) Layout()
- func (nt *NetworkBase) MakeLayMap()
- func (nt *NetworkBase) NLayers() int
- func (nt *NetworkBase) Name() string
- func (nt *NetworkBase) NeuronFun(fun func(ly AxonLayer, ni int, nrn *Neuron), funame string, thread, wait bool)
- func (nt *NetworkBase) NonDefaultParams() string
- func (nt *NetworkBase) OpenWtsCpp(filename gi.FileName) error
- func (nt *NetworkBase) OpenWtsJSON(filename gi.FileName) error
- func (nt *NetworkBase) PrjnFun(fun func(pj AxonPrjn), funame string, thread, wait bool)
- func (nt *NetworkBase) ReadWtsCpp(r io.Reader) error
- func (nt *NetworkBase) ReadWtsJSON(r io.Reader) error
- func (nt *NetworkBase) SaveWtsJSON(filename gi.FileName) error
- func (nt *NetworkBase) SendSpikeFun(fun func(ly AxonLayer, ni int, nrn *Neuron), funame string, thread, wait bool)
- func (nt *NetworkBase) SetWts(nw *weights.Network) error
- func (nt *NetworkBase) StdVertLayout()
- func (nt *NetworkBase) SynCaFun(fun func(pj AxonPrjn), funame string, thread, wait bool)
- func (nt *NetworkBase) ThreadsAlloc()
- func (nt *NetworkBase) TimerReport()
- func (nt *NetworkBase) VarRange(varNm string) (min, max float32, err error)
- func (nt *NetworkBase) WriteWtsJSON(w io.Writer) error
- type Neuron
- func (nrn *Neuron) ClearFlag(flag NeuronFlags)
- func (nrn *Neuron) ClearMask(mask int32)
- func (nrn *Neuron) HasFlag(flag NeuronFlags) bool
- func (nrn *Neuron) IsOff() bool
- func (nrn *Neuron) SetFlag(flag NeuronFlags)
- func (nrn *Neuron) SetMask(mask int32)
- func (nrn *Neuron) VarByIndex(idx int) float32
- func (nrn *Neuron) VarByName(varNm string) (float32, error)
- func (nrn *Neuron) VarNames() []string
- type NeuronFlags
- type Pool
- type Prjn
- func (pj *Prjn) AllParams() string
- func (pj *Prjn) AsAxon() *Prjn
- func (pj *Prjn) Build() error
- func (pj *Prjn) BuildGBuffs()
- func (pj *Prjn) DWt(ctime *Time)
- func (pj *Prjn) DWtNeurSpkTheta(ctime *Time)
- func (pj *Prjn) DWtSubMean(ctime *Time)
- func (pj *Prjn) DWtSynSpkTheta(ctime *Time)
- func (pj *Prjn) DWtTraceNeurSpkTheta(ctime *Time)
- func (pj *Prjn) DWtTraceSynSpkTheta(ctime *Time)
- func (pj *Prjn) Defaults()
- func (pj *Prjn) GFmSpikes(ctime *Time)
- func (pj *Prjn) InitGBuffs()
- func (pj *Prjn) InitWtSym(rpjp AxonPrjn)
- func (pj *Prjn) InitWts()
- func (pj *Prjn) InitWtsSyn(sy *Synapse, mean, spct float32)
- func (pj *Prjn) LrateMod(mod float32)
- func (pj *Prjn) LrateSched(sched float32)
- func (pj *Prjn) ReadWtsJSON(r io.Reader) error
- func (pj *Prjn) RecvSynCa(ctime *Time)
- func (pj *Prjn) SWtFmWt()
- func (pj *Prjn) SWtRescale()
- func (pj *Prjn) SendSpikes(sendIdx int)
- func (pj *Prjn) SendSynCa(ctime *Time)
- func (pj *Prjn) SetClass(cls string) emer.Prjn
- func (pj *Prjn) SetPattern(pat prjn.Pattern) emer.Prjn
- func (pj *Prjn) SetSWtsFunc(swtFun func(si, ri int, send, recv *etensor.Shape) float32)
- func (pj *Prjn) SetSWtsRPool(swts etensor.Tensor)
- func (pj *Prjn) SetSynVal(varNm string, sidx, ridx int, val float32) error
- func (pj *Prjn) SetType(typ emer.PrjnType) emer.Prjn
- func (pj *Prjn) SetWts(pw *weights.Prjn) error
- func (pj *Prjn) SetWtsFunc(wtFun func(si, ri int, send, recv *etensor.Shape) float32)
- func (pj *Prjn) SlowAdapt(ctime *Time)
- func (pj *Prjn) Syn1DNum() int
- func (pj *Prjn) SynFail(ctime *Time)
- func (pj *Prjn) SynIdx(sidx, ridx int) int
- func (pj *Prjn) SynScale()
- func (pj *Prjn) SynVal(varNm string, sidx, ridx int) float32
- func (pj *Prjn) SynVal1D(varIdx int, synIdx int) float32
- func (pj *Prjn) SynVals(vals *[]float32, varNm string) error
- func (pj *Prjn) SynVarIdx(varNm string) (int, error)
- func (pj *Prjn) SynVarNames() []string
- func (pj *Prjn) SynVarNum() int
- func (pj *Prjn) SynVarProps() map[string]string
- func (pj *Prjn) UpdateParams()
- func (pj *Prjn) WriteWtsJSON(w io.Writer, depth int)
- func (pj *Prjn) WtFmDWt(ctime *Time)
- type PrjnBase
- func (ps *PrjnBase) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (ps *PrjnBase) BuildBase() error
- func (ps *PrjnBase) Class() string
- func (ps *PrjnBase) Connect(slay, rlay emer.Layer, pat prjn.Pattern, typ emer.PrjnType)
- func (ps *PrjnBase) Init(prjn emer.Prjn)
- func (ps *PrjnBase) IsOff() bool
- func (ps *PrjnBase) Label() string
- func (ps *PrjnBase) Name() string
- func (ps *PrjnBase) NonDefaultParams() string
- func (ps *PrjnBase) Pattern() prjn.Pattern
- func (ps *PrjnBase) PrjnTypeName() string
- func (ps *PrjnBase) RecvLay() emer.Layer
- func (ps *PrjnBase) SendLay() emer.Layer
- func (ps *PrjnBase) SetNIdxSt(n *[]int32, avgmax *minmax.AvgMax32, idxst *[]int32, tn *etensor.Int32) int32
- func (ps *PrjnBase) SetOff(off bool)
- func (ps *PrjnBase) String() string
- func (ps *PrjnBase) Type() emer.PrjnType
- func (ps *PrjnBase) TypeName() string
- func (ps *PrjnBase) Validate(logmsg bool) error
- type PrjnGVals
- type PrjnScaleParams
- type PrjnType
- type RLrateParams
- type SWtAdaptParams
- type SWtInitParams
- type SWtParams
- func (sp *SWtParams) ClipSWt(swt float32) float32
- func (sp *SWtParams) ClipWt(wt float32) float32
- func (sp *SWtParams) Defaults()
- func (sp *SWtParams) InitWtsSyn(sy *Synapse, mean, spct float32)
- func (sp *SWtParams) LWtFmWts(wt, swt float32) float32
- func (sp *SWtParams) LinFmSigWt(wt float32) float32
- func (sp *SWtParams) SigFmLinWt(lw float32) float32
- func (sp *SWtParams) Update()
- func (sp *SWtParams) WtFmDWt(dwt, wt, lwt *float32, swt float32)
- func (sp *SWtParams) WtVal(swt, lwt float32) float32
- type SpikeNoiseParams
- type SpikeParams
- type StartEnd
- type SynComParams
- type Synapse
- type Time
- type TopoInhibParams
- type TraceParams
- type TrgAvgActParams
- type WorkMgr
Constants ¶
const (
	// Thread is named const for actually using threads
	Thread = true

	// NoThread is named const for not using threads
	NoThread = false

	// Wait is named const for waiting for all go routines
	Wait = true

	// NoWait is named const for NOT waiting for all go routines
	NoWait = false
)
const (
	Version     = "v1.6.12"
	GitCommit   = "ec1eaed"          // the commit JUST BEFORE the release
	VersionDate = "2022-12-06 06:24" // UTC
)
const (
	// NMDAPrjn are projections that have strong NMDA channels supporting maintenance
	NMDA emer.PrjnType = emer.PrjnType(emer.PrjnTypeN) + iota
)
The GLong prjn types
const NeuronVarStart = 3
NeuronVarStart is the starting field index where the float32 variables begin; all variables prior to this must be 32-bit (int32). Note: all non-float32 infrastructure variables must be at the start!
const SynapseVarStart = 4
SynapseVarStart is the byte offset of fields in the Synapse structure where the float32 named variables start. Note: all non-float32 infrastructure variables must be at the start!
Variables ¶
var GreedyChunks = true
GreedyChunks selects a greedy chunk running mode -- otherwise goroutines are just spawned willy-nilly
var KiT_Layer = kit.Types.AddType(&Layer{}, LayerProps)
var KiT_NMDAPrjn = kit.Types.AddType(&NMDAPrjn{}, PrjnProps)
var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
var KiT_NeurFlags = kit.Enums.AddEnum(NeuronFlagsNum, kit.BitFlag, nil)
var KiT_Prjn = kit.Types.AddType(&Prjn{}, PrjnProps)
var KiT_PrjnType = kit.Enums.AddEnumExt(emer.KiT_PrjnType, PrjnTypeN, kit.NotBitFlag, nil)
var LayerProps = ki.Props{ "ToolBar": ki.PropSlice{ {"Defaults", ki.Props{ "icon": "reset", "desc": "return all parameters to their intial default values", }}, {"InitWts", ki.Props{ "icon": "update", "desc": "initialize the layer's weight values according to prjn parameters, for all *sending* projections out of this layer", }}, {"InitActs", ki.Props{ "icon": "update", "desc": "initialize the layer's activation values", }}, {"sep-act", ki.BlankProp{}}, {"LesionNeurons", ki.Props{ "icon": "close", "desc": "Lesion (set the Off flag) for given proportion of neurons in the layer (number must be 0 -- 1, NOT percent!)", "Args": ki.PropSlice{ {"Proportion", ki.Props{ "desc": "proportion (0 -- 1) of neurons to lesion", }}, }, }}, {"UnLesionNeurons", ki.Props{ "icon": "reset", "desc": "Un-Lesion (reset the Off flag) for all neurons in the layer", }}, }, }
var NetworkProps = ki.Props{ "ToolBar": ki.PropSlice{ {"SaveWtsJSON", ki.Props{ "label": "Save Wts...", "icon": "file-save", "desc": "Save json-formatted weights", "Args": ki.PropSlice{ {"Weights File Name", ki.Props{ "default-field": "WtsFile", "ext": ".wts,.wts.gz", }}, }, }}, {"OpenWtsJSON", ki.Props{ "label": "Open Wts...", "icon": "file-open", "desc": "Open json-formatted weights", "Args": ki.PropSlice{ {"Weights File Name", ki.Props{ "default-field": "WtsFile", "ext": ".wts,.wts.gz", }}, }, }}, {"sep-file", ki.BlankProp{}}, {"Build", ki.Props{ "icon": "update", "desc": "build the network's neurons and synapses according to current params", }}, {"InitWts", ki.Props{ "icon": "update", "desc": "initialize the network weight values according to prjn parameters", }}, {"InitActs", ki.Props{ "icon": "update", "desc": "initialize the network activation values", }}, {"sep-act", ki.BlankProp{}}, {"AddLayer", ki.Props{ "label": "Add Layer...", "icon": "new", "desc": "add a new layer to network", "Args": ki.PropSlice{ {"Layer Name", ki.Props{}}, {"Layer Shape", ki.Props{ "desc": "shape of layer, typically 2D (Y, X) or 4D (Pools Y, Pools X, Units Y, Units X)", }}, {"Layer Type", ki.Props{ "desc": "type of layer -- used for determining how inputs are applied", }}, }, }}, {"ConnectLayerNames", ki.Props{ "label": "Connect Layers...", "icon": "new", "desc": "add a new connection between layers in the network", "Args": ki.PropSlice{ {"Send Layer Name", ki.Props{}}, {"Recv Layer Name", ki.Props{}}, {"Pattern", ki.Props{ "desc": "pattern to connect with", }}, {"Prjn Type", ki.Props{ "desc": "type of projection -- direction, or other more specialized factors", }}, }, }}, {"AllPrjnScales", ki.Props{ "icon": "file-sheet", "desc": "AllPrjnScales returns a listing of all PrjnScale parameters in the Network in all Layers, Recv projections. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.", "show-return": true, }}, }, }
var NeuronVarProps = map[string]string{
"GeSyn": `range:"2"`,
"Ge": `range:"2"`,
"GeM": `range:"2"`,
"Vm": `min:"0" max:"1"`,
"VmDend": `min:"0" max:"1"`,
"ISI": `auto-scale:"+"`,
"ISIAvg": `auto-scale:"+"`,
"Gi": `auto-scale:"+"`,
"Gk": `auto-scale:"+"`,
"ActDel": `auto-scale:"+"`,
"ActDiff": `auto-scale:"+"`,
"RLrate": `auto-scale:"+"`,
"AvgPct": `range:"2"`,
"TrgAvg": `range:"2"`,
"DTrgAvg": `auto-scale:"+"`,
"MahpN": `auto-scale:"+"`,
"GknaMed": `auto-scale:"+"`,
"GknaSlow": `auto-scale:"+"`,
"Gnmda": `auto-scale:"+"`,
"GnmdaSyn": `auto-scale:"+"`,
"GnmdaLrn": `auto-scale:"+"`,
"NmdaCa": `auto-scale:"+"`,
"GgabaB": `auto-scale:"+"`,
"GABAB": `auto-scale:"+"`,
"GABABx": `auto-scale:"+"`,
"Gvgcc": `auto-scale:"+"`,
"VgccCa": `auto-scale:"+"`,
"VgccCaInt": `auto-scale:"+"`,
"Gak": `auto-scale:"+"`,
"SSGi": `auto-scale:"+"`,
"SSGiDend": `auto-scale:"+"`,
}
var NeuronVars = []string{}
var NeuronVarsMap map[string]int
var PrjnProps = ki.Props{ "EnumType:Typ": KiT_PrjnType, }
var SynapseVarProps = map[string]string{
"DWt": `auto-scale:"+"`,
"DSWt": `auto-scale:"+"`,
"CaM": `auto-scale:"+"`,
"CaP": `auto-scale:"+"`,
"CaD": `auto-scale:"+"`,
"Tr": `auto-scale:"+"`,
}
var SynapseVars = []string{"Wt", "LWt", "SWt", "DWt", "DSWt", "Ca", "CaM", "CaP", "CaD", "Tr"}
var SynapseVarsMap map[string]int
Functions ¶
func DecaySynCa ¶ added in v1.3.21
DecaySynCa decays synaptic calcium by the given factor (between trials). Not used by default.
func EnvApplyInputs ¶ added in v1.3.36
EnvApplyInputs applies input patterns from given env.Env environment to Input and Target layer types, assuming that env provides State with the same names as the layers. If these assumptions don't fit, use a separate method.
func InitSynCa ¶ added in v1.3.21
func InitSynCa(sy *Synapse)
InitSynCa initializes synaptic calcium state, including CaUpT
func JsonToParams ¶
JsonToParams reformats json output to suitable params display output
func LogAddCaLrnDiagnosticItems ¶ added in v1.5.3
LogAddCaLrnDiagnosticItems adds standard Axon diagnostic statistics to given logs, across two given time levels, in higher to lower order, e.g., Epoch, Trial. These were useful for the development of the Ca-based "trace" learning rule that directly uses NMDA and VGCC-like spiking Ca.
func LogAddDiagnosticItems ¶ added in v1.3.35
LogAddDiagnosticItems adds standard Axon diagnostic statistics to given logs, across two given time levels, in higher to lower order, e.g., Epoch, Trial. These are useful for tuning and diagnosing the behavior of the network.
func LogAddExtraDiagnosticItems ¶ added in v1.5.8
LogAddExtraDiagnosticItems adds extra Axon diagnostic statistics to given logs, across two given time levels, in higher to lower order, e.g., Epoch, Trial. These are useful for tuning and diagnosing the behavior of the network.
func LogAddLayerGeActAvgItems ¶ added in v1.3.35
LogAddLayerGeActAvgItems adds Ge and Act average items for Hidden and Target layers for given mode and time (e.g., Test, Cycle). These are useful for monitoring layer activity during testing.
func LogAddPCAItems ¶ added in v1.3.35
LogAddPCAItems adds PCA statistics to the log for Hidden and Target layers across 3 given time levels, in higher to lower order, e.g., Run, Epoch, Trial. These are useful for diagnosing the behavior of the network.
func LogTestErrors ¶ added in v1.3.35
LogTestErrors records all errors made across TestTrials, at Test Epoch scope
func LooperResetLogBelow ¶ added in v1.3.35
LooperResetLogBelow adds a function in OnStart to all stacks and loops to reset the log at the level below each loop -- this is good default behavior.
func LooperSimCycleAndLearn ¶ added in v1.3.35
func LooperSimCycleAndLearn(man *looper.Manager, net *Network, time *Time, viewupdt *netview.ViewUpdt)
LooperSimCycleAndLearn adds Cycle and DWt, WtFmDWt functions to looper for given network, time, and netview update manager
func LooperStdPhases ¶ added in v1.3.35
LooperStdPhases adds the minus and plus phases of the theta cycle, along with embedded beta phases, which in this case just record St1 and St2 activity. plusStart is the start of the plus phase, typically 150, and plusEnd is the end of the plus phase, typically 199. The state is reset at the start of the trial.
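A hedged sketch of the typical looper configuration using these helpers, following the pattern in the axon example sims; the stack sizes and the configLoops wrapper are assumptions:

// import (
//	"github.com/emer/axon/axon"
//	"github.com/emer/emergent/elog"
//	"github.com/emer/emergent/etime"
//	"github.com/emer/emergent/looper"
//	"github.com/emer/emergent/netview"
// )

func configLoops(net *axon.Network, ctime *axon.Time, logs *elog.Logs, viewupdt *netview.ViewUpdt) *looper.Manager {
	man := looper.NewManager()
	man.AddStack(etime.Train).AddTime(etime.Run, 5).AddTime(etime.Epoch, 100).
		AddTime(etime.Trial, 25).AddTime(etime.Cycle, 200)
	axon.LooperStdPhases(man, ctime, net, 150, 199)        // plus phase = cycles 150-199
	axon.LooperSimCycleAndLearn(man, net, ctime, viewupdt) // Cycle, DWt, WtFmDWt at the right points
	axon.LooperResetLogBelow(man, logs)
	return man
}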
func LooperUpdtNetView ¶ added in v1.3.35
LooperUpdtNetView adds netview update calls at each time level
func LooperUpdtPlots ¶ added in v1.3.35
LooperUpdtPlots adds plot update calls at each time level
func NeuronVarIdxByName ¶
NeuronVarIdxByName returns the index of the variable in the Neuron, or error
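For example (a sketch; the readAct helper and the "Act" variable name are assumptions, and the layer-level UnitVarIdx method is assumed to perform the same lookup):

func readAct(ly *axon.Layer) float32 {
	vi, err := ly.UnitVarIdx("Act") // layer-level wrapper around the neuron variable lookup
	if err != nil {
		return 0
	}
	return ly.UnitVal1D(vi, 0) // "Act" value for the first neuron in the layer
}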
func PCAStats ¶ added in v1.3.35
PCAStats computes PCA statistics on recorded hidden activation patterns from Analyze, Trial log data
func SaveWeights ¶ added in v1.3.29
SaveWeights saves network weights to a filename with WeightsFileName information to identify the weights. Only runs on rank 0 if running under MPI.
func SaveWeightsIfArgSet ¶ added in v1.3.35
SaveWeightsIfArgSet saves network weights if the "wts" arg has been set to true. Uses WeightsFileName information to identify the weights. Only runs on rank 0 if running under MPI.
func SigFun61 ¶
SigFun61 is the sigmoid function for value w in 0-1 range, with default gain = 6, offset = 1 params
func SigInvFun61 ¶
SigInvFun61 is the inverse of the sigmoid function, with default gain = 6, offset = 1 params
func SynapseVarByName ¶
SynapseVarByName returns the index of the variable in the Synapse, or error
func ToggleLayersOff ¶ added in v1.3.29
ToggleLayersOff can be used to disable layers in a Network, for example if you are doing an ablation study.
func WeightsFileName ¶ added in v1.3.35
WeightsFileName returns the default current weights file name, using the train run and epoch counters from looper and the RunName string identifying the tag, parameters, and starting run.
Types ¶
type ActAvgParams ¶
type ActAvgParams struct { Init float32 `` /* 168-byte string literal not displayed */ InhTau float32 `` /* 249-byte string literal not displayed */ AdaptGi bool `` /* 318-byte string literal not displayed */ Target float32 `` /* 151-byte string literal not displayed */ HiTol float32 `` /* 265-byte string literal not displayed */ LoTol float32 `` /* 265-byte string literal not displayed */ AdaptRate float32 `` /* 238-byte string literal not displayed */ InhDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"` }
ActAvgParams represents expected average activity levels in the layer. Specifies the expected average activity used for G scaling. Also specifies time constant for updating a longer-term running average and for adapting inhibition levels dynamically over time.
func (*ActAvgParams) Adapt ¶ added in v1.2.37
func (aa *ActAvgParams) Adapt(gimult *float32, trg, act float32) bool
Adapt adapts the given gi multiplier factor as function of target and actual average activation, given current params.
func (*ActAvgParams) AvgFmAct ¶
func (aa *ActAvgParams) AvgFmAct(avg *float32, act float32, dt float32)
AvgFmAct updates the running-average activation given average activity level in layer
func (*ActAvgParams) Defaults ¶
func (aa *ActAvgParams) Defaults()
func (*ActAvgParams) Update ¶
func (aa *ActAvgParams) Update()
type ActAvgVals ¶ added in v1.2.32
type ActAvgVals struct { ActMAvg float32 `` /* 141-byte string literal not displayed */ ActPAvg float32 `inactive:"+" desc:"running-average plus-phase activity integrated at Dt.LongAvgTau"` AvgMaxGeM float32 `` /* 177-byte string literal not displayed */ AvgMaxGiM float32 `` /* 177-byte string literal not displayed */ GiMult float32 `inactive:"+" desc:"multiplier on inhibition -- adapted to maintain target activity level"` CaSpkPM minmax.AvgMax32 `inactive:"+" desc:"avg and maximum CaSpkP value in layer in the minus phase -- for monitoring network activity levels"` CaSpkP minmax.AvgMax32 `` /* 160-byte string literal not displayed */ CaSpkD minmax.AvgMax32 `` /* 160-byte string literal not displayed */ }
ActAvgVals are running-average activation levels used for Ge scaling and adaptive inhibition
type ActInitParams ¶
type ActInitParams struct { Vm float32 `def:"0.3" desc:"initial membrane potential -- see Erev.L for the resting potential (typically .3)"` Act float32 `def:"0" desc:"initial activation value -- typically 0"` Ge float32 `` /* 268-byte string literal not displayed */ Gi float32 `` /* 235-byte string literal not displayed */ GeVar float32 `` /* 167-byte string literal not displayed */ GiVar float32 `` /* 167-byte string literal not displayed */ }
ActInitParams are initial values for key network state variables. Initialized in InitActs called by InitWts, and provides target values for DecayState.
func (*ActInitParams) Defaults ¶
func (ai *ActInitParams) Defaults()
func (*ActInitParams) GeBase ¶ added in v1.5.1
func (ai *ActInitParams) GeBase() float32
GeBase returns the baseline Ge value: Ge + rand(GeVar) > 0
func (*ActInitParams) GiBase ¶ added in v1.5.1
func (ai *ActInitParams) GiBase() float32
GiBase returns the baseline Gi value: Gi + rand(GiVar) > 0
func (*ActInitParams) Update ¶
func (ai *ActInitParams) Update()
type ActParams ¶
type ActParams struct { Spike SpikeParams `view:"inline" desc:"Spiking function parameters"` Dend DendParams `view:"inline" desc:"dendrite-specific parameters"` Init ActInitParams `` /* 155-byte string literal not displayed */ Decay DecayParams `` /* 233-byte string literal not displayed */ Dt DtParams `view:"inline" desc:"time and rate constants for temporal derivatives / updating of activation state"` Gbar chans.Chans `view:"inline" desc:"[Defaults: 1, .2, 1, 1] maximal conductances levels for channels"` Erev chans.Chans `view:"inline" desc:"[Defaults: 1, .3, .25, .1] reversal potentials for each channel"` Clamp ClampParams `view:"inline" desc:"how external inputs drive neural activations"` Noise SpikeNoiseParams `view:"inline" desc:"how, where, when, and how much noise to add"` VmRange minmax.F32 `` /* 165-byte string literal not displayed */ Mahp chans.MahpParams `` /* 173-byte string literal not displayed */ Sahp chans.SahpParams `` /* 182-byte string literal not displayed */ KNa chans.KNaMedSlow `` /* 220-byte string literal not displayed */ NMDA chans.NMDAParams `` /* 252-byte string literal not displayed */ GABAB chans.GABABParams `view:"inline" desc:"GABA-B / GIRK channel parameters"` VGCC chans.VGCCParams `` /* 159-byte string literal not displayed */ AK chans.AKsParams `` /* 135-byte string literal not displayed */ Attn AttnParams `view:"inline" desc:"Attentional modulation parameters: how Attn modulates Ge"` }
axon.ActParams contains all the activation computation params and functions for basic Axon, at the neuron level. This is included in axon.Layer to drive the computation.
func (*ActParams) DecayState ¶
DecayState decays the activation state toward initial values in proportion to given decay parameter. Special case values such as Glong and KNa are also decayed with their separately parameterized values. Called with ac.Decay.Act by Layer during NewState
func (*ActParams) GeFmSyn ¶ added in v1.5.12
GeFmSyn integrates Ge excitatory conductance from GeSyn. geExt is extra conductance to add to the final Ge value
func (*ActParams) GiFmSyn ¶ added in v1.5.12
GiFmSyn integrates GiSyn inhibitory synaptic conductance from the GiRaw value (can add other terms to giRaw prior to calling this)
func (*ActParams) GkFmVm ¶ added in v1.6.0
GkFmVm updates all the Gk-based conductances: Mahp, KNa, Gak
func (*ActParams) GvgccFmVm ¶ added in v1.3.24
GvgccFmVm updates all the VGCC voltage-gated calcium channel variables from VmDend
func (*ActParams) InitActs ¶
InitActs initializes activation state in neuron -- called during InitWts but otherwise not automatically called (DecayState is used instead)
func (*ActParams) InitLongActs ¶ added in v1.2.66
InitLongActs initializes longer time-scale activation states in neuron (SpkPrv, SpkSt*, ActM, ActP, GeM) Called from InitActs, which is called from InitWts, but otherwise not automatically called (DecayState is used instead)
func (*ActParams) NMDAFmRaw ¶ added in v1.3.1
NMDAFmRaw updates all the NMDA variables from total Ge (GeRaw + Ext) and current Vm, Spiking
func (*ActParams) SpikeFmVm ¶ added in v1.6.12
SpikeFmVm computes Spike from Vm and ISI-based activation
func (*ActParams) Update ¶
func (ac *ActParams) Update()
Update must be called after any changes to parameters
type AttnParams ¶ added in v1.2.85
type AttnParams struct { On bool `desc:"is attentional modulation active?"` Min float32 `desc:"minimum act multiplier if attention is 0"` }
AttnParams determine how the Attn modulates Ge
func (*AttnParams) Defaults ¶ added in v1.2.85
func (at *AttnParams) Defaults()
func (*AttnParams) ModVal ¶ added in v1.2.85
func (at *AttnParams) ModVal(val float32, attn float32) float32
ModVal returns the attn-modulated value -- attn must be between 0 and 1
func (*AttnParams) Update ¶ added in v1.2.85
func (at *AttnParams) Update()
type AxonLayer ¶
type AxonLayer interface { emer.Layer // AsAxon returns this layer as a axon.Layer -- so that the AxonLayer // interface does not need to include accessors to all the basic stuff AsAxon() *Layer // NeurStartIdx is the starting index in global network slice of neurons for // neurons in this layer NeurStartIdx() int // InitWts initializes the weight values in the network, i.e., resetting learning // Also calls InitActs InitWts() // InitActAvg initializes the running-average activation values that drive learning. InitActAvg() // InitActs fully initializes activation state -- only called automatically during InitWts InitActs() // InitWtsSym initializes the weight symmetry -- higher layers copy weights from lower layers InitWtSym() // InitGScale computes the initial scaling factor for synaptic input conductances G, // stored in GScale.Scale, based on sending layer initial activation. InitGScale() // InitExt initializes external input state -- called prior to apply ext InitExt() // ApplyExt applies external input in the form of an etensor.Tensor // If the layer is a Target or Compare layer type, then it goes in Target // otherwise it goes in Ext. ApplyExt(ext etensor.Tensor) // ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats // If the layer is a Target or Compare layer type, then it goes in Target // otherwise it goes in Ext ApplyExt1D(ext []float64) // UpdateExtFlags updates the neuron flags for external input based on current // layer Type field -- call this if the Type has changed since the last // ApplyExt* method call. UpdateExtFlags() // IsTarget returns true if this layer is a Target layer. // By default, returns true for layers of Type == emer.Target // Other Target layers include the TRCLayer in deep predictive learning. // It is also used in SynScale to not apply it to target layers. // In both cases, Target layers are purely error-driven. IsTarget() bool // IsInput returns true if this layer is an Input layer. // By default, returns true for layers of Type == emer.Input // Used to prevent adapting of inhibition or TrgAvg values. IsInput() bool // NewState handles all initialization at start of new input pattern, // including computing Ge scaling from running average activation etc. // should already have presented the external input to the network at this point. NewState() // DecayState decays activation state by given proportion (default is on ly.Act.Init.Decay) DecayState(decay, glong float32) // GiFmSpikes integrates new inhibitory conductances from Spikes // at the layer and pool level GiFmSpikes(ctime *Time) // CycleNeuron does one cycle (msec) of updating at the neuron level // calls the following via this AxonLay interface: // * Ginteg // * SpikeFmG // * PostAct // * SendSpike CycleNeuron(ni int, nrn *Neuron, ctime *Time) // GInteg integrates conductances G over time (Ge, NMDA, etc). // reads pool Gi values GInteg(ni int, nrn *Neuron, ctime *Time) // SpikeFmG computes Vm from Ge, Gi, Gl conductances and then Spike from that SpikeFmG(ni int, nrn *Neuron, ctime *Time) // PostAct does updates at neuron level after activation (spiking) // updated for all neurons. // It is a hook for specialized algorithms -- empty at Axon base level PostAct(ni int, nrn *Neuron, ctime *Time) // SendSpike sends spike to receivers -- last step in Cycle, integrated // the next time around. // Writes to sending projections for this neuron. 
SendSpikes(ni int, nrn *Neuron, ctime *Time) // CyclePost is called after the standard Cycle update, as a separate // network layer loop. // This is reserved for any kind of special ad-hoc types that // need to do something special after Act is finally computed. // For example, sending a neuromodulatory signal such as dopamine. CyclePost(ctime *Time) // MinusPhase does updating after end of minus phase MinusPhase(ctime *Time) // PlusPhase does updating after end of plus phase PlusPhase(ctime *Time) // SpkSt1 saves current activations into SpkSt1 SpkSt1(ctime *Time) // SpkSt2 saves current activations into SpkSt2 SpkSt2(ctime *Time) // CorSimFmActs computes the correlation similarity // (centered cosine aka normalized dot product) // in activation state between minus and plus phases // (1 = identical, 0 = uncorrelated). CorSimFmActs() // DWtLayer does weight change at the layer level. // does NOT call main projection-level DWt method. // in base, only calls DTrgAvgFmErr DWtLayer(ctime *Time) // WtFmDWtLayer does weight update at the layer level. // does NOT call main projection-level WtFmDWt method. // in base, only calls TrgAvgFmD WtFmDWtLayer(ctime *Time) // SlowAdapt is the layer-level slow adaptation functions. // Calls AdaptInhib and AvgDifFmTrgAvg for Synaptic Scaling. // Does NOT call projection-level methods. SlowAdapt(ctime *Time) // SynFail updates synaptic weight failure only -- normally done as part of DWt // and WtFmDWt, but this call can be used during testing to update failing synapses. SynFail(ctime *Time) }
AxonLayer defines the essential algorithmic API for Axon, at the layer level. These are the methods that the axon.Network calls on its layers at each step of processing. Other Layer types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.
All of the structural API is in emer.Layer, which this interface also inherits for convenience.
type AxonNetwork ¶
type AxonNetwork interface { emer.Network // AsAxon returns this network as a axon.Network -- so that the // AxonNetwork interface does not need to include accessors // to all the basic stuff AsAxon() *Network // NewStateImpl handles all initialization at start of new input pattern, including computing // input scaling from running average activation etc. NewStateImpl() // Cycle handles entire update for one cycle (msec) of neuron activity state. CycleImpl(ctime *Time) // MinusPhaseImpl does updating after minus phase MinusPhaseImpl(ctime *Time) // PlusPhaseImpl does updating after plus phase PlusPhaseImpl(ctime *Time) // DWtImpl computes the weight change (learning) based on current // running-average activation values DWtImpl(ctime *Time) // WtFmDWtImpl updates the weights from delta-weight changes. // Also calls SynScale every Interval times WtFmDWtImpl(ctime *Time) // SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling, // GScale conductance scaling, and adapting inhibition SlowAdapt(ctime *Time) }
AxonNetwork defines the essential algorithmic API for Axon, at the network level. These are the methods that the user calls in their Sim code: * NewState * Cycle * NewPhase * DWt * WtFmDwt Because we don't want to have to force the user to use the interface cast in calling these methods, we provide Impl versions here that are the implementations which the user-facing method calls through the interface cast. Specialized algorithms should thus only change the Impl version, which is what is exposed here in this interface.
There is now a strong constraint that all Cycle level computation takes place in one pass at the Layer level, which greatly improves threading efficiency.
All of the structural API is in emer.Network, which this interface also inherits for convenience.
type AxonPrjn ¶
type AxonPrjn interface { emer.Prjn // AsAxon returns this prjn as a axon.Prjn -- so that the AxonPrjn // interface does not need to include accessors to all the basic stuff. AsAxon() *Prjn // InitWts initializes weight values according to Learn.WtInit params InitWts() // InitWtSym initializes weight symmetry -- is given the reciprocal projection where // the Send and Recv layers are reversed. InitWtSym(rpj AxonPrjn) // InitGBuffs initializes the per-projection synaptic conductance buffers. // This is not typically needed (called during InitWts, InitActs) // but can be called when needed. Must be called to completely initialize // prior activity, e.g., full Glong clearing. InitGBuffs() // SendSpike sends a spike from sending neuron index si, // to add to buffer on receivers. SendSpikes(si int) // GFmSpikes increments synaptic conductances from Spikes // including pooled aggregation of spikes into Pools for FS-FFFB inhib. GFmSpikes(ctime *Time) // SendSynCa updates synaptic calcium based on spiking, for SynSpkTheta mode. // Optimized version only updates at point of spiking. // This pass goes through in sending order, filtering on sending spike. SendSynCa(ctime *Time) // RecvSynCa updates synaptic calcium based on spiking, for SynSpkTheta mode. // Optimized version only updates at point of spiking. // This pass goes through in recv order, filtering on recv spike. RecvSynCa(ctime *Time) // DWt computes the weight change (learning) -- on sending projections. DWt(ctime *Time) // DWtSubMean subtracts the mean from any projections that have SubMean > 0. // This is called on *receiving* projections, prior to WtFmDwt. DWtSubMean(ctime *Time) // WtFmDWt updates the synaptic weight values from delta-weight changes, // on sending projections WtFmDWt(ctime *Time) // SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling, // GScale conductance scaling, and adapting inhibition SlowAdapt(ctime *Time) // SynFail updates synaptic weight failure only -- normally done as part of DWt // and WtFmDWt, but this call can be used during testing to update failing synapses. SynFail(ctime *Time) }
AxonPrjn defines the essential algorithmic API for Axon, at the projection level. These are the methods that the axon.Layer calls on its prjns at each step of processing. Other Prjn types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.
All of the structural API is in emer.Prjn, which this interface also inherits for convenience.
type CaLrnParams ¶ added in v1.5.1
type CaLrnParams struct { Norm float32 `` /* 188-byte string literal not displayed */ SpkVGCC bool `` /* 133-byte string literal not displayed */ SpkVgccCa float32 `def:"35" desc:"multiplier on spike for computing Ca contribution to CaLrn in SpkVGCC mode"` VgccTau float32 `` /* 268-byte string literal not displayed */ Dt kinase.CaDtParams `view:"inline" desc:"time constants for integrating CaLrn across M, P and D cascading levels"` VgccDt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"` NormInv float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"= 1 / Norm"` }
CaLrnParams parameterizes the neuron-level calcium signals driving learning: CaLrn = NMDA + VGCC Ca sources, where VGCC can be simulated from spiking or use the more complex and dynamic VGCC channel directly. CaLrn is then integrated in a cascading manner at multiple time scales: CaM (as in calmodulin), CaP (ltP, CaMKII, plus phase), CaD (ltD, DAPK1, minus phase).
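The cascaded integration can be sketched generically as follows (a conceptual sketch only, not the actual implementation; mDt, pDt, and dDt stand in for the kinase.CaDtParams rate constants):

// cascade is a conceptual sketch of the CaM -> CaP -> CaD cascade driven by CaLrn.
func cascade(caLrn float32, caM, caP, caD *float32, mDt, pDt, dDt float32) {
	*caM += mDt * (caLrn - *caM) // CaM: calmodulin-like fast integration of CaLrn
	*caP += pDt * (*caM - *caP)  // CaP: LTP / plus-phase-like integration of CaM
	*caD += dDt * (*caP - *caD)  // CaD: LTD / minus-phase-like integration of CaP
}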
func (*CaLrnParams) CaLrn ¶ added in v1.5.1
func (np *CaLrnParams) CaLrn(nrn *Neuron)
CaLrn updates the CaLrn value and its cascaded values, based on NMDA and VGCC Ca. It first calls VgccCa to update the spike-driven version of that variable and perform its time integration.
func (*CaLrnParams) Defaults ¶ added in v1.5.1
func (np *CaLrnParams) Defaults()
func (*CaLrnParams) Update ¶ added in v1.5.1
func (np *CaLrnParams) Update()
func (*CaLrnParams) VgccCa ¶ added in v1.5.1
func (np *CaLrnParams) VgccCa(nrn *Neuron)
VgccCa updates the simulated VGCC calcium from spiking, if that option is selected, and performs time-integration of VgccCa
type CaSpkParams ¶ added in v1.5.1
type CaSpkParams struct { SpikeG float32 `` /* 464-byte string literal not displayed */ SynTau float32 `` /* 224-byte string literal not displayed */ Dt kinase.CaDtParams `` /* 202-byte string literal not displayed */ SynDt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"` SynSpkG float32 `` /* 227-byte string literal not displayed */ }
CaSpkParams parameterizes the neuron-level spike-driven calcium signals, starting with CaSyn that is integrated at the neuron level and drives synapse-level, pre * post Ca integration, which provides the Tr trace that multiplies error signals, and drives learning directly for Target layers. CaSpk* values are integrated separately at the Neuron level and used for UpdtThr and RLrate as a proxy for the activation (spiking) based learning signal.
func (*CaSpkParams) CaFmSpike ¶ added in v1.5.1
func (np *CaSpkParams) CaFmSpike(nrn *Neuron)
CaFmSpike computes CaSpk* and CaSyn calcium signals based on current spike.
func (*CaSpkParams) Defaults ¶ added in v1.5.1
func (np *CaSpkParams) Defaults()
func (*CaSpkParams) Update ¶ added in v1.5.1
func (np *CaSpkParams) Update()
type ClampParams ¶
type ClampParams struct { Ge float32 `def:"0.8,1.5" desc:"amount of Ge driven for clamping -- generally use 0.8 for Target layers, 1.5 for Input layers"` Add bool `` /* 207-byte string literal not displayed */ ErrThr float32 `def:"0.5" desc:"threshold on neuron Act activity to count as active for computing error relative to target in PctErr method"` }
ClampParams specify how external inputs drive excitatory conductances (like a current clamp) -- either adds or overwrites existing conductances. Noise is added in either case.
func (*ClampParams) Defaults ¶
func (cp *ClampParams) Defaults()
func (*ClampParams) Update ¶
func (cp *ClampParams) Update()
type CorSimStats ¶ added in v1.3.35
type CorSimStats struct { Cor float32 `` /* 203-byte string literal not displayed */ Avg float32 `` /* 138-byte string literal not displayed */ Var float32 `` /* 139-byte string literal not displayed */ }
CorSimStats holds correlation similarity (centered cosine aka normalized dot product) statistics at the layer level
func (*CorSimStats) Init ¶ added in v1.3.35
func (cd *CorSimStats) Init()
type DecayParams ¶ added in v1.2.59
type DecayParams struct { Act float32 `` /* 391-byte string literal not displayed */ Glong float32 `` /* 332-byte string literal not displayed */ AHP float32 `` /* 198-byte string literal not displayed */ }
DecayParams control the decay of activation state in the DecayState function called in NewState when a new state is to be processed.
func (*DecayParams) Defaults ¶ added in v1.2.59
func (ai *DecayParams) Defaults()
func (*DecayParams) Update ¶ added in v1.2.59
func (ai *DecayParams) Update()
type DendParams ¶ added in v1.2.95
type DendParams struct { GbarExp float32 `` /* 221-byte string literal not displayed */ GbarR float32 `` /* 150-byte string literal not displayed */ SSGi float32 `` /* 337-byte string literal not displayed */ }
DendParams are the parameters for updating dendrite-specific dynamics
func (*DendParams) Defaults ¶ added in v1.2.95
func (dp *DendParams) Defaults()
func (*DendParams) Update ¶ added in v1.2.95
func (dp *DendParams) Update()
type DtParams ¶
type DtParams struct {
	Integ float32 `` /* 649-byte string literal not displayed */
	VmTau float32 `` /* 328-byte string literal not displayed */
	VmDendTau float32 `` /* 335-byte string literal not displayed */
	VmSteps int `` /* 223-byte string literal not displayed */
	GeTau float32 `def:"5" min:"1" desc:"time constant for decay of excitatory AMPA receptor conductance."`
	GiTau float32 `def:"7" min:"1" desc:"time constant for decay of inhibitory GABAa receptor conductance."`
	IntTau float32 `` /* 393-byte string literal not displayed */
	LongAvgTau float32 `` /* 336-byte string literal not displayed */
	MaxCycStart int `` /* 138-byte string literal not displayed */
	VmDt float32 `view:"-" json:"-" xml:"-" desc:"nominal rate = Integ / tau"`
	VmDendDt float32 `view:"-" json:"-" xml:"-" desc:"nominal rate = Integ / tau"`
	DtStep float32 `view:"-" json:"-" xml:"-" desc:"1 / VmSteps"`
	GeDt float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"`
	GiDt float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"`
	IntDt float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"`
	LongAvgDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"`
}
DtParams are time and rate constants for temporal derivatives in Axon (Vm, G)
func (*DtParams) AvgVarUpdt ¶ added in v1.2.45
AvgVarUpdt updates the average and variance from current value, using LongAvgDt
func (*DtParams) GeSynFmRaw ¶ added in v1.2.97
GeSynFmRaw integrates a synaptic conductance from raw spiking using GeTau
func (*DtParams) GeSynFmRawSteady ¶ added in v1.5.12
GeSynFmRawSteady returns the steady-state GeSyn that would result from receiving a steady increment of GeRaw every time step, which equals raw * GeTau. dSyn = Raw - dt*Syn; solving for dSyn = 0 gives the steady state dt*Syn = Raw, i.e., Syn = Raw / dt = Raw * Tau.
func (*DtParams) GiSynFmRaw ¶ added in v1.2.97
GiSynFmRaw integrates a synaptic conductance from raw spiking using GiTau
func (*DtParams) GiSynFmRawSteady ¶ added in v1.5.12
GiSynFmRawSteady returns the steady-state GiSyn that would result from receiving a steady increment of GiRaw every time step, which equals raw * GiTau. dSyn = Raw - dt*Syn; solving for dSyn = 0 gives the steady state dt*Syn = Raw, i.e., Syn = Raw / dt = Raw * Tau.
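For reference, the steady-state relation used by GeSynFmRawSteady and GiSynFmRawSteady can be written out directly. This is a minimal sketch using plain local values rather than the actual methods (whose exact signatures are not shown above):

	// steadySyn illustrates the steady-state relation: dSyn = Raw - dt*Syn,
	// so setting dSyn = 0 gives Syn = Raw / dt = Raw * Tau.
	func steadySyn(raw, tau float32) float32 {
		dt := 1.0 / tau // integration rate corresponding to the time constant
		return raw / dt // == raw * tau
	}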
type GScaleVals ¶ added in v1.2.37
type GScaleVals struct {
	Scale float32 `` /* 240-byte string literal not displayed */
	Rel float32 `` /* 159-byte string literal not displayed */
}
GScaleVals holds the conductance scaling and associated values needed for adapting scale
type HebbPrjn ¶ added in v1.2.42
type HebbPrjn struct {
	Prjn // access as .Prjn
	IncGain float32 `desc:"gain factor on increases relative to decreases -- lower = lower overall weights"`
}
HebbPrjn is a simple hebbian learning projection, using the CPCA Hebbian rule. Note: when used with inhibitory projections, requires Learn.Trace.SubMean = 1
func (*HebbPrjn) UpdateParams ¶ added in v1.2.42
func (pj *HebbPrjn) UpdateParams()
type InhibParams ¶
type InhibParams struct {
	ActAvg ActAvgParams `` /* 173-byte string literal not displayed */
	Layer fsfffb.Params `` /* 128-byte string literal not displayed */
	Pool fsfffb.Params `view:"inline" desc:"inhibition across sub-pools of units, for layers with 4D shape"`
	Topo TopoInhibParams `` /* 136-byte string literal not displayed */
}
axon.InhibParams contains all the inhibition computation params and functions for basic Axon. This is included in axon.Layer to support computation. It also includes other misc layer-level params, such as the expected average activation in the layer, which is used for Ge rescaling and potentially for adapting inhibition over time.
func (*InhibParams) Defaults ¶
func (ip *InhibParams) Defaults()
func (*InhibParams) Update ¶
func (ip *InhibParams) Update()
type Layer ¶
type Layer struct {
	LayerBase
	Act ActParams `view:"add-fields" desc:"Activation parameters and methods for computing activations"`
	Inhib InhibParams `view:"add-fields" desc:"Inhibition parameters and methods for computing layer-level inhibition"`
	Learn LearnNeurParams `view:"add-fields" desc:"Learning parameters and methods that operate at the neuron level"`
	Neurons []Neuron `` /* 133-byte string literal not displayed */
	Pools []Pool `` /* 234-byte string literal not displayed */
	ActAvg ActAvgVals `view:"inline" desc:"running-average activation levels used for Ge scaling and adaptive inhibition"`
	CorSim CorSimStats `desc:"correlation (centered cosine aka normalized dot product) similarity between ActM, ActP states"`
}
axon.Layer implements the basic Axon spiking activation function, and manages learning in the projections.
func (*Layer) AdaptInhib ¶ added in v1.2.37
AdaptInhib adapts inhibition
func (*Layer) ApplyExt ¶
ApplyExt applies external input in the form of an etensor.Float32. If the dimensionality of the tensor matches that of the layer, and is 2D or 4D, then each dimension is iterated separately, so any mismatch preserves dimensional structure. Otherwise, the flat 1D view of the tensor is used. If the layer is a Target or Compare layer type, then the input goes in Target; otherwise it goes in Ext.
func (*Layer) ApplyExt1D ¶
ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats. If the layer is a Target or Compare layer type, then the input goes in Target; otherwise it goes in Ext.
func (*Layer) ApplyExt1D32 ¶
ApplyExt1D32 applies external input in the form of a flat 1-dimensional slice of float32s. If the layer is a Target or Compare layer type, then the input goes in Target; otherwise it goes in Ext.
func (*Layer) ApplyExt1DTsr ¶
ApplyExt1DTsr applies external input using a 1D flat interface into the tensor. If the layer is a Target or Compare layer type, then the input goes in Target; otherwise it goes in Ext.
func (*Layer) ApplyExt2D ¶
ApplyExt2D applies 2D tensor external input
func (*Layer) ApplyExt2Dto4D ¶
ApplyExt2Dto4D applies 2D tensor external input to a 4D layer
func (*Layer) ApplyExt4D ¶
ApplyExt4D applies 4D tensor external input
func (*Layer) ApplyExtFlags ¶
ApplyExtFlags gets the clear mask and set mask for updating neuron flags based on layer type, and whether input should be applied to Target (else Ext)
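As a usage sketch for the ApplyExt family (the layer name "Input" and the pat tensor are hypothetical; this assumes the standard emer.Layer-style ApplyExt(ext etensor.Tensor) signature and imports of the axon and etensor packages):

	// applyInput presents one input pattern to a hypothetical "Input" layer.
	func applyInput(net *axon.Network, pat *etensor.Float32) {
		net.InitExt() // clear any prior external inputs first
		ly := net.LayerByName("Input").(axon.AxonLayer).AsAxon()
		ly.ApplyExt(pat) // goes into Ext (or Target for Target/Compare layers)
	}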
func (*Layer) AsAxon ¶
AsAxon returns this layer as an axon.Layer -- all derived layers must redefine this to return the base Layer type, so that the AxonLayer interface does not need to include accessors to all the basic stuff.
func (*Layer) AvgDifFmTrgAvg ¶ added in v1.6.0
func (ly *Layer) AvgDifFmTrgAvg()
AvgDifFmTrgAvg updates neuron-level AvgDif values from AvgPct - TrgAvg which is then used for synaptic scaling of LWt values in Prjn SynScale.
func (*Layer) AvgGeM ¶ added in v1.2.21
AvgGeM computes the average and max GeM stats, updated in MinusPhase
func (*Layer) AvgMaxVarByPool ¶ added in v1.6.0
AvgMaxVarByPool returns the average and maximum value of given variable for given pool index (0 = entire layer, 1.. are subpools for 4D only). Uses fast index-based variable access.
func (*Layer) BuildPools ¶
BuildPools builds the inhibitory pools structures -- nu = number of units in layer
func (*Layer) BuildPrjns ¶
BuildPrjns builds the projections, recv-side
func (*Layer) BuildSubPools ¶
func (ly *Layer) BuildSubPools()
BuildSubPools initializes neuron start / end indexes for sub-pools
func (*Layer) ClearTargExt ¶ added in v1.2.65
func (ly *Layer) ClearTargExt()
ClearTargExt clears external inputs Ext that were set from target values Target. This can be called to simulate alpha cycles within theta cycles, for example.
func (*Layer) CorSimFmActs ¶ added in v1.3.35
func (ly *Layer) CorSimFmActs()
CorSimFmActs computes the correlation similarity (centered cosine aka normalized dot product) in activation state between minus and plus phases.
func (*Layer) CostEst ¶
CostEst returns the estimated computational cost associated with this layer, separated by neuron-level and synapse-level, in arbitrary units where cost per synapse is 1. Neuron-level computation is more expensive but there are typically many fewer neurons, so in larger networks, synaptic costs tend to dominate. Neuron cost is estimated from TimerReport output for large networks.
func (*Layer) CycleNeuron ¶ added in v1.6.0
CycleNeuron does one cycle (msec) of updating at the neuron level
func (*Layer) CyclePost ¶
CyclePost is called after the standard Cycle update still within layer Cycle call. This is the hook for specialized algorithms (deep, hip, bg etc) to do something special after Spike / Act is finally computed. For example, sending a neuromodulatory signal such as dopamine.
func (*Layer) DTrgAvgFmErr ¶ added in v1.2.32
func (ly *Layer) DTrgAvgFmErr()
DTrgAvgFmErr computes the change in TrgAvg based on the unit-wise error signal. Called by DWtLayer at the layer level.
func (*Layer) DTrgSubMean ¶ added in v1.6.0
func (ly *Layer) DTrgSubMean()
DTrgSubMean subtracts the mean from DTrgAvg values. Called by TrgAvgFmD.
func (*Layer) DWtLayer ¶ added in v1.6.0
DWtLayer does the weight change at the layer level. Does NOT call the main projection-level DWt method. In the base Layer, only calls DTrgAvgFmErr.
func (*Layer) DecayCaLrnSpk ¶ added in v1.5.1
DecayCaLrnSpk decays neuron-level calcium learning and spiking variables by given factor, which is typically ly.Act.Decay.Glong. Note: this is NOT called by default and is generally not useful, causing variability in these learning factors as a function of the decay parameter that then has impacts on learning rates etc. It is only here for reference or optional testing.
func (*Layer) DecayState ¶
DecayState decays activation state by given proportion (default decay values are ly.Act.Decay.Act, Glong)
func (*Layer) DecayStatePool ¶
DecayStatePool decays activation state by given proportion in given sub-pool index (0 based)
func (*Layer) GFmRawSyn ¶ added in v1.6.0
GFmRawSyn computes overall Ge and GiSyn conductances for neuron from GeRaw and GeSyn values, including NMDA, VGCC, AMPA, and GABA-A channels.
func (*Layer) GFmSpikeRaw ¶ added in v1.6.0
GFmSpikeRaw integrates G*Raw and G*Syn values for given neuron from the Prjn-level GSyn integrated values.
func (*Layer) GInteg ¶ added in v1.5.12
GInteg integrates conductances G over time (Ge, NMDA, etc). Reads pool Gi values.
func (*Layer) GiFmSpikes ¶ added in v1.5.12
GiFmSpikes integrates new inhibitory conductances from Spikes at the layer and pool level
func (*Layer) GiInteg ¶ added in v1.6.0
GiInteg adds Gi values from all sources including Pool computed inhib and updates GABAB as well
func (*Layer) HasPoolInhib ¶ added in v1.2.79
HasPoolInhib returns true if the layer is using pool-level inhibition (implies 4D too). This is the proper check for using pool-level target average activations, for example.
func (*Layer) InitActAvg ¶
func (ly *Layer) InitActAvg()
InitActAvg initializes the running-average activation values that drive learning, and the longer time-averaging values.
func (*Layer) InitActs ¶
func (ly *Layer) InitActs()
InitActs fully initializes activation state -- only called automatically during InitWts
func (*Layer) InitExt ¶
func (ly *Layer) InitExt()
InitExt initializes external input state -- called prior to applying external inputs (ApplyExt).
func (*Layer) InitGScale ¶ added in v1.2.37
func (ly *Layer) InitGScale()
InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.
func (*Layer) InitPrjnGBuffs ¶ added in v1.5.12
func (ly *Layer) InitPrjnGBuffs()
InitPrjnGBuffs initializes the projection-level conductance buffers and conductance integration values for receiving projections in this layer.
func (*Layer) InitWtSym ¶
func (ly *Layer) InitWtSym()
InitWtSym initializes the weight symmetry -- higher layers copy weights from lower layers.
func (*Layer) InitWts ¶
func (ly *Layer) InitWts()
InitWts initializes the weight values in the network, i.e., resetting learning. Also calls InitActs.
func (*Layer) IsInput ¶ added in v1.2.32
IsInput returns true if this layer is an Input layer. By default, returns true for layers of Type == emer.Input. Used to prevent adapting of inhibition or TrgAvg values.
func (*Layer) IsInputOrTarget ¶ added in v1.6.11
IsInputOrTarget returns true if this layer is either an Input or a Target layer.
func (*Layer) IsLearnTrgAvg ¶ added in v1.2.32
func (*Layer) IsTarget ¶
IsTarget returns true if this layer is a Target layer. By default, returns true for layers of Type == emer.Target. Other Target layers include the TRCLayer in deep predictive learning. It is used in SynScale to not apply it to target layers. In both cases, Target layers are purely error-driven.
func (*Layer) LesionNeurons ¶
LesionNeurons lesions (sets the Off flag) for the given proportion (0-1) of neurons in the layer, and returns the number of neurons lesioned. Emits an error if prop > 1, as an indication that a percent might have been passed instead of a proportion.
func (*Layer) LocalistErr2D ¶ added in v1.5.3
LocalistErr2D decodes a 2D layer with Y axis = redundant units, X = localist units returning the indexes of the max activated localist value in the minus and plus phase activities, and whether these are the same or different (err = different)
func (*Layer) LocalistErr4D ¶ added in v1.5.3
LocalistErr4D decodes a 4D layer with each pool representing a localist value. Returns the flat 1D indexes of the max activated localist value in the minus and plus phase activities, and whether these are the same or different (err = different)
func (*Layer) LrateMod ¶ added in v1.2.60
LrateMod sets the Lrate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LrateSched). Updates the effective learning rate factor accordingly.
func (*Layer) LrateSched ¶ added in v1.2.60
LrateSched sets the schedule-based learning rate multiplier. See also LrateMod. Updates the effective learning rate factor accordingly.
func (*Layer) MinusPhase ¶ added in v1.2.63
MinusPhase does updating at end of the minus phase
func (*Layer) NewState ¶ added in v1.2.63
func (ly *Layer) NewState()
NewState handles all initialization at start of new input pattern. Should already have presented the external input to the network at this point. Does NOT call InitGScale()
func (*Layer) PctUnitErr ¶
PctUnitErr returns the proportion of units where the thresholded value of Target (Target or Compare types) or ActP does not match that of ActM. If Act > ly.Act.Clamp.ErrThr, effective activity = 1, else 0; this makes the measure robust to noisy activations.
func (*Layer) PoolGiFmSpikes ¶ added in v1.5.12
PoolGiFmSpikes computes inhibition Gi from Spikes within relevant Pools
func (*Layer) PostAct ¶ added in v1.3.20
PostAct does updates at neuron level after activation (spiking) updated for all neurons. It is a hook for specialized algorithms -- empty at Axon base level
func (*Layer) ReadWtsJSON ¶
ReadWtsJSON reads the weights from this layer from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one layer only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.
func (*Layer) RecvPrjnVals ¶
func (ly *Layer) RecvPrjnVals(vals *[]float32, varNm string, sendLay emer.Layer, sendIdx1D int, prjnType string) error
RecvPrjnVals fills in values of given synapse variable name, for projection into given sending layer and neuron 1D index, for all receiving neurons in this layer, into given float32 slice (only resized if not big enough). prjnType is the string representation of the prjn type -- used if non-empty, useful when there are multiple projections between two layers. Returns error on invalid var name. If the receiving neuron is not connected to the given sending layer or neuron then the value is set to mat32.NaN(). Returns error on invalid var name or lack of recv prjn (vals always set to nan on prjn err).
func (*Layer) SendPrjnVals ¶
func (ly *Layer) SendPrjnVals(vals *[]float32, varNm string, recvLay emer.Layer, recvIdx1D int, prjnType string) error
SendPrjnVals fills in values of given synapse variable name, for projection into given receiving layer and neuron 1D index, for all sending neurons in this layer, into given float32 slice (only resized if not big enough). prjnType is the string representation of the prjn type -- used if non-empty, useful when there are multiple projections between two layers. Returns error on invalid var name. If the sending neuron is not connected to the given receiving layer or neuron then the value is set to mat32.NaN(). Returns error on invalid var name or lack of recv prjn (vals always set to nan on prjn err).
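For example, a sketch of pulling receiving-side weights for one sending neuron, using the RecvPrjnVals signature shown above (the "Wt" variable name and the log import are assumptions for illustration):

	// recvWtsFrom fills vals with the "Wt" synapse variable for the projection
	// from the given sending layer / neuron into ly; unconnected entries are NaN.
	func recvWtsFrom(ly *axon.Layer, send emer.Layer, sendIdx1D int) []float32 {
		var vals []float32
		if err := ly.RecvPrjnVals(&vals, "Wt", send, sendIdx1D, ""); err != nil {
			log.Println(err)
		}
		return vals
	}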
func (*Layer) SendSpikes ¶ added in v1.6.12
SendSpikes sends spikes to receivers -- the last step in Cycle; the spikes are integrated on the next cycle. Writes to sending projections for this neuron.
func (*Layer) SetSubMean ¶ added in v1.6.11
SetSubMean sets the SubMean parameters in all the layers in the network. trgAvg is for Learn.TrgAvgAct.SubMean; prjn is for the prjns' Learn.Trace.SubMean. In both cases, it is generally best to have both parameters set to 0 at the start of learning.
func (*Layer) SlowAdapt ¶ added in v1.2.37
SlowAdapt is the layer-level slow adaptation functions. Calls AdaptInhib and AvgDifFmTrgAvg for Synaptic Scaling. Does NOT call projection-level methods.
func (*Layer) SpikeFmG ¶ added in v1.6.0
SpikeFmG computes Vm from Ge, Gi, Gl conductances and then Spike from that
func (*Layer) SpkSt1 ¶ added in v1.5.10
SpkSt1 saves current activation state in SpkSt1 variables (using CaP)
func (*Layer) SpkSt2 ¶ added in v1.5.10
SpkSt2 saves current activation state in SpkSt2 variables (using CaP)
func (*Layer) SynFail ¶ added in v1.2.92
SynFail updates synaptic weight failure only -- normally done as part of DWt and WtFmDWt, but this call can be used during testing to update failing synapses.
func (*Layer) TargToExt ¶ added in v1.2.65
func (ly *Layer) TargToExt()
TargToExt sets external input Ext from target values Target. This is done at the end of MinusPhase to allow targets to drive activity in the plus phase. This can be called separately to simulate alpha cycles within theta cycles, for example.
func (*Layer) TrgAvgFmD ¶ added in v1.2.32
func (ly *Layer) TrgAvgFmD()
TrgAvgFmD updates TrgAvg from DTrgAvg -- it is called by WtFmDWtLayer.
func (*Layer) UnLesionNeurons ¶
func (ly *Layer) UnLesionNeurons()
UnLesionNeurons unlesions (clears the Off flag) for all neurons in the layer
func (*Layer) UnitVal ¶
UnitVal returns value of given variable name on given unit, using shape-based dimensional index
func (*Layer) UnitVal1D ¶
UnitVal1D returns the value of the given variable index on the given unit, using a 1-dimensional index. Returns NaN on an invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.
func (*Layer) UnitVals ¶
UnitVals fills in values of given variable name on unit, for each unit in the layer, into given float32 slice (only resized if not big enough). Returns error on invalid var name.
func (*Layer) UnitValsRepTensor ¶ added in v1.3.6
UnitValsRepTensor fills in values of given variable name on unit for a smaller subset of representative units in the layer, into given tensor. This is used for computationally intensive stats or displays that work much better with a smaller number of units. The set of representative units are defined by SetRepIdxs -- all units are used if no such subset has been defined. If tensor is not already big enough to hold the values, it is set to RepShape to hold all the values if subset is defined, otherwise it calls UnitValsTensor and is identical to that. Returns error on invalid var name.
func (*Layer) UnitValsTensor ¶
UnitValsTensor returns values of given variable name on unit for each unit in the layer, as a float32 tensor in same shape as layer units.
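A small sketch of reading unit variables (this assumes the emer.Layer-style signature UnitVals(vals *[]float32, varNm string) error; "Act" is one of the standard Neuron variables listed below):

	// layerActs returns the current Act value for every unit in the layer.
	func layerActs(ly *axon.Layer) []float32 {
		var acts []float32
		if err := ly.UnitVals(&acts, "Act"); err != nil { // resizes acts as needed
			log.Println(err)
		}
		return acts
	}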
func (*Layer) UnitVarIdx ¶
UnitVarIdx returns the index of given variable within the Neuron, according to *this layer's* UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.
func (*Layer) UnitVarNames ¶
UnitVarNames returns a list of variable names available on the units in this layer
func (*Layer) UnitVarNum ¶
UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.
func (*Layer) UnitVarProps ¶
UnitVarProps returns properties for variables
func (*Layer) UpdateExtFlags ¶
func (ly *Layer) UpdateExtFlags()
UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.
func (*Layer) UpdateParams ¶
func (ly *Layer) UpdateParams()
UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving projections of this layer
func (*Layer) VarRange ¶
VarRange returns the min / max values for the given variable. TODO: support r. and s. projection values.
func (*Layer) WriteWtsJSON ¶
WriteWtsJSON writes the weights from this layer from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
func (*Layer) WtFmDWtLayer ¶ added in v1.6.0
WtFmDWtLayer does the weight update at the layer level. Does NOT call the main projection-level WtFmDWt method. In the base Layer, only calls TrgAvgFmD.
type LayerBase ¶ added in v1.4.5
type LayerBase struct {
	AxonLay AxonLayer `` /* 297-byte string literal not displayed */
	Network emer.Network `` /* 141-byte string literal not displayed */
	Nm string `` /* 151-byte string literal not displayed */
	Cls string `desc:"Class is for applying parameter styles, can be space separated multiple tags"`
	Off bool `desc:"inactivate this layer -- allows for easy experimentation"`
	Shp etensor.Shape `` /* 219-byte string literal not displayed */
	Typ emer.LayerType `` /* 161-byte string literal not displayed */
	Rel relpos.Rel `view:"inline" desc:"Spatial relationship to other layer, determines positioning"`
	Ps mat32.Vec3 `` /* 154-byte string literal not displayed */
	Idx int `` /* 278-byte string literal not displayed */
	NeurStIdx int `view:"-" inactive:"-" desc:"starting index of neurons for this layer within the global Network list"`
	RepIxs []int `` /* 128-byte string literal not displayed */
	RepShp etensor.Shape `view:"-" desc:"shape of representative units in the layer -- if RepIxs is empty or .Shp is nil, use overall layer shape"`
	RcvPrjns emer.Prjns `desc:"list of receiving projections into this layer from other layers"`
	SndPrjns emer.Prjns `desc:"list of sending projections from this layer to other layers"`
}
LayerBase manages the structural elements of the layer, which are common to any Layer type. The main Layer then can just have the algorithm-specific code.
func (*LayerBase) ApplyParams ¶ added in v1.4.5
ApplyParams applies the given parameter style Sheet to this layer and its recv projections. Calls UpdateParams on anything set to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and error if there were any errors.
func (*LayerBase) Idx4DFrom2D ¶ added in v1.4.5
func (*LayerBase) InitName ¶ added in v1.4.5
InitName MUST be called to initialize the layer's pointer to itself as an emer.Layer which enables the proper interface methods to be called. Also sets the name, and the parent network that this layer belongs to (which layers may want to retain).
func (*LayerBase) NPools ¶ added in v1.4.5
NPools returns the number of unit sub-pools according to the shape parameters. Currently supported for a 4D shape, where the unit pools are the first 2 Y,X dims and then the units within the pools are the 2nd 2 Y,X dims
func (*LayerBase) NRecvPrjns ¶ added in v1.4.5
func (*LayerBase) NSendPrjns ¶ added in v1.4.5
func (*LayerBase) NeurStartIdx ¶ added in v1.6.0
func (*LayerBase) NonDefaultParams ¶ added in v1.4.5
NonDefaultParams returns a listing of all parameters in the Layer that are not at their default values -- useful for setting param styles etc.
func (*LayerBase) RecipToSendPrjn ¶ added in v1.4.5
RecipToSendPrjn finds the reciprocal projection relative to the given sending projection found within the SendPrjns of this layer. This is then a recv prjn within this layer:
S=A -> R=B recip: R=A <- S=B -- ly = A -- we are the sender of srj and recv of rpj.
returns false if not found.
func (*LayerBase) RepShape ¶ added in v1.4.8
RepShape returns the shape to use for representative units
func (*LayerBase) SetRepIdxsShape ¶ added in v1.4.8
SetRepIdxsShape sets the RepIdxs, and the RepShape as a list of dimension sizes.
func (*LayerBase) SetShape ¶ added in v1.4.5
SetShape sets the layer shape and also uses default dim names
type LearnNeurParams ¶
type LearnNeurParams struct {
	CaLrn CaLrnParams `` /* 376-byte string literal not displayed */
	CaSpk CaSpkParams `` /* 456-byte string literal not displayed */
	LrnNMDA chans.NMDAParams `` /* 266-byte string literal not displayed */
	TrgAvgAct TrgAvgActParams `` /* 126-byte string literal not displayed */
	RLrate RLrateParams `` /* 184-byte string literal not displayed */
}
axon.LearnNeurParams manages learning-related parameters at the neuron-level. This is mainly the running average activations that drive learning
func (*LearnNeurParams) CaFmSpike ¶ added in v1.3.5
func (ln *LearnNeurParams) CaFmSpike(nrn *Neuron)
CaFmSpike updates all spike-driven calcium variables, including CaLrn and CaSpk. Computed after new activation for current cycle is updated.
func (*LearnNeurParams) DecayCaLrnSpk ¶ added in v1.5.1
func (ln *LearnNeurParams) DecayCaLrnSpk(nrn *Neuron, decay float32)
DecayCaLrnSpk decays neuron-level calcium learning and spiking variables by the given factor. Note: this is NOT called by default and is generally not useful, causing variability in these learning factors as a function of the decay parameter that then has impacts on learning rates etc. It is only here for reference or optional testing.
func (*LearnNeurParams) Defaults ¶
func (ln *LearnNeurParams) Defaults()
func (*LearnNeurParams) InitNeurCa ¶ added in v1.3.9
func (ln *LearnNeurParams) InitNeurCa(nrn *Neuron)
InitNeurCa initializes the neuron-level calcium learning and spiking variables. Called by InitWts (at start of learning).
func (*LearnNeurParams) LrnNMDAFmRaw ¶ added in v1.3.11
func (ln *LearnNeurParams) LrnNMDAFmRaw(nrn *Neuron, geTot float32)
LrnNMDAFmRaw updates the separate NMDA conductance and calcium values based on GeTot = GeRaw + external ge conductance. These are the variables that drive learning -- can be the same as activation but also can be different for testing learning Ca effects independent of activation effects.
func (*LearnNeurParams) Update ¶
func (ln *LearnNeurParams) Update()
type LearnSynParams ¶
type LearnSynParams struct {
	Learn bool `desc:"enable learning for this projection"`
	Lrate LrateParams `desc:"learning rate parameters, supporting two levels of modulation on top of base learning rate."`
	Trace TraceParams `desc:"trace-based learning parameters"`
	KinaseCa kinase.CaParams `view:"inline" desc:"kinase calcium Ca integration parameters"`
}
LearnSynParams manages learning-related parameters at the synapse-level.
func (*LearnSynParams) CHLdWt ¶
func (ls *LearnSynParams) CHLdWt(suCaP, suCaD, ruCaP, ruCaD float32) float32
CHLdWt returns the error-driven weight change component for a CHL contrastive hebbian learning rule, optionally using the checkmark temporally eXtended Contrastive Attractor Learning (XCAL) function
func (*LearnSynParams) Defaults ¶
func (ls *LearnSynParams) Defaults()
func (*LearnSynParams) DeltaDWt ¶ added in v1.5.1
func (ls *LearnSynParams) DeltaDWt(plus, minus float32) float32
DeltaDWt returns the error-driven weight change component for a simple delta between a minus and plus phase factor, optionally using the checkmark temporally eXtended Contrastive Attractor Learning (XCAL) function
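As a rough sketch of how these learning functions relate to the neuron-level Ca variables (su and ru are hypothetical sending and receiving neurons; the real DWt loops live in the Prjn-level code, so this is illustration only):

	// dwtSketch shows the two error-driven weight-change forms side by side.
	func dwtSketch(ls *axon.LearnSynParams, su, ru *axon.Neuron) (chl, delta float32) {
		// CHL-style term from sender and receiver plus- vs. minus-phase Ca factors
		chl = ls.CHLdWt(su.CaP, su.CaD, ru.CaP, ru.CaD)
		// simple delta between plus-phase-like (CaP) and minus-phase-like (CaD) factors
		delta = ls.DeltaDWt(ru.CaP, ru.CaD)
		return
	}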
func (*LearnSynParams) Update ¶
func (ls *LearnSynParams) Update()
type LrateMod ¶ added in v1.2.60
type LrateMod struct {
	On bool `desc:"toggle use of this modulation factor"`
	Base float32 `viewif:"On" min:"0" max:"1" desc:"baseline learning rate -- what you get for correct cases"`
	Range minmax.F32 `` /* 191-byte string literal not displayed */
}
LrateMod implements global learning rate modulation, based on a performance-based factor, for example error. Increasing levels of the factor = higher learning rate. This can be added to a Sim and called prior to DWt() to dynamically change lrate based on overall network performance.
func (*LrateMod) LrateMod ¶ added in v1.2.60
LrateMod calls LrateMod on given network, using computed Mod factor based on given normalized modulation factor (0 = no error = Base learning rate, 1 = maximum error). returns modulation factor applied.
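A hedged usage sketch (the LrateMod method's exact signature is not shown above; this assumes it takes the network and a normalized 0-1 factor as described):

	// modLrateByErr modulates the network learning rate from a normalized error.
	func modLrateByErr(lm *axon.LrateMod, net *axon.Network, nrmErr float32) {
		// 0 = no error (Base learning rate), 1 = maximum error (top of Range)
		lm.LrateMod(net, nrmErr)
	}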
type LrateParams ¶ added in v1.2.60
type LrateParams struct {
	Base float32 `` /* 199-byte string literal not displayed */
	Sched float32 `desc:"scheduled learning rate multiplier, simulating reduction in plasticity over aging"`
	Mod float32 `desc:"dynamic learning rate modulation due to neuromodulatory or other such factors"`
	Eff float32 `inactive:"+" desc:"effective actual learning rate multiplier used in computing DWt: Eff = eMod * Sched * Base"`
}
LrateParams manages learning rate parameters
func (*LrateParams) Defaults ¶ added in v1.2.60
func (ls *LrateParams) Defaults()
func (*LrateParams) Init ¶ added in v1.2.60
func (ls *LrateParams) Init()
Init initializes modulation values back to 1 and updates Eff
func (*LrateParams) Update ¶ added in v1.2.60
func (ls *LrateParams) Update()
type NMDAPrjn ¶
type NMDAPrjn struct {
Prjn // access as .Prjn
}
NMDAPrjn is a projection with NMDA maintenance channels. It marks a projection for special treatment in a MaintLayer which actually does the NMDA computations. Excitatory conductance is aggregated separately for this projection.
func (*NMDAPrjn) PrjnTypeName ¶
func (*NMDAPrjn) UpdateParams ¶
func (pj *NMDAPrjn) UpdateParams()
type NetThread ¶ added in v1.6.2
type NetThread struct {
	NThreads int `desc:"number of parallel threads to deploy"`
	ChunksPer int `desc:"number of chunks per thread to use -- each thread greedily grabs chunks"`
	Work WorkMgr `view:"-" desc:"work manager"`
}
NetThread specifies how to allocate threads & chunks to each task, and manages running those threads (goroutines)
type NetThreads ¶ added in v1.6.2
type NetThreads struct {
	Neurons NetThread `desc:"for basic neuron-level computation -- highly parallel and linear in memory -- should be able to use a lot of threads"`
	SendSpike NetThread `` /* 174-byte string literal not displayed */
	SynCa NetThread `` /* 142-byte string literal not displayed */
	Learn NetThread `` /* 136-byte string literal not displayed */
}
NetThreads parameterizes how many threads to use for each task
func (*NetThreads) Alloc ¶ added in v1.6.2
func (nt *NetThreads) Alloc(nNeurons, nPrjns int)
Alloc allocates work managers -- at Build
func (*NetThreads) Set ¶ added in v1.6.2
func (nt *NetThreads) Set(chk, neurons, sendSpike, synCa, learn int)
Set sets allocation of threads manually
func (*NetThreads) SetDefaults ¶ added in v1.6.2
func (nt *NetThreads) SetDefaults(nNeurons, nPrjns int)
SetDefaults sets the default allocation of threads based on the number of neurons and projections. According to tests on the LVis model, basically only CycleNeuron scales beyond 4 threads. ChunksPer = 2 is much better than 1, but 3 performs the same as 2.
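For example, using the Set and ThreadsAlloc calls shown in this listing to override the defaults (the specific numbers here are arbitrary):

	// configThreads overrides the default thread allocation after Build.
	func configThreads(net *axon.Network) {
		// ChunksPer, then threads for Neurons, SendSpike, SynCa, Learn
		net.Threads.Set(2, 4, 2, 2, 2)
		net.ThreadsAlloc() // must be called *after* Build to (re)allocate work managers
	}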
type Network ¶
type Network struct {
	NetworkBase
	SlowInterval int `` /* 174-byte string literal not displayed */
	SlowCtr int `inactive:"+" desc:"counter for how long it has been since last SlowAdapt step"`
}
axon.Network has parameters for running a basic rate-coded Axon network
func NewNetwork ¶ added in v1.2.94
NewNetwork returns a new axon Network
func (*Network) ClearTargExt ¶ added in v1.2.65
func (nt *Network) ClearTargExt()
ClearTargExt clears external inputs Ext that were set from target values Target. This can be called to simulate alpha cycles within theta cycles, for example.
func (*Network) CollectDWts ¶
CollectDWts writes all of the synaptic DWt values to the given dwts slice, which is pre-allocated to the given nwts size if dwts is nil, in which case the method returns true so that the actual length of dwts can be passed next time around. Used for MPI sharing of weight changes across processors.
func (*Network) Cycle ¶
Cycle runs one cycle of activation updating. It just calls the CycleImpl method through the AxonNetwork interface, thereby ensuring any specialized algorithm-specific version is called as needed (in general, strongly prefer updating the Layer specific version).
func (*Network) DWt ¶
DWt computes the weight change (learning) based on current running-average activation values
func (*Network) DWtImpl ¶
DWtImpl computes the weight change (learning) based on current running-average activation values
func (*Network) DecayState ¶
DecayState decays activation state by given proportion e.g., 1 = decay completely, and 0 = decay not at all. glong = separate decay factor for long-timescale conductances (g) This is called automatically in NewState, but is avail here for ad-hoc decay cases.
func (*Network) DecayStateByClass ¶ added in v1.5.10
DecayStateByClass decays activation state for given class name(s) by given proportion e.g., 1 = decay completely, and 0 = decay not at all. glong = separate decay factor for long-timescale conductances (g)
func (*Network) Defaults ¶
func (nt *Network) Defaults()
Defaults sets all the default parameters for all layers and projections
func (*Network) InitActs ¶
func (nt *Network) InitActs()
InitActs fully initializes activation state -- not automatically called
func (*Network) InitExt ¶
func (nt *Network) InitExt()
InitExt initializes external input state -- call prior to applying external inputs to layers
func (*Network) InitGScale ¶ added in v1.2.92
func (nt *Network) InitGScale()
InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.
func (*Network) InitTopoSWts ¶ added in v1.2.75
func (nt *Network) InitTopoSWts()
InitTopoSWts initializes SWt structural weight parameters from prjn types that support topographic weight patterns, having flags set to support it. Includes: prjn.PoolTile, prjn.Circle. Call before InitWts if using Topo wts.
func (*Network) InitWts ¶
func (nt *Network) InitWts()
InitWts initializes synaptic weights and all other associated long-term state variables including running-average state values (e.g., layer running average activations etc)
func (*Network) LayersSetOff ¶
LayersSetOff sets the Off flag for all layers to given setting
func (*Network) LrateMod ¶ added in v1.2.60
LrateMod sets the Lrate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LrateSched). Updates the effective learning rate factor accordingly.
func (*Network) LrateSched ¶ added in v1.2.60
LrateSched sets the schedule-based learning rate multiplier. See also LrateMod. Updates the effective learning rate factor accordingly.
func (*Network) MinusPhase ¶ added in v1.2.63
MinusPhase does updating after end of minus phase
func (*Network) MinusPhaseImpl ¶ added in v1.2.63
MinusPhaseImpl does updating after end of minus phase
func (*Network) NewState ¶ added in v1.2.63
func (nt *Network) NewState()
NewState handles all initialization at start of new input pattern. Should already have presented the external input to the network at this point. Does NOT call InitGScale()
func (*Network) NewStateImpl ¶ added in v1.2.63
func (nt *Network) NewStateImpl()
NewStateImpl handles all initialization at start of new input state
func (*Network) PlusPhaseImpl ¶ added in v1.2.63
PlusPhaseImpl does updating after end of plus phase
func (*Network) SetDWts ¶
SetDWts sets the DWt weight changes from the given array of floats, which must be the correct size. navg is the number of processors aggregated in these dwts -- some variables need to be averaged instead of summed (e.g., ActAvg).
func (*Network) SetSubMean ¶ added in v1.6.11
SetSubMean sets the SubMean parameters in all the layers in the network. trgAvg is for Learn.TrgAvgAct.SubMean; prjn is for the prjns' Learn.Trace.SubMean. In both cases, it is generally best to have both parameters set to 0 at the start of learning.
func (*Network) SizeReport ¶
SizeReport returns a string reporting the size of each layer and projection in the network, and total memory footprint.
func (*Network) SlowAdapt ¶ added in v1.2.37
SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling, GScale conductance scaling, and adapting inhibition
func (*Network) SynVarNames ¶
SynVarNames returns the names of all the variables on the synapses in this network. Not all projections need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!
func (*Network) SynVarProps ¶
SynVarProps returns properties for variables
func (*Network) TargToExt ¶ added in v1.2.65
func (nt *Network) TargToExt()
TargToExt sets external input Ext from target values Target. This is done at the end of MinusPhase to allow targets to drive activity in the plus phase. This can be called separately to simulate alpha cycles within theta cycles, for example.
func (*Network) UnLesionNeurons ¶
func (nt *Network) UnLesionNeurons()
UnLesionNeurons unlesions neurons in all layers in the network. Provides a clean starting point for subsequent lesion experiments.
func (*Network) UnitVarNames ¶
UnitVarNames returns a list of variable names available on the units in this network. Not all layers need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!
func (*Network) UnitVarProps ¶
UnitVarProps returns properties for variables
func (*Network) UpdateExtFlags ¶
func (nt *Network) UpdateExtFlags()
UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.
func (*Network) UpdateParams ¶
func (nt *Network) UpdateParams()
UpdateParams updates all the derived parameters if any have changed, for all layers and projections
func (*Network) WtFmDWt ¶
WtFmDWt updates the weights from delta-weight changes. Also calls SynScale every Interval times
func (*Network) WtFmDWtImpl ¶
WtFmDWtImpl updates the weights from delta-weight changes.
type NetworkBase ¶ added in v1.4.5
type NetworkBase struct {
	EmerNet emer.Network `` /* 274-byte string literal not displayed */
	Nm string `desc:"overall name of network -- helps discriminate if there are multiple"`
	Layers emer.Layers `desc:"list of layers"`
	NThreads int `` /* 182-byte string literal not displayed */
	WtsFile string `desc:"filename of last weights file loaded or saved"`
	LayMap map[string]emer.Layer `view:"-" desc:"map of name to layers -- layer names must be unique"`
	LayClassMap map[string][]string `view:"-" desc:"map of layer classes -- made during Build"`
	MinPos mat32.Vec3 `view:"-" desc:"minimum display position in network"`
	MaxPos mat32.Vec3 `view:"-" desc:"maximum display position in network"`
	MetaData map[string]string `` /* 194-byte string literal not displayed */

	// Implementation level code below:
	Neurons []Neuron `view:"-" desc:"entire network's allocation of neurons -- can be operated upon in parallel"`
	Prjns []AxonPrjn `view:"-" desc:"pointers to all projections in the network, via the AxonPrjn interface"`
	Threads NetThreads `desc:"threading config and implementation for CPU"`
	RecFunTimes bool `view:"-" desc:"record function timer information"`
	FunTimes map[string]*timer.Time `view:"-" desc:"timers for each major function (step of processing)"`
	WaitGp sync.WaitGroup `view:"-" desc:"network-level wait group for synchronizing threaded layer calls"`
}
NetworkBase manages the basic structural components of a network (layers). The main Network then can just have the algorithm-specific code.
func (*NetworkBase) AddLayer ¶ added in v1.4.5
AddLayer adds a new layer with given name and shape to the network. 2D and 4D layer shapes are generally preferred but not essential -- see AddLayer2D and 4D for convenience methods for those. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each unit group having 4 rows (Y) of 5 (X) units.
func (*NetworkBase) AddLayer2D ¶ added in v1.4.5
AddLayer2D adds a new layer with given name and 2D shape to the network. 2D and 4D layer shapes are generally preferred but not essential.
func (*NetworkBase) AddLayer4D ¶ added in v1.4.5
func (nt *NetworkBase) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ emer.LayerType) emer.Layer
AddLayer4D adds a new layer with given name and 4D shape to the network. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each pool having 4 rows (Y) of 5 (X) neurons.
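For example (layer names and sizes are hypothetical; AddLayer2D is assumed to take the name, Y and X sizes, and a layer type, analogous to the AddLayer4D signature above):

	// addLayers shows the 2D vs 4D shape conventions.
	func addLayers(net *axon.Network) {
		in := net.AddLayer2D("Input", 10, 10, emer.Input) // 10x10 units
		// 3x2 grid of pools (Y, X), each pool with 4x5 neurons -- enables pool-level inhibition
		hid := net.AddLayer4D("Hidden", 3, 2, 4, 5, emer.Hidden)
		_, _ = in, hid
	}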
func (*NetworkBase) AddLayerInit ¶ added in v1.4.5
AddLayerInit is implementation routine that takes a given layer and adds it to the network, and initializes and configures it properly.
func (*NetworkBase) AllParams ¶ added in v1.4.5
func (nt *NetworkBase) AllParams() string
AllParams returns a listing of all parameters in the Network.
func (*NetworkBase) AllPrjnScales ¶ added in v1.4.5
func (nt *NetworkBase) AllPrjnScales() string
AllPrjnScales returns a listing of all PrjnScale parameters in the Network in all Layers, Recv projections. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.
func (*NetworkBase) ApplyParams ¶ added in v1.4.5
ApplyParams applies the given parameter style Sheet to the layers and prjns in this network. Calls UpdateParams to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and error if there were any errors.
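A minimal sketch of applying a parameter Sheet, assuming the emergent params.Sheet / params.Sel / params.Params types and an example parameter path (not authoritative for any particular model):

	// applyGi raises layer inhibition across the whole network.
	func applyGi(net *axon.Network) error {
		sheet := params.Sheet{
			{Sel: "Layer", Desc: "raise inhibition everywhere",
				Params: params.Params{"Layer.Inhib.Layer.Gi": "1.1"}},
		}
		_, err := net.ApplyParams(&sheet, true) // setMsg = true prints each param set
		return err
	}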
func (*NetworkBase) BidirConnectLayerNames ¶ added in v1.4.5
func (nt *NetworkBase) BidirConnectLayerNames(low, high string, pat prjn.Pattern) (lowlay, highlay emer.Layer, fwdpj, backpj emer.Prjn, err error)
BidirConnectLayerNames establishes bidirectional projections between two layers, referenced by name, with low = the lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) BidirConnectLayers ¶ added in v1.4.5
func (nt *NetworkBase) BidirConnectLayers(low, high emer.Layer, pat prjn.Pattern) (fwdpj, backpj emer.Prjn)
BidirConnectLayers establishes bidirectional projections between two layers, with low = lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) BidirConnectLayersPy ¶ added in v1.4.5
func (nt *NetworkBase) BidirConnectLayersPy(low, high emer.Layer, pat prjn.Pattern)
BidirConnectLayersPy establishes bidirectional projections between two layers, with low = lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build. Py = python version with no return vals.
func (*NetworkBase) Bounds ¶ added in v1.4.5
func (nt *NetworkBase) Bounds() (min, max mat32.Vec3)
func (*NetworkBase) BoundsUpdt ¶ added in v1.4.5
func (nt *NetworkBase) BoundsUpdt()
BoundsUpdt updates the Min / Max display bounds for 3D display
func (*NetworkBase) Build ¶ added in v1.4.5
func (nt *NetworkBase) Build() error
Build constructs the layer and projection state based on the layer shapes and patterns of interconnectivity
func (*NetworkBase) ConnectLayerNames ¶ added in v1.4.5
func (nt *NetworkBase) ConnectLayerNames(send, recv string, pat prjn.Pattern, typ emer.PrjnType) (rlay, slay emer.Layer, pj emer.Prjn, err error)
ConnectLayerNames establishes a projection between two layers, referenced by name, adding to the recv and send projection lists on each side of the connection. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) ConnectLayers ¶ added in v1.4.5
func (nt *NetworkBase) ConnectLayers(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType) emer.Prjn
ConnectLayers establishes a projection between two layers, adding to the recv and send projection lists on each side of the connection. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) ConnectLayersPrjn ¶ added in v1.4.5
func (nt *NetworkBase) ConnectLayersPrjn(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType, pj emer.Prjn) emer.Prjn
ConnectLayersPrjn makes connection using given projection between two layers, adding given prjn to the recv and send projection lists on each side of the connection. Does not yet actually connect the units within the layers -- that requires Build.
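Putting the construction calls together, a small end-to-end sketch (layer names, sizes, and the NewNetwork signature are assumptions; prjn.NewFull and the emer layer / prjn type constants come from the emergent packages):

	// buildNet constructs and initializes a minimal three-layer network.
	func buildNet() (*axon.Network, error) {
		net := axon.NewNetwork("demo") // assumed: NewNetwork(name string) *Network
		in := net.AddLayer2D("Input", 5, 5, emer.Input)
		hid := net.AddLayer2D("Hidden", 8, 8, emer.Hidden)
		out := net.AddLayer2D("Output", 5, 5, emer.Target)
		full := prjn.NewFull()
		net.ConnectLayers(in, hid, full, emer.Forward)
		net.BidirConnectLayers(hid, out, full)
		if err := net.Build(); err != nil { // allocates neurons, pools, synapses
			return nil, err
		}
		net.Defaults()
		net.InitWts()
		return net, nil
	}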
func (*NetworkBase) DeleteAll ¶ added in v1.4.5
func (nt *NetworkBase) DeleteAll()
DeleteAll deletes all layers, prepares network for re-configuring and building
func (*NetworkBase) FunTimerStart ¶ added in v1.4.5
func (nt *NetworkBase) FunTimerStart(fun string)
FunTimerStart starts function timer for given function name -- ensures creation of timer
func (*NetworkBase) FunTimerStop ¶ added in v1.4.5
func (nt *NetworkBase) FunTimerStop(fun string)
FunTimerStop stops function timer -- timer must already exist
func (*NetworkBase) InitName ¶ added in v1.4.5
func (nt *NetworkBase) InitName(net emer.Network, name string)
InitName MUST be called to initialize the network's pointer to itself as an emer.Network which enables the proper interface methods to be called. Also sets the name.
func (*NetworkBase) Label ¶ added in v1.4.5
func (nt *NetworkBase) Label() string
func (*NetworkBase) LateralConnectLayer ¶ added in v1.4.5
LateralConnectLayer establishes a self-projection within given layer. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) LateralConnectLayerPrjn ¶ added in v1.4.5
func (nt *NetworkBase) LateralConnectLayerPrjn(lay emer.Layer, pat prjn.Pattern, pj emer.Prjn) emer.Prjn
LateralConnectLayerPrjn makes lateral self-projection using given projection. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) LayerByName ¶ added in v1.4.5
func (nt *NetworkBase) LayerByName(name string) emer.Layer
LayerByName returns a layer by looking it up by name in the layer map (nil if not found). Will create the layer map if it is nil or a different size than layers slice, but otherwise needs to be updated manually.
func (*NetworkBase) LayerByNameTry ¶ added in v1.4.5
func (nt *NetworkBase) LayerByNameTry(name string) (emer.Layer, error)
LayerByNameTry returns a layer by looking it up by name -- returns error message if layer is not found
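Because LayerByName and LayerByNameTry return the generic emer.Layer interface, callers typically cast back to the concrete axon type via AsAxon; for example (the "Hidden" name is hypothetical):

	// hiddenLayer looks up a layer by name and returns the concrete axon.Layer.
	func hiddenLayer(net *axon.Network) (*axon.Layer, error) {
		li, err := net.LayerByNameTry("Hidden")
		if err != nil {
			return nil, err
		}
		return li.(axon.AxonLayer).AsAxon(), nil
	}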
func (*NetworkBase) LayerFun ¶ added in v1.6.0
func (nt *NetworkBase) LayerFun(fun func(ly AxonLayer), funame string, thread, wait bool)
LayerFun applies function of given name to all layers using threading (go routines) if thread is true and NThreads > 1. if wait is true, then it waits until all procs have completed. many layer-level functions are not actually worth threading overhead so this should be benchmarked for each case.
func (*NetworkBase) LayersByClass ¶ added in v1.4.5
func (nt *NetworkBase) LayersByClass(classes ...string) []string
LayersByClass returns a list of layer names by given class(es). Lists are compiled when network Build() function called. The layer Type is always included as a Class, along with any other space-separated strings specified in Class for parameter styling, etc. If no classes are passed, all layer names in order are returned.
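For example, collecting all layers whose Type or Class includes a given class name (a sketch using only the calls documented in this listing):

	// layersByClass returns the concrete axon layers matching the given class.
	func layersByClass(net *axon.Network, class string) []*axon.Layer {
		var lays []*axon.Layer
		for _, nm := range net.LayersByClass(class) {
			lays = append(lays, net.LayerByName(nm).(axon.AxonLayer).AsAxon())
		}
		return lays
	}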
func (*NetworkBase) Layout ¶ added in v1.4.5
func (nt *NetworkBase) Layout()
Layout computes the 3D layout of layers based on their relative position settings
func (*NetworkBase) MakeLayMap ¶ added in v1.4.5
func (nt *NetworkBase) MakeLayMap()
MakeLayMap updates layer map based on current layers
func (*NetworkBase) NLayers ¶ added in v1.4.5
func (nt *NetworkBase) NLayers() int
func (*NetworkBase) Name ¶ added in v1.4.5
func (nt *NetworkBase) Name() string
emer.Network interface methods:
func (*NetworkBase) NeuronFun ¶ added in v1.6.0
func (nt *NetworkBase) NeuronFun(fun func(ly AxonLayer, ni int, nrn *Neuron), funame string, thread, wait bool)
NeuronFun applies function of given name to all neurons using Neurons threading (go routines) if thread is true and NThreads > 1. if wait is true, then it waits until all procs have completed.
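For example, applying a custom per-neuron update across the whole network using the signature shown above (the Attn reset here is just an illustration):

	// resetAttn sets every neuron's attentional modulation back to 1.
	func resetAttn(net *axon.Network) {
		net.NeuronFun(func(ly axon.AxonLayer, ni int, nrn *axon.Neuron) {
			nrn.Attn = 1
		}, "ResetAttn", true, true) // funame for timing; thread and wait enabled
	}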
func (*NetworkBase) NonDefaultParams ¶ added in v1.4.5
func (nt *NetworkBase) NonDefaultParams() string
NonDefaultParams returns a listing of all parameters in the Network that are not at their default values -- useful for setting param styles etc.
func (*NetworkBase) OpenWtsCpp ¶ added in v1.4.5
func (nt *NetworkBase) OpenWtsCpp(filename gi.FileName) error
OpenWtsCpp opens network weights (and any other state that adapts with learning) from old C++ emergent format. If filename has .gz extension, then file is gzip uncompressed.
func (*NetworkBase) OpenWtsJSON ¶ added in v1.4.5
func (nt *NetworkBase) OpenWtsJSON(filename gi.FileName) error
OpenWtsJSON opens network weights (and any other state that adapts with learning) from a JSON-formatted file. If filename has .gz extension, then file is gzip uncompressed.
func (*NetworkBase) PrjnFun ¶ added in v1.6.0
func (nt *NetworkBase) PrjnFun(fun func(pj AxonPrjn), funame string, thread, wait bool)
PrjnFun applies function of given name to all projections using Learn threads (go routines) if thread is true and NThreads > 1. if wait is true, then it waits until all procs have completed.
func (*NetworkBase) ReadWtsCpp ¶ added in v1.4.5
func (nt *NetworkBase) ReadWtsCpp(r io.Reader) error
ReadWtsCpp reads the weights from old C++ emergent format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.
func (*NetworkBase) ReadWtsJSON ¶ added in v1.4.5
func (nt *NetworkBase) ReadWtsJSON(r io.Reader) error
ReadWtsJSON reads network weights from the receiver-side perspective in a JSON text format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.
func (*NetworkBase) SaveWtsJSON ¶ added in v1.4.5
func (nt *NetworkBase) SaveWtsJSON(filename gi.FileName) error
SaveWtsJSON saves network weights (and any other state that adapts with learning) to a JSON-formatted file. If filename has .gz extension, then file is gzip compressed.
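For example, saving and re-loading weights with gzip compression via the .gz extension (the file name is arbitrary):

	// saveAndReload round-trips the network weights through a compressed JSON file.
	func saveAndReload(net *axon.Network) error {
		if err := net.SaveWtsJSON(gi.FileName("trained.wts.gz")); err != nil {
			return err
		}
		return net.OpenWtsJSON(gi.FileName("trained.wts.gz"))
	}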
func (*NetworkBase) SendSpikeFun ¶ added in v1.6.2
func (nt *NetworkBase) SendSpikeFun(fun func(ly AxonLayer, ni int, nrn *Neuron), funame string, thread, wait bool)
SendSpikeFun applies function of given name to all neurons using SendSpike threading (go routines) if thread is true and NThreads > 1. if wait is true, then it waits until all procs have completed.
func (*NetworkBase) SetWts ¶ added in v1.4.5
func (nt *NetworkBase) SetWts(nw *weights.Network) error
SetWts sets the weights for this network from weights.Network decoded values
func (*NetworkBase) StdVertLayout ¶ added in v1.4.5
func (nt *NetworkBase) StdVertLayout()
StdVertLayout arranges layers in a standard vertical (z axis stack) layout, by setting the Rel settings
func (*NetworkBase) SynCaFun ¶ added in v1.6.2
func (nt *NetworkBase) SynCaFun(fun func(pj AxonPrjn), funame string, thread, wait bool)
SynCaFun applies function of given name to all projections using SynCa threads (go routines) if thread is true and NThreads > 1. if wait is true, then it waits until all procs have completed.
func (*NetworkBase) ThreadsAlloc ¶ added in v1.6.2
func (nt *NetworkBase) ThreadsAlloc()
ThreadsAlloc allocates threads if thread numbers have been updated. Must be called *after* Build.
func (*NetworkBase) TimerReport ¶ added in v1.4.5
func (nt *NetworkBase) TimerReport()
TimerReport reports the amount of time spent in each function, and in each thread
func (*NetworkBase) VarRange ¶ added in v1.4.5
func (nt *NetworkBase) VarRange(varNm string) (min, max float32, err error)
VarRange returns the min / max values for the given variable. TODO: support r. and s. projection values.
func (*NetworkBase) WriteWtsJSON ¶ added in v1.4.5
func (nt *NetworkBase) WriteWtsJSON(w io.Writer) error
WriteWtsJSON writes the weights for this network from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
type Neuron ¶
type Neuron struct {
	Flags NeuronFlags `desc:"bit flags for binary state variables"`
	LayIdx int32 `desc:"index of the layer that this neuron belongs to -- needed for neuron-level parallel code."`
	SubPool int32 `` /* 214-byte string literal not displayed */
	Spike float32 `desc:"whether neuron has spiked or not on this cycle (0 or 1)"`
	Spiked float32 `` /* 224-byte string literal not displayed */
	Act float32 `` /* 402-byte string literal not displayed */
	ActInt float32 `` /* 478-byte string literal not displayed */
	ActM float32 `` /* 228-byte string literal not displayed */
	ActP float32 `` /* 229-byte string literal not displayed */
	Ext float32 `desc:"external input: drives activation of unit from outside influences (e.g., sensory input)"`
	Target float32 `desc:"target value: drives learning to produce this activation value"`
	GeSyn float32 `` /* 214-byte string literal not displayed */
	Ge float32 `desc:"total excitatory conductance, including all forms of excitation (e.g., NMDA) -- does *not* include Gbar.E"`
	GiSyn float32 `` /* 293-byte string literal not displayed */
	Gi float32 `desc:"total inhibitory synaptic conductance -- the net inhibitory input to the neuron -- does *not* include Gbar.I"`
	Gk float32 `` /* 148-byte string literal not displayed */
	Inet float32 `desc:"net current produced by all channels -- drives update of Vm"`
	Vm float32 `desc:"membrane potential -- integrates Inet current over time"`
	VmDend float32 `desc:"dendritic membrane potential -- has a slower time constant, is not subject to the VmR reset after spiking"`
	CaSyn float32 `` /* 459-byte string literal not displayed */
	CaSpkM float32 `` /* 283-byte string literal not displayed */
	CaSpkP float32 `` /* 317-byte string literal not displayed */
	CaSpkD float32 `` /* 314-byte string literal not displayed */
	CaSpkPM float32 `desc:"minus-phase snapshot of the CaSpkP value -- similar to ActM but using a more directly spike-integrated value."`
	CaLrn float32 `` /* 669-byte string literal not displayed */
	CaM float32 `` /* 174-byte string literal not displayed */
	CaP float32 `` /* 192-byte string literal not displayed */
	CaD float32 `` /* 192-byte string literal not displayed */
	CaDiff float32 `desc:"difference between CaP - CaD -- this is the error signal that drives error-driven learning."`
	SpkMaxCa float32 `` /* 213-byte string literal not displayed */
	SpkMax float32 `` /* 235-byte string literal not displayed */
	SpkPrv float32 `` /* 155-byte string literal not displayed */
	SpkSt1 float32 `` /* 235-byte string literal not displayed */
	SpkSt2 float32 `` /* 236-byte string literal not displayed */
	RLrate float32 `` /* 191-byte string literal not displayed */
	ActAvg float32 `` /* 194-byte string literal not displayed */
	AvgPct float32 `` /* 158-byte string literal not displayed */
	TrgAvg float32 `` /* 169-byte string literal not displayed */
	DTrgAvg float32 `` /* 164-byte string literal not displayed */
	AvgDif float32 `` /* 173-byte string literal not displayed */
	Attn float32 `desc:"Attentional modulation factor, which can be set by special layers such as the TRC -- multiplies Ge"`
	ISI float32 `desc:"current inter-spike-interval -- counts up since last spike. Starts at -1 when initialized."`
	ISIAvg float32 `` /* 320-byte string literal not displayed */
	GeNoiseP float32 `` /* 201-byte string literal not displayed */
	GeNoise float32 `desc:"integrated noise excitatory conductance, added into Ge"`
	GiNoiseP float32 `` /* 201-byte string literal not displayed */
	GiNoise float32 `desc:"integrated noise inhibitory conductance, added into Gi"`
	GeM float32 `` /* 165-byte string literal not displayed */
	GiM float32 `` /* 168-byte string literal not displayed */
	MahpN float32 `desc:"accumulating voltage-gated gating value for the medium time scale AHP"`
	SahpCa float32 `desc:"slowly accumulating calcium value that drives the slow AHP"`
	SahpN float32 `desc:"sAHP gating value"`
	GknaMed float32 `` /* 131-byte string literal not displayed */
	GknaSlow float32 `` /* 129-byte string literal not displayed */
	GnmdaSyn float32 `desc:"integrated NMDA recv synaptic current -- adds GeRaw and decays with time constant"`
	Gnmda float32 `` /* 137-byte string literal not displayed */
	GnmdaLrn float32 `` /* 159-byte string literal not displayed */
	NmdaCa float32 `desc:"NMDA calcium computed from GnmdaLrn, drives learning via CaM"`
	SnmdaO float32 `` /* 314-byte string literal not displayed */
	SnmdaI float32 `` /* 255-byte string literal not displayed */
	GgabaB float32 `` /* 127-byte string literal not displayed */
	GABAB float32 `desc:"GABA-B / GIRK activation -- time-integrated value with rise and decay time constants"`
	GABABx float32 `desc:"GABA-B / GIRK internal drive variable -- gets the raw activation and decays"`
	Gvgcc float32 `desc:"conductance (via Ca) for VGCC voltage gated calcium channels"`
	VgccM float32 `desc:"activation gate of VGCC channels"`
	VgccH float32 `desc:"inactivation gate of VGCC channels"`
	VgccCa float32 `desc:"instantaneous VGCC calcium flux -- can be driven by spiking or directly from Gvgcc"`
	VgccCaInt float32 `desc:"time-integrated VGCC calcium flux -- this is actually what drives learning"`
	GeExt float32 `desc:"extra excitatory conductance added to Ge -- from Ext input, deep.GeCtxt etc"`
	GeRaw float32 `desc:"raw excitatory conductance (net input) received from senders = current raw spiking drive"`
	GeBase float32 `desc:"baseline level of Ge, added to GeRaw, for intrinsic excitability"`
	GiRaw float32 `desc:"raw inhibitory conductance (net input) received from senders = current raw spiking drive"`
	GiBase float32 `desc:"baseline level of Gi, added to GiRaw, for intrinsic excitability"`
	SSGi float32 `desc:"SST+ somatostatin positive slow spiking inhibition"`
	SSGiDend float32 `desc:"amount of SST+ somatostatin positive slow spiking inhibition applied to dendritic Vm (VmDend)"`
	Gak float32 `desc:"conductance of A-type K potassium channels"`
}
axon.Neuron holds all of the neuron (unit) level variables. This is the most basic version, without any optional features. All variables accessible via Unit interface must be float32 and start at the top, in contiguous order
func (*Neuron) ClearFlag ¶
func (nrn *Neuron) ClearFlag(flag NeuronFlags)
func (*Neuron) HasFlag ¶
func (nrn *Neuron) HasFlag(flag NeuronFlags) bool
func (*Neuron) SetFlag ¶
func (nrn *Neuron) SetFlag(flag NeuronFlags)
func (*Neuron) VarByIndex ¶
VarByIndex returns variable using index (0 = first variable in NeuronVars list)
type NeuronFlags ¶ added in v1.6.4
type NeuronFlags int32
NeuronFlags are bit-flags encoding relevant binary state for neurons
const (
	// NeuronOff flag indicates that this neuron has been turned off (i.e., lesioned)
	NeuronOff NeuronFlags = iota

	// NeuronHasExt means the neuron has external input in its Ext field
	NeuronHasExt

	// NeuronHasTarg means the neuron has external target input in its Target field
	NeuronHasTarg

	// NeuronHasCmpr means the neuron has external comparison input in its Target field -- used for computing
	// comparison statistics but does not drive neural activity ever
	NeuronHasCmpr

	NeuronFlagsNum
)
The neuron flags
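As a hedged illustration of the flag accessors above, the following minimal sketch lesions every neuron in a layer by setting NeuronOff and later restores them; it assumes ly is a built *axon.Layer with its Neurons slice populated.

// Sketch: lesion all neurons in a layer, then restore them.
for ni := range ly.Neurons {
	ly.Neurons[ni].SetFlag(axon.NeuronOff)
}
// ... later ...
for ni := range ly.Neurons {
	nrn := &ly.Neurons[ni]
	if nrn.HasFlag(axon.NeuronOff) {
		nrn.ClearFlag(axon.NeuronOff)
	}
}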
func (*NeuronFlags) FromString ¶ added in v1.6.4
func (i *NeuronFlags) FromString(s string) error
func (NeuronFlags) MarshalJSON ¶ added in v1.6.4
func (ev NeuronFlags) MarshalJSON() ([]byte, error)
func (NeuronFlags) String ¶ added in v1.6.4
func (i NeuronFlags) String() string
func (*NeuronFlags) UnmarshalJSON ¶ added in v1.6.4
func (ev *NeuronFlags) UnmarshalJSON(b []byte) error
type Pool ¶
type Pool struct {
	StIdx, EdIdx int `inactive:"+" desc:"starting and ending (exclusive) indexes for the list of neurons in this pool"`
	Inhib fsfffb.Inhib `inactive:"+" desc:"fast-slow FFFB inhibition values"`
	ActM minmax.AvgMax32 `inactive:"+" desc:"minus phase average and max Act activation values, for ActAvg updt"`
	ActP minmax.AvgMax32 `inactive:"+" desc:"plus phase average and max Act activation values, for ActAvg updt"`
	GeM minmax.AvgMax32 `inactive:"+" desc:"stats for GeM minus phase averaged Ge values"`
	GiM minmax.AvgMax32 `inactive:"+" desc:"stats for GiM minus phase averaged Gi values"`
	AvgDif minmax.AvgMax32 `inactive:"+" desc:"absolute value of AvgDif differences from actual neuron ActPct relative to TrgAvg"`
}
Pool contains computed values for FS-FFFB inhibition, and various other state values for layers and pools (unit groups) that can be subject to inhibition
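Pool-level code typically slices the layer's Neurons using the StIdx / EdIdx (exclusive) range. A minimal sketch, assuming ly is a built *axon.Layer and pl is one of its Pools, of averaging Act over one pool:

// Sketch: mean Act over the neurons belonging to one pool.
sum := float32(0)
n := pl.EdIdx - pl.StIdx
for ni := pl.StIdx; ni < pl.EdIdx; ni++ {
	sum += ly.Neurons[ni].Act
}
var avg float32
if n > 0 {
	avg = sum / float32(n)
}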
type Prjn ¶
type Prjn struct {
	PrjnBase

	Com SynComParams `view:"inline" desc:"synaptic communication parameters: delay, probability of failure"`
	PrjnScale PrjnScaleParams `` /* 194-byte string literal not displayed */
	SWt SWtParams `` /* 147-byte string literal not displayed */
	Learn LearnSynParams `view:"add-fields" desc:"synaptic-level learning parameters for learning in the fast LWt values."`
	Syns []Synapse `desc:"synaptic state values, ordered by the sending layer units that own them -- one-to-one with SendConIdx array"`

	// misc state variables below:
	GScale GScaleVals `view:"inline" desc:"conductance scaling values"`
	Gidx ringidx.FIx `` /* 201-byte string literal not displayed */
	GBuf []float32 `` /* 179-byte string literal not displayed */
	PIBuf []float32 `` /* 142-byte string literal not displayed */
	PIdxs []int32 `` /* 162-byte string literal not displayed */
	GVals []PrjnGVals `` /* 186-byte string literal not displayed */
}
axon.Prjn is a basic Axon projection with synaptic learning parameters
func (*Prjn) AsAxon ¶
AsAxon returns this prjn as an axon.Prjn -- all derived prjns must redefine this to return the base Prjn type, so that the AxonPrjn interface does not need to include accessors to all the basic stuff.
func (*Prjn) Build ¶
Build constructs the full connectivity among the layers as specified in this projection. Calls PrjnBase.BuildBase and then allocates the synaptic values in Syns accordingly.
func (*Prjn) BuildGBuffs ¶ added in v1.5.10
func (pj *Prjn) BuildGBuffs()
BuildGBuffs builds GBuf with current Com Delay values, if not the correct size.
func (*Prjn) DWtNeurSpkTheta ¶ added in v1.3.22
DWtNeurSpkTheta computes the weight change (learning) based on separate neurally-integrated spiking, for the optimized version computed at the Theta cycle interval. non-Trace version for Target layers.
func (*Prjn) DWtSubMean ¶ added in v1.2.23
DWtSubMean subtracts the mean from any projections that have SubMean > 0. This is called on *receiving* projections, prior to WtFmDwt.
func (*Prjn) DWtSynSpkTheta ¶ added in v1.3.22
DWtSynSpkTheta computes the weight change (learning) based on synaptically-integrated spiking, for the optimized version computed at the Theta cycle interval. Non-Trace version for target layers.
func (*Prjn) DWtTraceNeurSpkTheta ¶ added in v1.5.10
DWtTraceNeurSpkTheta computes the weight change (learning) based on separate neurally-integrated spiking, for the optimized version computed at the Theta cycle interval. Trace version.
func (*Prjn) DWtTraceSynSpkTheta ¶ added in v1.5.1
DWtTraceSynSpkTheta computes the weight change (learning) based on synaptically-integrated spiking, for the optimized version computed at the Theta cycle interval. Trace version.
func (*Prjn) GFmSpikes ¶ added in v1.6.0
GFmSpikes increments synaptic conductances from Spikes including pooled aggregation of spikes into Pools for FS-FFFB inhib.
func (*Prjn) InitGBuffs ¶ added in v1.5.10
func (pj *Prjn) InitGBuffs()
InitGBuffs initializes the per-projection synaptic conductance buffers. This is not typically needed (called during InitWts, InitActs) but can be called when needed. Must be called to completely initialize prior activity, e.g., full Glong clearing.
func (*Prjn) InitWtSym ¶
InitWtSym initializes weight symmetry -- is given the reciprocal projection where the Send and Recv layers are reversed.
func (*Prjn) InitWts ¶
func (pj *Prjn) InitWts()
InitWts initializes weight values according to SWt params, enforcing current constraints.
func (*Prjn) InitWtsSyn ¶
InitWtsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.
func (*Prjn) LrateMod ¶ added in v1.2.60
LrateMod sets the Lrate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LrateSched). Updates the effective learning rate factor accordingly.
func (*Prjn) LrateSched ¶ added in v1.2.60
LrateSched sets the schedule-based learning rate multiplier. See also LrateMod. Updates the effective learning rate factor accordingly.
func (*Prjn) ReadWtsJSON ¶
ReadWtsJSON reads the weights from this projection from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one prjn only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.
func (*Prjn) RecvSynCa ¶ added in v1.3.18
RecvSynCa updates synaptic calcium based on spiking, for SynSpkTheta mode. Optimized version only updates at point of spiking. This pass goes through in recv order, filtering on recv spike.
func (*Prjn) SWtFmWt ¶ added in v1.2.45
func (pj *Prjn) SWtFmWt()
SWtFmWt updates structural, slowly-adapting SWt value based on accumulated DSWt values, which are zero-summed with additional soft bounding relative to SWt limits.
func (*Prjn) SWtRescale ¶ added in v1.2.45
func (pj *Prjn) SWtRescale()
SWtRescale rescales the SWt values to preserve the target overall mean value, using subtractive normalization.
func (*Prjn) SendSpikes ¶ added in v1.6.12
SendSpikes sends a spike from the sending neuron at index sendIdx into the buffer on the receiver side. The buffer on the receiver side is a ring buffer, which is used for modelling the time delay between sending and receiving spikes.
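To make the delay mechanism concrete, here is a generic, hedged sketch of a per-receiver ring buffer of length Delay+1: a spike sent now is written Delay slots ahead of the read position, and each cycle the receiver reads (and clears) the current slot. The names (delayRing, Push, Pop) are illustrative, not the package's API.

// delayRing is an illustrative ring buffer for delayed conductance delivery,
// similar in spirit to the GBuf / Gidx state described above.
type delayRing struct {
	buf   []float32 // length = delay + 1
	read  int       // current read position
	delay int
}

func newDelayRing(delay int) *delayRing {
	return &delayRing{buf: make([]float32, delay+1), delay: delay}
}

// Push adds conductance g to the slot that will be read delay cycles from now.
func (r *delayRing) Push(g float32) {
	wr := (r.read + r.delay) % len(r.buf)
	r.buf[wr] += g
}

// Pop returns and clears the conductance arriving on the current cycle,
// then advances the read position.
func (r *delayRing) Pop() float32 {
	g := r.buf[r.read]
	r.buf[r.read] = 0
	r.read = (r.read + 1) % len(r.buf)
	return g
}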
func (*Prjn) SendSynCa ¶ added in v1.3.22
SendSynCa updates synaptic calcium based on spiking, for SynSpkTheta mode. Optimized version only updates at point of spiking. This pass goes through in sending order, filtering on sending spike.
func (*Prjn) SetSWtsFunc ¶ added in v1.2.75
SetSWtsFunc initializes structural SWt values using given function based on receiving and sending unit indexes.
func (*Prjn) SetSWtsRPool ¶ added in v1.2.75
SetSWtsRPool initializes SWt structural weight values using given tensor of values which has unique values for each recv neuron within a given pool.
func (*Prjn) SetSynVal ¶
SetSynVal sets value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes) returns error for access errors.
func (*Prjn) SetWtsFunc ¶
SetWtsFunc initializes synaptic Wt value using given function based on receiving and sending unit indexes. Strongly suggest calling SWtRescale after.
func (*Prjn) SlowAdapt ¶ added in v1.2.37
SlowAdapt does the slow adaptation: SWt learning and SynScale
func (*Prjn) Syn1DNum ¶ added in v1.4.0
Syn1DNum returns the number of synapses for this prjn as a 1D array. This is the max idx for SynVal1D and the number of vals set by SynVals.
func (*Prjn) SynFail ¶ added in v1.2.92
SynFail updates synaptic weight failure only -- normally done as part of DWt and WtFmDWt, but this call can be used during testing to update failing synapses.
func (*Prjn) SynIdx ¶
SynIdx returns the index of the synapse between given send, recv unit indexes (1D, flat indexes). Returns -1 if synapse not found between these two neurons. Requires searching within connections for receiving unit.
func (*Prjn) SynScale ¶ added in v1.2.23
func (pj *Prjn) SynScale()
SynScale performs synaptic scaling based on running average activation vs. targets. Layer-level AvgDifFmTrgAvg function must be called first.
func (*Prjn) SynVal ¶
SynVal returns value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes). Returns mat32.NaN() for access errors (see SynValTry for error message)
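A hedged usage sketch of the synapse variable accessors, assuming pj is a built *axon.Prjn and si, ri are valid flat unit indexes in the sending and receiving layers; the argument order for SetSynVal (name, send, recv, value) is an assumption here, not confirmed by this page.

// Sketch: read, then set, the Wt value of one synapse by name.
wt := pj.SynVal("Wt", si, ri) // returns NaN on access errors
if !math.IsNaN(float64(wt)) {
	if err := pj.SetSynVal("Wt", si, ri, 0.5); err != nil {
		log.Println(err)
	}
}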
func (*Prjn) SynVal1D ¶
SynVal1D returns value of given variable index (from SynVarIdx) on given SynIdx. Returns NaN on invalid index. This is the core synapse var access method used by other methods, so it is the only one that needs to be updated for derived layer types.
func (*Prjn) SynVals ¶
SynVals sets values of given variable name for each synapse, using the natural ordering of the synapses (sender based for Axon), into given float32 slice (only resized if not big enough). Returns error on invalid var name.
func (*Prjn) SynVarIdx ¶
SynVarIdx returns the index of given variable within the synapse, according to *this prjn's* SynVarNames() list (using a map to lookup index), or -1 and error message if not found.
func (*Prjn) SynVarNames ¶
func (*Prjn) SynVarNum ¶
SynVarNum returns the number of synapse-level variables for this prjn. This is needed for extending indexes in derived types.
func (*Prjn) SynVarProps ¶
SynVarProps returns properties for variables
func (*Prjn) UpdateParams ¶
func (pj *Prjn) UpdateParams()
UpdateParams updates all params given any changes that might have been made to individual values
func (*Prjn) WriteWtsJSON ¶
WriteWtsJSON writes the weights from this projection from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
type PrjnBase ¶ added in v1.4.14
type PrjnBase struct {
	AxonPrj AxonPrjn `` /* 267-byte string literal not displayed */
	Off bool `desc:"inactivate this projection -- allows for easy experimentation"`
	Cls string `desc:"Class is for applying parameter styles, can be space separated multiple tags"`
	Notes string `desc:"can record notes about this projection here"`
	Send emer.Layer `desc:"sending layer for this projection"`
	Recv emer.Layer `` /* 167-byte string literal not displayed */
	Pat prjn.Pattern `desc:"pattern of connectivity"`
	Typ emer.PrjnType `` /* 154-byte string literal not displayed */
	RecvConN []int32 `view:"-" desc:"number of recv connections for each neuron in the receiving layer, as a flat list"`
	RecvConNAvgMax minmax.AvgMax32 `inactive:"+" desc:"average and maximum number of recv connections in the receiving layer"`
	RecvConIdxStart []int32 `view:"-" desc:"starting index into ConIdx list for each neuron in receiving layer -- just a list incremented by ConN"`
	RecvConIdx []int32 `` /* 213-byte string literal not displayed */
	RecvSynIdx []int32 `` /* 185-byte string literal not displayed */
	SendConN []int32 `view:"-" desc:"number of sending connections for each neuron in the sending layer, as a flat list"`
	SendConNAvgMax minmax.AvgMax32 `inactive:"+" desc:"average and maximum number of sending connections in the sending layer"`
	SendConIdxStart []int32 `view:"-" desc:"starting index into ConIdx list for each neuron in sending layer -- just a list incremented by ConN"`
	SendConIdx []int32 `` /* 213-byte string literal not displayed */
}
PrjnBase contains the basic structural information for specifying a projection of synaptic connections between two layers, and maintaining all the synaptic connection-level data. The exact same struct object is added to the Recv and Send layers, and it manages everything about the connectivity, and methods on the Prjn handle all the relevant computation.
func (*PrjnBase) ApplyParams ¶ added in v1.4.14
ApplyParams applies given parameter style Sheet to this projection. Calls UpdateParams if anything was set, to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and an error if there were any errors.
func (*PrjnBase) BuildBase ¶ added in v1.4.14
BuildBase constructs the full connectivity among the layers as specified in this projection. Calls Validate and returns false if invalid. Pat.Connect is called to get the pattern of the connection. Then the connection indexes are configured according to that pattern.
func (*PrjnBase) Connect ¶ added in v1.4.14
Connect sets the connectivity between two layers and the pattern to use in interconnecting them
func (*PrjnBase) Init ¶ added in v1.4.14
Init MUST be called to initialize the prjn's pointer to itself as an emer.Prjn which enables the proper interface methods to be called.
func (*PrjnBase) NonDefaultParams ¶ added in v1.4.14
NonDefaultParams returns a listing of all parameters in the Layer that are not at their default values -- useful for setting param styles etc.
func (*PrjnBase) PrjnTypeName ¶ added in v1.4.14
func (*PrjnBase) SetNIdxSt ¶ added in v1.4.14
func (ps *PrjnBase) SetNIdxSt(n *[]int32, avgmax *minmax.AvgMax32, idxst *[]int32, tn *etensor.Int32) int32
SetNIdxSt sets the *ConN and *ConIdxSt values given n tensor from Pat. Returns total number of connections for this direction.
type PrjnGVals ¶ added in v1.5.12
type PrjnGVals struct {
	GRaw float32 `desc:"raw conductance received from senders = current raw spiking drive"`
	GSyn float32 `` /* 131-byte string literal not displayed */
}
PrjnGVals contains projection-level conductance values, integrated by prjn before being integrated at the neuron level, which enables the neuron to perform non-linear integration as needed.
type PrjnScaleParams ¶ added in v1.2.45
type PrjnScaleParams struct {
	Rel float32 `` /* 255-byte string literal not displayed */
	Abs float32 `` /* 334-byte string literal not displayed */
	AvgTau float32 `` /* 340-byte string literal not displayed */
	AvgDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"`
}
PrjnScaleParams are projection scaling parameters: modulates overall strength of projection, using both absolute and relative factors.
func (*PrjnScaleParams) Defaults ¶ added in v1.2.45
func (ws *PrjnScaleParams) Defaults()
func (*PrjnScaleParams) FullScale ¶ added in v1.2.45
func (ws *PrjnScaleParams) FullScale(savg, snu, ncon float32) float32
FullScale returns full scaling factor, which is product of Abs * Rel * SLayActScale
func (*PrjnScaleParams) SLayActScale ¶ added in v1.2.45
func (ws *PrjnScaleParams) SLayActScale(savg, snu, ncon float32) float32
SLayActScale computes the scaling factor based on sending layer activity level (savg), number of units in the sending layer (snu), and number of recv connections (ncon). Uses a fixed sem_extra standard-error-of-the-mean (SEM) extra value of 2, added to the average expected number of active connections received, for purposes of computing scaling factors with partial connectivity. For 25% layer activity, the binomial SEM = sqrt(p(1-p)) = .43, so 3x ≈ 1.3, making 2 a reasonable default.
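A hedged numerical sketch of the kind of computation described above (names and exact rounding/clamping are assumptions, not the package's code): estimate the expected number of active sending connections from savg, snu, and ncon, add the fixed 2-SEM margin, and return the reciprocal as the scaling factor.

// slayActScaleSketch illustrates the logic described for SLayActScale.
func slayActScaleSketch(savg, snu, ncon float32) float32 {
	const semExtra = 2.0
	if ncon < 1 {
		ncon = 1
	}
	slayActN := math.Max(math.Round(float64(savg*snu)), 1) // expected active senders in layer
	if ncon == snu {                                        // full connectivity
		return float32(1 / slayActN)
	}
	avgActN := math.Max(math.Round(float64(savg*ncon)), 1) // expected active among received cons
	expActN := math.Min(avgActN+semExtra, math.Min(float64(ncon), slayActN))
	return float32(1 / expActN)
}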
func (*PrjnScaleParams) Update ¶ added in v1.2.45
func (ws *PrjnScaleParams) Update()
type PrjnType ¶
PrjnType has the GLong extensions to the emer.PrjnType types, for gui
func StringToPrjnType ¶
type RLrateParams ¶ added in v1.2.79
type RLrateParams struct {
	On bool `def:"true" desc:"use learning rate modulation"`
	SigmoidMin float32 `` /* 226-byte string literal not displayed */
	Diff bool `desc:"modulate learning rate as a function of plus - minus differences"`
	SpkThr float32 `def:"0.1" desc:"threshold on Max(CaSpkP, CaSpkD) below which Min lrate applies -- must be > 0 to prevent div by zero"`
	DiffThr float32 `def:"0.02" desc:"threshold on recv neuron error delta, i.e., |CaSpkP - CaSpkD| below which lrate is at Min value"`
	Min float32 `def:"0.001" desc:"for Diff component, minimum learning rate value when below ActDiffThr"`
}
RLrateParams are recv neuron learning rate modulation parameters. Has two factors: the derivative of the sigmoid based on CaSpkD activity levels, and based on the phase-wise differences in activity (Diff).
func (*RLrateParams) Defaults ¶ added in v1.2.79
func (rl *RLrateParams) Defaults()
func (*RLrateParams) RLrateDiff ¶ added in v1.5.1
func (rl *RLrateParams) RLrateDiff(scap, scad float32) float32
RLrateDiff returns the learning rate as a function of difference between CaSpkP and CaSpkD values
func (*RLrateParams) RLrateSigDeriv ¶ added in v1.5.10
func (rl *RLrateParams) RLrateSigDeriv(act float32, laymax float32) float32
RLrateSigDeriv returns the sigmoid derivative learning rate factor as a function of spiking activity, with mid-range values having full learning and extreme values a reduced learning rate: deriv = act * (1 - act) The activity should be CaSpkP and the layer maximum is used to normalize that to a 0-1 range.
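Putting the two factors together, a minimal sketch (assumed names and combination, not the package's code) of how the sigmoid-derivative and difference-based modulation described above might be combined into one learning rate factor:

// rLrateSketch multiplies a sigmoid-derivative factor (full learning at
// mid-range activity, reduced at the extremes) by a difference factor that
// falls to min when |CaSpkP - CaSpkD| is below diffThr.
func rLrateSketch(caSpkP, caSpkD, layMax, spkThr, diffThr, min float32) float32 {
	// sigmoid-derivative factor: deriv = act*(1-act), scaled so act=.5 -> 1
	act := caSpkP
	if layMax > 0 {
		act /= layMax // normalize to 0-1 range by layer maximum
	}
	sig := 4 * act * (1 - act)
	// difference factor (assumed form): min below thresholds, otherwise
	// proportional to the normalized phase-wise difference
	smax := caSpkP
	if caSpkD > smax {
		smax = caSpkD
	}
	diff := min
	if smax > spkThr {
		d := caSpkP - caSpkD
		if d < 0 {
			d = -d
		}
		if d >= diffThr {
			diff = d / smax
		}
	}
	return sig * diff
}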
func (*RLrateParams) Update ¶ added in v1.2.79
func (rl *RLrateParams) Update()
type SWtAdaptParams ¶ added in v1.2.45
type SWtAdaptParams struct {
	On bool `` /* 137-byte string literal not displayed */
	Lrate float32 `` /* 388-byte string literal not displayed */
	SubMean float32 `viewif:"On" def:"1" desc:"amount of mean to subtract from SWt delta when updating -- generally best to set to 1"`
	SigGain float32 `` /* 135-byte string literal not displayed */
	DreamVar float32 `` /* 354-byte string literal not displayed */
}
SWtAdaptParams manages adaptation of SWt values
func (*SWtAdaptParams) Defaults ¶ added in v1.2.45
func (sp *SWtAdaptParams) Defaults()
func (*SWtAdaptParams) RndVar ¶ added in v1.2.55
func (sp *SWtAdaptParams) RndVar() float32
RndVar returns the random variance (zero mean) based on DreamVar param
func (*SWtAdaptParams) Update ¶ added in v1.2.45
func (sp *SWtAdaptParams) Update()
type SWtInitParams ¶ added in v1.2.45
type SWtInitParams struct {
	SPct float32 `` /* 315-byte string literal not displayed */
	Mean float32 `` /* 199-byte string literal not displayed */
	Var float32 `def:"0.25" desc:"initial variance in weight values, prior to constraints."`
	Sym bool `` /* 149-byte string literal not displayed */
}
SWtInitParams for initial SWt values
func (*SWtInitParams) Defaults ¶ added in v1.2.45
func (sp *SWtInitParams) Defaults()
func (*SWtInitParams) RndVar ¶ added in v1.2.45
func (sp *SWtInitParams) RndVar() float32
RndVar returns the random variance in weight value (zero mean) based on Var param
func (*SWtInitParams) Update ¶ added in v1.2.45
func (sp *SWtInitParams) Update()
type SWtParams ¶ added in v1.2.45
type SWtParams struct {
	Init SWtInitParams `view:"inline" desc:"initialization of SWt values"`
	Adapt SWtAdaptParams `view:"inline" desc:"adaptation of SWt values in response to LWt learning"`
	Limit minmax.F32 `def:"{0.2 0.8}" view:"inline" desc:"range limits for SWt values"`
}
SWtParams manages structural, slowly adapting weight values (SWt), in terms of initialization and updating over course of learning. SWts impose initial and slowly adapting constraints on neuron connectivity to encourage differentiation of neuron representations and overall good behavior in terms of not hogging the representational space. The TrgAvg activity constraint is not enforced through SWt -- it needs to be more dynamic and supported by the regular learned weights.
func (*SWtParams) InitWtsSyn ¶ added in v1.3.5
InitWtsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.
func (*SWtParams) LWtFmWts ¶ added in v1.2.47
LWtFmWts returns linear, learning LWt from wt and swt. LWt is set to reproduce given Wt relative to given SWt base value.
func (*SWtParams) LinFmSigWt ¶ added in v1.2.45
LinFmSigWt returns the linear weight from the sigmoidal contrast-enhanced weight. wt is centered at 1 and normed in a range of +/- 1 around that; the return value is in the 0-1 range, centered at .5.
func (*SWtParams) SigFmLinWt ¶ added in v1.2.45
SigFmLinWt returns sigmoidal contrast-enhanced weight from linear weight, centered at 1 and normed in range +/- 1 around that in preparation for multiplying times SWt
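As a hedged illustration of the relationship documented here: the effective Wt is the structural SWt multiplied by the contrast-enhanced (sigmoidal) form of the linear LWt, which ranges 0-2 around a center of 1, and LinFmSigWt inverts that mapping back to the 0-1 LWt range. The sigmoid form and the gain of 6 below are illustrative assumptions (see SWtAdaptParams.SigGain), not the package's exact function; uses math.Pow from the standard library.

// Illustrative contrast enhancement: maps linear weight lw in [0,1]
// (centered at .5) to a factor in [0,2] (centered at 1).
func sigFmLinSketch(lw, gain float64) float64 {
	if lw <= 0 {
		return 0
	}
	if lw >= 1 {
		return 2
	}
	return 2 / (1 + math.Pow((1-lw)/lw, gain))
}

// Effective weight from structural and learned components, per the docs:
// Wt = SWt * SigFmLinWt(LWt).
func wtFmSWtLWtSketch(swt, lwt float64) float64 {
	return swt * sigFmLinSketch(lwt, 6) // gain of 6 is an assumption
}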
type SpikeNoiseParams ¶ added in v1.2.94
type SpikeNoiseParams struct {
	On bool `desc:"add noise simulating background spiking levels"`
	GeHz float32 `` /* 151-byte string literal not displayed */
	Ge float32 `` /* 150-byte string literal not displayed */
	GiHz float32 `` /* 165-byte string literal not displayed */
	Gi float32 `` /* 150-byte string literal not displayed */
	GeExpInt float32 `view:"-" json:"-" xml:"-" desc:"Exp(-Interval) which is the threshold for GeNoiseP as it is updated"`
	GiExpInt float32 `view:"-" json:"-" xml:"-" desc:"Exp(-Interval) which is the threshold for GiNoiseP as it is updated"`
}
SpikeNoiseParams parameterizes background spiking activity impinging on the neuron, simulated using a Poisson spiking process.
func (*SpikeNoiseParams) Defaults ¶ added in v1.2.94
func (an *SpikeNoiseParams) Defaults()
func (*SpikeNoiseParams) PGe ¶ added in v1.2.94
func (an *SpikeNoiseParams) PGe(p *float32) float32
PGe updates the GeNoiseP probability by multiplying in a uniform random number in [0-1], and returns the Ge noise conductance if a spike is triggered.
func (*SpikeNoiseParams) PGi ¶ added in v1.2.94
func (an *SpikeNoiseParams) PGi(p *float32) float32
PGi updates the GiNoiseP probability by multiplying in a uniform random number in [0-1], and returns the Gi noise conductance if a spike is triggered.
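A hedged sketch of the Poisson-style update these methods describe: the persistent probability p is multiplied by a uniform random number each cycle, and when it falls below Exp(-interval) (the GeExpInt / GiExpInt threshold), a noise spike is registered, p is reset, and the corresponding conductance is returned. Names are assumptions; uses rand.Float32 from math/rand.

// poissonNoiseSketch illustrates the described mechanism. expInt is
// Exp(-interval), where interval is the expected number of cycles
// between noise spikes; g is the conductance added per noise spike.
func poissonNoiseSketch(p *float32, expInt, g float32) float32 {
	*p *= rand.Float32() // accumulate product of uniform samples
	if *p <= expInt {
		*p = 1 // reset for the next interval
		return g
	}
	return 0
}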
func (*SpikeNoiseParams) Update ¶ added in v1.2.94
func (an *SpikeNoiseParams) Update()
type SpikeParams ¶
type SpikeParams struct {
	Thr float32 `` /* 152-byte string literal not displayed */
	VmR float32 `` /* 217-byte string literal not displayed */
	Tr int `` /* 242-byte string literal not displayed */
	RTau float32 `` /* 285-byte string literal not displayed */
	Exp bool `` /* 274-byte string literal not displayed */
	ExpSlope float32 `` /* 325-byte string literal not displayed */
	ExpThr float32 `` /* 127-byte string literal not displayed */
	MaxHz float32 `` /* 182-byte string literal not displayed */
	ISITau float32 `def:"5" min:"1" desc:"constant for integrating the spiking interval in estimating spiking rate"`
	ISIDt float32 `view:"-" desc:"rate = 1 / tau"`
	RDt float32 `view:"-" desc:"rate = 1 / tau"`
}
SpikeParams contains spiking activation function params. Implements a basic thresholded Vm model, and optionally the AdEx adaptive exponential function (adapt is KNaAdapt)
func (*SpikeParams) ActFmISI ¶
func (sk *SpikeParams) ActFmISI(isi, timeInc, integ float32) float32
ActFmISI computes rate-code activation from estimated spiking interval
func (*SpikeParams) ActToISI ¶
func (sk *SpikeParams) ActToISI(act, timeInc, integ float32) float32
ActToISI computes the spiking interval from a given rate-coded activation, based on time increment (.001 = 1msec default), Act.Dt.Integ
func (*SpikeParams) AvgFmISI ¶
func (sk *SpikeParams) AvgFmISI(avg *float32, isi float32)
AvgFmISI updates spiking ISI from current isi interval value
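A hedged sketch of the rate-code conversion implied by ActFmISI: the ISI corresponding to MaxHz firing is 1/(timeInc * integ * MaxHz) cycles, and activation is that minimum interval divided by the current (estimated) ISI, so firing at MaxHz yields Act = 1. The clamping below is for illustration only.

// actFmISISketch converts an inter-spike interval (in cycles) to a
// normalized rate-code activation; an illustration, not the package's code.
func actFmISISketch(isi, timeInc, integ, maxHz float32) float32 {
	if isi <= 0 {
		return 0
	}
	maxInt := 1.0 / (timeInc * integ * maxHz) // ISI at maximum firing rate
	act := maxInt / isi                       // 1 when firing at MaxHz
	if act > 1 {
		act = 1
	}
	return act
}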
func (*SpikeParams) Defaults ¶
func (sk *SpikeParams) Defaults()
func (*SpikeParams) Update ¶
func (sk *SpikeParams) Update()
type SynComParams ¶
type SynComParams struct {
	Delay int `` /* 333-byte string literal not displayed */
	PFail float32 `` /* 149-byte string literal not displayed */
	PFailSWt bool `` /* 141-byte string literal not displayed */
}
SynComParams are synaptic communication parameters: delay and probability of failure
func (*SynComParams) Defaults ¶
func (sc *SynComParams) Defaults()
func (*SynComParams) Fail ¶
func (sc *SynComParams) Fail(wt *float32, swt float32)
Fail updates failure status of given weight, given SWt value
func (*SynComParams) Update ¶
func (sc *SynComParams) Update()
func (*SynComParams) WtFail ¶
func (sc *SynComParams) WtFail(swt float32) bool
WtFail returns true if synapse should fail, as function of SWt value (optionally)
func (*SynComParams) WtFailP ¶
func (sc *SynComParams) WtFailP(swt float32) float32
WtFailP returns probability of weight (synapse) failure given current SWt value
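A hedged sketch of how these pieces fit together: the failure probability can optionally be modulated by SWt (per PFailSWt), and Fail zeroes the effective weight when a failure is drawn. The linear modulation by SWt below is an assumption illustrating the documented intent, not necessarily the package's exact form; uses rand.Float32 from math/rand.

// wtFailPSketch: probability of synaptic transmission failure, optionally
// reduced for synapses with larger structural SWt values (assumption).
func wtFailPSketch(pFail float32, pFailSWt bool, swt float32) float32 {
	if pFailSWt {
		return pFail * (1 - swt)
	}
	return pFail
}

// failSketch zeroes the effective weight if a failure is drawn this cycle.
func failSketch(wt *float32, pFail float32, pFailSWt bool, swt float32) {
	if rand.Float32() < wtFailPSketch(pFail, pFailSWt, swt) {
		*wt = 0
	}
}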
type Synapse ¶
type Synapse struct {
	CaUpT int32 `desc:"time in CycleTot of last updating of Ca values at the synapse level, for optimized synaptic-level Ca integration."`
	Wt float32 `` /* 282-byte string literal not displayed */
	LWt float32 `` /* 309-byte string literal not displayed */
	SWt float32 `` /* 466-byte string literal not displayed */
	DWt float32 `desc:"change in synaptic weight, from learning -- updates LWt which then updates Wt."`
	DSWt float32 `desc:"change in SWt slow synaptic weight -- accumulates DWt"`
	Ca float32 `desc:"Raw calcium signal for Kinase learning: SpikeG * (send.CaSyn * recv.CaSyn)"`
	CaM float32 `desc:"first stage running average (mean) Ca calcium level (like CaM = calmodulin), feeds into CaP"`
	CaP float32 `` /* 165-byte string literal not displayed */
	CaD float32 `` /* 164-byte string literal not displayed */
	Tr float32 `desc:"trace of synaptic activity over time -- used for credit assignment in learning."`
}
axon.Synapse holds state for the synaptic connection between neurons
func (*Synapse) SetVarByIndex ¶
func (*Synapse) SetVarByName ¶
SetVarByName sets synapse variable to given value
func (*Synapse) VarByIndex ¶
VarByIndex returns variable using index (0 = first variable in SynapseVars list)
type Time ¶
type Time struct {
	Phase int `desc:"phase counter: typically 0-1 for minus-plus but can be more phases for other algorithms"`
	PlusPhase bool `` /* 126-byte string literal not displayed */
	PhaseCycle int `desc:"cycle within current phase -- minus or plus"`
	Cycle int `` /* 156-byte string literal not displayed */
	CycleTot int `` /* 151-byte string literal not displayed */
	Time float32 `desc:"accumulated amount of time the network has been running, in simulation-time (not real world time), in seconds"`
	Mode string `desc:"current evaluation mode, e.g., Train, Test, etc"`
	Testing bool `` /* 179-byte string literal not displayed */
	TimePerCyc float32 `def:"0.001" desc:"amount of time to increment per cycle"`
}
axon.Time contains all the timing state and parameter information for running a model. Can also include other relevant state context, e.g., Testing vs. Training modes.
func (*Time) NewPhase ¶ added in v1.2.63
NewPhase resets PhaseCycle = 0 and sets the plus phase as specified
type TopoInhibParams ¶ added in v1.2.85
type TopoInhibParams struct {
	On bool `desc:"use topographic inhibition"`
	Width int `viewif:"On" desc:"half-width of topographic inhibition within layer"`
	Sigma float32 `viewif:"On" desc:"normalized gaussian sigma as proportion of Width, for gaussian weighting"`
	Wrap bool `viewif:"On" desc:"half-width of topographic inhibition within layer"`
	Gi float32 `viewif:"On" desc:"overall inhibition multiplier for topographic inhibition (generally <= 1)"`
	FF float32 `` /* 133-byte string literal not displayed */
	FB float32 `` /* 139-byte string literal not displayed */
	FF0 float32 `` /* 186-byte string literal not displayed */
	WidthWt float32 `inactive:"+" desc:"weight value at width -- to assess the value of Sigma"`
}
TopoInhibParams provides for topographic gaussian inhibition integrating over neighborhood.
func (*TopoInhibParams) Defaults ¶ added in v1.2.85
func (ti *TopoInhibParams) Defaults()
func (*TopoInhibParams) GiFmGeAct ¶ added in v1.2.85
func (ti *TopoInhibParams) GiFmGeAct(ge, act, ff0 float32) float32
func (*TopoInhibParams) Update ¶ added in v1.2.85
func (ti *TopoInhibParams) Update()
type TraceParams ¶ added in v1.5.1
type TraceParams struct {
	NeuronCa bool `` /* 306-byte string literal not displayed */
	Tau float32 `` /* 126-byte string literal not displayed */
	SubMean float32 `` /* 409-byte string literal not displayed */
	Dt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"`
}
TraceParams manages learning rate parameters
func (*TraceParams) Defaults ¶ added in v1.5.1
func (tp *TraceParams) Defaults()
func (*TraceParams) TrFmCa ¶ added in v1.5.1
func (tp *TraceParams) TrFmCa(tr float32, ca float32) float32
TrFmCa returns updated trace factor as function of a synaptic calcium update factor and current trace
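A minimal sketch of the trace update these params describe, assuming simple exponential integration of the synaptic calcium factor with rate Dt = 1/Tau (an illustration of the documented role of Tau / Dt, not necessarily the exact update used by the package):

// trFmCaSketch integrates the synaptic calcium factor ca into the
// running trace tr with time constant tau.
func trFmCaSketch(tr, ca, tau float32) float32 {
	dt := float32(1) / tau
	return tr + dt*(ca-tr)
}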
func (*TraceParams) Update ¶ added in v1.5.1
func (tp *TraceParams) Update()
type TrgAvgActParams ¶ added in v1.2.45
type TrgAvgActParams struct {
	On bool `desc:"whether to use target average activity mechanism to scale synaptic weights"`
	ErrLrate float32 `` /* 263-byte string literal not displayed */
	SynScaleRate float32 `` /* 231-byte string literal not displayed */
	SubMean float32 `` /* 235-byte string literal not displayed */
	TrgRange minmax.F32 `` /* 181-byte string literal not displayed */
	Permute bool `` /* 236-byte string literal not displayed */
	Pool bool `` /* 206-byte string literal not displayed */
}
TrgAvgActParams govern the target and actual long-term average activity in neurons. The target value is adapted by unit-wise error and the difference between actual and target activity, and drives synaptic scaling and baseline excitatory drive.
func (*TrgAvgActParams) Defaults ¶ added in v1.2.45
func (ta *TrgAvgActParams) Defaults()
func (*TrgAvgActParams) Update ¶ added in v1.2.45
func (ta *TrgAvgActParams) Update()