Documentation ¶
Overview ¶
Package axon provides the basic reference axon implementation, using spiking activations and standard error-driven learning. Other packages provide deep axon, PVLV, PBWM, etc.
The overall design seeks an "optimal" tradeoff between simplicity, transparency, the ability to flexibly recombine and extend elements, and avoiding the need to rewrite a lot of existing code.
The *Stru elements handle the core structural components of the network, and hold emer.* interface pointers to elements such as emer.Layer, which provides a very minimal interface for these elements. Interface values in Go already behave like pointers to the underlying concrete type, so think of these as generic pointers to your specific Layers, etc.
This design means the same *Stru infrastructure can be re-used across different variants of the algorithm. Because we're keeping this infrastructure minimal and algorithm-free it should be much less confusing than dealing with the multiple levels of inheritance in C++ emergent. The actual algorithm-specific code is now fully self-contained, and largely orthogonalized from the infrastructure.
One specific cost of this is the need to cast the emer.* interface pointers into the specific types of interest, when accessing via the *Stru infrastructure.
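For example, a minimal sketch of the typical cast, assuming a network value net and a layer named "Hidden" (both hypothetical names in this sketch):

    hid := net.LayerByName("Hidden").(axon.AxonLayer).AsAxon() // emer.Layer -> *axon.Layer
    _ = hid.Neurons                                            // algorithm-specific state is now accessible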
The *Params elements contain all the (meta)parameters and associated methods for computing various functions. They are the equivalent of Specs from the original emergent, but unlike Specs they are local to each place they are used, and styling is used to apply common parameters across multiple layers, etc. Params seems like a more explicit, recognizable name compared to Specs, and it also helps avoid confusion about their different nature relative to the old Specs. Pars is shorter but confusable with "Parents", so "Params" is unambiguous.
Params are organized into four major categories, labeled functionally rather than just structurally, to keep things clearer and better organized overall (see the sketch below for where these live):
- ActParams -- activation params, at the Neuron level (in act.go)
- InhibParams -- inhibition params, at the Layer / Pool level (in inhib.go)
- LearnNeurParams -- learning parameters at the Neuron level (running-averages that drive learning)
- LearnSynParams -- learning parameters at the Synapse level (both in learn.go)
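A minimal sketch of where these live, assuming ly is a *axon.Layer and pj is a *axon.Prjn (hypothetical variable names; the Prjn field name Learn is also an assumption of this sketch):

    ly.Act.Noise.Fixed = true   // ActParams (act.go): neuron-level activation params
    ly.Inhib.ActAvg.Init = 0.08 // InhibParams (inhib.go): layer / pool-level inhibition params
    _ = ly.Learn                // LearnNeurParams (learn.go): neuron-level learning averages
    _ = pj.Learn                // LearnSynParams (learn.go): synapse-level learning params (field name assumed)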
The levels of structure and state are:
- Network
  - .Layers
    - .Pools: pooled inhibition state -- 1 for the layer plus 1 for each sub-pool (unit group) with inhibition
    - .RecvPrjns: receiving projections from other sending layers
    - .SendPrjns: sending projections to other receiving layers
    - .Neurons: neuron state variables
There are methods on the Network that perform initialization and overall computation, by iterating over layers and calling methods there. This is typically how most users will run their models.
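A hedged sketch of a typical trial loop using these Network methods; the 150 + 50 cycle split, the axon.NewTime constructor, and ltime.CycleInc are assumptions here rather than part of the API shown in this excerpt:

    ltime := axon.NewTime() // assumed constructor for the Time struct
    nt.NewState()           // start of new input pattern (external inputs already applied)
    for cyc := 0; cyc < 150; cyc++ { // minus phase
        nt.Cycle(ltime)
        ltime.CycleInc()
    }
    nt.MinusPhase(ltime)
    for cyc := 0; cyc < 50; cyc++ { // plus phase
        nt.Cycle(ltime)
        ltime.CycleInc()
    }
    nt.PlusPhase(ltime)
    nt.DWt()     // compute weight changes (learning)
    nt.WtFmDWt() // apply weight changes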
Parallel computation across multiple CPU cores (threading) is achieved through persistent worker goroutines that listen for functions to run on thread-specific channels. Each layer has a designated thread number, so you can experiment with different ways of dividing up the computation. Per-thread timing data is also collected -- see TimerReport() on the network.
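For example, a hedged sketch of assigning layers to threads and checking timing (the layer names are hypothetical):

    nt.LayerByName("V1").SetThread(0) // put this layer's computation on thread 0
    nt.LayerByName("V2").SetThread(1) // and this one on thread 1
    nt.ThreadAlloc(2)                 // or let the network allocate layers across 2 threads by estimated cost
    // ... run the model ...
    nt.TimerReport()                  // per-function, per-thread timing summary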
The Layer methods directly iterate over Neurons, Pools, and Prjns, and there is no finer-grained level of computation (e.g., at the individual Neuron level), except for the *Params methods that directly compute the relevant functions. Thus, looking directly at the layer.go code should provide a clear sense of exactly how everything is computed -- you may need to refer to act.go, learn.go, etc. to see the relevant details, but at least the overall organization should be clear in layer.go.
Computational methods are generally named: VarFmVar to specifically name what variable is being computed from what other input variables. e.g., ActFmG computes activation from conductances G.
The Pools (type Pool, in pool.go) hold state used for computing pooled inhibition, but also are used to hold overall aggregate pooled state variables -- the first element in Pools applies to the layer itself, and subsequent ones are for each sub-pool (4D layers). These pools play the same role as the AxonUnGpState structures in C++ emergent.
Prjns directly support all synapse-level computation, and hold the LearnSynParams and iterate directly over all of their synapses. It is the exact same Prjn object that lives in the RecvPrjns of the receiver-side, and the SendPrjns of the sender-side, and it maintains and coordinates both sides of the state. This clarifies and simplifies a lot of code. There is no separate equivalent of AxonConSpec / AxonConState at the level of connection groups per unit per projection.
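For example, a hedged sketch of inspecting one synapse on a projection, assuming hid is a *axon.Layer as in the earlier sketch (the indexes are hypothetical):

    pj := hid.RecvPrjn(0).(axon.AxonPrjn).AsAxon() // same Prjn object that the sender holds in its SendPrjns
    wt := pj.SynVal("Wt", 3, 7)                    // weight from sending neuron 3 to receiving neuron 7
    _ = wt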
The pattern of connectivity between units is specified by the prjn.Pattern interface, and all the standard options are available in the prjn package. The Pattern code generates a full tensor bitmap of binary 1's and 0's for connected (1's) and not-connected (0's) units, and can use any method to do so. This full lookup-table approach is not the most memory-efficient, but it is fully general and shouldn't be too bad memory-wise overall (fully bit-packed arrays are used, and these bitmaps don't need to be retained once connections have been established). This approach allows the Pattern implementations to focus purely on the pattern of connectivity, without any knowledge of how it is used to allocate the actual connections.
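Putting these pieces together, a hedged sketch of constructing a small network (the layer names, sizes, and the prjn.NewFull pattern constructor are assumptions of this sketch):

    nt := &axon.Network{}
    nt.InitName(nt, "TinyNet")
    in := nt.AddLayer2D("Input", 5, 5, emer.Input)
    hid := nt.AddLayer2D("Hidden", 8, 8, emer.Hidden)
    out := nt.AddLayer2D("Output", 5, 5, emer.Target)
    full := prjn.NewFull() // full bitmap: every sender connects to every receiver
    nt.ConnectLayers(in, hid, full, emer.Forward)
    nt.BidirConnectLayers(hid, out, full)
    nt.Defaults()
    if err := nt.Build(); err != nil { // allocates Neurons, Pools, Prjns, and Synapses
        panic(err)
    }
    nt.InitWts()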
Index ¶
- Constants
- Variables
- func JsonToParams(b []byte) string
- func NeuronVarIdxByName(varNm string) (int, error)
- func SigFun(w, gain, off float32) float32
- func SigFun61(w float32) float32
- func SigInvFun(w, gain, off float32) float32
- func SigInvFun61(w float32) float32
- func SynapseVarByName(varNm string) (int, error)
- type ActAvgParams
- type ActAvgVals
- type ActInitParams
- type ActNoiseParams
- type ActNoiseTypes
- type ActParams
- func (ac *ActParams) ActFmG(nrn *Neuron)
- func (ac *ActParams) BurstGe(cyc int, actm float32) float32
- func (ac *ActParams) DecayState(nrn *Neuron, decay float32)
- func (ac *ActParams) Defaults()
- func (ac *ActParams) GeFmRaw(nrn *Neuron, geRaw float32, cyc int, actm float32)
- func (ac *ActParams) GiFmRaw(nrn *Neuron, giRaw float32)
- func (ac *ActParams) HasRateClamp(nrn *Neuron) bool
- func (ac *ActParams) InetFmG(vm, ge, gi, gk float32) float32
- func (ac *ActParams) InitActs(nrn *Neuron)
- func (ac *ActParams) InitLongActs(nrn *Neuron)
- func (ac *ActParams) RateClamp(nrn *Neuron)
- func (ac *ActParams) Update()
- func (ac *ActParams) VmFmG(nrn *Neuron)
- type AttnParams
- type AxonLayer
- type AxonNetwork
- type AxonPrjn
- type ClampParams
- type ClampTypes
- type CosDiffStats
- type DecayParams
- type DtParams
- type GScaleVals
- type GTargParams
- type HebbPrjn
- type InhibMiscParams
- type InhibParams
- type LayFunChan
- type Layer
- func (ly *Layer) ActFmG(ltime *Time)
- func (ly *Layer) ActSt1(ltime *Time)
- func (ly *Layer) ActSt2(ltime *Time)
- func (ly *Layer) AdaptGScale()
- func (ly *Layer) AdaptInhib()
- func (ly *Layer) AllParams() string
- func (ly *Layer) ApplyExt(ext etensor.Tensor)
- func (ly *Layer) ApplyExt1D(ext []float64)
- func (ly *Layer) ApplyExt1D32(ext []float32)
- func (ly *Layer) ApplyExt1DTsr(ext etensor.Tensor)
- func (ly *Layer) ApplyExt2D(ext etensor.Tensor)
- func (ly *Layer) ApplyExt2Dto4D(ext etensor.Tensor)
- func (ly *Layer) ApplyExt4D(ext etensor.Tensor)
- func (ly *Layer) ApplyExtFlags() (clrmsk, setmsk int32, toTarg bool)
- func (ly *Layer) AsAxon() *Layer
- func (ly *Layer) AvgGeM(ltime *Time)
- func (ly *Layer) AvgMaxAct(ltime *Time)
- func (ly *Layer) AvgMaxGe(ltime *Time)
- func (ly *Layer) Build() error
- func (ly *Layer) BuildPools(nu int) error
- func (ly *Layer) BuildPrjns() error
- func (ly *Layer) BuildSubPools()
- func (ly *Layer) ClearTargExt()
- func (ly *Layer) CosDiffFmActs()
- func (ly *Layer) CostEst() (neur, syn, tot int)
- func (ly *Layer) CyclePost(ltime *Time)
- func (ly *Layer) DTrgAvgFmErr()
- func (ly *Layer) DTrgAvgSubMean()
- func (ly *Layer) DWt()
- func (ly *Layer) DecayState(decay float32)
- func (ly *Layer) DecayStatePool(pool int, decay float32)
- func (ly *Layer) Defaults()
- func (ly *Layer) GFmInc(ltime *Time)
- func (ly *Layer) GFmIncNeur(ltime *Time)
- func (ly *Layer) GenNoise()
- func (ly *Layer) HasPoolInhib() bool
- func (ly *Layer) InhibFmGeAct(ltime *Time)
- func (ly *Layer) InhibFmPool(ltime *Time)
- func (ly *Layer) InitActAvg()
- func (ly *Layer) InitActs()
- func (ly *Layer) InitExt()
- func (ly *Layer) InitGScale()
- func (ly *Layer) InitWtSym()
- func (ly *Layer) InitWts()
- func (ly *Layer) IsInput() bool
- func (ly *Layer) IsLearnTrgAvg() bool
- func (ly *Layer) IsTarget() bool
- func (ly *Layer) LesionNeurons(prop float32) int
- func (ly *Layer) LrateMod(mod float32)
- func (ly *Layer) LrateSched(sched float32)
- func (ly *Layer) MinusPhase(ltime *Time)
- func (ly *Layer) NewState()
- func (ly *Layer) PctUnitErr() float64
- func (ly *Layer) PlusPhase(ltime *Time)
- func (ly *Layer) Pool(idx int) *Pool
- func (ly *Layer) PoolInhibFmGeAct(ltime *Time)
- func (ly *Layer) PoolTry(idx int) (*Pool, error)
- func (ly *Layer) RateClamp()
- func (ly *Layer) ReadWtsJSON(r io.Reader) error
- func (ly *Layer) RecvGInc(ltime *Time)
- func (ly *Layer) RecvPrjnVals(vals *[]float32, varNm string, sendLay emer.Layer, sendIdx1D int, ...) error
- func (ly *Layer) SendPrjnVals(vals *[]float32, varNm string, recvLay emer.Layer, recvIdx1D int, ...) error
- func (ly *Layer) SendSpike(ltime *Time)
- func (ly *Layer) SetWts(lw *weights.Layer) error
- func (ly *Layer) SlowAdapt()
- func (ly *Layer) SynFail()
- func (ly *Layer) SynScale()
- func (ly *Layer) TargToExt()
- func (ly *Layer) TopoGi(ltime *Time)
- func (ly *Layer) TrgAvgFmD()
- func (ly *Layer) UnLesionNeurons()
- func (ly *Layer) UnitVal(varNm string, idx []int) float32
- func (ly *Layer) UnitVal1D(varIdx int, idx int) float32
- func (ly *Layer) UnitVals(vals *[]float32, varNm string) error
- func (ly *Layer) UnitValsTensor(tsr etensor.Tensor, varNm string) error
- func (ly *Layer) UnitVarIdx(varNm string) (int, error)
- func (ly *Layer) UnitVarNames() []string
- func (ly *Layer) UnitVarNum() int
- func (ly *Layer) UnitVarProps() map[string]string
- func (ly *Layer) UpdateExtFlags()
- func (ly *Layer) UpdateParams()
- func (ly *Layer) VarRange(varNm string) (min, max float32, err error)
- func (ly *Layer) WriteWtsJSON(w io.Writer, depth int)
- func (ly *Layer) WtFmDWt()
- type LayerStru
- func (ls *LayerStru) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (ls *LayerStru) Class() string
- func (ls *LayerStru) Config(shape []int, typ emer.LayerType)
- func (ls *LayerStru) Idx4DFrom2D(x, y int) ([]int, bool)
- func (ls *LayerStru) Index() int
- func (ls *LayerStru) InitName(lay emer.Layer, name string, net emer.Network)
- func (ls *LayerStru) Is2D() bool
- func (ls *LayerStru) Is4D() bool
- func (ls *LayerStru) IsOff() bool
- func (ls *LayerStru) Label() string
- func (ls *LayerStru) NPools() int
- func (ls *LayerStru) NRecvPrjns() int
- func (ls *LayerStru) NSendPrjns() int
- func (ls *LayerStru) Name() string
- func (ls *LayerStru) NonDefaultParams() string
- func (ls *LayerStru) Pos() mat32.Vec3
- func (ls *LayerStru) RecipToSendPrjn(spj emer.Prjn) (emer.Prjn, bool)
- func (ls *LayerStru) RecvPrjn(idx int) emer.Prjn
- func (ls *LayerStru) RecvPrjns() *emer.Prjns
- func (ls *LayerStru) RelPos() relpos.Rel
- func (ls *LayerStru) SendPrjn(idx int) emer.Prjn
- func (ls *LayerStru) SendPrjns() *emer.Prjns
- func (ls *LayerStru) SetClass(cls string)
- func (ls *LayerStru) SetIndex(idx int)
- func (ls *LayerStru) SetName(nm string)
- func (ls *LayerStru) SetOff(off bool)
- func (ls *LayerStru) SetPos(pos mat32.Vec3)
- func (ls *LayerStru) SetRelPos(rel relpos.Rel)
- func (ls *LayerStru) SetShape(shape []int)
- func (ls *LayerStru) SetThread(thr int)
- func (ls *LayerStru) SetType(typ emer.LayerType)
- func (ls *LayerStru) Shape() *etensor.Shape
- func (ls *LayerStru) Size() mat32.Vec2
- func (ls *LayerStru) Thread() int
- func (ls *LayerStru) Type() emer.LayerType
- func (ls *LayerStru) TypeName() string
- type LearnNeurParams
- type LearnSynParams
- type LrateMod
- type LrateParams
- type LrnActAvgParams
- type NMDAPrjn
- type Network
- func (nt *Network) ActFmG(ltime *Time)
- func (nt *Network) ActSt1(ltime *Time)
- func (nt *Network) ActSt2(ltime *Time)
- func (nt *Network) AsAxon() *Network
- func (nt *Network) AvgMaxAct(ltime *Time)
- func (nt *Network) AvgMaxGe(ltime *Time)
- func (nt *Network) ClearTargExt()
- func (nt *Network) CollectDWts(dwts *[]float32) bool
- func (nt *Network) Cycle(ltime *Time)
- func (nt *Network) CycleImpl(ltime *Time)
- func (nt *Network) CyclePost(ltime *Time)
- func (nt *Network) CyclePostImpl(ltime *Time)
- func (nt *Network) DWt()
- func (nt *Network) DWtImpl()
- func (nt *Network) DecayState(decay float32)
- func (nt *Network) Defaults()
- func (nt *Network) InhibFmGeAct(ltime *Time)
- func (nt *Network) InitActs()
- func (nt *Network) InitExt()
- func (nt *Network) InitGScale()
- func (nt *Network) InitTopoSWts()
- func (nt *Network) InitWts()
- func (nt *Network) LayersSetOff(off bool)
- func (nt *Network) LrateMod(mod float32)
- func (nt *Network) LrateSched(sched float32)
- func (nt *Network) MinusPhase(ltime *Time)
- func (nt *Network) MinusPhaseImpl(ltime *Time)
- func (nt *Network) NewLayer() emer.Layer
- func (nt *Network) NewPrjn() emer.Prjn
- func (nt *Network) NewState()
- func (nt *Network) NewStateImpl()
- func (nt *Network) PlusPhase(ltime *Time)
- func (nt *Network) PlusPhaseImpl(ltime *Time)
- func (nt *Network) SendSpike(ltime *Time)
- func (nt *Network) SetDWts(dwts []float32, navg int)
- func (nt *Network) SizeReport() string
- func (nt *Network) SlowAdapt()
- func (nt *Network) SynFail()
- func (nt *Network) SynVarNames() []string
- func (nt *Network) SynVarProps() map[string]string
- func (nt *Network) TargToExt()
- func (nt *Network) ThreadAlloc(nThread int) string
- func (nt *Network) ThreadReport() string
- func (nt *Network) UnLesionNeurons()
- func (nt *Network) UnitVarNames() []string
- func (nt *Network) UnitVarProps() map[string]string
- func (nt *Network) UpdateExtFlags()
- func (nt *Network) UpdateParams()
- func (nt *Network) WtFmDWt()
- func (nt *Network) WtFmDWtImpl()
- type NetworkStru
- func (nt *NetworkStru) AddLayer(name string, shape []int, typ emer.LayerType) emer.Layer
- func (nt *NetworkStru) AddLayer2D(name string, shapeY, shapeX int, typ emer.LayerType) emer.Layer
- func (nt *NetworkStru) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ emer.LayerType) emer.Layer
- func (nt *NetworkStru) AddLayerInit(ly emer.Layer, name string, shape []int, typ emer.LayerType)
- func (nt *NetworkStru) AllParams() string
- func (nt *NetworkStru) AllPrjnScales() string
- func (nt *NetworkStru) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (nt *NetworkStru) BidirConnectLayerNames(low, high string, pat prjn.Pattern) (lowlay, highlay emer.Layer, fwdpj, backpj emer.Prjn, err error)
- func (nt *NetworkStru) BidirConnectLayers(low, high emer.Layer, pat prjn.Pattern) (fwdpj, backpj emer.Prjn)
- func (nt *NetworkStru) BidirConnectLayersPy(low, high emer.Layer, pat prjn.Pattern)
- func (nt *NetworkStru) Bounds() (min, max mat32.Vec3)
- func (nt *NetworkStru) BoundsUpdt()
- func (nt *NetworkStru) Build() error
- func (nt *NetworkStru) BuildThreads()
- func (nt *NetworkStru) ConnectLayerNames(send, recv string, pat prjn.Pattern, typ emer.PrjnType) (rlay, slay emer.Layer, pj emer.Prjn, err error)
- func (nt *NetworkStru) ConnectLayers(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType) emer.Prjn
- func (nt *NetworkStru) ConnectLayersPrjn(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType, pj emer.Prjn) emer.Prjn
- func (nt *NetworkStru) FunTimerStart(fun string)
- func (nt *NetworkStru) FunTimerStop(fun string)
- func (nt *NetworkStru) InitName(net emer.Network, name string)
- func (nt *NetworkStru) Label() string
- func (nt *NetworkStru) LateralConnectLayer(lay emer.Layer, pat prjn.Pattern) emer.Prjn
- func (nt *NetworkStru) LateralConnectLayerPrjn(lay emer.Layer, pat prjn.Pattern, pj emer.Prjn) emer.Prjn
- func (nt *NetworkStru) Layer(idx int) emer.Layer
- func (nt *NetworkStru) LayerByName(name string) emer.Layer
- func (nt *NetworkStru) LayerByNameTry(name string) (emer.Layer, error)
- func (nt *NetworkStru) Layout()
- func (nt *NetworkStru) MakeLayMap()
- func (nt *NetworkStru) NLayers() int
- func (nt *NetworkStru) Name() string
- func (nt *NetworkStru) NonDefaultParams() string
- func (nt *NetworkStru) OpenWtsCpp(filename gi.FileName) error
- func (nt *NetworkStru) OpenWtsJSON(filename gi.FileName) error
- func (nt *NetworkStru) ReadWtsCpp(r io.Reader) error
- func (nt *NetworkStru) ReadWtsJSON(r io.Reader) error
- func (nt *NetworkStru) SaveWtsJSON(filename gi.FileName) error
- func (nt *NetworkStru) SetWts(nw *weights.Network) error
- func (nt *NetworkStru) StartThreads()
- func (nt *NetworkStru) StdVertLayout()
- func (nt *NetworkStru) StopThreads()
- func (nt *NetworkStru) ThrLayFun(fun func(ly AxonLayer), funame string)
- func (nt *NetworkStru) ThrTimerReset()
- func (nt *NetworkStru) ThrWorker(tt int)
- func (nt *NetworkStru) TimerReport()
- func (nt *NetworkStru) VarRange(varNm string) (min, max float32, err error)
- func (nt *NetworkStru) WriteWtsJSON(w io.Writer) error
- type NeurFlags
- type Neuron
- func (nrn *Neuron) ClearFlag(flag NeurFlags)
- func (nrn *Neuron) ClearMask(mask int32)
- func (nrn *Neuron) HasFlag(flag NeurFlags) bool
- func (nrn *Neuron) IsOff() bool
- func (nrn *Neuron) SetFlag(flag NeurFlags)
- func (nrn *Neuron) SetMask(mask int32)
- func (nrn *Neuron) VarByIndex(idx int) float32
- func (nrn *Neuron) VarByName(varNm string) (float32, error)
- func (nrn *Neuron) VarNames() []string
- type Pool
- type Prjn
- func (pj *Prjn) AllParams() string
- func (pj *Prjn) AsAxon() *Prjn
- func (pj *Prjn) Build() error
- func (pj *Prjn) DWt()
- func (pj *Prjn) Defaults()
- func (pj *Prjn) InitGbuf()
- func (pj *Prjn) InitWtSym(rpjp AxonPrjn)
- func (pj *Prjn) InitWts()
- func (pj *Prjn) InitWtsSyn(sy *Synapse, mean, spct float32)
- func (pj *Prjn) LrateMod(mod float32)
- func (pj *Prjn) LrateSched(sched float32)
- func (pj *Prjn) ReadWtsJSON(r io.Reader) error
- func (pj *Prjn) RecvGInc(ltime *Time)
- func (pj *Prjn) RecvGIncNoStats()
- func (pj *Prjn) RecvGIncStats()
- func (pj *Prjn) SWtFmWt()
- func (pj *Prjn) SWtRescale()
- func (pj *Prjn) SendSpike(si int)
- func (pj *Prjn) SetClass(cls string) emer.Prjn
- func (pj *Prjn) SetPattern(pat prjn.Pattern) emer.Prjn
- func (pj *Prjn) SetSWtsFunc(swtFun func(si, ri int, send, recv *etensor.Shape) float32)
- func (pj *Prjn) SetSWtsRPool(swts etensor.Tensor)
- func (pj *Prjn) SetSynVal(varNm string, sidx, ridx int, val float32) error
- func (pj *Prjn) SetType(typ emer.PrjnType) emer.Prjn
- func (pj *Prjn) SetWts(pw *weights.Prjn) error
- func (pj *Prjn) SetWtsFunc(wtFun func(si, ri int, send, recv *etensor.Shape) float32)
- func (pj *Prjn) SlowAdapt()
- func (pj *Prjn) SynFail()
- func (pj *Prjn) SynIdx(sidx, ridx int) int
- func (pj *Prjn) SynScale()
- func (pj *Prjn) SynVal(varNm string, sidx, ridx int) float32
- func (pj *Prjn) SynVal1D(varIdx int, synIdx int) float32
- func (pj *Prjn) SynVals(vals *[]float32, varNm string) error
- func (pj *Prjn) SynVarIdx(varNm string) (int, error)
- func (pj *Prjn) SynVarNames() []string
- func (pj *Prjn) SynVarNum() int
- func (pj *Prjn) SynVarProps() map[string]string
- func (pj *Prjn) UpdateParams()
- func (pj *Prjn) WriteWtsJSON(w io.Writer, depth int)
- func (pj *Prjn) WtFmDWt()
- type PrjnScaleParams
- type PrjnStru
- func (ps *PrjnStru) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (ps *PrjnStru) BuildStru() error
- func (ps *PrjnStru) Class() string
- func (ps *PrjnStru) Connect(slay, rlay emer.Layer, pat prjn.Pattern, typ emer.PrjnType)
- func (ps *PrjnStru) Init(prjn emer.Prjn)
- func (ps *PrjnStru) IsOff() bool
- func (ps *PrjnStru) Label() string
- func (ps *PrjnStru) Name() string
- func (ps *PrjnStru) NonDefaultParams() string
- func (ps *PrjnStru) Pattern() prjn.Pattern
- func (ps *PrjnStru) PrjnTypeName() string
- func (ps *PrjnStru) RecvLay() emer.Layer
- func (ps *PrjnStru) SendLay() emer.Layer
- func (ps *PrjnStru) SetNIdxSt(n *[]int32, avgmax *minmax.AvgMax32, idxst *[]int32, tn *etensor.Int32) int32
- func (ps *PrjnStru) SetOff(off bool)
- func (ps *PrjnStru) String() string
- func (ps *PrjnStru) Type() emer.PrjnType
- func (ps *PrjnStru) TypeName() string
- func (ps *PrjnStru) Validate(logmsg bool) error
- type PrjnType
- type RLrateParams
- type SWtAdaptParams
- type SWtInitParams
- type SWtParams
- func (sp *SWtParams) ClipSWt(swt float32) float32
- func (sp *SWtParams) ClipWt(wt float32) float32
- func (sp *SWtParams) Defaults()
- func (sp *SWtParams) LWtFmWts(wt, swt float32) float32
- func (sp *SWtParams) LinFmSigWt(wt float32) float32
- func (sp *SWtParams) SigFmLinWt(lw float32) float32
- func (sp *SWtParams) Update()
- func (sp *SWtParams) WtFmDWt(dwt, wt, lwt *float32, swt float32)
- func (sp *SWtParams) WtVal(swt, lwt float32) float32
- type SelfInhibParams
- type SpikeParams
- type SynComParams
- type Synapse
- type Time
- type TimeScales
- type TopoInhibParams
- type TrgAvgActParams
- type XCalParams
Constants ¶
const (
	Version     = "v1.2.92"
	GitCommit   = "d983db4"          // the commit JUST BEFORE the release
	VersionDate = "2021-11-22 13:07" // UTC
)
const (
	// NMDAPrjn are projections that have strong NMDA channels supporting maintenance
	NMDA emer.PrjnType = emer.PrjnType(emer.PrjnTypeN) + iota
)
The GLong prjn types
const NeuronVarStart = 8
NeuronVarStart is the byte offset of fields in the Neuron structure where the float32 named variables start. Note: all non-float32 infrastructure variables must be at the start!
Variables ¶
var KiT_ActNoiseTypes = kit.Enums.AddEnum(ActNoiseTypesN, kit.NotBitFlag, nil)
var KiT_ClampTypes = kit.Enums.AddEnum(ClampTypesN, kit.NotBitFlag, nil)
var KiT_Layer = kit.Types.AddType(&Layer{}, LayerProps)
var KiT_NMDAPrjn = kit.Types.AddType(&NMDAPrjn{}, PrjnProps)
var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
var KiT_NeurFlags = kit.Enums.AddEnum(NeurFlagsN, kit.BitFlag, nil)
var KiT_Prjn = kit.Types.AddType(&Prjn{}, PrjnProps)
var KiT_PrjnType = kit.Enums.AddEnumExt(emer.KiT_PrjnType, PrjnTypeN, kit.NotBitFlag, nil)
var KiT_TimeScales = kit.Enums.AddEnum(TimeScalesN, kit.NotBitFlag, nil)
var LayerProps = ki.Props{
	"ToolBar": ki.PropSlice{
		{"Defaults", ki.Props{
			"icon": "reset",
			"desc": "return all parameters to their initial default values",
		}},
		{"InitWts", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's weight values according to prjn parameters, for all *sending* projections out of this layer",
		}},
		{"InitActs", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's activation values",
		}},
		{"sep-act", ki.BlankProp{}},
		{"LesionNeurons", ki.Props{
			"icon": "close",
			"desc": "Lesion (set the Off flag) for given proportion of neurons in the layer (number must be 0 -- 1, NOT percent!)",
			"Args": ki.PropSlice{
				{"Proportion", ki.Props{
					"desc": "proportion (0 -- 1) of neurons to lesion",
				}},
			},
		}},
		{"UnLesionNeurons", ki.Props{
			"icon": "reset",
			"desc": "Un-Lesion (reset the Off flag) for all neurons in the layer",
		}},
	},
}
var NetworkProps = ki.Props{ "ToolBar": ki.PropSlice{ {"SaveWtsJSON", ki.Props{ "label": "Save Wts...", "icon": "file-save", "desc": "Save json-formatted weights", "Args": ki.PropSlice{ {"Weights File Name", ki.Props{ "default-field": "WtsFile", "ext": ".wts,.wts.gz", }}, }, }}, {"OpenWtsJSON", ki.Props{ "label": "Open Wts...", "icon": "file-open", "desc": "Open json-formatted weights", "Args": ki.PropSlice{ {"Weights File Name", ki.Props{ "default-field": "WtsFile", "ext": ".wts,.wts.gz", }}, }, }}, {"sep-file", ki.BlankProp{}}, {"Build", ki.Props{ "icon": "update", "desc": "build the network's neurons and synapses according to current params", }}, {"InitWts", ki.Props{ "icon": "update", "desc": "initialize the network weight values according to prjn parameters", }}, {"InitActs", ki.Props{ "icon": "update", "desc": "initialize the network activation values", }}, {"sep-act", ki.BlankProp{}}, {"AddLayer", ki.Props{ "label": "Add Layer...", "icon": "new", "desc": "add a new layer to network", "Args": ki.PropSlice{ {"Layer Name", ki.Props{}}, {"Layer Shape", ki.Props{ "desc": "shape of layer, typically 2D (Y, X) or 4D (Pools Y, Pools X, Units Y, Units X)", }}, {"Layer Type", ki.Props{ "desc": "type of layer -- used for determining how inputs are applied", }}, }, }}, {"ConnectLayerNames", ki.Props{ "label": "Connect Layers...", "icon": "new", "desc": "add a new connection between layers in the network", "Args": ki.PropSlice{ {"Send Layer Name", ki.Props{}}, {"Recv Layer Name", ki.Props{}}, {"Pattern", ki.Props{ "desc": "pattern to connect with", }}, {"Prjn Type", ki.Props{ "desc": "type of projection -- direction, or other more specialized factors", }}, }, }}, {"AllPrjnScales", ki.Props{ "icon": "file-sheet", "desc": "AllPrjnScales returns a listing of all PrjnScale parameters in the Network in all Layers, Recv projections. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.", "show-return": true, }}, }, }
var NeuronVarProps = map[string]string{
"Vm": `min:"0" max:"1"`,
"VmDend": `min:"0" max:"1"`,
"ISI": `auto-scale:"+"`,
"ISIAvg": `auto-scale:"+"`,
"Gi": `auto-scale:"+"`,
"Gk": `auto-scale:"+"`,
"ActDel": `auto-scale:"+"`,
"ActDif": `auto-scale:"+"`,
"AvgPct": `min:"-2" max:"2"`,
"TrgAvg": `min:"-2" max:"2"`,
"DTrgAvg": `auto-scale:"+"`,
"GknaFast": `auto-scale:"+"`,
"GknaMed": `auto-scale:"+"`,
"GknaSlow": `auto-scale:"+"`,
"Gnmda": `auto-scale:"+"`,
"NMDA": `auto-scale:"+"`,
"GgabaB": `auto-scale:"+"`,
"GABAB": `auto-scale:"+"`,
"GABABx": `auto-scale:"+"`,
}
var NeuronVars = []string{"Spike", "ISI", "ISIAvg", "Act", "ActInt", "Ge", "Gi", "Gk", "Inet", "Vm", "VmDend", "Targ", "Ext", "AvgSS", "AvgS", "AvgM", "AvgSLrn", "AvgMLrn", "ActSt1", "ActSt2", "ActM", "ActP", "ActDif", "ActDel", "ActPrv", "RLrate", "ActAvg", "AvgPct", "TrgAvg", "DTrgAvg", "AvgDif", "Noise", "GiSyn", "GiSelf", "GeRaw", "GiRaw", "GeM", "GiM", "GknaFast", "GknaMed", "GknaSlow", "Gnmda", "NMDA", "NMDASyn", "GgabaB", "GABAB", "GABABx", "Attn"}
var NeuronVarsMap map[string]int
var PrjnProps = ki.Props{ "EnumType:Typ": KiT_PrjnType, }
var SynapseVarProps = map[string]string{
"DWt": `auto-scale:"+"`,
"DSWt": `auto-scale:"+"`,
}
var SynapseVars = []string{"Wt", "SWt", "LWt", "DWt", "DSWt"}
var SynapseVarsMap map[string]int
Functions ¶
func JsonToParams ¶
JsonToParams reformats JSON output into a suitable params display output
func NeuronVarIdxByName ¶
NeuronVarIdxByName returns the index of the variable in the Neuron, or error
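For example, a hedged sketch of looking up a variable index by name and reading it from a neuron (nrn is assumed to be a *axon.Neuron obtained elsewhere; "Act" is taken from NeuronVars below):

    idx, err := axon.NeuronVarIdxByName("Act")
    if err != nil {
        panic(err)
    }
    act := nrn.VarByIndex(idx) // same value as nrn.VarByName("Act"), without the name lookup each time
    _ = act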
func SigFun61 ¶
SigFun61 is the sigmoid function for value w in 0-1 range, with default gain = 6, offset = 1 params
func SigInvFun61 ¶
SigInvFun61 is the inverse of the sigmoid function, with default gain = 6, offset = 1 params
func SynapseVarByName ¶
SynapseVarByName returns the index of the variable in the Synapse, or error
Types ¶
type ActAvgParams ¶
type ActAvgParams struct { InhTau float32 `` /* 249-byte string literal not displayed */ Init float32 `` /* 166-byte string literal not displayed */ AdaptGi bool `` /* 126-byte string literal not displayed */ Targ float32 `` /* 151-byte string literal not displayed */ HiTol float32 `` /* 263-byte string literal not displayed */ LoTol float32 `` /* 263-byte string literal not displayed */ AdaptRate float32 `` /* 182-byte string literal not displayed */ InhDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"` }
ActAvgParams represents the expected average activity levels in the layer. Used for computing the running-average activation that is then used for G scaling. Also specifies the time constant for updating the average, and the target value for adapting inhibition in inhib_adapt.
func (*ActAvgParams) Adapt ¶ added in v1.2.37
func (aa *ActAvgParams) Adapt(gimult *float32, trg, act float32) bool
Adapt adapts the given gi multiplier factor as function of target and actual average activation, given current params.
func (*ActAvgParams) AvgFmAct ¶
func (aa *ActAvgParams) AvgFmAct(avg *float32, act float32, dt float32)
AvgFmAct updates the running-average activation given average activity level in layer
func (*ActAvgParams) Defaults ¶
func (aa *ActAvgParams) Defaults()
func (*ActAvgParams) Update ¶
func (aa *ActAvgParams) Update()
type ActAvgVals ¶ added in v1.2.32
type ActAvgVals struct { ActMAvg float32 `` /* 141-byte string literal not displayed */ ActPAvg float32 `inactive:"+" desc:"running-average plus-phase activity integrated at Dt.LongAvgTau"` AvgMaxGeM float32 `` /* 203-byte string literal not displayed */ AvgMaxGiM float32 `` /* 203-byte string literal not displayed */ GiMult float32 `inactive:"+" desc:"multiplier on inhibition -- adapted to maintain target activity level"` }
ActAvgVals are running-average activation levels used for Ge scaling and adaptive inhibition
type ActInitParams ¶
type ActInitParams struct { Vm float32 `def:"0.3" desc:"initial membrane potential -- see Erev.L for the resting potential (typically .3)"` Act float32 `def:"0" desc:"initial activation value -- typically 0"` Ge float32 `` /* 268-byte string literal not displayed */ Gi float32 `` /* 235-byte string literal not displayed */ }
ActInitParams are initial values for key network state variables. Initialized in InitActs called by InitWts, and provides target values for DecayState.
func (*ActInitParams) Defaults ¶
func (ai *ActInitParams) Defaults()
func (*ActInitParams) Update ¶
func (ai *ActInitParams) Update()
type ActNoiseParams ¶
type ActNoiseParams struct { erand.RndParams Type ActNoiseTypes `desc:"where and how to add processing noise"` Fixed bool `` /* 227-byte string literal not displayed */ }
ActNoiseParams contains parameters for activation-level noise
func (*ActNoiseParams) Defaults ¶
func (an *ActNoiseParams) Defaults()
func (*ActNoiseParams) Update ¶
func (an *ActNoiseParams) Update()
type ActNoiseTypes ¶ added in v1.2.28
type ActNoiseTypes int
ActNoiseTypes are different types / locations of random noise for activations
const (
	// NoNoise means no noise added
	NoNoise ActNoiseTypes = iota

	// VmNoise means noise is added to the membrane potential.
	VmNoise

	// GeNoise means noise is added to the excitatory conductance (Ge).
	GeNoise

	// ActNoise means noise is added to the final rate code activation
	ActNoise

	// GeMultNoise means that noise is multiplicative on the Ge excitatory conductance values
	GeMultNoise

	ActNoiseTypesN
)
func (*ActNoiseTypes) FromString ¶ added in v1.2.28
func (i *ActNoiseTypes) FromString(s string) error
func (ActNoiseTypes) MarshalJSON ¶ added in v1.2.28
func (ev ActNoiseTypes) MarshalJSON() ([]byte, error)
func (ActNoiseTypes) String ¶ added in v1.2.28
func (i ActNoiseTypes) String() string
func (*ActNoiseTypes) UnmarshalJSON ¶ added in v1.2.28
func (ev *ActNoiseTypes) UnmarshalJSON(b []byte) error
type ActParams ¶
type ActParams struct { Spike SpikeParams `view:"inline" desc:"Spiking function parameters"` Init ActInitParams `` /* 155-byte string literal not displayed */ Decay DecayParams `` /* 233-byte string literal not displayed */ Dt DtParams `view:"inline" desc:"time and rate constants for temporal derivatives / updating of activation state"` Gbar chans.Chans `view:"inline" desc:"[Defaults: 1, .2, 1, 1] maximal conductances levels for channels"` Erev chans.Chans `view:"inline" desc:"[Defaults: 1, .3, .25, .1] reversal potentials for each channel"` GTarg GTargParams `` /* 132-byte string literal not displayed */ Clamp ClampParams `view:"inline" desc:"how external inputs drive neural activations"` Noise ActNoiseParams `view:"inline" desc:"how, where, when, and how much noise to add"` VmRange minmax.F32 `` /* 165-byte string literal not displayed */ KNa knadapt.Params `` /* 252-byte string literal not displayed */ NMDA glong.NMDAParams `view:"inline" desc:"NMDA channel parameters plus more general params"` GABAB glong.GABABParams `view:"inline" desc:"GABA-B / GIRK channel parameters"` Attn AttnParams `view:"inline" desc:"Attentional modulation parameters: how Attn modulates Ge"` }
axon.ActParams contains all the activation computation params and functions for basic Axon, at the neuron level . This is included in axon.Layer to drive the computation.
func (*ActParams) BurstGe ¶ added in v1.2.55
BurstGe returns extra bursting excitatory conductance based on params
func (*ActParams) DecayState ¶
DecayState decays the activation state toward initial values in proportion to given decay parameter. Special case values such as Glong and KNa are also decayed with their separately parameterized values. Called with ac.Decay.Act by Layer during NewState
func (*ActParams) GeFmRaw ¶
GeFmRaw integrates Ge excitatory conductance from GeRaw value (can add other terms to geRaw prior to calling this)
func (*ActParams) GiFmRaw ¶
GiFmRaw integrates GiSyn inhibitory synaptic conductance from the GiRaw value (can add other terms to giRaw prior to calling this)
func (*ActParams) HasRateClamp ¶ added in v1.2.28
HasRateClamp returns true if this neuron has external input that should be hard clamped
func (*ActParams) InitActs ¶
InitActs initializes activation state in neuron -- called during InitWts but otherwise not automatically called (DecayState is used instead)
func (*ActParams) InitLongActs ¶ added in v1.2.66
InitLongActs initializes longer time-scale activation states in the neuron (ActPrv, ActSt*, ActM, ActP, ActDif). Called from InitActs, which is called from InitWts, but otherwise not automatically called (DecayState is used instead)
func (*ActParams) RateClamp ¶ added in v1.2.28
RateClamp drives Poisson rate spiking according to external input. Also adds any Noise *if* noise is set to ActNoise.
type AttnParams ¶ added in v1.2.85
type AttnParams struct { On bool `desc:"is attentional modulation active?"` Min float32 `desc:"minimum act multiplier if attention is 0"` }
AttnParams determine how the Attn modulates Ge
func (*AttnParams) Defaults ¶ added in v1.2.85
func (at *AttnParams) Defaults()
func (*AttnParams) ModVal ¶ added in v1.2.85
func (at *AttnParams) ModVal(val float32, attn float32) float32
ModVal returns the attn-modulated value -- attn must be between 0 and 1
func (*AttnParams) Update ¶ added in v1.2.85
func (at *AttnParams) Update()
type AxonLayer ¶
type AxonLayer interface {
	emer.Layer

	// AsAxon returns this layer as a axon.Layer -- so that the AxonLayer
	// interface does not need to include accessors to all the basic stuff
	AsAxon() *Layer

	// InitWts initializes the weight values in the network, i.e., resetting learning
	// Also calls InitActs
	InitWts()

	// InitActAvg initializes the running-average activation values that drive learning.
	InitActAvg()

	// InitActs fully initializes activation state -- only called automatically during InitWts
	InitActs()

	// InitWtsSym initializes the weight symmetry -- higher layers copy weights from lower layers
	InitWtSym()

	// InitGScale computes the initial scaling factor for synaptic input conductances G,
	// stored in GScale.Scale, based on sending layer initial activation.
	InitGScale()

	// InitExt initializes external input state -- called prior to apply ext
	InitExt()

	// ApplyExt applies external input in the form of an etensor.Tensor
	// If the layer is a Target or Compare layer type, then it goes in Targ
	// otherwise it goes in Ext.
	ApplyExt(ext etensor.Tensor)

	// ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats
	// If the layer is a Target or Compare layer type, then it goes in Targ
	// otherwise it goes in Ext
	ApplyExt1D(ext []float64)

	// UpdateExtFlags updates the neuron flags for external input based on current
	// layer Type field -- call this if the Type has changed since the last
	// ApplyExt* method call.
	UpdateExtFlags()

	// IsTarget returns true if this layer is a Target layer.
	// By default, returns true for layers of Type == emer.Target
	// Other Target layers include the TRCLayer in deep predictive learning.
	// It is also used in SynScale to not apply it to target layers.
	// In both cases, Target layers are purely error-driven.
	IsTarget() bool

	// IsInput returns true if this layer is an Input layer.
	// By default, returns true for layers of Type == emer.Input
	// Used to prevent adapting of inhibition or TrgAvg values.
	IsInput() bool

	// NewState handles all initialization at start of new input pattern,
	// including computing Ge scaling from running average activation etc.
	// should already have presented the external input to the network at this point.
	NewState()

	// GenNoise generates random noise for all neurons
	GenNoise()

	// DecayState decays activation state by given proportion (default is on ly.Act.Init.Decay)
	DecayState(decay float32)

	// RateClamp hard-clamps the activations in the layer -- called during NewState
	// for hard-clamped Input layers
	RateClamp()

	// SendSpike sends spike to receivers
	SendSpike(ltime *Time)

	// GFmInc integrates new synaptic conductances from increments sent during last SendGDelta
	GFmInc(ltime *Time)

	// AvgMaxGe computes the average and max Ge stats, used in inhibition
	AvgMaxGe(ltime *Time)

	// InhibiFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools
	InhibFmGeAct(ltime *Time)

	// ActFmG computes rate-code activation from Ge, Gi, Gl conductances
	// and updates learning running-average activations from that Act
	ActFmG(ltime *Time)

	// AvgMaxAct computes the running-average activation used in driving inhibition
	AvgMaxAct(ltime *Time)

	// CyclePost is called after the standard Cycle update, as a separate
	// network layer loop.
	// This is reserved for any kind of special ad-hoc types that
	// need to do something special after Act is finally computed.
	// For example, sending a neuromodulatory signal such as dopamine.
	CyclePost(ltime *Time)

	// MinusPhase does updating after end of minus phase
	MinusPhase(ltime *Time)

	// PlusPhase does updating after end of plus phase
	PlusPhase(ltime *Time)

	// ActSt1 saves current activations into ActSt1
	ActSt1(ltime *Time)

	// ActSt2 saves current activations into ActSt2
	ActSt2(ltime *Time)

	// CosDiffFmActs computes the cosine difference in activation state
	// between minus and plus phases.
	CosDiffFmActs()

	// DWt computes the weight change (learning) -- calls DWt method on sending projections
	DWt()

	// WtFmDWt updates the weights from delta-weight changes.
	// Computed from receiver perspective, does SubMean.
	WtFmDWt()

	// SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling,
	// GScale conductance scaling, SWt updating, and adapting inhibition
	SlowAdapt()

	// SynFail updates synaptic weight failure only -- normally done as part of DWt
	// and WtFmDWt, but this call can be used during testing to update failing synapses.
	SynFail()
}
AxonLayer defines the essential algorithmic API for Axon, at the layer level. These are the methods that the axon.Network calls on its layers at each step of processing. Other Layer types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.
All of the structural API is in emer.Layer, which this interface also inherits for convenience.
type AxonNetwork ¶
type AxonNetwork interface {
	emer.Network

	// AsAxon returns this network as a axon.Network -- so that the
	// AxonNetwork interface does not need to include accessors
	// to all the basic stuff
	AsAxon() *Network

	// NewStateImpl handles all initialization at start of new input pattern, including computing
	// input scaling from running average activation etc.
	NewStateImpl()

	// CycleImpl runs one cycle of activation updating:
	// * Sends Ge increments from sending to receiving layers
	// * Average and Max Ge stats
	// * Inhibition based on Ge stats and Act Stats (computed at end of Cycle)
	// * Activation from Ge, Gi, and Gl
	// * Average and Max Act stats
	// This basic version doesn't use the time info, but more specialized types do, and we
	// want to keep a consistent API for end-user code.
	CycleImpl(ltime *Time)

	// CyclePostImpl is called after the standard Cycle update, and calls CyclePost
	// on Layers -- this is reserved for any kind of special ad-hoc types that
	// need to do something special after Act is finally computed.
	// For example, sending a neuromodulatory signal such as dopamine.
	CyclePostImpl(ltime *Time)

	// MinusPhaseImpl does updating after minus phase
	MinusPhaseImpl(ltime *Time)

	// PlusPhaseImpl does updating after plus phase
	PlusPhaseImpl(ltime *Time)

	// DWtImpl computes the weight change (learning) based on current
	// running-average activation values
	DWtImpl()

	// WtFmDWtImpl updates the weights from delta-weight changes.
	// Also calls SynScale every Interval times
	WtFmDWtImpl()

	// SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling,
	// GScale conductance scaling, and adapting inhibition
	SlowAdapt()
}
AxonNetwork defines the essential algorithmic API for Axon, at the network level. These are the methods that the user calls in their Sim code:
- NewState
- Cycle
- MinusPhase, PlusPhase
- DWt
- WtFmDWt

Because we don't want to force the user to use the interface cast when calling these methods, we provide Impl versions here, which are the implementations that the user-facing methods call.
Typically most changes in algorithm can be accomplished directly in the Layer or Prjn level, but sometimes (e.g., in deep) additional full-network passes are required.
All of the structural API is in emer.Network, which this interface also inherits for convenience.
type AxonPrjn ¶
type AxonPrjn interface {
	emer.Prjn

	// AsAxon returns this prjn as a axon.Prjn -- so that the AxonPrjn
	// interface does not need to include accessors to all the basic stuff.
	AsAxon() *Prjn

	// InitWts initializes weight values according to Learn.WtInit params
	InitWts()

	// InitWtSym initializes weight symmetry -- is given the reciprocal projection where
	// the Send and Recv layers are reversed.
	InitWtSym(rpj AxonPrjn)

	// InitGbuf initializes the per-projection synaptic conductance buffers.
	// This is not typically needed (called during InitWts, InitActs)
	// but can be called when needed.
	InitGbuf()

	// SendSpike sends a spike from sending neuron index si,
	// to add to buffer on receivers.
	SendSpike(si int)

	// RecvGInc increments the receiver's synaptic conductances from those of all the projections.
	RecvGInc(ltime *Time)

	// DWt computes the weight change (learning) -- on sending projections
	DWt()

	// WtFmDWt updates the synaptic weight values from delta-weight changes -- on sending projections
	WtFmDWt()

	// SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling,
	// GScale conductance scaling, and adapting inhibition
	SlowAdapt()

	// SynFail updates synaptic weight failure only -- normally done as part of DWt
	// and WtFmDWt, but this call can be used during testing to update failing synapses.
	SynFail()
}
AxonPrjn defines the essential algorithmic API for Axon, at the projection level. These are the methods that the axon.Layer calls on its prjns at each step of processing. Other Prjn types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.
All of the structural API is in emer.Prjn, which this interface also inherits for convenience.
type ClampParams ¶
type ClampParams struct { ErrThr float32 `def:"0.5" desc:"threshold on neuron Act activity to count as active for computing error relative to target in PctErr method"` Type ClampTypes `` /* 170-byte string literal not displayed */ Rate float32 `` /* 160-byte string literal not displayed */ Ge float32 `` /* 155-byte string literal not displayed */ Burst bool `viewif:"Type=GeClamp" desc:"activate bursting at start of clamping window"` BurstThr float32 `` /* 237-byte string literal not displayed */ BurstCyc int `` /* 145-byte string literal not displayed */ BurstGe float32 `` /* 244-byte string literal not displayed */ }
ClampParams are for specifying how external inputs are clamped onto network activation values
func (*ClampParams) Defaults ¶
func (cp *ClampParams) Defaults()
func (*ClampParams) Update ¶
func (cp *ClampParams) Update()
type ClampTypes ¶ added in v1.2.28
type ClampTypes int
ClampTypes are different types of clamping
const (
	// GeClamp drives a constant excitatory input given by Ge value
	// ignoring any other source of Ge input -- like a current clamp.
	// This works best in general by allowing more natural temporal dynamics.
	GeClamp ClampTypes = iota

	// RateClamp drives a poisson firing rate in proportion to clamped value.
	RateClamp

	// AddGeClamp adds a constant extra Ge value on top of existing Ge inputs
	AddGeClamp

	ClampTypesN
)
func (*ClampTypes) FromString ¶ added in v1.2.28
func (i *ClampTypes) FromString(s string) error
func (ClampTypes) MarshalJSON ¶ added in v1.2.28
func (ev ClampTypes) MarshalJSON() ([]byte, error)
func (ClampTypes) String ¶ added in v1.2.28
func (i ClampTypes) String() string
func (*ClampTypes) UnmarshalJSON ¶ added in v1.2.28
func (ev *ClampTypes) UnmarshalJSON(b []byte) error
type CosDiffStats ¶
type CosDiffStats struct { Cos float32 `` /* 179-byte string literal not displayed */ Avg float32 `` /* 159-byte string literal not displayed */ Var float32 `` /* 160-byte string literal not displayed */ }
CosDiffStats holds cosine-difference statistics at the layer level
func (*CosDiffStats) Init ¶
func (cd *CosDiffStats) Init()
type DecayParams ¶ added in v1.2.59
type DecayParams struct { Act float32 `` /* 391-byte string literal not displayed */ Glong float32 `` /* 332-byte string literal not displayed */ KNa float32 `` /* 149-byte string literal not displayed */ }
DecayParams control the decay of activation state in the DecayState function called in NewState when a new state is to be processed.
func (*DecayParams) Defaults ¶ added in v1.2.59
func (ai *DecayParams) Defaults()
func (*DecayParams) Update ¶ added in v1.2.59
func (ai *DecayParams) Update()
type DtParams ¶
type DtParams struct { Integ float32 `` /* 649-byte string literal not displayed */ VmTau float32 `` /* 328-byte string literal not displayed */ VmDendTau float32 `def:"5" min:"1" desc:"dendritic membrane potential integration time constant"` GeTau float32 `def:"5" min:"1" desc:"time constant for decay of excitatory AMPA receptor conductance."` GiTau float32 `def:"7" min:"1" desc:"time constant for decay of inhibitory GABAa receptor conductance."` IntTau float32 `` /* 393-byte string literal not displayed */ LongAvgTau float32 `` /* 349-byte string literal not displayed */ VmDt float32 `view:"-" json:"-" xml:"-" desc:"nominal rate = Integ / tau"` VmDendDt float32 `view:"-" json:"-" xml:"-" desc:"nominal rate = Integ / tau"` GeDt float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"` GiDt float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"` IntDt float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"` LongAvgDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"` }
DtParams are time and rate constants for temporal derivatives in Axon (Vm, G)
func (*DtParams) AvgVarUpdt ¶ added in v1.2.45
AvgVarUpdt updates the average and variance from current value, using LongAvgDt
func (*DtParams) GeFmRaw ¶
GeFmRaw updates ge from raw input, decaying with time constant, back to min baseline value
type GScaleVals ¶ added in v1.2.37
type GScaleVals struct { Scale float32 `` /* 240-byte string literal not displayed */ Orig float32 `inactive:"+" desc:"original scaling factor computed based on initial layer activity, without any subsequent adaptation"` Rel float32 `` /* 159-byte string literal not displayed */ AvgMaxRel float32 `` /* 156-byte string literal not displayed */ Err float32 `inactive:"+" desc:"error that drove last adjustment in scale"` Avg float32 `inactive:"+" desc:"average G value on this trial"` Max float32 `inactive:"+" desc:"maximum G value on this trial"` AvgAvg float32 `inactive:"+" desc:"running average of the Avg, integrated at ly.Act.Dt.LongAvgTau"` AvgMax float32 `` /* 134-byte string literal not displayed */ }
GScaleVals holds the conductance scaling and associated values needed for adapting scale
func (*GScaleVals) Init ¶ added in v1.2.37
func (gs *GScaleVals) Init()
Init completes the initialization of values based on initially computed ones
type GTargParams ¶ added in v1.2.37
type GTargParams struct { GeMax float32 `def:"1.2" min:"0" desc:"target maximum excitatory conductance in the minus phase: GeM"` GiMax float32 `` /* 171-byte string literal not displayed */ }
GTargParams are target conductance levels for excitation and inhibition, driving adaptation of GScale.Scale conductance scaling
func (*GTargParams) Defaults ¶ added in v1.2.37
func (gt *GTargParams) Defaults()
func (*GTargParams) Update ¶ added in v1.2.37
func (gt *GTargParams) Update()
type HebbPrjn ¶ added in v1.2.42
type HebbPrjn struct {
	Prjn            // access as .Prjn
	IncGain float32 `desc:"gain factor on increases relative to decreases -- lower = lower overall weights"`
}
HebbPrjn is a simple hebbian learning projection, using the CPCA Hebbian rule
func (*HebbPrjn) DWt ¶ added in v1.2.42
func (pj *HebbPrjn) DWt()
DWt computes the hebbian weight change
func (*HebbPrjn) UpdateParams ¶ added in v1.2.42
func (pj *HebbPrjn) UpdateParams()
type InhibMiscParams ¶ added in v1.2.89
type InhibMiscParams struct { AvgTau float32 `` /* 134-byte string literal not displayed */ GiSynThr float32 `` /* 168-byte string literal not displayed */ AvgDt float32 `inactive:"+" view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"` }
InhibMiscParams defines parameters for the average activation value in the pool that drives feedback inhibition in the FFFB inhibition function.
func (*InhibMiscParams) AvgAct ¶ added in v1.2.89
func (fb *InhibMiscParams) AvgAct(avg *float32, act float32)
AvgAct updates the average activation from new average act
func (*InhibMiscParams) Defaults ¶ added in v1.2.89
func (fb *InhibMiscParams) Defaults()
func (*InhibMiscParams) GiSyn ¶ added in v1.2.89
func (fb *InhibMiscParams) GiSyn(gisyn float32) float32
GiSyn computes the effective GiSyn value relative to the threshold
func (*InhibMiscParams) Update ¶ added in v1.2.89
func (fb *InhibMiscParams) Update()
type InhibParams ¶
type InhibParams struct { Inhib InhibMiscParams `view:"inline" desc:"misc inhibition computation parameters, including feedback activation "` Layer fffb.Params `` /* 128-byte string literal not displayed */ Pool fffb.Params `view:"inline" desc:"inhibition across sub-pools of units, for layers with 4D shape"` Topo TopoInhibParams `` /* 136-byte string literal not displayed */ Self SelfInhibParams `` /* 161-byte string literal not displayed */ ActAvg ActAvgParams `` /* 173-byte string literal not displayed */ }
axon.InhibParams contains all the inhibition computation params and functions for basic Axon. This is included in axon.Layer to support computation. It also includes other misc layer-level params, such as the running-average activation in the layer, which is used for Ge rescaling and potentially for adapting inhibition over time.
func (*InhibParams) Defaults ¶
func (ip *InhibParams) Defaults()
func (*InhibParams) Update ¶
func (ip *InhibParams) Update()
type LayFunChan ¶
type LayFunChan chan func(ly AxonLayer)
LayFunChan is a channel that runs AxonLayer functions
type Layer ¶
type Layer struct { LayerStru Act ActParams `view:"add-fields" desc:"Activation parameters and methods for computing activations"` Inhib InhibParams `view:"add-fields" desc:"Inhibition parameters and methods for computing layer-level inhibition"` Learn LearnNeurParams `view:"add-fields" desc:"Learning parameters and methods that operate at the neuron level"` Neurons []Neuron `` /* 133-byte string literal not displayed */ Pools []Pool `` /* 234-byte string literal not displayed */ ActAvg ActAvgVals `view:"inline" desc:"running-average activation levels used for Ge scaling and adaptive inhibition"` CosDiff CosDiffStats `desc:"cosine difference between ActM, ActP stats"` }
axon.Layer implements the basic Axon spiking activation function, and manages learning in the projections.
func (*Layer) ActFmG ¶
ActFmG computes rate-code activation from Ge, Gi, Gl conductances and updates learning running-average activations from that Act
func (*Layer) AdaptGScale ¶ added in v1.2.38
func (ly *Layer) AdaptGScale()
AdaptGScale adapts the conductance scale based on targets
func (*Layer) AdaptInhib ¶ added in v1.2.37
func (ly *Layer) AdaptInhib()
AdaptInhib adapts inhibition
func (*Layer) ApplyExt ¶
ApplyExt applies external input in the form of an etensor.Float32. If dimensionality of tensor matches that of layer, and is 2D or 4D, then each dimension is iterated separately, so any mismatch preserves dimensional structure. Otherwise, the flat 1D view of the tensor is used. If the layer is a Target or Compare layer type, then it goes in Targ otherwise it goes in Ext
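A hedged sketch of applying a 2D input pattern; the etensor.NewFloat32 arguments and the 5x5 shape are assumptions of this sketch:

    in := nt.LayerByName("Input").(axon.AxonLayer).AsAxon()
    pat := etensor.NewFloat32([]int{5, 5}, nil, nil) // shape matches the layer's 2D shape
    pat.SetFloat([]int{2, 3}, 1)                     // activate one unit
    in.ApplyExt(pat)                                 // goes to Ext (or Targ for Target / Compare layers)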
func (*Layer) ApplyExt1D ¶
ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats If the layer is a Target or Compare layer type, then it goes in Targ otherwise it goes in Ext
func (*Layer) ApplyExt1D32 ¶
ApplyExt1D32 applies external input in the form of a flat 1-dimensional slice of float32s. If the layer is a Target or Compare layer type, then it goes in Targ otherwise it goes in Ext
func (*Layer) ApplyExt1DTsr ¶
ApplyExt1DTsr applies external input using 1D flat interface into tensor. If the layer is a Target or Compare layer type, then it goes in Targ otherwise it goes in Ext
func (*Layer) ApplyExt2D ¶
ApplyExt2D applies 2D tensor external input
func (*Layer) ApplyExt2Dto4D ¶
ApplyExt2Dto4D applies 2D tensor external input to a 4D layer
func (*Layer) ApplyExt4D ¶
ApplyExt4D applies 4D tensor external input
func (*Layer) ApplyExtFlags ¶
ApplyExtFlags gets the clear mask and set mask for updating neuron flags based on layer type, and whether input should be applied to Targ (else Ext)
func (*Layer) AsAxon ¶
AsAxon returns this layer as an axon.Layer -- all derived layers must redefine this to return the base Layer type, so that the AxonLayer interface does not need to include accessors to all the basic stuff
func (*Layer) BuildPools ¶
BuildPools builds the inhibitory pools structures -- nu = number of units in layer
func (*Layer) BuildPrjns ¶
BuildPrjns builds the projections, recv-side
func (*Layer) BuildSubPools ¶
func (ly *Layer) BuildSubPools()
BuildSubPools initializes neuron start / end indexes for sub-pools
func (*Layer) ClearTargExt ¶ added in v1.2.65
func (ly *Layer) ClearTargExt()
ClearTargExt clears external inputs Ext that were set from target values Targ. This can be called to simulate alpha cycles within theta cycles, for example.
func (*Layer) CosDiffFmActs ¶
func (ly *Layer) CosDiffFmActs()
CosDiffFmActs computes the cosine difference in activation state between minus and plus phases.
func (*Layer) CostEst ¶
CostEst returns the estimated computational cost associated with this layer, separated by neuron-level and synapse-level, in arbitrary units where cost per synapse is 1. Neuron-level computation is more expensive but there are typically many fewer neurons, so in larger networks, synaptic costs tend to dominate. Neuron cost is estimated from TimerReport output for large networks.
func (*Layer) CyclePost ¶
CyclePost is called after the standard Cycle update, as a separate network layer loop. This is reserved for any kind of special ad-hoc types that need to do something special after Act is finally computed. For example, sending a neuromodulatory signal such as dopamine.
func (*Layer) DTrgAvgFmErr ¶ added in v1.2.32
func (ly *Layer) DTrgAvgFmErr()
DTrgAvgFmErr computes change in TrgAvg based on unit-wise error signal
func (*Layer) DTrgAvgSubMean ¶ added in v1.2.32
func (ly *Layer) DTrgAvgSubMean()
DTrgAvgSubMean subtracts the mean from DTrgAvg values. Called by TrgAvgFmD
func (*Layer) DWt ¶
func (ly *Layer) DWt()
DWt computes the weight change (learning) -- calls DWt method on sending projections
func (*Layer) DecayState ¶
DecayState decays activation state by given proportion (default is on ly.Act.Init.Decay). This does *not* call InitGInc -- must call that separately at start of AlphaCyc
func (*Layer) DecayStatePool ¶
DecayStatePool decays activation state by given proportion in given sub-pool index (0 based)
func (*Layer) GFmInc ¶
GFmInc integrates new synaptic conductances from increments sent during last Spike
func (*Layer) GFmIncNeur ¶
GFmIncNeur is the neuron-level code for GFmInc that integrates overall Ge, Gi values from their G*Raw accumulators.
func (*Layer) GenNoise ¶
func (ly *Layer) GenNoise()
GenNoise generates random noise for all neurons
func (*Layer) HasPoolInhib ¶ added in v1.2.79
HasPoolInhib returns true if the layer is using pool-level inhibition (implies 4D too). This is the proper check for using pool-level target average activations, for example.
func (*Layer) InhibFmGeAct ¶
InhibFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools
func (*Layer) InhibFmPool ¶
InhibFmPool computes inhibition Gi from Pool-level aggregated inhibition, including self and syn
func (*Layer) InitActAvg ¶
func (ly *Layer) InitActAvg()
InitActAvg initializes the running-average activation values that drive learning, and the longer time-averaging values.
func (*Layer) InitActs ¶
func (ly *Layer) InitActs()
InitActs fully initializes activation state -- only called automatically during InitWts
func (*Layer) InitExt ¶
func (ly *Layer) InitExt()
InitExt initializes external input state -- called prior to applying external inputs
func (*Layer) InitGScale ¶ added in v1.2.37
func (ly *Layer) InitGScale()
InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.
func (*Layer) InitWtSym ¶
func (ly *Layer) InitWtSym()
InitWtSym initializes the weight symmetry -- higher layers copy weights from lower layers
func (*Layer) InitWts ¶
func (ly *Layer) InitWts()
InitWts initializes the weight values in the network, i.e., resetting learning. Also calls InitActs
func (*Layer) IsInput ¶ added in v1.2.32
IsInput returns true if this layer is an Input layer. By default, returns true for layers of Type == emer.Input. Used to prevent adapting of inhibition or TrgAvg values.
func (*Layer) IsLearnTrgAvg ¶ added in v1.2.32
func (*Layer) IsTarget ¶
IsTarget returns true if this layer is a Target layer. By default, returns true for layers of Type == emer.Target. Other Target layers include the TRCLayer in deep predictive learning. It is used in SynScale to exclude target layers. In both cases, Target layers are purely error-driven.
func (*Layer) LesionNeurons ¶
LesionNeurons lesions (sets the Off flag) for the given proportion (0-1) of neurons in the layer, and returns the number of neurons lesioned. Emits an error if prop > 1, as an indication that a percent might have been passed instead of a proportion.
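For example, a lesion experiment might knock out a fraction of a hidden layer's units for testing and then restore them. A minimal sketch, assuming the method signature is LesionNeurons(prop float32) int and that hid is an *axon.Layer already retrieved from the network (e.g., via LayerByName and AsAxon):

    // hypothetical lesion experiment sketch
    nles := hid.LesionNeurons(0.25) // lesion 25% of units (sets their Off flag)
    fmt.Printf("lesioned %d neurons\n", nles)
    // ... run testing trials here ...
    hid.UnLesionNeurons() // restore all units for the next condition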
func (*Layer) LrateMod ¶ added in v1.2.60
LrateMod sets the Lrate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LrateSched). Updates the effective learning rate factor accordingly.
func (*Layer) LrateSched ¶ added in v1.2.60
LrateSched sets the schedule-based learning rate multiplier. See also LrateMod. Updates the effective learning rate factor accordingly.
func (*Layer) MinusPhase ¶ added in v1.2.63
MinusPhase does updating at end of the minus phase
func (*Layer) NewState ¶ added in v1.2.63
func (ly *Layer) NewState()
NewState handles all initialization at the start of a new input pattern, including computing input scaling from running average activation, etc. The external input should already have been presented to the network at this point.
func (*Layer) PctUnitErr ¶
PctUnitErr returns the proportion of units where the thresholded value of Targ (Target or Compare types) or ActP does not match that of ActM. If Act > ly.Act.Clamp.ErrThr, the effective activity is 1, else 0, which makes the measure robust to noisy activations.
func (*Layer) PoolInhibFmGeAct ¶
PoolInhibFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools
func (*Layer) RateClamp ¶ added in v1.2.28
func (ly *Layer) RateClamp()
RateClamp rate-clamps the activations in the layer.
func (*Layer) ReadWtsJSON ¶
ReadWtsJSON reads the weights from this layer from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one layer only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.
func (*Layer) RecvGInc ¶
RecvGInc calls RecvGInc on receiving projections to collect Neuron-level G*Inc values. This is called by GFmInc overall method, but separated out for cases that need to do something different.
func (*Layer) RecvPrjnVals ¶
func (ly *Layer) RecvPrjnVals(vals *[]float32, varNm string, sendLay emer.Layer, sendIdx1D int, prjnType string) error
RecvPrjnVals fills in values of given synapse variable name, for the projection from the given sending layer and neuron 1D index, for all receiving neurons in this layer, into the given float32 slice (only resized if not big enough). prjnType is the string representation of the prjn type -- used if non-empty, which is useful when there are multiple projections between two layers. If the receiving neuron is not connected to the given sending layer or neuron, then the value is set to mat32.NaN(). Returns an error on invalid var name or lack of a recv prjn (vals are always set to NaN on a prjn error).
func (*Layer) SendPrjnVals ¶
func (ly *Layer) SendPrjnVals(vals *[]float32, varNm string, recvLay emer.Layer, recvIdx1D int, prjnType string) error
SendPrjnVals fills in values of given synapse variable name, for the projection into the given receiving layer and neuron 1D index, for all sending neurons in this layer, into the given float32 slice (only resized if not big enough). prjnType is the string representation of the prjn type -- used if non-empty, which is useful when there are multiple projections between two layers. If the sending neuron is not connected to the given receiving layer or neuron, then the value is set to mat32.NaN(). Returns an error on invalid var name or lack of a recv prjn (vals are always set to NaN on a prjn error).
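For example, to inspect the weights arriving in this layer from one particular sending unit, using the RecvPrjnVals signature above (a sketch; "Wt" is assumed to be the standard synapse weight variable name):

    // read "Wt" values of the projection from sendLay unit 3 into all units of ly
    var wts []float32
    if err := ly.RecvPrjnVals(&wts, "Wt", sendLay, 3, ""); err != nil {
        log.Println(err) // invalid var name or no recv prjn from sendLay
    }
    // wts has one entry per receiving neuron; mat32.NaN() marks unconnected units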
func (*Layer) SlowAdapt ¶ added in v1.2.37
func (ly *Layer) SlowAdapt()
SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling, GScale conductance scaling, and adapting inhibition
func (*Layer) SynFail ¶ added in v1.2.92
func (ly *Layer) SynFail()
SynFail updates synaptic weight failure only -- normally done as part of DWt and WtFmDWt, but this call can be used during testing to update failing synapses.
func (*Layer) SynScale ¶ added in v1.2.23
func (ly *Layer) SynScale()
SynScale performs synaptic scaling based on running average activation vs. targets
func (*Layer) TargToExt ¶ added in v1.2.65
func (ly *Layer) TargToExt()
TargToExt sets external input Ext from target values Targ. This is done at the end of MinusPhase to allow targets to drive activity in the plus phase. This can be called separately to simulate alpha cycles within theta cycles, for example.
func (*Layer) TrgAvgFmD ¶ added in v1.2.32
func (ly *Layer) TrgAvgFmD()
TrgAvgFmD updates TrgAvg from DTrgAvg
func (*Layer) UnLesionNeurons ¶
func (ly *Layer) UnLesionNeurons()
UnLesionNeurons unlesions (clears the Off flag) for all neurons in the layer
func (*Layer) UnitVal ¶
UnitVal returns value of given variable name on given unit, using shape-based dimensional index
func (*Layer) UnitVal1D ¶
UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. Returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.
func (*Layer) UnitVals ¶
UnitVals fills in the values of the given variable name, for each unit in the layer, into the given float32 slice (only resized if not big enough). Returns an error on an invalid var name.
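A common use is pulling a layer's activations into a reusable slice for logging. A minimal sketch, assuming the standard emer-style signature UnitVals(vals *[]float32, varNm string) error:

    var acts []float32 // reused across calls; only resized if too small
    if err := hid.UnitVals(&acts, "ActM"); err != nil {
        log.Println(err) // invalid variable name
    }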
func (*Layer) UnitValsTensor ¶
UnitValsTensor returns values of given variable name on unit for each unit in the layer, as a float32 tensor in same shape as layer units.
func (*Layer) UnitVarIdx ¶
UnitVarIdx returns the index of given variable within the Neuron, according to *this layer's* UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.
func (*Layer) UnitVarNames ¶
UnitVarNames returns a list of variable names available on the units in this layer
func (*Layer) UnitVarNum ¶
UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.
func (*Layer) UnitVarProps ¶
UnitVarProps returns properties for variables
func (*Layer) UpdateExtFlags ¶
func (ly *Layer) UpdateExtFlags()
UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.
func (*Layer) UpdateParams ¶
func (ly *Layer) UpdateParams()
UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving projections of this layer
func (*Layer) VarRange ¶
VarRange returns the min / max values for the given variable. TODO: support r. / s. projection values
func (*Layer) WriteWtsJSON ¶
WriteWtsJSON writes the weights from this layer from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
type LayerStru ¶
type LayerStru struct { AxonLay AxonLayer `` /* 297-byte string literal not displayed */ Network emer.Network `` /* 141-byte string literal not displayed */ Nm string `` /* 151-byte string literal not displayed */ Cls string `desc:"Class is for applying parameter styles, can be space separated multple tags"` Off bool `desc:"inactivate this layer -- allows for easy experimentation"` Shp etensor.Shape `` /* 219-byte string literal not displayed */ Typ emer.LayerType `` /* 161-byte string literal not displayed */ Thr int `` /* 216-byte string literal not displayed */ Rel relpos.Rel `view:"inline" desc:"Spatial relationship to other layer, determines positioning"` Ps mat32.Vec3 `` /* 154-byte string literal not displayed */ Idx int `` /* 256-byte string literal not displayed */ RcvPrjns emer.Prjns `desc:"list of receiving projections into this layer from other layers"` SndPrjns emer.Prjns `desc:"list of sending projections from this layer to other layers"` }
axon.LayerStru manages the structural elements of the layer, which are common to any Layer type
func (*LayerStru) ApplyParams ¶
ApplyParams applies given parameter style Sheet to this layer and its recv projections. Calls UpdateParams on anything set to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and an error if there were any errors.
func (*LayerStru) InitName ¶
InitName MUST be called to initialize the layer's pointer to itself as an emer.Layer which enables the proper interface methods to be called. Also sets the name, and the parent network that this layer belongs to (which layers may want to retain).
func (*LayerStru) NPools ¶
NPools returns the number of unit sub-pools according to the shape parameters. Currently supported for a 4D shape, where the unit pools are the first 2 Y,X dims and then the units within the pools are the 2nd 2 Y,X dims
func (*LayerStru) NRecvPrjns ¶
func (*LayerStru) NSendPrjns ¶
func (*LayerStru) NonDefaultParams ¶
NonDefaultParams returns a listing of all parameters in the Layer that are not at their default values -- useful for setting param styles etc.
func (*LayerStru) RecipToSendPrjn ¶
RecipToSendPrjn finds the reciprocal projection relative to the given sending projection found within the SendPrjns of this layer. This is then a recv prjn within this layer:
S=A -> R=B recip: R=A <- S=B -- ly = A -- we are the sender of srj and recv of rpj.
Returns false if not found.
type LearnNeurParams ¶
type LearnNeurParams struct { ActAvg LrnActAvgParams `view:"inline" desc:"parameters for computing running average activations that drive learning"` TrgAvgAct TrgAvgActParams `` /* 126-byte string literal not displayed */ RLrate RLrateParams `` /* 176-byte string literal not displayed */ }
axon.LearnNeurParams manages learning-related parameters at the neuron-level. This is mainly the running average activations that drive learning
func (*LearnNeurParams) AvgsFmAct ¶
func (ln *LearnNeurParams) AvgsFmAct(nrn *Neuron)
AvgsFmAct updates the running averages based on current learning activation. Computed after new activation for current cycle is updated.
func (*LearnNeurParams) Defaults ¶
func (ln *LearnNeurParams) Defaults()
func (*LearnNeurParams) InitActAvg ¶
func (ln *LearnNeurParams) InitActAvg(nrn *Neuron)
InitActAvg initializes the running-average activation values that drive learning. Called by InitWts (at start of learning).
func (*LearnNeurParams) Update ¶
func (ln *LearnNeurParams) Update()
type LearnSynParams ¶
type LearnSynParams struct { Learn bool `desc:"enable learning for this projection"` Lrate LrateParams `desc:"learning rate parameters, supporting two levels of modulation on top of base learning rate."` XCal XCalParams `view:"inline" desc:"parameters for the XCal learning rule"` }
LearnSynParams manages learning-related parameters at the synapse-level.
func (*LearnSynParams) CHLdWt ¶
func (ls *LearnSynParams) CHLdWt(suAvgSLrn, suAvgMLrn, ruAvgSLrn, ruAvgMLrn float32) float32
CHLdWt returns the error-driven weight change component for the temporally eXtended Contrastive Attractor Learning (XCAL), CHL version
func (*LearnSynParams) Defaults ¶
func (ls *LearnSynParams) Defaults()
func (*LearnSynParams) Update ¶
func (ls *LearnSynParams) Update()
type LrateMod ¶ added in v1.2.60
type LrateMod struct { On bool `desc:"toggle use of this modulation factor"` Base float32 `viewif:"On" min:"0" max:"1" desc:"baseline learning rate -- what you get for correct cases"` Range minmax.F32 `` /* 191-byte string literal not displayed */ }
LrateMod implements global learning rate modulation, based on a performance-based factor, for example error. Increasing levels of the factor = higher learning rate. This can be added to a Sim and called prior to DWt() to dynamically change lrate based on overall network performance.
func (*LrateMod) LrateMod ¶ added in v1.2.60
LrateMod calls LrateMod on given network, using the computed Mod factor based on the given normalized modulation factor (0 = no error = Base learning rate, 1 = maximum error). Returns the modulation factor applied.
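A Sim might use this each trial to scale learning by a normalized error measure before calling DWt. A rough sketch, assuming a signature like LrateMod(net *Network, fact float32) float32, with errNorm as a hypothetical 0-1 normalized error for the current trial:

    if ss.LrateMod.On {
        mod := ss.LrateMod.LrateMod(ss.Net, errNorm) // sets Lrate.Mod on all prjns
        _ = mod                                      // could be logged per trial
    }
    ss.Net.DWt()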
type LrateParams ¶ added in v1.2.60
type LrateParams struct { Base float32 `` /* 199-byte string literal not displayed */ Sched float32 `desc:"scheduled learning rate multiplier, simulating reduction in plasticity over aging"` Mod float32 `desc:"dynamic learning rate modulation due to neuromodulatory or other such factors"` Eff float32 `inactive:"+" desc:"effective actual learning rate multiplier used in computing DWt: Eff = eMod * Sched * Base"` }
LrateParams manages learning rate parameters
func (*LrateParams) Defaults ¶ added in v1.2.60
func (ls *LrateParams) Defaults()
func (*LrateParams) Init ¶ added in v1.2.60
func (ls *LrateParams) Init()
Init initializes modulation values back to 1 and updates Eff
func (*LrateParams) Update ¶ added in v1.2.60
func (ls *LrateParams) Update()
type LrnActAvgParams ¶
type LrnActAvgParams struct { SpikeG float32 `def:"8" desc:"gain multiplier on spike: how much spike drives AvgSS value"` MinLrn float32 `def:"0.02" desc:"minimum learning activation -- below this goes to zero"` SSTau float32 `` /* 532-byte string literal not displayed */ STau float32 `` /* 382-byte string literal not displayed */ MTau float32 `` /* 521-byte string literal not displayed */ LrnM float32 `` /* 618-byte string literal not displayed */ Init float32 `def:"0.15" min:"0" max:"1" desc:"initial value for average"` SSDt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"` SDt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"` MDt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"` LrnS float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"1-LrnM"` }
LrnActAvgParams has rate constants for averaging over activations at different time scales, to produce the running average activation values that then drive learning in the XCAL learning rules. Is driven directly by spikes that increment the running average at the super-short timescale. For a time cycle of 50 msec quarters / theta-window learning, the following settings work: Cyc:50, SS:35, S:8, M:40 (best); or Cyc:25, SS:20, S:4, M:20
func (*LrnActAvgParams) AvgsFmAct ¶
func (aa *LrnActAvgParams) AvgsFmAct(act float32, avgSS, avgS, avgM, avgSLrn, avgMLrn *float32)
AvgsFmAct computes averages based on current act
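The Tau / Dt fields imply a cascade of exponential integrators, each feeding the next slower one. A rough sketch of that update pattern (not the exact implementation; the SpikeG, MinLrn, and LrnM factors are omitted):

    // cascaded running averages: super-short integrates the activation,
    // short integrates super-short, medium integrates short (each Dt = 1 / Tau)
    avgSS += aa.SSDt * (act - avgSS)
    avgS += aa.SDt * (avgSS - avgS)
    avgM += aa.MDt * (avgS - avgM)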
func (*LrnActAvgParams) Defaults ¶
func (aa *LrnActAvgParams) Defaults()
func (*LrnActAvgParams) Update ¶
func (aa *LrnActAvgParams) Update()
type NMDAPrjn ¶
type NMDAPrjn struct {
Prjn // access as .Prjn
}
NMDAPrjn is a projection with NMDA maintenance channels. It marks a projection for special treatment in a MaintLayer which actually does the NMDA computations. Excitatory conductance is aggregated separately for this projection.
func (*NMDAPrjn) PrjnTypeName ¶
func (*NMDAPrjn) UpdateParams ¶
func (pj *NMDAPrjn) UpdateParams()
type Network ¶
type Network struct { NetworkStru SlowInterval int `` /* 174-byte string literal not displayed */ SlowCtr int `inactive:"+" desc:"counter for how long it has been since last SlowAdapt step"` }
axon.Network has parameters for running a basic rate-coded Axon network
func (*Network) ClearTargExt ¶ added in v1.2.65
func (nt *Network) ClearTargExt()
ClearTargExt clears external inputs Ext that were set from target values Targ. This can be called to simulate alpha cycles within theta cycles, for example.
func (*Network) CollectDWts ¶
CollectDWts writes all of the synaptic DWt values to the given dwts slice. If dwts is nil, it is pre-allocated to the given nwts size and the method returns true, so that the actual length of dwts can be passed next time around. Used for MPI sharing of weight changes across processors.
func (*Network) Cycle ¶
Cycle runs one cycle of activation updating: * Sends Ge increments from sending to receiving layers * Average and Max Ge stats * Inhibition based on Ge stats and Act Stats (computed at end of Cycle) * Activation from Ge, Gi, and Gl * Average and Max Act stats This basic version doesn't use the time info, but more specialized types do, and we want to keep a consistent API for end-user code.
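In end-user code, these methods are typically composed into a theta-cycle loop. A rough sketch (the time argument and CycleInc call follow the usual axon Time conventions and are assumptions here; the 150 + 50 cycle split is purely illustrative):

    net.NewState()
    for cyc := 0; cyc < 150; cyc++ { // minus phase
        net.Cycle(ltime)
        ltime.CycleInc()
    }
    net.MinusPhase(ltime)
    for cyc := 0; cyc < 50; cyc++ { // plus phase
        net.Cycle(ltime)
        ltime.CycleInc()
    }
    net.PlusPhaseImpl(ltime) // plus-phase updating (see PlusPhaseImpl below)
    net.DWt()
    net.WtFmDWt()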
func (*Network) CycleImpl ¶
CycleImpl runs one cycle of activation updating: * Sends Ge increments from sending to receiving layers * Average and Max Ge stats * Inhibition based on Ge stats and Act Stats (computed at end of Cycle) * Activation from Ge, Gi, and Gl * Average and Max Act stats This basic version doesn't use the time info, but more specialized types do, and we want to keep a consistent API for end-user code.
func (*Network) CyclePost ¶
CyclePost is called after the standard Cycle update, and calls CyclePost on Layers -- this is reserved for any kind of special ad-hoc types that need to do something special after Act is finally computed. For example, sending a neuromodulatory signal such as dopamine.
func (*Network) CyclePostImpl ¶
CyclePostImpl is called after the standard Cycle update, and calls CyclePost on Layers -- this is reserved for any kind of special ad-hoc types that need to do something special after Act is finally computed. For example, sending a neuromodulatory signal such as dopamine.
func (*Network) DWt ¶
func (nt *Network) DWt()
DWt computes the weight change (learning) based on current running-average activation values
func (*Network) DWtImpl ¶
func (nt *Network) DWtImpl()
DWtImpl computes the weight change (learning) based on current running-average activation values
func (*Network) DecayState ¶
DecayState decays activation state by the given proportion, e.g., 1 = decay completely, and 0 = decay not at all. This is called automatically in NewState, but is available here for ad-hoc decay cases.
func (*Network) Defaults ¶
func (nt *Network) Defaults()
Defaults sets all the default parameters for all layers and projections
func (*Network) InhibFmGeAct ¶
InhibFmGeAct computes inhibition Gi from Ge and Act stats within relevant Pools
func (*Network) InitActs ¶
func (nt *Network) InitActs()
InitActs fully initializes activation state -- not automatically called
func (*Network) InitExt ¶
func (nt *Network) InitExt()
InitExt initializes external input state -- call prior to applying external inputs to layers
func (*Network) InitGScale ¶ added in v1.2.92
func (nt *Network) InitGScale()
InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.
func (*Network) InitTopoSWts ¶ added in v1.2.75
func (nt *Network) InitTopoSWts()
InitTopoSWts initializes SWt structural weight parameters from prjn types that support topographic weight patterns and have the relevant flags set, including prjn.PoolTile and prjn.Circle. Call before InitWts if using topographic weights.
func (*Network) InitWts ¶
func (nt *Network) InitWts()
InitWts initializes synaptic weights and all other associated long-term state variables including running-average state values (e.g., layer running average activations etc)
func (*Network) LayersSetOff ¶
LayersSetOff sets the Off flag for all layers to given setting
func (*Network) LrateMod ¶ added in v1.2.60
LrateMod sets the Lrate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LrateSched). Updates the effective learning rate factor accordingly.
func (*Network) LrateSched ¶ added in v1.2.60
LrateSched sets the schedule-based learning rate multiplier. See also LrateMod. Updates the effective learning rate factor accordingly.
func (*Network) MinusPhase ¶ added in v1.2.63
MinusPhase does updating after end of minus phase
func (*Network) MinusPhaseImpl ¶ added in v1.2.63
MinusPhaseImpl does updating after end of minus phase
func (*Network) NewState ¶ added in v1.2.63
func (nt *Network) NewState()
NewState handles all initialization at start of new input pattern, including computing input scaling from running average activation etc.
func (*Network) NewStateImpl ¶ added in v1.2.63
func (nt *Network) NewStateImpl()
NewStateImpl handles all initialization at start of new input state
func (*Network) PlusPhaseImpl ¶ added in v1.2.63
PlusPhaseImpl does updating after end of plus phase
func (*Network) SendSpike ¶
SendSpike sends the change in activation since last sent, if above thresholds, and integrates sent deltas into GeRaw and time-integrated Ge values
func (*Network) SetDWts ¶
SetDWts sets the DWt weight changes from the given array of floats, which must be of the correct size. navg is the number of processors aggregated in these dwts -- some variables need to be averaged instead of summed (e.g., ActAvg)
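Together with CollectDWts, this supports data-parallel training in which each MPI process computes DWt on its own subset of trials and the changes are summed across processes. A rough sketch; the AllReduce call is a hypothetical placeholder for whatever MPI wrapper the Sim uses:

    // gather local weight changes; on the first call dwts is nil and gets allocated
    if net.CollectDWts(&dwts, nwts) {
        nwts = len(dwts) // remember actual size for subsequent allocations
    }
    sumDWts := make([]float32, len(dwts))
    comm.AllReduceSumF32(sumDWts, dwts) // hypothetical sum across processes
    net.SetDWts(sumDWts, nProcs)        // navg = number of processes aggregated
    net.WtFmDWt()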
func (*Network) SizeReport ¶
SizeReport returns a string reporting the size of each layer and projection in the network, and total memory footprint.
func (*Network) SlowAdapt ¶ added in v1.2.37
func (nt *Network) SlowAdapt()
SlowAdapt runs the slow adaptation functions across all layers: synaptic scaling, GScale conductance scaling, and adapting inhibition
func (*Network) SynFail ¶ added in v1.2.92
func (nt *Network) SynFail()
SynFail updates synaptic failure
func (*Network) SynVarNames ¶
SynVarNames returns the names of all the variables on the synapses in this network. Not all projections need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!
func (*Network) SynVarProps ¶
SynVarProps returns properties for variables
func (*Network) TargToExt ¶ added in v1.2.65
func (nt *Network) TargToExt()
TargToExt sets external input Ext from target values Targ. This is done at the end of MinusPhase to allow targets to drive activity in the plus phase. This can be called separately to simulate alpha cycles within theta cycles, for example.
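A Sim simulating several alpha cycles within a theta cycle might use TargToExt and ClearTargExt around an extra settling loop. A rough sketch (the cycle count and the Cycle time argument are illustrative assumptions):

    // after the minus phase: let the targets drive activity for one alpha cycle
    net.TargToExt()
    for cyc := 0; cyc < 100; cyc++ {
        net.Cycle(ltime)
        ltime.CycleInc()
    }
    // remove the target-driven external input again before continuing
    net.ClearTargExt()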
func (*Network) ThreadAlloc ¶
ThreadAlloc allocates layers to given number of threads, attempting to evenly divide computation. Returns report of thread allocations and estimated computational cost per thread.
func (*Network) ThreadReport ¶
ThreadReport returns report of thread allocations and estimated computational cost per thread.
func (*Network) UnLesionNeurons ¶
func (nt *Network) UnLesionNeurons()
UnLesionNeurons unlesions neurons in all layers in the network. Provides a clean starting point for subsequent lesion experiments.
func (*Network) UnitVarNames ¶
UnitVarNames returns a list of variable names available on the units in this network. Not all layers need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!
func (*Network) UnitVarProps ¶
UnitVarProps returns properties for variables
func (*Network) UpdateExtFlags ¶
func (nt *Network) UpdateExtFlags()
UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.
func (*Network) UpdateParams ¶
func (nt *Network) UpdateParams()
UpdateParams updates all the derived parameters if any have changed, for all layers and projections
func (*Network) WtFmDWt ¶
func (nt *Network) WtFmDWt()
WtFmDWt updates the weights from delta-weight changes. Also calls SynScale every Interval of weight updates
func (*Network) WtFmDWtImpl ¶
func (nt *Network) WtFmDWtImpl()
WtFmDWtImpl updates the weights from delta-weight changes.
type NetworkStru ¶
type NetworkStru struct { EmerNet emer.Network `` /* 274-byte string literal not displayed */ Nm string `desc:"overall name of network -- helps discriminate if there are multiple"` Layers emer.Layers `desc:"list of layers"` WtsFile string `desc:"filename of last weights file loaded or saved"` LayMap map[string]emer.Layer `view:"-" desc:"map of name to layers -- layer names must be unique"` MinPos mat32.Vec3 `view:"-" desc:"minimum display position in network"` MaxPos mat32.Vec3 `view:"-" desc:"maximum display position in network"` MetaData map[string]string `` /* 194-byte string literal not displayed */ NThreads int `` /* 203-byte string literal not displayed */ LockThreads bool `` /* 165-byte string literal not displayed */ ThrLay [][]emer.Layer `` /* 179-byte string literal not displayed */ ThrChans []LayFunChan `view:"-" desc:"layer function channels, per thread"` ThrTimes []timer.Time `view:"-" desc:"timers for each thread, so you can see how evenly the workload is being distributed"` FunTimes map[string]*timer.Time `view:"-" desc:"timers for each major function (step of processing)"` WaitGp sync.WaitGroup `view:"-" desc:"network-level wait group for synchronizing threaded layer calls"` }
axon.NetworkStru holds the basic structural components of a network (layers)
func (*NetworkStru) AddLayer ¶
AddLayer adds a new layer with given name and shape to the network. 2D and 4D layer shapes are generally preferred but not essential -- see AddLayer2D and 4D for convenience methods for those. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each unit group having 4 rows (Y) of 5 (X) units.
func (*NetworkStru) AddLayer2D ¶
AddLayer2D adds a new layer with given name and 2D shape to the network. 2D and 4D layer shapes are generally preferred but not essential.
func (*NetworkStru) AddLayer4D ¶
func (nt *NetworkStru) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ emer.LayerType) emer.Layer
AddLayer4D adds a new layer with given name and 4D shape to the network. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each pool having 4 rows (Y) of 5 (X) neurons.
func (*NetworkStru) AddLayerInit ¶
AddLayerInit is an implementation routine that takes a given layer, adds it to the network, and initializes and configures it properly.
func (*NetworkStru) AllParams ¶
func (nt *NetworkStru) AllParams() string
AllParams returns a listing of all parameters in the Network.
func (*NetworkStru) AllPrjnScales ¶ added in v1.2.45
func (nt *NetworkStru) AllPrjnScales() string
AllPrjnScales returns a listing of all PrjnScale parameters in the Network in all Layers, Recv projections. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.
func (*NetworkStru) ApplyParams ¶
ApplyParams applies given parameter style Sheet to layers and prjns in this network. Calls UpdateParams to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and an error if there were any errors.
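Parameter styling typically comes from a params.Sheet defined in the Sim. A minimal sketch of applying one; the selector strings and values are illustrative, and the params.Sel / params.Params field names follow the emergent params package as assumed here:

    sheet := params.Sheet{
        {Sel: "Layer", Desc: "all layers",
            Params: params.Params{
                "Layer.Inhib.Layer.Gi": "1.1",
            }},
        {Sel: ".Back", Desc: "weaker top-down projections",
            Params: params.Params{
                "Prjn.PrjnScale.Rel": "0.2",
            }},
    }
    applied, err := net.ApplyParams(&sheet, true) // setMsg=true prints each param set
    if err != nil {
        log.Println(err)
    }
    _ = applied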
func (*NetworkStru) BidirConnectLayerNames ¶
func (nt *NetworkStru) BidirConnectLayerNames(low, high string, pat prjn.Pattern) (lowlay, highlay emer.Layer, fwdpj, backpj emer.Prjn, err error)
BidirConnectLayerNames establishes bidirectional projections between two layers, referenced by name, with low = the lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) BidirConnectLayers ¶
func (nt *NetworkStru) BidirConnectLayers(low, high emer.Layer, pat prjn.Pattern) (fwdpj, backpj emer.Prjn)
BidirConnectLayers establishes bidirectional projections between two layers, with low = lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) BidirConnectLayersPy ¶
func (nt *NetworkStru) BidirConnectLayersPy(low, high emer.Layer, pat prjn.Pattern)
BidirConnectLayersPy establishes bidirectional projections between two layers, with low = lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build. Py = python version with no return vals.
func (*NetworkStru) Bounds ¶
func (nt *NetworkStru) Bounds() (min, max mat32.Vec3)
func (*NetworkStru) BoundsUpdt ¶
func (nt *NetworkStru) BoundsUpdt()
BoundsUpdt updates the Min / Max display bounds for 3D display
func (*NetworkStru) Build ¶
func (nt *NetworkStru) Build() error
Build constructs the layer and projection state based on the layer shapes and patterns of interconnectivity
func (*NetworkStru) BuildThreads ¶
func (nt *NetworkStru) BuildThreads()
BuildThreads constructs the layer thread allocation based on Thread setting in the layers
func (*NetworkStru) ConnectLayerNames ¶
func (nt *NetworkStru) ConnectLayerNames(send, recv string, pat prjn.Pattern, typ emer.PrjnType) (rlay, slay emer.Layer, pj emer.Prjn, err error)
ConnectLayerNames establishes a projection between two layers, referenced by name, adding to the recv and send projection lists on each side of the connection. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) ConnectLayers ¶
func (nt *NetworkStru) ConnectLayers(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType) emer.Prjn
ConnectLayers establishes a projection between two layers, adding to the recv and send projection lists on each side of the connection. Does not yet actually connect the units within the layers -- that requires Build.
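Putting the construction methods together, a small network might be built as follows. A sketch, assuming the standard emergent imports (github.com/emer/axon/axon, github.com/emer/emergent/emer, github.com/emer/emergent/prjn) and the usual emer layer / prjn type constants:

    net := &axon.Network{}
    net.InitName(net, "Demo") // MUST be called first -- see InitName
    inp := net.AddLayer2D("Input", 5, 5, emer.Input)
    hid := net.AddLayer4D("Hidden", 2, 2, 4, 4, emer.Hidden) // 4D enables pool-level inhibition
    out := net.AddLayer2D("Output", 5, 5, emer.Target)
    full := prjn.NewFull()
    net.ConnectLayers(inp, hid, full, emer.Forward)
    net.BidirConnectLayers(hid, out, full)
    if err := net.Build(); err != nil {
        log.Fatal(err)
    }
    net.Defaults()
    net.InitWts()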
func (*NetworkStru) ConnectLayersPrjn ¶
func (nt *NetworkStru) ConnectLayersPrjn(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType, pj emer.Prjn) emer.Prjn
ConnectLayersPrjn makes connection using given projection between two layers, adding given prjn to the recv and send projection lists on each side of the connection. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) FunTimerStart ¶
func (nt *NetworkStru) FunTimerStart(fun string)
FunTimerStart starts function timer for given function name -- ensures creation of timer
func (*NetworkStru) FunTimerStop ¶
func (nt *NetworkStru) FunTimerStop(fun string)
FunTimerStop stops function timer -- timer must already exist
func (*NetworkStru) InitName ¶
func (nt *NetworkStru) InitName(net emer.Network, name string)
InitName MUST be called to initialize the network's pointer to itself as an emer.Network which enables the proper interface methods to be called. Also sets the name.
func (*NetworkStru) Label ¶
func (nt *NetworkStru) Label() string
func (*NetworkStru) LateralConnectLayer ¶
LateralConnectLayer establishes a self-projection within given layer. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) LateralConnectLayerPrjn ¶
func (nt *NetworkStru) LateralConnectLayerPrjn(lay emer.Layer, pat prjn.Pattern, pj emer.Prjn) emer.Prjn
LateralConnectLayerPrjn makes lateral self-projection using given projection. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) LayerByName ¶
func (nt *NetworkStru) LayerByName(name string) emer.Layer
LayerByName returns a layer by looking it up by name in the layer map (nil if not found). Will create the layer map if it is nil or a different size than layers slice, but otherwise needs to be updated manually.
func (*NetworkStru) LayerByNameTry ¶
func (nt *NetworkStru) LayerByNameTry(name string) (emer.Layer, error)
LayerByNameTry returns a layer by looking it up by name -- emits a log error message if layer is not found
func (*NetworkStru) Layout ¶
func (nt *NetworkStru) Layout()
Layout computes the 3D layout of layers based on their relative position settings
func (*NetworkStru) MakeLayMap ¶
func (nt *NetworkStru) MakeLayMap()
MakeLayMap updates layer map based on current layers
func (*NetworkStru) NLayers ¶
func (nt *NetworkStru) NLayers() int
func (*NetworkStru) NonDefaultParams ¶
func (nt *NetworkStru) NonDefaultParams() string
NonDefaultParams returns a listing of all parameters in the Network that are not at their default values -- useful for setting param styles etc.
func (*NetworkStru) OpenWtsCpp ¶
func (nt *NetworkStru) OpenWtsCpp(filename gi.FileName) error
OpenWtsCpp opens network weights (and any other state that adapts with learning) from old C++ emergent format. If the filename has a .gz extension, then the file is uncompressed from gzip format.
func (*NetworkStru) OpenWtsJSON ¶
func (nt *NetworkStru) OpenWtsJSON(filename gi.FileName) error
OpenWtsJSON opens network weights (and any other state that adapts with learning) from a JSON-formatted file. If the filename has a .gz extension, then the file is uncompressed from gzip format.
func (*NetworkStru) ReadWtsCpp ¶
func (nt *NetworkStru) ReadWtsCpp(r io.Reader) error
ReadWtsCpp reads the weights from old C++ emergent format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.
func (*NetworkStru) ReadWtsJSON ¶
func (nt *NetworkStru) ReadWtsJSON(r io.Reader) error
ReadWtsJSON reads network weights from the receiver-side perspective in a JSON text format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.
func (*NetworkStru) SaveWtsJSON ¶
func (nt *NetworkStru) SaveWtsJSON(filename gi.FileName) error
SaveWtsJSON saves network weights (and any other state that adapts with learning) to a JSON-formatted file. If filename has .gz extension, then file is gzip compressed.
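For example, saving trained weights at the end of a run and reloading them in a later testing-only run (a sketch; gi.FileName is a string type, so a literal filename works directly):

    if err := net.SaveWtsJSON("trained.wts.gz"); err != nil { // .gz => gzip compressed
        log.Println(err)
    }
    // ... later, in a testing-only run ...
    if err := net.OpenWtsJSON("trained.wts.gz"); err != nil {
        log.Println(err)
    }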
func (*NetworkStru) SetWts ¶
func (nt *NetworkStru) SetWts(nw *weights.Network) error
SetWts sets the weights for this network from weights.Network decoded values
func (*NetworkStru) StartThreads ¶
func (nt *NetworkStru) StartThreads()
StartThreads starts up the computation threads, which monitor the channels for work
func (*NetworkStru) StdVertLayout ¶
func (nt *NetworkStru) StdVertLayout()
StdVertLayout arranges layers in a standard vertical (z axis stack) layout, by setting the Rel settings
func (*NetworkStru) StopThreads ¶
func (nt *NetworkStru) StopThreads()
StopThreads stops the computation threads
func (*NetworkStru) ThrLayFun ¶
func (nt *NetworkStru) ThrLayFun(fun func(ly AxonLayer), funame string)
ThrLayFun calls the given function on each layer, using threaded (goroutine worker) computation if NThreads > 1, and otherwise just iterates over the layers in the current thread.
func (*NetworkStru) ThrTimerReset ¶
func (nt *NetworkStru) ThrTimerReset()
ThrTimerReset resets the per-thread timers
func (*NetworkStru) ThrWorker ¶
func (nt *NetworkStru) ThrWorker(tt int)
ThrWorker is the worker function run by the worker threads
func (*NetworkStru) TimerReport ¶
func (nt *NetworkStru) TimerReport()
TimerReport reports the amount of time spent in each function, and in each thread
func (*NetworkStru) VarRange ¶
func (nt *NetworkStru) VarRange(varNm string) (min, max float32, err error)
VarRange returns the min / max values for the given variable. TODO: support r. / s. projection values
func (*NetworkStru) WriteWtsJSON ¶
func (nt *NetworkStru) WriteWtsJSON(w io.Writer) error
WriteWtsJSON writes the weights for this network from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
type NeurFlags ¶
type NeurFlags int32
NeurFlags are bit-flags encoding relevant binary state for neurons
const (
    // NeurOff flag indicates that this neuron has been turned off (i.e., lesioned)
    NeurOff NeurFlags = iota
    // NeurHasExt means the neuron has external input in its Ext field
    NeurHasExt
    // NeurHasTarg means the neuron has external target input in its Targ field
    NeurHasTarg
    // NeurHasCmpr means the neuron has external comparison input in its Targ field -- used for computing
    // comparison statistics but does not drive neural activity ever
    NeurHasCmpr
    NeurFlagsN
)
The neuron flags
func (*NeurFlags) FromString ¶
func (NeurFlags) MarshalJSON ¶
func (*NeurFlags) UnmarshalJSON ¶
type Neuron ¶
type Neuron struct { Flags NeurFlags `desc:"bit flags for binary state variables"` SubPool int32 `` /* 214-byte string literal not displayed */ Spike float32 `desc:"whether neuron has spiked or not on this cycle (0 or 1)"` ISI float32 `desc:"current inter-spike-interval -- counts up since last spike. Starts at -1 when initialized."` ISIAvg float32 `` /* 320-byte string literal not displayed */ Act float32 `` /* 283-byte string literal not displayed */ ActInt float32 `` /* 421-byte string literal not displayed */ Ge float32 `desc:"total excitatory synaptic conductance -- the net excitatory input to the neuron -- does *not* include Gbar.E"` Gi float32 `desc:"total inhibitory synaptic conductance -- the net inhibitory input to the neuron -- does *not* include Gbar.I"` Gk float32 `` /* 148-byte string literal not displayed */ Inet float32 `desc:"net current produced by all channels -- drives update of Vm"` Vm float32 `desc:"membrane potential -- integrates Inet current over time"` VmDend float32 `desc:"dendritic membrane potential -- subject to multiplier"` Targ float32 `desc:"target value: drives learning to produce this activation value"` Ext float32 `desc:"external input: drives activation of unit from outside influences (e.g., sensory input)"` AvgSS float32 `` /* 248-byte string literal not displayed */ AvgS float32 `` /* 202-byte string literal not displayed */ AvgM float32 `` /* 131-byte string literal not displayed */ AvgSLrn float32 `` /* 487-byte string literal not displayed */ AvgMLrn float32 `desc:"medium time-scale activation average used in learning: subject to thresholding so low values become zero"` ActSt1 float32 `` /* 176-byte string literal not displayed */ ActSt2 float32 `` /* 176-byte string literal not displayed */ ActM float32 `desc:"the activation state at end of third quarter, which is the traditional posterior-cortical minus phase activation"` ActP float32 `desc:"the activation state at end of fourth quarter, which is the traditional posterior-cortical plus_phase activation"` ActDif float32 `` /* 164-byte string literal not displayed */ ActDel float32 `desc:"delta activation: change in Act from one cycle to next -- can be useful to track where changes are taking place"` ActPrv float32 `desc:"the final activation state at end of previous state"` RLrate float32 `` /* 142-byte string literal not displayed */ ActAvg float32 `` /* 194-byte string literal not displayed */ AvgPct float32 `` /* 158-byte string literal not displayed */ TrgAvg float32 `` /* 169-byte string literal not displayed */ DTrgAvg float32 `` /* 164-byte string literal not displayed */ AvgDif float32 `` /* 173-byte string literal not displayed */ Noise float32 `desc:"noise value added to unit (ActNoiseParams determines distribution, and when / where it is added)"` GiSyn float32 `` /* 168-byte string literal not displayed */ GiSelf float32 `desc:"total amount of self-inhibition -- time-integrated to avoid oscillations"` GeRaw float32 `desc:"raw excitatory conductance (net input) received from sending units (send delta's are added to this value)"` GiRaw float32 `desc:"raw inhibitory conductance (net input) received from sending units (send delta's are added to this value)"` GeM float32 `` /* 165-byte string literal not displayed */ GiM float32 `` /* 168-byte string literal not displayed */ GknaFast float32 `` /* 130-byte string literal not displayed */ GknaMed float32 `` /* 131-byte string literal not displayed */ GknaSlow float32 `` /* 129-byte string literal not displayed */ Gnmda float32 `desc:"net NMDA conductance, after Vm gating and Gbar -- added directly to Ge as it has the same reversal potential."` NMDA float32 `desc:"NMDA channel activation -- underlying time-integrated value with decay"` NMDASyn float32 `desc:"synaptic NMDA activation directly from projection(s)"` GgabaB float32 `desc:"net GABA-B conductance, after Vm gating and Gbar + Gbase -- set to Gk for GIRK, with .1 reversal potential."` GABAB float32 `desc:"GABA-B / GIRK activation -- time-integrated value with rise and decay time constants"` GABABx float32 `desc:"GABA-B / GIRK internal drive variable -- gets the raw activation and decays"` Attn float32 `desc:"Attentional modulation factor, which can be set by special layers such as the TRC -- multiplies Ge"` }
axon.Neuron holds all of the neuron (unit) level variables -- this is the most basic version with rate-code only and no optional features at all. All variables accessible via Unit interface must be float32 and start at the top, in contiguous order
func (*Neuron) VarByIndex ¶
VarByIndex returns variable using index (0 = first variable in NeuronVars list)
type Pool ¶
type Pool struct {
StIdx, EdIdx int `desc:"starting and ending (exclusive) indexes for the list of neurons in this pool"`
Inhib fffb.Inhib `desc:"FFFB inhibition computed values, including Ge and Act AvgMax which drive inhibition"`
ActM minmax.AvgMax32 `desc:"minus phase average and max Act activation values, for ActAvg updt"`
ActP minmax.AvgMax32 `desc:"plus phase average and max Act activation values, for ActAvg updt"`
GeM minmax.AvgMax32 `desc:"stats for GeM minus phase averaged Ge values"`
GiM minmax.AvgMax32 `desc:"stats for GiM minus phase averaged Gi values"`
AvgDif minmax.AvgMax32 `desc:"absolute value of AvgDif differences from actual neuron ActPct relative to TrgAvg"`
}
Pool contains computed values for FFFB inhibition, and various other state values for layers and pools (unit groups) that can be subject to inhibition, including: * average / max stats on Ge and Act that drive inhibition
type Prjn ¶
type Prjn struct { PrjnStru Com SynComParams `view:"inline" desc:"synaptic communication parameters: delay, probability of failure"` PrjnScale PrjnScaleParams `` /* 194-byte string literal not displayed */ SWt SWtParams `` /* 165-byte string literal not displayed */ Learn LearnSynParams `view:"add-fields" desc:"synaptic-level learning parameters for learning in the fast LWt values."` Syns []Synapse `desc:"synaptic state values, ordered by the sending layer units which owns them -- one-to-one with SConIdx array"` // misc state variables below: GScale GScaleVals `view:"inline" desc:"conductance scaling values"` Gidx ringidx.FIx `` /* 201-byte string literal not displayed */ Gbuf []float32 `` /* 173-byte string literal not displayed */ }
axon.Prjn is a basic Axon projection with synaptic learning parameters
func (*Prjn) AsAxon ¶
AsAxon returns this prjn as an axon.Prjn -- all derived prjns must redefine this to return the base Prjn type, so that the AxonPrjn interface does not need to include accessors to all the basic stuff.
func (*Prjn) Build ¶
Build constructs the full connectivity among the layers as specified in this projection. Calls PrjnStru.BuildStru and then allocates the synaptic values in Syns accordingly.
func (*Prjn) DWt ¶
func (pj *Prjn) DWt()
DWt computes the weight change (learning) -- on sending projections
func (*Prjn) InitWtSym ¶
InitWtSym initializes weight symmetry -- is given the reciprocal projection where the Send and Recv layers are reversed.
func (*Prjn) InitWts ¶
func (pj *Prjn) InitWts()
InitWts initializes weight values according to SWt params, enforcing current constraints.
func (*Prjn) InitWtsSyn ¶
InitWtsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.
func (*Prjn) LrateMod ¶ added in v1.2.60
LrateMod sets the Lrate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LrateSched). Updates the effective learning rate factor accordingly.
func (*Prjn) LrateSched ¶ added in v1.2.60
LrateSched sets the schedule-based learning rate multiplier. See also LrateMod. Updates the effective learning rate factor accordingly.
func (*Prjn) ReadWtsJSON ¶
ReadWtsJSON reads the weights from this projection from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one prjn only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.
func (*Prjn) RecvGInc ¶
RecvGInc increments the receiver's GeRaw or GiRaw from that of all the projections.
func (*Prjn) RecvGIncNoStats ¶ added in v1.2.37
func (pj *Prjn) RecvGIncNoStats()
RecvGIncNoStats is plus-phase version without stats
func (*Prjn) RecvGIncStats ¶ added in v1.2.37
func (pj *Prjn) RecvGIncStats()
RecvGIncStats is called every cycle during minus phase, to increment GeRaw or GiRaw, and also collect stats about conductances.
func (*Prjn) SWtFmWt ¶ added in v1.2.45
func (pj *Prjn) SWtFmWt()
SWtFmWt updates structural, slowly-adapting SWt value based on accumulated DSWt values, which are zero-summed with additional soft bounding relative to SWt limits.
func (*Prjn) SWtRescale ¶ added in v1.2.45
func (pj *Prjn) SWtRescale()
SWtRescale rescales the SWt values to preserve the target overall mean value, using subtractive normalization.
func (*Prjn) SendSpike ¶
SendSpike sends a spike from sending neuron index si, to add to buffer on receivers.
func (*Prjn) SetSWtsFunc ¶ added in v1.2.75
SetSWtsFunc initializes structural SWt values using given function based on receiving and sending unit indexes.
func (*Prjn) SetSWtsRPool ¶ added in v1.2.75
SetSWtsRPool initializes SWt structural weight values using given tensor of values which has unique values for each recv neuron within a given pool.
func (*Prjn) SetSynVal ¶
SetSynVal sets the value of the given variable name on the synapse between the given send, recv unit indexes (1D, flat indexes). Returns an error for access errors.
func (*Prjn) SetWtsFunc ¶
SetWtsFunc initializes synaptic Wt value using given function based on receiving and sending unit indexes. Strongly suggest calling SWtRescale after.
func (*Prjn) SlowAdapt ¶ added in v1.2.37
func (pj *Prjn) SlowAdapt()
SlowAdapt does the slow adaptation: SWt learning and SynScale
func (*Prjn) SynFail ¶ added in v1.2.92
func (pj *Prjn) SynFail()
SynFail updates synaptic weight failure only -- normally done as part of DWt and WtFmDWt, but this call can be used during testing to update failing synapses.
func (*Prjn) SynIdx ¶
SynIdx returns the index of the synapse between given send, recv unit indexes (1D, flat indexes). Returns -1 if synapse not found between these two neurons. Requires searching within connections for receiving unit.
func (*Prjn) SynScale ¶ added in v1.2.23
func (pj *Prjn) SynScale()
SynScale performs synaptic scaling based on running average activation vs. targets
func (*Prjn) SynVal ¶
SynVal returns value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes). Returns mat32.NaN() for access errors (see SynValTry for error message)
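For example, reading and then overwriting a single weight (a sketch, assuming the emer-standard signatures SynVal(varNm string, sidx, ridx int) float32 and SetSynVal(varNm string, sidx, ridx int, val float32) error):

    wt := pj.SynVal("Wt", sendIdx, recvIdx) // NaN if the two units are not connected
    if !mat32.IsNaN(wt) {
        if err := pj.SetSynVal("Wt", sendIdx, recvIdx, 0.5); err != nil {
            log.Println(err)
        }
    }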
func (*Prjn) SynVal1D ¶
SynVal1D returns value of given variable index (from SynVarIdx) on given SynIdx. Returns NaN on invalid index. This is the core synapse var access method used by other methods, so it is the only one that needs to be updated for derived layer types.
func (*Prjn) SynVals ¶
SynVals sets values of given variable name for each synapse, using the natural ordering of the synapses (sender based for Axon), into given float32 slice (only resized if not big enough). Returns error on invalid var name.
func (*Prjn) SynVarIdx ¶
SynVarIdx returns the index of given variable within the synapse, according to *this prjn's* SynVarNames() list (using a map to lookup index), or -1 and error message if not found.
func (*Prjn) SynVarNames ¶
func (*Prjn) SynVarNum ¶
SynVarNum returns the number of synapse-level variables for this prjn. This is needed for extending indexes in derived types.
func (*Prjn) SynVarProps ¶
SynVarProps returns properties for variables
func (*Prjn) UpdateParams ¶
func (pj *Prjn) UpdateParams()
UpdateParams updates all params given any changes that might have been made to individual values
func (*Prjn) WriteWtsJSON ¶
WriteWtsJSON writes the weights from this projection from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
type PrjnScaleParams ¶ added in v1.2.45
type PrjnScaleParams struct { Rel float32 `` /* 253-byte string literal not displayed */ Abs float32 `` /* 334-byte string literal not displayed */ Adapt bool `` /* 335-byte string literal not displayed */ ScaleLrate float32 `` /* 290-byte string literal not displayed */ HiTol float32 `` /* 248-byte string literal not displayed */ LoTol float32 `` /* 248-byte string literal not displayed */ AvgTau float32 `` /* 348-byte string literal not displayed */ AvgDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"` }
PrjnScaleParams are projection scaling parameters: modulates overall strength of projection, using both absolute and relative factors. Also includes ability to adapt Scale factors to maintain AvgMaxGeM / GiM max conductances according to Acts.GTarg target values.
func (*PrjnScaleParams) Defaults ¶ added in v1.2.45
func (ws *PrjnScaleParams) Defaults()
func (*PrjnScaleParams) FullScale ¶ added in v1.2.45
func (ws *PrjnScaleParams) FullScale(savg, snu, ncon float32) float32
FullScale returns full scaling factor, which is product of Abs * Rel * SLayActScale
func (*PrjnScaleParams) SLayActScale ¶ added in v1.2.45
func (ws *PrjnScaleParams) SLayActScale(savg, snu, ncon float32) float32
SLayActScale computes the scaling factor based on sending layer activity level (savg), number of units in sending layer (snu), and number of recv connections (ncon). Uses a fixed sem_extra standard-error-of-the-mean (SEM) extra value of 2, added to the average expected number of active connections to receive, for purposes of computing scaling factors with partial connectivity. For 25% layer activity, binomial SEM = sqrt(p(1-p)) = .43, so 3x = 1.3, so 2 is a reasonable default.
func (*PrjnScaleParams) Update ¶ added in v1.2.45
func (ws *PrjnScaleParams) Update()
type PrjnStru ¶
type PrjnStru struct { AxonPrj AxonPrjn `` /* 267-byte string literal not displayed */ Off bool `desc:"inactivate this projection -- allows for easy experimentation"` Cls string `desc:"Class is for applying parameter styles, can be space separated multple tags"` Notes string `desc:"can record notes about this projection here"` Send emer.Layer `desc:"sending layer for this projection"` Recv emer.Layer `` /* 167-byte string literal not displayed */ Pat prjn.Pattern `desc:"pattern of connectivity"` Typ emer.PrjnType `` /* 154-byte string literal not displayed */ RConN []int32 `view:"-" desc:"number of recv connections for each neuron in the receiving layer, as a flat list"` RConNAvgMax minmax.AvgMax32 `inactive:"+" desc:"average and maximum number of recv connections in the receiving layer"` RConIdxSt []int32 `view:"-" desc:"starting index into ConIdx list for each neuron in receiving layer -- just a list incremented by ConN"` RConIdx []int32 `` /* 213-byte string literal not displayed */ RSynIdx []int32 `` /* 185-byte string literal not displayed */ SConN []int32 `view:"-" desc:"number of sending connections for each neuron in the sending layer, as a flat list"` SConNAvgMax minmax.AvgMax32 `inactive:"+" desc:"average and maximum number of sending connections in the sending layer"` SConIdxSt []int32 `view:"-" desc:"starting index into ConIdx list for each neuron in sending layer -- just a list incremented by ConN"` SConIdx []int32 `` /* 213-byte string literal not displayed */ }
PrjnStru contains the basic structural information for specifying a projection of synaptic connections between two layers, and maintaining all the synaptic connection-level data. The exact same struct object is added to the Recv and Send layers, and it manages everything about the connectivity, and methods on the Prjn handle all the relevant computation.
func (*PrjnStru) ApplyParams ¶
ApplyParams applies given parameter style Sheet to this projection. Calls UpdateParams if anything is set, to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and an error if there were any errors.
func (*PrjnStru) BuildStru ¶
BuildStru constructs the full connectivity among the layers as specified in this projection. Calls Validate and returns false if invalid. Pat.Connect is called to get the pattern of the connection. Then the connection indexes are configured according to that pattern.
func (*PrjnStru) Connect ¶
Connect sets the connectivity between two layers and the pattern to use in interconnecting them
func (*PrjnStru) Init ¶
Init MUST be called to initialize the prjn's pointer to itself as an emer.Prjn which enables the proper interface methods to be called.
func (*PrjnStru) NonDefaultParams ¶
NonDefaultParams returns a listing of all parameters in the Projection that are not at their default values -- useful for setting param styles etc.
func (*PrjnStru) PrjnTypeName ¶
func (*PrjnStru) SetNIdxSt ¶
func (ps *PrjnStru) SetNIdxSt(n *[]int32, avgmax *minmax.AvgMax32, idxst *[]int32, tn *etensor.Int32) int32
SetNIdxSt sets the *ConN and *ConIdxSt values given n tensor from Pat. Returns total number of connections for this direction.
type PrjnType ¶
PrjnType has the GLong extensions to the emer.PrjnType types, for gui
func StringToPrjnType ¶
type RLrateParams ¶ added in v1.2.79
type RLrateParams struct { On bool `def:"true" desc:"use learning rate modulation"` ActThr float32 `def:"0.1" desc:"threshold on Max(AvgS, AvgM) below which Min lrate applies -- must be > 0 to prevent div by zero"` ActDifThr float32 `def:"0.02" desc:"threshold on recv neuron error delta, i.e., |AvgS - AvgM| below which lrate is at Min value"` Min float32 `def:"0.001" desc:"minimum learning rate value when below ActDifThr"` }
RLrateParams are the recv neuron learning rate modulation parameters. RLrate is computed as |AvgS - AvgM| / Max(AvgS, AvgM), subject to thresholding
func (*RLrateParams) Defaults ¶ added in v1.2.79
func (rl *RLrateParams) Defaults()
func (*RLrateParams) RLrate ¶ added in v1.2.79
func (rl *RLrateParams) RLrate(avgS, avgM float32) float32
RLrate returns the learning rate as a function of AvgS and AvgM values
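A rough sketch of the computation described above, in terms of the On, ActThr, ActDifThr, and Min fields (not necessarily the exact implementation; assumes the axon and goki mat32 imports):

    // sketch of RLrate: |AvgS - AvgM| / Max(AvgS, AvgM), with thresholds
    func rlrateSketch(rl *axon.RLrateParams, avgS, avgM float32) float32 {
        if !rl.On {
            return 1
        }
        max := mat32.Max(avgS, avgM)
        if max <= rl.ActThr { // too little activity for a reliable ratio
            return rl.Min
        }
        dif := mat32.Abs(avgS - avgM)
        if dif <= rl.ActDifThr { // negligible error delta -> minimum learning rate
            return rl.Min
        }
        return dif / max
    }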
func (*RLrateParams) Update ¶ added in v1.2.79
func (rl *RLrateParams) Update()
type SWtAdaptParams ¶ added in v1.2.45
type SWtAdaptParams struct { On bool `` /* 137-byte string literal not displayed */ Lrate float32 `` /* 388-byte string literal not displayed */ SigGain float32 `` /* 135-byte string literal not displayed */ DreamVar float32 `` /* 354-byte string literal not displayed */ }
SWtAdaptParams manages adaptation of SWt values
func (*SWtAdaptParams) Defaults ¶ added in v1.2.45
func (sp *SWtAdaptParams) Defaults()
func (*SWtAdaptParams) RndVar ¶ added in v1.2.55
func (sp *SWtAdaptParams) RndVar() float32
RndVar returns the random variance (zero mean) based on DreamVar param
func (*SWtAdaptParams) Update ¶ added in v1.2.45
func (sp *SWtAdaptParams) Update()
type SWtInitParams ¶ added in v1.2.45
type SWtInitParams struct { SPct float32 `` /* 315-byte string literal not displayed */ Mean float32 `` /* 199-byte string literal not displayed */ Var float32 `def:"0.25" desc:"initial variance in weight values, prior to constraints."` Sym bool `` /* 149-byte string literal not displayed */ }
SWtInitParams specifies the initial SWt values.
func (*SWtInitParams) Defaults ¶ added in v1.2.45
func (sp *SWtInitParams) Defaults()
func (*SWtInitParams) RndVar ¶ added in v1.2.45
func (sp *SWtInitParams) RndVar() float32
RndVar returns the random variance in weight value (zero mean) based on Var param
func (*SWtInitParams) Update ¶ added in v1.2.45
func (sp *SWtInitParams) Update()
type SWtParams ¶ added in v1.2.45
type SWtParams struct {
	Init  SWtInitParams  `view:"inline" desc:"initialization of SWt values"`
	Adapt SWtAdaptParams `view:"inline" desc:"adaptation of SWt values in response to LWt learning"`
	Limit minmax.F32     `def:"{0.2 0.8}" view:"inline" desc:"range limits for SWt values"`
}
SWtParams manages structural, slowly adapting weight values (SWt), in terms of initialization and updating over course of learning. SWts impose initial and slowly adapting constraints on neuron connectivity to encourage differentiation of neuron representations and overall good behavior in terms of not hogging the representational space. The TrgAvg activity constraint is not enforced through SWt -- it needs to be more dynamic and supported by the regular learned weights.
func (*SWtParams) LWtFmWts ¶ added in v1.2.47
LWtFmWts returns linear, learning LWt from wt and swt. LWt is set to reproduce given Wt relative to given SWt base value.
func (*SWtParams) LinFmSigWt ¶ added in v1.2.45
LinFmSigWt returns the linear weight from the sigmoidal contrast-enhanced weight. The input wt is centered at 1 and normed in a +/- 1 range around that; the return value is in the 0-1 range, centered at .5.
func (*SWtParams) SigFmLinWt ¶ added in v1.2.45
SigFmLinWt returns the sigmoidal contrast-enhanced weight from the linear weight; the result is centered at 1 and normed in a +/- 1 range around that, in preparation for multiplying times SWt.
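To make the division of labor between SWt and LWt concrete, here is a sketch of the decomposition implied by the descriptions above: the effective weight is SWt times a contrast-enhanced version of LWt that is centered at 1. The logistic and gain used below are stand-ins -- the library's actual SigFmLinWt function is not reproduced here.

package main

import (
	"fmt"
	"math"
)

// sigFmLin is a stand-in contrast-enhancing sigmoid: it maps a linear weight
// in 0-1 (centered at .5) to a value in 0-2 (centered at 1), as described for
// SigFmLinWt above. The specific logistic and gain are assumptions.
func sigFmLin(lwt, gain float32) float32 {
	return 2 / float32(1+math.Exp(float64(-gain*(lwt-0.5))))
}

// effectiveWt sketches the documented decomposition: slowly adapting
// structural weight SWt times the contrast-enhanced fast learned weight LWt.
func effectiveWt(swt, lwt, gain float32) float32 {
	return swt * sigFmLin(lwt, gain)
}

func main() {
	// at LWt = 0.5 the contrast-enhanced factor is exactly 1, so Wt == SWt
	fmt.Println(effectiveWt(0.4, 0.5, 6)) // 0.4
}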
type SelfInhibParams ¶
type SelfInhibParams struct {
	On  bool    `desc:"enable neuron self-inhibition"`
	Gi  float32 `` /* 247-byte string literal not displayed */
	Tau float32 `` /* 379-byte string literal not displayed */
	Dt  float32 `inactive:"+" view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"`
}
SelfInhibParams defines parameters for Neuron self-inhibition -- activation of the neuron directly feeds back to produce a proportional additional contribution to Gi.
func (*SelfInhibParams) Defaults ¶
func (si *SelfInhibParams) Defaults()
func (*SelfInhibParams) Inhib ¶
func (si *SelfInhibParams) Inhib(self *float32, act float32)
Inhib updates the self inhibition value based on current unit activation
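A sketch of what such an update plausibly looks like -- exponential integration of the self-inhibition value toward Gi times the current activation, at rate Dt = 1/Tau. The integration form is an assumption based on the field descriptions, not the library source:

// selfInhib sketches self-inhibition dynamics: the neuron's own activation
// feeds back as an additional inhibitory term, integrated toward gi*act with
// rate dt = 1/tau. Assumed form, for illustration only.
func selfInhib(self *float32, act, gi, dt float32) {
	*self += dt * (gi*act - *self)
}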
func (*SelfInhibParams) Update ¶
func (si *SelfInhibParams) Update()
type SpikeParams ¶
type SpikeParams struct {
	Thr      float32 `` /* 152-byte string literal not displayed */
	VmR      float32 `` /* 201-byte string literal not displayed */
	Tr       int     `` /* 127-byte string literal not displayed */
	Exp      bool    `` /* 274-byte string literal not displayed */
	ExpSlope float32 `` /* 325-byte string literal not displayed */
	ExpThr   float32 `viewif:"Exp" def:"1" desc:"membrane potential threshold for actually triggering a spike when using the exponential mechanism"`
	MaxHz    float32 `` /* 265-byte string literal not displayed */
	ISITau   float32 `def:"5" min:"1" desc:"constant for integrating the spiking interval in estimating spiking rate"`
	ISIDt    float32 `view:"-" desc:"rate = 1 / tau"`
}
SpikeParams contains spiking activation function params. Implements a basic thresholded Vm model, and optionally the AdEx adaptive exponential function (adapt is KNaAdapt)
func (*SpikeParams) ActFmISI ¶
func (sk *SpikeParams) ActFmISI(isi, timeInc, integ float32) float32
ActFmISI computes rate-code activation from estimated spiking interval
func (*SpikeParams) ActToISI ¶
func (sk *SpikeParams) ActToISI(act, timeInc, integ float32) float32
ActToISI computes the spiking interval from a given rate-coded activation, based on the time increment (.001 = 1msec default) and Act.Dt.Integ.
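The rate-code / interspike-interval relationship these two functions describe can be sketched as follows, assuming activation 1 corresponds to MaxHz spikes per second and intervals are counted in integration steps of length timeInc*integ (the exact normalization is an assumption):

// actToISI sketches the documented conversion: higher rate-coded activation
// means a shorter interspike interval, with act = 1 corresponding to maxHz.
func actToISI(act, timeInc, integ, maxHz float32) float32 {
	if act == 0 {
		return 0 // no activity -- no defined interval
	}
	return 1 / (timeInc * integ * act * maxHz)
}

// actFmISI sketches the inverse: the interval at maximal firing maps to
// act = 1, and longer intervals map to proportionally lower activations.
func actFmISI(isi, timeInc, integ, maxHz float32) float32 {
	if isi <= 0 {
		return 0
	}
	maxInt := 1 / (timeInc * integ * maxHz) // interval at maximal firing
	return maxInt / isi
}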
func (*SpikeParams) AvgFmISI ¶
func (sk *SpikeParams) AvgFmISI(avg *float32, isi float32)
AvgFmISI updates spiking ISI from current isi interval value
func (*SpikeParams) Defaults ¶
func (sk *SpikeParams) Defaults()
func (*SpikeParams) Update ¶
func (sk *SpikeParams) Update()
type SynComParams ¶
type SynComParams struct {
	Delay    int     `desc:"synaptic delay for inputs arriving at this projection -- IMPORTANT: if you change this, you must rebuild network!"`
	PFail    float32 `` /* 149-byte string literal not displayed */
	PFailSWt bool    `` /* 141-byte string literal not displayed */
}
SynComParams are synaptic communication parameters: delay and probability of failure.
func (*SynComParams) Defaults ¶
func (sc *SynComParams) Defaults()
func (*SynComParams) Fail ¶
func (sc *SynComParams) Fail(wt *float32, swt float32)
Fail updates failure status of given weight, given SWt value
func (*SynComParams) Update ¶
func (sc *SynComParams) Update()
func (*SynComParams) WtFail ¶
func (sc *SynComParams) WtFail(swt float32) bool
WtFail returns true if the synapse should fail, optionally as a function of the SWt value.
func (*SynComParams) WtFailP ¶
func (sc *SynComParams) WtFailP(swt float32) float32
WtFailP returns probability of weight (synapse) failure given current SWt value
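A sketch of how these pieces plausibly fit together -- a base failure probability PFail, optionally reduced for stronger (higher SWt) synapses when PFailSWt is on. The 1-SWt scaling here is an assumption for illustration, not the library's formula:

// wtFailP sketches the probability of synaptic transmission failure: base
// probability pFail, optionally scaled by (1 - swt) so that structurally
// stronger synapses fail less often (assumed scaling).
func wtFailP(pFail float32, pFailSWt bool, swt float32) float32 {
	if !pFailSWt {
		return pFail
	}
	return pFail * (1 - swt)
}

A synapse would then fail on a given update when a uniform random draw falls below this probability, with its effective weight treated as zero for that communication step (see Fail above).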
type Synapse ¶
type Synapse struct {
	Wt   float32 `` /* 206-byte string literal not displayed */
	SWt  float32 `` /* 528-byte string literal not displayed */
	LWt  float32 `` /* 174-byte string literal not displayed */
	DWt  float32 `desc:"change in synaptic weight, from learning"`
	DSWt float32 `desc:"change in SWt slow synaptic weight -- accumulates DWt"`
}
axon.Synapse holds state for the synaptic connection between neurons
func (*Synapse) SetVarByIndex ¶
func (*Synapse) SetVarByName ¶
SetVarByName sets synapse variable to given value
func (*Synapse) VarByIndex ¶
VarByIndex returns variable using index (0 = first variable in SynapseVars list)
type Time ¶
type Time struct {
	Time       float32 `desc:"accumulated amount of time the network has been running, in simulation-time (not real world time), in seconds"`
	Cycle      int     `` /* 156-byte string literal not displayed */
	PhaseCycle int     `desc:"cycle within current phase -- minus or plus"`
	CycleTot   int     `` /* 151-byte string literal not displayed */
	PlusPhase  bool    `` /* 126-byte string literal not displayed */
	TimePerCyc float32 `def:"0.001" desc:"amount of time to increment per cycle"`
}
axon.Time contains all the timing state and parameter information for running a model.
func (*Time) NewPhase ¶ added in v1.2.63
func (tm *Time) NewPhase()
NewPhase updates from minus phase to plus phase and resets PhaseCycle
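For orientation, here is a sketch of advancing a Time struct by hand through one theta cycle of 200 cycles (e.g., 150 minus-phase plus 50 plus-phase cycles). The import path and the 150/50 split are assumptions, and real simulations normally let the network / looping code drive this:

package main

import (
	"fmt"

	"github.com/emer/axon/axon" // assumed import path
)

// runThetaCycle advances tm through one 200-cycle theta cycle, switching to
// the plus phase after an assumed 150 minus-phase cycles. The network's
// per-cycle computation would be called where noted.
func runThetaCycle(tm *axon.Time) {
	for cyc := 0; cyc < 200; cyc++ {
		if cyc == 150 {
			tm.NewPhase() // minus -> plus phase, resets PhaseCycle
		}
		// net.Cycle(tm) would go here in a real simulation
		tm.Cycle++
		tm.PhaseCycle++
		tm.CycleTot++
		tm.Time += tm.TimePerCyc
	}
}

func main() {
	tm := &axon.Time{TimePerCyc: 0.001}
	runThetaCycle(tm)
	fmt.Println(tm.Time, tm.PlusPhase) // 0.2 seconds elapsed, in plus phase
}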
type TimeScales ¶
type TimeScales int32
TimeScales are the different time scales associated with overall simulation running, and can be used to parameterize the updating and control flow of simulations at different scales. The definitions become increasingly subjective and imprecise as the time scales increase. This is not used directly in the algorithm code -- all control is the responsibility of the end simulation. This list is designed to standardize terminology across simulations and establish a common conceptual framework for time -- it can easily be extended in specific simulations to add needed additional levels, although using one of the existing standard values is recommended wherever possible.
const (
	// Cycle is the finest time scale -- typically 1 msec -- a single activation update.
	Cycle TimeScales = iota

	// FastSpike is typically 10 cycles = 10 msec (100hz) = the fastest spiking time
	// generally observed in the brain. This can be useful for visualizing updates
	// at a granularity in between Cycle and GammaCycle.
	FastSpike

	// GammaCycle is typically 25 cycles = 25 msec (40hz)
	GammaCycle

	// Phase is either Minus or Plus phase, where plus phase is bursting / outcome
	// that drives positive learning relative to prediction in minus phase.
	Phase

	// BetaCycle is typically 50 cycles = 50 msec (20 hz) = one beta-frequency cycle.
	// Gating in the basal ganglia and associated updating in prefrontal cortex
	// occurs at this frequency.
	BetaCycle

	// AlphaCycle is typically 100 cycles = 100 msec (10 hz) = one alpha-frequency cycle.
	AlphaCycle

	// ThetaCycle is typically 200 cycles = 200 msec (5 hz) = two alpha-frequency cycles.
	// This is the modal duration of a saccade, the update frequency of medial temporal lobe
	// episodic memory, and the minimal predictive learning cycle (perceive an Alpha 1, predict on 2).
	ThetaCycle

	// Event is the smallest unit of naturalistic experience that coheres unto itself
	// (e.g., something that could be described in a sentence).
	// Typically this is on the time scale of a few seconds: e.g., reaching for
	// something, catching a ball.
	Event

	// Trial is one unit of behavior in an experiment -- it is typically environmentally
	// defined instead of endogenously defined in terms of basic brain rhythms.
	// In the minimal case it could be one ThetaCycle, but could be multiple, and
	// could encompass multiple Events (e.g., one event is fixation, next is stimulus,
	// last is response)
	Trial

	// Tick is one step in a sequence -- often it is useful to have Trial count
	// up throughout the entire Epoch but also include a Tick to count trials
	// within a Sequence
	Tick

	// Sequence is a sequential group of Trials (not always needed).
	Sequence

	// Condition is a collection of Blocks that share the same set of parameters.
	// This is intermediate between Block and Run levels.
	Condition

	// Block is a collection of Trials, Sequences or Events, often used in experiments
	// when conditions are varied across blocks.
	Block

	// Epoch is used in two different contexts. In machine learning, it represents a
	// collection of Trials, Sequences or Events that constitute a "representative sample"
	// of the environment. In the simplest case, it is the entire collection of Trials
	// used for training. In electrophysiology, it is a timing window used for organizing
	// the analysis of electrode data.
	Epoch

	// Run is a complete run of a model / subject, from training to testing, etc.
	// Often multiple runs are done in an Expt to obtain statistics over initial
	// random weights etc.
	Run

	// Expt is an entire experiment -- multiple Runs through a given protocol / set of
	// parameters.
	Expt

	// Scene is a sequence of events that constitutes the next larger-scale coherent unit
	// of naturalistic experience corresponding e.g., to a scene in a movie.
	// Typically consists of events that all take place in one location over
	// e.g., a minute or so. This could be a paragraph or a page or so in a book.
	Scene

	// Episode is a sequence of scenes that constitutes the next larger-scale unit
	// of naturalistic experience e.g., going to the grocery store or eating at a
	// restaurant, attending a wedding or other "event".
	// This could be a chapter in a book.
	Episode

	TimeScalesN
)
The time scales
func (*TimeScales) FromString ¶
func (i *TimeScales) FromString(s string) error
func (TimeScales) MarshalJSON ¶
func (ev TimeScales) MarshalJSON() ([]byte, error)
func (TimeScales) String ¶
func (i TimeScales) String() string
func (*TimeScales) UnmarshalJSON ¶
func (ev *TimeScales) UnmarshalJSON(b []byte) error
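A small usage example of the string conversion methods listed above (import path assumed):

package main

import (
	"fmt"

	"github.com/emer/axon/axon" // assumed import path
)

func main() {
	ts := axon.ThetaCycle
	fmt.Println(ts.String()) // "ThetaCycle"

	var parsed axon.TimeScales
	if err := parsed.FromString("AlphaCycle"); err != nil {
		fmt.Println("unknown time scale:", err)
		return
	}
	fmt.Println(parsed == axon.AlphaCycle) // true
}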
type TopoInhibParams ¶ added in v1.2.85
type TopoInhibParams struct {
	On      bool    `desc:"use topographic inhibition"`
	Width   int     `viewif:"On" desc:"half-width of topographic inhibition within layer"`
	Sigma   float32 `viewif:"On" desc:"normalized gaussian sigma as proportion of Width, for gaussian weighting"`
	Wrap    bool    `viewif:"On" desc:"half-width of topographic inhibition within layer"`
	Gi      float32 `viewif:"On" desc:"overall inhibition multiplier for topographic inhibition (generally <= 1)"`
	FF      float32 `` /* 133-byte string literal not displayed */
	FB      float32 `` /* 139-byte string literal not displayed */
	FF0     float32 `` /* 186-byte string literal not displayed */
	WidthWt float32 `inactive:"+" desc:"weight value at width -- to assess the value of Sigma"`
}
TopoInhibParams provides topographic gaussian inhibition, integrating over a surrounding neighborhood of neurons.
func (*TopoInhibParams) Defaults ¶ added in v1.2.85
func (ti *TopoInhibParams) Defaults()
func (*TopoInhibParams) GiFmGeAct ¶ added in v1.2.85
func (ti *TopoInhibParams) GiFmGeAct(ge, act, ff0 float32) float32
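As a rough guide to what this function computes, the parameter names suggest a feedforward term on excitatory conductance above the FF0 floor plus a feedback term on activation, scaled by Gi. The following sketch reflects that reading and is not the library source:

// giFmGeAct sketches a feedforward + feedback inhibition combination:
// ge contributes only above the ff0 floor, act contributes via fb, and gi
// scales the total. Assumed form, for illustration.
func giFmGeAct(ge, act, ff0, ff, fb, gi float32) float32 {
	ge -= ff0
	if ge < 0 {
		ge = 0
	}
	return gi * (ff*ge + fb*act)
}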
func (*TopoInhibParams) Update ¶ added in v1.2.85
func (ti *TopoInhibParams) Update()
type TrgAvgActParams ¶ added in v1.2.45
type TrgAvgActParams struct {
	On           bool       `desc:"whether to use target average activity mechanism to scale synaptic weights"`
	ErrLrate     float32    `` /* 263-byte string literal not displayed */
	SynScaleRate float32    `` /* 231-byte string literal not displayed */
	TrgRange     minmax.F32 `` /* 185-byte string literal not displayed */
	Permute      bool       `` /* 236-byte string literal not displayed */
	Pool         bool       `` /* 206-byte string literal not displayed */
}
TrgAvgActParams govern the target and actual long-term average activity in neurons. The target value is adapted by unit-wise error, and the difference between actual and target average activity drives synaptic scaling.
func (*TrgAvgActParams) Defaults ¶ added in v1.2.45
func (ta *TrgAvgActParams) Defaults()
func (*TrgAvgActParams) Update ¶ added in v1.2.45
func (ta *TrgAvgActParams) Update()
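A heavily simplified sketch of the two steps described above -- nudging each neuron's target average by its error signal (ErrLrate), and multiplicatively scaling its receiving weights toward that target (SynScaleRate). Both update forms are assumptions for illustration; the library's actual mechanism operates on its internal neuron and synapse state:

// adaptTrgAvg sketches the error-driven adaptation of a neuron's target
// average activity (assumed simple additive update).
func adaptTrgAvg(trgAvg *float32, err, errLrate float32) {
	*trgAvg += errLrate * err
}

// synScale sketches synaptic scaling: receiving weights are scaled in
// proportion to the difference between target and actual average activity,
// multiplicatively so that relative weight differences are preserved.
func synScale(wts []float32, actAvg, trgAvg, synScaleRate float32) {
	del := synScaleRate * (trgAvg - actAvg)
	for i := range wts {
		wts[i] += del * wts[i]
	}
}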
type XCalParams ¶
type XCalParams struct {
	SubMean   float32 `def:"1" desc:"amount of the mean dWt to subtract -- 1.0 = full zero-sum dWt -- only on non-zero DWts (see DWtThr)"`
	DWtThr    float32 `def:"0.0001" desc:"threshold on DWt to be included in SubMean process -- this is *prior* to lrate multiplier"`
	DRev      float32 `` /* 188-byte string literal not displayed */
	DThr      float32 `` /* 139-byte string literal not displayed */
	LrnThr    float32 `` /* 338-byte string literal not displayed */
	DRevRatio float32 `` /* 131-byte string literal not displayed */
}
XCalParams are parameters for the temporally eXtended Contrastive Attractor Learning (XCAL) function, which is the standard learning equation for axon.
func (*XCalParams) DWt ¶
func (xc *XCalParams) DWt(srval, thrP float32) float32
DWt is the XCAL function for weight change -- the "check mark" function -- no DGain, no ThrPMin
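The "check mark" shape can be sketched as follows: above DRev times the threshold, the weight change grows linearly with how far activity exceeds the threshold; below that reversal point it goes negative in proportion to activity; below DThr it is zero. DRevRatio is presumably the precomputed -(1-DRev)/DRev factor that makes the two branches meet. This is a sketch of the documented shape, not the library source:

// xcalDWt sketches the XCAL "check mark" weight-change function. srval is the
// synaptic activity variable and thrP the floating threshold; dThr, dRev, and
// dRevRatio correspond to the XCalParams fields above (assumed roles).
func xcalDWt(srval, thrP, dThr, dRev, dRevRatio float32) float32 {
	switch {
	case srval < dThr: // below absolute threshold -- no learning
		return 0
	case srval > thrP*dRev: // above reversal point -- positive, linear in (srval - thrP)
		return srval - thrP
	default: // below reversal -- negative, proportional to srval
		return srval * dRevRatio
	}
}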
func (*XCalParams) Defaults ¶
func (xc *XCalParams) Defaults()
func (*XCalParams) Update ¶
func (xc *XCalParams) Update()