Documentation ¶
Overview ¶
Package leabra provides the basic reference leabra implementation, for rate-coded activations and standard error-driven learning. Other packages provide spiking or deep leabra, PVLV, PBWM, etc.
The overall design seeks an "optimal" tradeoff between simplicity, transparency, the ability to flexibly recombine and extend elements, and avoiding the need to rewrite large amounts of code for minor variations.
The *Stru elements handle the core structural components of the network, and hold emer.* interface pointers to elements such as emer.Layer, which provides a very minimal interface for these elements. Interface values behave like pointers, so think of these as generic pointers to your specific Layers, etc.
This design means the same *Stru infrastructure can be re-used across different variants of the algorithm. Because this infrastructure is kept minimal and algorithm-free, it should be much less confusing than dealing with the multiple levels of inheritance in C++ emergent. The actual algorithm-specific code is now fully self-contained, and largely orthogonal to the infrastructure.
One specific cost of this is the need to cast the emer.* interface pointers into the specific types of interest, when accessing via the *Stru infrastructure.
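For example, a minimal sketch of such a cast (the layer name "Hidden" and the helper function are illustrative, not part of the package):

    // layerByName retrieves a layer by name and casts it from the generic
    // emer.Layer interface back to the concrete *leabra.Layer type.
    func layerByName(net *leabra.Network, name string) *leabra.Layer {
        return net.LayerByName(name).(leabra.LeabraLayer).AsLeabra()
    }

    // projections can be cast the same way, via the LeabraPrjn interface:
    // pj := ly.RecvPrjn(0).(leabra.LeabraPrjn).AsLeabra()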
The *Params elements contain all the (meta)parameters and associated methods for computing various functions. They are the equivalent of Specs from original emergent, but unlike Specs they are local to each place they are used, and styling is used to apply common parameters across multiple layers, etc. Params is a more explicit, recognizable name than Specs, and it also helps avoid confusion with the rather different nature of the old Specs. Pars is shorter but easily confused with "Parents," so Params is unambiguous.
Params are organized into four major categories, labeled by function rather than just structure, to keep things clearer and better organized overall:
- ActParams -- activation params, at the Neuron level (in act.go)
- InhibParams -- inhibition params, at the Layer / Pool level (in inhib.go)
- LearnNeurParams -- learning parameters at the Neuron level (running averages that drive learning)
- LearnSynParams -- learning parameters at the Synapse level (both in learn.go)
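As a rough sketch of how styling-based parameter application can look (the parameter paths and values here are illustrative assumptions, following the params.Sheet conventions expected by ApplyParams; net is assumed to be a built *leabra.Network):

    // a styling sheet: Sel selects layers / prjns by type name or class,
    // and the paths address into the *Params categories listed above
    sheet := &params.Sheet{
        {Sel: "Layer", Desc: "layer-level inhibition",
            Params: params.Params{
                "Layer.Inhib.Layer.Gi": "1.8", // illustrative value
            }},
        {Sel: "Prjn", Desc: "learning rate on all projections",
            Params: params.Params{
                "Prjn.Learn.Lrate": "0.04", // illustrative value
            }},
    }
    applied, err := net.ApplyParams(sheet, true) // setMsg=true reports each application
    _, _ = applied, err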
The levels of structure and state are:
- Network
  - .Layers
    - .Pools: pooled inhibition state -- 1 for the layer plus 1 for each sub-pool (unit group) with inhibition
    - .RecvPrjns: receiving projections from other sending layers
    - .SendPrjns: sending projections to other receiving layers
    - .Neurons: neuron state variables
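A brief sketch of iterating over this state, assuming ly is a *leabra.Layer (e.g., obtained via the cast shown above):

    lpl := &ly.Pools[0] // Pools[0] aggregates over the entire layer
    _ = lpl
    for ni := range ly.Neurons {
        nrn := &ly.Neurons[ni] // take a pointer to modify neuron state
        _ = nrn.Act            // rate-code activation of this neuron
    }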
There are methods on the Network that perform initialization and overall computation, by iterating over layers and calling the corresponding methods there. This is typically how users will run their models.
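A sketch of what one such run over a standard alpha cycle typically looks like (4 quarters of 25 cycles is the default; net is assumed to be a built *leabra.Network, and the leabra.Time helpers shown follow common usage in the examples):

    ltime := leabra.NewTime()
    net.AlphaCycInit(true) // true = update running-average activations
    ltime.AlphaCycStart()
    for qtr := 0; qtr < 4; qtr++ {
        for cyc := 0; cyc < 25; cyc++ { // 25 cycles per quarter by default
            net.Cycle(ltime)
            ltime.CycleInc()
        }
        net.QuarterFinal(ltime) // records ActM / ActP at the phase boundaries
        ltime.QuarterInc()
    }
    net.DWt()     // weight changes from the minus-plus phase contrast
    net.WtFmDWt() // apply the weight changes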
Parallel computation across multiple CPU cores (threading) is achieved through persistent worker goroutines that listen for functions to run on thread-specific channels. Each layer has a designated thread number, so you can experiment with different ways of dividing up the computation. Timing data is kept for per-thread time use -- see TimerReport() on the network.
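A brief sketch of assigning a layer to a thread by hand (the layer name is illustrative; ThreadAlloc provides automatic allocation based on cost estimates):

    hid := net.LayerByName("HiddenA").(leabra.LeabraLayer).AsLeabra()
    hid.SetThread(1)                // set before Build (or re-Build afterward)
    fmt.Println(net.ThreadReport()) // how layers are distributed across threads
    net.TimerReport()               // per-function, per-thread timing after running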
The Layer methods directly iterate over Neurons, Pools, and Prjns, and there is no finer-grained level of computation (e.g., at the individual Neuron level), except for the *Params methods that directly compute relevant functions. Thus, looking directly at the layer.go code should provide a clear sense of exactly how everything is computed -- you may need to refer to act.go, learn.go, etc. to see the relevant details, but at least the overall organization should be clear in layer.go.
Computational methods are generally named VarFmVar, to specify which variable is being computed from which other input variables -- e.g., ActFmG computes activation from the conductances G.
The Pools (type Pool, in pool.go) hold state used for computing pooled inhibition, and also hold overall aggregate pooled state variables -- the first element in Pools applies to the layer as a whole, and subsequent ones are for each sub-pool (4D layers). These Pools play the same role as the LeabraUnGpState structures in C++ emergent.
Prjns directly support all synapse-level computation: they hold the LearnSynParams and iterate directly over all of their synapses. The exact same Prjn object lives in the RecvPrjns of the receiving layer and the SendPrjns of the sending layer, and it maintains and coordinates both sides of the state. This clarifies and simplifies a lot of code. There is no separate equivalent of LeabraConSpec / LeabraConState at the level of connection groups per unit per projection.
The pattern of connectivity between units is specified by the prjn.Pattern interface, and all the standard options are available in the prjn package. The Pattern code generates a full tensor bitmap of binary 1's and 0's for connected (1) and non-connected (0) units, and can use any method to do so. This full lookup-table approach is not the most memory-efficient, but it is fully general and shouldn't be too costly in memory overall (fully bit-packed arrays are used, and these bitmaps don't need to be retained once connections have been established). This approach lets the Pattern code focus purely on specifying connectivity, without any concern for how it is used to allocate the actual connections.
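Putting these pieces together, a minimal construction sketch (layer names and sizes are illustrative; import paths are assumed to be github.com/emer/leabra/leabra, github.com/emer/emergent/emer, and github.com/emer/emergent/prjn under the v1 module layout, plus the standard log package):

    // buildDemoNet constructs a small three-layer network with full connectivity.
    func buildDemoNet() *leabra.Network {
        net := &leabra.Network{}
        net.InitName(net, "Demo")
        inp := net.AddLayer2D("Input", 5, 5, emer.Input)
        hid := net.AddLayer4D("Hidden", 2, 2, 4, 4, emer.Hidden)
        out := net.AddLayer2D("Output", 5, 5, emer.Target)
        full := prjn.NewFull() // prjn.Pattern: every send unit connects to every recv unit
        net.ConnectLayers(inp, hid, full, emer.Forward)
        net.BidirConnectLayers(hid, out, full)
        if err := net.Build(); err != nil {
            log.Fatal(err)
        }
        net.Defaults()
        net.InitWts()
        return net
    }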
Index ¶
- Constants
- Variables
- func JsonToParams(b []byte) string
- func NeuronVarIdxByName(varNm string) (int, error)
- func SigFun(w, gain, off float32) float32
- func SigFun61(w float32) float32
- func SigInvFun(w, gain, off float32) float32
- func SigInvFun61(w float32) float32
- func SynapseVarByName(varNm string) (int, error)
- type ActAvg
- type ActAvgParams
- type ActInitParams
- type ActNoiseParams
- type ActNoiseType
- type ActParams
- func (ac *ActParams) ActFmG(nrn *Neuron)
- func (ac *ActParams) DecayState(nrn *Neuron, decay float32)
- func (ac *ActParams) Defaults()
- func (ac *ActParams) GeFmRaw(nrn *Neuron, geRaw float32)
- func (ac *ActParams) GeThrFmG(nrn *Neuron) float32
- func (ac *ActParams) GeThrFmGnoK(nrn *Neuron) float32
- func (ac *ActParams) GiFmRaw(nrn *Neuron, giRaw float32)
- func (ac *ActParams) HardClamp(nrn *Neuron)
- func (ac *ActParams) HasHardClamp(nrn *Neuron) bool
- func (ac *ActParams) InetFmG(vm, ge, gi, gk float32) float32
- func (ac *ActParams) InitActQs(nrn *Neuron)
- func (ac *ActParams) InitActs(nrn *Neuron)
- func (ac *ActParams) InitGInc(nrn *Neuron)
- func (ac *ActParams) Update()
- func (ac *ActParams) VmFmG(nrn *Neuron)
- type AvgLParams
- type ClampParams
- type CosDiffParams
- type CosDiffStats
- type DWtNormParams
- type DtParams
- type InhibParams
- type LayFunChan
- type Layer
- func (ly *Layer) ActAvgFmAct()
- func (ly *Layer) ActFmG(ltime *Time)
- func (ly *Layer) ActQ0FmActP()
- func (ly *Layer) AllParams() string
- func (ly *Layer) AlphaCycInit(updtActAvg bool)
- func (ly *Layer) ApplyExt(ext etensor.Tensor)
- func (ly *Layer) ApplyExt1D(ext []float64)
- func (ly *Layer) ApplyExt1D32(ext []float32)
- func (ly *Layer) ApplyExt1DTsr(ext etensor.Tensor)
- func (ly *Layer) ApplyExt2D(ext etensor.Tensor)
- func (ly *Layer) ApplyExt2Dto4D(ext etensor.Tensor)
- func (ly *Layer) ApplyExt4D(ext etensor.Tensor)
- func (ly *Layer) ApplyExtFlags() (clrmsk, setmsk int32, toTarg bool)
- func (ly *Layer) AsLeabra() *Layer
- func (ly *Layer) AvgLFmAvgM()
- func (ly *Layer) AvgMaxAct(ltime *Time)
- func (ly *Layer) AvgMaxGe(ltime *Time)
- func (ly *Layer) Build() error
- func (ly *Layer) BuildPools(nu int) error
- func (ly *Layer) BuildPrjns() error
- func (ly *Layer) BuildSubPools()
- func (ly *Layer) CosDiffFmActs()
- func (ly *Layer) CostEst() (neur, syn, tot int)
- func (ly *Layer) CyclePost(ltime *Time)
- func (ly *Layer) DWt()
- func (ly *Layer) DecayState(decay float32)
- func (ly *Layer) DecayStatePool(pool int, decay float32)
- func (ly *Layer) Defaults()
- func (ly *Layer) GFmInc(ltime *Time)
- func (ly *Layer) GFmIncNeur(ltime *Time)
- func (ly *Layer) GScaleFmAvgAct()
- func (ly *Layer) GenNoise()
- func (ly *Layer) HardClamp()
- func (ly *Layer) InhibFmGeAct(ltime *Time)
- func (ly *Layer) InhibFmPool(ltime *Time)
- func (ly *Layer) InitActAvg()
- func (ly *Layer) InitActs()
- func (ly *Layer) InitExt()
- func (ly *Layer) InitGInc()
- func (ly *Layer) InitWtSym()
- func (ly *Layer) InitWts()
- func (ly *Layer) IsTarget() bool
- func (ly *Layer) LesionNeurons(prop float32) int
- func (ly *Layer) LrateMult(mult float32)
- func (ly *Layer) MSE(tol float32) (sse, mse float64)
- func (ly *Layer) Pool(idx int) *Pool
- func (ly *Layer) PoolInhibFmGeAct(ltime *Time)
- func (ly *Layer) PoolTry(idx int) (*Pool, error)
- func (ly *Layer) QuarterFinal(ltime *Time)
- func (ly *Layer) ReadWtsJSON(r io.Reader) error
- func (ly *Layer) RecvGInc(ltime *Time)
- func (ly *Layer) RecvNameTry(receiver string) (emer.Prjn, error)
- func (ly *Layer) RecvNameTypeTry(receiver, typ string) (emer.Prjn, error)
- func (ly *Layer) RecvPrjnVals(vals *[]float32, varNm string, sendLay emer.Layer, sendIdx1D int, ...) error
- func (ly *Layer) SSE(tol float32) float64
- func (ly *Layer) SendGDelta(ltime *Time)
- func (ly *Layer) SendNameTry(sender string) (emer.Prjn, error)
- func (ly *Layer) SendNameTypeTry(sender, typ string) (emer.Prjn, error)
- func (ly *Layer) SendPrjnVals(vals *[]float32, varNm string, recvLay emer.Layer, recvIdx1D int, ...) error
- func (ly *Layer) SetWts(lw *weights.Layer) error
- func (ly *Layer) UnLesionNeurons()
- func (ly *Layer) UnitVal(varNm string, idx []int) float32
- func (ly *Layer) UnitVal1D(varIdx int, idx int) float32
- func (ly *Layer) UnitVals(vals *[]float32, varNm string) error
- func (ly *Layer) UnitValsRepTensor(tsr etensor.Tensor, varNm string) error
- func (ly *Layer) UnitValsTensor(tsr etensor.Tensor, varNm string) error
- func (ly *Layer) UnitVarIdx(varNm string) (int, error)
- func (ly *Layer) UnitVarNames() []string
- func (ly *Layer) UnitVarNum() int
- func (ly *Layer) UnitVarProps() map[string]string
- func (ly *Layer) UpdateExtFlags()
- func (ly *Layer) UpdateParams()
- func (ly *Layer) VarRange(varNm string) (min, max float32, err error)
- func (ly *Layer) WriteWtsJSON(w io.Writer, depth int)
- func (ly *Layer) WtBalFmWt()
- func (ly *Layer) WtFmDWt()
- type LayerStru
- func (ls *LayerStru) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (ls *LayerStru) Class() string
- func (ls *LayerStru) Config(shape []int, typ emer.LayerType)
- func (ls *LayerStru) Idx4DFrom2D(x, y int) ([]int, bool)
- func (ls *LayerStru) Index() int
- func (ls *LayerStru) InitName(lay emer.Layer, name string, net emer.Network)
- func (ls *LayerStru) Is2D() bool
- func (ls *LayerStru) Is4D() bool
- func (ls *LayerStru) IsOff() bool
- func (ls *LayerStru) Label() string
- func (ls *LayerStru) NPools() int
- func (ls *LayerStru) NRecvPrjns() int
- func (ls *LayerStru) NSendPrjns() int
- func (ls *LayerStru) Name() string
- func (ls *LayerStru) NonDefaultParams() string
- func (ls *LayerStru) Pos() mat32.Vec3
- func (ls *LayerStru) RecipToSendPrjn(spj emer.Prjn) (emer.Prjn, bool)
- func (ly *LayerStru) RecvName(receiver string) emer.Prjn
- func (ly *LayerStru) RecvNameTry(receiver string) (emer.Prjn, error)
- func (ly *LayerStru) RecvNameTypeTry(receiver, typ string) (emer.Prjn, error)
- func (ls *LayerStru) RecvPrjn(idx int) emer.Prjn
- func (ls *LayerStru) RecvPrjns() *LeabraPrjns
- func (ls *LayerStru) RelPos() relpos.Rel
- func (ls *LayerStru) RepIdxs() []int
- func (ls *LayerStru) RepShape() *etensor.Shape
- func (ly *LayerStru) SendName(sender string) emer.Prjn
- func (ly *LayerStru) SendNameTry(sender string) (emer.Prjn, error)
- func (ly *LayerStru) SendNameTypeTry(sender, typ string) (emer.Prjn, error)
- func (ls *LayerStru) SendPrjn(idx int) emer.Prjn
- func (ls *LayerStru) SendPrjns() *LeabraPrjns
- func (ls *LayerStru) SetClass(cls string)
- func (ls *LayerStru) SetIndex(idx int)
- func (ls *LayerStru) SetName(nm string)
- func (ls *LayerStru) SetOff(off bool)
- func (ls *LayerStru) SetPos(pos mat32.Vec3)
- func (ls *LayerStru) SetRelPos(rel relpos.Rel)
- func (ls *LayerStru) SetRepIdxsShape(idxs, shape []int)
- func (ls *LayerStru) SetShape(shape []int)
- func (ls *LayerStru) SetThread(thr int)
- func (ls *LayerStru) SetType(typ emer.LayerType)
- func (ls *LayerStru) Shape() *etensor.Shape
- func (ls *LayerStru) Size() mat32.Vec2
- func (ls *LayerStru) Thread() int
- func (ls *LayerStru) Type() emer.LayerType
- func (ls *LayerStru) TypeName() string
- type LeabraLayer
- type LeabraNetwork
- type LeabraPrjn
- type LeabraPrjns
- type LearnNeurParams
- type LearnSynParams
- func (ls *LearnSynParams) BCMdWt(suAvgSLrn, ruAvgSLrn, ruAvgL float32) float32
- func (ls *LearnSynParams) CHLdWt(suAvgSLrn, suAvgM, ruAvgSLrn, ruAvgM, ruAvgL float32) (err, bcm float32)
- func (ls *LearnSynParams) Defaults()
- func (ls *LearnSynParams) LWtFmWt(syn *Synapse)
- func (ls *LearnSynParams) Update()
- func (ls *LearnSynParams) WtFmDWt(wbInc, wbDec float32, dwt, wt, lwt *float32, scale float32)
- func (ls *LearnSynParams) WtFmLWt(syn *Synapse)
- type LrnActAvgParams
- type MomentumParams
- type Network
- func (nt *Network) ActFmG(ltime *Time)
- func (nt *Network) AlphaCycInit(updtActAvg bool)
- func (nt *Network) AlphaCycInitImpl(updtActAvg bool)
- func (nt *Network) AsLeabra() *Network
- func (nt *Network) AvgMaxAct(ltime *Time)
- func (nt *Network) AvgMaxGe(ltime *Time)
- func (nt *Network) CollectDWts(dwts *[]float32, nwts int) bool
- func (nt *Network) Cycle(ltime *Time)
- func (nt *Network) CycleImpl(ltime *Time)
- func (nt *Network) CyclePost(ltime *Time)
- func (nt *Network) CyclePostImpl(ltime *Time)
- func (nt *Network) DWt()
- func (nt *Network) DWtImpl()
- func (nt *Network) DecayState(decay float32)
- func (nt *Network) Defaults()
- func (nt *Network) GScaleFmAvgAct()
- func (nt *Network) InhibFmGeAct(ltime *Time)
- func (nt *Network) InitActs()
- func (nt *Network) InitExt()
- func (nt *Network) InitGInc()
- func (nt *Network) InitTopoScales()
- func (nt *Network) InitWts()
- func (nt *Network) LayersSetOff(off bool)
- func (nt *Network) LrateMult(mult float32)
- func (nt *Network) NewLayer() emer.Layer
- func (nt *Network) NewPrjn() emer.Prjn
- func (nt *Network) QuarterFinal(ltime *Time)
- func (nt *Network) QuarterFinalImpl(ltime *Time)
- func (nt *Network) SendGDelta(ltime *Time)
- func (nt *Network) SetDWts(dwts []float32)
- func (nt *Network) SizeReport() string
- func (nt *Network) SynVarNames() []string
- func (nt *Network) SynVarProps() map[string]string
- func (nt *Network) ThreadAlloc(nThread int) string
- func (nt *Network) ThreadReport() string
- func (nt *Network) UnLesionNeurons()
- func (nt *Network) UnitVarNames() []string
- func (nt *Network) UnitVarProps() map[string]string
- func (nt *Network) UpdateExtFlags()
- func (nt *Network) UpdateParams()
- func (nt *Network) WtBalFmWt()
- func (nt *Network) WtFmDWt()
- func (nt *Network) WtFmDWtImpl()
- type NetworkStru
- func (nt *NetworkStru) AddLayer(name string, shape []int, typ emer.LayerType) emer.Layer
- func (nt *NetworkStru) AddLayer2D(name string, shapeY, shapeX int, typ emer.LayerType) emer.Layer
- func (nt *NetworkStru) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ emer.LayerType) emer.Layer
- func (nt *NetworkStru) AddLayerInit(ly emer.Layer, name string, shape []int, typ emer.LayerType)
- func (nt *NetworkStru) AllParams() string
- func (nt *NetworkStru) AllWtScales() string
- func (nt *NetworkStru) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (nt *NetworkStru) BidirConnectLayerNames(low, high string, pat prjn.Pattern) (lowlay, highlay emer.Layer, fwdpj, backpj emer.Prjn, err error)
- func (nt *NetworkStru) BidirConnectLayers(low, high emer.Layer, pat prjn.Pattern) (fwdpj, backpj emer.Prjn)
- func (nt *NetworkStru) BidirConnectLayersPy(low, high emer.Layer, pat prjn.Pattern)
- func (nt *NetworkStru) Bounds() (min, max mat32.Vec3)
- func (nt *NetworkStru) BoundsUpdt()
- func (nt *NetworkStru) Build() error
- func (nt *NetworkStru) BuildThreads()
- func (nt *NetworkStru) ConnectLayerNames(send, recv string, pat prjn.Pattern, typ emer.PrjnType) (rlay, slay emer.Layer, pj emer.Prjn, err error)
- func (nt *NetworkStru) ConnectLayers(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType) emer.Prjn
- func (nt *NetworkStru) ConnectLayersPrjn(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType, pj emer.Prjn) emer.Prjn
- func (nt *NetworkStru) FunTimerStart(fun string)
- func (nt *NetworkStru) FunTimerStop(fun string)
- func (nt *NetworkStru) InitName(net emer.Network, name string)
- func (nt *NetworkStru) KeyLayerParams() string
- func (nt *NetworkStru) KeyPrjnParams() string
- func (nt *NetworkStru) Label() string
- func (nt *NetworkStru) LateralConnectLayer(lay emer.Layer, pat prjn.Pattern) emer.Prjn
- func (nt *NetworkStru) LateralConnectLayerPrjn(lay emer.Layer, pat prjn.Pattern, pj emer.Prjn) emer.Prjn
- func (nt *NetworkStru) Layer(idx int) emer.Layer
- func (nt *NetworkStru) LayerByName(name string) emer.Layer
- func (nt *NetworkStru) LayerByNameTry(name string) (emer.Layer, error)
- func (nt *NetworkStru) LayersByClass(classes ...string) []string
- func (nt *NetworkStru) Layout()
- func (nt *NetworkStru) MakeLayMap()
- func (nt *NetworkStru) NLayers() int
- func (nt *NetworkStru) Name() string
- func (nt *NetworkStru) NonDefaultParams() string
- func (nt *NetworkStru) OpenWtsCpp(filename gi.FileName) error
- func (nt *NetworkStru) OpenWtsJSON(filename gi.FileName) error
- func (nt *NetworkStru) ReadWtsCpp(r io.Reader) error
- func (nt *NetworkStru) ReadWtsJSON(r io.Reader) error
- func (nt *NetworkStru) SaveWtsJSON(filename gi.FileName) error
- func (nt *NetworkStru) SetWts(nw *weights.Network) error
- func (nt *NetworkStru) StartThreads()
- func (nt *NetworkStru) StdVertLayout()
- func (nt *NetworkStru) StopThreads()
- func (nt *NetworkStru) ThrLayFun(fun func(ly LeabraLayer), funame string)
- func (nt *NetworkStru) ThrTimerReset()
- func (nt *NetworkStru) ThrWorker(tt int)
- func (nt *NetworkStru) TimerReport()
- func (nt *NetworkStru) VarRange(varNm string) (min, max float32, err error)
- func (nt *NetworkStru) WriteWtsJSON(w io.Writer) error
- type NeurFlags
- type Neuron
- func (nrn *Neuron) ClearFlag(flag NeurFlags)
- func (nrn *Neuron) ClearMask(mask int32)
- func (nrn *Neuron) HasFlag(flag NeurFlags) bool
- func (nrn *Neuron) IsOff() bool
- func (nrn *Neuron) SetFlag(flag NeurFlags)
- func (nrn *Neuron) SetMask(mask int32)
- func (nrn *Neuron) VarByIndex(idx int) float32
- func (nrn *Neuron) VarByName(varNm string) (float32, error)
- func (nrn *Neuron) VarNames() []string
- type OptThreshParams
- type Pool
- type Prjn
- func (pj *Prjn) AllParams() string
- func (pj *Prjn) AsLeabra() *Prjn
- func (pj *Prjn) Build() error
- func (pj *Prjn) DWt()
- func (pj *Prjn) Defaults()
- func (pj *Prjn) InitGInc()
- func (pj *Prjn) InitWtSym(rpjp LeabraPrjn)
- func (pj *Prjn) InitWts()
- func (pj *Prjn) InitWtsSyn(syn *Synapse)
- func (pj *Prjn) LrateMult(mult float32)
- func (pj *Prjn) ReadWtsJSON(r io.Reader) error
- func (pj *Prjn) RecvGInc()
- func (pj *Prjn) SendGDelta(si int, delta float32)
- func (pj *Prjn) SetClass(cls string) emer.Prjn
- func (pj *Prjn) SetPattern(pat prjn.Pattern) emer.Prjn
- func (pj *Prjn) SetScalesFunc(scaleFun func(si, ri int, send, recv *etensor.Shape) float32)
- func (pj *Prjn) SetScalesRPool(scales etensor.Tensor)
- func (pj *Prjn) SetSynVal(varNm string, sidx, ridx int, val float32) error
- func (pj *Prjn) SetType(typ emer.PrjnType) emer.Prjn
- func (pj *Prjn) SetWts(pw *weights.Prjn) error
- func (pj *Prjn) SetWtsFunc(wtFun func(si, ri int, send, recv *etensor.Shape) float32)
- func (pj *Prjn) Syn1DNum() int
- func (pj *Prjn) SynIdx(sidx, ridx int) int
- func (pj *Prjn) SynVal(varNm string, sidx, ridx int) float32
- func (pj *Prjn) SynVal1D(varIdx int, synIdx int) float32
- func (pj *Prjn) SynVals(vals *[]float32, varNm string) error
- func (pj *Prjn) SynVarIdx(varNm string) (int, error)
- func (pj *Prjn) SynVarNames() []string
- func (pj *Prjn) SynVarNum() int
- func (pj *Prjn) SynVarProps() map[string]string
- func (pj *Prjn) UpdateParams()
- func (pj *Prjn) WriteWtsJSON(w io.Writer, depth int)
- func (pj *Prjn) WtBalFmWt()
- func (pj *Prjn) WtFmDWt()
- type PrjnStru
- func (ps *PrjnStru) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (ps *PrjnStru) BuildStru() error
- func (ps *PrjnStru) Class() string
- func (ps *PrjnStru) Connect(slay, rlay emer.Layer, pat prjn.Pattern, typ emer.PrjnType)
- func (ps *PrjnStru) Init(prjn emer.Prjn)
- func (ps *PrjnStru) IsOff() bool
- func (ps *PrjnStru) Label() string
- func (ps *PrjnStru) Name() string
- func (ps *PrjnStru) NonDefaultParams() string
- func (ps *PrjnStru) Pattern() prjn.Pattern
- func (ps *PrjnStru) PrjnTypeName() string
- func (ps *PrjnStru) RecvLay() emer.Layer
- func (ps *PrjnStru) SendLay() emer.Layer
- func (ps *PrjnStru) SetNIdxSt(n *[]int32, avgmax *minmax.AvgMax32, idxst *[]int32, tn *etensor.Int32) int32
- func (ps *PrjnStru) SetOff(off bool)
- func (ps *PrjnStru) String() string
- func (ps *PrjnStru) Type() emer.PrjnType
- func (ps *PrjnStru) TypeName() string
- func (ps *PrjnStru) Validate(logmsg bool) error
- type Quarters
- func (qt *Quarters) Clear(qtr int)
- func (i *Quarters) FromString(s string) error
- func (qt Quarters) Has(qtr int) bool
- func (qt Quarters) HasNext(qtr int) bool
- func (qt Quarters) HasPrev(qtr int) bool
- func (qt Quarters) MarshalJSON() ([]byte, error)
- func (qt *Quarters) Set(qtr int)
- func (i Quarters) String() string
- func (qt *Quarters) UnmarshalJSON(b []byte) error
- type SelfInhibParams
- type Synapse
- type Time
- type TimeScales
- type WtBalParams
- type WtBalRecvPrjn
- type WtInitParams
- type WtScaleParams
- type WtSigParams
- type XCalParams
Constants ¶
const (
    Version     = "v1.2.10"
    GitCommit   = "180e4eb"          // the commit JUST BEFORE the release
    VersionDate = "2024-03-07 19:40" // UTC
)
const NeuronVarStart = 8
NeuronVarStart is the byte offset of fields in the Neuron structure where the float32 named variables start. Note: all non-float32 infrastructure variables must be at the start!
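For example, a hedged sketch of name-based access built on this layout (nrn is assumed to be a *Neuron taken from a layer's Neurons slice):

    idx, err := leabra.NeuronVarIdxByName("Act")
    if err == nil {
        act := nrn.VarByIndex(idx) // read the float32 variable by index
        _ = act
    }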
Variables ¶
var KiT_ActNoiseType = kit.Enums.AddEnum(ActNoiseTypeN, kit.NotBitFlag, nil)
var KiT_Layer = kit.Types.AddType(&Layer{}, LayerProps)
var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
var KiT_NeurFlags = kit.Enums.AddEnum(NeurFlagsN, kit.BitFlag, nil)
var KiT_Prjn = kit.Types.AddType(&Prjn{}, PrjnProps)
var KiT_TimeScales = kit.Enums.AddEnum(TimeScalesN, kit.NotBitFlag, nil)
var LayerProps = ki.Props{ "ToolBar": ki.PropSlice{ {"Defaults", ki.Props{ "icon": "reset", "desc": "return all parameters to their intial default values", }}, {"InitWts", ki.Props{ "icon": "update", "desc": "initialize the layer's weight values according to prjn parameters, for all *sending* projections out of this layer", }}, {"InitActs", ki.Props{ "icon": "update", "desc": "initialize the layer's activation values", }}, {"sep-act", ki.BlankProp{}}, {"LesionNeurons", ki.Props{ "icon": "close", "desc": "Lesion (set the Off flag) for given proportion of neurons in the layer (number must be 0 -- 1, NOT percent!)", "Args": ki.PropSlice{ {"Proportion", ki.Props{ "desc": "proportion (0 -- 1) of neurons to lesion", }}, }, }}, {"UnLesionNeurons", ki.Props{ "icon": "reset", "desc": "Un-Lesion (reset the Off flag) for all neurons in the layer", }}, }, }
var NetworkProps = ki.Props{ "ToolBar": ki.PropSlice{ {"SaveWtsJSON", ki.Props{ "label": "Save Wts...", "icon": "file-save", "desc": "Save json-formatted weights", "Args": ki.PropSlice{ {"Weights File Name", ki.Props{ "default-field": "WtsFile", "ext": ".wts,.wts.gz", }}, }, }}, {"OpenWtsJSON", ki.Props{ "label": "Open Wts...", "icon": "file-open", "desc": "Open json-formatted weights", "Args": ki.PropSlice{ {"Weights File Name", ki.Props{ "default-field": "WtsFile", "ext": ".wts,.wts.gz", }}, }, }}, {"sep-file", ki.BlankProp{}}, {"Build", ki.Props{ "icon": "update", "desc": "build the network's neurons and synapses according to current params", }}, {"InitWts", ki.Props{ "icon": "update", "desc": "initialize the network weight values according to prjn parameters", }}, {"InitActs", ki.Props{ "icon": "update", "desc": "initialize the network activation values", }}, {"sep-act", ki.BlankProp{}}, {"AddLayer", ki.Props{ "label": "Add Layer...", "icon": "new", "desc": "add a new layer to network", "Args": ki.PropSlice{ {"Layer Name", ki.Props{}}, {"Layer Shape", ki.Props{ "desc": "shape of layer, typically 2D (Y, X) or 4D (Pools Y, Pools X, Units Y, Units X)", }}, {"Layer Type", ki.Props{ "desc": "type of layer -- used for determining how inputs are applied", }}, }, }}, {"ConnectLayerNames", ki.Props{ "label": "Connect Layers...", "icon": "new", "desc": "add a new connection between layers in the network", "Args": ki.PropSlice{ {"Send Layer Name", ki.Props{}}, {"Recv Layer Name", ki.Props{}}, {"Pattern", ki.Props{ "desc": "pattern to connect with", }}, {"Prjn Type", ki.Props{ "desc": "type of projection -- direction, or other more specialized factors", }}, }, }}, {"AllWtScales", ki.Props{ "icon": "file-sheet", "desc": "AllWtScales returns a listing of all WtScale parameters in the Network in all Layers, Recv projections. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.", "show-return": true, }}, }, }
var NeuronVarProps = map[string]string{
"Vm": `min:"0" max:"1"`,
"ActDel": `auto-scale:"+"`,
"ActDif": `auto-scale:"+"`,
}
var NeuronVars = []string{"Act", "ActLrn", "Ge", "Gi", "Gk", "Inet", "Vm", "Targ", "Ext", "AvgSS", "AvgS", "AvgM", "AvgL", "AvgLLrn", "AvgSLrn", "ActQ0", "ActQ1", "ActQ2", "ActM", "ActP", "ActDif", "ActDel", "ActAvg", "Noise", "GiSyn", "GiSelf", "ActSent", "GeRaw", "GiRaw", "GknaFast", "GknaMed", "GknaSlow", "Spike", "ISI", "ISIAvg"}
var NeuronVarsMap map[string]int
var PrjnProps = ki.Props{}
var SynapseVarProps = map[string]string{
"DWt": `auto-scale:"+"`,
"Moment": `auto-scale:"+"`,
}
var SynapseVars = []string{"Wt", "LWt", "DWt", "Norm", "Moment", "Scale"}
var SynapseVarsMap map[string]int
Functions ¶
func JsonToParams ¶
JsonToParams reformats JSON output into suitable params display output
func NeuronVarIdxByName ¶ added in v1.1.4
NeuronVarIdxByName returns the index of the variable in the Neuron, or error
func SigFun61 ¶
SigFun61 is the sigmoid function for value w in 0-1 range, with default gain = 6, offset = 1 params
func SigInvFun61 ¶
SigInvFun61 is the inverse of the sigmoid function, with default gain = 6, offset = 1 params
func SynapseVarByName ¶ added in v1.0.0
SynapseVarByName returns the index of the variable in the Synapse, or error
Types ¶
type ActAvg ¶
type ActAvg struct { // running-average minus-phase activity -- used for adapting inhibition -- see ActAvgParams.Tau for time constant etc ActMAvg float32 `desc:"running-average minus-phase activity -- used for adapting inhibition -- see ActAvgParams.Tau for time constant etc"` // running-average plus-phase activity -- used for synaptic input scaling -- see ActAvgParams.Tau for time constant etc ActPAvg float32 `desc:"running-average plus-phase activity -- used for synaptic input scaling -- see ActAvgParams.Tau for time constant etc"` // ActPAvg * ActAvgParams.Adjust -- adjusted effective layer activity directly used in synaptic input scaling ActPAvgEff float32 `desc:"ActPAvg * ActAvgParams.Adjust -- adjusted effective layer activity directly used in synaptic input scaling"` }
ActAvg are running-average activation levels used for netinput scaling and adaptive inhibition
type ActAvgParams ¶
type ActAvgParams struct { // [min: 0] [typically 0.1 - 0.2] initial estimated average activity level in the layer (see also UseFirst option -- if that is off then it is used as a starting point for running average actual activity level, ActMAvg and ActPAvg) -- ActPAvg is used primarily for automatic netinput scaling, to balance out layers that have different activity levels -- thus it is important that init be relatively accurate -- good idea to update from recorded ActPAvg levels Init float32 `` /* 462-byte string literal not displayed */ // [def: false] if true, then the Init value is used as a constant for ActPAvgEff (the effective value used for netinput rescaling), instead of using the actual running average activation Fixed bool `` /* 190-byte string literal not displayed */ // [def: false] if true, then use the activation level computed from the external inputs to this layer (avg of targ or ext unit vars) -- this will only be applied to layers with Input or Target / Compare layer types, and falls back on the targ_init value if external inputs are not available or have a zero average -- implies fixed behavior UseExtAct bool `` /* 343-byte string literal not displayed */ // [def: true] [viewif: Fixed=false] use the first actual average value to override targ_init value -- actual value is likely to be a better estimate than our guess UseFirst bool `` /* 166-byte string literal not displayed */ // [def: 100] [viewif: Fixed=false] [min: 1] time constant in trials for integrating time-average values at the layer level -- used for computing Pool.ActAvg.ActsMAvg, ActsPAvg Tau float32 `` /* 177-byte string literal not displayed */ // [def: 1] [viewif: Fixed=false] adjustment multiplier on the computed ActPAvg value that is used to compute ActPAvgEff, which is actually used for netinput rescaling -- if based on connectivity patterns or other factors the actual running-average value is resulting in netinputs that are too high or low, then this can be used to adjust the effective average activity value -- reducing the average activity with a factor < 1 will increase netinput scaling (stronger net inputs from layers that receive from this layer), and vice-versa for increasing (decreases net inputs) Adjust float32 `` /* 576-byte string literal not displayed */ // [view: -] rate = 1 / tau Dt float32 `inactive:"+" view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"` }
ActAvgParams represents expected average activity levels in the layer. Used for computing running-average computation that is then used for netinput scaling. Also specifies time constant for updating average and for the target value for adapting inhibition in inhib_adapt.
func (*ActAvgParams) AvgFmAct ¶
func (aa *ActAvgParams) AvgFmAct(avg *float32, act float32)
AvgFmAct updates the running-average activation given average activity level in layer
func (*ActAvgParams) Defaults ¶
func (aa *ActAvgParams) Defaults()
func (*ActAvgParams) EffFmAvg ¶
func (aa *ActAvgParams) EffFmAvg(eff *float32, avg float32)
EffFmAvg updates the effective value from the running-average value
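A sketch consistent with the parameter descriptions above (a plausible reading, not a verbatim copy of the code in inhib.go): the running average moves toward the current layer activity at rate Dt = 1/Tau, and the effective value is the average scaled by Adjust, or held at Init when Fixed is true.

    // AvgFmAct(avg *float32, act float32) -- running-average update:
    *avg += aa.Dt * (act - *avg)

    // EffFmAvg(eff *float32, avg float32) -- effective value for scaling:
    if aa.Fixed {
        *eff = aa.Init
    } else {
        *eff = avg * aa.Adjust
    }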
func (*ActAvgParams) EffInit ¶
func (aa *ActAvgParams) EffInit() float32
EffInit returns the initial value applied during InitWts for the ActPAvgEff effective layer activity
func (*ActAvgParams) Update ¶
func (aa *ActAvgParams) Update()
type ActInitParams ¶
type ActInitParams struct { // [def: 0,1] [min: 0] [max: 1] proportion to decay activation state toward initial values at start of every trial Decay float32 `def:"0,1" max:"1" min:"0" desc:"proportion to decay activation state toward initial values at start of every trial"` // [def: 0.4] initial membrane potential -- see e_rev.l for the resting potential (typically .3) -- often works better to have a somewhat elevated initial membrane potential relative to that Vm float32 `` /* 193-byte string literal not displayed */ // [def: 0] initial activation value -- typically 0 Act float32 `def:"0" desc:"initial activation value -- typically 0"` // [def: 0] baseline level of excitatory conductance (net input) -- Ge is initialized to this value, and it is added in as a constant background level of excitatory input -- captures all the other inputs not represented in the model, and intrinsic excitability, etc Ge float32 `` /* 268-byte string literal not displayed */ }
ActInitParams are initial values for key network state variables. Initialized at start of trial with Init_Acts or DecayState.
func (*ActInitParams) Defaults ¶
func (ai *ActInitParams) Defaults()
func (*ActInitParams) Update ¶
func (ai *ActInitParams) Update()
type ActNoiseParams ¶
type ActNoiseParams struct { erand.RndParams // where and how to add processing noise Type ActNoiseType `desc:"where and how to add processing noise"` // keep the same noise value over the entire alpha cycle -- prevents noise from being washed out and produces a stable effect that can be better used for learning -- this is strongly recommended for most learning situations Fixed bool `` /* 227-byte string literal not displayed */ }
ActNoiseParams contains parameters for activation-level noise
func (*ActNoiseParams) Defaults ¶
func (an *ActNoiseParams) Defaults()
func (*ActNoiseParams) Update ¶
func (an *ActNoiseParams) Update()
type ActNoiseType ¶
type ActNoiseType int
ActNoiseType are different types / locations of random noise for activations
const ( // NoNoise means no noise added NoNoise ActNoiseType = iota // VmNoise means noise is added to the membrane potential. // IMPORTANT: this should NOT be used for rate-code (NXX1) activations, // because they do not depend directly on the vm -- this then has no effect VmNoise // GeNoise means noise is added to the excitatory conductance (Ge). // This should be used for rate coded activations (NXX1) GeNoise // ActNoise means noise is added to the final rate code activation ActNoise // GeMultNoise means that noise is multiplicative on the Ge excitatory conductance values GeMultNoise ActNoiseTypeN )
The activation noise types
func (*ActNoiseType) FromString ¶
func (i *ActNoiseType) FromString(s string) error
func (ActNoiseType) MarshalJSON ¶
func (ev ActNoiseType) MarshalJSON() ([]byte, error)
func (ActNoiseType) String ¶
func (i ActNoiseType) String() string
func (*ActNoiseType) UnmarshalJSON ¶
func (ev *ActNoiseType) UnmarshalJSON(b []byte) error
type ActParams ¶
type ActParams struct { // [view: inline] Noisy X/X+1 rate code activation function parameters XX1 nxx1.Params `view:"inline" desc:"Noisy X/X+1 rate code activation function parameters"` // [view: inline] optimization thresholds for faster processing OptThresh OptThreshParams `view:"inline" desc:"optimization thresholds for faster processing"` // [view: inline] initial values for key network state variables -- initialized at start of trial with InitActs or DecayActs Init ActInitParams `` /* 127-byte string literal not displayed */ // [view: inline] time and rate constants for temporal derivatives / updating of activation state Dt DtParams `view:"inline" desc:"time and rate constants for temporal derivatives / updating of activation state"` // [view: inline] [Defaults: 1, .1, 1, 1] maximal conductances levels for channels Gbar chans.Chans `view:"inline" desc:"[Defaults: 1, .1, 1, 1] maximal conductances levels for channels"` // [view: inline] [Defaults: 1, .3, .25, .1] reversal potentials for each channel Erev chans.Chans `view:"inline" desc:"[Defaults: 1, .3, .25, .1] reversal potentials for each channel"` // [view: inline] how external inputs drive neural activations Clamp ClampParams `view:"inline" desc:"how external inputs drive neural activations"` // [view: inline] how, where, when, and how much noise to add to activations Noise ActNoiseParams `view:"inline" desc:"how, where, when, and how much noise to add to activations"` // [view: inline] range for Vm membrane potential -- [0, 2.0] by default VmRange minmax.F32 `view:"inline" desc:"range for Vm membrane potential -- [0, 2.0] by default"` // [view: no-inline] sodium-gated potassium channel adaptation parameters -- activates an inhibitory leak-like current as a function of neural activity (firing = Na influx) at three different time-scales (M-type = fast, Slick = medium, Slack = slow) KNa knadapt.Params `` /* 252-byte string literal not displayed */ // [view: -] Erev - Act.Thr for each channel -- used in computing GeThrFmG among others ErevSubThr chans.Chans `inactive:"+" view:"-" json:"-" xml:"-" desc:"Erev - Act.Thr for each channel -- used in computing GeThrFmG among others"` // [view: -] Act.Thr - Erev for each channel -- used in computing GeThrFmG among others ThrSubErev chans.Chans `inactive:"+" view:"-" json:"-" xml:"-" desc:"Act.Thr - Erev for each channel -- used in computing GeThrFmG among others"` }
leabra.ActParams contains all the activation computation params and functions for basic Leabra, at the neuron level . This is included in leabra.Layer to drive the computation.
func (*ActParams) DecayState ¶
DecayState decays the activation state toward initial values in proportion to the given decay parameter. Called with ac.Init.Decay by Layer during AlphaCycInit.
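An illustrative-only sketch of what proportional decay toward the initial values means (the actual method in act.go covers additional state variables):

    nrn.Vm -= decay * (nrn.Vm - ac.Init.Vm)
    nrn.Act -= decay * (nrn.Act - ac.Init.Act)
    nrn.Ge -= decay * (nrn.Ge - ac.Init.Ge)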
func (*ActParams) GeFmRaw ¶ added in v1.0.0
GeFmRaw integrates Ge excitatory conductance from GeRaw value (can add other terms to geRaw prior to calling this)
func (*ActParams) GeThrFmG ¶
GeThrFmG computes the threshold for Ge based on all other conductances, including Gk. This is used for computing the adapted Act value.
func (*ActParams) GeThrFmGnoK ¶ added in v1.0.0
GeThrFmGnoK computes the threshold for Ge based on other conductances, excluding Gk. This is used for computing the non-adapted ActLrn value.
func (*ActParams) GiFmRaw ¶ added in v1.0.0
GiFmRaw integrates GiSyn inhibitory synaptic conductance from GiRaw value (can add other terms to giRaw prior to calling this)
func (*ActParams) HardClamp ¶
HardClamp clamps activation from external input -- just does it -- use HasHardClamp to check if it should do it. Also adds any Noise *if* noise is set to ActNoise.
func (*ActParams) HasHardClamp ¶
HasHardClamp returns true if this neuron has external input that should be hard clamped
func (*ActParams) InitActQs ¶ added in v1.0.0
InitActQs initializes quarter-based activation states in neuron (ActQ0-2, ActM, ActP, ActDif) Called from InitActs, which is called from InitWts, but otherwise not automatically called (DecayState is used instead)
func (*ActParams) InitActs ¶
InitActs initializes activation state in neuron -- called during InitWts but otherwise not automatically called (DecayState is used instead)
func (*ActParams) InitGInc ¶ added in v1.0.0
InitGInc initializes the Ge excitatory and Gi inhibitory conductance accumulation states, including ActSent and G*Raw values. Called at the start of every trial, and can be called optionally when delta-based Ge computation needs to be updated (e.g., weights might have changed strength)
type AvgLParams ¶
type AvgLParams struct { // [def: 0.4] [min: 0] [max: 1] initial AvgL value at start of training Init float32 `def:"0.4" min:"0" max:"1" desc:"initial AvgL value at start of training"` // [def: 1.5,2,2.5,3,4,5] [min: 0] gain multiplier on activation used in computing the running average AvgL value that is the key floating threshold in the BCM Hebbian learning rule -- when using the DELTA_FF_FB learning rule, it should generally be 2x what it was before with the old XCAL_CHL rule, i.e., default of 5 instead of 2.5 -- it is a good idea to experiment with this parameter a bit -- the default is on the high-side, so typically reducing a bit from initial default is a good direction Gain float32 `` /* 501-byte string literal not displayed */ // [def: 0.2] [min: 0] miniumum AvgL value -- running average cannot go lower than this value even when it otherwise would due to inactivity -- default value is generally good and typically does not need to be changed Min float32 `` /* 219-byte string literal not displayed */ // [def: 10] [min: 1] time constant for updating the running average AvgL -- AvgL moves toward gain*act with this time constant on every alpha-cycle - longer time constants can also work fine, but the default of 10 allows for quicker reaction to beneficial weight changes Tau float32 `` /* 273-byte string literal not displayed */ // [def: 0.5] [min: 0] maximum AvgLLrn value, which is amount of learning driven by AvgL factor -- when AvgL is at its maximum value (i.e., gain, as act does not exceed 1), then AvgLLrn will be at this maximum value -- by default, strong amounts of this homeostatic Hebbian form of learning can be used when the receiving unit is highly active -- this will then tend to bring down the average activity of units -- the default of 0.5, in combination with the err_mod flag, works well for most models -- use around 0.0004 for a single fixed value (with err_mod flag off) LrnMax float32 `` /* 570-byte string literal not displayed */ // [def: 0.0001,0.0004] [min: 0] miniumum AvgLLrn value (amount of learning driven by AvgL factor) -- if AvgL is at its minimum value, then AvgLLrn will be at this minimum value -- neurons that are not overly active may not need to increase the contrast of their weights as much -- use around 0.0004 for a single fixed value (with err_mod flag off) LrnMin float32 `` /* 350-byte string literal not displayed */ // [def: true] modulate amount learning by normalized level of error within layer ErrMod bool `def:"true" desc:"modulate amount learning by normalized level of error within layer"` // [def: 0.01] [viewif: ErrMod=true] minimum modulation value for ErrMod-- ensures a minimum amount of self-organizing learning even for network / layers that have a very small level of error signal ModMin float32 `` /* 200-byte string literal not displayed */ // [view: -] rate = 1 / tau Dt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"` // [view: -] (LrnMax - LrnMin) / (Gain - Min) LrnFact float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"(LrnMax - LrnMin) / (Gain - Min)"` }
AvgLParams are parameters for computing the long-term floating average value, AvgL which is used for driving BCM-style hebbian learning in XCAL -- this form of learning increases contrast of weights and generally decreases overall activity of neuron, to prevent "hog" units -- it is computed as a running average of the (gain multiplied) medium-time-scale average activation at the end of the alpha-cycle. Also computes an adaptive amount of BCM learning, AvgLLrn, based on AvgL.
func (*AvgLParams) AvgLFmAvgM ¶
func (al *AvgLParams) AvgLFmAvgM(avgM float32, avgL, lrn *float32)
AvgLFmAvgM computes long-term average activation value, and learning factor, from given medium-scale running average activation avgM
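A plausible reading of the AvgLParams descriptions above (not necessarily the exact code in learn.go): AvgL moves toward Gain * avgM at rate Dt = 1/Tau, is bounded below by Min, and the learning factor scales between LrnMin and LrnMax via LrnFact:

    *avgL += al.Dt * (al.Gain*avgM - *avgL)
    if *avgL < al.Min {
        *avgL = al.Min
    }
    *lrn = al.LrnMin + al.LrnFact*(*avgL-al.Min)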
func (*AvgLParams) Defaults ¶
func (al *AvgLParams) Defaults()
func (*AvgLParams) ErrModFmLayErr ¶
func (al *AvgLParams) ErrModFmLayErr(layCosDiffAvg float32) float32
ErrModFmLayErr computes AvgLLrn multiplier from layer cosine diff avg statistic
func (*AvgLParams) Update ¶
func (al *AvgLParams) Update()
type ClampParams ¶
type ClampParams struct { // [def: true] whether to hard clamp inputs where activation is directly set to external input value (Act = Ext) or do soft clamping where Ext is added into Ge excitatory current (Ge += Gain * Ext) Hard bool `` /* 200-byte string literal not displayed */ // [viewif: Hard] range of external input activation values allowed -- Max is .95 by default due to saturating nature of rate code activation function Range minmax.F32 `` /* 153-byte string literal not displayed */ // [def: 0.02:0.5] [viewif: !Hard] soft clamp gain factor (Ge += Gain * Ext) Gain float32 `viewif:"!Hard" def:"0.02:0.5" desc:"soft clamp gain factor (Ge += Gain * Ext)"` // [viewif: !Hard] compute soft clamp as the average of current and target netins, not the sum -- prevents some of the main effect problems associated with adding external inputs Avg bool `` /* 181-byte string literal not displayed */ // [def: 0.2] [viewif: !Hard && Avg] gain factor for averaging the Ge -- clamp value Ext contributes with AvgGain and current Ge as (1-AvgGain) AvgGain float32 `` /* 145-byte string literal not displayed */ }
ClampParams are for specifying how external inputs are clamped onto network activation values
func (*ClampParams) AvgGe ¶
func (cp *ClampParams) AvgGe(ext, ge float32) float32
AvgGe computes Avg-based Ge clamping value if using that option.
func (*ClampParams) Defaults ¶
func (cp *ClampParams) Defaults()
func (*ClampParams) Update ¶
func (cp *ClampParams) Update()
type CosDiffParams ¶
type CosDiffParams struct { // [def: 100] [min: 1] time constant in alpha-cycles (roughly how long significant change takes, 1.4 x half-life) for computing running average CosDiff value for the layer, CosDiffAvg = cosine difference between ActM and ActP -- this is an important statistic for how much phase-based difference there is between phases in this layer -- it is used in standard X_COS_DIFF modulation of l_mix in LeabraConSpec, and for modulating learning rate as a function of predictability in the DeepLeabra predictive auto-encoder learning -- running average variance also computed with this: cos_diff_var Tau float32 `` /* 592-byte string literal not displayed */ // [view: -] rate constant = 1 / Tau Dt float32 `inactive:"+" view:"-" json:"-" xml:"-" desc:"rate constant = 1 / Tau"` // [view: -] complement of rate constant = 1 - Dt DtC float32 `inactive:"+" view:"-" json:"-" xml:"-" desc:"complement of rate constant = 1 - Dt"` }
CosDiffParams specify how to integrate cosine of difference between plus and minus phase activations Used to modulate amount of hebbian learning, and overall learning rate.
func (*CosDiffParams) AvgVarFmCos ¶
func (cd *CosDiffParams) AvgVarFmCos(avg, vr *float32, cos float32)
AvgVarFmCos updates the average and variance from current cosine diff value
func (*CosDiffParams) Defaults ¶
func (cd *CosDiffParams) Defaults()
func (*CosDiffParams) Update ¶
func (cd *CosDiffParams) Update()
type CosDiffStats ¶
type CosDiffStats struct { // cosine (normalized dot product) activation difference between ActP and ActM on this alpha-cycle for this layer -- computed by CosDiffFmActs at end of QuarterFinal for quarter = 3 Cos float32 `` /* 185-byte string literal not displayed */ // running average of cosine (normalized dot product) difference between ActP and ActM -- computed with CosDiff.Tau time constant in QuarterFinal, and used for modulating BCM Hebbian learning (see AvgLrn) and overall learning rate Avg float32 `` /* 234-byte string literal not displayed */ // running variance of cosine (normalized dot product) difference between ActP and ActM -- computed with CosDiff.Tau time constant in QuarterFinal, used for modulating overall learning rate Var float32 `` /* 193-byte string literal not displayed */ // 1 - Avg and 0 for non-Hidden layers AvgLrn float32 `desc:"1 - Avg and 0 for non-Hidden layers"` // 1 - AvgLrn and 0 for non-Hidden layers -- this is the value of Avg used for AvgLParams ErrMod modulation of the AvgLLrn factor if enabled ModAvgLLrn float32 `` /* 144-byte string literal not displayed */ }
CosDiffStats holds cosine-difference statistics at the layer level
func (*CosDiffStats) Init ¶
func (cd *CosDiffStats) Init()
type DWtNormParams ¶
type DWtNormParams struct { // [def: true] whether to use dwt normalization, only on error-driven dwt component, based on projection-level max_avg value -- slowly decays and instantly resets to any current max On bool `` /* 184-byte string literal not displayed */ // [def: 1000,10000] [viewif: On] [min: 1] time constant for decay of dwnorm factor -- generally should be long-ish, between 1000-10000 -- integration rate factor is 1/tau DecayTau float32 `` /* 172-byte string literal not displayed */ // [def: 0.001] [viewif: On] [min: 0] minimum effective value of the normalization factor -- provides a lower bound to how much normalization can be applied NormMin float32 `` /* 157-byte string literal not displayed */ // [def: 0.15] [viewif: On] [min: 0] overall learning rate multiplier to compensate for changes due to use of normalization -- allows for a common master learning rate to be used between different conditions -- 0.1 for synapse-level, maybe higher for other levels LrComp float32 `` /* 264-byte string literal not displayed */ // [def: false] [viewif: On] record the avg, max values of err, bcm hebbian, and overall dwt change per con group and per projection Stats bool `` /* 134-byte string literal not displayed */ // [view: -] rate constant of decay = 1 / decay_tau DecayDt float32 `inactive:"+" view:"-" json:"-" xml:"-" desc:"rate constant of decay = 1 / decay_tau"` // [view: -] complement rate constant of decay = 1 - (1 / decay_tau) DecayDtC float32 `inactive:"+" view:"-" json:"-" xml:"-" desc:"complement rate constant of decay = 1 - (1 / decay_tau)"` }
DWtNormParams are weight change (dwt) normalization parameters, using MAX(ABS(dwt)) aggregated over Sending connections in a given projection for a given unit. Slowly decays and instantly resets to any current max(abs) Serves as an estimate of the variance in the weight changes, assuming zero net mean overall.
func (*DWtNormParams) Defaults ¶
func (dn *DWtNormParams) Defaults()
func (*DWtNormParams) NormFmAbsDWt ¶
func (dn *DWtNormParams) NormFmAbsDWt(norm *float32, absDwt float32) float32
NormFmAbsDWt updates the DWtNorm running max_abs value -- it jumps up to max(abs(dwt)) and then slowly decays -- and returns the effective normalization factor as a multiplier, including the lrate compensation factor
func (*DWtNormParams) Update ¶
func (dn *DWtNormParams) Update()
type DtParams ¶
type DtParams struct { // [def: 1,0.5] [min: 0] overall rate constant for numerical integration, for all equations at the unit level -- all time constants are specified in millisecond units, with one cycle = 1 msec -- if you instead want to make one cycle = 2 msec, you can do this globally by setting this integ value to 2 (etc). However, stability issues will likely arise if you go too high. For improved numerical stability, you may even need to reduce this value to 0.5 or possibly even lower (typically however this is not necessary). MUST also coordinate this with network.time_inc variable to ensure that global network.time reflects simulated time accurately Integ float32 `` /* 649-byte string literal not displayed */ // [def: 3.3] [min: 1] membrane potential and rate-code activation time constant in cycles, which should be milliseconds typically (roughly, how long it takes for value to change significantly -- 1.4x the half-life) -- reflects the capacitance of the neuron in principle -- biological default for AdEx spiking model C = 281 pF = 2.81 normalized -- for rate-code activation, this also determines how fast to integrate computed activation values over time VmTau float32 `` /* 455-byte string literal not displayed */ // [def: 1.4,3,5] [min: 1] time constant for integrating synaptic conductances, in cycles, which should be milliseconds typically (roughly, how long it takes for value to change significantly -- 1.4x the half-life) -- this is important for damping oscillations -- generally reflects time constants associated with synaptic channels which are not modeled in the most abstract rate code models (set to 1 for detailed spiking models with more realistic synaptic currents) -- larger values (e.g., 3) can be important for models with higher conductances that otherwise might be more prone to oscillation. GTau float32 `` /* 601-byte string literal not displayed */ // [def: 200] for integrating activation average (ActAvg), time constant in trials (roughly, how long it takes for value to change significantly) -- used mostly for visualization and tracking *hog* units AvgTau float32 `` /* 206-byte string literal not displayed */ // [view: -] nominal rate = Integ / tau VmDt float32 `view:"-" json:"-" xml:"-" desc:"nominal rate = Integ / tau"` // [view: -] rate = Integ / tau GDt float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"` // [view: -] rate = 1 / tau AvgDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"` }
DtParams are time and rate constants for temporal derivatives in Leabra (Vm, net input)
type InhibParams ¶
type InhibParams struct { // [view: inline] inhibition across the entire layer Layer fffb.Params `view:"inline" desc:"inhibition across the entire layer"` // [view: inline] inhibition across sub-pools of units, for layers with 4D shape Pool fffb.Params `view:"inline" desc:"inhibition across sub-pools of units, for layers with 4D shape"` // [view: inline] neuron self-inhibition parameters -- can be beneficial for producing more graded, linear response -- not typically used in cortical networks Self SelfInhibParams `` /* 161-byte string literal not displayed */ // [view: inline] running-average activation computation values -- for overall estimates of layer activation levels, used in netinput scaling ActAvg ActAvgParams `` /* 144-byte string literal not displayed */ }
leabra.InhibParams contains all the inhibition computation params and functions for basic Leabra This is included in leabra.Layer to support computation. This also includes other misc layer-level params such as running-average activation in the layer which is used for netinput rescaling and potentially for adapting inhibition over time
func (*InhibParams) Defaults ¶
func (ip *InhibParams) Defaults()
func (*InhibParams) Update ¶
func (ip *InhibParams) Update()
type LayFunChan ¶
type LayFunChan chan func(ly LeabraLayer)
LayFunChan is a channel that runs LeabraLayer functions
type Layer ¶
type Layer struct { LayerStru // [view: add-fields] Activation parameters and methods for computing activations Act ActParams `view:"add-fields" desc:"Activation parameters and methods for computing activations"` // [view: add-fields] Inhibition parameters and methods for computing layer-level inhibition Inhib InhibParams `view:"add-fields" desc:"Inhibition parameters and methods for computing layer-level inhibition"` // [view: add-fields] Learning parameters and methods that operate at the neuron level Learn LearnNeurParams `view:"add-fields" desc:"Learning parameters and methods that operate at the neuron level"` // slice of neurons for this layer -- flat list of len = Shp.Len(). You must iterate over index and use pointer to modify values. Neurons []Neuron `` /* 133-byte string literal not displayed */ // inhibition and other pooled, aggregate state variables -- flat list has at least of 1 for layer, and one for each sub-pool (unit group) if shape supports that (4D). You must iterate over index and use pointer to modify values. Pools []Pool `` /* 234-byte string literal not displayed */ // cosine difference between ActM, ActP stats CosDiff CosDiffStats `desc:"cosine difference between ActM, ActP stats"` }
leabra.Layer has parameters for running a basic rate-coded Leabra layer
func (*Layer) ActAvgFmAct ¶ added in v1.1.36
func (ly *Layer) ActAvgFmAct()
ActAvgFmAct updates the running average ActMAvg, ActPAvg, and ActPAvgEff values from the current pool-level averages. The ActPAvgEff value is used for updating the conductance scaling parameters, if these are not set to Fixed, so calling this will change the scaling of projections in the network!
func (*Layer) ActFmG ¶
ActFmG computes rate-code activation from Ge, Gi, Gl conductances and updates learning running-average activations from that Act
func (*Layer) ActQ0FmActP ¶ added in v1.1.36
func (ly *Layer) ActQ0FmActP()
ActQ0FmActP updates the neuron ActQ0 value from prior ActP value
func (*Layer) AlphaCycInit ¶
AlphaCycInit handles all initialization at start of new input pattern. Should already have presented the external input to the network at this point. If updtActAvg is true, this includes updating the running-average activations for each layer / pool, and the AvgL running average used in BCM Hebbian learning. The input scaling is updated based on the layer-level running average acts, and this can then change the behavior of the network, so if you want 100% repeatable testing results, set this to false to keep the existing scaling factors (e.g., can pass a train bool to only update during training). This flag also affects the AvgL learning threshold
func (*Layer) ApplyExt ¶
ApplyExt applies external input in the form of an etensor.Float32. If dimensionality of tensor matches that of layer, and is 2D or 4D, then each dimension is iterated separately, so any mismatch preserves dimensional structure. Otherwise, the flat 1D view of the tensor is used. If the layer is a Target or Compare layer type, then it goes in Targ otherwise it goes in Ext
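A brief usage sketch (shape and values are illustrative; inp is assumed to be a *leabra.Layer of Input type, and etensor comes from github.com/emer/etable/etensor):

    pat := etensor.NewFloat32([]int{5, 5}, nil, nil)
    pat.SetFloat([]int{2, 3}, 1) // one active unit at row 2, col 3
    inp.ApplyExt(pat)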
func (*Layer) ApplyExt1D ¶
ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats If the layer is a Target or Compare layer type, then it goes in Targ otherwise it goes in Ext
func (*Layer) ApplyExt1D32 ¶ added in v1.0.0
ApplyExt1D32 applies external input in the form of a flat 1-dimensional slice of float32s. If the layer is a Target or Compare layer type, then it goes in Targ otherwise it goes in Ext
func (*Layer) ApplyExt1DTsr ¶ added in v1.0.0
ApplyExt1DTsr applies external input using 1D flat interface into tensor. If the layer is a Target or Compare layer type, then it goes in Targ otherwise it goes in Ext
func (*Layer) ApplyExt2D ¶ added in v1.0.0
ApplyExt2D applies 2D tensor external input
func (*Layer) ApplyExt2Dto4D ¶ added in v1.0.0
ApplyExt2Dto4D applies 2D tensor external input to a 4D layer
func (*Layer) ApplyExt4D ¶ added in v1.0.0
ApplyExt4D applies 4D tensor external input
func (*Layer) ApplyExtFlags ¶
ApplyExtFlags gets the clear mask and set mask for updating neuron flags based on layer type, and whether input should be applied to Targ (else Ext)
func (*Layer) AsLeabra ¶
AsLeabra returns this layer as a leabra.Layer -- all derived layers must redefine this to return the base Layer type, so that the LeabraLayer interface does not need to include accessors to all the basic stuff
func (*Layer) AvgLFmAvgM ¶
func (ly *Layer) AvgLFmAvgM()
AvgLFmAvgM updates AvgL long-term running average activation that drives BCM Hebbian learning
func (*Layer) BuildPools ¶
BuildPools builds the inhibitory pools structures -- nu = number of units in layer
func (*Layer) BuildPrjns ¶
BuildPrjns builds the projections, recv-side
func (*Layer) BuildSubPools ¶
func (ly *Layer) BuildSubPools()
BuildSubPools initializes neuron start / end indexes for sub-pools
func (*Layer) CosDiffFmActs ¶
func (ly *Layer) CosDiffFmActs()
CosDiffFmActs computes the cosine difference in activation state between minus and plus phases. This is also used for modulating the amount of BCM hebbian learning
func (*Layer) CostEst ¶ added in v1.1.6
CostEst returns the estimated computational cost associated with this layer, separated by neuron-level and synapse-level, in arbitrary units where cost per synapse is 1. Neuron-level computation is more expensive but there are typically many fewer neurons, so in larger networks, synaptic costs tend to dominate. Neuron cost is estimated from TimerReport output for large networks.
func (*Layer) CyclePost ¶ added in v1.0.5
CyclePost is called after the standard Cycle update, as a separate network layer loop. This is reserved for any kind of special ad-hoc types that need to do something special after Act is finally computed. For example, sending a neuromodulatory signal such as dopamine.
func (*Layer) DWt ¶
func (ly *Layer) DWt()
DWt computes the weight change (learning) -- calls DWt method on sending projections
func (*Layer) DecayState ¶
DecayState decays activation state by given proportion (default is ly.Act.Init.Decay). This does *not* call InitGInc -- must call that separately at start of AlphaCyc
func (*Layer) DecayStatePool ¶ added in v1.1.15
DecayStatePool decays activation state by given proportion in given sub-pool index (0 based)
func (*Layer) GFmInc ¶
GFmInc integrates new synaptic conductances from increments sent during last SendGDelta.
func (*Layer) GFmIncNeur ¶ added in v1.0.0
GFmIncNeur is the neuron-level code for GFmInc that integrates overall Ge, Gi values from their G*Raw accumulators.
func (*Layer) GScaleFmAvgAct ¶
func (ly *Layer) GScaleFmAvgAct()
GScaleFmAvgAct computes the scaling factor for synaptic input conductances G, based on sending layer average activation. This attempts to automatically adjust for overall differences in raw activity coming into the units to achieve a general target of around .5 to 1 for the integrated Ge value.
func (*Layer) GenNoise ¶
func (ly *Layer) GenNoise()
GenNoise generates random noise for all neurons
func (*Layer) HardClamp ¶
func (ly *Layer) HardClamp()
HardClamp hard-clamps the activations in the layer -- called during AlphaCycInit for hard-clamped Input layers
func (*Layer) InhibFmGeAct ¶
InhibFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools
func (*Layer) InhibFmPool ¶ added in v1.1.4
InhibFmPool computes inhibition Gi from Pool-level aggregated inhibition, including self and syn
func (*Layer) InitActAvg ¶
func (ly *Layer) InitActAvg()
InitActAvg initializes the running-average activation values that drive learning.
func (*Layer) InitActs ¶
func (ly *Layer) InitActs()
InitActs fully initializes activation state -- only called automatically during InitWts
func (*Layer) InitExt ¶
func (ly *Layer) InitExt()
InitExt initializes external input state -- called prior to applying external inputs
func (*Layer) InitGInc ¶
func (ly *Layer) InitGInc()
InitGInc initializes the Ge excitatory and Gi inhibitory conductance accumulation states, including ActSent and G*Raw values. Called at start of trial always, and can be called optionally when delta-based Ge computation needs to be updated (e.g., weights might have changed strength)
func (*Layer) InitWtSym ¶
func (ly *Layer) InitWtSym()
InitWtSym initializes the weight symmetry -- higher layers copy weights from lower layers
func (*Layer) InitWts ¶
func (ly *Layer) InitWts()
InitWts initializes the weight values in the network, i.e., resetting learning. Also calls InitActs.
func (*Layer) IsTarget ¶ added in v1.1.19
IsTarget returns true if this layer is a Target layer. By default, returns true for layers of Type == emer.Target. Other Target layers include the TRCLayer in deep predictive learning. This is used for turning off BCM hebbian learning, in CosDiffFmActs to set the CosDiff.ModAvgLLrn value for error-modulated level of hebbian learning. It is also used in WtBal to not apply it to target layers. In both cases, Target layers are purely error-driven.
func (*Layer) LesionNeurons ¶
LesionNeurons lesions (sets the Off flag) for a given proportion (0-1) of neurons in the layer, and returns the number of neurons lesioned. Emits an error if prop > 1, as an indication that a percent might have been passed instead of a proportion.
func (*Layer) LrateMult ¶ added in v1.0.0
LrateMult sets the new Lrate parameter for Prjns to LrateInit * mult. Useful for implementing learning rate schedules.
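For example, a learning rate schedule can simply call this at selected epochs; a minimal sketch, where the epoch numbers and multipliers are purely illustrative:

// lrateSched is a hypothetical epoch-based schedule applied to one layer;
// the Network-level LrateMult (below) applies the same multiplier to all layers.
func lrateSched(ly *leabra.Layer, epoch int) {
    switch epoch {
    case 40:
        ly.LrateMult(0.5) // 0.5 * LrateInit
    case 80:
        ly.LrateMult(0.2) // 0.2 * LrateInit
    }
}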
func (*Layer) MSE ¶
MSE returns the sum-squared-error and mean-squared-error over the layer, in terms of ActP - ActM (valid even on non-target layers FWIW). Uses the given tolerance per-unit to count an error at all (e.g., .5 = activity just has to be on the right side of .5).
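A sketch of reading trial-level error stats from an output layer; the (sse, mse float64) return signature is an assumption here, not confirmed by this listing:

// trialStats is a hypothetical readout of error stats from an output layer.
func trialStats(out *leabra.Layer) (sse, mse float64) {
    sse, mse = out.MSE(0.5) // 0.5 tolerance: unit just has to be on the right side of .5
    return
}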
func (*Layer) PoolInhibFmGeAct ¶ added in v1.1.2
PoolInhibFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools
func (*Layer) QuarterFinal ¶
QuarterFinal does updating after end of a quarter
func (*Layer) ReadWtsJSON ¶
ReadWtsJSON reads the weights for this layer, from the receiver-side perspective, in a JSON text format. This is for a set of weights that were saved *for one layer only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.
func (*Layer) RecvGInc ¶ added in v1.0.0
RecvGInc calls RecvGInc on receiving projections to collect Neuron-level G*Inc values. This is called by GFmInc overall method, but separated out for cases that need to do something different.
func (*Layer) RecvNameTry ¶ added in v1.2.4
func (*Layer) RecvNameTypeTry ¶ added in v1.2.4
func (*Layer) RecvPrjnVals ¶ added in v1.0.0
func (ly *Layer) RecvPrjnVals(vals *[]float32, varNm string, sendLay emer.Layer, sendIdx1D int, prjnType string) error
RecvPrjnVals fills in values of given synapse variable name, for projection into given sending layer and neuron 1D index, for all receiving neurons in this layer, into given float32 slice (only resized if not big enough). prjnType is the string representation of the prjn type -- used if non-empty, useful when there are multiple projections between two layers. Returns error on invalid var name. If the receiving neuron is not connected to the given sending layer or neuron then the value is set to mat32.NaN(). Returns error on invalid var name or lack of recv prjn (vals always set to nan on prjn err).
func (*Layer) SSE ¶
SSE returns the sum-squared-error over the layer, in terms of ActP - ActM (valid even on non-target layers FWIW). Uses the given tolerance per-unit to count an error at all (e.g., .5 = activity just has to be on the right side of .5). Use this in Python which only allows single return values.
func (*Layer) SendGDelta ¶
SendGDelta sends change in activation since last sent, to increment recv synaptic conductances G, if above thresholds
func (*Layer) SendNameTry ¶ added in v1.2.4
func (*Layer) SendNameTypeTry ¶ added in v1.2.4
func (*Layer) SendPrjnVals ¶ added in v1.0.0
func (ly *Layer) SendPrjnVals(vals *[]float32, varNm string, recvLay emer.Layer, recvIdx1D int, prjnType string) error
SendPrjnVals fills in values of given synapse variable name, for projection into given receiving layer and neuron 1D index, for all sending neurons in this layer, into given float32 slice (only resized if not big enough). prjnType is the string representation of the prjn type -- used if non-empty, useful when there are multiple projections between two layers. Returns error on invalid var name. If the sending neuron is not connected to the given receiving layer or neuron then the value is set to mat32.NaN(). Returns error on invalid var name or lack of recv prjn (vals always set to nan on prjn err).
func (*Layer) SetWts ¶ added in v1.0.0
SetWts sets the weights for this layer from weights.Layer decoded values
func (*Layer) UnLesionNeurons ¶
func (ly *Layer) UnLesionNeurons()
UnLesionNeurons unlesions (clears the Off flag) for all neurons in the layer
func (*Layer) UnitVal ¶
UnitVal returns value of given variable name on given unit, using shape-based dimensional index
func (*Layer) UnitVal1D ¶
UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. Returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.
func (*Layer) UnitVals ¶
UnitVals fills in values of given variable name on unit, for each unit in the layer, into given float32 slice (only resized if not big enough). Returns error on invalid var name.
func (*Layer) UnitValsRepTensor ¶ added in v1.1.48
UnitValsRepTensor fills in values of given variable name on unit for a smaller subset of representative units in the layer, into given tensor. This is used for computationally intensive stats or displays that work much better with a smaller number of units. The set of representative units are defined by SetRepIdxs -- all units are used if no such subset has been defined. If tensor is not already big enough to hold the values, it is set to a 1D shape to hold all the values if subset is defined, otherwise it calls UnitValsTensor and is identical to that. Returns error on invalid var name.
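A sketch of setting up and using representative units; the exact SetRepIdxsShape and UnitValsRepTensor argument orders shown here are assumptions based on the descriptions above, not confirmed signatures:

// useRepUnits is a hypothetical setup for representative-unit stats.
func useRepUnits(ly *leabra.Layer, tsr *etensor.Float32) error {
    // use the first 25 units, displayed as a 5x5 grid (assumed argument order)
    idxs := make([]int, 25)
    for i := range idxs {
        idxs[i] = i
    }
    ly.SetRepIdxsShape(idxs, []int{5, 5})
    return ly.UnitValsRepTensor(tsr, "Act") // fill tsr with Act for the rep subset
}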
func (*Layer) UnitValsTensor ¶
UnitValsTensor returns values of given variable name on unit for each unit in the layer, as a float32 tensor in same shape as layer units.
func (*Layer) UnitVarIdx ¶ added in v1.1.0
UnitVarIdx returns the index of given variable within the Neuron, according to *this layer's* UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.
func (*Layer) UnitVarNames ¶
UnitVarNames returns a list of variable names available on the units in this layer
func (*Layer) UnitVarNum ¶ added in v1.1.2
UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.
func (*Layer) UnitVarProps ¶ added in v1.0.0
UnitVarProps returns properties for variables
func (*Layer) UpdateExtFlags ¶ added in v1.0.0
func (ly *Layer) UpdateExtFlags()
UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.
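For example, if a layer is switched from Compare to Target at runtime, the flags need refreshing; a minimal sketch, assuming the SetType method from the emer.Layer structural API:

// retargetLayer is a hypothetical switch of a layer's Type at runtime;
// UpdateExtFlags must be called so the neuron ext flags match the new Type.
func retargetLayer(ly *leabra.Layer) {
    ly.SetType(emer.Target) // SetType comes from the emer.Layer interface
    ly.UpdateExtFlags()
}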
func (*Layer) UpdateParams ¶
func (ly *Layer) UpdateParams()
UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving projections of this layer
func (*Layer) VarRange ¶
VarRange returns the min / max values for the given variable. TODO: support r. and s. projection values.
func (*Layer) WriteWtsJSON ¶
WriteWtsJSON writes the weights for this layer, from the receiver-side perspective, in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
type LayerStru ¶
type LayerStru struct { // [view: -] we need a pointer to ourselves as an LeabraLayer (which subsumes emer.Layer), which can always be used to extract the true underlying type of object when layer is embedded in other structs -- function receivers do not have this ability so this is necessary. LeabraLay LeabraLayer `` /* 299-byte string literal not displayed */ // [view: -] our parent network, in case we need to use it to find other layers etc -- set when added by network Network emer.Network `` /* 141-byte string literal not displayed */ // Name of the layer -- this must be unique within the network, which has a map for quick lookup and layers are typically accessed directly by name Nm string `` /* 151-byte string literal not displayed */ // Class is for applying parameter styles, can be space separated multple tags Cls string `desc:"Class is for applying parameter styles, can be space separated multple tags"` // inactivate this layer -- allows for easy experimentation Off bool `desc:"inactivate this layer -- allows for easy experimentation"` // shape of the layer -- can be 2D for basic layers and 4D for layers with sub-groups (hypercolumns) -- order is outer-to-inner (row major) so Y then X for 2D and for 4D: Y-X unit pools then Y-X neurons within pools Shp etensor.Shape `` /* 219-byte string literal not displayed */ // type of layer -- Hidden, Input, Target, Compare, or extended type in specialized algorithms -- matches against .Class parameter styles (e.g., .Hidden etc) Typ emer.LayerType `` /* 161-byte string literal not displayed */ // the thread number (go routine) to use in updating this layer. The user is responsible for allocating layers to threads, trying to maintain an even distribution across layers and establishing good break-points. Thr int `` /* 216-byte string literal not displayed */ // [view: inline] Spatial relationship to other layer, determines positioning Rel relpos.Rel `view:"inline" desc:"Spatial relationship to other layer, determines positioning"` // position of lower-left-hand corner of layer in 3D space, computed from Rel. Layers are in X-Y width - height planes, stacked vertically in Z axis. Ps mat32.Vec3 `` /* 154-byte string literal not displayed */ // a 0..n-1 index of the position of the layer within list of layers in the network. For Leabra networks, it only has significance in determining who gets which weights for enforcing initial weight symmetry -- higher layers get weights from lower layers. Idx int `` /* 258-byte string literal not displayed */ // indexes of representative units in the layer, for computationally expensive stats or displays RepIxs []int `desc:"indexes of representative units in the layer, for computationally expensive stats or displays"` // shape of representative units in the layer -- if RepIxs is empty or .Shp is nil, use overall layer shape RepShp etensor.Shape `desc:"shape of representative units in the layer -- if RepIxs is empty or .Shp is nil, use overall layer shape"` // list of receiving projections into this layer from other layers RcvPrjns LeabraPrjns `desc:"list of receiving projections into this layer from other layers"` // list of sending projections from this layer to other layers SndPrjns LeabraPrjns `desc:"list of sending projections from this layer to other layers"` }
leabra.LayerStru manages the structural elements of the layer, which are common to any Layer type
func (*LayerStru) ApplyParams ¶
ApplyParams applies given parameter style Sheet to this layer and its recv projections. Calls UpdateParams on anything set to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and error if there were any errors.
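A sketch of applying a params.Sheet style to a single layer, assuming the github.com/emer/emergent/params package and the (bool, error) return described above; the "Layer.Inhib.Layer.Gi" path and value are illustrative only:

// applyLayerStyle is a hypothetical example of styling one layer's params.
func applyLayerStyle(ly *leabra.Layer) error {
    sheet := params.Sheet{
        &params.Sel{Sel: "Layer", Desc: "default layer inhibition",
            Params: params.Params{
                "Layer.Inhib.Layer.Gi": "1.8",
            }},
    }
    _, err := ly.ApplyParams(&sheet, true) // setMsg = true: log each param that is set
    return err
}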
func (*LayerStru) Idx4DFrom2D ¶ added in v1.0.0
func (*LayerStru) InitName ¶
InitName MUST be called to initialize the layer's pointer to itself as an emer.Layer which enables the proper interface methods to be called. Also sets the name, and the parent network that this layer belongs to (which layers may want to retain).
func (*LayerStru) NPools ¶
NPools returns the number of unit sub-pools according to the shape parameters. Currently supported for a 4D shape, where the unit pools are the first 2 Y,X dims and then the units within the pools are the 2nd 2 Y,X dims
func (*LayerStru) NRecvPrjns ¶
func (*LayerStru) NSendPrjns ¶
func (*LayerStru) NonDefaultParams ¶
NonDefaultParams returns a listing of all parameters in the Layer that are not at their default values -- useful for setting param styles etc.
func (*LayerStru) RecipToSendPrjn ¶
RecipToSendPrjn finds the reciprocal projection relative to the given sending projection found within the SendPrjns of this layer. This is then a recv prjn within this layer:
S=A -> R=B recip: R=A <- S=B -- ly = A -- we are the sender of srj and recv of rpj.
returns false if not found.
func (*LayerStru) RecvNameTry ¶ added in v1.2.6
func (*LayerStru) RecvNameTypeTry ¶ added in v1.2.6
func (*LayerStru) RecvPrjns ¶
func (ls *LayerStru) RecvPrjns() *LeabraPrjns
func (*LayerStru) RepShape ¶ added in v1.2.1
RepShape returns the shape to use for representative units
func (*LayerStru) SendNameTry ¶ added in v1.2.6
func (*LayerStru) SendNameTypeTry ¶ added in v1.2.6
func (*LayerStru) SendPrjns ¶
func (ls *LayerStru) SendPrjns() *LeabraPrjns
func (*LayerStru) SetRepIdxsShape ¶ added in v1.2.1
SetRepIdxsShape sets the RepIdxs, and the RepShape as a list of dimension sizes
type LeabraLayer ¶
type LeabraLayer interface {
    emer.Layer

    // AsLeabra returns this layer as a leabra.Layer -- so that the LeabraLayer
    // interface does not need to include accessors to all the basic stuff
    AsLeabra() *Layer

    // SetThread sets the thread number for this layer to run on
    SetThread(thr int)

    // InitWts initializes the weight values in the network, i.e., resetting learning
    // Also calls InitActs
    InitWts()

    // InitActAvg initializes the running-average activation values that drive learning.
    InitActAvg()

    // InitActs fully initializes activation state -- only called automatically during InitWts
    InitActs()

    // InitWtSym initializes the weight symmetry -- higher layers copy weights from lower layers
    InitWtSym()

    // InitExt initializes external input state -- called prior to applying external inputs
    InitExt()

    // ApplyExt applies external input in the form of an etensor.Tensor
    // If the layer is a Target or Compare layer type, then it goes in Targ
    // otherwise it goes in Ext.
    ApplyExt(ext etensor.Tensor)

    // ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats
    // If the layer is a Target or Compare layer type, then it goes in Targ
    // otherwise it goes in Ext
    ApplyExt1D(ext []float64)

    // UpdateExtFlags updates the neuron flags for external input based on current
    // layer Type field -- call this if the Type has changed since the last
    // ApplyExt* method call.
    UpdateExtFlags()

    // RecvPrjns returns the slice of receiving projections for this layer
    RecvPrjns() *LeabraPrjns

    // SendPrjns returns the slice of sending projections for this layer
    SendPrjns() *LeabraPrjns

    // IsTarget returns true if this layer is a Target layer.
    // By default, returns true for layers of Type == emer.Target.
    // Other Target layers include the TRCLayer in deep predictive learning.
    // This is used for turning off BCM hebbian learning,
    // in CosDiffFmActs to set the CosDiff.ModAvgLLrn value
    // for error-modulated level of hebbian learning.
    // It is also used in WtBal to not apply it to target layers.
    // In both cases, Target layers are purely error-driven.
    IsTarget() bool

    // AlphaCycInit handles all initialization at start of new input pattern.
    // Should already have presented the external input to the network at this point.
    // If updtActAvg is true, this includes updating the running-average
    // activations for each layer / pool, and the AvgL running average used
    // in BCM Hebbian learning.
    // The input scaling is updated based on the layer-level running average acts,
    // and this can then change the behavior of the network,
    // so if you want 100% repeatable testing results, set this to false to
    // keep the existing scaling factors (e.g., can pass a train bool to
    // only update during training). This flag also affects the AvgL learning
    // threshold.
    AlphaCycInit(updtActAvg bool)

    // AvgLFmAvgM updates AvgL long-term running average activation that
    // drives BCM Hebbian learning
    AvgLFmAvgM()

    // GScaleFmAvgAct computes the scaling factor for synaptic conductance input
    // based on sending layer average activation.
    // This attempts to automatically adjust for overall differences in raw
    // activity coming into the units to achieve a general target
    // of around .5 to 1 for the integrated G values.
    GScaleFmAvgAct()

    // GenNoise generates random noise for all neurons
    GenNoise()

    // DecayState decays activation state by given proportion (default is ly.Act.Init.Decay)
    DecayState(decay float32)

    // HardClamp hard-clamps the activations in the layer -- called during AlphaCycInit
    // for hard-clamped Input layers
    HardClamp()

    // InitGInc initializes synaptic conductance increments -- optional
    InitGInc()

    // SendGDelta sends change in activation since last sent, to increment recv
    // synaptic conductances G, if above thresholds
    SendGDelta(ltime *Time)

    // GFmInc integrates new synaptic conductances from increments sent during last SendGDelta
    GFmInc(ltime *Time)

    // AvgMaxGe computes the average and max Ge stats, used in inhibition
    AvgMaxGe(ltime *Time)

    // InhibFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools
    InhibFmGeAct(ltime *Time)

    // ActFmG computes rate-code activation from Ge, Gi, Gl conductances
    // and updates learning running-average activations from that Act
    ActFmG(ltime *Time)

    // AvgMaxAct computes the average and max Act stats, used in inhibition
    AvgMaxAct(ltime *Time)

    // CyclePost is called after the standard Cycle update, as a separate
    // network layer loop.
    // This is reserved for any kind of special ad-hoc types that
    // need to do something special after Act is finally computed.
    // For example, sending a neuromodulatory signal such as dopamine.
    CyclePost(ltime *Time)

    // QuarterFinal does updating after end of a quarter
    QuarterFinal(ltime *Time)

    // CosDiffFmActs computes the cosine difference in activation state
    // between minus and plus phases.
    // This is also used for modulating the amount of BCM hebbian learning
    CosDiffFmActs()

    // DWt computes the weight change (learning) -- calls DWt method on sending projections
    DWt()

    // WtFmDWt updates the weights from delta-weight changes -- on the sending projections
    WtFmDWt()

    // WtBalFmWt computes the Weight Balance factors based on average recv weights
    WtBalFmWt()

    // LrateMult sets the new Lrate parameter for Prjns to LrateInit * mult.
    // Useful for implementing learning rate schedules.
    LrateMult(mult float32)
}
LeabraLayer defines the essential algorithmic API for Leabra, at the layer level. These are the methods that the leabra.Network calls on its layers at each step of processing. Other Layer types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.
All of the structural API is in emer.Layer, which this interface also inherits for convenience.
type LeabraNetwork ¶ added in v1.0.5
type LeabraNetwork interface {
    emer.Network

    // AsLeabra returns this network as a leabra.Network -- so that the
    // LeabraNetwork interface does not need to include accessors
    // to all the basic stuff
    AsLeabra() *Network

    // NewLayer creates a new concrete layer of appropriate type for this network
    NewLayer() emer.Layer

    // NewPrjn creates a new concrete projection of appropriate type for this network
    NewPrjn() emer.Prjn

    // AlphaCycInitImpl handles all initialization at start of new input pattern.
    // Should already have presented the external input to the network at this point.
    // If updtActAvg is true, this includes updating the running-average
    // activations for each layer / pool, and the AvgL running average used
    // in BCM Hebbian learning.
    // The input scaling is updated based on the layer-level running average acts,
    // and this can then change the behavior of the network,
    // so if you want 100% repeatable testing results, set this to false to
    // keep the existing scaling factors (e.g., can pass a train bool to
    // only update during training). This flag also affects the AvgL learning
    // threshold.
    AlphaCycInitImpl(updtActAvg bool)

    // CycleImpl runs one cycle of activation updating:
    // * Sends Ge increments from sending to receiving layers
    // * Average and Max Ge stats
    // * Inhibition based on Ge stats and Act Stats (computed at end of Cycle)
    // * Activation from Ge, Gi, and Gl
    // * Average and Max Act stats
    // This basic version doesn't use the time info, but more specialized types do, and we
    // want to keep a consistent API for end-user code.
    CycleImpl(ltime *Time)

    // CyclePostImpl is called after the standard Cycle update, and calls CyclePost
    // on Layers -- this is reserved for any kind of special ad-hoc types that
    // need to do something special after Act is finally computed.
    // For example, sending a neuromodulatory signal such as dopamine.
    CyclePostImpl(ltime *Time)

    // QuarterFinalImpl does updating after end of a quarter
    QuarterFinalImpl(ltime *Time)

    // DWtImpl computes the weight change (learning) based on current
    // running-average activation values
    DWtImpl()

    // WtFmDWtImpl updates the weights from delta-weight changes.
    // Also calls WtBalFmWt every WtBalInterval times
    WtFmDWtImpl()
}
LeabraNetwork defines the essential algorithmic API for Leabra, at the network level. These are the methods that the user calls in their Sim code: * AlphaCycInit * Cycle * QuarterFinal * DWt * WtFmDWt. Because we don't want to have to force the user to use the interface cast in calling these methods, we provide Impl versions here that are the implementations, which the user-facing methods call.
Typically most changes in algorithm can be accomplished directly in the Layer or Prjn level, but sometimes (e.g., in deep) additional full-network passes are required.
All of the structural API is in emer.Network, which this interface also inherits for convenience.
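To make the division of labor concrete, here is a minimal sketch of the per-trial alpha-cycle loop a Sim would typically run using the user-facing Network methods; it assumes the usual leabra.Time fields and methods (CycPerQtr, AlphaCycStart, CycleInc, QuarterInc) and a train flag supplied by the surrounding code:

// alphaCyc is a hypothetical per-trial loop: 4 quarters of cycles, then learning.
func alphaCyc(net *leabra.Network, ltime *leabra.Time, train bool) {
    net.AlphaCycInit(train) // update running-average acts / scaling only when training
    ltime.AlphaCycStart()
    for qtr := 0; qtr < 4; qtr++ {
        for cyc := 0; cyc < ltime.CycPerQtr; cyc++ {
            net.Cycle(ltime)
            ltime.CycleInc()
        }
        net.QuarterFinal(ltime) // minus -> plus phase transitions, stats
        ltime.QuarterInc()
    }
    if train {
        net.DWt()     // weight changes from running-average activations
        net.WtFmDWt() // apply them (and WtBalFmWt every WtBalInterval)
    }
}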
type LeabraPrjn ¶
type LeabraPrjn interface {
    emer.Prjn

    // AsLeabra returns this prjn as a leabra.Prjn -- so that the LeabraPrjn
    // interface does not need to include accessors to all the basic stuff.
    AsLeabra() *Prjn

    // InitWts initializes weight values according to Learn.WtInit params
    InitWts()

    // InitWtSym initializes weight symmetry -- is given the reciprocal projection where
    // the Send and Recv layers are reversed.
    InitWtSym(rpj LeabraPrjn)

    // InitGInc initializes the per-projection synaptic conductance threadsafe increments.
    // This is not typically needed (called during InitWts only) but can be called when needed
    InitGInc()

    // SendGDelta sends the delta-activation from sending neuron index si,
    // to integrate synaptic conductances on receivers
    SendGDelta(si int, delta float32)

    // RecvGInc increments the receiver's synaptic conductances from those of all the projections.
    RecvGInc()

    // DWt computes the weight change (learning) -- on sending projections
    DWt()

    // WtFmDWt updates the synaptic weight values from delta-weight changes -- on sending projections
    WtFmDWt()

    // WtBalFmWt computes the Weight Balance factors based on average recv weights
    WtBalFmWt()

    // LrateMult sets the new Lrate parameter for Prjns to LrateInit * mult.
    // Useful for implementing learning rate schedules.
    LrateMult(mult float32)
}
LeabraPrjn defines the essential algorithmic API for Leabra, at the projection level. These are the methods that the leabra.Layer calls on its prjns at each step of processing. Other Prjn types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.
All of the structural API is in emer.Prjn, which this interface also inherits for convenience.
type LeabraPrjns ¶ added in v1.2.6
type LeabraPrjns []LeabraPrjn
func (*LeabraPrjns) Add ¶ added in v1.2.6
func (pl *LeabraPrjns) Add(p LeabraPrjn)
type LearnNeurParams ¶
type LearnNeurParams struct {
    // [view: inline] parameters for computing running average activations that drive learning
    ActAvg LrnActAvgParams `view:"inline" desc:"parameters for computing running average activations that drive learning"`

    // [view: inline] parameters for computing AvgL long-term running average
    AvgL AvgLParams `view:"inline" desc:"parameters for computing AvgL long-term running average"`

    // [view: inline] parameters for computing cosine diff between minus and plus phase
    CosDiff CosDiffParams `view:"inline" desc:"parameters for computing cosine diff between minus and plus phase"`
}
leabra.LearnNeurParams manages learning-related parameters at the neuron-level. This is mainly the running average activations that drive learning
func (*LearnNeurParams) AvgLFmAvgM ¶
func (ln *LearnNeurParams) AvgLFmAvgM(nrn *Neuron)
AvgLFmAvgM computes the long-term average activation value, and learning factor, from the current AvgM. Called at start of new alpha-cycle.
func (*LearnNeurParams) AvgsFmAct ¶
func (ln *LearnNeurParams) AvgsFmAct(nrn *Neuron)
AvgsFmAct updates the running averages based on current learning activation. Computed after new activation for current cycle is updated.
func (*LearnNeurParams) Defaults ¶
func (ln *LearnNeurParams) Defaults()
func (*LearnNeurParams) InitActAvg ¶
func (ln *LearnNeurParams) InitActAvg(nrn *Neuron)
InitActAvg initializes the running-average activation values that drive learning. Called by InitWts (at start of learning).
func (*LearnNeurParams) Update ¶
func (ln *LearnNeurParams) Update()
type LearnSynParams ¶
type LearnSynParams struct {
    // enable learning for this projection
    Learn bool `desc:"enable learning for this projection"`

    // [viewif: Learn] current effective learning rate (multiplies DWt values, determining rate of change of weights)
    Lrate float32 `viewif:"Learn" desc:"current effective learning rate (multiplies DWt values, determining rate of change of weights)"`

    // [viewif: Learn] initial learning rate -- this is set from Lrate in UpdateParams, which is called when Params are updated, and used in LrateMult to compute a new learning rate for learning rate schedules.
    LrateInit float32 `` /* 209-byte string literal not displayed */

    // [view: inline] [viewif: Learn] parameters for the XCal learning rule
    XCal XCalParams `viewif:"Learn" view:"inline" desc:"parameters for the XCal learning rule"`

    // [view: inline] [viewif: Learn] parameters for the sigmoidal contrast weight enhancement
    WtSig WtSigParams `viewif:"Learn" view:"inline" desc:"parameters for the sigmoidal contrast weight enhancement"`

    // [view: inline] [viewif: Learn] parameters for normalizing weight changes by abs max dwt
    Norm DWtNormParams `viewif:"Learn" view:"inline" desc:"parameters for normalizing weight changes by abs max dwt"`

    // [view: inline] [viewif: Learn] parameters for momentum across weight changes
    Momentum MomentumParams `viewif:"Learn" view:"inline" desc:"parameters for momentum across weight changes"`

    // [view: inline] [viewif: Learn] parameters for balancing strength of weight increases vs. decreases
    WtBal WtBalParams `viewif:"Learn" view:"inline" desc:"parameters for balancing strength of weight increases vs. decreases"`
}
leabra.LearnSynParams manages learning-related parameters at the synapse-level.
func (*LearnSynParams) BCMdWt ¶ added in v1.0.1
func (ls *LearnSynParams) BCMdWt(suAvgSLrn, ruAvgSLrn, ruAvgL float32) float32
BCMdWt returns the BCM Hebbian weight change for AvgSLrn vs. AvgL long-term average floating activation on the receiver.
func (*LearnSynParams) CHLdWt ¶
func (ls *LearnSynParams) CHLdWt(suAvgSLrn, suAvgM, ruAvgSLrn, ruAvgM, ruAvgL float32) (err, bcm float32)
CHLdWt returns the error-driven and BCM Hebbian weight change components for the temporally eXtended Contrastive Attractor Learning (XCAL), CHL version
func (*LearnSynParams) Defaults ¶
func (ls *LearnSynParams) Defaults()
func (*LearnSynParams) LWtFmWt ¶
func (ls *LearnSynParams) LWtFmWt(syn *Synapse)
LWtFmWt updates the linear weight value based on the current effective Wt value. The effective weight is sigmoidally contrast-enhanced relative to the linear weight.
func (*LearnSynParams) Update ¶
func (ls *LearnSynParams) Update()
func (*LearnSynParams) WtFmDWt ¶
func (ls *LearnSynParams) WtFmDWt(wbInc, wbDec float32, dwt, wt, lwt *float32, scale float32)
WtFmDWt updates the synaptic weights from accumulated weight changes. wbInc and wbDec are the weight balance factors, wt is the sigmoidal contrast-enhanced weight, and lwt is the linear weight value.
func (*LearnSynParams) WtFmLWt ¶
func (ls *LearnSynParams) WtFmLWt(syn *Synapse)
WtFmLWt updates the effective weight value based on the current linear Wt value. The effective weight is sigmoidally contrast-enhanced relative to the linear weight.
type LrnActAvgParams ¶
type LrnActAvgParams struct { // [def: 2,4,7] [min: 1] time constant in cycles, which should be milliseconds typically (roughly, how long it takes for value to change significantly -- 1.4x the half-life), for continuously updating the super-short time-scale avg_ss value -- this is provides a pre-integration step before integrating into the avg_s short time scale -- it is particularly important for spiking -- in general 4 is the largest value without starting to impair learning, but a value of 7 can be combined with m_in_s = 0 with somewhat worse results SSTau float32 `` /* 532-byte string literal not displayed */ // [def: 2] [min: 1] time constant in cycles, which should be milliseconds typically (roughly, how long it takes for value to change significantly -- 1.4x the half-life), for continuously updating the short time-scale avg_s value from the super-short avg_ss value (cascade mode) -- avg_s represents the plus phase learning signal that reflects the most recent past information STau float32 `` /* 378-byte string literal not displayed */ // [def: 10] [min: 1] time constant in cycles, which should be milliseconds typically (roughly, how long it takes for value to change significantly -- 1.4x the half-life), for continuously updating the medium time-scale avg_m value from the short avg_s value (cascade mode) -- avg_m represents the minus phase learning signal that reflects the expectation representation prior to experiencing the outcome (in addition to the outcome) -- the default value of 10 generally cannot be exceeded without impairing learning MTau float32 `` /* 518-byte string literal not displayed */ // [def: 0.1,0] [min: 0] [max: 1] how much of the medium term average activation to mix in with the short (plus phase) to compute the Neuron AvgSLrn variable that is used for the unit's short-term average in learning. This is important to ensure that when unit turns off in plus phase (short time scale), enough medium-phase trace remains so that learning signal doesn't just go all the way to 0, at which point no learning would take place -- typically need faster time constant for updating S such that this trace of the M signal is lost -- can set SSTau=7 and set this to 0 but learning is generally somewhat worse LrnM float32 `` /* 618-byte string literal not displayed */ // [def: 0.15] [min: 0] [max: 1] initial value for average Init float32 `def:"0.15" min:"0" max:"1" desc:"initial value for average"` // [view: -] rate = 1 / tau SSDt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"` // [view: -] rate = 1 / tau SDt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"` // [view: -] rate = 1 / tau MDt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"` // [view: -] 1-LrnM LrnS float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"1-LrnM"` }
LrnActAvgParams has rate constants for averaging over activations at different time scales, to produce the running average activation values that then drive learning in the XCAL learning rules
func (*LrnActAvgParams) AvgsFmAct ¶
func (aa *LrnActAvgParams) AvgsFmAct(ruAct float32, avgSS, avgS, avgM, avgSLrn *float32)
AvgsFmAct computes averages based on current act
func (*LrnActAvgParams) Defaults ¶
func (aa *LrnActAvgParams) Defaults()
func (*LrnActAvgParams) Update ¶
func (aa *LrnActAvgParams) Update()
type MomentumParams ¶
type MomentumParams struct {
    // [def: true] whether to use standard simple momentum
    On bool `def:"true" desc:"whether to use standard simple momentum"`

    // [def: 10] [viewif: On] [min: 1] time constant factor for integration of momentum -- 1/tau is dt (e.g., .1), and 1-1/tau (e.g., .95 or .9) is traditional momentum time-integration factor
    MTau float32 `` /* 189-byte string literal not displayed */

    // [def: 0.1] [viewif: On] [min: 0] overall learning rate multiplier to compensate for changes due to JUST momentum without normalization -- allows for a common master learning rate to be used between different conditions -- generally should use .1 to compensate for just momentum itself
    LrComp float32 `` /* 288-byte string literal not displayed */

    // [view: -] rate constant of momentum integration = 1 / m_tau
    MDt float32 `inactive:"+" view:"-" json:"-" xml:"-" desc:"rate constant of momentum integration = 1 / m_tau"`

    // [view: -] complement rate constant of momentum integration = 1 - (1 / m_tau)
    MDtC float32 `inactive:"+" view:"-" json:"-" xml:"-" desc:"complement rate constant of momentum integration = 1 - (1 / m_tau)"`
}
MomentumParams implements standard simple momentum -- accentuates consistent directions of weight change and cancels out dithering -- biologically captures slower timecourse of longer-term plasticity mechanisms.
func (*MomentumParams) Defaults ¶
func (mp *MomentumParams) Defaults()
func (*MomentumParams) MomentFmDWt ¶
func (mp *MomentumParams) MomentFmDWt(moment *float32, dwt float32) float32
MomentFmDWt updates synaptic moment variable based on dwt weight change value and returns new momentum factor * LrComp
func (*MomentumParams) Update ¶
func (mp *MomentumParams) Update()
type Network ¶
type Network struct {
    NetworkStru

    // [def: 10] how frequently to update the weight balance average weight factor -- relatively expensive
    WtBalInterval int `def:"10" desc:"how frequently to update the weight balance average weight factor -- relatively expensive"`

    // counter for how long it has been since last WtBal
    WtBalCtr int `inactive:"+" desc:"counter for how long it has been since last WtBal"`
}
leabra.Network has parameters for running a basic rate-coded Leabra network
func (*Network) AlphaCycInit ¶
AlphaCycInit handles all initialization at start of new input pattern. Should already have presented the external input to the network at this point. If updtActAvg is true, this includes updating the running-average activations for each layer / pool, and the AvgL running average used in BCM Hebbian learning. The input scaling is updated based on the layer-level running average acts, and this can then change the behavior of the network, so if you want 100% repeatable testing results, set this to false to keep the existing scaling factors (e.g., can pass a train bool to only update during training). This flag also affects the AvgL learning threshold
func (*Network) AlphaCycInitImpl ¶ added in v1.0.5
AlphaCycInitImpl handles all initialization at start of new input pattern. Should already have presented the external input to the network at this point. If updtActAvg is true, this includes updating the running-average activations for each layer / pool, and the AvgL running average used in BCM Hebbian learning. The input scaling is updated based on the layer-level running average acts, and this can then change the behavior of the network, so if you want 100% repeatable testing results, set this to false to keep the existing scaling factors (e.g., can pass a train bool to only update during training). This flag also affects the AvgL learning threshold.
func (*Network) CollectDWts ¶ added in v1.0.5
CollectDWts writes all of the synaptic DWt values to the given dwts slice, which is pre-allocated to the given nwts size if dwts is nil, in which case the method returns true so that the actual length of dwts can be passed next time around. Used for MPI sharing of weight changes across processors.
func (*Network) Cycle ¶
Cycle runs one cycle of activation updating: * Sends Ge increments from sending to receiving layers * Average and Max Ge stats * Inhibition based on Ge stats and Act Stats (computed at end of Cycle) * Activation from Ge, Gi, and Gl * Average and Max Act stats. This basic version doesn't use the time info, but more specialized types do, and we want to keep a consistent API for end-user code.
func (*Network) CycleImpl ¶ added in v1.0.5
CycleImpl runs one cycle of activation updating: * Sends Ge increments from sending to receiving layers * Average and Max Ge stats * Inhibition based on Ge stats and Act Stats (computed at end of Cycle) * Activation from Ge, Gi, and Gl * Average and Max Act stats. This basic version doesn't use the time info, but more specialized types do, and we want to keep a consistent API for end-user code.
func (*Network) CyclePost ¶ added in v1.0.5
CyclePost is called after the standard Cycle update, and calls CyclePost on Layers -- this is reserved for any kind of special ad-hoc types that need to do something special after Act is finally computed. For example, sending a neuromodulatory signal such as dopamine.
func (*Network) CyclePostImpl ¶ added in v1.0.5
CyclePostImpl is called after the standard Cycle update, and calls CyclePost on Layers -- this is reserved for any kind of special ad-hoc types that need to do something special after Act is finally computed. For example, sending a neuromodulatory signal such as dopamine.
func (*Network) DWt ¶
func (nt *Network) DWt()
DWt computes the weight change (learning) based on current running-average activation values
func (*Network) DWtImpl ¶ added in v1.0.5
func (nt *Network) DWtImpl()
DWtImpl computes the weight change (learning) based on current running-average activation values
func (*Network) DecayState ¶ added in v1.1.22
DecayState decays activation state by given proportion, e.g., 1 = decay completely, and 0 = decay not at all. This is called automatically in AlphaCycInit, but is avail here for ad-hoc decay cases.
func (*Network) Defaults ¶
func (nt *Network) Defaults()
Defaults sets all the default parameters for all layers and projections
func (*Network) GScaleFmAvgAct ¶ added in v1.0.0
func (nt *Network) GScaleFmAvgAct()
GScaleFmAvgAct computes the scaling factor for synaptic input conductances G, based on sending layer average activation. This attempts to automatically adjust for overall differences in raw activity coming into the units to achieve a general target of around .5 to 1 for the integrated Ge value. This is automatically done during AlphaCycInit, but if scaling parameters are changed at any point thereafter during AlphaCyc, this must be called.
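For example, if projection scaling is changed mid-alpha-cycle, this must be called for the change to take effect; a minimal sketch, where the WtScale.Rel field on leabra.Prjn and the choice of the first receiving projection are assumptions for illustration:

// rescalePrjn is a hypothetical mid-alpha-cycle change to a projection's
// relative scaling; WtScale.Rel is assumed to be the relevant Prjn field.
func rescalePrjn(net *leabra.Network, ly *leabra.Layer, rel float32) {
    pj := ly.RcvPrjns[0].AsLeabra() // first receiving projection, for illustration
    pj.WtScale.Rel = rel
    net.GScaleFmAvgAct() // required for the new scaling to take effect
}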
func (*Network) InhibFmGeAct ¶
InhibFmGeAct computes inhibition Gi from Ge and Act stats within relevant Pools
func (*Network) InitActs ¶
func (nt *Network) InitActs()
InitActs fully initializes activation state -- not automatically called
func (*Network) InitExt ¶
func (nt *Network) InitExt()
InitExt initializes external input state -- call prior to applying external inputs to layers
func (*Network) InitGInc ¶ added in v1.0.0
func (nt *Network) InitGInc()
InitGInc initializes the Ge excitatory and Gi inhibitory conductance accumulation states, including ActSent and G*Raw values. Called at start of trial always (at layer level), and can be called optionally when delta-based Ge computation needs to be updated (e.g., weights might have changed strength)
func (*Network) InitTopoScales ¶ added in v1.1.14
func (nt *Network) InitTopoScales()
InitTopoScales initializes synapse-specific scale parameters from prjn types that support them, with flags set to support it. These include: prjn.PoolTile, prjn.Circle. Call before InitWts if using Topo wts.
func (*Network) InitWts ¶
func (nt *Network) InitWts()
InitWts initializes synaptic weights and all other associated long-term state variables including running-average state values (e.g., layer running average activations etc)
func (*Network) LayersSetOff ¶ added in v1.0.0
LayersSetOff sets the Off flag for all layers to given setting
func (*Network) LrateMult ¶ added in v1.0.0
LrateMult sets the new Lrate parameter for Prjns to LrateInit * mult. Useful for implementing learning rate schedules.
func (*Network) QuarterFinal ¶
QuarterFinal does updating after end of a quarter
func (*Network) QuarterFinalImpl ¶ added in v1.0.5
QuarterFinalImpl does updating after end of a quarter
func (*Network) SendGDelta ¶
SendGDelta sends the change in activation since last sent, if above thresholds, and integrates sent deltas into GeRaw and time-integrated Ge values
func (*Network) SetDWts ¶ added in v1.0.5
SetDWts sets the DWt weight changes from given array of floats, which must be correct size
func (*Network) SizeReport ¶ added in v1.1.6
SizeReport returns a string reporting the size of each layer and projection in the network, and total memory footprint.
func (*Network) SynVarNames ¶ added in v1.1.0
SynVarNames returns the names of all the variables on the synapses in this network. Not all projections need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!
func (*Network) SynVarProps ¶ added in v1.1.0
SynVarProps returns properties for variables
func (*Network) ThreadAlloc ¶ added in v1.1.6
ThreadAlloc allocates layers to given number of threads, attempting to evenly divide computation. Returns report of thread allocations and estimated computational cost per thread.
func (*Network) ThreadReport ¶ added in v1.1.6
ThreadReport returns report of thread allocations and estimated computational cost per thread.
func (*Network) UnLesionNeurons ¶ added in v1.0.0
func (nt *Network) UnLesionNeurons()
UnLesionNeurons unlesions neurons in all layers in the network. Provides a clean starting point for subsequent lesion experiments.
func (*Network) UnitVarNames ¶ added in v1.1.0
UnitVarNames returns a list of variable names available on the units in this network. Not all layers need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!
func (*Network) UnitVarProps ¶ added in v1.1.0
UnitVarProps returns properties for variables
func (*Network) UpdateExtFlags ¶ added in v1.0.0
func (nt *Network) UpdateExtFlags()
UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.
func (*Network) UpdateParams ¶
func (nt *Network) UpdateParams()
UpdateParams updates all the derived parameters if any have changed, for all layers and projections
func (*Network) WtBalFmWt ¶
func (nt *Network) WtBalFmWt()
WtBalFmWt updates the weight balance factors based on average recv weights
func (*Network) WtFmDWt ¶
func (nt *Network) WtFmDWt()
WtFmDWt updates the weights from delta-weight changes. Also calls WtBalFmWt every WtBalInterval times
func (*Network) WtFmDWtImpl ¶ added in v1.0.5
func (nt *Network) WtFmDWtImpl()
WtFmDWtImpl updates the weights from delta-weight changes. Also calls WtBalFmWt every WtBalInterval times
type NetworkStru ¶
type NetworkStru struct { // [view: -] we need a pointer to ourselves as an emer.Network, which can always be used to extract the true underlying type of object when network is embedded in other structs -- function receivers do not have this ability so this is necessary. EmerNet emer.Network `` /* 274-byte string literal not displayed */ // overall name of network -- helps discriminate if there are multiple Nm string `desc:"overall name of network -- helps discriminate if there are multiple"` // list of layers Layers emer.Layers `desc:"list of layers"` // filename of last weights file loaded or saved WtsFile string `desc:"filename of last weights file loaded or saved"` // [view: -] map of name to layers -- layer names must be unique LayMap map[string]emer.Layer `view:"-" desc:"map of name to layers -- layer names must be unique"` // [view: -] map of layer classes -- made during Build LayClassMap map[string][]string `view:"-" desc:"map of layer classes -- made during Build"` // [view: -] minimum display position in network MinPos mat32.Vec3 `view:"-" desc:"minimum display position in network"` // [view: -] maximum display position in network MaxPos mat32.Vec3 `view:"-" desc:"maximum display position in network"` // optional metadata that is saved in network weights files -- e.g., can indicate number of epochs that were trained, or any other information about this network that would be useful to save MetaData map[string]string `` /* 194-byte string literal not displayed */ // number of parallel threads (go routines) to use -- this is computed directly from the Layers which you must explicitly allocate to different threads -- updated during Build of network NThreads int `` /* 203-byte string literal not displayed */ // if set, runtime.LockOSThread() is called on the compute threads, which can be faster on large networks on some architectures -- experimentation is recommended LockThreads bool `` /* 165-byte string literal not displayed */ // [view: -] layers per thread -- outer group is threads and inner is layers operated on by that thread -- based on user-assigned threads, initialized during Build ThrLay [][]emer.Layer `` /* 179-byte string literal not displayed */ // [view: -] layer function channels, per thread ThrChans []LayFunChan `view:"-" desc:"layer function channels, per thread"` // [view: -] timers for each thread, so you can see how evenly the workload is being distributed ThrTimes []timer.Time `view:"-" desc:"timers for each thread, so you can see how evenly the workload is being distributed"` // [view: -] timers for each major function (step of processing) FunTimes map[string]*timer.Time `view:"-" desc:"timers for each major function (step of processing)"` // [view: -] network-level wait group for synchronizing threaded layer calls WaitGp sync.WaitGroup `view:"-" desc:"network-level wait group for synchronizing threaded layer calls"` }
leabra.NetworkStru holds the basic structural components of a network (layers)
func (*NetworkStru) AddLayer ¶
AddLayer adds a new layer with given name and shape to the network. 2D and 4D layer shapes are generally preferred but not essential -- see AddLayer2D and 4D for convenience methods for those. 4D layers enable pool (unit-group) level inhibition in Leabra networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each unit group having 4 rows (Y) of 5 (X) units.
func (*NetworkStru) AddLayer2D ¶
AddLayer2D adds a new layer with given name and 2D shape to the network. 2D and 4D layer shapes are generally preferred but not essential.
func (*NetworkStru) AddLayer4D ¶
func (nt *NetworkStru) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ emer.LayerType) emer.Layer
AddLayer4D adds a new layer with given name and 4D shape to the network. 4D layers enable pool (unit-group) level inhibition in Leabra networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each pool having 4 rows (Y) of 5 (X) neurons.
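A minimal construction sketch tying the shape conventions together; the layer names, sizes, and Full projection pattern are illustrative only, and the emer, prjn, and leabra import paths shown are the usual v1 locations:

package main

import (
    "github.com/emer/emergent/emer"
    "github.com/emer/emergent/prjn"
    "github.com/emer/leabra/leabra"
)

func main() {
    net := &leabra.Network{}
    net.InitName(net, "demo")

    inp := net.AddLayer2D("Input", 5, 5, emer.Input)         // 5 rows (Y) of 5 (X) units
    hid := net.AddLayer4D("Hidden", 3, 2, 4, 5, emer.Hidden) // 3x2 pools of 4x5 neurons
    out := net.AddLayer2D("Output", 5, 5, emer.Target)

    full := prjn.NewFull()
    net.ConnectLayers(inp, hid, full, emer.Forward)
    net.BidirConnectLayers(hid, out, full) // Forward + Back

    net.Defaults()
    if err := net.Build(); err != nil { // allocates Neurons, Pools, Synapses
        panic(err)
    }
    net.InitWts()
}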
func (*NetworkStru) AddLayerInit ¶ added in v1.0.0
AddLayerInit is an implementation routine that takes a given layer and adds it to the network, and initializes and configures it properly.
func (*NetworkStru) AllParams ¶
func (nt *NetworkStru) AllParams() string
AllParams returns a listing of all parameters in the Network.
func (*NetworkStru) AllWtScales ¶ added in v1.1.22
func (nt *NetworkStru) AllWtScales() string
AllWtScales returns a listing of all WtScale parameters in the Network in all Layers, Recv projections. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.
func (*NetworkStru) ApplyParams ¶
ApplyParams applies given parameter style Sheet to layers and prjns in this network. Calls UpdateParams to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and error if there were any errors.
func (*NetworkStru) BidirConnectLayerNames ¶ added in v1.0.0
func (nt *NetworkStru) BidirConnectLayerNames(low, high string, pat prjn.Pattern) (lowlay, highlay emer.Layer, fwdpj, backpj emer.Prjn, err error)
BidirConnectLayerNames establishes bidirectional projections between two layers, referenced by name, with low = the lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) BidirConnectLayers ¶ added in v1.0.0
func (nt *NetworkStru) BidirConnectLayers(low, high emer.Layer, pat prjn.Pattern) (fwdpj, backpj emer.Prjn)
BidirConnectLayers establishes bidirectional projections between two layers, with low = lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) BidirConnectLayersPy ¶ added in v1.1.10
func (nt *NetworkStru) BidirConnectLayersPy(low, high emer.Layer, pat prjn.Pattern)
BidirConnectLayersPy establishes bidirectional projections between two layers, with low = lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build. Py = python version with no return vals.
func (*NetworkStru) Bounds ¶
func (nt *NetworkStru) Bounds() (min, max mat32.Vec3)
func (*NetworkStru) BoundsUpdt ¶
func (nt *NetworkStru) BoundsUpdt()
BoundsUpdt updates the Min / Max display bounds for 3D display
func (*NetworkStru) Build ¶
func (nt *NetworkStru) Build() error
Build constructs the layer and projection state based on the layer shapes and patterns of interconnectivity
func (*NetworkStru) BuildThreads ¶
func (nt *NetworkStru) BuildThreads()
BuildThreads constructs the layer thread allocation based on Thread setting in the layers
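A sketch of allocating layers to threads by hand, assuming two hidden layers named Hidden1 / Hidden2 exist in the network; the normal flow is to call SetThread before Build, which in turn calls BuildThreads:

// allocThreads is a hypothetical manual thread assignment; Build / BuildThreads
// then derive the per-thread layer lists from these settings.
func allocThreads(net *leabra.Network) {
    net.LayerByName("Hidden1").(leabra.LeabraLayer).SetThread(0)
    net.LayerByName("Hidden2").(leabra.LeabraLayer).SetThread(1)
    net.BuildThreads() // or simply (re)call Build, which calls this
}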
func (*NetworkStru) ConnectLayerNames ¶
func (nt *NetworkStru) ConnectLayerNames(send, recv string, pat prjn.Pattern, typ emer.PrjnType) (rlay, slay emer.Layer, pj emer.Prjn, err error)
ConnectLayerNames establishes a projection between two layers, referenced by name adding to the recv and send projection lists on each side of the connection. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) ConnectLayers ¶
func (nt *NetworkStru) ConnectLayers(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType) emer.Prjn
ConnectLayers establishes a projection between two layers, adding to the recv and send projection lists on each side of the connection. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) ConnectLayersPrjn ¶ added in v1.0.0
func (nt *NetworkStru) ConnectLayersPrjn(send, recv emer.Layer, pat prjn.Pattern, typ emer.PrjnType, pj emer.Prjn) emer.Prjn
ConnectLayersPrjn makes connection using given projection between two layers, adding given prjn to the recv and send projection lists on each side of the connection. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) FunTimerStart ¶
func (nt *NetworkStru) FunTimerStart(fun string)
FunTimerStart starts function timer for given function name -- ensures creation of timer
func (*NetworkStru) FunTimerStop ¶
func (nt *NetworkStru) FunTimerStop(fun string)
FunTimerStop stops function timer -- timer must already exist
func (*NetworkStru) InitName ¶
func (nt *NetworkStru) InitName(net emer.Network, name string)
InitName MUST be called to initialize the network's pointer to itself as an emer.Network which enables the proper interface methods to be called. Also sets the name.
func (*NetworkStru) KeyLayerParams ¶ added in v1.2.8
func (nt *NetworkStru) KeyLayerParams() string
KeyLayerParams returns a listing for all layers in the network, of the most important layer-level params (specific to each algorithm).
func (*NetworkStru) KeyPrjnParams ¶ added in v1.2.8
func (nt *NetworkStru) KeyPrjnParams() string
KeyPrjnParams returns a listing for all Recv projections in the network, of the most important projection-level params (specific to each algorithm).
func (*NetworkStru) Label ¶
func (nt *NetworkStru) Label() string
func (*NetworkStru) LateralConnectLayer ¶ added in v1.0.0
LateralConnectLayer establishes a self-projection within given layer. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) LateralConnectLayerPrjn ¶ added in v1.0.0
func (nt *NetworkStru) LateralConnectLayerPrjn(lay emer.Layer, pat prjn.Pattern, pj emer.Prjn) emer.Prjn
LateralConnectLayerPrjn makes lateral self-projection using given projection. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkStru) LayerByName ¶
func (nt *NetworkStru) LayerByName(name string) emer.Layer
LayerByName returns a layer by looking it up by name in the layer map (nil if not found). Will create the layer map if it is nil or a different size than layers slice, but otherwise needs to be updated manually.
func (*NetworkStru) LayerByNameTry ¶
func (nt *NetworkStru) LayerByNameTry(name string) (emer.Layer, error)
LayerByNameTry returns a layer by looking it up by name -- emits a log error message if layer is not found
func (*NetworkStru) LayersByClass ¶ added in v1.1.47
func (nt *NetworkStru) LayersByClass(classes ...string) []string
LayersByClass returns a list of layer names by given class(es). Lists are compiled when the network Build() function is called. The layer Type is always included as a Class, along with any other space-separated strings specified in Class for parameter styling, etc. If no classes are passed, all layer names in order are returned.
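A brief lookup sketch, assuming a built network and the concrete leabra.Layer type for the interface cast (the cast idiom is noted in the PrjnStru.Recv docs below):

    func hiddenLayers(net *leabra.Network) (*leabra.Layer, []string, error) {
        ly, err := net.LayerByNameTry("Hidden")
        if err != nil {
            return nil, nil, err
        }
        hid := ly.(*leabra.Layer)            // cast the emer.Layer interface to the concrete type
        names := net.LayersByClass("Hidden") // layers whose Type or Class includes "Hidden"
        return hid, names, nil
    }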
func (*NetworkStru) Layout ¶
func (nt *NetworkStru) Layout()
Layout computes the 3D layout of layers based on their relative position settings
func (*NetworkStru) MakeLayMap ¶
func (nt *NetworkStru) MakeLayMap()
MakeLayMap updates layer map based on current layers
func (*NetworkStru) NLayers ¶
func (nt *NetworkStru) NLayers() int
func (*NetworkStru) NonDefaultParams ¶
func (nt *NetworkStru) NonDefaultParams() string
NonDefaultParams returns a listing of all parameters in the Network that are not at their default values -- useful for setting param styles etc.
func (*NetworkStru) OpenWtsCpp ¶ added in v1.0.0
func (nt *NetworkStru) OpenWtsCpp(filename gi.FileName) error
OpenWtsCpp opens network weights (and any other state that adapts with learning) from old C++ emergent format. If filename has .gz extension, then file is gzip uncompressed.
func (*NetworkStru) OpenWtsJSON ¶
func (nt *NetworkStru) OpenWtsJSON(filename gi.FileName) error
OpenWtsJSON opens network weights (and any other state that adapts with learning) from a JSON-formatted file. If filename has .gz extension, then file is gzip uncompressed.
func (*NetworkStru) ReadWtsCpp ¶ added in v1.0.0
func (nt *NetworkStru) ReadWtsCpp(r io.Reader) error
ReadWtsCpp reads the weights from old C++ emergent format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.
func (*NetworkStru) ReadWtsJSON ¶
func (nt *NetworkStru) ReadWtsJSON(r io.Reader) error
ReadWtsJSON reads network weights from the receiver-side perspective in a JSON text format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.
func (*NetworkStru) SaveWtsJSON ¶
func (nt *NetworkStru) SaveWtsJSON(filename gi.FileName) error
SaveWtsJSON saves network weights (and any other state that adapts with learning) to a JSON-formatted file. If filename has .gz extension, then file is gzip compressed.
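A minimal save / reload sketch; the gi.FileName argument is a string-based file name type, and a .gz extension triggers gzip compression on save and decompression on open:

    func saveAndReload(net *leabra.Network) error {
        if err := net.SaveWtsJSON(gi.FileName("trained.wts.gz")); err != nil {
            return err
        }
        return net.OpenWtsJSON(gi.FileName("trained.wts.gz"))
    }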
func (*NetworkStru) SetWts ¶ added in v1.0.0
func (nt *NetworkStru) SetWts(nw *weights.Network) error
SetWts sets the weights for this network from weights.Network decoded values
func (*NetworkStru) StartThreads ¶
func (nt *NetworkStru) StartThreads()
StartThreads starts up the computation threads, which monitor the channels for work
func (*NetworkStru) StdVertLayout ¶
func (nt *NetworkStru) StdVertLayout()
StdVertLayout arranges layers in a standard vertical (z axis stack) layout, by setting the Rel settings
func (*NetworkStru) StopThreads ¶
func (nt *NetworkStru) StopThreads()
StopThreads stops the computation threads
func (*NetworkStru) ThrLayFun ¶
func (nt *NetworkStru) ThrLayFun(fun func(ly LeabraLayer), funame string)
ThrLayFun calls the given function on each layer, using threaded (goroutine worker) computation if NThreads > 1, and otherwise just iterates over the layers in the current thread.
func (*NetworkStru) ThrTimerReset ¶
func (nt *NetworkStru) ThrTimerReset()
ThrTimerReset resets the per-thread timers
func (*NetworkStru) ThrWorker ¶
func (nt *NetworkStru) ThrWorker(tt int)
ThrWorker is the worker function run by the worker threads
func (*NetworkStru) TimerReport ¶
func (nt *NetworkStru) TimerReport()
TimerReport reports the amount of time spent in each function, and in each thread
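A hedged sketch of the threading workflow; the SetThread method on layers is assumed from the emer.Layer interface and is not documented in this section:

    func runThreaded(net *leabra.Network) {
        if hid := net.LayerByName("Hidden"); hid != nil {
            hid.SetThread(1) // assign this layer to worker thread 1 (assumed emer.Layer method)
        }
        net.BuildThreads() // recompute per-thread layer lists from the Thread settings
        net.StartThreads() // launch the persistent worker goroutines
        // ... run the model ...
        net.TimerReport() // per-function and per-thread time usage
        net.StopThreads()
    }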
func (*NetworkStru) VarRange ¶
func (nt *NetworkStru) VarRange(varNm string) (min, max float32, err error)
VarRange returns the min / max values for the given variable. TODO: support r. and s. projection values.
func (*NetworkStru) WriteWtsJSON ¶
func (nt *NetworkStru) WriteWtsJSON(w io.Writer) error
WriteWtsJSON writes the weights from this network from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
type NeurFlags ¶
type NeurFlags int32
NeurFlags are bit-flags encoding relevant binary state for neurons
const ( // NeurOff flag indicates that this neuron has been turned off (i.e., lesioned) NeurOff NeurFlags = iota // NeurHasExt means the neuron has external input in its Ext field NeurHasExt // NeurHasTarg means the neuron has external target input in its Targ field NeurHasTarg // NeurHasCmpr means the neuron has external comparison input in its Targ field -- used for computing // comparison statistics but does not drive neural activity ever NeurHasCmpr NeurFlagsN )
The neuron flags
func (*NeurFlags) FromString ¶
func (NeurFlags) MarshalJSON ¶
func (*NeurFlags) UnmarshalJSON ¶
type Neuron ¶
type Neuron struct { // bit flags for binary state variables Flags NeurFlags `desc:"bit flags for binary state variables"` // index of the sub-level inhibitory pool that this neuron is in (only for 4D shapes, the pool (unit-group / hypercolumn) structure level) -- indices start at 1 -- 0 is layer-level pool (is 0 if no sub-pools). SubPool int32 `` /* 214-byte string literal not displayed */ // rate-coded activation value reflecting final output of neuron communicated to other neurons, typically in range 0-1. This value includes adaptation and synaptic depression / facilitation effects which produce temporal contrast (see ActLrn for version without this). For rate-code activation, this is the noisy-x-over-x-plus-one (NXX1) function; for discrete spiking it is computed from the inverse of the inter-spike interval (ISI), and Spike reflects the discrete spikes. Act float32 `` /* 477-byte string literal not displayed */ // learning activation value, reflecting *dendritic* activity that is not affected by synaptic depression or adaptation channels which are located near the axon hillock. This is what drives the Avg* values that drive learning. Computationally, neurons strongly discount the signals sent to other neurons to provide temporal contrast, but need to learn based on a more stable reflection of their overall inputs in the dendrites. ActLrn float32 `` /* 436-byte string literal not displayed */ // total excitatory synaptic conductance -- the net excitatory input to the neuron -- does *not* include Gbar.E Ge float32 `desc:"total excitatory synaptic conductance -- the net excitatory input to the neuron -- does *not* include Gbar.E"` // total inhibitory synaptic conductance -- the net inhibitory input to the neuron -- does *not* include Gbar.I Gi float32 `desc:"total inhibitory synaptic conductance -- the net inhibitory input to the neuron -- does *not* include Gbar.I"` // total potassium conductance, typically reflecting sodium-gated potassium currents involved in adaptation effects -- does *not* include Gbar.K Gk float32 `` /* 148-byte string literal not displayed */ // net current produced by all channels -- drives update of Vm Inet float32 `desc:"net current produced by all channels -- drives update of Vm"` // membrane potential -- integrates Inet current over time Vm float32 `desc:"membrane potential -- integrates Inet current over time"` // target value: drives learning to produce this activation value Targ float32 `desc:"target value: drives learning to produce this activation value"` // external input: drives activation of unit from outside influences (e.g., sensory input) Ext float32 `desc:"external input: drives activation of unit from outside influences (e.g., sensory input)"` // super-short time-scale average of ActLrn activation -- provides the lowest-level time integration -- for spiking this integrates over spikes before subsequent averaging, and it is also useful for rate-code to provide a longer time integral overall AvgSS float32 `` /* 254-byte string literal not displayed */ // short time-scale average of ActLrn activation -- tracks the most recent activation states (integrates over AvgSS values), and represents the plus phase for learning in XCAL algorithms AvgS float32 `` /* 190-byte string literal not displayed */ // medium time-scale average of ActLrn activation -- integrates over AvgS values, and represents the minus phase for learning in XCAL algorithms AvgM float32 `` /* 148-byte string literal not displayed */ // long time-scale average of medium-time scale (trial 
level) activation, used for the BCM-style floating threshold in XCAL AvgL float32 `` /* 127-byte string literal not displayed */ // how much to learn based on the long-term floating threshold (AvgL) for BCM-style Hebbian learning -- is modulated by level of AvgL itself (stronger Hebbian as average activation goes higher) and optionally the average amount of error experienced in the layer (to retain a common proportionality with the level of error-driven learning across layers) AvgLLrn float32 `` /* 356-byte string literal not displayed */ // short time-scale activation average that is actually used for learning -- typically includes a small contribution from AvgM in addition to mostly AvgS, as determined by LrnActAvgParams.LrnM -- important to ensure that when unit turns off in plus phase (short time scale), enough medium-phase trace remains so that learning signal doesn't just go all the way to 0, at which point no learning would take place AvgSLrn float32 `` /* 414-byte string literal not displayed */ // the activation state at start of current alpha cycle (same as the state at end of previous cycle) ActQ0 float32 `desc:"the activation state at start of current alpha cycle (same as the state at end of previous cycle)"` // the activation state at end of first quarter of current alpha cycle ActQ1 float32 `desc:"the activation state at end of first quarter of current alpha cycle"` // the activation state at end of second quarter of current alpha cycle ActQ2 float32 `desc:"the activation state at end of second quarter of current alpha cycle"` // the activation state at end of third quarter, which is the traditional posterior-cortical minus phase activation ActM float32 `desc:"the activation state at end of third quarter, which is the traditional posterior-cortical minus phase activation"` // the activation state at end of fourth quarter, which is the traditional posterior-cortical plus_phase activation ActP float32 `desc:"the activation state at end of fourth quarter, which is the traditional posterior-cortical plus_phase activation"` // ActP - ActM -- difference between plus and minus phase acts -- reflects the individual error gradient for this neuron in standard error-driven learning terms ActDif float32 `` /* 164-byte string literal not displayed */ // delta activation: change in Act from one cycle to next -- can be useful to track where changes are taking place ActDel float32 `desc:"delta activation: change in Act from one cycle to next -- can be useful to track where changes are taking place"` // average activation (of final plus phase activation state) over long time intervals (time constant = DtPars.AvgTau -- typically 200) -- useful for finding hog units and seeing overall distribution of activation ActAvg float32 `` /* 216-byte string literal not displayed */ // noise value added to unit (ActNoiseParams determines distribution, and when / where it is added) Noise float32 `desc:"noise value added to unit (ActNoiseParams determines distribution, and when / where it is added)"` // aggregated synaptic inhibition (from Inhib projections) -- time integral of GiRaw -- this is added with computed FFFB inhibition to get the full inhibition in Gi GiSyn float32 `` /* 168-byte string literal not displayed */ // total amount of self-inhibition -- time-integrated to avoid oscillations GiSelf float32 `desc:"total amount of self-inhibition -- time-integrated to avoid oscillations"` // last activation value sent (only send when diff is over threshold) ActSent float32 `desc:"last activation value 
sent (only send when diff is over threshold)"` // raw excitatory conductance (net input) received from sending units (send delta's are added to this value) GeRaw float32 `desc:"raw excitatory conductance (net input) received from sending units (send delta's are added to this value)"` // raw inhibitory conductance (net input) received from sending units (send delta's are added to this value) GiRaw float32 `desc:"raw inhibitory conductance (net input) received from sending units (send delta's are added to this value)"` // conductance of sodium-gated potassium channel (KNa) fast dynamics (M-type) -- produces accommodation / adaptation of firing GknaFast float32 `` /* 130-byte string literal not displayed */ // conductance of sodium-gated potassium channel (KNa) medium dynamics (Slick) -- produces accommodation / adaptation of firing GknaMed float32 `` /* 131-byte string literal not displayed */ // conductance of sodium-gated potassium channel (KNa) slow dynamics (Slack) -- produces accommodation / adaptation of firing GknaSlow float32 `` /* 129-byte string literal not displayed */ // whether neuron has spiked or not (0 or 1), for discrete spiking neurons. Spike float32 `desc:"whether neuron has spiked or not (0 or 1), for discrete spiking neurons."` // current inter-spike-interval -- counts up since last spike. Starts at -1 when initialized. ISI float32 `desc:"current inter-spike-interval -- counts up since last spike. Starts at -1 when initialized."` // average inter-spike-interval -- average time interval between spikes. Starts at -1 when initialized, and goes to -2 after first spike, and is only valid after the second spike post-initialization. ISIAvg float32 `` /* 204-byte string literal not displayed */ }
leabra.Neuron holds all of the neuron (unit) level variables -- this is the most basic version with rate-code only and no optional features at all. All variables accessible via Unit interface must be float32 and start at the top, in contiguous order
func (*Neuron) VarByIndex ¶
VarByIndex returns variable using index (0 = first variable in NeuronVars list)
type OptThreshParams ¶
type OptThreshParams struct { // [def: 0.1] don't send activation when act <= send -- greatly speeds processing Send float32 `def:"0.1" desc:"don't send activation when act <= send -- greatly speeds processing"` // [def: 0.005] don't send activation changes until they exceed this threshold: only for when LeabraNetwork::send_delta is on! Delta float32 `` /* 129-byte string literal not displayed */ }
OptThreshParams provides optimization thresholds for faster processing
func (*OptThreshParams) Defaults ¶
func (ot *OptThreshParams) Defaults()
func (*OptThreshParams) Update ¶
func (ot *OptThreshParams) Update()
type Pool ¶
type Pool struct {
// starting and ending (exclusive) indexes for the list of neurons in this pool
StIdx, EdIdx int `desc:"starting and ending (exclusive) indexes for the list of neurons in this pool"`
// FFFB inhibition computed values, including Ge and Act AvgMax which drive inhibition
Inhib fffb.Inhib `desc:"FFFB inhibition computed values, including Ge and Act AvgMax which drive inhibition"`
// minus phase average and max Act activation values, for ActAvg updt
ActM minmax.AvgMax32 `desc:"minus phase average and max Act activation values, for ActAvg updt"`
// plus phase average and max Act activation values, for ActAvg updt
ActP minmax.AvgMax32 `desc:"plus phase average and max Act activation values, for ActAvg updt"`
// running-average activation levels used for netinput scaling and adaptive inhibition
ActAvg ActAvg `desc:"running-average activation levels used for netinput scaling and adaptive inhibition"`
}
Pool contains computed values for FFFB inhibition, and various other state values for layers and pools (unit groups) that can be subject to inhibition, including: * average / max stats on Ge and Act that drive inhibition * average activity overall that is used for normalizing netin (at layer level)
type Prjn ¶
type Prjn struct { PrjnStru // [view: inline] initial random weight distribution WtInit WtInitParams `view:"inline" desc:"initial random weight distribution"` // [view: inline] weight scaling parameters: modulates overall strength of projection, using both absolute and relative factors WtScale WtScaleParams `` /* 130-byte string literal not displayed */ // [view: add-fields] synaptic-level learning parameters Learn LearnSynParams `view:"add-fields" desc:"synaptic-level learning parameters"` // synaptic state values, ordered by the sending layer units which own them -- one-to-one with SConIdx array Syns []Synapse `desc:"synaptic state values, ordered by the sending layer units which own them -- one-to-one with SConIdx array"` // scaling factor for integrating synaptic input conductances (G's) -- computed in AlphaCycInit, incorporates running-average activity levels GScale float32 `` /* 145-byte string literal not displayed */ // local per-recv unit increment accumulator for synaptic conductance from sending units -- goes to either GeRaw or GiRaw on neuron depending on projection type -- this will be thread-safe GInc []float32 `` /* 192-byte string literal not displayed */ // weight balance state variables for this projection, one per recv neuron WbRecv []WtBalRecvPrjn `desc:"weight balance state variables for this projection, one per recv neuron"` }
leabra.Prjn is a basic Leabra projection with synaptic learning parameters
func (*Prjn) AsLeabra ¶
AsLeabra returns this prjn as a leabra.Prjn -- all derived prjns must redefine this to return the base Prjn type, so that the LeabraPrjn interface does not need to include accessors to all the basic stuff.
func (*Prjn) Build ¶
Build constructs the full connectivity among the layers as specified in this projection. Calls PrjnStru.BuildStru and then allocates the synaptic values in Syns accordingly.
func (*Prjn) DWt ¶
func (pj *Prjn) DWt()
DWt computes the weight change (learning) -- on sending projections
func (*Prjn) InitGInc ¶
func (pj *Prjn) InitGInc()
InitGInc initializes the per-projection GInc threadsafe increment -- not typically needed (called during InitWts only) but can be called when needed
func (*Prjn) InitWtSym ¶
func (pj *Prjn) InitWtSym(rpjp LeabraPrjn)
InitWtSym initializes weight symmetry -- is given the reciprocal projection where the Send and Recv layers are reversed.
func (*Prjn) InitWts ¶
func (pj *Prjn) InitWts()
InitWts initializes weight values according to Learn.WtInit params
func (*Prjn) InitWtsSyn ¶ added in v1.0.0
InitWtsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.
func (*Prjn) LrateMult ¶ added in v1.0.0
LrateMult sets the new Lrate parameter for Prjns to LrateInit * mult. Useful for implementing learning rate schedules.
func (*Prjn) ReadWtsJSON ¶
ReadWtsJSON reads the weights from this projection from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one prjn only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.
func (*Prjn) RecvGInc ¶
func (pj *Prjn) RecvGInc()
RecvGInc increments the receiver's GeRaw or GiRaw from that of all the projections.
func (*Prjn) SendGDelta ¶
SendGDelta sends the delta-activation from sending neuron index si, to integrate synaptic conductances on receivers
func (*Prjn) SetScalesFunc ¶ added in v1.0.0
SetScalesFunc initializes synaptic Scale values using given function based on receiving and sending unit indexes.
func (*Prjn) SetScalesRPool ¶ added in v1.0.0
SetScalesRPool initializes synaptic Scale values using given tensor of values which has unique values for each recv neuron within a given pool.
func (*Prjn) SetSynVal ¶
SetSynVal sets the value of the given variable name on the synapse between the given send, recv unit indexes (1D, flat indexes). Returns an error for access errors.
func (*Prjn) SetWts ¶ added in v1.0.0
SetWts sets the weights for this projection from weights.Prjn decoded values
func (*Prjn) SetWtsFunc ¶ added in v1.0.0
SetWtsFunc initializes synaptic Wt value using given function based on receiving and sending unit indexes.
func (*Prjn) Syn1DNum ¶ added in v1.2.1
Syn1DNum returns the number of synapses for this prjn as a 1D array. This is the max idx for SynVal1D and the number of vals set by SynVals.
func (*Prjn) SynIdx ¶ added in v1.1.0
SynIdx returns the index of the synapse between given send, recv unit indexes (1D, flat indexes). Returns -1 if synapse not found between these two neurons. Requires searching within connections for receiving unit.
func (*Prjn) SynVal ¶
SynVal returns value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes). Returns mat32.NaN() for access errors (see SynValTry for error message)
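A small access sketch, assuming the (varNm, sendIdx, recvIdx) argument order implied by the description above, and a mat32.IsNaN helper for detecting the NaN error value:

    func synWt(pj *leabra.Prjn, si, ri int) (float32, bool) {
        wt := pj.SynVal("Wt", si, ri) // NaN if no synapse exists or the variable name is unknown
        return wt, !mat32.IsNaN(wt)
    }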
func (*Prjn) SynVal1D ¶ added in v1.1.0
SynVal1D returns value of given variable index (from SynVarIdx) on given SynIdx. Returns NaN on invalid index. This is the core synapse var access method used by other methods, so it is the only one that needs to be updated for derived layer types.
func (*Prjn) SynVals ¶
SynVals sets values of given variable name for each synapse, using the natural ordering of the synapses (sender based for Leabra), into given float32 slice (only resized if not big enough). Returns error on invalid var name.
func (*Prjn) SynVarIdx ¶ added in v1.1.0
SynVarIdx returns the index of given variable within the synapse, according to *this prjn's* SynVarNames() list (using a map to lookup index), or -1 and error message if not found.
func (*Prjn) SynVarNames ¶
func (*Prjn) SynVarNum ¶ added in v1.1.2
SynVarNum returns the number of synapse-level variables for this prjn. This is needed for extending indexes in derived types.
func (*Prjn) SynVarProps ¶ added in v1.0.0
SynVarProps returns properties for variables
func (*Prjn) UpdateParams ¶
func (pj *Prjn) UpdateParams()
UpdateParams updates all params given any changes that might have been made to individual values
func (*Prjn) WriteWtsJSON ¶
WriteWtsJSON writes the weights from this projection from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
type PrjnStru ¶
type PrjnStru struct { // [view: -] we need a pointer to ourselves as a LeabraPrjn, which can always be used to extract the true underlying type of object when prjn is embedded in other structs -- function receivers do not have this ability so this is necessary. LeabraPrj LeabraPrjn `` /* 269-byte string literal not displayed */ // inactivate this projection -- allows for easy experimentation Off bool `desc:"inactivate this projection -- allows for easy experimentation"` // Class is for applying parameter styles, can be space separated multiple tags Cls string `desc:"Class is for applying parameter styles, can be space separated multiple tags"` // can record notes about this projection here Notes string `desc:"can record notes about this projection here"` // sending layer for this projection Send emer.Layer `desc:"sending layer for this projection"` // receiving layer for this projection -- the emer.Layer interface can be converted to the specific Layer type you are using, e.g., rlay := prjn.Recv.(*leabra.Layer) Recv emer.Layer `` /* 169-byte string literal not displayed */ // pattern of connectivity Pat prjn.Pattern `desc:"pattern of connectivity"` // type of projection -- Forward, Back, Lateral, or extended type in specialized algorithms -- matches against .Cls parameter styles (e.g., .Back etc) Typ emer.PrjnType `` /* 154-byte string literal not displayed */ // [view: -] number of recv connections for each neuron in the receiving layer, as a flat list RConN []int32 `view:"-" desc:"number of recv connections for each neuron in the receiving layer, as a flat list"` // [view: inline] average and maximum number of recv connections in the receiving layer RConNAvgMax minmax.AvgMax32 `inactive:"+" view:"inline" desc:"average and maximum number of recv connections in the receiving layer"` // [view: -] starting index into ConIdx list for each neuron in receiving layer -- just a list incremented by ConN RConIdxSt []int32 `view:"-" desc:"starting index into ConIdx list for each neuron in receiving layer -- just a list incremented by ConN"` // [view: -] index of other neuron on sending side of projection, ordered by the receiving layer's order of units as the outer loop (each start is in ConIdxSt), and then by the sending layer's units within that RConIdx []int32 `` /* 213-byte string literal not displayed */ // [view: -] index of synaptic state values for each recv unit x connection, for the receiver projection which does not own the synapses, and instead indexes into sender-ordered list RSynIdx []int32 `` /* 185-byte string literal not displayed */ // [view: -] number of sending connections for each neuron in the sending layer, as a flat list SConN []int32 `view:"-" desc:"number of sending connections for each neuron in the sending layer, as a flat list"` // [view: inline] average and maximum number of sending connections in the sending layer SConNAvgMax minmax.AvgMax32 `inactive:"+" view:"inline" desc:"average and maximum number of sending connections in the sending layer"` // [view: -] starting index into ConIdx list for each neuron in sending layer -- just a list incremented by ConN SConIdxSt []int32 `view:"-" desc:"starting index into ConIdx list for each neuron in sending layer -- just a list incremented by ConN"` // [view: -] index of other neuron on receiving side of projection, ordered by the sending layer's order of units as the outer loop (each start is in ConIdxSt), and then by the sending layer's units within that SConIdx []int32 `` /* 213-byte string literal not displayed */ }
PrjnStru contains the basic structural information for specifying a projection of synaptic connections between two layers, and maintaining all the synaptic connection-level data. The exact same struct object is added to the Recv and Send layers, and it manages everything about the connectivity, and methods on the Prjn handle all the relevant computation.
func (*PrjnStru) ApplyParams ¶
ApplyParams applies the given parameter style Sheet to this projection. Calls UpdateParams if anything was set, to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and an error if there were any errors.
func (*PrjnStru) BuildStru ¶
BuildStru constructs the full connectivity among the layers as specified in this projection. Calls Validate and returns false if invalid. Pat.Connect is called to get the pattern of the connection. Then the connection indexes are configured according to that pattern.
func (*PrjnStru) Connect ¶
Connect sets the connectivity between two layers and the pattern to use in interconnecting them
func (*PrjnStru) Init ¶
Init MUST be called to initialize the prjn's pointer to itself as an emer.Prjn which enables the proper interface methods to be called.
func (*PrjnStru) NonDefaultParams ¶
NonDefaultParams returns a listing of all parameters in the Prjn that are not at their default values -- useful for setting param styles etc.
func (*PrjnStru) PrjnTypeName ¶ added in v1.0.0
func (*PrjnStru) SetNIdxSt ¶
func (ps *PrjnStru) SetNIdxSt(n *[]int32, avgmax *minmax.AvgMax32, idxst *[]int32, tn *etensor.Int32) int32
SetNIdxSt sets the *ConN and *ConIdxSt values given n tensor from Pat. Returns total number of connections for this direction.
type Quarters ¶
type Quarters int32
Quarters are the different alpha trial quarters, as a bitflag, for use in relevant timing parameters where quarters need to be specified. The Q1..4 defined values are integer *bit positions* -- use Set, Has etc methods to set bits from these bit positions.
const ( // Q1 is the first quarter, which, due to 0-based indexing, shows up as Quarter = 0 in timer Q1 Quarters = iota Q2 Q3 Q4 QuartersN )
The quarters
func (*Quarters) Clear ¶ added in v1.1.2
Clear clears given quarter bit (qtr = 0..3 = same as Quarters)
func (*Quarters) FromString ¶
func (Quarters) Has ¶ added in v1.1.2
Has returns true if the given quarter is set (qtr = 0..3 = same as Quarters)
func (Quarters) HasNext ¶ added in v1.1.2
HasNext returns true if the quarter after given quarter is set. This wraps around from Q4 to Q1. (qtr = 0..3 = same as Quarters)
func (Quarters) HasPrev ¶ added in v1.1.2
HasPrev returns true if the quarter before given quarter is set. This wraps around from Q1 to Q4. (qtr = 0..3 = same as Quarters)
func (Quarters) MarshalJSON ¶
func (*Quarters) Set ¶ added in v1.1.2
Set sets given quarter bit (adds to any existing) (qtr = 0..3 = same as Quarters)
func (*Quarters) UnmarshalJSON ¶
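A hedged sketch of using the quarter bit flags; the exact parameter type of Set / Has (plain int bit position vs. Quarters value) is assumed here:

    func plusPhaseFlagged() bool {
        var qtrs leabra.Quarters
        qtrs.Set(int(leabra.Q3)) // flag the end of the minus phase
        qtrs.Set(int(leabra.Q4)) // flag the end of the plus phase
        return qtrs.Has(int(leabra.Q4))
    }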
type SelfInhibParams ¶
type SelfInhibParams struct { // enable neuron self-inhibition On bool `desc:"enable neuron self-inhibition"` // [def: 0.4] [viewif: On] strength of individual neuron self feedback inhibition -- can produce proportional activation behavior in individual units for specialized cases (e.g., scalar val or BG units), but not so good for typical hidden layers Gi float32 `` /* 247-byte string literal not displayed */ // [def: 1.4] [viewif: On] time constant in cycles, which should be milliseconds typically (roughly, how long it takes for value to change significantly -- 1.4x the half-life) for integrating unit self feedback inhibitory values -- prevents oscillations that otherwise occur -- relatively rapid 1.4 typically works, but may need to go longer if oscillations are a problem Tau float32 `` /* 373-byte string literal not displayed */ // [view: -] rate = 1 / tau Dt float32 `inactive:"+" view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"` }
SelfInhibParams defines parameters for Neuron self-inhibition -- activation of the neuron directly feeds back to produce a proportional additional contribution to Gi
func (*SelfInhibParams) Defaults ¶
func (si *SelfInhibParams) Defaults()
func (*SelfInhibParams) Inhib ¶
func (si *SelfInhibParams) Inhib(self *float32, act float32)
Inhib updates the self inhibition value based on current unit activation
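Schematically, the update is presumably a time-integrated approach of the self-inhibition value toward Gi times the current activation (a sketch of the assumed form, not a verified copy of the implementation):

    func selfInhibSketch(si *leabra.SelfInhibParams, self *float32, act float32) {
        if !si.On {
            return
        }
        *self += si.Dt * (si.Gi*act - *self) // integrate toward Gi*act at rate Dt = 1/Tau
    }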
func (*SelfInhibParams) Update ¶
func (si *SelfInhibParams) Update()
type Synapse ¶
type Synapse struct { // synaptic weight value -- sigmoid contrast-enhanced Wt float32 `desc:"synaptic weight value -- sigmoid contrast-enhanced"` // linear (underlying) weight value -- learns according to the lrate specified in the connection spec -- this is converted into the effective weight value, Wt, via sigmoidal contrast enhancement (see WtSigParams) LWt float32 `` /* 216-byte string literal not displayed */ // change in synaptic weight, from learning DWt float32 `desc:"change in synaptic weight, from learning"` // DWt normalization factor -- reset to max of abs value of DWt, decays slowly down over time -- serves as an estimate of variance in weight changes over time Norm float32 `` /* 162-byte string literal not displayed */ // momentum -- time-integrated DWt changes, to accumulate a consistent direction of weight change and cancel out dithering contradictory changes Moment float32 `` /* 148-byte string literal not displayed */ // scaling parameter for this connection: effective weight value is scaled by this factor -- useful for topographic connectivity patterns e.g., to enforce more distant connections to always be lower in magnitude than closer connections. Value defaults to 1 (cannot be exactly 0 -- otherwise is automatically reset to 1 -- use a very small number to approximate 0). Typically set by using the prjn.Pattern Weights() values where appropriate Scale float32 `` /* 445-byte string literal not displayed */ }
leabra.Synapse holds state for the synaptic connection between neurons
func (*Synapse) SetVarByIndex ¶ added in v1.0.0
func (*Synapse) SetVarByName ¶
SetVarByName sets synapse variable to given value
func (*Synapse) VarByIndex ¶ added in v1.0.0
VarByIndex returns variable using index (0 = first variable in SynapseVars list)
type Time ¶
type Time struct { // accumulated amount of time the network has been running, in simulation-time (not real world time), in seconds Time float32 `desc:"accumulated amount of time the network has been running, in simulation-time (not real world time), in seconds"` // cycle counter: number of iterations of activation updating (settling) on the current alpha-cycle (100 msec / 10 Hz) trial -- this counts time sequentially through the entire trial, typically from 0 to 99 cycles Cycle int `` /* 217-byte string literal not displayed */ // total cycle count -- this increments continuously from whenever it was last reset -- typically this is number of milliseconds in simulation time CycleTot int `` /* 151-byte string literal not displayed */ // [0-3] current gamma-frequency (25 msec / 40 Hz) quarter of alpha-cycle (100 msec / 10 Hz) trial being processed. Due to 0-based indexing, the first quarter is 0, second is 1, etc -- the plus phase final quarter is 3. Quarter int `` /* 224-byte string literal not displayed */ // true if this is the plus phase (final quarter = 3) -- else minus phase PlusPhase bool `desc:"true if this is the plus phase (final quarter = 3) -- else minus phase"` // [def: 0.001] amount of time to increment per cycle TimePerCyc float32 `def:"0.001" desc:"amount of time to increment per cycle"` // [def: 25] number of cycles per quarter to run -- 25 = standard 100 msec alpha-cycle CycPerQtr int `def:"25" desc:"number of cycles per quarter to run -- 25 = standard 100 msec alpha-cycle"` }
leabra.Time contains all the timing state and parameter information for running a model
func (*Time) AlphaCycStart ¶
func (tm *Time) AlphaCycStart()
AlphaCycStart starts a new alpha-cycle (set of 4 quarters)
func (*Time) QuarterCycle ¶ added in v1.0.0
QuarterCycle returns the number of cycles into current quarter
func (*Time) QuarterInc ¶
func (tm *Time) QuarterInc()
QuarterInc increments at the quarter level, updating Quarter and PlusPhase
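A control-loop sketch for one alpha cycle, using only the Time fields and methods documented here (per-cycle counter updates are written directly on the fields for illustration, and the algorithm-level cycle call on the Network is elided):

    func runAlphaCycle(tm *leabra.Time) {
        tm.AlphaCycStart()
        for qtr := 0; qtr < 4; qtr++ {
            for cyc := 0; cyc < tm.CycPerQtr; cyc++ {
                // ... algorithm-level cycle updating on the Network goes here ...
                tm.Cycle++
                tm.CycleTot++
                tm.Time += tm.TimePerCyc
            }
            tm.QuarterInc() // updates Quarter and PlusPhase
        }
    }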
type TimeScales ¶
type TimeScales int32
TimeScales are the different time scales associated with overall simulation running, and can be used to parameterize the updating and control flow of simulations at different scales. The definitions become increasingly subjective and imprecise as the time scales increase. This is not used directly in the algorithm code -- all control is the responsibility of the end simulation. This list is designed to standardize terminology across simulations and establish a common conceptual framework for time -- it can easily be extended in specific simulations to add needed additional levels, although using one of the existing standard values is recommended wherever possible.
const ( // Cycle is the finest time scale -- typically 1 msec -- a single activation update. Cycle TimeScales = iota // FastSpike is typically 10 cycles = 10 msec (100hz) = the fastest spiking time // generally observed in the brain. This can be useful for visualizing updates // at a granularity in between Cycle and Quarter. FastSpike // Quarter is typically 25 cycles = 25 msec (40hz) = 1/4 of the 100 msec alpha trial // This is also the GammaCycle (gamma = 40hz), but we use Quarter functionally // by virtue of there being 4 per AlphaCycle. Quarter // Phase is either Minus or Plus phase -- Minus = first 3 quarters, Plus = last quarter Phase // BetaCycle is typically 50 cycles = 50 msec (20 hz) = one beta-frequency cycle. // Gating in the basal ganglia and associated updating in prefrontal cortex // occurs at this frequency. BetaCycle // AlphaCycle is typically 100 cycles = 100 msec (10 hz) = one alpha-frequency cycle, // which is the fundamental unit of learning in posterior cortex. AlphaCycle // ThetaCycle is typically 200 cycles = 200 msec (5 hz) = two alpha-frequency cycles. // This is the modal duration of a saccade, the update frequency of medial temporal lobe // episodic memory, and the minimal predictive learning cycle (perceive an Alpha 1, predict on 2). ThetaCycle // Event is the smallest unit of naturalistic experience that coheres unto itself // (e.g., something that could be described in a sentence). // Typically this is on the time scale of a few seconds: e.g., reaching for // something, catching a ball. Event // Trial is one unit of behavior in an experiment -- it is typically environmentally // defined instead of endogenously defined in terms of basic brain rhythms. // In the minimal case it could be one AlphaCycle, but could be multiple, and // could encompass multiple Events (e.g., one event is fixation, next is stimulus, // last is response) Trial // Tick is one step in a sequence -- often it is useful to have Trial count // up throughout the entire Epoch but also include a Tick to count trials // within a Sequence Tick // Sequence is a sequential group of Trials (not always needed). Sequence // Condition is a collection of Blocks that share the same set of parameters. // This is intermediate between Block and Run levels. Condition // Block is a collection of Trials, Sequences or Events, often used in experiments // when conditions are varied across blocks. Block // Epoch is used in two different contexts. In machine learning, it represents a // collection of Trials, Sequences or Events that constitute a "representative sample" // of the environment. In the simplest case, it is the entire collection of Trials // used for training. In electrophysiology, it is a timing window used for organizing // the analysis of electrode data. Epoch // Run is a complete run of a model / subject, from training to testing, etc. // Often multiple runs are done in an Expt to obtain statistics over initial // random weights etc. Run // Expt is an entire experiment -- multiple Runs through a given protocol / set of // parameters. Expt // Scene is a sequence of events that constitutes the next larger-scale coherent unit // of naturalistic experience corresponding e.g., to a scene in a movie. // Typically consists of events that all take place in one location over // e.g., a minute or so. This could be a paragraph or a page or so in a book. 
Scene // Episode is a sequence of scenes that constitutes the next larger-scale unit // of naturalistic experience e.g., going to the grocery store or eating at a // restaurant, attending a wedding or other "event". // This could be a chapter in a book. Episode TimeScalesN )
The time scales
func (*TimeScales) FromString ¶
func (i *TimeScales) FromString(s string) error
func (TimeScales) MarshalJSON ¶
func (ev TimeScales) MarshalJSON() ([]byte, error)
func (TimeScales) String ¶
func (i TimeScales) String() string
func (*TimeScales) UnmarshalJSON ¶
func (ev *TimeScales) UnmarshalJSON(b []byte) error
type WtBalParams ¶
type WtBalParams struct { // perform weight balance soft normalization? if so, maintains overall weight balance across units by progressively penalizing weight increases as a function of amount of averaged receiver weight above a high threshold (hi_thr) and long time-average activation above an act_thr -- this is generally very beneficial for larger models where hog units are a problem, but not as much for smaller models where the additional constraints are not beneficial -- uses a sigmoidal function: WbInc = 1 / (1 + HiGain*(WbAvg - HiThr) + ActGain * (nrn.ActAvg - ActThr))) On bool `` /* 561-byte string literal not displayed */ // apply soft bounding to target layers -- appears to be beneficial but still testing Targs bool `desc:"apply soft bounding to target layers -- appears to be beneficial but still testing"` // [def: 0.25] [viewif: On] threshold on weight value for inclusion into the weight average that is then subject to the further HiThr threshold for then driving a change in weight balance -- this AvgThr allows only stronger weights to contribute so that weakening of lower weights does not dilute sensitivity to number and strength of strong weights AvgThr float32 `` /* 351-byte string literal not displayed */ // [def: 0.4] [viewif: On] high threshold on weight average (subject to AvgThr) before it drives changes in weight increase vs. decrease factors HiThr float32 `` /* 146-byte string literal not displayed */ // [def: 4] [viewif: On] gain multiplier applied to above-HiThr thresholded weight averages -- higher values turn weight increases down more rapidly as the weights become more imbalanced HiGain float32 `` /* 188-byte string literal not displayed */ // [def: 0.4] [viewif: On] low threshold on weight average (subject to AvgThr) before it drives changes in weight increase vs. decrease factors LoThr float32 `` /* 145-byte string literal not displayed */ // [def: 6,0] [viewif: On] gain multiplier applied to below-lo_thr thresholded weight averages -- higher values turn weight increases up more rapidly as the weights become more imbalanced -- generally beneficial but sometimes not -- worth experimenting with either 6 or 0 LoGain float32 `` /* 273-byte string literal not displayed */ }
WtBalParams are weight balance soft renormalization params: maintains overall weight balance by progressively penalizing weight increases as a function of how strong the weights are overall (subject to thresholding) and long time-averaged activation. Plugs into soft bounding function.
func (*WtBalParams) Defaults ¶
func (wb *WtBalParams) Defaults()
func (*WtBalParams) Update ¶
func (wb *WtBalParams) Update()
func (*WtBalParams) WtBal ¶
func (wb *WtBalParams) WtBal(wbAvg float32) (fact, inc, dec float32)
WtBal computes weight balance factors for increase and decrease based on extent to which weights and average act exceed thresholds
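Schematically, the balance factors presumably respond to the weight average exceeding HiThr or falling below LoThr roughly as follows (a sketch of the assumed form, not a verified copy of the implementation):

    func wtBalSketch(wb *leabra.WtBalParams, wbAvg float32) (fact, inc, dec float32) {
        inc, dec = 1, 1
        switch {
        case wbAvg < wb.LoThr: // weights too weak: boost increases, damp decreases
            fact = wb.LoGain * (wb.LoThr - wbAvg)
            dec = 1 / (1 + fact)
            inc = 2 - dec
        case wbAvg > wb.HiThr: // weights too strong: damp increases, boost decreases
            fact = wb.HiGain * (wbAvg - wb.HiThr)
            inc = 1 / (1 + fact)
            dec = 2 - inc
        }
        return fact, inc, dec
    }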
type WtBalRecvPrjn ¶
type WtBalRecvPrjn struct { // average of effective weight values that exceed WtBal.AvgThr across given Recv Neuron's connections for given Prjn Avg float32 `desc:"average of effective weight values that exceed WtBal.AvgThr across given Recv Neuron's connections for given Prjn"` // overall weight balance factor that drives changes in WbInc vs. WbDec via a sigmoidal function -- this is the net strength of weight balance changes Fact float32 `` /* 154-byte string literal not displayed */ // weight balance increment factor -- extra multiplier to add to weight increases to maintain overall weight balance Inc float32 `desc:"weight balance increment factor -- extra multiplier to add to weight increases to maintain overall weight balance"` // weight balance decrement factor -- extra multiplier to add to weight decreases to maintain overall weight balance Dec float32 `desc:"weight balance decrement factor -- extra multiplier to add to weight decreases to maintain overall weight balance"` }
WtBalRecvPrjn are state variables used in computing the WtBal weight balance function. There is one of these for each Recv Neuron participating in the projection.
func (*WtBalRecvPrjn) Init ¶
func (wb *WtBalRecvPrjn) Init()
type WtInitParams ¶ added in v1.0.0
type WtInitParams struct { erand.RndParams // symmetrize the weight values with those in reciprocal projection -- typically true for bidirectional excitatory connections Sym bool `` /* 130-byte string literal not displayed */ }
WtInitParams are weight initialization parameters -- basically the random distribution parameters but also Symmetry flag
func (*WtInitParams) Defaults ¶ added in v1.0.0
func (wp *WtInitParams) Defaults()
type WtScaleParams ¶
type WtScaleParams struct { // [def: 1] [min: 0] absolute scaling, which is not subject to normalization: directly multiplies weight values Abs float32 `def:"1" min:"0" desc:"absolute scaling, which is not subject to normalization: directly multiplies weight values"` // [min: 0] [Default: 1] relative scaling that shifts balance between different projections -- this is subject to normalization across all other projections into unit Rel float32 `` /* 169-byte string literal not displayed */ }
WtScaleParams are weight scaling parameters: modulates overall strength of projection, using both absolute and relative factors
func (*WtScaleParams) Defaults ¶
func (ws *WtScaleParams) Defaults()
func (*WtScaleParams) FullScale ¶
func (ws *WtScaleParams) FullScale(savg, snu, ncon float32) float32
FullScale returns full scaling factor, which is product of Abs * Rel * SLayActScale
func (*WtScaleParams) SLayActScale ¶
func (ws *WtScaleParams) SLayActScale(savg, snu, ncon float32) float32
SLayActScale computes the scaling factor based on the sending layer activity level (savg), number of units in the sending layer (snu), and number of recv connections (ncon). Uses a fixed sem_extra standard-error-of-the-mean (SEM) extra value of 2 to add to the average expected number of active connections to receive, for purposes of computing scaling factors with partial connectivity. For 25% layer activity, binomial SEM = sqrt(p(1-p)) = .43, so 3x = 1.3, so 2 is a reasonable default.
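A simplified approximation of that scaling logic, ignoring rounding and full-connectivity edge cases (a sketch only, not the actual method):

    func sLayActScaleApprox(savg, ncon float32) float32 {
        const semExtra = 2
        expActN := savg*ncon + semExtra // expected number of active sending connections
        if expActN > ncon {
            expActN = ncon
        }
        if expActN < 1 {
            expActN = 1
        }
        return 1 / expActN // scale is one over the expected active senders
    }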
func (*WtScaleParams) Update ¶
func (ws *WtScaleParams) Update()
type WtSigParams ¶
type WtSigParams struct { // [def: 1,6] [min: 0] gain (contrast, sharpness) of the weight contrast function (1 = linear) Gain float32 `def:"1,6" min:"0" desc:"gain (contrast, sharpness) of the weight contrast function (1 = linear)"` // [def: 1] [min: 0] offset of the function (1=centered at .5, >1=higher, <1=lower) -- 1 is standard for XCAL Off float32 `def:"1" min:"0" desc:"offset of the function (1=centered at .5, >1=higher, <1=lower) -- 1 is standard for XCAL"` // [def: true] apply exponential soft bounding to the weight changes SoftBound bool `def:"true" desc:"apply exponential soft bounding to the weight changes"` }
WtSigParams are sigmoidal weight contrast enhancement function parameters
func (*WtSigParams) Defaults ¶
func (ws *WtSigParams) Defaults()
func (*WtSigParams) LinFmSigWt ¶
func (ws *WtSigParams) LinFmSigWt(sw float32) float32
LinFmSigWt returns linear weight from sigmoidal contrast-enhanced weight
func (*WtSigParams) SigFmLinWt ¶
func (ws *WtSigParams) SigFmLinWt(lw float32) float32
SigFmLinWt returns sigmoidal contrast-enhanced weight from linear weight
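The contrast enhancement presumably has the standard Leabra sigmoidal form, sketched here with an assumed mat32.Pow helper; with Gain = 1 and Off = 1 the function reduces to the identity, matching the "1 = linear" note above:

    func sigFmLinWt(lw, gain, off float32) float32 {
        if lw <= 0 {
            return 0
        }
        if lw >= 1 {
            return 1
        }
        return 1 / (1 + mat32.Pow((off*(1-lw))/lw, gain)) // assumed sigmoidal form
    }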
func (*WtSigParams) Update ¶
func (ws *WtSigParams) Update()
type XCalParams ¶
type XCalParams struct { // [def: 1] [min: 0] multiplier on learning based on the medium-term floating average threshold which produces error-driven learning -- this is typically 1 when error-driven learning is being used, and 0 when pure Hebbian learning is used. The long-term floating average threshold is provided by the receiving unit MLrn float32 `` /* 316-byte string literal not displayed */ // [def: false] if true, set a fixed AvgLLrn weighting factor that determines how much of the long-term floating average threshold (i.e., BCM, Hebbian) component of learning is used -- this is useful for setting a fully Hebbian learning connection, e.g., by setting MLrn = 0 and LLrn = 1. If false, then the receiving unit's AvgLLrn factor is used, which dynamically modulates the amount of the long-term component as a function of how active overall it is SetLLrn bool `` /* 459-byte string literal not displayed */ // [viewif: SetLLrn] fixed l_lrn weighting factor that determines how much of the long-term floating average threshold (i.e., BCM, Hebbian) component of learning is used -- this is useful for setting a fully Hebbian learning connection, e.g., by setting MLrn = 0 and LLrn = 1. LLrn float32 `` /* 279-byte string literal not displayed */ // [def: 0.1] [min: 0] [max: 0.99] proportional point within LTD range where magnitude reverses to go back down to zero at zero -- err-driven svm component does better with smaller values, and BCM-like mvl component does better with larger values -- 0.1 is a compromise DRev float32 `` /* 270-byte string literal not displayed */ // [def: 0.0001,0.01] [min: 0] minimum LTD threshold value below which no weight change occurs -- this is now *relative* to the threshold DThr float32 `` /* 139-byte string literal not displayed */ // [def: 0.01] xcal learning threshold -- don't learn when sending unit activation is below this value in both phases -- due to the nature of the learning function being 0 when the sr coproduct is 0, it should not affect learning in any substantial way -- nonstandard learning algorithms that have different properties should ignore it LrnThr float32 `` /* 338-byte string literal not displayed */ // [view: -] -(1-DRev)/DRev -- multiplication factor in learning rule -- builds in the minus sign! DRevRatio float32 `` /* 131-byte string literal not displayed */ }
XCalParams are parameters for the temporally eXtended Contrastive Attractor Learning function (XCAL), which is the standard learning equation for leabra.
func (*XCalParams) DWt ¶
func (xc *XCalParams) DWt(srval, thrP float32) float32
DWt is the XCAL function for weight change -- the "check mark" function -- no DGain, no ThrPMin
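A sketch of the check-mark shape implied by the parameter descriptions above (assumed form, not a verified copy of the implementation; the relative DThr handling noted in the DThr field is omitted here):

    func xcalDWt(xc *leabra.XCalParams, srval, thrP float32) float32 {
        switch {
        case srval < xc.DThr: // below the minimum LTD threshold: no change
            return 0
        case srval > thrP*xc.DRev: // above the reversal point: linear error-driven term
            return srval - thrP
        default: // in the reversal range: scaled by DRevRatio = -(1-DRev)/DRev
            return srval * xc.DRevRatio
        }
    }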
func (*XCalParams) Defaults ¶
func (xc *XCalParams) Defaults()
func (*XCalParams) LongLrate ¶
func (xc *XCalParams) LongLrate(avgLLrn float32) float32
LongLrate returns the learning rate for long-term floating average component (BCM)
func (*XCalParams) Update ¶
func (xc *XCalParams) Update()