Documentation ¶
Overview ¶
Package pbwm provides the prefrontal cortex basal ganglia working memory (PBWM) model of the basal ganglia (BG) and prefrontal cortex (PFC) circuitry, which supports dynamic BG gating of robust active maintenance in the PFC.
This package builds on the deep package for defining thalamocortical circuits involved in predictive learning -- the BG basically acts to gate these circuits.
It provides a basis for dopamine-modulated processing of all types, and is the base package for the PVLV model package built on top of it.
There are multiple levels of functionality to allow for flexibility in exploring new variants.
Each Layer type defines and manages its own Neuron type, despite some redundancy, so each layer has exactly the one Neuron type it needs. However, a Network must expose a single consistent set of Neuron variables, which is given by ModNeuronVars and the NeuronVars enum. In many cases, those "neuron" variables are actually stored in the layer itself rather than at the per-neuron level.
Naming rule: DA when it stands alone as a singleton; Da (lowercase a) when CamelCased with something else, as in DaMod.
Basic Level:
- ModLayer has DA, ACh, SE -- can be modulated
- DaSrcLayer sends DA to a list of layers (does not use Prjns)
- AChSrcLayer, SeSrcLayer likewise for ACh and SE (serotonin)
- GateLayer has GateStates in 1-to-1 correspondence with Pools, to keep track of gating state -- source gating layers can send updates to other layers.
PBWM specific:
- MatrixLayer for dorsal striatum gating of DLPFC areas, with separate layers for D1R = Go and D2R = NoGo. Each layer contains Maint and Out GateTypes, as a function of the outer 4D Pool X dimension (Maint on the left, Out on the right).
- GPiThalLayer receives from Matrix Go and GPe NoGo to compute final WTA gating, and broadcasts GateState info to its SendTo layers. See Timing params for timing.
- PFCLayer for active maintenance -- uses the DeepLeabra framework, with update timing according to deep.Layer DeepBurst.BurstQtr. Gating is computed in the quarter *before* updating in BurstQtr. At the *end* of BurstQtr, Super Burst -> Deep Ctxt drives maintenance via Ctxt in Deep. A minimal configuration sketch follows.
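For orientation, here is a minimal sketch of configuring a PBWM network with the AddPBWM convenience method documented below. It assumes the usual leabra conventions (InitName, Defaults, Build); the sizes are arbitrary example values, and connecting task input / output layers is elided.

import (
	"log"

	"github.com/emer/leabra/pbwm"
)

// ConfigPBWM builds a small PBWM network: 2 pool rows (Y),
// 2 Maint + 1 Out pool columns, 4x4-neuron BG pools, 5x5-neuron PFC pools.
func ConfigPBWM() *pbwm.Network {
	nt := &pbwm.Network{}
	nt.InitName(nt, "PBWM")
	mtxGo, mtxNoGo, gpe, gpi, pfcMnt, pfcMntD, pfcOut, pfcOutD :=
		nt.AddPBWM("", 2, 2, 1, 4, 4, 5, 5)
	_, _, _, _ = mtxGo, mtxNoGo, gpe, gpi
	_, _, _, _ = pfcMnt, pfcMntD, pfcOut, pfcOutD
	nt.Defaults()
	// connect task input / output layers here, then:
	if err := nt.Build(); err != nil {
		log.Println(err)
	}
	return nt
}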
Index ¶
- Variables
- type AChSrcLayer
- type ClampDaLayer
- type DaHebbPrjn
- type DaModParams
- type DaReceptors
- type DaSrcLayer
- type GPiGateParams
- type GPiNeuron
- type GPiThalLayer
- func (ly *GPiThalLayer) AddSendTo(laynm string)
- func (ly *GPiThalLayer) AlphaCycInit()
- func (ly *GPiThalLayer) Build() error
- func (ly *GPiThalLayer) Defaults()
- func (ly *GPiThalLayer) GFmInc(ltime *leabra.Time)
- func (ly *GPiThalLayer) GateFmAct(ltime *leabra.Time)
- func (ly *GPiThalLayer) GateSend(ltime *leabra.Time)
- func (ly *GPiThalLayer) GateType() GateTypes
- func (ly *GPiThalLayer) InitActs()
- func (ly *GPiThalLayer) MatrixPrjns() (goPrjn, nogoPrjn *GPiThalPrjn, err error)
- func (ly *GPiThalLayer) RecGateAct(ltime *leabra.Time)
- func (ly *GPiThalLayer) SendGateShape() error
- func (ly *GPiThalLayer) SendGateStates()
- func (ly *GPiThalLayer) SendToCheck() error
- func (ly *GPiThalLayer) SendToMatrixPFC(prefix string)
- func (ly *GPiThalLayer) UnitValByIdx(vidx NeuronVars, idx int) float32
- type GPiThalPrjn
- type GPiTimingParams
- type GateLayer
- func (ly *GateLayer) AsGate() *GateLayer
- func (ly *GateLayer) AvgMaxGeRaw(ltime *leabra.Time)
- func (ly *GateLayer) Build() error
- func (ly *GateLayer) GateShape() *GateShape
- func (ly *GateLayer) GateState(poolIdx int) *GateState
- func (ly *GateLayer) InitActs()
- func (ly *GateLayer) SetGateState(poolIdx int, state *GateState)
- func (ly *GateLayer) SetGateStates(states []GateState, typ GateTypes)
- func (ly *GateLayer) UnitValByIdx(vidx NeuronVars, idx int) float32
- type GateLayerer
- type GateShape
- type GateState
- type GateTypes
- type Layer
- type MatrixLayer
- func (ly *MatrixLayer) ActFmG(ltime *leabra.Time)
- func (ly *MatrixLayer) Build() error
- func (ly *MatrixLayer) DALrnFmDA(da float32) float32
- func (ly *MatrixLayer) DaAChFmLay(ltime *leabra.Time)
- func (ly *MatrixLayer) Defaults()
- func (ly *MatrixLayer) DoQuarter2DWt() bool
- func (ly *MatrixLayer) GateType() GateTypes
- func (ly *MatrixLayer) InhibFmGeAct(ltime *leabra.Time)
- func (ly *MatrixLayer) InitActs()
- func (ly *MatrixLayer) RecGateAct(ltime *leabra.Time)
- func (ly *MatrixLayer) UnitValByIdx(vidx NeuronVars, idx int) float32
- type MatrixNeuron
- type MatrixParams
- type MatrixTracePrjn
- type ModLayer
- func (ly *ModLayer) AsGate() *GateLayer
- func (ly *ModLayer) AsMod() *ModLayer
- func (ly *ModLayer) Defaults()
- func (ly *ModLayer) DoQuarter2DWt() bool
- func (ly *ModLayer) GateSend(ltime *leabra.Time)
- func (ly *ModLayer) InitActs()
- func (ly *ModLayer) Quarter2DWt()
- func (ly *ModLayer) QuarterFinal(ltime *leabra.Time)
- func (ly *ModLayer) RecGateAct(ltime *leabra.Time)
- func (ly *ModLayer) SendMods(ltime *leabra.Time)
- func (ly *ModLayer) UnitVal1DTry(varNm string, idx int) (float32, error)
- func (ly *ModLayer) UnitValByIdx(vidx NeuronVars, idx int) float32
- func (ly *ModLayer) UnitValTry(varNm string, idx []int) (float32, error)
- func (ly *ModLayer) UnitVals(vals *[]float32, varNm string) error
- func (ly *ModLayer) UnitValsTensor(tsr etensor.Tensor, varNm string) error
- func (ly *ModLayer) UnitVarNames() []string
- func (ly *ModLayer) UpdateParams()
- type Network
- func (nt *Network) AddClampDaLayer(name string) *ClampDaLayer
- func (nt *Network) AddDorsalBG(prefix string, nY, nMaint, nOut, nNeurY, nNeurX int) (mtxGo, mtxNoGo, gpe, gpi leabra.LeabraLayer)
- func (nt *Network) AddGPeLayer(name string, nY, nMaint, nOut int) *ModLayer
- func (nt *Network) AddGPiThalLayer(name string, nY, nMaint, nOut int) *GPiThalLayer
- func (nt *Network) AddMatrixLayer(name string, nY, nMaint, nOut, nNeurY, nNeurX int, da DaReceptors) *MatrixLayer
- func (nt *Network) AddPBWM(prefix string, nY, nMaint, nOut, nNeurBgY, nNeurBgX, nNeurPfcY, nNeurPfcX int) (mtxGo, mtxNoGo, gpe, gpi, pfcMnt, pfcMntD, pfcOut, pfcOutD leabra.LeabraLayer)
- func (nt *Network) AddPFC(prefix string, nY, nMaint, nOut, nNeurY, nNeurX int) (pfcMnt, pfcMntD, pfcOut, pfcOutD leabra.LeabraLayer)
- func (nt *Network) AddPFCLayer(name string, nY, nX, nNeurY, nNeurX int, out bool) (sp, dp *PFCLayer)
- func (nt *Network) AddRWLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, da leabra.LeabraLayer)
- func (nt *Network) AddTDLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, ri, td leabra.LeabraLayer)
- func (nt *Network) Cycle(ltime *leabra.Time)
- func (nt *Network) Defaults()
- func (nt *Network) GateSend(ltime *leabra.Time)
- func (nt *Network) NewLayer() emer.Layer
- func (nt *Network) NewPrjn() emer.Prjn
- func (nt *Network) RecGateAct(ltime *leabra.Time)
- func (nt *Network) SendMods(ltime *leabra.Time)
- func (nt *Network) UpdateParams()
- type NeuronVars
- type PBWMLayer
- type PBWMPrjn
- type PFCDyn
- type PFCDyns
- type PFCGateParams
- type PFCLayer
- func (ly *PFCLayer) ActFmG(ltime *leabra.Time)
- func (ly *PFCLayer) AvgMaxGe(ltime *leabra.Time)
- func (ly *PFCLayer) Build() error
- func (ly *PFCLayer) BurstFmAct(ltime *leabra.Time)
- func (ly *PFCLayer) ClearCtxtPool(pool int)
- func (ly *PFCLayer) ClearMaint(pool int)
- func (ly *PFCLayer) CtxtFmGe(ltime *leabra.Time)
- func (ly *PFCLayer) DecayStatePool(pool int, decay float32)
- func (ly *PFCLayer) DeepMaint(ltime *leabra.Time)
- func (ly *PFCLayer) DeepPFC() *PFCLayer
- func (ly *PFCLayer) Defaults()
- func (ly *PFCLayer) DoQuarter2DWt() bool
- func (ly *PFCLayer) GFmInc(ltime *leabra.Time)
- func (ly *PFCLayer) GateStateToDeep(ltime *leabra.Time)
- func (ly *PFCLayer) GateType() GateTypes
- func (ly *PFCLayer) Gating(ltime *leabra.Time)
- func (ly *PFCLayer) InitActs()
- func (ly *PFCLayer) MaintPFC() *PFCLayer
- func (ly *PFCLayer) QuarterFinal(ltime *leabra.Time)
- func (ly *PFCLayer) RecGateAct(ltime *leabra.Time)
- func (ly *PFCLayer) SendCtxtGe(ltime *leabra.Time)
- func (ly *PFCLayer) UnitValByIdx(vidx NeuronVars, idx int) float32
- type PFCMaintParams
- type PFCNeuron
- type RWDaLayer
- type RWPredLayer
- type RWPrjn
- type SeSrcLayer
- type TDDaLayer
- type TDRewIntegLayer
- type TDRewIntegParams
- type TDRewPredLayer
- type TDRewPredPrjn
- type TraceParams
- type TraceSyn
- type Valences
Constants ¶
This section is empty.
Variables ¶
var (
	// ModNeuronVars are the modulator neurons plus some custom variables that sub-types use for their
	// algo-specific cases -- need a consistent set of overall network-level vars for display / generic
	// interface.
	ModNeuronVars = []string{"DA", "DALrn", "ACh", "SE", "GateAct", "GateNow", "GateCnt", "ActG", "Cust1"}

	ModNeuronVarsMap map[string]int

	ModNeuronVarsAll []string
)
var KiT_AChSrcLayer = kit.Types.AddType(&AChSrcLayer{}, deep.LayerProps)
var KiT_ClampDaLayer = kit.Types.AddType(&ClampDaLayer{}, deep.LayerProps)
var KiT_DaHebbPrjn = kit.Types.AddType(&DaHebbPrjn{}, deep.PrjnProps)
var KiT_DaReceptors = kit.Enums.AddEnum(DaReceptorsN, kit.NotBitFlag, nil)
var KiT_DaSrcLayer = kit.Types.AddType(&DaSrcLayer{}, deep.LayerProps)
var KiT_GPiThalLayer = kit.Types.AddType(&GPiThalLayer{}, deep.LayerProps)
var KiT_GPiThalPrjn = kit.Types.AddType(&GPiThalPrjn{}, deep.PrjnProps)
var KiT_GateLayer = kit.Types.AddType(&GateLayer{}, deep.LayerProps)
var KiT_GateTypes = kit.Enums.AddEnum(GateTypesN, kit.NotBitFlag, nil)
var KiT_Layer = kit.Types.AddType(&Layer{}, deep.LayerProps)
var KiT_MatrixLayer = kit.Types.AddType(&MatrixLayer{}, deep.LayerProps)
var KiT_ModLayer = kit.Types.AddType(&ModLayer{}, deep.LayerProps)
var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
var KiT_PFCLayer = kit.Types.AddType(&PFCLayer{}, deep.LayerProps)
var KiT_RWDaLayer = kit.Types.AddType(&RWDaLayer{}, deep.LayerProps)
var KiT_RWPredLayer = kit.Types.AddType(&RWPredLayer{}, deep.LayerProps)
var KiT_SeSrcLayer = kit.Types.AddType(&SeSrcLayer{}, deep.LayerProps)
var KiT_TDDaLayer = kit.Types.AddType(&TDDaLayer{}, deep.LayerProps)
var KiT_TDRewIntegLayer = kit.Types.AddType(&TDRewIntegLayer{}, deep.LayerProps)
var KiT_TDRewPredLayer = kit.Types.AddType(&TDRewPredLayer{}, deep.LayerProps)
var KiT_TDRewPredPrjn = kit.Types.AddType(&TDRewPredPrjn{}, deep.PrjnProps)
var KiT_Valences = kit.Enums.AddEnum(ValencesN, kit.NotBitFlag, nil)
var NetworkProps = deep.NetworkProps
var TraceSynVars = []string{"NTr", "Tr"}
Functions ¶
This section is empty.
Types ¶
type AChSrcLayer ¶
AChSrcLayer is the basic type of layer that sends ACh to other layers. It uses a list of layer names to send to, rather than the Prjn infrastructure, since this is a global broadcast modulator -- individual neurons can use it in their own special way.
func (*AChSrcLayer) AddSendTo ¶
func (ly *AChSrcLayer) AddSendTo(laynm string)
AddSendTo adds given layer name to the list of those to send ACh to
func (*AChSrcLayer) Build ¶
func (ly *AChSrcLayer) Build() error
Build constructs the layer state, including calling Build on the projections.
func (*AChSrcLayer) SendACh ¶
func (ly *AChSrcLayer) SendACh(ach float32)
SendACh sends ACh to SendTo list of layers
func (*AChSrcLayer) SendToCheck ¶
func (ly *AChSrcLayer) SendToCheck() error
SendToCheck is called during Build to ensure that SendTo layers are valid
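As a usage sketch (the target layer names are placeholders for layers in your own network):

// configAndSendACh wires an ACh source layer and broadcasts its signal.
// ach is assumed to be an *pbwm.AChSrcLayer already added to the network.
func configAndSendACh(ach *pbwm.AChSrcLayer) {
	ach.AddSendTo("MatrixGo") // placeholder layer names
	ach.AddSendTo("MatrixNoGo")
	// at the end of a Cycle, broadcast the current ACh level --
	// receiving layers see it on their next cycle
	ach.SendACh(0.8)
}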
type ClampDaLayer ¶
type ClampDaLayer struct {
DaSrcLayer
}
ClampDaLayer is an Input layer that just sends its activity as the dopamine signal
func (*ClampDaLayer) SendMods ¶
func (ly *ClampDaLayer) SendMods(ltime *leabra.Time)
SendMods is called at end of Cycle to send modulator signals (DA, etc) which will then be active for the next cycle of processing
type DaHebbPrjn ¶
DaHebbPrjn does dopamine-modulated Hebbian learning -- i.e., the 3-factor learning rule: Da * Recv.Act * Send.Act
func (*DaHebbPrjn) DWt ¶
func (pj *DaHebbPrjn) DWt()
DWt computes the weight change (learning) -- on sending projections.
func (*DaHebbPrjn) Defaults ¶
func (pj *DaHebbPrjn) Defaults()
type DaModParams ¶
type DaModParams struct {
	On      bool    `desc:"whether to use dopamine modulation"`
	ModGain bool    `viewif:"On" desc:"modulate gain instead of Ge excitatory synaptic input"`
	Minus   float32 `` /* 145-byte string literal not displayed */
	Plus    float32 `` /* 144-byte string literal not displayed */
	NegGain float32 `` /* 208-byte string literal not displayed */
	PosGain float32 `` /* 208-byte string literal not displayed */
}
Params for effects of dopamine (Da) based modulation, typically adding a Da-based term to the Ge excitatory synaptic input. Plus-phase = learning effects relative to minus-phase "performance" dopamine effects
func (*DaModParams) Defaults ¶
func (dm *DaModParams) Defaults()
func (*DaModParams) Gain ¶
func (dm *DaModParams) Gain(da, gain float32, plusPhase bool) float32
Gain returns da-modulated gain value
func (*DaModParams) GainModOn ¶
func (dm *DaModParams) GainModOn() bool
GainModOn returns true if modulating Gain
func (*DaModParams) Ge ¶
func (dm *DaModParams) Ge(da, ge float32, plusPhase bool) float32
Ge returns da-modulated ge value
func (*DaModParams) GeModOn ¶
func (dm *DaModParams) GeModOn() bool
GeModOn returns true if modulating Ge
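A sketch of how these methods might be used inside a layer's conductance update; da and geRaw stand for values the layer already computes:

// modGe applies dopamine modulation to a raw excitatory conductance,
// when Ge modulation (rather than gain modulation) is enabled.
func modGe(dm *pbwm.DaModParams, da, geRaw float32, plusPhase bool) float32 {
	if dm.GeModOn() {
		return dm.Ge(da, geRaw, plusPhase) // adds a Da-based term to Ge
	}
	return geRaw
}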
type DaReceptors ¶
type DaReceptors int
DaReceptors for D1R and D2R dopamine receptors
const (
	// D1R primarily expresses Dopamine D1 Receptors -- dopamine is excitatory and bursts of dopamine
	// lead to increases in synaptic weight, while dips lead to decreases -- direct pathway in dorsal striatum
	D1R DaReceptors = iota

	// D2R primarily expresses Dopamine D2 Receptors -- dopamine is inhibitory and bursts of dopamine
	// lead to decreases in synaptic weight, while dips lead to increases -- indirect pathway in dorsal striatum
	D2R

	DaReceptorsN
)
func (*DaReceptors) FromString ¶
func (i *DaReceptors) FromString(s string) error
func (DaReceptors) MarshalJSON ¶
func (ev DaReceptors) MarshalJSON() ([]byte, error)
func (DaReceptors) String ¶
func (i DaReceptors) String() string
func (*DaReceptors) UnmarshalJSON ¶
func (ev *DaReceptors) UnmarshalJSON(b []byte) error
type DaSrcLayer ¶
DaSrcLayer is the basic type of layer that sends dopamine to other layers. It uses a list of layer names to send to, rather than the Prjn infrastructure, since this is a global broadcast modulator -- individual neurons can use it in their own special way.
func (*DaSrcLayer) AddSendTo ¶
func (ly *DaSrcLayer) AddSendTo(laynm string)
AddSendTo adds given layer name to list of those to send DA to
func (*DaSrcLayer) Build ¶
func (ly *DaSrcLayer) Build() error
Build constructs the layer state, including calling Build on the projections.
func (*DaSrcLayer) SendDA ¶
func (ly *DaSrcLayer) SendDA(da float32)
SendDA sends dopamine to SendTo list of layers
func (*DaSrcLayer) SendToAllBut ¶
func (ly *DaSrcLayer) SendToAllBut(excl []string)
SendToAllBut adds all layers in network except those in list to the SendTo list of layers to send to -- this layer is automatically excluded as well.
func (*DaSrcLayer) SendToCheck ¶
func (ly *DaSrcLayer) SendToCheck() error
SendToCheck is called during Build to ensure that SendTo layers are valid
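A typical broadcast configuration sketch; "Input" and "Output" are placeholder names for layers that should not receive DA:

// configDA adds a clamped DA source and broadcasts to all other layers.
func configDA(nt *pbwm.Network) *pbwm.ClampDaLayer {
	da := nt.AddClampDaLayer("DA")
	da.SendToAllBut([]string{"Input", "Output"}) // DA layer itself is auto-excluded
	return da
}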
type GPiGateParams ¶
type GPiGateParams struct {
	GeGain float32 `` /* 217-byte string literal not displayed */
	NoGo   float32 `` /* 178-byte string literal not displayed */
	Thr    float32 `` /* 242-byte string literal not displayed */
	ThrAct bool    `` /* 159-byte string literal not displayed */
}
GPiGateParams has gating parameters for gating in GPiThal layer, including threshold
func (*GPiGateParams) Defaults ¶
func (gp *GPiGateParams) Defaults()
func (*GPiGateParams) GeRaw ¶
func (gp *GPiGateParams) GeRaw(goRaw, nogoRaw float32) float32
GeRaw returns the net GeRaw from go, nogo specific values
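An illustrative, simplified use of the net input and threshold -- the actual layer also applies ThrAct and the timing logic:

// wouldGate reports whether the combined Go / NoGo drive exceeds
// the gating threshold, per GPiGateParams.
func wouldGate(gp *pbwm.GPiGateParams, goRaw, nogoRaw float32) bool {
	return gp.GeRaw(goRaw, nogoRaw) > gp.Thr
}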
type GPiNeuron ¶
type GPiNeuron struct {
ActG float32 `desc:"gating activation -- the activity value when gating occurred in this pool"`
}
GPiNeuron contains extra variables for GPiThalLayer neurons -- stored separately
type GPiThalLayer ¶
type GPiThalLayer struct {
	GateLayer
	Timing   GPiTimingParams `view:"inline" desc:"timing parameters determining when gating happens"`
	Gate     GPiGateParams   `view:"inline" desc:"gating parameters determining threshold for gating etc"`
	SendTo   []string        `desc:"list of layers to send GateState to"`
	GPiNeurs []GPiNeuron     `` /* 144-byte string literal not displayed */
}
GPiThalLayer represents the combined Winner-Take-All dynamic of GPi (SNr) and Thalamus. It is the final arbiter of gating in the BG, weighing Go (direct) and NoGo (indirect) inputs from MatrixLayers (indirectly via GPe layer in case of NoGo). Use 4D structure for this so it matches 4D structure in Matrix layers
func (*GPiThalLayer) AddSendTo ¶
func (ly *GPiThalLayer) AddSendTo(laynm string)
AddSendTo adds given layer name to the list of those to send GateState to
func (*GPiThalLayer) AlphaCycInit ¶
func (ly *GPiThalLayer) AlphaCycInit()
AlphaCycInit handles all initialization at start of new input pattern, including computing input scaling from running average activation, etc. The external input should already have been presented to the network at this point. Also needs to clear the incrementing GeRaw from prjns.
func (*GPiThalLayer) Build ¶
func (ly *GPiThalLayer) Build() error
Build constructs the layer state, including calling Build on the projections.
func (*GPiThalLayer) Defaults ¶
func (ly *GPiThalLayer) Defaults()
func (*GPiThalLayer) GFmInc ¶
func (ly *GPiThalLayer) GFmInc(ltime *leabra.Time)
GFmInc integrates new synaptic conductances from increments sent during last SendGDelta.
func (*GPiThalLayer) GateFmAct ¶
func (ly *GPiThalLayer) GateFmAct(ltime *leabra.Time)
GateFmAct updates GateState from current activations, at time of gating
func (*GPiThalLayer) GateSend ¶
func (ly *GPiThalLayer) GateSend(ltime *leabra.Time)
GateSend updates gating state and sends it along to other layers
func (*GPiThalLayer) GateType ¶
func (ly *GPiThalLayer) GateType() GateTypes
func (*GPiThalLayer) InitActs ¶
func (ly *GPiThalLayer) InitActs()
func (*GPiThalLayer) MatrixPrjns ¶
func (ly *GPiThalLayer) MatrixPrjns() (goPrjn, nogoPrjn *GPiThalPrjn, err error)
MatrixPrjns returns the recv prjns from Go and NoGo MatrixLayer pathways -- error if not found or if prjns are not of the GPiThalPrjn type
func (*GPiThalLayer) RecGateAct ¶
func (ly *GPiThalLayer) RecGateAct(ltime *leabra.Time)
RecGateAct records the gating activation from current activation, when gating occurs based on GateState.Now
func (*GPiThalLayer) SendGateShape ¶
func (ly *GPiThalLayer) SendGateShape() error
SendGateShape sends GateShape info to all SendTo layers -- a convenient config-time way to ensure all are consistent -- also checks validity of SendTo's
func (*GPiThalLayer) SendGateStates ¶
func (ly *GPiThalLayer) SendGateStates()
SendGateStates sends GateStates to other layers
func (*GPiThalLayer) SendToCheck ¶
func (ly *GPiThalLayer) SendToCheck() error
SendToCheck is called during Build to ensure that SendTo layers are valid
func (*GPiThalLayer) SendToMatrixPFC ¶
func (ly *GPiThalLayer) SendToMatrixPFC(prefix string)
SendToMatrixPFC adds standard SendTo layers for PBWM: MatrixGo, NoGo, PFCmnt, PFCout, with optional prefix -- excludes the mnt or out cases if the corresponding shape dimension is 0
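When wiring GPiThal by hand (AddDorsalBG / AddPBWM normally do this for you), a configuration sketch assuming the default layer names:

// wireGating registers the standard PBWM gating targets and verifies
// that all SendTo layers exist with a consistent GateShape.
func wireGating(gpi *pbwm.GPiThalLayer) error {
	gpi.SendToMatrixPFC("") // MatrixGo, MatrixNoGo, PFCmnt, PFCout, as present
	return gpi.SendGateShape()
}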
func (*GPiThalLayer) UnitValByIdx ¶
func (ly *GPiThalLayer) UnitValByIdx(vidx NeuronVars, idx int) float32
UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).
type GPiThalPrjn ¶
type GPiThalPrjn struct {
	deep.Prjn // access as .Prjn

	GeRaw []float32 `desc:"per-recv, per-prjn raw excitatory input"`
}
GPiThalPrjn accumulates per-prjn raw conductance that is needed for separately weighting NoGo vs. Go inputs
func (*GPiThalPrjn) Build ¶
func (pj *GPiThalPrjn) Build() error
func (*GPiThalPrjn) InitGInc ¶
func (pj *GPiThalPrjn) InitGInc()
func (*GPiThalPrjn) RecvGInc ¶
func (pj *GPiThalPrjn) RecvGInc()
RecvGInc increments the receiver's GeInc or GiInc from that of all the projections.
type GPiTimingParams ¶
type GPiTimingParams struct {
	GateQtr leabra.Quarters `` /* 249-byte string literal not displayed */
	Cycle   int             `` /* 139-byte string literal not displayed */
}
GPiTimingParams has timing parameters for gating in the GPiThal layer
func (*GPiTimingParams) Defaults ¶
func (gt *GPiTimingParams) Defaults()
func (*GPiTimingParams) IsGateQtr ¶
func (gt *GPiTimingParams) IsGateQtr(qtr int) bool
IsGateQtr returns true if the given quarter (0-3) is set as a Gating quarter
func (*GPiTimingParams) SetGateQtr ¶
func (gt *GPiTimingParams) SetGateQtr(qtr leabra.Quarters)
SetGateQtr sets given gating quarter (adds to any existing) -- Q1, 3 by default
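For example, to gate in quarter 2 in addition to the Q1 / Q3 defaults (a sketch, assuming the standard leabra.Q2 Quarters flag):

// addQ2Gating enables gating in quarter 2 and confirms it took effect.
func addQ2Gating(gpi *pbwm.GPiThalLayer) bool {
	gpi.Timing.SetGateQtr(leabra.Q2) // adds to the existing Q1, Q3 defaults
	return gpi.Timing.IsGateQtr(1)   // IsGateQtr takes a 0-based quarter: 1 == Q2
}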
type GateLayer ¶
type GateLayer struct {
	ModLayer
	GateShp    GateShape   `desc:"shape of overall Maint + Out gating system that this layer is part of"`
	GateStates []GateState `` /* 192-byte string literal not displayed */
}
GateLayer is a layer that cares about thalamic (BG) gating signals, and has slice of GateState fields that a gating layer will update.
func (*GateLayer) AvgMaxGeRaw ¶
AvgMaxGeRaw computes the average and max GeRaw stats
func (*GateLayer) Build ¶
Build constructs the layer state, including calling Build on the projections.
func (*GateLayer) GateState ¶
GateState returns the GateState for given pool index (0 based) on this layer
func (*GateLayer) SetGateState ¶
SetGateState sets the GateState for given pool index (individual pools start at 1) on this layer
func (*GateLayer) SetGateStates ¶
SetGateStates sets the GateStates from given source states, of given gating type
func (*GateLayer) UnitValByIdx ¶
func (ly *GateLayer) UnitValByIdx(vidx NeuronVars, idx int) float32
UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).
type GateLayerer ¶
type GateLayerer interface {
	// AsGate returns the layer as a GateLayer layer, for direct access to fields
	AsGate() *GateLayer

	// GateType returns the type of gating supported by this layer
	GateType() GateTypes

	// GateShape returns the shape of gating system that this layer is part of
	GateShape() *GateShape

	// GateState returns the GateState for given pool index (0-based) on this layer
	GateState(poolIdx int) *GateState

	// SetGateState sets the GateState for given pool index (0-based) on this layer
	SetGateState(poolIdx int, state *GateState)

	// SetGateStates sets the GateStates from given source states, of given gating type
	SetGateStates(states []GateState, typ GateTypes)
}
GateLayerer is an interface for GateLayer layers
type GateShape ¶
type GateShape struct {
	Y      int `desc:"overall shape dimensions for the full set of gating pools, e.g., as present in the Matrix and GPiThal levels"`
	MaintX int `desc:"how many pools in the X dimension are Maint gating pools -- rest are Out"`
	OutX   int `desc:"how many pools in the X dimension are Out gating pools -- comes after Maint"`
}
GateShape defines the shape of the outer pool dimensions of gating layers, organized into Maint and Out subsets which are arrayed along the X axis with Maint first (to the left) then Out. Individual layers may only represent Maint or Out subsets of this overall shape, but all need to have this coordinated shape information to be able to share gating state information. Each layer represents gate state information in their native geometry -- FullIndex1D provides access from a subset to full set.
func (*GateShape) FullIndex1D ¶
FullIndex1D returns the index into full MaintOut GateStates for given 1D pool idx (0-based) *from given GateType*.
func (*GateShape) Index ¶
Index returns the index into GateStates for given 2D pool coords for given GateType. Each type stores gate info in its "native" 2D format.
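An indexing sketch, assuming the Maint / Out GateTypes constants and an Index(y, x, typ) signature (neither is shown in this listing):

// fullIdx maps a pool from Maint-native coordinates to its position
// in the full Maint+Out GateStates ordering.
func fullIdx(shp *pbwm.GateShape) int {
	mi := shp.Index(1, 0, pbwm.Maint)      // pool (y=1, x=0) in Maint coords
	return shp.FullIndex1D(mi, pbwm.Maint) // same pool in the full set
}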
type GateState ¶
type GateState struct {
	Act   float32         `` /* 203-byte string literal not displayed */
	Now   bool            `desc:"gating timing signal -- true if this is the moment when gating takes place"`
	Cnt   int             `` /* 307-byte string literal not displayed */
	GeRaw minmax.AvgMax32 `copy:"-" desc:"not copied: average and max Ge Raw excitatory conductance values -- before being influenced by gating signals"`
}
GateState is gating state values stored in layers that receive thalamic gating signals including MatrixLayer, PFCLayer, GPiThal layer, etc -- use GateLayer as base layer to include.
type GateTypes ¶
type GateTypes int
GateTypes for region of striatum
func (*GateTypes) FromString ¶
func (GateTypes) MarshalJSON ¶
func (*GateTypes) UnmarshalJSON ¶
type Layer ¶
type Layer struct {
	ModLayer
	DaMod DaModParams `` /* 180-byte string literal not displayed */
}
pbwm.Layer is the default layer type for PBWM framework, based on the ModLayer with dopamine modulation -- can be used for basic DA-modulated learning.
type MatrixLayer ¶
type MatrixLayer struct {
	GateLayer
	MaintN      int            `desc:"number of Maint Pools in X outer dimension of 4D shape -- Out gating after that"`
	DaR         DaReceptors    `desc:"dominant type of dopamine receptor -- D1R for Go pathway, D2R for NoGo"`
	Matrix      MatrixParams   `view:"inline" desc:"matrix parameters"`
	MatrixNeurs []MatrixNeuron `` /* 147-byte string literal not displayed */
}
MatrixLayer represents the dorsal matrisome MSN's that are the main Go / NoGo gating units in BG driving updating of PFC WM in PBWM. D1R = Go, D2R = NoGo, and outer 4D Pool X dimension determines GateTypes per MaintN (Maint on the left up to MaintN, Out on the right after)
func (*MatrixLayer) ActFmG ¶
func (ly *MatrixLayer) ActFmG(ltime *leabra.Time)
ActFmG computes rate-code activation from Ge, Gi, Gl conductances and updates learning running-average activations from that Act. The Matrix version extends this to call DaAChFmLay.
func (*MatrixLayer) Build ¶
func (ly *MatrixLayer) Build() error
Build constructs the layer state, including calling Build on the projections. You MUST have properly configured the Inhib.Pool.On setting by this point, to properly allocate Pools for the unit groups if necessary.
func (*MatrixLayer) DALrnFmDA ¶
func (ly *MatrixLayer) DALrnFmDA(da float32) float32
DALrnFmDA returns effective learning dopamine value from given raw DA value applying Burst and Dip Gain factors, and then reversing sign for D2R.
func (*MatrixLayer) DaAChFmLay ¶
func (ly *MatrixLayer) DaAChFmLay(ltime *leabra.Time)
DaAChFmLay computes Da and ACh from layer and Shunt received from PatchLayer units
func (*MatrixLayer) Defaults ¶
func (ly *MatrixLayer) Defaults()
func (*MatrixLayer) DoQuarter2DWt ¶
func (ly *MatrixLayer) DoQuarter2DWt() bool
DoQuarter2DWt indicates whether to do optional Q2 DWt
func (*MatrixLayer) GateType ¶
func (ly *MatrixLayer) GateType() GateTypes
func (*MatrixLayer) InhibFmGeAct ¶
func (ly *MatrixLayer) InhibFmGeAct(ltime *leabra.Time)
InhibFmGeAct computes inhibition Gi from Ge and Act averages within relevant Pools. The Matrix version applies OutAChInhib to bias output gating on reward trials.
func (*MatrixLayer) InitActs ¶
func (ly *MatrixLayer) InitActs()
func (*MatrixLayer) RecGateAct ¶
func (ly *MatrixLayer) RecGateAct(ltime *leabra.Time)
RecGateAct records the gating activation from current activation, when gating occurs based on GateState.Now
func (*MatrixLayer) UnitValByIdx ¶
func (ly *MatrixLayer) UnitValByIdx(vidx NeuronVars, idx int) float32
UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).
type MatrixNeuron ¶
type MatrixNeuron struct {
	DA    float32 `desc:"per-neuron modulated dopamine level, derived from layer DA and Shunt"`
	DALrn float32 `desc:"per-neuron effective learning dopamine value -- gain modulated and sign reversed for D2R"`
	ACh   float32 `desc:"per-neuron modulated ACh level, derived from layer ACh and Shunt"`
	Shunt float32 `desc:"shunting input received from Patch neurons (in reality flows through SNc DA pathways)"`
	ActG  float32 `desc:"gating activation -- the activity value when gating occurred in this pool"`
}
MatrixNeuron contains extra variables for MatrixLayer neurons -- stored separately
type MatrixParams ¶
type MatrixParams struct {
	PatchShunt  float32 `` /* 173-byte string literal not displayed */
	ShuntACh    bool    `` /* 269-byte string literal not displayed */
	OutAChInhib float32 `` /* 354-byte string literal not displayed */
	BurstGain   float32 `` /* 237-byte string literal not displayed */
	DipGain     float32 `` /* 237-byte string literal not displayed */
}
MatrixParams has parameters for Dorsal Striatum Matrix computation These are the main Go / NoGo gating units in BG driving updating of PFC WM in PBWM
func (*MatrixParams) Defaults ¶
func (mp *MatrixParams) Defaults()
type MatrixTracePrjn ¶
type MatrixTracePrjn struct {
	deep.Prjn
	Trace  TraceParams `view:"inline" desc:"special parameters for matrix trace learning"`
	TrSyns []TraceSyn  `desc:"trace synaptic state values, ordered by the sending layer units which own them -- one-to-one with SConIdx array"`
}
MatrixTracePrjn does dopamine-modulated, gated trace learning, for Matrix learning in PBWM context
func (*MatrixTracePrjn) Build ¶
func (pj *MatrixTracePrjn) Build() error
func (*MatrixTracePrjn) ClearTrace ¶
func (pj *MatrixTracePrjn) ClearTrace()
func (*MatrixTracePrjn) DWt ¶
func (pj *MatrixTracePrjn) DWt()
DWt computes the weight change (learning) -- on sending projections.
func (*MatrixTracePrjn) Defaults ¶
func (pj *MatrixTracePrjn) Defaults()
func (*MatrixTracePrjn) InitWts ¶
func (pj *MatrixTracePrjn) InitWts()
type ModLayer ¶
type ModLayer struct {
	deep.Layer
	DA  float32 `desc:"current dopamine level for this layer"`
	ACh float32 `desc:"current acetylcholine level for this layer"`
	SE  float32 `desc:"current serotonin level for this layer"`
}
ModLayer is the base layer type for PBWM framework -- has variables for the layer-level neuromodulatory variables: dopamine, ach, serotonin. The pbwm.Layer is a usable generic version of this base ModLayer, and other more specialized types build directly from ModLayer.
func (*ModLayer) DoQuarter2DWt ¶
DoQuarter2DWt indicates whether to do optional Q2 DWt
func (*ModLayer) GateSend ¶
GateSend updates gating state and sends it along to other layers. Most layers don't implement this -- only gating layers do.
func (*ModLayer) Quarter2DWt ¶
func (ly *ModLayer) Quarter2DWt()
Quarter2DWt is optional Q2 DWt -- define where relevant
func (*ModLayer) QuarterFinal ¶
QuarterFinal does updating after end of a quarter
func (*ModLayer) RecGateAct ¶
RecGateAct records the gating activation from current activation, when gating occurs based on GateState.Now -- only for gating layers
func (*ModLayer) SendMods ¶
SendMods is called at end of Cycle to send modulator signals (DA, etc) which will then be active for the next cycle of processing
func (*ModLayer) UnitVal1DTry ¶
func (*ModLayer) UnitValByIdx ¶
func (ly *ModLayer) UnitValByIdx(vidx NeuronVars, idx int) float32
UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).
func (*ModLayer) UnitValTry ¶
UnitValTry returns value of given variable name on given unit, using shape-based dimensional index
func (*ModLayer) UnitVals ¶
UnitVals fills in values of given variable name on unit, for each unit in the layer, into given float32 slice (only resized if not big enough). Returns error on invalid var name.
func (*ModLayer) UnitValsTensor ¶
UnitValsTensor returns values of given variable name on unit for each unit in the layer, as a float32 tensor in same shape as layer units.
func (*ModLayer) UnitVarNames ¶
UnitVarNames returns a list of variable names available on the units in this layer. Mod returns *layer level* vars.
func (*ModLayer) UpdateParams ¶
func (ly *ModLayer) UpdateParams()
UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving projections of this layer
type Network ¶
pbwm.Network has parameters for running a DeepLeabra network
func (*Network) AddClampDaLayer ¶
func (nt *Network) AddClampDaLayer(name string) *ClampDaLayer
AddClampDaLayer adds a ClampDaLayer of given name
func (*Network) AddDorsalBG ¶
func (nt *Network) AddDorsalBG(prefix string, nY, nMaint, nOut, nNeurY, nNeurX int) (mtxGo, mtxNoGo, gpe, gpi leabra.LeabraLayer)
AddDorsalBG adds MatrixGo, NoGo, GPe, and GPiThal layers, with given optional prefix. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. Appropriate PoolOneToOne connections are made to drive GPiThal, with BgFixed class name set so they can be styled appropriately (no learning, WtRnd.Mean=0.8, Var=0)
func (*Network) AddGPeLayer ¶
AddGPeLayer adds a ModLayer to serve as a GPe layer, with given name. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has 1x1 neurons.
func (*Network) AddGPiThalLayer ¶
func (nt *Network) AddGPiThalLayer(name string, nY, nMaint, nOut int) *GPiThalLayer
AddGPiThalLayer adds a GPiThalLayer of given size, with given name. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has 1x1 neurons.
func (*Network) AddMatrixLayer ¶
func (nt *Network) AddMatrixLayer(name string, nY, nMaint, nOut, nNeurY, nNeurX int, da DaReceptors) *MatrixLayer
AddMatrixLayer adds a MatrixLayer of given size, with given name. nY = number of pools in Y dimension, nMaint + nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. da gives the DaReceptor type (D1R = Go, D2R = NoGo)
func (*Network) AddPBWM ¶
func (nt *Network) AddPBWM(prefix string, nY, nMaint, nOut, nNeurBgY, nNeurBgX, nNeurPfcY, nNeurPfcX int) (mtxGo, mtxNoGo, gpe, gpi, pfcMnt, pfcMntD, pfcOut, pfcOutD leabra.LeabraLayer)
AddPBWM adds a DorsalBG and PFC with the given params
func (*Network) AddPFC ¶
func (nt *Network) AddPFC(prefix string, nY, nMaint, nOut, nNeurY, nNeurX int) (pfcMnt, pfcMntD, pfcOut, pfcOutD leabra.LeabraLayer)
AddPFC adds paired PFCmnt, PFCout and associated Deep layers, with given optional prefix. nY = number of pools in Y dimension, nMaint, nOut are pools in X dimension, and each pool has nNeurY, nNeurX neurons. Appropriate PoolOneToOne connections are made within super / deep (see AddPFCLayer) and between PFCmntD -> PFCout.
func (*Network) AddPFCLayer ¶
func (nt *Network) AddPFCLayer(name string, nY, nX, nNeurY, nNeurX int, out bool) (sp, dp *PFCLayer)
AddPFCLayer adds a PFCLayer, super and deep, of given size, with given name. nY, nX = number of pools in Y, X dimensions, and each pool has nNeurY, nNeurX neurons. out is true for output-gating layer. Both have the class "PFC" set. deep receives one-to-one projections of class "PFCToDeep" from super, and sends "PFCFmDeep", and is positioned behind it.
func (*Network) AddRWLayers ¶
func (nt *Network) AddRWLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, da leabra.LeabraLayer)
AddRWLayers adds a simple Rescorla-Wagner (PV only) dopamine system, with a primary Reward layer, a RWPred prediction layer, and a dopamine layer that computes the difference. It only generates DA when the Rew layer has external input -- otherwise zero. The projection from RWPred to DA is given class RWPredToDA -- it should have no learning and a weight of 1.
func (*Network) AddTDLayers ¶
func (nt *Network) AddTDLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, ri, td leabra.LeabraLayer)
AddTDLayers adds the standard TD temporal differences layers, generating a DA signal. The projection from Rew to RewInteg is given class TDRewToInteg -- it should have no learning and a weight of 1.
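Sketches of adding either dopamine system; relpos.Behind is the standard emergent relative-position constant (github.com/emer/emergent/relpos), and the spacing value is arbitrary:

// addDASystems adds a Rescorla-Wagner DA circuit and a TD DA circuit.
func addDASystems(nt *pbwm.Network) {
	rew, rp, da := nt.AddRWLayers("RW", relpos.Behind, 2)
	_, _, _ = rew, rp, da
	rew2, rp2, ri, td := nt.AddTDLayers("TD", relpos.Behind, 2)
	_, _, _, _ = rew2, rp2, ri, td
}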
func (*Network) Cycle ¶
Cycle runs one cycle of activation updating. PBWM calls GateSend after Cycle and before DeepBurst. The Deep version adds a call to update DeepBurst at the end.
func (*Network) Defaults ¶
func (nt *Network) Defaults()
Defaults sets all the default parameters for all layers and projections
func (*Network) GateSend ¶
GateSend is called at end of Cycle, computes Gating and sends to other layers
func (*Network) RecGateAct ¶
RecGateAct is called after GateSend, to record gating activations at time of gating
func (*Network) SendMods ¶
SendMods is called at end of Cycle to send modulator signals (DA, etc) which will then be active for the next cycle of processing
func (*Network) UpdateParams ¶
func (nt *Network) UpdateParams()
UpdateParams updates all the derived parameters if any have changed, for all layers and projections
type NeuronVars ¶
type NeuronVars int
NeuronVars are indexes into extra PBWM neuron-level variables
const (
	DA NeuronVars = iota
	DALrn
	ACh
	SE
	GateAct
	GateNow
	GateCnt
	ActG
	Cust1
	NeuronVarsN
)
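These indexes are the vidx argument to UnitValByIdx on any layer implementing the PBWMLayer interface below, e.g. (sketch):

// gateAct reads the gating-time activation of the first flat neuron.
func gateAct(ly pbwm.PBWMLayer) float32 {
	return ly.UnitValByIdx(pbwm.ActG, 0)
}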
type PBWMLayer ¶
type PBWMLayer interface {
	deep.DeepLayer

	// AsMod returns this layer as a pbwm.ModLayer (minimum layer in PBWM)
	AsMod() *ModLayer

	// AsGate returns this layer as a pbwm.GateLayer (gated layer type) -- nil if not impl
	AsGate() *GateLayer

	// UnitValByIdx returns value of given PBWM-specific variable by variable index
	// and flat neuron index (from layer or neuron-specific one).
	UnitValByIdx(vidx NeuronVars, idx int) float32

	// GateSend updates gating state and sends it along to other layers.
	// Called after std Cycle methods.
	// Only implemented for gating layers.
	GateSend(ltime *leabra.Time)

	// RecGateAct records the gating activation from current activation, when gating occurs
	// based on GateState.Now
	RecGateAct(ltime *leabra.Time)

	// SendMods is called at end of Cycle to send modulator signals (DA, etc)
	// which will then be active for the next cycle of processing
	SendMods(ltime *leabra.Time)

	// Quarter2DWt is optional Q2 DWt -- PFC and matrix layers can do this as appropriate
	Quarter2DWt()

	// DoQuarter2DWt returns true if this recv layer should have its weights updated
	DoQuarter2DWt() bool
}
PBWMLayer defines the essential algorithmic API for PBWM at the layer level. Builds upon the deep.DeepLayer API
type PBWMPrjn ¶
PBWMPrjn defines the essential algorithmic API for PBWM at the projection level. Builds upon the deep.DeepPrjn API
type PFCDyn ¶
type PFCDyn struct {
	Init     float32 `desc:"initial value at point when gating starts -- MUST be > 0 when used."`
	RiseTau  float32 `` /* 161-byte string literal not displayed */
	DecayTau float32 `` /* 162-byte string literal not displayed */
	Desc     string  `desc:"description of this factor"`
}
PFC dynamic behavior element -- defines the dynamic behavior of deep layer PFC units
type PFCDyns ¶
type PFCDyns []*PFCDyn
PFCDyns is a slice of dyns that provides deterministic control over PFC maintenance dynamics -- the rows of PFC units (along the Y axis) behave according to the corresponding index of Dyns. Ensure that the layer Y dim is an even multiple of len(Dyns).
func (*PFCDyns) FullDyn ¶
FullDyn creates a full dynamic Dyn configuration, with 5 different dynamic profiles: stable maint, phasic, rising maint, decaying maint, and up / down maint. tau is the rise / decay base time constant.
func (*PFCDyns) MaintOnly ¶
func (pd *PFCDyns) MaintOnly()
MaintOnly creates basic default maintenance dynamic configuration -- every unit just maintains over time. This should be used for Output gating layer.
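A configuration sketch, assuming the deep maintenance PFC layer holds the Dyns and that its Y dimension is a multiple of the 5 FullDyn profiles:

// configDyns gives a maintenance PFC layer the full set of 5 dynamic
// profiles with base time constant 10, and enables their use.
func configDyns(pfcMntD *pbwm.PFCLayer) {
	pfcMntD.Dyns.FullDyn(10)
	pfcMntD.Maint.UseDyn = true
}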
type PFCGateParams ¶
type PFCGateParams struct {
	OutGate   bool    `desc:"if true, this PFC layer is an output gate layer, which means that it only has transient activation during gating"`
	OutQ1Only bool    `` /* 345-byte string literal not displayed */
	MntThal   float32 `` /* 277-byte string literal not displayed */
}
PFCGateParams has parameters for PFC gating
func (*PFCGateParams) Defaults ¶
func (gp *PFCGateParams) Defaults()
type PFCLayer ¶
type PFCLayer struct {
	GateLayer
	Gate     PFCGateParams  `view:"inline" desc:"PFC Gating parameters"`
	Maint    PFCMaintParams `view:"inline" desc:"PFC Maintenance parameters"`
	Dyns     PFCDyns        `` /* 257-byte string literal not displayed */
	PFCNeurs []PFCNeuron    `` /* 144-byte string literal not displayed */
}
PFCLayer is a Prefrontal Cortex BG-gated working memory layer
func (*PFCLayer) ActFmG ¶
ActFmG computes rate-code activation from Ge, Gi, Gl conductances and updates learning running-average activations from that Act. The PFC version extends this to call Gating.
func (*PFCLayer) Build ¶
Build constructs the layer state, including calling Build on the projections.
func (*PFCLayer) BurstFmAct ¶
BurstFmAct updates the Burst layer 5 IB bursting value from the current Act (superficial activation), subject to thresholding.
func (*PFCLayer) ClearCtxtPool ¶
ClearCtxtPool clears CtxtGe in given pool index (0 based)
func (*PFCLayer) ClearMaint ¶
ClearMaint resets maintenance in corresponding pool (0 based) in maintenance layer
func (*PFCLayer) CtxtFmGe ¶
CtxtFmGe integrates new CtxtGe excitatory conductance from projections, and computes overall Ctxt value, only on Deep layers. This must be called at the end of the DeepBurst quarter for this layer, after SendCtxtGe.
func (*PFCLayer) DecayStatePool ¶
DecayStatePool decays activation state by given proportion in given pool index (0 based)
func (*PFCLayer) DeepMaint ¶
DeepMaint updates deep maintenance activations -- called at end of the bursting quarter via CtxtFmGe, after CtxtGe is updated and available. The quarter check has already been done.
func (*PFCLayer) DeepPFC ¶
DeepPFC returns the corresponding PFC deep layer with the same name + "D" -- could be nil
func (*PFCLayer) DoQuarter2DWt ¶
DoQuarter2DWt indicates whether to do optional Q2 DWt
func (*PFCLayer) GFmInc ¶
GFmInc integrates new synaptic conductances from increments sent during last SendGDelta.
func (*PFCLayer) GateStateToDeep ¶
GateStateToDeep copies superficial gate state to corresponding deep layer. This happens at end of BurstQtr (from QuarterFinal), prior to SendCtxtGe call which happens at Network level after QuarterFinal
func (*PFCLayer) MaintPFC ¶
MaintPFC returns the corresponding PFC maintenance layer with the same name but out -> mnt -- could be nil
func (*PFCLayer) QuarterFinal ¶
QuarterFinal does updating after end of a quarter
func (*PFCLayer) RecGateAct ¶
RecGateAct records the gating activation from current activation, when gating occurs based on GateState.Now
func (*PFCLayer) SendCtxtGe ¶
SendCtxtGe sends full Burst activation over BurstCtxt projections to integrate CtxtGe excitatory conductance on deep layers. This must be called at the end of the DeepBurst quarter for this layer.
func (*PFCLayer) UnitValByIdx ¶
func (ly *PFCLayer) UnitValByIdx(vidx NeuronVars, idx int) float32
UnitValByIdx returns value of given PBWM-specific variable by variable index and flat neuron index (from layer or neuron-specific one).
type PFCMaintParams ¶
type PFCMaintParams struct {
	SMnt     minmax.F32 `` /* 196-byte string literal not displayed */
	MntGeMax float32    `` /* 282-byte string literal not displayed */
	Clear    float32    `` /* 210-byte string literal not displayed */
	UseDyn   bool       `` /* 262-byte string literal not displayed */
	MaxMaint int        `` /* 200-byte string literal not displayed */
}
PFCMaintParams for PFC maintenance functions
func (*PFCMaintParams) Defaults ¶
func (mp *PFCMaintParams) Defaults()
type PFCNeuron ¶
type PFCNeuron struct {
	ActG  float32 `desc:"gating activation -- the activity value when gating occurred in this pool"`
	Maint float32 `desc:"maintenance value for Deep layers"`
}
PFCNeuron contains extra variables for PFCLayer neurons -- stored separately
type RWDaLayer ¶
type RWDaLayer struct {
	DaSrcLayer
	RewLay string `desc:"name of Reward-representing layer from which this computes DA -- if nothing clamped, no dopamine computed"`
}
RWDaLayer computes a dopamine (Da) signal based on a simple Rescorla-Wagner learning dynamic (i.e., PV learning in the PVLV framework). It computes difference between r(t) and RWPred inputs. r(t) is accessed directly from a Rew layer -- if no external input then no DA is computed -- critical for effective use of RW only for PV cases. Receives RWPred prediction from direct (fixed) weights.
type RWPredLayer ¶
type RWPredLayer struct {
	ModLayer
	PredRange minmax.F32 `` /* 180-byte string literal not displayed */
}
RWPredLayer computes reward prediction for a simple Rescorla-Wagner learning dynamic (i.e., PV learning in the PVLV framework). Activity is computed as linear function of excitatory conductance (which can be negative -- there are no constraints). Use with RWPrjn which does simple delta-rule learning on minus-plus.
func (*RWPredLayer) ActFmG ¶
func (ly *RWPredLayer) ActFmG(ltime *leabra.Time)
ActFmG computes linear activation for RWPred
func (*RWPredLayer) Defaults ¶
func (ly *RWPredLayer) Defaults()
type RWPrjn ¶
RWPrjn does dopamine-modulated learning for reward prediction: Da * Send.Act. Use in RWPredLayer typically to generate reward predictions. Has no weight bounds or limits on sign, etc.
type SeSrcLayer ¶
SeSrcLayer is the basic type of layer that sends SE (serotonin) to other layers. It uses a list of layer names to send to, rather than the Prjn infrastructure, since this is a global broadcast modulator -- individual neurons can use it in their own special way.
func (*SeSrcLayer) AddSendTo ¶
func (ly *SeSrcLayer) AddSendTo(laynm string)
AddSendTo adds given layer name to the list of those to send Se to
func (*SeSrcLayer) Build ¶
func (ly *SeSrcLayer) Build() error
Build constructs the layer state, including calling Build on the projections.
func (*SeSrcLayer) SendSe ¶
func (ly *SeSrcLayer) SendSe(se float32)
SendSe sends serotonin to SendTo list of layers
func (*SeSrcLayer) SendToCheck ¶
func (ly *SeSrcLayer) SendToCheck() error
SendToCheck is called during Build to ensure that SendTo layers are valid
type TDDaLayer ¶
type TDDaLayer struct {
	DaSrcLayer
	RewInteg string `desc:"name of TDRewIntegLayer from which this computes the temporal derivative"`
}
TDDaLayer computes a dopamine (Da) signal as the temporal difference (TD) between the TDRewIntegLayer activations in the minus and plus phase.
func (*TDDaLayer) Build ¶
Build constructs the layer state, including calling Build on the projections.
func (*TDDaLayer) RewIntegLayer ¶
func (ly *TDDaLayer) RewIntegLayer() (*TDRewIntegLayer, error)
type TDRewIntegLayer ¶
type TDRewIntegLayer struct {
	ModLayer
	RewInteg TDRewIntegParams `desc:"parameters for reward integration"`
}
TDRewIntegLayer is the temporal differences reward integration layer. It represents estimated value V(t) in the minus phase, and estimated V(t+1) + r(t) in the plus phase. It computes r(t) from (typically fixed) weights from a reward layer, and directly accesses values from RewPred layer.
func (*TDRewIntegLayer) ActFmG ¶
func (ly *TDRewIntegLayer) ActFmG(ltime *leabra.Time)
func (*TDRewIntegLayer) Build ¶
func (ly *TDRewIntegLayer) Build() error
Build constructs the layer state, including calling Build on the projections.
func (*TDRewIntegLayer) Defaults ¶
func (ly *TDRewIntegLayer) Defaults()
func (*TDRewIntegLayer) RewPredLayer ¶
func (ly *TDRewIntegLayer) RewPredLayer() (*TDRewPredLayer, error)
type TDRewIntegParams ¶
type TDRewIntegParams struct {
	Discount float32 `desc:"discount factor -- how much to discount the future prediction from RewPred"`
	RewPred  string  `desc:"name of TDRewPredLayer to get reward prediction from"`
}
TDRewIntegParams are params for reward integrator layer
func (*TDRewIntegParams) Defaults ¶
func (tp *TDRewIntegParams) Defaults()
type TDRewPredLayer ¶
type TDRewPredLayer struct {
ModLayer
}
TDRewPredLayer is the temporal differences reward prediction layer. It represents estimated value V(t) in the minus phase, and computes estimated V(t+1) based on its learned weights in plus phase. Use TDRewPredPrjn for DA modulated learning.
func (*TDRewPredLayer) ActFmG ¶
func (ly *TDRewPredLayer) ActFmG(ltime *leabra.Time)
ActFmG computes linear activation for TDRewPred
type TDRewPredPrjn ¶
TDRewPredPrjn does dopamine-modulated learning for reward prediction: DWt = Da * Send.ActQ0 (activity on the *previous* timestep). Use in TDRewPredLayer typically to generate reward predictions. Has no weight bounds or limits on sign, etc.
func (*TDRewPredPrjn) DWt ¶
func (pj *TDRewPredPrjn) DWt()
DWt computes the weight change (learning) -- on sending projections.
func (*TDRewPredPrjn) Defaults ¶
func (pj *TDRewPredPrjn) Defaults()
func (*TDRewPredPrjn) WtFmDWt ¶
func (pj *TDRewPredPrjn) WtFmDWt()
WtFmDWt updates the synaptic weight values from delta-weight changes -- on sending projections
type TraceParams ¶
type TraceParams struct {
	NotGatedLR    float32 `` /* 351-byte string literal not displayed */
	GateNoGoPosLR float32 `` /* 947-byte string literal not displayed */
	AChResetThr   float32 `min:"0" def:"0.5" desc:"threshold on receiving unit ACh value, sent by TAN units, for resetting the trace"`
	Deriv         bool    `` /* 305-byte string literal not displayed */
	Decay         float32 `` /* 294-byte string literal not displayed */
}
Params for trace-based learning in the MatrixTracePrjn
func (*TraceParams) Defaults ¶
func (tp *TraceParams) Defaults()
func (*TraceParams) LrateMod ¶
func (tp *TraceParams) LrateMod(gated, d2r, posDa bool) float32
LrateMod returns the learning rate modulator based on gating, d2r, and posDa factors
func (*TraceParams) LrnFactor ¶
func (tp *TraceParams) LrnFactor(act float32) float32
LrnFactor returns the multiplicative factor for the level of MSN activation. If Deriv is true, this is 2 * act * (1 - act) -- the factor of 2 compensates for the otherwise reduced learning from these factors. Otherwise it is just act.
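A simplified sketch of how these two factors combine per synapse -- the actual DWt also uses sending activations, gating state, and ACh-based trace resets:

// traceContrib returns the modulated learning contribution for a
// receiving unit with activation act, under the given gating/DA conditions.
func traceContrib(tp *pbwm.TraceParams, gated, d2r, posDa bool, act float32) float32 {
	return tp.LrateMod(gated, d2r, posDa) * tp.LrnFactor(act)
}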
type TraceSyn ¶
type TraceSyn struct {
	NTr float32 `` /* 136-byte string literal not displayed */
	Tr  float32 `` /* 183-byte string literal not displayed */
}
TraceSyn holds extra synaptic state for trace projections
type Valences ¶
type Valences int
Valences for Appetitive and Aversive valence coding