Documentation ¶
Index ¶
- Constants
- Variables
- func AddAmygdala(nt *axon.Network, prefix string, neg bool, nUs, unY, unX int, space float32) (...)
- func AddBLALayers(nt *axon.Network, prefix string, pos bool, nUs, unY, unX int, ...) (acq, ext axon.AxonLayer)
- type BLALayer
- func (ly *BLALayer) Build() error
- func (ly *BLALayer) Defaults()
- func (ly *BLALayer) GInteg(ni int, nrn *axon.Neuron, ctime *axon.Time)
- func (ly *BLALayer) GetACh() float32
- func (ly *BLALayer) InitActs()
- func (ly *BLALayer) PlusPhase(ctime *axon.Time)
- func (ly *BLALayer) SetACh(ach float32)
- func (ly *BLALayer) USActiveFmUS(ctime *axon.Time)
- func (ly *BLALayer) UnitVal1D(varIdx int, idx int) float32
- func (ly *BLALayer) UnitVarIdx(varNm string) (int, error)
- func (ly *BLALayer) UnitVarNum() int
- type BLAParams
- type BLAPrjn
- type DARs
- type DaModParams
- type LayerType
- type PPTgLayer
- func (ly *PPTgLayer) Build() error
- func (ly *PPTgLayer) Defaults()
- func (ly *PPTgLayer) GFmRawSyn(ni int, nrn *axon.Neuron, ctime *axon.Time)
- func (ly *PPTgLayer) GInteg(ni int, nrn *axon.Neuron, ctime *axon.Time)
- func (ly *PPTgLayer) NewState()
- func (ly *PPTgLayer) SpikeFmG(ni int, nrn *axon.Neuron, ctime *axon.Time)
- type PPTgNeuron
Constants ¶
const (
	// BLA is a basolateral amygdala layer
	BLA emer.LayerType = emer.LayerType(rl.LayerTypeN) + iota

	// CeM is a central nucleus of the amygdala layer
	// integrating Acq - Ext for a net value response.
	CeM

	// PPTg is a pedunculopontine tegmental gyrus layer
	// computing a positively rectified temporal derivative of its input
	PPTg
)
Variables ¶
var KiT_BLALayer = kit.Types.AddType(&BLALayer{}, LayerProps)
var KiT_DARs = kit.Enums.AddEnum(DARsN, kit.NotBitFlag, nil)
var KiT_LayerType = kit.Enums.AddEnumExt(rl.KiT_LayerType, LayerTypeN, kit.NotBitFlag, nil)
var KiT_PPTgLayer = kit.Types.AddType(&PPTgLayer{}, LayerProps)
var LayerProps = ki.Props{
	"EnumType:Typ": KiT_LayerType,
	"ToolBar": ki.PropSlice{
		{"Defaults", ki.Props{
			"icon": "reset",
			"desc": "return all parameters to their initial default values",
		}},
		{"InitWts", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's weight values according to prjn parameters, for all *sending* projections out of this layer",
		}},
		{"InitActs", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's activation values",
		}},
		{"sep-act", ki.BlankProp{}},
		{"LesionNeurons", ki.Props{
			"icon": "close",
			"desc": "Lesion (set the Off flag) for given proportion of neurons in the layer (number must be 0 -- 1, NOT percent!)",
			"Args": ki.PropSlice{
				{"Proportion", ki.Props{
					"desc": "proportion (0 -- 1) of neurons to lesion",
				}},
			},
		}},
		{"UnLesionNeurons", ki.Props{
			"icon": "reset",
			"desc": "Un-Lesion (reset the Off flag) for all neurons in the layer",
		}},
	},
}
LayerProps are required to get the extended EnumType
Functions ¶
func AddAmygdala ¶ added in v1.5.12
func AddAmygdala(nt *axon.Network, prefix string, neg bool, nUs, unY, unX int, space float32) (blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, cemPos, cemNeg, pptg axon.AxonLayer)
AddAmygdala adds a full amygdala complex including BLA, CeM, and PPTg. Inclusion of negative valence is optional with neg arg -- neg* layers are nil if not included.
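A minimal usage sketch based on the documented signature, assuming this package lives at github.com/emer/axon/pvlv and following the usual emergent pattern for constructing a network; the sizes here are arbitrary:

	package main

	import (
		"github.com/emer/axon/axon"
		"github.com/emer/axon/pvlv" // assumed import path for this package
	)

	func main() {
		net := &axon.Network{}
		net.InitName(net, "PVLV") // standard emer network initialization
		// 4 USs, 1x4 unit pools per US, negative valence included (neg = true),
		// 2 units of space between layers.
		blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, cemPos, cemNeg, pptg :=
			pvlv.AddAmygdala(net, "", true, 4, 1, 4, 2)
		// silence unused-variable errors in this sketch
		_, _, _, _, _, _, _ = blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, cemPos, cemNeg, pptg
	}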
Types ¶
type BLALayer ¶ added in v1.5.2
type BLALayer struct {
	rl.Layer
	DaMod    DaModParams   `view:"inline" desc:"dopamine modulation parameters"`
	BLA      BLAParams     `view:"inline" desc:"special BLA parameters"`
	USLayers emer.LayNames `` /* 154-byte string literal not displayed */
	ACh      float32       `` /* 198-byte string literal not displayed */
	USActive bool          `inactive:"+" desc:"marks presence of US as a function of activity over USLayers -- affects learning rate."`
}
BLALayer represents a basolateral amygdala layer
func (*BLALayer) USActiveFmUS ¶ added in v1.5.12
USActiveFmUS updates the USActive flag based on USLayers state
func (*BLALayer) UnitVarIdx ¶ added in v1.5.12
func (*BLALayer) UnitVarNum ¶ added in v1.5.12
type BLAParams ¶ added in v1.5.12
type BLAParams struct {
	NoDALrate float32 `desc:"baseline learning rate without any dopamine"`
	NoUSLrate float32 `desc:"learning rate outside of US active time window (i.e. for CSs)"`
	NegLrate  float32 `` /* 143-byte string literal not displayed */
}
BLAParams has parameters for basolateral amygdala
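A hedged sketch of how these three rates could combine into an effective learning-rate multiplier; blaLrate is a hypothetical helper for illustration, not a method in this package:

	// blaLrate shows how the three BLAParams rates could gate an
	// effective learning-rate multiplier: NoDALrate when dopamine is
	// absent, NoUSLrate outside the US-active window, and NegLrate
	// scaling negative weight changes.
	func blaLrate(bp *BLAParams, da float32, usActive bool, dwt float32) float32 {
		lr := float32(1)
		if da == 0 {
			lr *= bp.NoDALrate // baseline learning without any dopamine
		}
		if !usActive {
			lr *= bp.NoUSLrate // CS-period learning outside the US window
		}
		if dwt < 0 {
			lr *= bp.NegLrate // down-scale negative delta-weight changes
		}
		return lr
	}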
type BLAPrjn ¶ added in v1.5.2
BLAPrjn implements the PVLV BLA learning rule: dW = ACh * X_t-1 * (Y_t - Y_t-1). The recv delta is across trials, where the US should activate on the trial boundary, to enable sufficient time for gating through to OFC, so BLA initially learns based on US present - US absent. It can also learn based on CS onset if there is a prior CS that predicts it.
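The core of the documented rule is small enough to state directly; this sketch uses illustrative names rather than the projection's actual variables:

	// blaDWt states the documented BLA delta rule:
	// dW = ACh * X_{t-1} * (Y_t - Y_{t-1}), where X is sender activity
	// from the prior trial and Y is receiver activity.
	func blaDWt(ach, sendPrv, recv, recvPrv float32) float32 {
		return ach * sendPrv * (recv - recvPrv)
	}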
type DARs ¶ added in v1.5.2
type DARs int
Dopamine receptor type, for D1R and D2R dopamine receptors
func (*DARs) FromString ¶ added in v1.5.2
type DaModParams ¶
type DaModParams struct {
	On        bool    `desc:"whether to use dopamine modulation"`
	DAR       DARs    `desc:"dopamine receptor type, D1 or D2"`
	BurstGain float32 `` /* 173-byte string literal not displayed */
	DipGain   float32 `` /* 233-byte string literal not displayed */
}
DaModParams specifies parameters shared by all layers that receive dopaminergic modulatory input.
func (*DaModParams) Defaults ¶ added in v1.5.2
func (dp *DaModParams) Defaults()
func (*DaModParams) Gain ¶ added in v1.5.2
func (dp *DaModParams) Gain(da float32) float32
Gain returns effective DA gain factor given raw da +/- burst / dip value
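One plausible reading of the documented semantics, sketched as a free function: bursts scaled by BurstGain, dips by DipGain, with the assumption (not stated in the doc) that D2 receptors invert the sign of the modulation:

	// daGain illustrates DaModParams.Gain's documented behavior;
	// it is not the package's actual implementation.
	func daGain(dp *DaModParams, da float32) float32 {
		if !dp.On {
			return 0
		}
		if da > 0 {
			da *= dp.BurstGain // positive da = burst
		} else {
			da *= dp.DipGain // negative da = dip
		}
		if dp.DAR == D2R { // assumed sign flip for D2 receptors
			da = -da
		}
		return da
	}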
type LayerType ¶ added in v1.5.12
LayerType has the extensions to the emer.LayerType types, for gui
const (
	BLA_ LayerType = LayerType(rl.LayerTypeN) + iota
	CeM_
	PPTg_
	LayerTypeN
)
gui versions
type PPTgLayer ¶
type PPTgLayer struct {
	rl.Layer
	PPTgNeurs []PPTgNeuron
}
PPTgLayer represents a pedunculopontine tegmental gyrus layer. It subtracts the prior trial's excitatory conductance to compute the temporal derivative over time, with positive rectification, and also sets Act to the exact difference.
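A sketch of the documented computation, with illustrative names rather than the actual layer fields:

	// pptgDeriv sketches the PPTg temporal derivative: the drive is
	// the positively rectified trial-to-trial difference in excitatory
	// conductance, while Act records the exact (signed) difference.
	func pptgDeriv(ge, gePrv float32) (drive, act float32) {
		act = ge - gePrv // exact difference, stored as Act
		drive = act
		if drive < 0 {
			drive = 0 // positive rectification of the conductance drive
		}
		return drive, act
	}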