package pcore

PCore: Pallidal Core Basal Ganglia Model

Documentation

Constants

This section is empty.

Variables

var (
	// NeuronVars are extra neuron variables for pcore
	NeuronVars = []string{"DA", "DALrn", "ACh", "Ca", "KCa"}

	// NeuronVarsAll is the pcore collection of all neuron-level vars
	NeuronVarsAll []string

	// SynVarsAll is the pcore collection of all synapse-level vars (includes TraceSynVars)
	SynVarsAll []string
)
var (
	STNNeuronVars    = []string{"Ca", "KCa"}
	STNNeuronVarsMap map[string]int
)
var KiT_CINLayer = kit.Types.AddType(&CINLayer{}, leabra.LayerProps)
var KiT_DaReceptors = kit.Enums.AddEnum(DaReceptorsN, kit.NotBitFlag, nil)
var KiT_GPLayer = kit.Types.AddType(&GPLayer{}, leabra.LayerProps)
var KiT_GPLays = kit.Enums.AddEnum(GPLaysN, kit.NotBitFlag, nil)
var KiT_GPiLayer = kit.Types.AddType(&GPiLayer{}, leabra.LayerProps)
var KiT_Layer = kit.Types.AddType(&Layer{}, leabra.LayerProps)
var KiT_MatrixLayer = kit.Types.AddType(&MatrixLayer{}, leabra.LayerProps)
var KiT_MatrixPrjn = kit.Types.AddType(&MatrixPrjn{}, leabra.PrjnProps)
var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
var KiT_STNLayer = kit.Types.AddType(&STNLayer{}, leabra.LayerProps)
var KiT_VThalLayer = kit.Types.AddType(&VThalLayer{}, leabra.LayerProps)
var NetworkProps = leabra.NetworkProps
var TraceSynVars = []string{"NTr", "Tr"}

Functions

func AddBG

func AddBG(nt *leabra.Network, prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32) (mtxGo, mtxNo, cin, gpeOut, gpeIn, gpeTA, stnp, stns, gpi, vthal leabra.LeabraLayer)

AddBG adds MtxGo, MtxNo, CIN, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi, and VThal layers, with the given optional prefix. Assumes that a 4D structure will be used, with Pools representing separable gating domains. Only Matrix has more than 1 unit per Pool by default. Appropriate PoolOneToOne connections are made between layers, using standard styles. space is the spacing between layers (2 is typical).
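For example, a minimal construction sketch. This assumes the import path github.com/emer/leabra/pcore for this package, plus standard emergent / leabra APIs (InitName, AddLayer4D, Defaults, Build, InitWts); the PFC layer and all sizes here are hypothetical:

package main

import (
	"github.com/emer/emergent/emer"
	"github.com/emer/emergent/prjn"
	"github.com/emer/leabra/leabra"
	"github.com/emer/leabra/pcore" // assumed import path for this package
)

func main() {
	nt := &leabra.Network{}
	nt.InitName(nt, "BG")

	// 1x3 gating pools (stripes); Matrix layers get 4x4 units per pool,
	// other BG layers default to 1 unit per pool; 2 is the layer spacing.
	mtxGo, mtxNo, _, _, _, _, _, _, _, _ := pcore.AddBG(nt, "", 1, 3, 4, 4, 2)

	// Hypothetical cortical layer, driving Matrix via trace-learning prjns.
	pfc := nt.AddLayer4D("PFC", 1, 3, 4, 4, emer.Hidden)
	pcore.ConnectToMatrix(nt, pfc, mtxGo, prjn.NewPoolOneToOne())
	pcore.ConnectToMatrix(nt, pfc, mtxNo, prjn.NewPoolOneToOne())

	nt.Defaults()
	if err := nt.Build(); err != nil {
		panic(err)
	}
	nt.InitWts()
}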

func AddBGPy added in v1.1.15

func AddBGPy(nt *leabra.Network, prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32) []leabra.LeabraLayer

AddBGPy adds MtxGo, MtxNo, CIN, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi, and VThal layers, with the given optional prefix. Assumes that a 4D structure will be used, with Pools representing separable gating domains. Only Matrix has more than 1 unit per Pool by default. Appropriate PoolOneToOne connections are made between layers, using standard styles. space is the spacing between layers (2 is typical). Py is the Python version, which returns the layers as a slice.

func ConnectToMatrix

func ConnectToMatrix(nt *leabra.Network, send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectToMatrix adds a MatrixPrjn from the given sending layer to a matrix layer

func STNNeuronVarIdxByName added in v1.1.4

func STNNeuronVarIdxByName(varNm string) (int, error)

STNNeuronVarIdxByName returns the index of the variable in the STNNeuron, or an error if not found

Types

type CINLayer

type CINLayer struct {
	leabra.Layer

	// threshold on reward values from RewLays, to count as a significant reward event, which then drives maximal ACh -- set to 0 to disable this nonlinear behavior
	RewThr float32 `` /* 164-byte string literal not displayed */

	// Reward-representing layer(s) from which this computes ACh as Max absolute value
	RewLays emer.LayNames `desc:"Reward-representing layer(s) from which this computes ACh as Max absolute value"`

	// list of layers to send acetylcholine to
	SendACh rl.SendACh `desc:"list of layers to send acetylcholine to"`

	// acetylcholine value for this layer
	ACh float32 `desc:"acetylcholine value for this layer"`
}

CINLayer (cholinergic interneuron) reads reward signals from named source layer(s) and sends the max absolute value of that activity as an acetylcholine (ACh) signal -- the positively rectified, non-prediction-discounted reward signal computed by CINs. To handle positive-only reward signals, both a reward prediction layer and a reward outcome layer must be included.
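The core of this computation can be sketched as follows (schematic, not the package source; the threshold behavior follows the RewThr field doc above):

import "math"

// achFmRew sketches the CINLayer ACh computation: the max absolute value
// across reward layer activations, with RewThr > 0 making any significant
// reward event drive maximal ACh.
func achFmRew(rewThr float32, rewActs []float32) float32 {
	var ach float32
	for _, r := range rewActs {
		if a := float32(math.Abs(float64(r))); a > ach {
			ach = a
		}
	}
	if rewThr > 0 && ach > rewThr {
		ach = 1 // significant reward event -> maximal ACh
	}
	return ach
}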

func AddCINLayer

func AddCINLayer(nt *leabra.Network, name string) *CINLayer

AddCINLayer adds a CINLayer, with a single neuron.

func (*CINLayer) ActFmG

func (ly *CINLayer) ActFmG(ltime *leabra.Time)

func (*CINLayer) Build

func (ly *CINLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*CINLayer) CyclePost

func (ly *CINLayer) CyclePost(ltime *leabra.Time)

CyclePost is called at the end of Cycle. We use it to send ACh, which will then be active for the next cycle of processing.

func (*CINLayer) Defaults

func (ly *CINLayer) Defaults()

func (*CINLayer) GetACh

func (ly *CINLayer) GetACh() float32

func (*CINLayer) MaxAbsRew

func (ly *CINLayer) MaxAbsRew() float32

MaxAbsRew returns the maximum absolute value of reward layer activations

func (*CINLayer) SetACh

func (ly *CINLayer) SetACh(ach float32)

func (*CINLayer) UnitVal1D

func (ly *CINLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns the value of the given variable index on the given unit, using a 1-dimensional index. Returns NaN on an invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*CINLayer) UnitVarIdx

func (ly *CINLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*CINLayer) UnitVarNum

func (ly *CINLayer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

type CaParams

type CaParams struct {

	// [def: 0.9] activation threshold for bursting that drives strong influx of Ca to turn on KCa channels -- there is a complex de-inactivation dynamic involving the volley of excitation and inhibition from GPe, but we can just use a threshold
	BurstThr float32 `` /* 244-byte string literal not displayed */

	// [def: 0.7] activation threshold for increment in activation above baseline that drives lower influx of Ca
	ActThr float32 `def:"0.7" desc:"activation threshold for increment in activation above baseline that drives lower influx of Ca"`

	// [def: 1] Ca level for burst level activation
	BurstCa float32 `def:"1" desc:"Ca level for burst level activation"`

	// [def: 0.2] Ca increment from regular sub-burst activation -- drives slower inhibition of firing over time -- for stop-type STN dynamics that initially put hold on GPi and then decay
	ActCa float32 `` /* 187-byte string literal not displayed */

	// [def: 10] maximal KCa conductance (actual conductance is applied to KNa channels)
	GbarKCa float32 `def:"10" desc:"maximal KCa conductance (actual conductance is applied to KNa channels)"`

	// [def: 20] KCa conductance time constant -- 40 from Gillies & Willshaw, 2006, but sped up here to fit in AlphaCyc
	KCaTau float32 `def:"20" desc:"KCa conductance time constant -- 40 from Gillies & Willshaw, 2006, but sped up here to fit in AlphaCyc"`

	// [def: 50] Ca time constant of decay to baseline -- 185.7 from Gillies & Willshaw, 2006, but sped up here to fit in AlphaCyc
	CaTau float32 `` /* 129-byte string literal not displayed */

	// initialize Ca, KCa values at start of every AlphaCycle
	AlphaInit bool `desc:"initialize Ca, KCa values at start of every AlphaCycle"`
}

CaParams control the calcium dynamics in STN neurons. Gillies & Willshaw, 2006 provide a biophysically detailed simulation, and we use their logistic function for computing KCa conductance based on Ca, but we use a simpler approximation with burst and act thresholds. KCa are calcium-gated potassium channels that drive the long afterhyperpolarization of STN neurons. Ca and KCa auto-reset at the start of each AlphaCycle when AlphaInit is set. The conductance is applied to KNa channels to take advantage of the existing infrastructure.
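A schematic per-cycle update implied by these parameters (an assumption-laden sketch, not the package's actual integration code; kcaGFmCa stands in for the KCaGFmCa method below):

// stepCaKCa sketches one cycle of STN Ca / KCa dynamics.
func stepCaKCa(kc *pcore.CaParams, act, ca, kca float32,
	kcaGFmCa func(float32) float32) (float32, float32) {
	switch {
	case act >= kc.BurstThr:
		ca += kc.BurstCa // burst: strong Ca influx turns on KCa channels
	case act >= kc.ActThr:
		ca += kc.ActCa // sub-burst activation: slower Ca accumulation
	}
	ca -= ca / kc.CaTau // Ca decays back toward baseline
	kca += (kcaGFmCa(ca) - kca) / kc.KCaTau // KCa tracks driving conductance
	return ca, kca // effective conductance = kc.GbarKCa * kca, applied to KNa
}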

func (*CaParams) Defaults

func (kc *CaParams) Defaults()

func (*CaParams) KCaGFmCa

func (kc *CaParams) KCaGFmCa(ca float32) float32

KCaGFmCa returns the driving conductance for KCa channels based on given Ca level. This equation comes from Gillies & Willshaw, 2006.

type DaModParams

type DaModParams struct {

	// whether to use dopamine modulation
	On bool `desc:"whether to use dopamine modulation"`

	// [viewif: On] modulate gain instead of Ge excitatory synaptic input
	ModGain bool `viewif:"On" desc:"modulate gain instead of Ge excitatory synaptic input"`

	// [viewif: On] how much to multiply Da in the minus phase to add to Ge input -- use negative values for NoGo/indirect pathway/D2 type neurons
	Minus float32 `` /* 145-byte string literal not displayed */

	// [viewif: On] how much to multiply Da in the plus phase to add to Ge input -- use negative values for NoGo/indirect pathway/D2 type neurons
	Plus float32 `` /* 144-byte string literal not displayed */

	// [viewif: On&&ModGain] for negative dopamine, how much to change the default gain value as a function of dopamine: gain = gain * (1 + da * NegGain) -- da is multiplied by minus or plus depending on phase
	NegGain float32 `` /* 208-byte string literal not displayed */

	// [viewif: On&&ModGain] for positive dopamine, how much to change the default gain value as a function of dopamine: gain = gain * (1 + da * PosGain) -- da is multiplied by minus or plus depending on phase
	PosGain float32 `` /* 208-byte string literal not displayed */
}

DaModParams are parameters for the effects of dopamine (Da)-based modulation, typically adding a Da-based term to the Ge excitatory synaptic input. Plus-phase = learning effects, relative to minus-phase "performance" dopamine effects.
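The arithmetic implied by the field docs above, as a sketch (not the package source):

// geMod adds the phase-specific Da term to Ge (Minus / Plus can be negative
// for NoGo / indirect pathway / D2 type neurons).
func geMod(dm *pcore.DaModParams, da, ge float32, plusPhase bool) float32 {
	if plusPhase {
		return ge + da*dm.Plus
	}
	return ge + da*dm.Minus
}

// gainMod scales gain by phase-weighted dopamine: gain * (1 + da * NegGain)
// for negative da, and gain * (1 + da * PosGain) for positive da.
func gainMod(dm *pcore.DaModParams, da, gain float32, plusPhase bool) float32 {
	if plusPhase {
		da *= dm.Plus
	} else {
		da *= dm.Minus
	}
	if da < 0 {
		return gain * (1 + da*dm.NegGain)
	}
	return gain * (1 + da*dm.PosGain)
}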

func (*DaModParams) Defaults

func (dm *DaModParams) Defaults()

func (*DaModParams) Gain

func (dm *DaModParams) Gain(da, gain float32, plusPhase bool) float32

Gain returns da-modulated gain value

func (*DaModParams) GainModOn

func (dm *DaModParams) GainModOn() bool

GainModOn returns true if modulating Gain

func (*DaModParams) Ge

func (dm *DaModParams) Ge(da, ge float32, plusPhase bool) float32

Ge returns da-modulated ge value

func (*DaModParams) GeModOn

func (dm *DaModParams) GeModOn() bool

GeModOn returns true if modulating Ge

type DaReceptors

type DaReceptors int

DaReceptors for D1R and D2R dopamine receptors

const (
	// D1R primarily expresses Dopamine D1 Receptors -- dopamine is excitatory and bursts of dopamine lead to increases in synaptic weight, while dips lead to decreases -- direct pathway in dorsal striatum
	D1R DaReceptors = iota

	// D2R primarily expresses Dopamine D2 Receptors -- dopamine is inhibitory and bursts of dopamine lead to decreases in synaptic weight, while dips lead to increases -- indirect pathway in dorsal striatum
	D2R

	DaReceptorsN
)

func (*DaReceptors) FromString

func (i *DaReceptors) FromString(s string) error

func (DaReceptors) MarshalJSON

func (ev DaReceptors) MarshalJSON() ([]byte, error)

func (DaReceptors) String

func (i DaReceptors) String() string

func (*DaReceptors) UnmarshalJSON

func (ev *DaReceptors) UnmarshalJSON(b []byte) error

type GPLayer

type GPLayer struct {
	Layer

	// type of GP layer
	GPLay GPLays `desc:"type of GP layer"`
}

GPLayer represents a globus pallidus layer, including: GPeOut, GPeIn, GPeTA (arkypallidal), and GPi (see GPLay for type). Typically just a single unit per Pool representing a given stripe.

func AddGPeLayer

func AddGPeLayer(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *GPLayer

AddGPeLayer adds a GPLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. Typically nNeurY, nNeurX will both be 1, but could have more for noise etc.

func (*GPLayer) Defaults

func (ly *GPLayer) Defaults()

type GPLays

type GPLays int

GPLays for GPLayer type

const (
	// GPeOut is Outer layer of GPe neurons, receiving inhibition from MtxGo
	GPeOut GPLays = iota

	// GPeIn is Inner layer of GPe neurons, receiving inhibition from GPeOut and MtxNo
	GPeIn

	// GPeTA is arkypallidal layer of GPe neurons, receiving inhibition from GPeIn
	// and projecting inhibition to Mtx
	GPeTA

	// GPi is the inner globus pallidus, functionally equivalent to SNr,
	// receiving from MtxGo and GPeIn, and sending inhibition to VThal
	GPi

	GPLaysN
)
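Written out with the generic leabra connection API, the inhibitory chain these comments describe looks roughly like this (a sketch only: AddBG makes these connections for you, and the pattern and projection-type choices here are assumptions; layer variables are named as in the AddBG signature):

p1to1 := prjn.NewPoolOneToOne()
nt.ConnectLayers(mtxGo, gpeOut, p1to1, emer.Inhib) // MtxGo inhibits GPeOut
nt.ConnectLayers(gpeOut, gpeIn, p1to1, emer.Inhib) // GPeOut inhibits GPeIn
nt.ConnectLayers(mtxNo, gpeIn, p1to1, emer.Inhib)  // MtxNo inhibits GPeIn
nt.ConnectLayers(gpeIn, gpeTA, p1to1, emer.Inhib)  // GPeIn inhibits GPeTA
nt.ConnectLayers(gpeTA, mtxGo, p1to1, emer.Inhib)  // GPeTA inhibits Mtx
nt.ConnectLayers(mtxGo, gpi, p1to1, emer.Inhib)    // MtxGo inhibits GPi
nt.ConnectLayers(gpeIn, gpi, p1to1, emer.Inhib)    // GPeIn inhibits GPi
nt.ConnectLayers(gpi, vthal, p1to1, emer.Inhib)    // GPi inhibits VThal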

func (*GPLays) FromString

func (i *GPLays) FromString(s string) error

func (GPLays) MarshalJSON

func (ev GPLays) MarshalJSON() ([]byte, error)

func (GPLays) String

func (i GPLays) String() string

func (*GPLays) UnmarshalJSON

func (ev *GPLays) UnmarshalJSON(b []byte) error

type GPiLayer

type GPiLayer struct {
	GPLayer
}

GPiLayer represents the GPi / SNr output nucleus of the BG. It gets inhibited by the MtxGo and GPeIn layers, and its minimum activation during this inhibition is recorded in ActLrn, for learning. Typically just a single unit per Pool representing a given stripe.

func AddGPiLayer

func AddGPiLayer(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *GPiLayer

AddGPiLayer adds a GPiLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. Typically nNeurY, nNeurX will both be 1, but could have more for noise etc.

func (*GPiLayer) Defaults

func (ly *GPiLayer) Defaults()

type Layer

type Layer struct {
	glong.AlphaMaxLayer

	// dopamine value for this layer
	DA float32 `inactive:"+" desc:"dopamine value for this layer"`
}

Layer is the base layer type for PCore framework. Adds a dopamine variable to base Leabra layer type.

func (*Layer) GetDA

func (ly *Layer) GetDA() float32

func (*Layer) InitActs

func (ly *Layer) InitActs()

func (*Layer) SetDA

func (ly *Layer) SetDA(da float32)

func (*Layer) UnitVal1D

func (ly *Layer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns the value of the given variable index on the given unit, using a 1-dimensional index. Returns NaN on an invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*Layer) UnitVarIdx

func (ly *Layer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*Layer) UnitVarNum

func (ly *Layer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

type MatrixLayer

type MatrixLayer struct {
	Layer

	// dominant type of dopamine receptor -- D1R for Go pathway, D2R for NoGo
	DaR DaReceptors `desc:"dominant type of dopamine receptor -- D1R for Go pathway, D2R for NoGo"`

	// [view: inline] matrix parameters
	Matrix MatrixParams `view:"inline" desc:"matrix parameters"`

	// effective learning dopamine value for this layer: reflects DaR and Gains
	DALrn float32 `inactive:"+" desc:"effective learning dopamine value for this layer: reflects DaR and Gains"`

	// acetylcholine value from CIN cholinergic interneurons reflecting the absolute value of reward or CS predictions thereof -- used for resetting the trace of matrix learning
	ACh float32 `` /* 190-byte string literal not displayed */
}

MatrixLayer represents the dorsal matrisome MSNs that are the main Go / NoGo gating units in the BG. D1R = Go, D2R = NoGo.

func AddMatrixLayer

func AddMatrixLayer(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, da DaReceptors) *MatrixLayer

AddMatrixLayer adds a MatrixLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. da gives the DaReceptor type (D1R = Go, D2R = NoGo)

func (*MatrixLayer) ActFmG

func (ly *MatrixLayer) ActFmG(ltime *leabra.Time)

ActFmG computes rate-code activation from Ge, Gi, Gl conductances and updates learning running-average activations from that Act. Matrix extends this to call DAActLrn and update AlphaMax -> ActLrn.

func (*MatrixLayer) DAActLrn

func (ly *MatrixLayer) DAActLrn(ltime *leabra.Time)

DAActLrn sets effective learning dopamine value from given raw DA value, applying Burst and Dip Gain factors, and then reversing sign for D2R. Also sets ActLrn based on whether corresponding VThal stripe fired above ThalThr -- flips sign of learning for stripe firing vs. not.
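The DALrn portion of this computation can be sketched as follows (schematic, not the package source; the gain parameters come from MatrixParams below, and the ActLrn sign flip based on VThal gating is omitted):

// daLrnFmDA sketches the effective learning DA: gain factors by sign of
// the raw DA, then sign reversal for D2R.
func daLrnFmDA(da float32, daR pcore.DaReceptors, mp *pcore.MatrixParams) float32 {
	if da > 0 {
		da *= mp.BurstGain
	} else {
		da *= mp.DipGain
	}
	if daR == pcore.D2R {
		da = -da // D2R: reversal occurs *after* the gain factors
	}
	return da
}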

func (*MatrixLayer) Defaults

func (ly *MatrixLayer) Defaults()

func (*MatrixLayer) GetACh

func (ly *MatrixLayer) GetACh() float32

func (*MatrixLayer) InitActs

func (ly *MatrixLayer) InitActs()

func (*MatrixLayer) SetACh

func (ly *MatrixLayer) SetACh(ach float32)

func (*MatrixLayer) ThalLayer

func (ly *MatrixLayer) ThalLayer() (*VThalLayer, error)

func (*MatrixLayer) UnitVal1D

func (ly *MatrixLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns the value of the given variable index on the given unit, using a 1-dimensional index. Returns NaN on an invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*MatrixLayer) UnitVarIdx

func (ly *MatrixLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

type MatrixParams

type MatrixParams struct {

	// name of VThal layer -- needed to get overall gating output action
	ThalLay string `desc:"name of VThal layer -- needed to get overall gating output action"`

	// [def: 0.25] threshold for thal max activation (in pool) to be gated -- typically .25 or so to accurately reflect PFC output gating -- may need to adjust based on actual behavior
	ThalThr float32 `` /* 183-byte string literal not displayed */

	// [def: true] use the sigmoid derivative factor 2 * Act * (1-Act) for matrix (recv) activity in modulating learning -- otherwise just multiply by activation directly -- this is generally beneficial for learning to prevent weights from continuing to increase when activations are already strong (and vice-versa for decreases)
	Deriv bool `` /* 328-byte string literal not displayed */

	// [def: 1] multiplicative gain factor applied to positive (burst) dopamine signals in computing the DALrn effective learning dopamine value from the raw DA that we receive (D2R reversal occurs *after* applying Burst based on sign of raw DA)
	BurstGain float32 `` /* 237-byte string literal not displayed */

	// [def: 1] multiplicative gain factor applied to negative (dip) dopamine signals in computing the DALrn effective learning dopamine value from the raw DA that we receive (D2R reversal occurs *after* applying Dip based on sign of raw DA)
	DipGain float32 `` /* 237-byte string literal not displayed */
}

MatrixParams has parameters for Dorsal Striatum Matrix computation. These are the main Go / NoGo gating units in the BG, driving updating of PFC working memory in PBWM.

func (*MatrixParams) Defaults

func (mp *MatrixParams) Defaults()

func (*MatrixParams) LrnFactor

func (mp *MatrixParams) LrnFactor(act float32) float32

LrnFactor returns the multiplicative factor for the level of MSN activation. If Deriv is true, the factor is 2 * act * (1-act) -- the factor of 2 compensates for the otherwise reduced learning from these factors. Otherwise the factor is just act.
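Equivalently, as a direct transcription of that rule:

func lrnFactor(deriv bool, act float32) float32 {
	if deriv {
		return 2 * act * (1 - act) // peaks at act = 0.5; zero at 0 and 1
	}
	return act
}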

type MatrixPrjn

type MatrixPrjn struct {
	leabra.Prjn

	// [view: inline] special parameters for matrix trace learning
	Trace MatrixTraceParams `view:"inline" desc:"special parameters for matrix trace learning"`

	// trace synaptic state values, ordered by the sending layer units which owns them -- one-to-one with SConIdx array
	TrSyns []TraceSyn `desc:"trace synaptic state values, ordered by the sending layer units which owns them -- one-to-one with SConIdx array"`
}

MatrixPrjn does dopamine-modulated, gated trace learning, for Matrix learning in the PBWM context

func (*MatrixPrjn) Build

func (pj *MatrixPrjn) Build() error

func (*MatrixPrjn) ClearTrace

func (pj *MatrixPrjn) ClearTrace()

func (*MatrixPrjn) DWt

func (pj *MatrixPrjn) DWt()

DWt computes the weight change (learning) -- on sending projections.

func (*MatrixPrjn) Defaults

func (pj *MatrixPrjn) Defaults()

func (*MatrixPrjn) InitWts

func (pj *MatrixPrjn) InitWts()

func (*MatrixPrjn) SynVal1D

func (pj *MatrixPrjn) SynVal1D(varIdx int, synIdx int) float32

SynVal1D returns the value of the given variable index (from SynVarIdx) on the given SynIdx. Returns NaN on an invalid index. This is the core synapse var access method used by other methods, so it is the only one that needs to be updated for derived projection types.

func (*MatrixPrjn) SynVarIdx

func (pj *MatrixPrjn) SynVarIdx(varNm string) (int, error)

SynVarIdx returns the index of given variable within the synapse, according to *this prjn's* SynVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*MatrixPrjn) SynVarNum

func (pj *MatrixPrjn) SynVarNum() int

SynVarNum returns the number of synapse-level variables for this prjn. This is needed for extending indexes in derived types.

type MatrixTraceParams

type MatrixTraceParams struct {

	// [def: true] if true, current trial DA dopamine can drive learning (i.e., synaptic co-activity trace is updated prior to DA-driven dWt), otherwise DA is applied to existing trace before trace is updated, meaning that at least one trial must separate gating activity and DA
	CurTrlDA bool `` /* 277-byte string literal not displayed */

	// [def: 2] [min: 0] multiplier on CIN ACh level for decaying prior traces -- decay never exceeds 1. Larger values drive strong credit assignment for any US outcome.
	Decay float32 `` /* 168-byte string literal not displayed */
}

MatrixTraceParams are parameters for trace-based learning in the MatrixPrjn. A trace of synaptic co-activity is formed, and then modulated by dopamine whenever it occurs. This bridges the temporal gap between gating activity and subsequent activity, and is based biologically on synaptic tags. The trace is reset at the time of reward, based on the ACh level from CINs.
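A per-synapse sketch of this trace rule, combining MatrixTraceParams with the TraceSyn variables defined below (schematic; the actual DWt code runs on the sending projection and handles more details, so ordering and scaling here are assumptions):

func traceDWt(tp *pcore.MatrixTraceParams, lrate, da, ach float32,
	ntr float32, sy *pcore.TraceSyn) float32 {
	sy.NTr = ntr // new trace = send * recv co-activity (sn.ActLrn * rn.ActLrn)
	if tp.CurTrlDA {
		sy.Tr += sy.NTr // current-trial co-activity can learn from current DA
	}
	dwt := lrate * da * sy.Tr // dopamine modulates the accumulated trace
	decay := ach * tp.Decay   // ACh at reward decays / resets the prior trace
	if decay > 1 {
		decay = 1 // decay never exceeds 1
	}
	sy.Tr -= decay * sy.Tr
	if !tp.CurTrlDA {
		sy.Tr += sy.NTr // otherwise the trace updates after DA is applied
	}
	return dwt
}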

func (*MatrixTraceParams) Defaults

func (tp *MatrixTraceParams) Defaults()

type Network

type Network struct {
	leabra.Network
}

pcore.Network has methods for configuring specialized PCore network components. PCore = Pallidal Core model of the BG.

func (*Network) AddBG

func (nt *Network) AddBG(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32) (mtxGo, mtxNo, cin, gpeOut, gpeIn, gpeTA, stnp, stns, gpi, vthal leabra.LeabraLayer)

AddBG adds MtxGo, MtxNo, CIN, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi, and VThal layers, with the given optional prefix. Assumes that a 4D structure will be used, with Pools representing separable gating domains. Only Matrix has more than 1 unit per Pool by default. Appropriate PoolOneToOne connections are made between layers, using standard styles. space is the spacing between layers (2 is typical).

func (*Network) ConnectToMatrix

func (nt *Network) ConnectToMatrix(send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectToMatrix adds a MatrixPrjn from the given sending layer to a matrix layer

func (*Network) SynVarNames

func (nt *Network) SynVarNames() []string

SynVarNames returns the names of all the variables on the synapses in this network.

func (*Network) UnitVarNames

func (nt *Network) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this network

type STNLayer

type STNLayer struct {
	Layer

	// [view: inline] parameters for calcium and calcium-gated potassium channels that drive the afterhyperpolarization that open the gating window in STN neurons (Hallworth et al., 2003)
	Ca CaParams `` /* 186-byte string literal not displayed */

	// slice of extra STNNeuron state for this layer -- flat list of len = Shape.Len(). You must iterate over index and use pointer to modify values.
	STNNeurs []STNNeuron `` /* 149-byte string literal not displayed */
}

STNLayer represents STN neurons, with two subtypes: STNp are more strongly driven and get over the bursting threshold, driving strong, rapid activation of the KCa channels and causing a long pause in firing, which creates a window during which GPe dynamics resolve the Go vs. No balance. STNs are more weakly driven and thus more slowly activate KCa, resulting in a longer period of activation, during which the GPi is inhibited to prevent premature gating based only on MtxGo inhibition -- gating only occurs when the GPeIn signal has had a chance to integrate its MtxNo inputs.

func AddSTNLayer

func AddSTNLayer(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *STNLayer

AddSTNLayer adds a subthalamic nucleus Layer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. Typically nNeurY, nNeurX will both be 1, but could have more for noise etc.

func (*STNLayer) ActFmG

func (ly *STNLayer) ActFmG(ltime *leabra.Time)

func (*STNLayer) AlphaCycInit

func (ly *STNLayer) AlphaCycInit(updtActAvg bool)

func (*STNLayer) Build

func (ly *STNLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*STNLayer) Defaults

func (ly *STNLayer) Defaults()

func (*STNLayer) GetDA

func (ly *STNLayer) GetDA() float32

func (*STNLayer) InitActs

func (ly *STNLayer) InitActs()

func (*STNLayer) SetDA

func (ly *STNLayer) SetDA(da float32)

func (*STNLayer) UnitVal1D

func (ly *STNLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns the value of the given variable index on the given unit, using a 1-dimensional index. Returns NaN on an invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*STNLayer) UnitVarIdx

func (ly *STNLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*STNLayer) UnitVarNum

func (ly *STNLayer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

type STNNeuron

type STNNeuron struct {

	// intracellular Calcium concentration -- increased by bursting and elevated levels of activation, drives KCa currents that result in hyperpolarization / inhibition.
	Ca float32 `` /* 169-byte string literal not displayed */

	// Calcium-gated potassium channel conductance level, computed using the function from Gillies & Willshaw, 2006 as a function of Ca.
	KCa float32 `` /* 129-byte string literal not displayed */
}

STNNeuron holds the extra neuron (unit) level variables for STN computation.

func (*STNNeuron) VarByIndex

func (nrn *STNNeuron) VarByIndex(idx int) float32

VarByIndex returns variable using index (0 = first variable in STNNeuronVars list)

func (*STNNeuron) VarByName

func (nrn *STNNeuron) VarByName(varNm string) (float32, error)

VarByName returns variable by name, or error

func (*STNNeuron) VarNames

func (nrn *STNNeuron) VarNames() []string

type TraceSyn

type TraceSyn struct {

	// new trace = send * recv -- drives updates to trace value: sn.ActLrn * rn.ActLrn (subject to derivative too)
	NTr float32 `desc:"new trace = send * recv -- drives updates to trace value: sn.ActLrn * rn.ActLrn (subject to derivative too)"`

	//  current ongoing trace of activations, which drive learning -- adds ntr and clears after ACh-modulated learning on current values
	Tr float32 `` /* 136-byte string literal not displayed */
}

TraceSyn holds extra synaptic state for trace projections

func (*TraceSyn) VarByIndex

func (sy *TraceSyn) VarByIndex(varIdx int) float32

VarByIndex returns synapse variable by index

func (*TraceSyn) VarByName

func (sy *TraceSyn) VarByName(varNm string) float32

VarByName returns synapse variable by name

type VThalLayer

type VThalLayer struct {
	Layer
}

VThalLayer represents the ventral thalamus: VA / VM / VL, which receives BG gating in the form of an inhibitory projection from GPi.

func AddVThalLayer

func AddVThalLayer(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *VThalLayer

AddVThalLayer adds a ventral thalamus (VA/VL/VM) Layer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. Typically nNeurY, nNeurX will both be 1, but could have more for noise etc.

func (*VThalLayer) Defaults

func (ly *VThalLayer) Defaults()
