pcore

package v1.5.4

Published: Oct 5, 2022 License: BSD-3-Clause Imports: 15 Imported by: 0

README

PCore BG Model: Pallidal Core model of BG

The core of this model is the Globus Pallidus (external segment), GPe, which plays a central role in integrating Go and NoGo signals from the striatum, in contrast to the standard "classical" framework, which focuses on the GPi or SNr as the primary locus of integration.

There are two recently identified revisions to the standard circuitry diagram that drive this model (Suryanarayana et al., 2019; among others):

  • A distinction between outer (GPeOut) and inner (GPeIn) layers of the GPe, with both D1-dominant (Go) and D2-dominant (NoGo / No) projections into the GPe (the classical "direct" vs. "indirect" terminology thus not being quite as applicable).

  • A third, even more distinct, arkypallidal (GPeTA) layer of the GPe.

Thus, the GPe is clearly capable of significant additional computation within this more elaborate circuitry, and it has also long been recognized as strongly interconnected with the subthalamic nucleus (STN), providing a critical additional dynamic. This model provides a clear computational function for the GPe / STN complex within the larger BG system, with the following properties:

  • GPeIn is the core of the core: it integrates Go and No striatal signals in consistent terms, with direct No inhibitory inputs, and indirect Go net excitatory inputs, inverted by way of double-inhibition through the GPeOut. By having both of these signals converging on single neurons, the GPeIn can directly weigh the balance for and against a potential action. Thus, in many ways, GPeIn is like the GPi / SNr of the classical model, except with the sign reversed (i.e., it is more active for a more net-Go balance).

  • The GPeIn then projects inhibition to the GPeTA, which in turn drives the strongest source of broad, diffuse inhibition to the striatum: this provides the long-sought winner-take-all (WTA) action selection dynamic in the BG circuitry, by broadcasting back an effective inhibitory threshold that only the most strongly-activated striatal neurons can withstand. In addition, the GPeIn sends a weaker inhibitory projection directly into the striatum, and having both of these is essential to prevent strong oscillatory dynamics.

  • The GPi integrates the direct Go inhibition from the striatum, and the integrated Go vs. No balance from the GPeIn, which have the same sign and contribute synergistically to the GPi's release of inhibition on the thalamus, as in the standard model. In our model, the integrated GPeIn input is stronger than the one from the striatum Go pathway.

  • The STN in our model plays two distinct functional roles, across two subtypes of neurons, defined in terms of differential connectivity patterns, which are known to have the relevant diversity of connectivity (cites):

    • The STNp (pausing) neurons receive strong excitatory connections from frontal cortex, and project to the GPe (In and Out), while also receiving GPe inhibition. The frontal input triggers a rapid inhibitory oscillation through the GPe circuits, which return strong inhibition in response to the STN's strong excitation. This dynamic then produces a sustained inhibition of STNp firing, due to calcium-gated potassium channels (KCa; cites), which has been well documented in a range of preparations including, critically, awake-behaving animals (Fujimoto & Kita, 1993; Magill et al., 2004). The net effect of this burst / pause is to excite the GPe and then open up a window of time when it is free from driving STN inputs and can integrate the balance of Go vs. No pathways. We see in our model that this produces a graded settling process over time within the pallidal core, such that the overall gating output reaction time (RT) reflects the degree of Go vs. No conflict: high-conflict cases are significantly slower than unopposed Go cases, while the overall strength of Go also modulates RT (stronger Go = faster RT).

    • The STNs (stopping) neurons receive weaker excitatory inputs from frontal cortex, and project more to the GPi / SNr relative to the GPe. These neurons do not achieve the intense bursting required to trigger strong levels of KCa channel opening, and instead experience a more gradual activation of KCa due to Ca influx over a more prolonged wave of activation. Functionally, they are critical for stopping premature expression of the BG output through the GPi, before the core GPe integration has had a chance to unfold. In this way, the STNs population functions like the more traditional view of the STN overall, as a kind of global NoGo signal that prevents BG firing from being too "impulsive" (cites). However, in this model, it is the GPe Go vs. No balancing that provides the more "considered", slower calculation, whereas the frontal cortex is assumed to play this role in the standard account.

  • Both STN pathways recover from the KCa inhibition while inactive, and their resumed activity excites the GP areas, terminating the window during which the BG can functionally gate. Furthermore, it is important for the STN neurons to experience a period with no driving frontal inputs, to reset the KCa channels so that they can again support the STNp burst / pausing dynamics. This has the desirable functional consequence of preventing any sustained frontal patterns from driving repeated BG gating of the same pathways again and again, providing a beneficial bias that keeps the flow of action and cognition in a constant state of flux.

In summary, the GPe is the core integrator of the BG circuit, while the STN orchestrates the timing, opening the window for integration and preventing premature output. The striatum is thus free to play its traditional role of learning to identify critical features supporting Go vs. No action weights, under the influence of phasic dopamine signals. Because it is by far the largest BG structure, it is not well suited for WTA competition among alternative possible courses of action, or for integration between Go and No, which instead are much better supported by the compact and well-connected GPe core.

Dynamics

[Figure: key stages of network activity unfolding over cycles within one trial]

The above figure shows the key stages unfolding over cycles within a standard Alpha cycle of 100 cycles. Some of the time constants have been sped up to ensure everything occurs within this timeframe -- it may take longer in the real system.

  • Cycle 1: tonic activation of GP and STN, just at the onset of cortical inputs. GPeTA is largely inhibited by GPeIn, consistent with neural recordings.

  • Cycle 7: STNp peaks above the .9 threshold at which KCa triggers pausing, while STNs rises more slowly, driving a slower accumulation of KCa. The STN has activated the GPe layers.

  • Cycle 23: Offset of STNp enables the GPe layers to settle into integration of the MtxGo and MtxNo activations. GPeTA sends broad inhibition to the Matrix, such that only the strongest neurons are able to stay active.

  • Cycle 51: STNs succumbs to the slow accumulation of KCa, allowing full inhibition of GPi output from the remaining MtxGo and GPeIn activity, disinhibiting the ventral thalamus (VThal), which then has an excitatory loop through PFCo output layers.

  • Cycle 91: STN layers regain activation, resetting the circuit. If PFC inputs do not turn off, the system will not re-gate, because the KCa channels are not fully reset.

[Figure: reaction time (cycles) to gate, across a sweep of ACC Go (positive) and NoGo (negative) input strengths]

The above figure shows the reaction time (cycles) to activate the thalamus above a firing threshold of .5, for a full sweep of ACC positive and negative values, which preferentially activate Go and No respectively. The positive values are incremented by .1 in an outer loop, while the negative values are incremented by .1 within each level of positive. Thus, you can see that as NoGo gets stronger, it competes against Go, causing an increase in reaction time, followed by a failure to gate at all.

Electrophysiological data

Recent data: the majority of STN units show a decreasing ramp prior to the go cue; a small subset then show a brief phasic burst at Go, then a brief inhibition window, then strong sustained activity during / after movement. This sustained activity will turn off the gating window -- gating in PFCd can presumably do that and provide the final termination of the gating event.


Data from Dodson et al. (2015) and Mirzaei et al. (2017) show a brief increase, then either dips or increases in activity in GPe prototypical neurons, along with very consistent data showing a brief burst then shutoff in TA neurons. Thus, both outer and inner GPe prototypical neuron profiles can be found.


STNp pause mechanisms: SKCa channels

The small-conductance calcium-activated potassium channel (SKCa) is widely distributed throughout the brain, and in general plays a role in the medium-term afterhyperpolarization (mAHP) (Dwivedi & Bhalla, 2021), including most dramatically the full pausing of neural firing as observed in the STNp neurons. The basic mechanism is straightforward: Ca++ influx from VGCC channels, opened via spiking, activates SKCa channels in a briefly delayed manner (activation time constant of 5-15 msec), and the combined trace of Ca and the relatively slow SKCa deactivation results in a window where the increased K+ conductance (leak) can prevent the cell from firing. If the initial spiking is insufficiently intense, the resulting slow accumulation of Ca partially activates SKCa, slowing firing but not fully pausing it. Thus, the STNp implements a critical switch between opening the BG gating window for strong, novel inputs, vs. keeping it closed for weak, familiar ones.

These SKCa channels are not widely modeled, and existing models from FujitaFukaiKitano12 (based on GunayEdgertonJaeger08) and GilliesWillshaw06 have different implementations that diverge from some of the basic literature cited below. Thus, we use a simple model based on the Hill activation curve and separate activation vs. deactivation time constants, as sketched after the citations below.

  • XiaFaklerRivardEtAl98: "Time constants for activation and deactivation, determined from mono-exponential fits, were 5.8, 6.3 and 12.9 ms for activation and 21.7, 29.6 and 38.1ms for deactivation of SK1, SK2 and SK3, respectively." "... Ca2+ concentrations required for half-maximal activation (K_0.5) of 0.3 uM and a Hill coefficient of ~4 (Fig. 1a)."

  • AdelmanMaylieSah12: "Fast application of saturating Ca2+ (10 μM) to inside-out patches shows that SK channels have activation time constants of 5–15 ms and deactivation time constants of ∼50 ms (Xia et al, 1998)." Mediates mAHP, which decays over "several hundred msec".

  • DwivediBhalla21: "SK channels are voltage insensitive and are activated solely by an increase of 0.5–1 μM in intracellular calcium (Ca2+) levels (Blatz and Magleby, 1986; Köhler et al., 1996; Sah, 1996; Hirschberg et al., 1999). An individual channel has a conductance of 10 pS and achieves its half activation at an intracellular calcium level of approximately 0.6 μM (Hirschberg et al., 1999). The time constant of channel activation is 5–15 ms, and the deactivation time is 30 ms (Xia et al., 1998; Oliver et al., 2000)." in PD: "while the symptoms were aggravated when the channels were blocked in the STN (Mourre et al., 2017)."
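
The following is a minimal standalone sketch of such a model in Go: the type and method names here are hypothetical (the actual implementation is in chans.SKCaParams), and the parameter values are illustrative, drawn from the constants quoted above.

package main

import (
	"fmt"
	"math"
)

// SKCaSketch is a minimal Hill-function SKCa model with separate
// activation vs. deactivation time constants. Names are hypothetical,
// not the actual chans.SKCaParams implementation.
type SKCaSketch struct {
	KD      float64 // Ca (uM) for half-maximal activation (~0.3-0.6 uM)
	Hill    float64 // Hill coefficient (~4)
	RiseTau float64 // activation time constant, msec (~5-15)
	FallTau float64 // deactivation time constant, msec (~30-50)
}

// MAtCa returns the asymptotic SKCa gating value for a given Ca level,
// via the Hill equation: Ca^n / (Ca^n + KD^n).
func (sk *SKCaSketch) MAtCa(ca float64) float64 {
	can := math.Pow(ca, sk.Hill)
	return can / (can + math.Pow(sk.KD, sk.Hill))
}

// MFmCa integrates the gating variable m toward its Ca-driven asymptote,
// using the fast rise constant when activating and the slower fall
// constant when deactivating. dt is the integration step in msec.
func (sk *SKCaSketch) MFmCa(ca, m, dt float64) float64 {
	trg := sk.MAtCa(ca)
	tau := sk.RiseTau
	if trg < m {
		tau = sk.FallTau
	}
	return m + (dt/tau)*(trg-m)
}

func main() {
	sk := &SKCaSketch{KD: 0.5, Hill: 4, RiseTau: 10, FallTau: 40}
	m := 0.0
	for t := 0; t < 50; t++ { // 20 msec of burst-level Ca, then baseline
		ca := 1.0
		if t >= 20 {
			ca = 0.1
		}
		m = sk.MFmCa(ca, m, 1)
	}
	fmt.Printf("SKCa gating m after burst + decay: %.3f\n", m)
}

With burst-level Ca, m rises rapidly toward its asymptote (a strong K+ conductance that pauses firing) and then decays slowly after Ca falls; with weak Ca, the steep Hill function means m barely activates, matching the partial-activation case described above.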

Learning logic

For MSNs, the standard 3-factor matrix learning rule (with D1/D2 sign reversal) works well here, without any extra need to propagate the gating signal from the GPi back up to the Striatum -- the GPeTA projection inhibits the Matrix neurons naturally:

  • dwt = da * rn.Act * sn.Act

However, there still is the perennial problem of temporal delay between the gating action and subsequent reward. We use the trace algorithm here, but with one new wrinkle. The cholinergic interneurons (CINs, aka TANs = tonically active neurons) are ideally suited to provide a "learn now" signal, by firing in proportion to the non-discounted, positively rectified US or CS value (i.e., whenever any kind of reward or punishment signal arrives, or is indicated by a CS).

Thus, on every trial, a gating trace signal is accumulated:

  • Tr += rn.Act * sn.Act

and when the CINs fire, this trace is then applied in proportion to the current DA value, and the trace is reset (see the sketch after these equations):

  • DWt += Tr * DA
  • Tr = 0
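
Putting these together, a minimal standalone sketch of the per-synapse logic (hypothetical names, not the actual MatrixPrjn code; the D2R sign reversal follows the DaReceptors description below):

// traceSyn sketches the trace-based matrix learning rule.
type traceSyn struct {
	Tr  float32 // accumulated gating trace of synaptic co-activity
	DWt float32 // pending weight change
}

// accumulate adds the current receiver x sender co-activity
// to the trace, on every trial.
func (ts *traceSyn) accumulate(rnAct, snAct float32) {
	ts.Tr += rnAct * snAct
}

// applyDA is called when the CINs fire (the ACh "learn now" signal):
// the accumulated trace is converted into a weight change in proportion
// to the current DA value, and the trace is then reset. d2r reverses
// the DA sign for D2R (NoGo) neurons.
func (ts *traceSyn) applyDA(da float32, d2r bool) {
	if d2r {
		da = -da
	}
	ts.DWt += ts.Tr * da
	ts.Tr = 0
}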

Other models

  • SuryanarayanaHellgrenKotaleskiGrillnerEtAl19 -- focuses mainly on WTA dynamics, and doesn't address in conceptual terms how the gating dynamics unfold over time.

References

Bogacz, R., Moraud, E. M., Abdi, A., Magill, P. J., & Baufreton, J. (2016). Properties of neurons in external globus pallidus can support optimal action selection. PLoS Computational Biology, 12(7).

  • lots of discussion, good data, but not same model for sure.

Dodson, P. D., Larvin, J. T., Duffell, J. M., Garas, F. N., Doig, N. M., Kessaris, N., Duguid, I. C., Bogacz, R., Butt, S. J., & Magill, P. J. (2015). Distinct developmental origins manifest in the specialized encoding of movement by adult neurons of the external globus pallidus. Neuron, 86, 501–513.

Dunovan, K., Lynch, B., Molesworth, T., & Verstynen, T. (2015). Competing basal ganglia pathways determine the difference between stopping and deciding not to go. Elife, 4, e08723.

  • good ref for Hanks -- DDM etc

Hegeman, D. J., Hong, E. S., Hernández, V. M., & Chan, C. S. (2016). The external globus pallidus: progress and perspectives. European Journal of Neuroscience, 43(10), 1239-1265.

Mirzaei, A., Kumar, A., Leventhal, D., Mallet, N., Aertsen, A., Berke, J., & Schmidt, R. (2017). Sensorimotor processing in the basal ganglia leads to transient beta oscillations during behavior. Journal of Neuroscience, 37(46), 11220-11232.

  • great data on STN etc

Suryanarayana, S. M., Hellgren Kotaleski, J., Grillner, S., & Gurney, K. N. (2019). Roles for globus pallidus externa revealed in a computational model of action selection in the basal ganglia. Neural Networks, 109, 113–136. https://doi.org/10.1016/j.neunet.2018.10.003

  • key modeling paper with lots of refs

Wei, W., & Wang, X. J. (2016). Inhibitory control in the cortico-basal ganglia-thalamocortical loop: complex regulation and interplay with memory and decision processes. Neuron, 92(5), 1093-1105.

  • simple model of SSRT -- good point of comparison

Documentation

Index

Constants

This section is empty.

Variables

var (
	// NeuronVars are extra neuron variables for pcore -- union across all types
	NeuronVars = []string{"DA", "ActLrn", "PhasicMax", "DALrn", "ACh", "SKCai", "SKCaM", "Gsk"}

	// NeuronVarsAll is the pcore collection of all neuron-level vars
	NeuronVarsAll []string

	// SynVarsAll is the pcore collection of all synapse-level vars (includes TraceSynVars)
	SynVarsAll []string
)
var (
	PCoreNeuronVars    = []string{"ActLrn", "PhasicMax"}
	PCoreNeuronVarsMap map[string]int
)
var (
	STNNeuronVars    = []string{"SKCai", "SKCaM", "Gsk"}
	STNNeuronVarsMap map[string]int
)
var KiT_CINLayer = kit.Types.AddType(&CINLayer{}, axon.LayerProps)

var KiT_DaReceptors = kit.Enums.AddEnum(DaReceptorsN, kit.NotBitFlag, nil)

var KiT_GPLayer = kit.Types.AddType(&GPLayer{}, axon.LayerProps)

var KiT_GPLays = kit.Enums.AddEnum(GPLaysN, kit.NotBitFlag, nil)

var KiT_GPiLayer = kit.Types.AddType(&GPiLayer{}, axon.LayerProps)

var KiT_Layer = kit.Types.AddType(&Layer{}, axon.LayerProps)

var KiT_MatrixLayer = kit.Types.AddType(&MatrixLayer{}, axon.LayerProps)

var KiT_MatrixPrjn = kit.Types.AddType(&MatrixPrjn{}, axon.PrjnProps)

var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)

var KiT_STNLayer = kit.Types.AddType(&STNLayer{}, axon.LayerProps)

var KiT_VThalLayer = kit.Types.AddType(&VThalLayer{}, axon.LayerProps)

var NetworkProps = axon.NetworkProps

var TraceSynVars = []string{"NTr"}

Functions

func AddBG

func AddBG(nt *axon.Network, prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, cin, gpeOut, gpeIn, gpeTA, stnp, stns, gpi, vthal axon.AxonLayer)

AddBG adds MtxGo, No, CIN, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi, and VThal layers, with given optional prefix. Assumes that a 4D structure will be used, with Pools representing separable gating domains. All GP / STN layers have gpNeurY x gpNeurX neurons. Appropriate PoolOneToOne connections are made between layers, using standard styles. space is the spacing between layers (2 typical).
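
For example, a minimal configuration sketch (assuming the usual emergent / axon v1 setup calls such as InitName and AddLayer2D, which may differ slightly across versions):

// imports: github.com/emer/axon/axon, github.com/emer/axon/pcore,
// github.com/emer/emergent/emer, github.com/emer/emergent/prjn
net := &axon.Network{}
net.InitName(net, "PCore")
// 1x1 pools, 6x6 Matrix neurons per pool, 1x1 GP / STN neurons per pool,
// spacing of 2 between layers (sizes are illustrative):
mtxGo, mtxNo, _, _, _, _, _, _, _, _ := pcore.AddBG(net, "", 1, 1, 6, 6, 1, 1, 2)
// a cortical input layer then drives the Matrix layers (see ConnectToMatrix):
acc := net.AddLayer2D("ACC", 6, 6, emer.Input)
pcore.ConnectToMatrix(net, acc, mtxGo, prjn.NewPoolOneToOne())
pcore.ConnectToMatrix(net, acc, mtxNo, prjn.NewPoolOneToOne())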

func AddBGPy

func AddBGPy(nt *axon.Network, prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) []axon.AxonLayer

AddBGPy adds MtxGo, No, CIN, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi, and VThal layers, with given optional prefix. Assumes that a 4D structure will be used, with Pools representing separable gating domains. Only Matrix has more than 1 unit per Pool by default. Appropriate PoolOneToOne connections are made between layers, using standard styles. space is the spacing between layers (2 typical). Py is the Python version, which returns the layers as a slice.

func ConnectToMatrix

func ConnectToMatrix(nt *axon.Network, send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectToMatrix adds a MatrixTracePrjn from given sending layer to a matrix layer

func PCoreNeuronVarIdxByName added in v1.5.1

func PCoreNeuronVarIdxByName(varNm string) (int, error)

PCoreNeuronVarIdxByName returns the index of the variable in the PCoreNeuron, or error

func STNNeuronVarIdxByName

func STNNeuronVarIdxByName(varNm string) (int, error)

STNNeuronVarIdxByName returns the index of the variable in the STNNeuron, or error

Types

type CINLayer

type CINLayer struct {
	axon.Layer
	RewThr  float32       `` /* 164-byte string literal not displayed */
	RewLays emer.LayNames `desc:"Reward-representing layer(s) from which this computes ACh as Max absolute value"`
	SendACh rl.SendACh    `desc:"list of layers to send acetylcholine to"`
	ACh     float32       `desc:"acetylcholine value for this layer"`
}

CINLayer (cholinergic interneuron) reads reward signals from named source layer(s), and sends the Max absolute value of that activity, as the positively rectified, non-prediction-discounted reward signal computed by CINs, in the form of an acetylcholine (ACh) signal. To handle positive-only reward signals, both a reward prediction and a reward outcome layer need to be included.
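
The core computation reduces to a max of absolute values across the reward layers; a standalone sketch (hypothetical helper, not the actual CINLayer method, which does this via MaxAbsRew in CyclePost):

import "math"

// achFmRew computes ACh as the positively rectified max absolute
// value of activity across the reward-representing layers.
func achFmRew(rewActs []float32) float32 {
	var ach float32
	for _, r := range rewActs {
		if a := float32(math.Abs(float64(r))); a > ach {
			ach = a
		}
	}
	return ach
}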

func AddCINLayer

func AddCINLayer(nt *axon.Network, name string) *CINLayer

AddCINLayer adds a CINLayer, with a single neuron.

func (*CINLayer) ActFmG

func (ly *CINLayer) ActFmG(ltime *axon.Time)

func (*CINLayer) Build

func (ly *CINLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*CINLayer) CyclePost

func (ly *CINLayer) CyclePost(ltime *axon.Time)

CyclePost is called at end of Cycle We use it to send ACh, which will then be active for the next cycle of processing.

func (*CINLayer) Defaults

func (ly *CINLayer) Defaults()

func (*CINLayer) GetACh

func (ly *CINLayer) GetACh() float32

func (*CINLayer) MaxAbsRew

func (ly *CINLayer) MaxAbsRew() float32

MaxAbsRew returns the maximum absolute value of reward layer activations

func (*CINLayer) SetACh

func (ly *CINLayer) SetACh(ach float32)

func (*CINLayer) UnitVal1D

func (ly *CINLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. Returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*CINLayer) UnitVarIdx

func (ly *CINLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*CINLayer) UnitVarNum

func (ly *CINLayer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

type CaParams

type CaParams struct {
	SKCa      chans.SKCaParams `view:"inline" desc:"small-conductance calcium-activated potassium channel"`
	CaD       bool             `desc:"use CaD timescale (delayed) calcium signal -- for STNs -- else use CaP (faster) for STNp"`
	CaScale   float32          `desc:"scaling factor applied to input Ca to bring into proper range of these dynamics"`
	ThetaInit bool             `desc:"initialize Ca, KCa values at start of every ThetaCycle (i.e., behavioral trial)"`
}

CaParams control the calcium dynamics in STN neurons. The SKCa small-conductance calcium-gated potassium channel produces the pausing function as a consequence of rapid bursting.

func (*CaParams) Defaults

func (kc *CaParams) Defaults()

func (*CaParams) Update added in v1.5.1

func (kc *CaParams) Update()

type DaModParams

type DaModParams struct {
	On      bool    `desc:"whether to use dopamine modulation"`
	ModGain bool    `viewif:"On" desc:"modulate gain instead of Ge excitatory synaptic input"`
	Minus   float32 `` /* 145-byte string literal not displayed */
	Plus    float32 `` /* 144-byte string literal not displayed */
	NegGain float32 `` /* 208-byte string literal not displayed */
	PosGain float32 `` /* 208-byte string literal not displayed */
}

Params for effects of dopamine (Da) based modulation, typically adding a Da-based term to the Ge excitatory synaptic input. Plus-phase = learning effects relative to minus-phase "performance" dopamine effects.

func (*DaModParams) Defaults

func (dm *DaModParams) Defaults()

func (*DaModParams) Gain

func (dm *DaModParams) Gain(da, gain float32, plusPhase bool) float32

Gain returns da-modulated gain value

func (*DaModParams) GainModOn

func (dm *DaModParams) GainModOn() bool

GainModOn returns true if modulating Gain

func (*DaModParams) Ge

func (dm *DaModParams) Ge(da, ge float32, plusPhase bool) float32

Ge returns da-modulated ge value

func (*DaModParams) GeModOn

func (dm *DaModParams) GeModOn() bool

GeModOn returns true if modulating Ge

type DaReceptors

type DaReceptors int

DaReceptors for D1R and D2R dopamine receptors

const (
	// D1R primarily expresses Dopamine D1 Receptors -- dopamine is excitatory and bursts of dopamine lead to increases in synaptic weight, while dips lead to decreases -- direct pathway in dorsal striatum
	D1R DaReceptors = iota

	// D2R primarily expresses Dopamine D2 Receptors -- dopamine is inhibitory and bursts of dopamine lead to decreases in synaptic weight, while dips lead to increases -- indirect pathway in dorsal striatum
	D2R

	DaReceptorsN
)

func (*DaReceptors) FromString

func (i *DaReceptors) FromString(s string) error

func (DaReceptors) MarshalJSON

func (ev DaReceptors) MarshalJSON() ([]byte, error)

func (DaReceptors) String

func (i DaReceptors) String() string

func (*DaReceptors) UnmarshalJSON

func (ev *DaReceptors) UnmarshalJSON(b []byte) error

type GPLayer

type GPLayer struct {
	Layer
	GPLay GPLays `desc:"type of GP layer"`
}

GPLayer represents a globus pallidus layer, including: GPeOut, GPeIn, GPeTA (arkypallidal), and GPi (see GPLay for type). Typically just a single unit per Pool representing a given stripe.

func AddGPeLayer

func AddGPeLayer(nt *axon.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *GPLayer

AddGPeLayer adds a GPLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. Typically nNeurY, nNeurX will both be 1, but could have more for noise etc.

func (*GPLayer) Defaults

func (ly *GPLayer) Defaults()

type GPLays

type GPLays int

GPLays for GPLayer type

const (
	// GPeOut is Outer layer of GPe neurons, receiving inhibition from MtxGo
	GPeOut GPLays = iota

	// GPeIn is Inner layer of GPe neurons, receiving inhibition from GPeOut and MtxNo
	GPeIn

	// GPeTA is arkypallidal layer of GPe neurons, receiving inhibition from GPeIn
	// and projecting inhibition to Mtx
	GPeTA

	// GPi is the inner globus pallidus, functionally equivalent to SNr,
	// receiving from MtxGo and GPeIn, and sending inhibition to VThal
	GPi

	GPLaysN
)

func (*GPLays) FromString

func (i *GPLays) FromString(s string) error

func (GPLays) MarshalJSON

func (ev GPLays) MarshalJSON() ([]byte, error)

func (GPLays) String

func (i GPLays) String() string

func (*GPLays) UnmarshalJSON

func (ev *GPLays) UnmarshalJSON(b []byte) error

type GPiLayer

type GPiLayer struct {
	GPLayer
}

GPiLayer represents the GPi / SNr output nucleus of the BG. It gets inhibited by the MtxGo and GPeIn layers, and its minimum activation during this inhibition is recorded in ActLrn, for learning. Typically just a single unit per Pool representing a given stripe.

func AddGPiLayer

func AddGPiLayer(nt *axon.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *GPiLayer

AddGPiLayer adds a GPiLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. Typically nNeurY, nNeurX will both be 1, but could have more for noise etc.

func (*GPiLayer) Defaults

func (ly *GPiLayer) Defaults()

type Layer

type Layer struct {
	rl.Layer
	PhasicMaxCycMin int           `desc:"minimum cycle after which phasic maximum activity is recorded"`
	PCoreNeurs      []PCoreNeuron `` /* 146-byte string literal not displayed */
}

Layer is the basic pcore layer, which has a DA dopamine value from rl.Layer and tracks the phasic maximum activation during the gating window.

func (*Layer) ActFmG added in v1.5.1

func (ly *Layer) ActFmG(ltime *axon.Time)

func (*Layer) ActLrnFmPhasicMax added in v1.5.1

func (ly *Layer) ActLrnFmPhasicMax()

ActLrnFmPhasicMax sets ActLrn to PhasicMax

func (*Layer) Build added in v1.5.1

func (ly *Layer) Build() error

func (*Layer) Defaults added in v1.5.1

func (ly *Layer) Defaults()

func (*Layer) InitActLrn added in v1.5.1

func (ly *Layer) InitActLrn()

InitActLrn initializes the ActLrn to 0

func (*Layer) InitActs

func (ly *Layer) InitActs()

func (*Layer) InitPhasicMax added in v1.5.1

func (ly *Layer) InitPhasicMax()

InitPhasicMax initializes the PhasicMax to 0

func (*Layer) MaxPhasicMax added in v1.5.1

func (ly *Layer) MaxPhasicMax() float32

MaxPhasicMax returns the maximum PhasicMax across the layer

func (*Layer) NewState added in v1.5.1

func (ly *Layer) NewState()

func (*Layer) PCoreNeuronByIdx added in v1.5.1

func (ly *Layer) PCoreNeuronByIdx(idx int) *PCoreNeuron

PCoreNeuronByIdx returns neuron at given index

func (*Layer) PhasicMaxAvgByPool added in v1.5.1

func (ly *Layer) PhasicMaxAvgByPool(pli int) float32

PhasicMaxAvgByPool returns the average PhasicMax value by given pool index. Pool index 0 is the whole layer, 1 is the first sub-pool, etc.

func (*Layer) PhasicMaxFmAct added in v1.5.1

func (ly *Layer) PhasicMaxFmAct(ltime *axon.Time)

PhasicMaxFmAct computes PhasicMax from Activation

func (*Layer) PhasicMaxMaxByPool added in v1.5.1

func (ly *Layer) PhasicMaxMaxByPool(pli int) float32

PhasicMaxMaxByPool returns the maximum PhasicMax value by given pool index. Pool index 0 is the whole layer, 1 is the first sub-pool, etc.

func (*Layer) UnitVal1D

func (ly *Layer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. Returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*Layer) UnitVarIdx

func (ly *Layer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*Layer) UnitVarNum

func (ly *Layer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

type MatrixLayer

type MatrixLayer struct {
	Layer
	DaR    DaReceptors  `desc:"dominant type of dopamine receptor -- D1R for Go pathway, D2R for NoGo"`
	Matrix MatrixParams `view:"inline" desc:"matrix parameters"`
	DALrn  float32      `inactive:"+" desc:"effective learning dopamine value for this layer: reflects DaR and Gains"`
	ACh    float32      `` /* 190-byte string literal not displayed */
}

MatrixLayer represents the dorsal matrisome MSN's that are the main Go / NoGo gating units in BG. D1R = Go, D2R = NoGo.

func AddMatrixLayer

func AddMatrixLayer(nt *axon.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, da DaReceptors) *MatrixLayer

AddMatrixLayer adds a MatrixLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. da gives the DaReceptor type (D1R = Go, D2R = NoGo)

func (*MatrixLayer) ActFmG

func (ly *MatrixLayer) ActFmG(ltime *axon.Time)

ActFmG computes rate-code activation from Ge, Gi, Gl conductances and updates learning running-average activations from that Act. Matrix extends this to call DAActLrn and update PhasicMax -> ActLrn.

func (*MatrixLayer) DAActLrn

func (ly *MatrixLayer) DAActLrn(ltime *axon.Time)

DAActLrn sets effective learning dopamine value from given raw DA value, applying Burst and Dip Gain factors, and then reversing sign for D2R. Also sets ActLrn based on whether corresponding VThal stripe fired above ThalThr -- flips sign of learning for stripe firing vs. not.
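
A standalone sketch of that documented logic (hypothetical names; the exact gain values and the direction of the ActLrn sign flip are set by the MatrixParams fields):

// daActLrnSketch applies Burst / Dip gains to raw DA, reverses sign
// for D2R, and flips the sign of the learning activation for stripes
// whose VThal pool did not fire above ThalThr.
func daActLrnSketch(rawDA, burstGain, dipGain float32, d2r, thalGated bool, act float32) (daLrn, actLrn float32) {
	daLrn = rawDA
	if daLrn > 0 {
		daLrn *= burstGain // dopamine bursts
	} else {
		daLrn *= dipGain // dopamine dips
	}
	if d2r {
		daLrn = -daLrn // sign reversal for D2R (NoGo)
	}
	actLrn = act
	if !thalGated {
		actLrn = -actLrn // flip learning sign for non-gated stripes
	}
	return
}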

func (*MatrixLayer) Defaults

func (ly *MatrixLayer) Defaults()

func (*MatrixLayer) GetACh

func (ly *MatrixLayer) GetACh() float32

func (*MatrixLayer) InitActs

func (ly *MatrixLayer) InitActs()

func (*MatrixLayer) SetACh

func (ly *MatrixLayer) SetACh(ach float32)

func (*MatrixLayer) ThalLayer

func (ly *MatrixLayer) ThalLayer() (*VThalLayer, error)

func (*MatrixLayer) UnitVal1D

func (ly *MatrixLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. Returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*MatrixLayer) UnitVarIdx

func (ly *MatrixLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*MatrixLayer) UnitVarNum added in v1.5.1

func (ly *MatrixLayer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

type MatrixParams

type MatrixParams struct {
	ThalLay   string  `desc:"name of VThal layer -- needed to get overall gating output action"`
	ThalThr   float32 `` /* 183-byte string literal not displayed */
	Deriv     bool    `` /* 328-byte string literal not displayed */
	BurstGain float32 `` /* 237-byte string literal not displayed */
	DipGain   float32 `` /* 237-byte string literal not displayed */
}

MatrixParams has parameters for Dorsal Striatum Matrix computation. These are the main Go / NoGo gating units in BG, driving updating of PFC WM in PBWM.

func (*MatrixParams) Defaults

func (mp *MatrixParams) Defaults()

func (*MatrixParams) LrnFactor

func (mp *MatrixParams) LrnFactor(act float32) float32

LrnFactor returns the multiplicative factor for the level of MSN activation. If Deriv is true, the factor is 2 * act * (1 - act) -- the factor of 2 compensates for the otherwise-reduced learning from these factors. Otherwise, the factor is just act.
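
Restated directly as a sketch:

// lrnFactor restates the documented LrnFactor behavior.
func lrnFactor(act float32, deriv bool) float32 {
	if deriv {
		return 2 * act * (1 - act)
	}
	return act
}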

type MatrixPrjn

type MatrixPrjn struct {
	axon.Prjn
	Trace  MatrixTraceParams `view:"inline" desc:"special parameters for matrix trace learning"`
	TrSyns []TraceSyn        `desc:"trace synaptic state values, ordered by the sending layer units which owns them -- one-to-one with SConIdx array"`
}

MatrixPrjn does dopamine-modulated, gated trace learning, for Matrix learning in PBWM context

func (*MatrixPrjn) Build

func (pj *MatrixPrjn) Build() error

func (*MatrixPrjn) ClearTrace

func (pj *MatrixPrjn) ClearTrace()

func (*MatrixPrjn) DWt

func (pj *MatrixPrjn) DWt(ltime *axon.Time)

DWt computes the weight change (learning) -- on sending projections.

func (*MatrixPrjn) Defaults

func (pj *MatrixPrjn) Defaults()

func (*MatrixPrjn) InitWts

func (pj *MatrixPrjn) InitWts()

func (*MatrixPrjn) SynVal1D

func (pj *MatrixPrjn) SynVal1D(varIdx int, synIdx int) float32

SynVal1D returns value of given variable index (from SynVarIdx) on given SynIdx. Returns NaN on invalid index. This is the core synapse var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*MatrixPrjn) SynVarIdx

func (pj *MatrixPrjn) SynVarIdx(varNm string) (int, error)

SynVarIdx returns the index of given variable within the synapse, according to *this prjn's* SynVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*MatrixPrjn) SynVarNames added in v1.5.1

func (pj *MatrixPrjn) SynVarNames() []string

func (*MatrixPrjn) SynVarNum

func (pj *MatrixPrjn) SynVarNum() int

SynVarNum returns the number of synapse-level variables for this prjn. This is needed for extending indexes in derived types.

type MatrixTraceParams

type MatrixTraceParams struct {
	CurTrlDA bool    `` /* 277-byte string literal not displayed */
	Decay    float32 `` /* 168-byte string literal not displayed */
}

MatrixTraceParams for trace-based learning in the MatrixPrjn. A trace of synaptic co-activity is formed, and then modulated by dopamine whenever it occurs. This bridges the temporal gap between gating activity and subsequent activity, and is based biologically on synaptic tags. The trace is reset at the time of reward, based on the ACh level from the CINs.

func (*MatrixTraceParams) Defaults

func (tp *MatrixTraceParams) Defaults()

type Network

type Network struct {
	deep.Network
}

pcore.Network has methods for configuring specialized PCore network components. PCore = Pallidal Core model of BG.

func (*Network) AddBG

func (nt *Network) AddBG(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, cin, gpeOut, gpeIn, gpeTA, stnp, stns, gpi, vthal axon.AxonLayer)

AddBG adds MtxGo, No, CIN, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi, and VThal layers, with given optional prefix. Assumes that a 4D structure will be used, with Pools representing separable gating domains. All GP / STN layers have gpNeurY x gpNeurX neurons. Appropriate PoolOneToOne connections are made between layers, using standard styles. space is the spacing between layers (2 typical).

func (*Network) ConnectToMatrix

func (nt *Network) ConnectToMatrix(send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectToMatrix adds a MatrixTracePrjn from given sending layer to a matrix layer

func (*Network) SynVarNames

func (nt *Network) SynVarNames() []string

SynVarNames returns the names of all the variables on the synapses in this network.

func (*Network) UnitVarNames

func (nt *Network) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this layer

type PCoreLayer added in v1.5.1

type PCoreLayer interface {
	// PCoreNeuronByIdx returns neuron at given index
	PCoreNeuronByIdx(idx int) *PCoreNeuron

	// PhasicMaxAvgByPool returns the average PhasicMax value by given pool index
	PhasicMaxAvgByPool(pli int) float32

	// PhasicMaxMaxByPool returns the max PhasicMax value by given pool index
	PhasicMaxMaxByPool(pli int) float32

	// PhasicMaxMax returns the max PhasicMax value across layer
	PhasicMaxMax() float32
}

PCoreLayer exposes PCoreNeuron access and PhasicMax values

type PCoreNeuron added in v1.5.1

type PCoreNeuron struct {
	ActLrn    float32 `desc:"learning activity value -- based on PhasicMax activation plus other potential factors depending on layer type."`
	PhasicMax float32 `desc:"maximum phasic activation value during a gating window."`
}

PCoreNeuron holds the extra neuron (unit) level variables for pcore computation.

func (*PCoreNeuron) VarByIndex added in v1.5.1

func (nrn *PCoreNeuron) VarByIndex(idx int) float32

VarByIndex returns variable using index (0 = first variable in PCoreNeuronVars list)

func (*PCoreNeuron) VarByName added in v1.5.1

func (nrn *PCoreNeuron) VarByName(varNm string) (float32, error)

VarByName returns variable by name, or error

func (*PCoreNeuron) VarNames added in v1.5.1

func (nrn *PCoreNeuron) VarNames() []string

type STNLayer

type STNLayer struct {
	Layer
	Ca       CaParams    `` /* 186-byte string literal not displayed */
	STNNeurs []STNNeuron `` /* 149-byte string literal not displayed */
}

STNLayer represents STN neurons, with two subtypes: STNp neurons are more strongly driven and get over the bursting threshold, driving strong, rapid activation of the KCa channels and causing a long pause in firing, which creates a window during which GPe dynamics resolve the Go vs. No balance. STNs neurons are more weakly driven and thus activate KCa more slowly, resulting in a longer period of activation, during which the GPi is inhibited to prevent premature gating based only on MtxGo inhibition -- gating only occurs once the GPeIn signal has had a chance to integrate its MtxNo inputs.

func AddSTNLayer

func AddSTNLayer(nt *axon.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *STNLayer

AddSTNLayer adds a subthalamic nucleus Layer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. Typically nNeurY, nNeurX will both be 1, but could have more for noise etc.

func (*STNLayer) ActFmG

func (ly *STNLayer) ActFmG(ltime *axon.Time)

func (*STNLayer) Build

func (ly *STNLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*STNLayer) Defaults

func (ly *STNLayer) Defaults()

func (*STNLayer) InitActs

func (ly *STNLayer) InitActs()

func (*STNLayer) NewState added in v1.5.1

func (ly *STNLayer) NewState()

func (*STNLayer) UnitVal1D

func (ly *STNLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. Returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*STNLayer) UnitVarIdx

func (ly *STNLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*STNLayer) UnitVarNum

func (ly *STNLayer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

func (*STNLayer) UpdateParams added in v1.5.1

func (ly *STNLayer) UpdateParams()

type STNNeuron

type STNNeuron struct {
	SKCai float32 `` /* 158-byte string literal not displayed */
	SKCaM float32 `desc:"Calcium-gated potassium channel gating factor, driven by SKCai via a Hill equation as in chans.SKCaParams."`
	Gsk   float32 `desc:"Calcium-gated potassium channel conductance as a function of Gbar * SKCaM."`
}

STNNeuron holds the extra neuron (unit) level variables for STN computation.

func (*STNNeuron) VarByIndex

func (nrn *STNNeuron) VarByIndex(idx int) float32

VarByIndex returns variable using index (0 = first variable in STNNeuronVars list)

func (*STNNeuron) VarByName

func (nrn *STNNeuron) VarByName(varNm string) (float32, error)

VarByName returns variable by name, or error

func (*STNNeuron) VarNames

func (nrn *STNNeuron) VarNames() []string

type TraceSyn

type TraceSyn struct {
	NTr float32 `desc:"new trace = send * recv -- drives updates to trace value: sn.ActLrn * rn.ActLrn (subject to derivative too)"`
}

TraceSyn holds extra synaptic state for trace projections

func (*TraceSyn) VarByIndex

func (sy *TraceSyn) VarByIndex(varIdx int) float32

VarByIndex returns synapse variable by index

func (*TraceSyn) VarByName

func (sy *TraceSyn) VarByName(varNm string) float32

VarByName returns synapse variable by name

type VThalLayer

type VThalLayer struct {
	Layer
}

VThalLayer represents the ventral thalamus: VA / VM / VL, which receives BG gating in the form of an inhibitory projection from GPi.

func AddVThalLayer

func AddVThalLayer(nt *axon.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *VThalLayer

AddVThalLayer adds a ventral thalamus (VA/VL/VM) Layer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. Typically nNeurY, nNeurX will both be 1, but could have more for noise etc.

func (*VThalLayer) Defaults

func (ly *VThalLayer) Defaults()
