pcore

package v1.6.12
Published: Dec 6, 2022 License: BSD-3-Clause Imports: 15 Imported by: 0

README

PCore BG Model: Pallidal Core model of BG

The core of this model is the external segment of the globus pallidus (GPe), which plays a central role in integrating Go and NoGo signals from the striatum, in contrast to the standard, "classical" framework, which focuses on the GPi or SNr as the primary locus of integration.

There are two recently-identified revisions to the standard circuitry diagram that drive this model (Suryanarayana et al, 2019; others):

  • A distinction between outer (GPeOut) and inner (GPeIn) layers of the GPe, with both D1-dominant (Go) and D2-dominant (NoGo / No) projections into the GPe (so the classical "direct" vs. "indirect" terminology does not quite apply).

  • And a third, even more distinct arkypallidal (GPeTA) GPe layer.

Thus, the GPe is clearly capable of significant additional computation within this more elaborate circuitry, and it has also long been recognized as strongly interconnected with the subthalamic nucleus (STN), which provides a critical additional dynamic. This model provides a clear computational function for the GPe / STN complex within the larger BG system, with the following properties:

  • GPeIn is the core of the core: it integrates Go and No striatal signals in consistent terms, with direct No inhibitory inputs, and indirect Go net excitatory inputs, inverted by way of double-inhibition through the GPeOut. By having both of these signals converging on single neurons, the GPeIn can directly weigh the balance for and against a potential action. Thus, in many ways, GPeIn is like the GPi / SNr of the classical model, except with the sign reversed (i.e., it is more active for a more net-Go balance).

  • The GPeIn then projects inhibition to the GPeTA, which in turn drives the strongest source of broad, diffuse inhibition to the striatum: this provides the long-sought winner-take-all (WTA) action selection dynamic in the BG circuitry, by broadcasting back an effective inhibitory threshold that only the most strongly-activated striatal neurons can withstand. In addition, the GPeTA sends a weaker inhibitory projection into the striatum, and having both of these is essential to prevent strong oscillatory dynamics.

  • The GPi integrates the direct Go inhibition from the striatum, and the integrated Go vs. No balance from the GPeIn, which have the same sign and contribute synergistically to the GPi's release of inhibition on the thalamus, as in the standard model. In our model, the integrated GPeIn input is stronger than the one from the striatum Go pathway.

  • The STN in our model plays two distinct functional roles, across two subtypes of neurons, defined in terms of differential connectivity patterns, which are known to have the relevant diversity of connectivity (cites):

    • The STNp (pausing) neurons receive strong excitatory connections from the frontal cortex, and project to the GPe (In and Out), while also receiving GPe inhibition. The frontal input triggers a rapid inhibitory oscillation through the GPe circuits, which return strong inhibition in response to the STN's strong excitation. This dynamic then produces a sustained inhibition of STNp firing, due to calcium-gated potassium channels (KCa, cites), which has been well documented in a range of preparations including, critically, awake-behaving animals (in vivo) (Fujimoto & Kita, 1993; Magill et al, 2004). The net effect of this burst / pause is to excite the GPe and then open up a window of time when it is free from driving STN inputs, to integrate the balance of the Go vs. No pathways. We see in our model that this produces a nice graded settling process over time within the pallidal core, such that the overall gating output reaction time (RT) reflects the degree of Go vs. No conflict: high-conflict cases are significantly slower than unopposed Go cases, while the overall strength of Go also modulates RT (stronger Go = faster RT).

    • The STNs (stopping) neurons receive weaker excitatory inputs from frontal cortex, and project more to the GPi / SNr compared to the GPe. These neurons do not trigger the de-inactivation required for strong levels of KCa channel opening, and instead experience a more gradual activation of KCa due to Ca influx over a more prolonged wave of activation. Functionally, they are critical for stopping premature expression of the BG output through the GPi, before the core GPe integration has had a chance to unfold. In this way, the STNs population functions like the more traditional view of the STN overall, as a kind of global NoGo signal that prevents BG firing from being too "impulsive" (cites). However, in this model, it is the GPe Go vs. No balancing that provides the more "considered", slower calculation, whereas the frontal cortex is assumed to play this role in the standard account.

  • Both STN pathways recover from the KCa inhibition while inactivated, and their resumed activity excites the GP areas, terminating the window during which the BG can functionally gate. Furthermore, it is important for the STN neurons to experience a period with no driving frontal inputs, to reset the KCa channels so they can again support the STNp burst / pause dynamics. This has the desirable functional consequence of preventing any sustained frontal patterns from driving repeated BG gating of the same pathways again and again, providing a beneficial bias that keeps the flow of action and cognition in a constant state of flux.

In summary, the GPe is the core integrator of the BG circuit, while the STN orchestrates the timing, opening the window for integration and preventing premature output. The striatum is thus free to play its traditional role of learning to identify critical features supporting Go vs. No action weights, under the influence of phasic dopamine signals. Because it is by far the largest BG structure, it is not well suited for WTA competition among alternative possible courses of action, or for integration between Go and No, which instead are much better supported by the compact and well-connected GPe core.

Dynamics

[Figure: PCore network activity unfolding over one alpha trial]

The above figure shows the key stages unfolding over a standard alpha trial of 100 cycles. Some of the time constants have been sped up to ensure everything occurs within this timeframe -- it may take longer in the real system.

  • Cycle 1: tonic activation of GP and STN, just at the onset of cortical inputs. GPeTA is largely inhibited by GPeIn, consistent with neural recordings.

  • Cycle 7: STNp peaks above the .9 threshold at which KCa triggers pausing, while STNs rises more slowly, driving a slower accumulation of KCa. STN has activated the GPe layers.

  • Cycle 23: Offset of STNp enables the GPe layers to settle into integration of MtxGo and MtxNo activations. GPeTA sends broad inhibition to the Matrix, such that only the strongest neurons are able to stay active.

  • Cycle 51: STNs succumbs to the slow accumulation of KCa, allowing the remaining MtxGo and GPeIn activity to fully inhibit GPi output, disinhibiting the ventral thalamus (VThal), which then drives an excitatory loop through PFCo output layers.

  • Cycle 91: STN layers regain activation, resetting the circuit. If PFC inputs do not turn off, the system will not re-gate, because the KCa channels are not fully reset.

[Figure: gating reaction time (RT) across the sweep of ACC positive / negative inputs]

The above figure shows the reaction time (cycles) to activate the thalamus above a firing threshold of .5, for a full sweep of ACC positive and negative input values, which preferentially activate Go and No respectively. The positive values are incremented by .1 in an outer loop, while the negative values are incremented by .1 within each level of positive. Thus, as NoGo gets stronger, it competes against Go, causing an increase in reaction time, followed by a failure to gate at all.
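
A minimal sketch of this sweep structure in Go, where runTrial is a hypothetical helper standing in for running one trial of the actual model:

package main

import "fmt"

// runTrial is a hypothetical stand-in: it would configure the ACC positive
// (Go) and negative (NoGo) input strengths, run one 100-cycle trial of the
// network, and return the cycle at which VThal activity exceeds .5,
// or -1 if the system fails to gate.
func runTrial(pos, neg float32) int {
	// ... step the pcore network here; stubbed for illustration
	return -1
}

func main() {
	for p := 1; p <= 10; p++ { // outer loop: ACC positive, in .1 increments
		for n := 0; n <= 10; n++ { // inner loop: ACC negative, in .1 increments
			pos, neg := float32(p)*0.1, float32(n)*0.1
			fmt.Printf("pos=%.1f neg=%.1f rt=%d\n", pos, neg, runTrial(pos, neg))
		}
	}
}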

Electrophysiological data

Recent data: the majority of STN units show a decreasing ramp prior to the go cue; a small subset then show a brief phasic burst at Go, a brief inhibitory window, and then strong sustained activity during and after the movement. This sustained activity will turn off the gating window -- gating in PFCd can presumably do that and provide the final termination of the gating event.

Data from Dodson et al., 2015 and Mirzaei et al., 2017 show a brief increase, then dips or increases in activity in GPe prototypical neurons, along with very consistent data showing a brief burst then shutoff in TA neurons. Thus both the outer and inner GPe prototypical neuron profiles can be found.

STNp pause mechanisms: SKCa channels

The small-conductance calcium-activated potassium channel (SKCa) is widely distributed throughout the brain, and in general plays a role in the medium-duration afterhyperpolarization (mAHP) (Dwivedi & Bhalla, 2021), including most dramatically the full pausing of neural firing observed in the STNp neurons. The basic mechanism is straightforward: Ca++ influx from VGCC channels, opened via spiking, activates SKCa channels in a briefly delayed manner (activation time constant of 5-15 msec), and the combined trace of Ca and the relatively slow SKCa deactivation results in a window where the increased K+ conductance (leak) can prevent the cell from firing. If the initial spiking is insufficiently intense, the resulting slow accumulation of Ca partially activates SKCa, slowing firing but not fully pausing it. Thus, the STNp implements a critical switch between opening the BG gating window for strong, novel inputs, vs. keeping it closed for weak, familiar ones.

These SKCa channels are not widely modeled, and existing models from FujitaFukaiKitano12 (based on GunayEdgertonJaeger08) and GilliesWillshaw06 have different implementations that diverge from some of the basic literature cited below. Thus, we use a simple model based on the Hill activation curve and separate activation vs. deactivation time constants, as sketched in code after the following citations.

  • XiaFaklerRivardEtAl98: "Time constants for activation and deactivation, determined from mono-exponential fits, were 5.8, 6.3 and 12.9 ms for activation and 21.7, 29.6 and 38.1ms for deactivation of SK1, SK2 and SK3, respectively." "... Ca2+ concentrations required for half-maximal activation (K_0.5) of 0.3 uM and a Hill coefficient of ~4 (Fig. 1a)."

  • AdelmanMaylieSah12: "Fast application of saturating Ca2+ (10 μM) to inside-out patches shows that SK channels have activation time constants of 5–15 ms and deactivation time constants of ∼50 ms (Xia et al, 1998)." Mediates mAHP, which decays over "several hundred msec".

  • DwivediBhalla21 "SK channels are voltage insensitive and are activated solely by an increase of 0.5–1 μM in intracellular calcium (Ca2+) levels (Blatz and Magleby, 1986; Köhler et al., 1996; Sah, 1996; Hirschberg et al., 1999). An individual channel has a conductance of 10 pS and achieves its half activation at an intracellular calcium level of approximately 0.6 μM (Hirschberg et al., 1999). The time constant of channel activation is 5–15 ms, and the deactivation time is 30 ms (Xia et al., 1998; Oliver et al., 2000)." in PD: "while the symptoms were aggravated when the channels were blocked in the STN (Mourre et al., 2017)."
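
To make this concrete, the following is a minimal sketch in Go of such a Hill-based SKCa gating model, using rough constants from the quotes above (K0.5 ~ 0.6 uM, Hill coefficient ~ 4, activation tau ~ 15 ms, deactivation tau ~ 30 ms). The names and structure are illustrative -- this is not the package's chans.SKCaParams implementation:

package main

import (
	"fmt"
	"math"
)

// SKCa is a minimal Hill-based SKCa gating model with separate
// activation vs. deactivation time constants.
type SKCa struct {
	Gbar   float32 // maximal conductance
	C50    float32 // Ca (uM) for half-maximal activation (~0.6)
	Hill   float32 // Hill coefficient (~4)
	ActTau float32 // activation time constant, msec (~5-15)
	DeTau  float32 // deactivation time constant, msec (~30)
}

// MAsymp returns the asymptotic gating value for given Ca (uM),
// per the Hill equation: Ca^h / (Ca^h + C50^h).
func (sk *SKCa) MAsymp(ca float32) float32 {
	can := math.Pow(float64(ca), float64(sk.Hill))
	c50n := math.Pow(float64(sk.C50), float64(sk.Hill))
	return float32(can / (can + c50n))
}

// MFmCa integrates gating variable m toward its asymptote over one msec,
// using the fast tau when activating and the slower tau when deactivating.
func (sk *SKCa) MFmCa(ca, m float32) float32 {
	mas := sk.MAsymp(ca)
	if mas > m {
		return m + (mas-m)/sk.ActTau
	}
	return m + (mas-m)/sk.DeTau
}

func main() {
	sk := &SKCa{Gbar: 2, C50: 0.6, Hill: 4, ActTau: 15, DeTau: 30}
	m := float32(0)
	for t := 0; t < 20; t++ { // strong Ca from a burst: m rises quickly
		m = sk.MFmCa(1.5, m)
	}
	fmt.Printf("after burst: m=%.3f  Gsk=%.3f\n", m, sk.Gbar*m)
}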

Learning logic

For MSNs, the standard 3-factor matrix learning (with D1/D2 sign reversal) works well here, without any extra need to propagate the gating signal from GPi back up to Striatum -- the GPeTA projection inhibits the Matrix neurons naturally.

  • dwt = da * rn.Act * sn.Act

However, there is still the perennial problem of temporal delay between the gating action and subsequent reward. We use the trace algorithm here, but with one new wrinkle. The cholinergic interneurons (CINs, aka TANs = tonically active neurons) are ideally suited to provide a "learn now" signal, by firing in proportion to the non-discounted, positively rectified US or CS value (i.e., whenever any kind of reward or punishment signal arrives, or is indicated by a CS).

Thus, on every trial, a gating trace signal is accumulated:

  • Tr += rn.Act * sn.Act

and when the CINs fire, this trace is then applied in proportion to the current DA value, and the trace is reset (both steps are sketched in code after the following equations):

  • DWt += Tr * DA
  • Tr = 0
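
Putting these pieces together, here is a minimal sketch of the per-synapse trace update in Go (names are illustrative; the actual implementation lives in MatrixPrjn below, and the D1R / D2R sign reversal is applied to the da value before this point):

// TraceLearn accumulates the gating trace on every trial, and when the
// CIN ACh signal marks a US / CS event, applies the trace in proportion
// to the current dopamine value and resets it.
func TraceLearn(tr, rnAct, snAct, da float32, achEvent bool) (dwt, newTr float32) {
	newTr = tr + rnAct*snAct // Tr += rn.Act * sn.Act
	if achEvent {            // CINs fire: learn now
		dwt = newTr * da // DWt += Tr * DA
		newTr = 0        // Tr = 0
	}
	return
}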

Other models

  • SuryanarayanaHellgrenKotaleskiGrillnerEtAl19 -- focuses mainly on WTA dynamics, and does not address the temporal unfolding dynamics in conceptual terms.

References

Bogacz, R., Moraud, E. M., Abdi, A., Magill, P. J., & Baufreton, J. (2016). Properties of neurons in external globus pallidus can support optimal action selection. PLoS Computational Biology, 12(7).

  • lots of discussion, good data, but not same model for sure.

Dodson, P. D., Larvin, J. T., Duffell, J. M., Garas, F. N., Doig, N. M., Kessaris, N., Duguid, I. C., Bogacz, R., Butt, S. J., & Magill, P. J. (2015). Distinct developmental origins manifest in the specialized encoding of movement by adult neurons of the external globus pallidus. Neuron, 86, 501–513.

Dunovan, K., Lynch, B., Molesworth, T., & Verstynen, T. (2015). Competing basal ganglia pathways determine the difference between stopping and deciding not to go. Elife, 4, e08723.

  • good ref for Hanks -- DDM etc

Hegeman, D. J., Hong, E. S., Hernández, V. M., & Chan, C. S. (2016). The external globus pallidus: progress and perspectives. European Journal of Neuroscience, 43(10), 1239-1265.

Mirzaei, A., Kumar, A., Leventhal, D., Mallet, N., Aertsen, A., Berke, J., & Schmidt, R. (2017). Sensorimotor processing in the basal ganglia leads to transient beta oscillations during behavior. Journal of Neuroscience, 37(46), 11220-11232.

  • great data on STN etc

Suryanarayana, S. M., Hellgren Kotaleski, J., Grillner, S., & Gurney, K. N. (2019). Roles for globus pallidus externa revealed in a computational model of action selection in the basal ganglia. Neural Networks, 109, 113–136. https://doi.org/10.1016/j.neunet.2018.10.003

  • key modeling paper with lots of refs

Wei, W., & Wang, X. J. (2016). Inhibitory control in the cortico-basal ganglia-thalamocortical loop: complex regulation and interplay with memory and decision processes. Neuron, 92(5), 1093-1105.

  • simple model of SSRT -- good point of comparison

Documentation

Index

Constants

const (
	// Matrix are the matrisome medium spiny neurons (MSNs) that are the main
	// Go / NoGo gating units in BG.
	Matrix emer.LayerType = emer.LayerType(pvlv.LayerTypeN) + iota

	// STN is a subthalamic nucleus layer: STNp or STNs
	STN

	// GP is a globus pallidus layer: GPe or GPi
	GP

	// Thal is a thalamic layer, used for MD mediodorsal thalamus and
	// VM / VL / VA ventral thalamic nuclei.
	Thal

	// PT are layer 5IB intrinsic bursting pyramidal tract neocortical neurons.
	// These are bidirectionally interconnected with BG-gated thalamus in PFC.
	PT
)

Variables

var (
	// NeuronVars are extra neuron variables for pcore -- union across all types
	NeuronVars = []string{"Burst", "BurstPrv", "CtxtGe", "DA", "DALrn", "ACh", "Gated", "SKCai", "SKCaM", "Gsk"}

	// NeuronVarsAll is the pcore collection of all neuron-level vars
	NeuronVarsAll []string

	// SynVarsAll is the pcore collection of all synapse-level vars (includes TraceSynVars)
	SynVarsAll []string
)
var (
	STNNeuronVars    = []string{"SKCai", "SKCaM", "Gsk"}
	STNNeuronVarsMap map[string]int
)

var KiT_DaReceptors = kit.Enums.AddEnum(DaReceptorsN, kit.NotBitFlag, nil)

var KiT_GPLayer = kit.Types.AddType(&GPLayer{}, LayerProps)

var KiT_GPLays = kit.Enums.AddEnum(GPLaysN, kit.NotBitFlag, nil)

var KiT_GPiLayer = kit.Types.AddType(&GPiLayer{}, LayerProps)

var KiT_LayerType = kit.Enums.AddEnumExt(pvlv.KiT_LayerType, LayerTypeN, kit.NotBitFlag, nil)

var KiT_MatrixLayer = kit.Types.AddType(&MatrixLayer{}, LayerProps)

var KiT_MatrixPrjn = kit.Types.AddType(&MatrixPrjn{}, axon.PrjnProps)

var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)

var KiT_PTLayer = kit.Types.AddType(&PTLayer{}, LayerProps)

var KiT_STNLayer = kit.Types.AddType(&STNLayer{}, LayerProps)

var KiT_ThalLayer = kit.Types.AddType(&ThalLayer{}, LayerProps)

var LayerProps = ki.Props{
	"EnumType:Typ": KiT_LayerType,
	"ToolBar": ki.PropSlice{
		{"Defaults", ki.Props{
			"icon": "reset",
			"desc": "return all parameters to their intial default values",
		}},
		{"InitWts", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's weight values according to prjn parameters, for all *sending* projections out of this layer",
		}},
		{"InitActs", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's activation values",
		}},
		{"sep-act", ki.BlankProp{}},
		{"LesionNeurons", ki.Props{
			"icon": "close",
			"desc": "Lesion (set the Off flag) for given proportion of neurons in the layer (number must be 0 -- 1, NOT percent!)",
			"Args": ki.PropSlice{
				{"Proportion", ki.Props{
					"desc": "proportion (0 -- 1) of neurons to lesion",
				}},
			},
		}},
		{"UnLesionNeurons", ki.Props{
			"icon": "reset",
			"desc": "Un-Lesion (reset the Off flag) for all neurons in the layer",
		}},
	},
}

LayerProps are required to get the extended EnumType

var NetworkProps = axon.NetworkProps

var TraceSynVars = []string{"DTr"}

Functions

func AddBG

func AddBG(nt *axon.Network, prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, gpeOut, gpeIn, gpeTA, stnp, stns, gpi axon.AxonLayer)

AddBG adds MtxGo, MtxNo, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi layers, with given optional prefix. Only the Matrix has a pool-based 4D shape by default -- use pools for "role"-like elements where matches need to be detected. All GP / STN layers have gpNeur neurons. Appropriate connections are made between layers, using standard styles. space is the spacing between layers (2 typical). A CIN, or the more widely used rl.RSalienceLayer, should be added and project ACh to the MtxGo / No layers.
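
A hypothetical usage sketch (the layer sizes and spacing here are illustrative, and the import paths assume the emer/axon repository layout):

package main

import (
	"github.com/emer/axon/axon"
	"github.com/emer/axon/pcore"
)

func main() {
	net := &axon.Network{}
	net.InitName(net, "PCoreNet")

	// 1x1 pools, 4x4 Matrix neurons per pool, 1x1 GP / STN units, spacing 2
	mtxGo, mtxNo, _, _, _, _, _, _ := pcore.AddBG(net, "", 1, 1, 4, 4, 1, 1, 2)

	// CIN / RSalience layer sends ACh to the Matrix layers
	_ = pcore.AddCINLayer(net, "CIN", mtxGo.Name(), mtxNo.Name(), 2)
}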

func AddBG4D added in v1.5.10

func AddBG4D(nt *axon.Network, prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, gpeOut, gpeIn, gpeTA, stnp, stns, gpi axon.AxonLayer)

AddBG4D adds MtxGo, MtxNo, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi layers, with given optional prefix. This version makes 4D pools throughout the GP layers, with Pools representing separable gating domains. All GP / STN layers have gpNeur neurons. Appropriate PoolOneToOne connections are made between layers, using standard styles. space is the spacing between layers (2 typical). A CIN, or the more widely used rl.RSalienceLayer, should be added and project ACh to the MtxGo / No layers.

func AddCINLayer

func AddCINLayer(nt *axon.Network, name, mtxGo, mtxNo string, space float32) *rl.RSalienceLayer

AddCINLayer adds a rl.RSalienceLayer unsigned reward salience coding ACh layer, which sends ACh to the given Matrix Go and No layers (names), and is by default located to the right of the MtxNo layer with the given spacing. The CIN is a cholinergic interneuron interspersed in the striatum that shows these response properties and modulates learning in the striatum around US and CS events. If other ACh modulation is needed, a global RSalienceLayer can be used.

func AddPTThalForSuper added in v1.5.10

func AddPTThalForSuper(nt *axon.Network, super, ct emer.Layer, suffix string, superToPT, ptSelf, ctToThal prjn.Pattern, space float32) (pt, thal emer.Layer)

AddPTThalForSuper adds a PT pyramidal tract layer and a Thalamus layer for given superficial layer (deep.SuperLayer) and associated CT with given suffix (e.g., MD, VM). PT and Thal have SetClass(super.Name()) called to allow shared params. Projections are made with given classes: SuperToPT, PTSelfMaint, CTtoThal. The PT and Thal layers are positioned behind the CT layer.
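
A hypothetical call sketch, assuming super and ct layers built with the standard emer APIs (parameter values are illustrative):

package main

import (
	"github.com/emer/axon/axon"
	"github.com/emer/axon/pcore"
	"github.com/emer/emergent/emer"
	"github.com/emer/emergent/prjn"
)

func main() {
	net := &axon.Network{}
	net.InitName(net, "PFC")

	super := net.AddLayer2D("PFC", 4, 4, emer.Hidden) // superficial layer
	ct := net.AddLayer2D("PFCCT", 4, 4, emer.Hidden)  // corresponding CT layer

	// one-to-one super -> PT and PT self-maintenance; full CT -> Thal
	pt, thal := pcore.AddPTThalForSuper(net, super, ct, "MD",
		prjn.NewOneToOne(), prjn.NewOneToOne(), prjn.NewFull(), 2)
	_, _ = pt, thal // returned layers can be further configured / positioned
}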

func BoolToFloat32 added in v1.5.10

func BoolToFloat32(b bool) float32

BoolToFloat32 converts a bool to a float32: 1 for true, 0 for false. TODO: replace with ki/bools.ToFloat32. The lack of ternary conditional expressions is the *only* Go decision I disagree with.

func ConnectPTSelf added in v1.5.10

func ConnectPTSelf(nt *axon.Network, ly emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectPTSelf adds a Self (Lateral) projection within a PT layer, which supports active maintenance, with a class of PTSelfMaint

func ConnectToMatrix

func ConnectToMatrix(nt *axon.Network, send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectToMatrix adds a MatrixPrjn from given sending layer to a matrix layer

func STNNeuronVarIdxByName

func STNNeuronVarIdxByName(varNm string) (int, error)

STNNeuronVarIdxByName returns the index of the variable in the STNNeuron, or error

Types

type CaParams

type CaParams struct {
	SKCa      chans.SKCaParams `view:"inline" desc:"small-conductance calcium-activated potassium channel"`
	CaD       bool             `desc:"use CaD timescale (delayed) calcium signal -- for STNs -- else use CaP (faster) for STNp"`
	CaScale   float32          `desc:"scaling factor applied to input Ca to bring into proper range of these dynamics"`
	ThetaInit bool             `desc:"initialize Ca, KCa values at start of every ThetaCycle (i.e., behavioral trial)"`
}

CaParams control the calcium dynamics in STN neurons. The SKCa small-conductance calcium-gated potassium channel produces the pausing function as a consequence of rapid bursting.

func (*CaParams) Defaults

func (kc *CaParams) Defaults()

func (*CaParams) Update added in v1.5.1

func (kc *CaParams) Update()

type DaModParams

type DaModParams struct {
	On      bool    `desc:"whether to use dopamine modulation"`
	ModGain bool    `viewif:"On" desc:"modulate gain instead of Ge excitatory synaptic input"`
	Minus   float32 `` /* 145-byte string literal not displayed */
	Plus    float32 `` /* 144-byte string literal not displayed */
	NegGain float32 `` /* 208-byte string literal not displayed */
	PosGain float32 `` /* 208-byte string literal not displayed */
}

Params for effects of dopamine (Da) based modulation, typically adding a Da-based term to the Ge excitatory synaptic input. Plus-phase = learning effects relative to minus-phase "performance" dopamine effects

func (*DaModParams) Defaults

func (dm *DaModParams) Defaults()

func (*DaModParams) Gain

func (dm *DaModParams) Gain(da, gain float32, plusPhase bool) float32

Gain returns da-modulated gain value

func (*DaModParams) GainModOn

func (dm *DaModParams) GainModOn() bool

GainModOn returns true if modulating Gain

func (*DaModParams) Ge

func (dm *DaModParams) Ge(da, ge float32, plusPhase bool) float32

Ge returns da-modulated ge value

func (*DaModParams) GeModOn

func (dm *DaModParams) GeModOn() bool

GeModOn returns true if modulating Ge

type DaReceptors

type DaReceptors int

DaReceptors for D1R and D2R dopamine receptors

const (
	// D1R primarily expresses Dopamine D1 Receptors -- dopamine is excitatory and bursts of dopamine lead to increases in synaptic weight, while dips lead to decreases -- direct pathway in dorsal striatum
	D1R DaReceptors = iota

	// D2R primarily expresses Dopamine D2 Receptors -- dopamine is inhibitory and bursts of dopamine lead to decreases in synaptic weight, while dips lead to increases -- indirect pathway in dorsal striatum
	D2R

	DaReceptorsN
)
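
As a sketch of the sign convention these comments describe (a hypothetical helper, not the package's DAActLrn method):

// lrnDAFmDaR returns the effective learning dopamine value for a layer
// with the given receptor type: D2R reverses the sign, so dips in dopamine
// (negative da) increase weights and bursts decrease them.
func lrnDAFmDaR(da float32, dar DaReceptors) float32 {
	if dar == D2R {
		return -da
	}
	return da
}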

func (*DaReceptors) FromString

func (i *DaReceptors) FromString(s string) error

func (DaReceptors) MarshalJSON

func (ev DaReceptors) MarshalJSON() ([]byte, error)

func (DaReceptors) String

func (i DaReceptors) String() string

func (*DaReceptors) UnmarshalJSON

func (ev *DaReceptors) UnmarshalJSON(b []byte) error

type GPLayer

type GPLayer struct {
	rl.Layer
	GPLay GPLays `desc:"type of GP layer"`
}

GPLayer represents a globus pallidus layer, including: GPeOut, GPeIn, GPeTA (arkypallidal), and GPi (see GPLay for type). Typically just a single unit per Pool representing a given stripe.

func AddGPeLayer2D added in v1.5.10

func AddGPeLayer2D(nt *axon.Network, name string, nNeurY, nNeurX int) *GPLayer

AddGPeLayer2D adds a GPLayer of given size, with given name.

func AddGPeLayer4D added in v1.5.10

func AddGPeLayer4D(nt *axon.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *GPLayer

AddGPeLayer4D adds a GPLayer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.

func (*GPLayer) Defaults

func (ly *GPLayer) Defaults()

type GPLays

type GPLays int

GPLays for GPLayer type

const (
	// GPeOut is Outer layer of GPe neurons, receiving inhibition from MtxGo
	GPeOut GPLays = iota

	// GPeIn is Inner layer of GPe neurons, receiving inhibition from GPeOut and MtxNo
	GPeIn

	// GPeTA is arkypallidal layer of GPe neurons, receiving inhibition from GPeIn
	// and projecting inhibition to Mtx
	GPeTA

	// GPi is the inner globus pallidus, functionally equivalent to SNr,
	// receiving from MtxGo and GPeIn, and sending inhibition to VThal
	GPi

	GPLaysN
)

func (*GPLays) FromString

func (i *GPLays) FromString(s string) error

func (GPLays) MarshalJSON

func (ev GPLays) MarshalJSON() ([]byte, error)

func (GPLays) String

func (i GPLays) String() string

func (*GPLays) UnmarshalJSON

func (ev *GPLays) UnmarshalJSON(b []byte) error

type GPiLayer

type GPiLayer struct {
	GPLayer
}

GPiLayer represents the GPi / SNr output nucleus of the BG. It gets inhibited by the MtxGo and GPeIn layers, and its minimum activation during this inhibition is recorded in ActLrn, for learning. Typically just a single unit per Pool representing a given stripe.

func AddGPiLayer2D added in v1.5.10

func AddGPiLayer2D(nt *axon.Network, name string, nNeurY, nNeurX int) *GPiLayer

AddGPiLayer2D adds a GPiLayer of given size, with given name.

func AddGPiLayer4D added in v1.5.10

func AddGPiLayer4D(nt *axon.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *GPiLayer

AddGPiLayer4D adds a GPiLayer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.

func (*GPiLayer) Defaults

func (ly *GPiLayer) Defaults()

type LayerType added in v1.5.10

type LayerType pvlv.LayerType

LayerType has the PCore extensions to the emer.LayerType types, for the GUI

const (
	Matrix_ LayerType = LayerType(pvlv.LayerTypeN) + iota
	STN_
	GP_
	Thal_
	PT_
	LayerTypeN
)

gui versions

func StringToLayerType added in v1.5.10

func StringToLayerType(s string) (LayerType, error)

func (LayerType) String added in v1.5.10

func (i LayerType) String() string

type MatrixLayer

type MatrixLayer struct {
	rl.Layer
	DaR      DaReceptors   `desc:"dominant type of dopamine receptor -- D1R for Go pathway, D2R for NoGo"`
	Matrix   MatrixParams  `view:"inline" desc:"matrix parameters"`
	MtxThals emer.LayNames `` /* 134-byte string literal not displayed */
	USLayers emer.LayNames `` /* 345-byte string literal not displayed */
	HasMod   bool          `inactive:"+" desc:"has modulatory projections, flagged with Modulator setting -- automatically detected"`
	Gated    []bool        `` /* 160-byte string literal not displayed */
	DALrn    float32       `inactive:"+" desc:"effective learning dopamine value for this layer: reflects DaR and Gains"`
	ACh      float32       `` /* 479-byte string literal not displayed */
	USActive bool          `` /* 126-byte string literal not displayed */
	Mods     []float32     `desc:"modulatory values from Modulator projection(s) for each neuron"`
}

MatrixLayer represents the matrisome medium spiny neurons (MSNs) that are the main Go / NoGo gating units in BG. D1R = Go, D2R = NoGo. The Gated value for each pool is updated in the PlusPhase and can be called separately too.

func AddMatrixLayer

func AddMatrixLayer(nt *axon.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, da DaReceptors) *MatrixLayer

AddMatrixLayer adds a MatrixLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. da gives the DaReceptor type (D1R = Go, D2R = NoGo)

func (*MatrixLayer) AnyGated added in v1.5.10

func (ly *MatrixLayer) AnyGated() bool

AnyGated returns true if any of the pools gated

func (*MatrixLayer) Build added in v1.5.10

func (ly *MatrixLayer) Build() error

func (*MatrixLayer) Class added in v1.5.10

func (ly *MatrixLayer) Class() string

func (*MatrixLayer) DAActLrn

func (ly *MatrixLayer) DAActLrn()

DAActLrn sets effective learning dopamine value from given raw DA value, applying Burst and Dip Gain factors, and then reversing sign for D2R and also for InvertNoGate -- must have done GatedFmThal before this.

func (*MatrixLayer) DecayState added in v1.5.12

func (ly *MatrixLayer) DecayState(decay, glong float32)

func (*MatrixLayer) Defaults

func (ly *MatrixLayer) Defaults()

func (*MatrixLayer) GInteg added in v1.6.0

func (ly *MatrixLayer) GInteg(ni int, nrn *axon.Neuron, ctime *axon.Time)

func (*MatrixLayer) GatedFmAvgSpk added in v1.5.10

func (ly *MatrixLayer) GatedFmAvgSpk()

GatedFmAvgSpk updates Gated based on Avg SpkMax activity in Go Matrix and ThalLayers listed in MtxThals

func (*MatrixLayer) GetACh

func (ly *MatrixLayer) GetACh() float32

func (*MatrixLayer) GiFmACh added in v1.5.12

func (ly *MatrixLayer) GiFmACh(ctime *axon.Time)

GiFmACh sets inhibitory conductance from ACh value, where ACh is 0 at baseline and goes up to 1 at US or CS -- effect is disinhibitory on MSNs

func (*MatrixLayer) GiFmSpikes added in v1.6.0

func (ly *MatrixLayer) GiFmSpikes(ctime *axon.Time)

func (*MatrixLayer) InitActs

func (ly *MatrixLayer) InitActs()

func (*MatrixLayer) InitMods added in v1.5.12

func (ly *MatrixLayer) InitMods()

InitMods initializes the Mods modulator values

func (*MatrixLayer) PlusPhase added in v1.5.10

func (ly *MatrixLayer) PlusPhase(ctime *axon.Time)

PlusPhase does updating at the end of the plus phase; calls DAActLrn.

func (*MatrixLayer) SetACh

func (ly *MatrixLayer) SetACh(ach float32)

func (*MatrixLayer) SpikeFmG added in v1.6.4

func (ly *MatrixLayer) SpikeFmG(ni int, nrn *axon.Neuron, ctime *axon.Time)

func (*MatrixLayer) USActiveFmUS added in v1.5.12

func (ly *MatrixLayer) USActiveFmUS(ctime *axon.Time)

USActiveFmUS updates the USActive flag based on USLayers state

func (*MatrixLayer) UnitVal1D

func (ly *MatrixLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*MatrixLayer) UnitVarIdx

func (ly *MatrixLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*MatrixLayer) UnitVarNum added in v1.5.1

func (ly *MatrixLayer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

type MatrixParams

type MatrixParams struct {
	GPHasPools   bool    `` /* 189-byte string literal not displayed */
	InvertNoGate bool    `` /* 163-byte string literal not displayed */
	GateThr      float32 `desc:"threshold on layer Avg SpkMax for Matrix Go and Thal layers to count as having gated"`
	BurstGain    float32 `` /* 237-byte string literal not displayed */
	DipGain      float32 `` /* 237-byte string literal not displayed */
	ModGain      float32 `desc:"gain factor multiplying the modulator input GeSyn conductances -- total modulation has a maximum of 1"`
	AChInhib     float32 `desc:"strength of extra Gi current multiplied by MaxACh-ACh (ACh > Max = 0) -- ACh is disinhibitory on striatal firing"`
	MaxACh       float32 `desc:"level of ACh at or above which AChInhib goes to 0 -- ACh typically ranges between 0-1"`
}

MatrixParams has parameters for Dorsal Striatum Matrix computation. These are the main Go / NoGo gating units in BG.

func (*MatrixParams) Defaults

func (mp *MatrixParams) Defaults()

func (*MatrixParams) GiFmACh added in v1.5.12

func (mp *MatrixParams) GiFmACh(ach float32) float32

GiFmACh returns inhibitory conductance from ach value, where ACh is 0 at baseline and goes up to 1 at US or CS -- effect is disinhibitory on MSNs
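
Based on the AChInhib and MaxACh parameter descriptions above, a plausible reading of this computation, as a sketch rather than the actual source:

// giFmACh returns the extra inhibitory conductance AChInhib * (MaxACh - ACh),
// clamped to 0 when ACh >= MaxACh, so high ACh (at US / CS) disinhibits MSNs.
func giFmACh(achInhib, maxACh, ach float32) float32 {
	if ach >= maxACh {
		return 0
	}
	return achInhib * (maxACh - ach)
}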

type MatrixPrjn

type MatrixPrjn struct {
	axon.Prjn
	Trace  MatrixTraceParams `view:"inline" desc:"special parameters for matrix trace learning"`
	TrSyns []TraceSyn        `desc:"trace synaptic state values, ordered by the sending layer units which owns them -- one-to-one with SendConIdx array"`
}

MatrixPrjn does dopamine-modulated, gated trace learning, for Matrix learning in the PBWM context

func (*MatrixPrjn) Build

func (pj *MatrixPrjn) Build() error

func (*MatrixPrjn) ClearTrace

func (pj *MatrixPrjn) ClearTrace()

func (*MatrixPrjn) DWt

func (pj *MatrixPrjn) DWt(ctime *axon.Time)

func (*MatrixPrjn) DWtNoUS added in v1.5.12

func (pj *MatrixPrjn) DWtNoUS(ctime *axon.Time)

DWtNoUS computes the weight change (learning) on sending projections, for the non-USActive special case.

func (*MatrixPrjn) DWtUS added in v1.5.12

func (pj *MatrixPrjn) DWtUS(ctime *axon.Time)

DWtUS computes the weight change (learning) on sending projections, for the case where the USActive flag is available to condition learning on the US.

func (*MatrixPrjn) Defaults

func (pj *MatrixPrjn) Defaults()

func (*MatrixPrjn) InitWts

func (pj *MatrixPrjn) InitWts()

func (*MatrixPrjn) RecvSynCa added in v1.6.12

func (pj *MatrixPrjn) RecvSynCa(ctime *axon.Time)

func (*MatrixPrjn) SendSynCa added in v1.6.12

func (pj *MatrixPrjn) SendSynCa(ctime *axon.Time)

func (*MatrixPrjn) SlowAdapt added in v1.6.12

func (pj *MatrixPrjn) SlowAdapt(ctime *axon.Time)

func (*MatrixPrjn) SynVal1D

func (pj *MatrixPrjn) SynVal1D(varIdx int, synIdx int) float32

SynVal1D returns value of given variable index (from SynVarIdx) on given SynIdx. Returns NaN on invalid index. This is the core synapse var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*MatrixPrjn) SynVarIdx

func (pj *MatrixPrjn) SynVarIdx(varNm string) (int, error)

SynVarIdx returns the index of given variable within the synapse, according to *this prjn's* SynVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*MatrixPrjn) SynVarNames added in v1.5.1

func (pj *MatrixPrjn) SynVarNames() []string

func (*MatrixPrjn) SynVarNum

func (pj *MatrixPrjn) SynVarNum() int

SynVarNum returns the number of synapse-level variables for this prjn. This is needed for extending indexes in derived types.

type MatrixTraceParams

type MatrixTraceParams struct {
	CurTrlDA  bool    `` /* 277-byte string literal not displayed */
	Decay     float32 `` /* 168-byte string literal not displayed */
	NoACh     bool    `desc:"ignore ACh for learning modulation -- only used for reset if so -- otherwise ACh directly multiplies dWt"`
	Modulator bool    `` /* 136-byte string literal not displayed */
}

MatrixTraceParams are for trace-based learning in the MatrixPrjn. A trace of synaptic co-activity is formed, and then modulated by dopamine whenever it occurs. This bridges the temporal gap between gating activity and subsequent activity, and is based biologically on synaptic tags. The trace is reset at the time of reward, based on the ACh level from the CINs.

func (*MatrixTraceParams) Defaults

func (tp *MatrixTraceParams) Defaults()

type Network

type Network struct {
	rl.Network
}

pcore.Network has methods for configuring specialized PCore network components. PCore = Pallidal Core model of BG.

func (*Network) AddBG

func (nt *Network) AddBG(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, gpeOut, gpeIn, gpeTA, stnp, stns, gpi axon.AxonLayer)

AddBG adds MtxGo, MtxNo, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi layers, with given optional prefix. Only the Matrix has a pool-based 4D shape by default -- use pools for "role"-like elements where matches need to be detected. All GP / STN layers have gpNeur neurons. Appropriate connections are made between layers, using standard styles. space is the spacing between layers (2 typical). A CIN, or the more widely used rl.RSalienceLayer, should be added and project ACh to the MtxGo / No layers.

func (*Network) AddBG4D added in v1.5.10

func (nt *Network) AddBG4D(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, gpeOut, gpeIn, gpeTA, stnp, stns, gpi axon.AxonLayer)

AddBG4D adds MtxGo, MtxNo, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi layers, with given optional prefix. This version makes 4D pools throughout the GP layers, with Pools representing separable gating domains. All GP / STN layers have gpNeur neurons. Appropriate PoolOneToOne connections are made between layers, using standard styles. space is the spacing between layers (2 typical). A CIN, or the more widely used rl.RSalienceLayer, should be added and project ACh to the MtxGo / No layers.

func (*Network) AddCINLayer added in v1.5.12

func (nt *Network) AddCINLayer(name, mtxGo, mtxNo string, space float32) *rl.RSalienceLayer

AddCINLayer adds a rl.RSalienceLayer unsigned reward salience coding ACh layer, which sends ACh to the given Matrix Go and No layers (names), and is by default located to the right of the MtxNo layer with the given spacing. The CIN is a cholinergic interneuron interspersed in the striatum that shows these response properties and modulates learning in the striatum around US and CS events. If other ACh modulation is needed, a global RSalienceLayer can be used.

func (*Network) AddPTThalForSuper added in v1.5.10

func (nt *Network) AddPTThalForSuper(super, ct emer.Layer, suffix string, superToPT, ptSelf, ctToThal prjn.Pattern, space float32) (pt, thal emer.Layer)

AddPTThalForSuper adds a PT pyramidal tract layer and a Thalamus layer for given superficial layer (SuperLayer) with given suffix (e.g., MD, VM). Projections are made with given classes: SuperToPT, PTSelfMaint, CTtoThal. The PT and Thal layers are positioned behind the CT layer.

func (*Network) AddThalLayer2D added in v1.5.10

func (nt *Network) AddThalLayer2D(name string, nNeurY, nNeurX int) *ThalLayer

AddThalLayer2D adds a BG gated thalamus (e.g., VA/VL/VM, MD) Layer of given size, with given name. This version has a 2D structure

func (*Network) AddThalLayer4D added in v1.5.10

func (nt *Network) AddThalLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *ThalLayer

AddThalLayer4D adds a BG gated thalamus (e.g., VA/VL/VM, MD) Layer of given size, with given name. This version has a 4D structure, with Pools representing separable gating domains.

func (*Network) ConnectToMatrix

func (nt *Network) ConnectToMatrix(send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectToMatrix adds a MatrixPrjn from given sending layer to a matrix layer

func (*Network) SynVarNames

func (nt *Network) SynVarNames() []string

SynVarNames returns the names of all the variables on the synapses in this network.

func (*Network) UnitVarNames

func (nt *Network) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this layer

type PTLayer added in v1.5.10

type PTLayer struct {
	rl.Layer             // access as .Layer
	ThalNMDAGain float32 `` /* 146-byte string literal not displayed */
}

PTLayer implements the pyramidal tract layer 5 intrinsic bursting deep neurons.

func AddPTLayer2D added in v1.5.10

func AddPTLayer2D(nt *axon.Network, name string, nNeurY, nNeurX int) *PTLayer

AddPTLayer2D adds a PTLayer of given size, with given name.

func AddPTLayer4D added in v1.5.10

func AddPTLayer4D(nt *axon.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *PTLayer

AddPTLayer4D adds a PTLayer of given size, with given name.

func (*PTLayer) Class added in v1.5.10

func (ly *PTLayer) Class() string

func (*PTLayer) Defaults added in v1.5.10

func (ly *PTLayer) Defaults()

func (*PTLayer) GFmRawSyn added in v1.6.0

func (ly *PTLayer) GFmRawSyn(ni int, nrn *axon.Neuron, ctime *axon.Time, thalGeRaw, thalGeSyn float32)

func (*PTLayer) GFmSpikeRaw added in v1.6.0

func (ly *PTLayer) GFmSpikeRaw(ni int, nrn *axon.Neuron, ctime *axon.Time) (thalGeRaw, thalGeSyn float32)

func (*PTLayer) GInteg added in v1.6.0

func (ly *PTLayer) GInteg(ni int, nrn *axon.Neuron, ctime *axon.Time)

func (*PTLayer) UpdateParams added in v1.5.10

func (ly *PTLayer) UpdateParams()

type STNLayer

type STNLayer struct {
	rl.Layer
	Ca       CaParams    `` /* 186-byte string literal not displayed */
	STNNeurs []STNNeuron `` /* 149-byte string literal not displayed */
}

STNLayer represents STN neurons, with two subtypes: STNp are more strongly driven and get over the bursting threshold, driving strong, rapid activation of the KCa channels and causing a long pause in firing, which creates a window during which the GPe dynamics resolve the Go vs. No balance. STNs are more weakly driven and thus more slowly activate KCa, resulting in a longer period of activation, during which the GPi is prevented from gating prematurely based only on MtxGo inhibition -- gating only occurs once the GPeIn signal has had a chance to integrate its MtxNo inputs.

func AddSTNLayer2D added in v1.5.10

func AddSTNLayer2D(nt *axon.Network, name string, nNeurY, nNeurX int) *STNLayer

AddSTNLayer2D adds a subthalamic nucleus Layer of given size, with given name.

func AddSTNLayer4D added in v1.5.10

func AddSTNLayer4D(nt *axon.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *STNLayer

AddSTNLayer4D adds a subthalamic nucleus Layer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.

func (*STNLayer) Build

func (ly *STNLayer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*STNLayer) Class added in v1.5.10

func (ly *STNLayer) Class() string

func (*STNLayer) Defaults

func (ly *STNLayer) Defaults()

func (*STNLayer) GInteg added in v1.6.0

func (ly *STNLayer) GInteg(ni int, nrn *axon.Neuron, ctime *axon.Time)

func (*STNLayer) InitActs

func (ly *STNLayer) InitActs()

func (*STNLayer) NewState added in v1.5.1

func (ly *STNLayer) NewState()

func (*STNLayer) UnitVal1D

func (ly *STNLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*STNLayer) UnitVarIdx

func (ly *STNLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*STNLayer) UnitVarNum

func (ly *STNLayer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

func (*STNLayer) UpdateParams added in v1.5.1

func (ly *STNLayer) UpdateParams()

type STNNeuron

type STNNeuron struct {
	SKCai float32 `` /* 158-byte string literal not displayed */
	SKCaM float32 `desc:"Calcium-gated potassium channel gating factor, driven by SKCai via a Hill equation as in chans.SKCaParams."`
	Gsk   float32 `desc:"Calcium-gated potassium channel conductance as a function of Gbar * SKCaM."`
}

STNNeuron holds the extra neuron (unit) level variables for STN computation.

func (*STNNeuron) VarByIndex

func (nrn *STNNeuron) VarByIndex(idx int) float32

VarByIndex returns variable using index (0 = first variable in STNNeuronVars list)

func (*STNNeuron) VarByName

func (nrn *STNNeuron) VarByName(varNm string) (float32, error)

VarByName returns variable by name, or error

func (*STNNeuron) VarNames

func (nrn *STNNeuron) VarNames() []string

type ThalLayer added in v1.5.10

type ThalLayer struct {
	rl.Layer
	Gated []bool `` /* 172-byte string literal not displayed */
}

ThalLayer represents a BG gated thalamic layer, e.g., the Ventral thalamus: VA / VM / VL or MD mediodorsal thalamus, which receives BG gating in the form of an inhibitory projection from GPi.

func AddThalLayer2D added in v1.5.10

func AddThalLayer2D(nt *axon.Network, name string, nNeurY, nNeurX int) *ThalLayer

AddThalLayer2D adds a BG gated thalamus (e.g., VA/VL/VM, MD) Layer of given size, with given name. This version has a 2D structure

func AddThalLayer4D added in v1.5.10

func AddThalLayer4D(nt *axon.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *ThalLayer

AddThalLayer4D adds a BG gated thalamus (e.g., VA/VL/VM, MD) Layer of given size, with given name. This version has a 4D structure, with Pools representing separable gating domains.

func (*ThalLayer) AnyGated added in v1.5.10

func (ly *ThalLayer) AnyGated() bool

AnyGated returns true if any of the pools gated

func (*ThalLayer) Build added in v1.5.10

func (ly *ThalLayer) Build() error

func (*ThalLayer) Class added in v1.5.10

func (ly *ThalLayer) Class() string

func (*ThalLayer) DecayState added in v1.5.12

func (ly *ThalLayer) DecayState(decay, glong float32)

func (*ThalLayer) Defaults added in v1.5.10

func (ly *ThalLayer) Defaults()

func (*ThalLayer) GatedFmAvgSpk added in v1.5.10

func (ly *ThalLayer) GatedFmAvgSpk(thr float32) bool

GatedFmAvgSpk updates the Gated values based on Avg SpkMax, using the given threshold. Called by the Go Matrix layer. Returns true if any gated.

func (*ThalLayer) UnitVal1D added in v1.5.10

func (ly *ThalLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*ThalLayer) UnitVarIdx added in v1.5.10

func (ly *ThalLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*ThalLayer) UnitVarNum added in v1.5.10

func (ly *ThalLayer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

type TraceSyn

type TraceSyn struct {
	DTr float32 `desc:"delta trace = send * recv -- increments to Tr when a gating event happens."`
}

TraceSyn holds extra synaptic state for trace projections

func (*TraceSyn) VarByIndex

func (sy *TraceSyn) VarByIndex(varIdx int) float32

VarByIndex returns synapse variable by index

func (*TraceSyn) VarByName

func (sy *TraceSyn) VarByName(varNm string) float32

VarByName returns synapse variable by name
