glong

package v1.1.31
Published: Mar 19, 2021 License: BSD-3-Clause Imports: 10 Imported by: 0

README

Glong: longer time-scale conductances from GABA-B / GIRK and NMDA receptors

This package implements two complementary conductances, GABA-B / GIRK and NMDA, that have relatively long time constants (on the order of 100-200 msec) and thus drive longer time-scale dynamics in neurons, beyond the basic AMPA and GABA-A channels with their short ~10 msec time constants. In addition to their longer time constants, these channels have interesting mirror-image rectification properties (voltage dependencies), inward and outward, which are a critical part of their function (Sanders et al, 2013).

GABA-B / GIRK

GABA-B is an inhibitory channel activated by the usual GABA inhibitory neurotransmitter, coupled to GIRK, the G-protein-coupled inwardly rectifying potassium (K) channel. It is ubiquitous in the brain, and is likely essential for basic neural function (especially in spiking networks from a computational perspective). The inward rectification is caused by a Mg2+ ion block from the inside of the neuron, which means that these channels are most open when the neuron is hyperpolarized (inactive), and thus they serve to keep inactive neurons inactive.

In standard Leabra rate-code neurons using FFFB inhibition, the continuous nature of the GABA-A type inhibition already serves this function, so these GABA-B channels have not been as important. Whenever a discrete spiking function is used along with FFFB inhibition or direct interneuron inhibition, however, there is a strong tendency for every neuron to fire at some point, in a rolling fashion, because neurons that are initially inhibited during the first round of firing can simply pop back up once that initial wave of associated GABA-A inhibition passes. This is especially problematic for untrained networks, where excitatory connections are not well differentiated and neurons receive very similar levels of excitatory input. In this case, learning has no way to further differentiate the neurons, and does not work effectively.

NMDA

NMDA is predominantly important for learning in most parts of the brain, but in frontal areas especially, a particular type of NMDA receptor drives sustained active maintenance by producing sustained excitatory voltage-gated conductances (Goldman-Rakic, 1995; Lisman et al, 1998; Brunel & Wang, 2001). The well-known external Mg2+ ion block of NMDA channels, which is essential for their Hebbian character in learning, also has the beneficial effect of keeping active neurons active, by causing an outward rectification that is the mirror image of the GIRK inward rectification, with the complementary functional effect. The Brunel & Wang (2001) model is widely used in many other active maintenance / working memory models.

Sanders et al, 2013 emphasize the "perfect couple" nature of the complementary bistability of GABA-B and NMDA, and show that it produces a more robust active maintenance dynamic in frontal cortical circuits.

We are using both NMDA and GABA-B to support robust active maintenance but also the ability to clear out previously-maintained representations, as a result of the GABA-B sustained inhibition.

Implementation

Spiking effects on Vm: Backpropagating Action Potentials

Spikes in the soma of a neuron also drive backpropagating action potentials into the dendrites, which raise the overall membrane potential there above what is produced by the raw synaptic inputs. In our rate-code models, we simulate these effects by adding a proportion of the rate-code activation value to the Vm value. Note also that this Vm value is an equilibrium Vm that is not reset back to rest after each spike, so it is itself perhaps a bit elevated as a result.
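This is what the NMDAParams.VmEff method (documented below) computes; a minimal sketch, assuming ActVm is simply the proportion of the activation added to Vm:

	// effective Vm including simulated backpropagating action potential
	// contribution from the rate-code activation (sketch; the exact form of
	// NMDAParams.VmEff is an assumption based on its field docs)
	vmEff := vm + np.ActVm*act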

GABA-B

The GABA-B / GIRK dynamics are based on a combination of Sanders et al, 2013 and Thomson & Destexhe, 1999 (which was a major source for Sanders et al, 2013).

Thomson & Destexhe found that a sigmoidal logistic function describes the maximum conductance as a function of GABA spikes, saturating after about 10 spikes, and that the conductance time course is well fit by a 4th-order function similar to the Hodgkin-Huxley equations, with a peak at about 60 msec and a total duration envelope of around 200 msec. Sanders et al implemented these dynamics directly, and describe the result as follows:

These equations give a latency of GABAB-activated KIR current of ~10 ms, time to peak of ~50 ms, and a decay time constant of ~80 ms as described by Thomson and Destexhe (1999). A slow time constant for GABA removal was chosen because much of the GABAB response is mediated by extrasynaptic receptors (Kohl and Paulsen, 2010) and because the time constant for GABA decline at extracellular sites appears to be much slower than at synaptic sites.

We take a simpler approach, using a bi-exponential function to simulate the time course, with 45 and 50 msec rise and decay time constants.
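The bi-exponential is integrated with a rising drive variable X and the conductance G; a minimal sketch of that update, assuming the form suggested by the GABABParams.BiExp documentation below (TauFact normalizes the peak, as described in its field comment):

	// bi-exponential update sketch: x is the rising drive, g the conductance
	// (exact wiring is an assumption based on the documented fields)
	dX := -x / gp.DecayTau
	dG := (gp.TauFact*x - g) / gp.RiseTau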

See the gabab_plot subdirectory for the program that generates these plots:

[Figure] V-G plot for GABA-B / GIRK from Sanders et al, 2013.

[Figure] Spike-G sigmoid plot for GABA-B / GIRK from Thomson & Destexhe (1999).

[Figure] Bi-exponential time course with 45 msec rise and 50 msec decay, fit to the results from Thomson & Destexhe (1999).

NMDA

The NMDA code uses the exponential function from Brunel & Wang (2001), which is very similar in shape to the Sanders et al, 2013 curve, to compute the voltage dependency of the current:

	vbio := v*100 - 100 // convert normalized Vm into biological units (mV)
	return 1 / (1 + 0.28*math32.Exp(-0.062*vbio)) // Mg2+ block voltage dependency

where v is the normalized Vm value from the Leabra rate code equations -- vbio converts it into biological units.

See the nmda_plot directory for a program that plots this function. The plot below shows the resulting net conductance, which also takes the driving potential into account, adding an (E - v) term in the numerator.
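For reference, a minimal sketch of that net current computation as used in such a plot (e, gbar, and gFmV here are illustrative placeholders, not package identifiers):

	// net NMDA current sketch: voltage-gated fraction times driving potential
	inet := gbar * gFmV(v) * (e - vbio)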

[Figure] V-G plot for NMDA from Brunel & Wang (2001).

Documentation

Index

Constants

const (
	// NMDAPrjn are projections that have strong NMDA channels supporting maintenance
	NMDA emer.PrjnType = emer.PrjnType(emer.PrjnTypeN) + iota
)

The GLong prjn types

Variables

var (
	// NeuronVars are extra neuron variables for glong
	NeuronVars = []string{"AlphaMax", "VmEff", "Gnmda", "NMDA", "NMDASyn", "GgabaB", "GABAB", "GABABx"}

	// NeuronVarsAll is the glong collection of all neuron-level vars
	NeuronVarsAll []string

	NeuronVarsMap map[string]int

	// NeuronVarProps are integrated neuron var props including leabra
	NeuronVarProps = map[string]string{
		"NMDA":   `auto-scale:"+"`,
		"GABAB":  `auto-scale:"+"`,
		"GABABx": `auto-scale:"+"`,
	}
)
var KiT_AlphaMaxLayer = kit.Types.AddType(&AlphaMaxLayer{}, leabra.LayerProps)
var KiT_Layer = kit.Types.AddType(&Layer{}, leabra.LayerProps)
var KiT_NMDAPrjn = kit.Types.AddType(&NMDAPrjn{}, PrjnProps)
var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
var KiT_PrjnType = kit.Enums.AddEnumExt(emer.KiT_PrjnType, PrjnTypeN, kit.NotBitFlag, nil)
var NetworkProps = leabra.NetworkProps
var PrjnProps = ki.Props{
	"EnumType:Typ": KiT_PrjnType,
}

Functions

func ConnectNMDA

func ConnectNMDA(nt *leabra.Network, send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectNMDA adds a NMDAPrjn between given layers
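A hypothetical usage sketch (net and maint are placeholder variables, and prjn.NewFull is from the emergent prjn package):

	// recurrently connect a maintenance layer to itself with an NMDA projection,
	// so its own activity drives the sustained NMDA conductance
	pj := ConnectNMDA(net, maint, maint, prjn.NewFull())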

func NeuronVarIdxByName

func NeuronVarIdxByName(varNm string) (int, error)

NeuronVarIdxByName returns the index of the variable in the Neuron, or error

Types

type AlphaMaxLayer

type AlphaMaxLayer struct {
	leabra.Layer
	AlphaMaxCyc int       `desc:"cycle upon which to start updating AlphaMax value"`
	AlphaMaxs   []float32 `desc:"per-neuron maximum activation value during alpha cycle"`
}

AlphaMaxLayer computes the maximum activation per neuron over the alpha cycle. Needed for recording activations on layers with transient dynamics over alpha.

func (*AlphaMaxLayer) ActFmG

func (ly *AlphaMaxLayer) ActFmG(ltime *leabra.Time)

func (*AlphaMaxLayer) ActLrnFmAlphaMax

func (ly *AlphaMaxLayer) ActLrnFmAlphaMax()

ActLrnFmAlphaMax sets ActLrn to AlphaMax

func (*AlphaMaxLayer) AlphaCycInit

func (ly *AlphaMaxLayer) AlphaCycInit()

func (*AlphaMaxLayer) AlphaMaxFmAct

func (ly *AlphaMaxLayer) AlphaMaxFmAct(ltime *leabra.Time)

AlphaMaxFmAct computes AlphaMax from Activation

func (*AlphaMaxLayer) Build

func (ly *AlphaMaxLayer) Build() error

func (*AlphaMaxLayer) Defaults

func (ly *AlphaMaxLayer) Defaults()

func (*AlphaMaxLayer) InitActs

func (ly *AlphaMaxLayer) InitActs()

func (*AlphaMaxLayer) InitAlphaMax

func (ly *AlphaMaxLayer) InitAlphaMax()

InitAlphaMax initializes the AlphaMax to 0

func (*AlphaMaxLayer) MaxAlphaMax

func (ly *AlphaMaxLayer) MaxAlphaMax() float32

MaxAlphaMax returns the maximum AlphaMax across the layer

func (*AlphaMaxLayer) UnitVal1D

func (ly *AlphaMaxLayer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*AlphaMaxLayer) UnitVarIdx

func (ly *AlphaMaxLayer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*AlphaMaxLayer) UnitVarNum

func (ly *AlphaMaxLayer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

type GABABParams

type GABABParams struct {
	RiseTau  float32 `def:"45" desc:"rise time for bi-exponential time dynamics of GABA-B"`
	DecayTau float32 `def:"50" desc:"decay time for bi-exponential time dynamics of GABA-B"`
	Gbar     float32 `def:"0.2" desc:"overall strength multiplier of GABA-B current"`
	Gbase    float32 `` /* 130-byte string literal not displayed */
	Smult    float32 `def:"15" desc:"multiplier for converting Gi from FFFB to GABA spikes"`
	MaxTime  float32 `inactive:"+" desc:"time offset when peak conductance occurs, in msec, computed from RiseTau and DecayTau"`
	TauFact  float32 `view:"-" desc:"time constant factor used in integration: (Decay / Rise) ^ (Rise / (Decay - Rise))"`
}

GABABParams control the GABAB dynamics in PFC Maint neurons, based on Brunel & Wang (2001) parameters. We have to do some things to make it work for rate code neurons..

func (*GABABParams) BiExp

func (gp *GABABParams) BiExp(g, x float32) (dG, dX float32)

BiExp computes bi-exponential update, returns dG and dX deltas to add to g and x

func (*GABABParams) Defaults

func (gp *GABABParams) Defaults()

func (*GABABParams) GABAB

func (gp *GABABParams) GABAB(gabaB, gabaBx, gi float32) (g, x float32)

GABAB returns the updated GABA-B / GIRK activation and underlying x value based on current values and gi inhibitory conductance (proxy for GABA spikes)
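A sketch of how this update could combine the documented pieces (GFmS for new spike-driven input, BiExp for the time course); the exact wiring is an assumption, not verified source:

	// x integrates new spike-driven input and decays; g follows x with the
	// bi-exponential rise (sketch only)
	dG, dX := gp.BiExp(gabaB, gabaBx)
	x = gabaBx + gp.GFmS(gi) + dX
	g = gabaB + dG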

func (*GABABParams) GFmS

func (gp *GABABParams) GFmS(s float32) float32

GFmS returns the GABA-B conductance as a function of GABA spiking rate, based on normalized spiking factor (i.e., Gi from FFFB etc)
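For illustration, a saturating sigmoid of the kind described by Thomson & Destexhe (1999) might look like the sketch below; Smult converts the normalized Gi value into an approximate spike count, and the midpoint/slope constants here are assumptions, not necessarily the package's values:

	// spike -> max conductance sigmoid (illustrative constants), saturating
	// after roughly 10 spikes
	ss := s * gp.Smult
	gmax := 1 / (1 + math32.Exp(-(ss-7.1)/1.4))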

func (*GABABParams) GFmV

func (gp *GABABParams) GFmV(v float32) float32

GFmV returns the GABA-B conductance as a function of normalized membrane potential

func (*GABABParams) GgabaB

func (gp *GABABParams) GgabaB(gabaB, vm float32) float32

GgabaB returns the overall net GABAB / GIRK conductance including Gbar, Gbase, and voltage-gating
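A minimal sketch of that composition, given the documented Gbar, Gbase, and GFmV (the exact form is an assumption):

	// overall GIRK conductance: strength * voltage gating * (activation + baseline)
	gk := gp.Gbar * gp.GFmV(vm) * (gabaB + gp.Gbase)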

func (*GABABParams) Update

func (gp *GABABParams) Update()

type Layer

type Layer struct {
	leabra.Layer
	NMDA    NMDAParams  `view:"inline" desc:"NMDA channel parameters plus more general params"`
	GABAB   GABABParams `view:"inline" desc:"GABA-B / GIRK channel parameters"`
	GlNeurs []Neuron    `` /* 152-byte string literal not displayed */
}

Layer has GABA-B and NMDA channels, with longer time constants, to support bistable activation dynamics including active maintenance in frontal cortex. NMDA requires NMDAPrjn on relevant projections. It also records AlphaMax = maximum activation within an AlphaCycle, which is important given the transient dynamics.

func AddGlongLayer2D

func AddGlongLayer2D(nt *leabra.Network, name string, nNeurY, nNeurX int) *Layer

AddGlongLayer2D adds a glong.Layer using 2D shape

func AddGlongLayer4D

func AddGlongLayer4D(nt *leabra.Network, name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer

AddGlongLayer4D adds a glong.Layer using 4D shape with pools
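Hypothetical usage, based directly on the signatures above (net, the layer name, and the sizes are placeholders):

	// add a 10x10 glong.Layer to an existing leabra network
	ly := AddGlongLayer2D(net, "PFCMaint", 10, 10)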

func (*Layer) ActFmG

func (ly *Layer) ActFmG(ltime *leabra.Time)

func (*Layer) ActLrnFmAlphaMax

func (ly *Layer) ActLrnFmAlphaMax()

ActLrnFmAlphaMax sets ActLrn to AlphaMax

func (*Layer) AlphaCycInit

func (ly *Layer) AlphaCycInit()

func (*Layer) AlphaMaxFmAct

func (ly *Layer) AlphaMaxFmAct(ltime *leabra.Time)

AlphaMaxFmAct computes AlphaMax from Activation

func (*Layer) Build

func (ly *Layer) Build() error

Build constructs the layer state, including calling Build on the projections.

func (*Layer) DecayState

func (ly *Layer) DecayState(decay float32)

func (*Layer) Defaults

func (ly *Layer) Defaults()

func (*Layer) GABABFmGi

func (ly *Layer) GABABFmGi(ltime *leabra.Time)

func (*Layer) GFmInc

func (ly *Layer) GFmInc(ltime *leabra.Time)

GFmInc integrates new synaptic conductances from increments sent during last SendGDelta.

func (*Layer) GFmIncNeur

func (ly *Layer) GFmIncNeur(ltime *leabra.Time)

GFmIncNeur is the neuron-level code for GFmInc that integrates overall Ge, Gi values from their G*Raw accumulators.

func (*Layer) InitActs

func (ly *Layer) InitActs()

func (*Layer) InitAlphaMax

func (ly *Layer) InitAlphaMax()

InitAlphaMax initializes the AlphaMax to 0

func (*Layer) InitGInc

func (ly *Layer) InitGInc()

func (*Layer) InitGlong

func (ly *Layer) InitGlong()

func (*Layer) MaxAlphaMax

func (ly *Layer) MaxAlphaMax() float32

MaxAlphaMax returns the maximum AlphaMax across the layer

func (*Layer) RecvGInc

func (ly *Layer) RecvGInc(ltime *leabra.Time)

RecvGInc calls RecvGInc on receiving projections to collect Neuron-level G*Inc values. This is called by GFmInc overall method, but separated out for cases that need to do something different.

func (*Layer) RecvGnmdaPInc

func (ly *Layer) RecvGnmdaPInc(ltime *leabra.Time)

RecvGnmdaPInc increments the recurrent-specific GeInc

func (*Layer) UnitVal1D

func (ly *Layer) UnitVal1D(varIdx int, idx int) float32

UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.

func (*Layer) UnitVarIdx

func (ly *Layer) UnitVarIdx(varNm string) (int, error)

UnitVarIdx returns the index of given variable within the Neuron, according to UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.

func (*Layer) UnitVarNum

func (ly *Layer) UnitVarNum() int

UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.

type NMDAParams

type NMDAParams struct {
	ActVm       float32 `` /* 199-byte string literal not displayed */
	AlphaMaxCyc int     `desc:"cycle upon which to start updating AlphaMax value"`
	Tau         float32 `def:"100" desc:"decay time constant for NMDA current -- rise time is 2 msec and not worth extra effort for biexponential"`
	Gbar        float32 `desc:"strength of NMDA current -- 0.02 is just over level sufficient to maintain in face of completely blank input"`
}

NMDAParams control the NMDA dynamics in PFC Maint neurons, based on Brunel & Wang (2001) parameters. We have to do some things to make it work for rate code neurons..

func (*NMDAParams) Defaults

func (np *NMDAParams) Defaults()

func (*NMDAParams) GFmV

func (np *NMDAParams) GFmV(v float32) float32

GFmV returns the NMDA conductance as a function of normalized membrane potential

func (*NMDAParams) Gnmda

func (np *NMDAParams) Gnmda(nmda, vm float32) float32

Gnmda returns the NMDA net conductance from nmda activation and vm
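A plausible sketch of that composition, analogous to the GABA-B case (the exact form is an assumption):

	// net NMDA conductance: strength * Mg2+ voltage gating * integrated activation
	ge := np.Gbar * np.GFmV(vm) * nmda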

func (*NMDAParams) NMDA

func (np *NMDAParams) NMDA(nmda, nmdaSyn float32) float32

NMDA returns the updated NMDA activation from current NMDA and NMDASyn input
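A sketch of a simple exponential integration consistent with the Tau field (rise treated as effectively instantaneous, per the Tau comment); the exact update is an assumption:

	// add new synaptic input; decay existing activation with time constant Tau
	nmdaNew := nmda + nmdaSyn - nmda/np.Tau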

func (*NMDAParams) VmEff

func (np *NMDAParams) VmEff(vm, act float32) float32

VmEff returns the effective Vm value including backpropagating action potentials from ActVm

type NMDAPrjn

type NMDAPrjn struct {
	leabra.Prjn // access as .Prjn
}

NMDAPrjn is a projection with NMDA maintenance channels. It marks a projection for special treatment in a MaintLayer which actually does the NMDA computations. Excitatory conductance is aggregated separately for this projection.

func (*NMDAPrjn) PrjnTypeName

func (pj *NMDAPrjn) PrjnTypeName() string

func (*NMDAPrjn) Type

func (pj *NMDAPrjn) Type() emer.PrjnType

func (*NMDAPrjn) UpdateParams

func (pj *NMDAPrjn) UpdateParams()

type Network

type Network struct {
	leabra.Network
}

glong.Network has methods for configuring specialized Glong network components.

func (*Network) ConnectNMDA

func (nt *Network) ConnectNMDA(send, recv emer.Layer, pat prjn.Pattern) emer.Prjn

ConnectNMDA adds a NMDAPrjn between given layers

func (*Network) Defaults

func (nt *Network) Defaults()

Defaults sets all the default parameters for all layers and projections

func (*Network) NewLayer

func (nt *Network) NewLayer() emer.Layer

NewLayer returns new layer of glong.Layer type -- this is default type for this network

func (*Network) UnitVarNames

func (nt *Network) UnitVarNames() []string

UnitVarNames returns a list of variable names available on the units in this layer

func (*Network) UnitVarProps

func (nt *Network) UnitVarProps() map[string]string

UnitVarProps returns properties for variables

func (*Network) UpdateParams

func (nt *Network) UpdateParams()

UpdateParams updates all the derived parameters if any have changed, for all layers and projections

type Neuron

type Neuron struct {
	AlphaMax float32 `desc:"Maximum activation over Alpha cycle period"`
	VmEff    float32 `desc:"Effective membrane potential, including simulated backpropagating action potential contribution from activity level."`
	Gnmda    float32 `desc:"net NMDA conductance, after Vm gating and Gbar -- added directly to Ge as it has the same reversal potential."`
	NMDA     float32 `desc:"NMDA channel activation -- underlying time-integrated value with decay"`
	NMDASyn  float32 `desc:"synaptic NMDA activation directly from projection(s)"`
	GgabaB   float32 `desc:"net GABA-B conductance, after Vm gating and Gbar + Gbase -- set to Gk for GIRK, with .1 reversal potential."`
	GABAB    float32 `desc:"GABA-B / GIRK activation -- time-integrated value with rise and decay time constants"`
	GABABx   float32 `desc:"GABA-B / GIRK internal drive variable -- gets the raw activation and decays"`
}

Neuron holds the extra neuron (unit) level variables for glong computation.

func (*Neuron) VarByIndex

func (nrn *Neuron) VarByIndex(idx int) float32

VarByIndex returns variable using index (0 = first variable in NeuronVars list)

func (*Neuron) VarByName

func (nrn *Neuron) VarByName(varNm string) (float32, error)

VarByName returns variable by name, or error

func (*Neuron) VarNames

func (nrn *Neuron) VarNames() []string

type PrjnType

type PrjnType emer.PrjnType

PrjnType has the GLong extensions to the emer.PrjnType types, for gui

const (
	NMDA_ PrjnType = PrjnType(emer.PrjnTypeN) + iota
	PrjnTypeN
)

gui versions

func StringToPrjnType

func StringToPrjnType(s string) (PrjnType, error)

func (PrjnType) String

func (i PrjnType) String() string

Directories

Path Synopsis
gabab_plot	eqplot plots an equation updating over time in a etable.Table and Plot2D. This is a good starting point for any plotting to explore specific equations.
nmda_plot	eqplot plots an equation updating over time in a etable.Table and Plot2D. This is a good starting point for any plotting to explore specific equations.
