pvlv

package v1.6.12
Published: Dec 6, 2022 License: BSD-3-Clause Imports: 11 Imported by: 0

README

PVLV: Primary Value, Learned Value

This is a ground-up rewrite of PVLV (Mollick et al., 2020) for axon, designed to capture the essential properties of the Go leabra version in a simpler and cleaner way. Each layer type is implemented in a more self-contained manner using the axon trace-based learning rule, which has better natural affordances for DA modulation. Thus, the layers can be used in a more mix-and-match manner (e.g., the BLA can be used to train the OFC, independent of its phasic dopamine role).

This is an incremental work-in-progress, documented as it goes along.

BLA

There are 2x2 BLA types, crossing positive vs. negative valence USs with acquisition vs. extinction:

  • BLAPosD1 = Pos / Acq
  • BLAPosD2 = Pos / Ext
  • BLANegD2 = Neg / Acq
  • BLANegD1 = Neg / Ext

The D1 / D2 distinction flips the sign of the influence of DA on the plus-phase activation of the BLA neuron (D1 = excitatory, D2 = inhibitory).

A major simplification and improvement in the axon version is that the extinction neurons receive from the OFC neurons that are activated by the corresponding acquisition neurons, which solves the "learn from disappointment" problem much more effectively: when we are OFC-expecting a given US, and we give up on that expectation and absorb the negative DA, the corresponding BLA Ext neurons get punished.

The (new) learning rule based on the axon trace code is as follows (a sketch in Go follows the list):

  • DWt = lr * Tr_prv * (1 + abs(DA)) * (CaP - CaD)
    • CaP: plus-phase Ca -- reflects DA modulation in the D1 / D2 direction
    • CaD: minus-phase Ca, prior to DA -- in the original this was t-1; axon uses longer theta cycles, and the plus phase is when DA is delivered by default
    • Tr_prv: the S * R trace from the previous time step (i.e., not yet reflecting the new Tr update -- as in CTPrjn)
    • DA also drives the learning rate, included in RLRate, which also does the small-delta filtering from the prior implementation
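Below is a minimal, self-contained Go sketch of this rule. The names here (Syn, blaTraceDWt, etc.) are illustrative, not the actual axon API; it simply restates the formula above, with the trace update deferred so it only takes effect on the next trial.

package main

import "math"

// Syn holds the per-synapse state assumed by this sketch.
type Syn struct {
	Tr  float32 // S * R eligibility trace; DWt uses the previous value
	DWt float32 // accumulated weight change
}

// blaTraceDWt applies DWt = lr * Tr_prv * (1 + |DA|) * (CaP - CaD), then
// updates the trace from current send * recv activity (as in CTPrjn).
func blaTraceDWt(sy *Syn, lr, da, caP, caD, sAct, rAct float32) {
	sy.DWt += lr * sy.Tr * (1 + float32(math.Abs(float64(da)))) * (caP - caD)
	sy.Tr = sAct * rAct // new trace only affects the next trial
}

func main() {
	sy := &Syn{Tr: 0.5}
	blaTraceDWt(sy, 0.02, 0.8, 0.9, 0.4, 1, 1)
	println(sy.DWt) // 0.02 * 0.5 * 1.8 * 0.5 = 0.009
}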

References

  • Mollick, J. A., Hazy, T. E., Krueger, K. A., Nair, A., Mackie, P., Herd, S. A., & O'Reilly, R. C. (2020). A systems-neuroscience model of phasic dopamine. Psychological Review, 127(6), 972–1021. https://doi.org/10.1037/rev0000199

Documentation

Index

Constants

const (
	// BLA is a basolateral amygdala layer
	BLA emer.LayerType = emer.LayerType(rl.LayerTypeN) + iota

	// CeM is a central nucleus of the amygdala layer
	// integrating Acq - Ext for a net value response.
	CeM

	// PPTg is a pedunculopontine tegmental nucleus layer
	// computing a positively rectified temporal derivative of its input.
	PPTg
)

Variables

var KiT_BLALayer = kit.Types.AddType(&BLALayer{}, LayerProps)

var KiT_BLAPrjn = kit.Types.AddType(&BLAPrjn{}, axon.PrjnProps)

var KiT_DARs = kit.Enums.AddEnum(DARsN, kit.NotBitFlag, nil)

var KiT_LayerType = kit.Enums.AddEnumExt(rl.KiT_LayerType, LayerTypeN, kit.NotBitFlag, nil)

var KiT_PPTgLayer = kit.Types.AddType(&PPTgLayer{}, LayerProps)
var LayerProps = ki.Props{
	"EnumType:Typ": KiT_LayerType,
	"ToolBar": ki.PropSlice{
		{"Defaults", ki.Props{
			"icon": "reset",
			"desc": "return all parameters to their intial default values",
		}},
		{"InitWts", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's weight values according to prjn parameters, for all *sending* projections out of this layer",
		}},
		{"InitActs", ki.Props{
			"icon": "update",
			"desc": "initialize the layer's activation values",
		}},
		{"sep-act", ki.BlankProp{}},
		{"LesionNeurons", ki.Props{
			"icon": "close",
			"desc": "Lesion (set the Off flag) for given proportion of neurons in the layer (number must be 0 -- 1, NOT percent!)",
			"Args": ki.PropSlice{
				{"Proportion", ki.Props{
					"desc": "proportion (0 -- 1) of neurons to lesion",
				}},
			},
		}},
		{"UnLesionNeurons", ki.Props{
			"icon": "reset",
			"desc": "Un-Lesion (reset the Off flag) for all neurons in the layer",
		}},
	},
}

LayerProps are required to get the extended EnumType

Functions

func AddAmygdala added in v1.5.12

func AddAmygdala(nt *axon.Network, prefix string, neg bool, nUs, unY, unX int, space float32) (blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, cemPos, cemNeg, pptg axon.AxonLayer)

AddAmygdala adds a full amygdala complex including BLA, CeM, and PPTg. Inclusion of negative valence is optional with neg arg -- neg* layers are nil if not included.
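A minimal usage sketch (network construction and naming follow the standard emergent pattern; the import paths, naming call, and argument values are assumptions):

package main

import (
	"fmt"

	"github.com/emer/axon/axon"
	"github.com/emer/axon/pvlv"
)

func main() {
	net := &axon.Network{}
	net.InitName(net, "PVLV") // standard emergent naming call (assumed)

	// Full amygdala complex with negative valence included (neg = true):
	// 4 USs, each a 1x4 pool of units, 2 units of space between layers.
	blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, cemPos, cemNeg, pptg :=
		pvlv.AddAmygdala(net, "", true, 4, 1, 4, 2)

	for _, ly := range []axon.AxonLayer{
		blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, cemPos, cemNeg, pptg,
	} {
		fmt.Println(ly.Name())
	}
}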

func AddBLALayers added in v1.5.2

func AddBLALayers(nt *axon.Network, prefix string, pos bool, nUs, unY, unX int, rel relpos.Relations, space float32) (acq, ext axon.AxonLayer)

AddBLALayers adds two BLA layers, acquisition / extinction, with D1 / D2 assigned according to valence, for positive or negative valence.
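A similar hedged sketch for just the positive-valence pair (relpos.Behind and the import paths are assumptions):

package main

import (
	"github.com/emer/axon/axon"
	"github.com/emer/axon/pvlv"
	"github.com/emer/emergent/relpos"
)

func main() {
	net := &axon.Network{}
	net.InitName(net, "BLA")

	// Positive-valence acquisition / extinction pair: 4 USs, 1x4 pools,
	// extinction placed Behind acquisition, with 2 units of space.
	acq, ext := pvlv.AddBLALayers(net, "", true, 4, 1, 4, relpos.Behind, 2)
	println(acq.Name(), ext.Name())
}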

Types

type BLALayer added in v1.5.2

type BLALayer struct {
	rl.Layer
	DaMod    DaModParams   `view:"inline" desc:"dopamine modulation parameters"`
	BLA      BLAParams     `view:"inline" desc:"special BLA parameters"`
	USLayers emer.LayNames `` /* 154-byte string literal not displayed */
	ACh      float32       `` /* 198-byte string literal not displayed */
	USActive bool          `inactive:"+" desc:"marks presence of US as a function of activity over USLayers -- affects learning rate."`
}

BLALayer represents a basolateral amygdala layer

func (*BLALayer) Build added in v1.5.12

func (ly *BLALayer) Build() error

func (*BLALayer) Defaults added in v1.5.2

func (ly *BLALayer) Defaults()

func (*BLALayer) GInteg added in v1.6.0

func (ly *BLALayer) GInteg(ni int, nrn *axon.Neuron, ctime *axon.Time)

func (*BLALayer) GetACh added in v1.5.12

func (ly *BLALayer) GetACh() float32

func (*BLALayer) InitActs added in v1.5.12

func (ly *BLALayer) InitActs()

func (*BLALayer) PlusPhase added in v1.5.2

func (ly *BLALayer) PlusPhase(ctime *axon.Time)

func (*BLALayer) SetACh added in v1.5.12

func (ly *BLALayer) SetACh(ach float32)

func (*BLALayer) USActiveFmUS added in v1.5.12

func (ly *BLALayer) USActiveFmUS(ctime *axon.Time)

USActiveFmUS updates the USActive flag based on USLayers state

func (*BLALayer) UnitVal1D added in v1.5.12

func (ly *BLALayer) UnitVal1D(varIdx int, idx int) float32

func (*BLALayer) UnitVarIdx added in v1.5.12

func (ly *BLALayer) UnitVarIdx(varNm string) (int, error)

func (*BLALayer) UnitVarNum added in v1.5.12

func (ly *BLALayer) UnitVarNum() int

type BLAParams added in v1.5.12

type BLAParams struct {
	NoDALrate float32 `desc:"baseline learning rate without any dopamine"`
	NoUSLrate float32 `desc:"learning rate outside of US active time window (i.e. for CSs)"`
	NegLrate  float32 `` /* 143-byte string literal not displayed */
}

BLAParams has parameters for basolateral amygdala

func (*BLAParams) Defaults added in v1.5.12

func (bp *BLAParams) Defaults()

type BLAPrjn added in v1.5.2

type BLAPrjn struct {
	axon.Prjn
}

BLAPrjn implements the PVLV BLA learning rule: dW = ACh * X_t-1 * (Y_t - Y_t-1). The recv delta is across trials, where the US activates on the trial boundary, to allow sufficient time for gating through to OFC, so the BLA initially learns based on US present - US absent. It can also learn based on CS onset if there is a prior CS that predicts it.
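Restated as a tiny hedged sketch (names are illustrative, not the axon API): the delta spans the trial boundary, so a US arriving at trial t drives Y_t - Y_t-1 against sender activity from the previous trial.

// blaAChDWt restates dW = ACh * X_t-1 * (Y_t - Y_t-1) from the doc above.
func blaAChDWt(ach, xPrev, yCur, yPrev float32) float32 {
	return ach * xPrev * (yCur - yPrev)
}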

func (*BLAPrjn) DWt added in v1.5.2

func (pj *BLAPrjn) DWt(ctime *axon.Time)

DWt computes the weight change (learning) for BLA projections

func (*BLAPrjn) Defaults added in v1.5.2

func (pj *BLAPrjn) Defaults()

func (*BLAPrjn) RecvSynCa added in v1.5.10

func (pj *BLAPrjn) RecvSynCa(ctime *axon.Time)

func (*BLAPrjn) SendSynCa added in v1.5.10

func (pj *BLAPrjn) SendSynCa(ctime *axon.Time)

func (*BLAPrjn) WtFmDWt added in v1.5.2

func (pj *BLAPrjn) WtFmDWt(ctime *axon.Time)

WtFmDWt updates the synaptic weight values from delta-weight changes -- on sending projections

type DARs added in v1.5.2

type DARs int

Dopamine receptor type, for D1R and D2R dopamine receptors

const (
	// D1R: primarily expresses Dopamine D1 Receptors, which are excitatory from DA bursts
	D1R DARs = iota

	// D2R: primarily expresses Dopamine D2 Receptors, which are inhibitory from DA dips
	D2R

	DARsN
)

func (*DARs) FromString added in v1.5.2

func (i *DARs) FromString(s string) error

func (DARs) String added in v1.5.2

func (i DARs) String() string

type DaModParams

type DaModParams struct {
	On        bool    `desc:"whether to use dopamine modulation"`
	DAR       DARs    `desc:"dopamine receptor type, D1 or D2"`
	BurstGain float32 `` /* 173-byte string literal not displayed */
	DipGain   float32 `` /* 233-byte string literal not displayed */
}

DaModParams specifies parameters shared by all layers that receive dopaminergic modulatory input.

func (*DaModParams) Defaults added in v1.5.2

func (dp *DaModParams) Defaults()

func (*DaModParams) Gain added in v1.5.2

func (dp *DaModParams) Gain(da float32) float32

Gain returns effective DA gain factor given raw da +/- burst / dip value
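A hedged sketch of what such a gain computation could look like, combining the documented pieces (BurstGain for positive DA, DipGain for negative DA, and a sign flip for D2R); the helper name is hypothetical and the actual method body may differ:

// daGain is a hypothetical stand-in for DaModParams.Gain.
func daGain(dp *DaModParams, da float32) float32 {
	if !dp.On {
		return 0 // modulation disabled
	}
	if da > 0 {
		da *= dp.BurstGain // positive DA = burst
	} else {
		da *= dp.DipGain // negative DA = dip
	}
	if dp.DAR == D2R {
		da = -da // D2R inverts the effective sign of DA
	}
	return da
}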

type LayerType added in v1.5.12

type LayerType rl.LayerType

LayerType has the extensions to the emer.LayerType types, for gui

const (
	BLA_ LayerType = LayerType(rl.LayerTypeN) + iota
	CeM_
	PPTg_
	LayerTypeN
)

gui versions

type PPTgLayer

type PPTgLayer struct {
	rl.Layer
	PPTgNeurs []PPTgNeuron
}

PPTgLayer represents a pedunculopontine tegmental nucleus layer. It subtracts the prior trial's excitatory conductance to compute a positively rectified temporal derivative over time, and also sets Act to the exact difference.

func (*PPTgLayer) Build

func (ly *PPTgLayer) Build() error

func (*PPTgLayer) Defaults

func (ly *PPTgLayer) Defaults()

func (*PPTgLayer) GFmRawSyn added in v1.6.0

func (ly *PPTgLayer) GFmRawSyn(ni int, nrn *axon.Neuron, ctime *axon.Time)

func (*PPTgLayer) GInteg added in v1.6.0

func (ly *PPTgLayer) GInteg(ni int, nrn *axon.Neuron, ctime *axon.Time)

func (*PPTgLayer) NewState added in v1.5.12

func (ly *PPTgLayer) NewState()

func (*PPTgLayer) SpikeFmG added in v1.6.0

func (ly *PPTgLayer) SpikeFmG(ni int, nrn *axon.Neuron, ctime *axon.Time)

type PPTgNeuron added in v1.5.12

type PPTgNeuron struct {
	GeSynMax  float32 `desc:"max excitatory synaptic inputs"`
	GeSynPrev float32 `desc:"previous max excitatory synaptic inputs"`
}
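A hedged sketch of the temporal-derivative computation implied by these fields (the helper name is hypothetical):

// pptgDeriv computes the positively rectified change in max excitatory
// conductance from the previous trial, per the PPTgLayer doc above.
func pptgDeriv(nrn *PPTgNeuron) float32 {
	d := nrn.GeSynMax - nrn.GeSynPrev
	if d < 0 {
		d = 0 // positive rectification: only increases contribute
	}
	return d
}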
