sims

module
v0.5.0
Published: Sep 10, 2019 License: BSD-3-Clause

README

Computational Cognitive Neuroscience Simulations

This repository contains the neural network simulation models for the CCN Textbook.

These models are implemented in the new Go (golang) version of emergent; Python versions are planned but not yet available. This GitHub repository contains the full source code, and you can build and run the models by cloning the repository and building / running the individual projects, as described on the emergent wiki help page: Wiki Install.
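For example, building and running one simulation from source might look like the following sketch. It assumes a working Go toolchain and that the repository path is github.com/CompCogNeuro/sims (ch2/neuron is one of the projects listed under Directories below); see the Wiki Install page for the authoritative steps:

    # sketch only -- the exact repository path and build steps may differ; see Wiki Install
    git clone https://github.com/CompCogNeuro/sims
    cd sims/ch2/neuron   # each simulation is its own buildable project
    go build             # fetches emergent dependencies and builds the neuron executable
    ./neuron             # launches the simulation GUI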

The simplest way to run the simulations is by downloading a zip file of all of the built models for your platform. These are fully self-contained executable files and should "just work" on each platform.

  • TODO: link to zip files

Usage

Each simulation has a README button, which directs your browser to open the corresponding README.md file on GitHub. This file contains full step-by-step instructions for running the model, along with questions to answer for classroom use of the models. See your syllabus, etc., for more info.

Use the standard Ctrl+ and Ctrl- key sequences to zoom the display to the desired scale; the GoGi preferences menu has an option to save the zoom level (along with various other settings).

The main actions for running are in the Toolbar at the top, while the parameters most relevant to the model are in the Control panel on the left. Different output displays are selectable in the Tabbed views on the right of the window.

The Go Emergent Wiki contains various help pages for using things like the NetView that displays the network.

You can always access more detailed parameters by clicking on the button to the right of Net in the control panel (or by clicking on the layer names in the NetView), and custom parameters for this model are set in the Params field.

Status

8/24/2019: Initial projects are just being created. Python versions will be made available once a program exists to convert the Go files to Python more automatically. Classes may need to rely on the C++ emergent versions to fill some gaps.

Directories

Path Synopsis
ch2
detector
detector: This simulation shows how an individual neuron can act like a detector, picking out specific patterns from its inputs and responding with varying degrees of selectivity to the match between its synaptic weights and the input activity pattern.
neuron
neuron: This simulation illustrates the basic properties of neural spiking and rate-code activation, reflecting a balance of excitatory and inhibitory influences (including leak and synaptic inhibition).
ch3
cats_dogs
cats_dogs: This project explores a simple **semantic network** intended to represent a (very small) set of relationships among different features used to represent a set of entities in the world.
face_categ
face_categ: This project explores how sensory inputs (in this case simple cartoon faces) can be categorized in multiple different ways, to extract the relevant information and collapse across the irrelevant.
inhib
inhib: This simulation explores how inhibitory interneurons can dynamically control overall activity levels within the network, by providing both feedforward and feedback inhibition to excitatory pyramidal neurons.
necker_cube
necker_cube: This simulation explores the use of constraint satisfaction in processing ambiguous stimuli.
ch4
err_driven_hidden
err_driven_hidden shows how XCal error driven learning can train a hidden layer to solve problems that are otherwise impossible for a simple two layer network (as we saw in the Pattern Associator exploration, which should be completed first before doing this one).
family_trees
family_trees shows how learning can recode inputs that have no similarity structure into a hidden layer that captures the *functional* similarity structure of the items.
pat_assoc
pat_assoc illustrates how error-driven and hebbian learning can operate within a simple task-driven learning context, with no hidden layers.
self_org
self_org illustrates how self-organizing learning emerges from the interactions between inhibitory competition, rich-get-richer Hebbian learning, and homeostasis (negative feedback).
