CAP 5937: COMPUTATIONAL NEUROSCIENCE
O. V. Favorov, Ph.D.
Spring 2003
GENERAL COURSE DESCRIPTION:
Subject: The structure and function of the brain as an information processing system, computation in neurobiological networks.
Objective: To introduce students to basic principles of the mammalian brain design, information processing approaches and algorithms used by the central nervous system, and their implementation in biologically-inspired neural networks.
Schedule: Tuesday, Thursday 17:30 – 18:45, Communication Bldg. 115.
Prerequisites: Good computer programming skills.
Teaching methods: Lectures, development of computer models and their simulations.
Evaluation of student performance: Homework (computer modeling projects, 50%), mid-term and final exams (50%), class participation.
Textbook: Essentials of Neural Science and Behavior.
E. R. Kandel, J. H. Schwartz, T. M. Jessell; Simon and Schuster Co., 1995.
ISBN 0-8385-2245-9
Office hours: Monday, Wednesday 12 - 2 pm, room 242 Computer Science Building.
Contact: phone 407-823-6495; e-mail favorov@cs.ucf.edu
COURSE CONTENT:
General layout of CNS as an information processing system
Information processing in single neurons
- Elements of membrane physiology
- Compartmental modeling of dendritic trees
- Synaptic input integration in dendritic trees
Learning in neurobiological networks
- Hebbian rule: associative learning networks
- Error-Backpropagation learning
- The spiked random neural network (RNN)
- Synaptic plasticity: SINBAD hierarchical networks
- Self-organization of cortical networks: Neural Mechanics
Local network dynamics
- Functional architecture of CNS networks (emphasis on cerebral cortex)
- Attractor dynamics
- Nonlinear dynamical concepts in neural systems
- Cerebral cortical dynamics
Information processing in cortical output layers
- SINBAD mechanism for discovery of key hidden environmental variables
- Building an internal working model of the environment
- Basic uses of an internal model
Visual information processing
- Stages and tasks of visual information processing
- Distribution of roles among visual cortical areas
- Functions and mechanisms of attention
Memory
- Classification of different forms of memory
- Memory loci and mechanisms
Motor control
- Basics of muscle force generation and its neural control
- Reflexes vs. internal models
- Sensory-motor integration in the cerebral cortex
- Limbic system and purposeful behaviors
Computer modeling projects (throughout the course)
- RNN-based pattern recognition algorithm
- Point electrical model of a neuron
- SINBAD network
- Neural Mechanics
Neuron’s function – to receive information from some neurons, process it, and send it to other neurons
Subdivisions of a neuron's body:
· Dendrites – receive and process information from other neurons
· Soma (cell body) – combines all information
· Axon hillock – generates output signal (pulse)
· Axon – carries output signal to other neurons
· Axon terminals – endings of axonal branches
· Synapse – site of contact of two neurons
Neurons communicate by electrical pulses, called ACTION POTENTIALS (APs) or SPIKES. Spikes are COMMUNICATION SIGNALS.
All spikes generated by a neuron have the same waveform; information is encoded in the timing and frequency of spike discharges from the axon hillock.
NEURON ELECTROPHYSIOLOGY
Neurons are electrical devices.
The inside of a neuron (cytoplasm) is separated from the outside extracellular fluid by the NEURONAL MEMBRANE.
Neuronal membrane contains numerous ION PUMPS and ION CHANNELS.
Na+/K+ ion pump – moves Na+ ions from inside to outside and K+ ions from outside to inside a neuron. As a result, these pumps set up Na+ and K+ concentration gradients across the membrane.
Ion channels – pores in the membrane that let ions move passively (by diffusion) across the membrane. Each type of channel is selectively permeable to only one or two types of ions.
Ion channels can be open or closed ("gated"). In different types of channels the gate can be controlled by voltage across the membrane, by special chemical compounds, or mechanically (in sensory receptors). Some types of ion channels are always open and are called "resting" channels.
RESTING ION CHANNELS: Neurons have K+, Na+, and Cl- resting ion channels.
Concentration gradients of Na+ and K+ across the neuronal membrane drive ions through ion channels and set up an electrical gradient, or MEMBRANE POTENTIAL (V).
The value of the membrane potential is determined by the permeability of the resting ion channels and typically is around –70 mV. It is called the RESTING POTENTIAL (Vr).
Vr = -70 mV – it is the "functional zero" or baseline.
Text: pp. 21-40 (overview), 115-116 (ion channels)
GATED ION CHANNELS
Opening gated channels lets ions flow through them and results in a change of the membrane potential.
HYPERPOLARIZATION – making membrane potential more negative.
DEPOLARIZATION – making membrane potential less negative.
EQUILIBRIUM POTENTIAL (E) of a particular class of channels is the membrane potential V at which there is no net ion flow across open channels.
E(Na+), the equilibrium potential of Na+ channels, is +40 mV.
E(K+), the equilibrium potential of K+ channels, is -90 mV.
E(Cl-), the equilibrium potential of Cl- channels, is -70 mV.
INTERNAL SIGNALS used by neurons are carried by membrane potential V. The signal is a deviation of membrane potential from its resting level Vr = –70 mV.
- positive deviation is depolarization, or excitation
- negative deviation is hyperpolarization, or inhibition
Internal signals are generated by opening ion channels. Each type of channel is selectively permeable to certain ions. Each ion has a specific equilibrium potential E. When a particular type of channel is opened, V will move towards the E of those channels.
To raise V: open Na+ channels.
To lower V: open K+ channels.
To scale down (V – Vr): open Cl- channels. (Because E(Cl-) equals Vr, opening Cl- channels does not move V by itself; it adds conductance that pulls V back towards the resting level, "dividing" any deviation.)
MECHANISM OF ACTION POTENTIAL (AP) GENERATION
APs are produced by 2 types of channels:
1. Voltage-gated Na+ channels. Open when V is above –55 mV. Open very quickly, but only transiently (stay open only about 1 msec). Need resetting by lowering V below –55 mV.
2. Voltage-gated K+ channels. Open when V is above –55 mV. Slower to open, but not transient. Do not need resetting.
AP is triggered when V exceeds –55 mV. Thus, –55 mV is the AP THRESHOLD.
APs are ALL-OR-NONE – i.e., have uniform shape.
ABSOLUTE REFRACTORY PERIOD (about 2 msec) – period after firing a spike during which it is impossible to trigger another spike.
RELATIVE REFRACTORY PERIOD (about 15 msec) – period after firing a spike during which the AP threshold is raised above –55 mV.
Frequencies of firing APs: up to 400 Hz, but normally less than 100 Hz.
SYNAPTIC TRANSMISSION
Brains of mammals, and the cerebral cortex in particular, mostly rely on chemical synapses.
The source neuron is called the PRESYNAPTIC NEURON and the target neuron is called the POSTSYNAPTIC NEURON.
The presynaptic axon terminal has synaptic vesicles filled with a chemical compound called TRANSMITTER. The presynaptic membrane is separated from the postsynaptic membrane by the synaptic cleft. The postsynaptic membrane has RECEPTORS and RECEPTOR-GATED ION CHANNELS.
When an AP arrives at the presynaptic terminal, it makes a few of the synaptic vesicles release their transmitter into the synaptic cleft. In the cleft, transmitter molecules bind to receptors and make them open ion channels. Open ion channels let ions flow through them, which changes the membrane potential V.
The AP-evoked change of V is called the POSTSYNAPTIC POTENTIAL (PSP).
A positive change of V is called an EXCITATORY PSP (EPSP).
A negative change of V is called an INHIBITORY PSP (IPSP).
Transmitter stays in the synaptic cleft only a few msec and then escapes or is taken back into the presynaptic terminal and is recycled.
Text: pp. 31-39 (overview). If you want extra information, see pp. 115-116 (ion channels), pp. 133-134, 135-139 (membrane potential), pp. 168-169 (action potential), pp. 181-182, 191 (synapse), pp. 227-234 (synaptic receptors and channels).
A postsynaptic potential (PSP), evoked by an action potential at a given synapse, spreads passively (electrotonically) throughout the neuron's dendrites and eventually reaches the axon hillock, where it contributes to generation of action potentials.
Because of its passive spread, a PSP fades with distance from the site of its origin, so its size is reduced significantly by the time it reaches the axon hillock.
A single synapse generates only very small PSPs (typically, less than 1 mV). However, any given neuron receives thousands of synaptic connections, and together they can add up their PSPs and depolarize the axon hillock sufficiently to trigger action potentials.
SPATIAL SUMMATION – addition of PSPs occurring simultaneously at different synapses.
TEMPORAL SUMMATION – temporal buildup of PSPs occurring in rapid succession.
SYNAPTIC EFFICACY
Synaptic efficacy = connection "weight" (or strength) – how effective a given synapse is at changing membrane potential at the axon hillock.
Major determining factors:
1. amount of transmitter released from the presynaptic terminal by one spike
2. number of postsynaptic receptors/channels
3. distance to the axon hillock
BRAIN DESIGN PRINCIPLES:
- A synapse transmits information in one direction only.
- A given synapse can only be excitatory or inhibitory, but not both.
- A given neuron can make only excitatory or only inhibitory connections, not excitatory on some cells and inhibitory on others.
- Synapses vary in their efficacy, or weights.
An ion channel can be represented as an electrical resistor of a particular conductance, connected in series with a battery whose charge is equal to the equilibrium potential E of this ion channel.
Different ion channels in the same membrane can be represented by resistors/batteries connected in parallel to each other.
All the resting channels (K+, Na+, Cl-) can be represented together by a single, lumped resistor of conductance gm, in series with a battery whose E = -70 mV, the resting membrane potential. gm is called the "passive membrane conductance".
A synapse can be represented by a resistor, whose conductance is equivalent to that synapse's weight (efficacy), in series with a battery and a switch controlled by presynaptic action potentials.
All the synapses that use the same receptors and ion channels can be represented together by a single, lumped variable resistor of conductance G, in series with a battery.
G = Σ wi * Ai,
where wi (or gi) is the weight of synapse i and Ai is the activity of the presynaptic cell i.
An entire neuron can be represented by an electrical circuit that consists of a number of components all connected in parallel:
- capacitor Cm, representing membrane capacitance
- resistor gm in series with a battery Er = -70 mV, representing passive membrane resistance
- variable resistor Gex in series with a battery Eex = 0 mV, representing net conductance of all the excitatory synapses
- variable resistor GinK in series with a battery EinK = -90 mV, representing net conductance of all the inhibitory synapses that use "subtracting" K+ ion channels
- variable resistor GinCl in series with a battery EinCl = -70 mV, representing net conductance of all the inhibitory synapses that use "dividing" Cl- ion channels
- variable resistor gAPNa in series with a battery ENa = +40 mV, representing voltage-gated Na+ channels responsible for generation of action potentials
- variable resistor gAPK in series with a battery EK = -90 mV, representing voltage-gated K+ channels responsible for generation of action potentials
- other components, if known and desired to be included in such a model
This circuit representation of a neuron is called POINT NEURON MODEL, because it treats a neuron as a point in space, dimensionless, i.e., it ignores the dendritic structure of neurons.
If for simplicity (because of its minor effect) we ignore the contribution of membrane capacitance, then we can compute the membrane potential V generated by this circuit as:
V = (gm*Er + Gex*Eex + GinK*EinK + GinCl*EinCl + gAPNa*ENa + gAPK*EK) / (gm + Gex + GinK + GinCl + gAPNa + gAPK)
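To make the formula concrete, here is a minimal Python sketch of this computation; the synaptic conductance values are arbitrary placeholders (only the formula itself comes from the notes):

# Steady-state membrane potential of the point neuron model (capacitance
# ignored): V is the conductance-weighted average of all the batteries.
# The synaptic conductance values here are arbitrary placeholders.
def point_neuron_V(Gex=0.5, GinK=0.2, GinCl=0.3, gAPNa=0.0, gAPK=0.0):
    gm, Er = 1.0, -70.0
    Eex, EinK, EinCl, ENa, EK = 0.0, -90.0, -70.0, 40.0, -90.0
    num = (gm*Er + Gex*Eex + GinK*EinK + GinCl*EinCl
           + gAPNa*ENa + gAPK*EK)
    den = gm + Gex + GinK + GinCl + gAPNa + gAPK
    return num / den

print(point_neuron_V())   # about -54.5 mV with the placeholder values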
Text: pp. 142 – 147 (representing resting channels), pp. 213 – 216 (representing synapses; discussed using the example of synapses between neurons and muscles, called end-plates; they use the transmitter acetylcholine, or ACh).
Write a computer program to simulate a neuron modeled as a point electrical circuit.
Parameters:
number of excitatory input cells - 100
number of inhibitory input cells - 100
time constant, τ - 4 msec
gm = 1
Assign connection weights w+ and w- of excitatory and inhibitory input cells randomly in the range between 0 and 1.
Set Gex = 0 and Gin = 0 before the start of the simulation.
Simulation: simulate the time-course of membrane potential with a time step of 1 msec.
At each time step, do the following 3 procedures:
1. Pick randomly which of the input cells have an action potential at this point in time. For these cells, set their activity A = 1; for all the other cells, set their activity A = 0. Active cells should be chosen such that approximately 10% of them have an action potential at any given time.
2. Update Gex and Gin:
Gex(t) = (1 - 1/τ) * Gex(t-1) + (1/τ) * Cex * Σ( w+(i) * A+(i,t) )
Gin(t) = (1 - 1/τ) * Gin(t-1) + (1/τ) * Cin * Σ( w-(i) * A-(i,t) )
3. Update the deviation of membrane potential from resting level, ΔV:
ΔV = ( 70 * Gex ) / ( Gex + Gin + gm )
Exercises:
1. Testing the program.
Set all w+'s and w-'s to 1 (i.e., all connections should have the same weight, = 1).
Set Cex = 1 and Cin = 2.
Run the program 20 time steps and plot ΔV as a function of time (Plot #1).
Hint: if the program is correct, ΔV should rise quickly from 0 to around 22-23 mV and stay there.
2. Effect of Cex.
Randomize all w+'s and w-'s in the range between 0 and 1.
Set Cin = 0 (i.e., inhibition is turned off).
Run the program with many (e.g., 50) different values of Cex. In each run, do 20 time steps and save the value of ΔV on the last, 20th time step (by then ΔV should reach a steady state).
Plot this value of ΔV as a function of Cex on a log scale. You should find a range of Cex in which ΔV will start at approximately 0 and then at some point rise sigmoidally to 70 mV. This is Plot #2.
3. Effect of Cin.
Same as Exercise #2, but set Cex to a value at which ΔV in Exercise #2 was approximately 60 mV.
Run the program with many different values of Cin.
Plot ΔV as a function of Cin on a log scale. You should find a range of Cin in which ΔV will start at approximately 60 and then at some point descend sigmoidally to 0 mV. This is Plot #3.
Submit for grading:
- brief summary of the model
- description of the 3 exercises and their results (plots #1, 2, 3)
- text of the program
Due date: February 6.
PROGRAM PSEUDO-CODE (if you want it)
Define arrays:
Aex(1-100) - activities of excitatory input cells
Ain(1-100) - activities of inhibitory input cells
Wex(1-100) - excitatory connection weights
Win(1-100) - inhibitory connection weights
Set parameters:
Nex = 100 - number of excitatory input cells
Nin = 100 - number of inhibitory input cells
TAU = 4 - time constant
Ntimes = 100 - number of time steps (in Exercises 2 and 3, Ntimes = 20)
Cex = 1 - excitatory scaling constant (in Exercise 2, vary the value systematically)
Cin = 2 - inhibitory scaling constant (in Exercise 2, Cin = 0; in Exercise 3, vary systematically)
Gm = 1 - passive membrane conductance
Assign random weights to excitatory and inhibitory connections:
FOR I = 1 … Nex
  Get random number RN in range [0 … 1]
  Wex(I) = RN (in Exercise 1, Wex(I) = 1)
NEXT I
FOR I = 1 … Nin
  Get random number RN in range [0 … 1]
  Win(I) = RN (in Exercise 1, Win(I) = 1)
NEXT I
Initialize excitatory and inhibitory conductances:
Gex = 0
Gin = 0
Compute ΔV for Ntimes time steps:
FOR Itime = 1 … Ntimes
  Choose activities of input cells at random with probability of 10% of A = 1:
  FOR I = 1 … Nex
    Get random number RN in range [0 … 1]
    Aex(I) = 1
    IF ( RN > 0.1 ) Aex(I) = 0
  NEXT I
  FOR I = 1 … Nin
    Get random number RN in range [0 … 1]
    Ain(I) = 1
    IF ( RN > 0.1 ) Ain(I) = 0
  NEXT I
  Sum all excitatory inputs:
  SUM = 0
  FOR I = 1 … Nex
    SUM = SUM + Wex(I) * Aex(I)
  NEXT I
  Update excitatory conductance:
  Gex = (1 – 1/TAU) * Gex + (1/TAU) * Cex * SUM
  Sum all inhibitory inputs:
  SUM = 0
  FOR I = 1 … Nin
    SUM = SUM + Win(I) * Ain(I)
  NEXT I
  Update inhibitory conductance:
  Gin = (1 – 1/TAU) * Gin + (1/TAU) * Cin * SUM
  Calculate ΔV:
  ΔV = (70 * Gex) / (Gex + Gin + Gm)
  In Exercise 1, plot ΔV as a function of time step (Plot #1)
NEXT Itime
In Exercises 2 and 3, plot final ΔV as a function of Cex (Plot #2) or Cin (Plot #3).
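For reference, here is a direct Python translation of the pseudo-code above, a minimal sketch with randomized weights and Ntimes = 20 (adjust the settings per exercise; plotting is left out):

import random

Nex, Nin = 100, 100          # numbers of excitatory / inhibitory input cells
TAU = 4.0                    # time constant (msec)
Gm = 1.0                     # passive membrane conductance
Cex, Cin = 1.0, 2.0          # scaling constants (vary in Exercises 2 and 3)
Ntimes = 20                  # number of 1-msec time steps

Wex = [random.random() for _ in range(Nex)]   # excitatory connection weights
Win = [random.random() for _ in range(Nin)]   # inhibitory connection weights

Gex = Gin = 0.0
for itime in range(Ntimes):
    # ~10% of input cells fire an action potential at each time step
    Aex = [1.0 if random.random() < 0.1 else 0.0 for _ in range(Nex)]
    Ain = [1.0 if random.random() < 0.1 else 0.0 for _ in range(Nin)]

    # leaky update of excitatory and inhibitory conductances
    Gex = (1 - 1/TAU) * Gex + (1/TAU) * Cex * sum(w * a for w, a in zip(Wex, Aex))
    Gin = (1 - 1/TAU) * Gin + (1/TAU) * Cin * sum(w * a for w, a in zip(Win, Ain))

    # deviation of membrane potential from resting level
    DV = (70 * Gex) / (Gex + Gin + Gm)
    print(itime + 1, round(DV, 2))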
MULTIPLICITY OF ION CHANNEL TYPES
A neuron has a large number (probably on the order of 100) of different types of ion channels. These channels differ in:
- kinetics (faster or slower)
- direction of membrane potential change when open (towards the channel's equilibrium potential)
- factors that control channel opening (transmitter, V, etc.)
The 2 most common ion channels in excitatory synapses of the cerebral cortex:

channel type:  AMPA         NMDA
transmitter:   glutamate    glutamate
receptor:      AMPA         NMDA
ions:          Na (+K)      Na, Ca (+K)
kinetics:      fast         approx. 10 times slower
controls:      transmitter  transmitter, membrane potential (needs depolarization to open)
How would we modify the temporal-behavior equations in our program to incorporate these 2 channel types?
G_AMPA(t) = (1 - 1/τ_AMPA) * G_AMPA(t-1) + (1/τ_AMPA) * C_AMPA * Σ( w+(i) * A+(i,t) )
G_NMDA(t) = (1 - 1/τ_NMDA) * G_NMDA(t-1) + (1/τ_NMDA) * C_NMDA * Σ( w+(i) * A+(i,t) * ΔV )
(the ΔV factor in the NMDA equation reflects the NMDA channel's need for depolarization to open)
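A minimal sketch of these two update equations in Python; the τ and C values are assumed placeholders, and only the structure of the update rule comes from the notes:

# Two excitatory conductances with different kinetics: AMPA is fast; NMDA is
# ~10x slower and here is additionally scaled by the current depolarization
# DV (its voltage dependence). TAU and C values are assumed placeholders.
TAU_AMPA, TAU_NMDA = 2.0, 20.0
C_AMPA, C_NMDA = 1.0, 0.05

def update_conductances(G_ampa, G_nmda, drive, DV):
    # drive = sum over inputs i of w+(i) * A+(i,t)
    G_ampa = (1 - 1/TAU_AMPA) * G_ampa + (1/TAU_AMPA) * C_AMPA * drive
    G_nmda = (1 - 1/TAU_NMDA) * G_nmda + (1/TAU_NMDA) * C_NMDA * drive * DV
    return G_ampa, G_nmda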
The 2 most common ion channels in inhibitory synapses of the cerebral cortex:

channel type:  GABA-A       GABA-B
transmitter:   GABA         GABA
receptor:      GABA-A       GABA-B
ions:          Cl           K
kinetics:      fast         approx. 40 times slower
controls:      transmitter  transmitter
ADAPTATION
Neurons respond transiently to their inputs and gradually reduce their firing rate even when the input drive is constant.
Mechanisms:
- presynaptic adaptation
- postsynaptic receptor/channel desensitization
- AFTERHYPERPOLARIZATION (AHP)
Afterhyperpolarization is produced by several types of ion channels. They are all hyperpolarizing K+ channels, but have different kinetics (fast – 15 msec, medium – 100 msec, slow – 1 sec, very slow – >3 sec). They are all opened by membrane depolarization (especially by action potentials).
NEUROMODULATION
Neuromodulators are chemical compounds, released from synapses of special neurons, that temporarily modify properties of receptors/channels.
4 basic neuromodulators: acetylcholine, norepinephrine, serotonin, dopamine.
Text: pp. 149 – 159 (optional; read if you do not understand something and want to clarify it).
DENDRITIC TREES
How to model dendritic trees? A branch of a dendrite is essentially a tube made up of an insulator – the membrane.
To think about it as an electrical circuit, we can subdivide it into small compartments.
These compartments become essentially dimensionless and each can be represented by the point model.
Then a dendrite can be represented as a chain of point electrical circuits connected with each other in series via longitudinal conductances.
The entire neuron then is a branching tree of such chained compartments.
Such a model of a neuron is called a COMPARTMENTAL MODEL.
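To make the compartmental idea concrete, here is a small sketch (all parameter values are assumed, for illustration only) that relaxes a chain of passive compartments coupled by a longitudinal conductance to its steady state; it shows how a synaptic depolarization fades with distance along the chain:

# Steady-state voltages along a chain of passive compartments (all values
# assumed for illustration). Each compartment is a point model; g_ax couples
# adjoining compartments. Repeated local averaging relaxes the chain to its
# steady state and shows the synaptic depolarization fading with distance.
N = 10                         # number of compartments
gm, Er = 1.0, -70.0            # passive membrane conductance and battery
g_ax = 5.0                     # longitudinal (axial) conductance
Eex = 0.0
Gex = [0.0] * N
Gex[0] = 2.0                   # one excitatory synapse on compartment 0

V = [Er] * N
for _ in range(200):           # iterate until V stops changing
    for i in range(N):
        num = gm * Er + Gex[i] * Eex
        den = gm + Gex[i]
        for j in (i - 1, i + 1):       # coupling to adjoining compartments
            if 0 <= j < N:
                num += g_ax * V[j]
                den += g_ax
        V[i] = num / den

print([round(v, 1) for v in V])        # depolarization decays along the chain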
NEURON'S INTEGRATIVE FUNCTION
A neuron is a set of several extensively branching trees of electrical compartments, all converging on a single output compartment.
Each compartment integrates its synaptic inputs and inputs from the adjoining compartments.
This integrative function is complex, nonlinear, and bounded between –90 and +40 mV.
Thus, the membrane potential of a compartment i is Vi = f(V of adjoining compartments, synaptic inputs, local ion channels, etc.)
Bottom line: dendritic trees, and the neuron as a whole, can implement a wide range of complex nonlinear integrating functions over their synaptic inputs.
HOMEWORK ASSIGNMENT #2
COMPARTMENTAL MODELING OF A PYRAMIDAL CELL
The task is to draw an electrical circuit diagram representing a hypothetical PYRAMIDAL CELL from the cerebral cortex.
Pyramidal cells are the principal class of cells in the cerebral cortex (80% of all the cells there).
They have a body in the shape of a pyramid, out of whose base grow 4-6 BASAL DENDRITES and out of whose apex grows the APICAL DENDRITE.
The apical dendrite is very long; it grows all the way to the cortical surface, near which it sprouts a clump of dendrites called the TERMINAL TUFT.
The particular cell that we want to model is such a cell, but with only one basal dendrite (to reduce the amount of drawing work for you).
The assignment is to model this cell with 4 electrical compartments: compartment S (representing the soma) is connected with compartment B (representing the basal dendrite) and with compartment A (representing the apical dendrite). Compartment A in turn is connected with compartment T (representing the terminal tuft).
This neuron has the following components in its compartments:
Excitatory connections in: T, A, B
Inhibitory K+ connections in: T, A, B
Inhibitory Cl- connections in: T, A, B, S
Action potential channels in: S
P channels in: T
P channels are special channels, permeable to Na+ and voltage-gated (the more depolarized the local membrane, the more channels open).
P stands for "persistent" – as long as V is high, the channels stay open.
As a part of your homework, suggest what might be the purpose of these P channels in the terminal tuft.
Due date: February 12. Submit a drawing of the electrical circuit representing this cell and a suggestion for P channel function.
REQUIRED Reading: C. Koch (1997) Computation and the single neuron. Nature 385: 207-210.
This article has the answer to the P channel question.
HOMEWORK ASSIGNMENT #3
(For the channel properties needed in this assignment, see the GABA-A/GABA-B table above.)
Draw connectional and electrical circuit diagrams of a POINT neuron with 3 sets of connections:
(1) excitatory (AMPA and NMDA channels)
(2) inhibitory (GABA-A and GABA-B channels)
(3) inhibitory (GABA-A channels only)
Also include in the electrical diagram:
(1) action potential-generating channels
(2) medium afterhyperpolarization channels for adaptation
(3) slow afterhyperpolarization channels for adaptation
Write all the equations necessary to describe this neuron (ignore membrane capacitance for simplicity).
Due date: February 18
Divisions of the Nervous System:
The NERVOUS SYSTEM consists of (1) the PERIPHERAL NERVOUS SYSTEM and (2) the CENTRAL NERVOUS SYSTEM (CNS).
The Peripheral NS consists of (1) SENSORY NERVES, (2) MOTOR NERVES to skeletal muscles, and (3) GANGLIA and NERVES OF THE AUTONOMIC NERVOUS SYSTEM (it controls internal organs).
The Central Nervous System has 2 subdivisions: BRAIN and SPINAL CORD.
The Brain has the following major subdivisions: BRAINSTEM, CEREBELLUM, DIENCEPHALON, and two CEREBRAL HEMISPHERES.
Sensory Systems
There are 6 sensory systems: SOMATOSENSORY (touch, body posture, muscle sense, pain, temperature), VISUAL, AUDITORY, OLFACTORY, GUSTATORY, VESTIBULAR.
CEREBRAL CORTEX
The cerebral cortex is a thin (approx. 2 mm thick) but large-in-area layer of neurons.
In more advanced mammals (like cats, apes, humans) this layer is convoluted (to pack more surface into the same volume).
The folds are called sulci (singular: sulcus); the ridges are called gyri (singular: gyrus).
Major partitions of the cerebral cortex are called LOBES. There are 6 lobes:
FRONTAL LOBE (planning and motor control)
PARIETAL LOBE (somatosensory + high cognitive functions)
OCCIPITAL LOBE (visual + high cognitive functions)
TEMPORAL LOBE (auditory + high cognitive functions)
INSULA (polysensory)
LIMBIC LOBE (emotions)
Cortex is further subdivided into smaller regions, called CORTICAL AREAS. There are about 50 of these areas.
Cortical areas are divided into:
PRIMARY CORTICAL AREAS (handle initial sensory input or final motor output)
SECONDARY CORTICAL AREAS (process output of primary sensory areas or control the primary motor area)
ASSOCIATIVE CORTICAL AREAS (all the other areas)
SENSORY PATHWAYS
SENSORY RECEPTORS --> PRIMARY AFFERENT NEURONS --> SENSORY NUCLEUS in spinal cord or brainstem --> RELAY NUCLEUS in THALAMUS --> PRIMARY SENSORY CORTICAL AREA --> SECONDARY SENSORY CORTICAL AREAS --> ASSOCIATIVE CORTEX
Text: pp. 10-11, 77-88
MOTOR CONTROL SYSTEM
Motor control system contains 3 major subsystems:
1) PREFRONTAL CORTEX --> PREMOTOR CORTEX --> PRIMARY MOTOR CORTICAL AREA --> MOTOR NUCLEI in brainstem (for head control) or in spinal cord (for control of the rest of the body) --> SKELETAL MUSCLES
Motor nucleus – contracts one muscle
Primary motor cortex (MI) – control of single muscles or groups of muscles
Premotor cortex – spatiotemporal patterns of muscle contractions (e.g., finger tapping)
Prefrontal cortex – behavioral patterns, planning, problem solving
2) Entire cortex --> BASAL GANGLIA:
--> VA nucleus in thalamus --> prefrontal cortex
--> VL nucleus in thalamus --> premotor and motor cortex
Function of basal ganglia – learned behavioral programs, routines, habits (e.g., writing, dressing up)
3) Somatosensory information from receptors --> CEREBELLUM
Premotor, motor, and somatosensory cortex --> CEREBELLUM
CEREBELLUM --> VL nucleus in thalamus --> premotor and motor cortex
CEREBELLUM --> motor nuclei
Functions of cerebellum:
- learned motor skills
- movement planning
- muscle coordination (e.g., not to lose balance while extending an arm)
- comparator function (compensation of errors during movements)
MOTIVATIONAL SYSTEM
HYPOTHALAMUS in diencephalon – monitors and controls the body's needs (food, water, temperature, etc.)
AMYGDALA and LIMBIC CORTEX in cerebral hemispheres – emotions, interests, learned desires
RETICULAR FORMATION in brainstem – arousal (sleep-awake), orienting reflex, focused attention
Text: pp. 10-11, 77-88
The subject of topography is how sensory inputs are distributed in the cortex.
The 2 basic principles are:
(1) Different sensory modalities (e.g., visual, auditory, etc.) are first processed separately in different parts of the cortex, and only after this processing are they brought together in higher-level associative cortical areas.
(2) Within each sensory modality, information from neighboring (and therefore more closely related) sensory receptors is delivered to local cortical sites within the primary sensory cortical area, and different cortical sites process information from different groups of sensory receptors. The idea here is at first to bring together only local sensory information, so that cortical cells can extract local sensory features (e.g., local edges, textures). In the next cortical area, cells can then use these local features to recognize larger things (e.g., shapes, objects), and so on.
So, projections from the sensory periphery (skin, retina, cochlea) to primary sensory cortical areas are topographically arranged; however, they are not POINT-TO-POINT but SMALL AREA-TO-SMALL AREA. Next, from one cortical area to the next, it still would not be useful to mix widely different (unrelated) inputs. So, projections from lower to higher cortical areas are also topographic.
As a result of such a distribution of inputs to cortical areas, these areas have TOPOGRAPHIC MAPS:
in somatosensory cortex – SOMATOTOPIC MAPS
in visual cortex – RETINOTOPIC MAPS
in auditory cortex – TONOTOPIC MAPS
RECEPTIVE FIELD (RF) of a neuron (or a sensory receptor) is the area of the RECEPTOR SURFACE (e.g., skin, retina, cochlea) whose stimulation can evoke a response in that neuron.
Sensory receptors have very small RFs. Cortical cells build their RFs from the RFs of their input cells; as a result, RFs of neurons in higher cortical areas become larger and larger.
RECEPTIVE FIELD PROFILE – distribution of the neuron’s responsivity across its receptive field (i.e., how much it responds to stimulation of different loci in its RF).
Text: pp. 86-87, 324-329, 369-375.
Modeling Project: Receptive Field of a Neuron.
Write a computer program to compute a RECEPTIVE FIELD PROFILE of a target neuron that receives input connections from a set of excitatory source neurons.
Model parameters:
N = 40 - number of source cells
RFR = 3 - receptive field radius
D = 1 - spacing between receptive fields of neighboring neurons
Weights of connections of source cells to the target cell, w(i), should be chosen randomly and then normalized so that Σw(i) = 1.
Activity of source cell i is: As(i) = 1 - (|S - RFC(i)| / RFR),
where S is the stimulus location on the receptor surface and RFC(i) is the receptive field center of cell i. RFC(i) is computed as RFC(i) = RFR + (i – 1) * D.
As is the "instantaneous firing rate (or frequency)"; it is calculated as the inverse of the time interval between successive action potentials.
If As(i) < 0, set As(i) = 0.
The target cell is modeled as a point electrical circuit, made up of passive membrane conductance gm = 1 (E = -70) and net excitatory conductance Gex (E = 0).
Activity of the target cell is: ΔV = 70 * Gex / (Gex + gm)
In the steady state, Gex would be Gex = Cex * Σ w(i) As(i), where w(i) is the connection weight of source cell i on the target cell.
To reflect temporal behaviors of ion channels,
Gex = (1 – 1/τ) * Gex + (1/τ) * Cex * Σ w(i) As(i)
Use τ = 4.
The task is to map the receptive field profile of the target cell.
Program flow:
Phase 1: Set all the parameters (RFR, RFC(i), w(i), D, N, τ, Cex).
Phase 2: Deliver 450 point stimuli, spread evenly across the entire receptor surface.
The entire length of the receptor surface is RFR + (N – 1) * D + RFR = 45.
Therefore, space stimulus locations 0.1 distance apart: S1 = 0.1, S2 = 0.2, … S450 = 45.
For each stimulus location:
- Compute activities of all the source cells, As(i)
- Set Gex = 0
- Do 20 time steps, updating Gex and ΔV
Plot ΔV after the 20th time update as a function of stimulus location S.
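A minimal Python sketch of this program (the value of Cex is a free choice left to you; everything else follows the parameters above):

import random

# Receptive-field mapping (parameters from the assignment; Cex assumed)
N, RFR, D = 40, 3.0, 1.0
TAU, Cex, gm = 4.0, 2.0, 1.0       # pick Cex so that peak DV is ~30 mV

w = [random.random() for _ in range(N)]
s = sum(w)
w = [wi / s for wi in w]           # normalize so that the weights sum to 1

RFC = [RFR + i * D for i in range(N)]   # receptive field centers, i = 1..N

profile = []
for k in range(1, 451):            # 450 stimuli, 0.1 apart: S = 0.1 ... 45.0
    S = 0.1 * k
    As = [max(0.0, 1 - abs(S - c) / RFR) for c in RFC]
    Gex = 0.0
    for _ in range(20):            # 20 time steps to approach steady state
        Gex = (1 - 1/TAU) * Gex + (1/TAU) * Cex * sum(wi * a for wi, a in zip(w, As))
    DV = 70 * Gex / (Gex + gm)
    profile.append((S, DV))        # plot DV vs. S to see the RF profile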
Submit for grading:
- brief description of the model
- plot of ΔV as a function of S (choose Cex such that ΔV is in the approx. 30 mV range)
- text of the program
In generating topographic maps in target cortical areas, the sensory axons are guided to the right cortical area, and to the general region in that area, by chemical cues.
This is the basic mechanism for interconnecting different parts of the brain and for laying out the topographic map in each area in a particular orientation (e.g., in the primary somatosensory cortex the topographic map in all individuals is oriented the same way, with the foot most medial and the head most lateral in the area).
The basic mechanism for finer precision in topographic maps is by axons that originate from neighboring neurons (or sensory receptors) traveling together and then terminating near to each other.
This way, original neighborhood relations in the source area can be easily preserved in their projections to the target area.
In addition to this genetic mechanism of generating topographic maps in cortical areas (as well as in all the other parts of the CNS), there is also a learning mechanism.
This mechanism can adjust the topographic map to the particular circumstances of an individual.
For example, if a person loses an arm, the regions of somatosensory cortex that process information from that arm should be re-wired to receive information from other body regions.
Or, if a person preferentially makes use of some part of the body (e.g., a pianist's use of fingers), it would be desirable to send information from that part to a larger cortical region for improved processing.
The mechanism for such individual tuning of topographic maps is SYNAPTIC PLASTICITY (change of connections in response to experience).
Connections can be changed by:
(1) axon sprouting (growing new axon branches, selective elimination of some other existing branches)
(2) changing efficacy of the existing synapses
Text: pp. 99-104.
It has been demonstrated in experimental studies in the cerebral cortex that when a presynaptic cell and the postsynaptic cell are both very active during some short period of time, their synapse increases its efficacy (or weight). This increase is very long-lasting (maybe even permanent) and is called LONG-TERM POTENTIATION (LTP).
If the presynaptic cell is very active during some short period of time, but the postsynaptic cell is only weakly active, then their synapse decreases its efficacy. This decrease is also very long-lasting (maybe even permanent) and is called LONG-TERM DEPRESSION (LTD).
In general, there is an expression: "cells that fire together, wire together." Or: the more correlated the behaviors of two cells, the stronger their connection.
This idea was originally proposed by Donald Hebb in 1949 and, consequently, this type of synaptic plasticity (where correlated cells get stronger connections) is called HEBBIAN SYNAPTIC PLASTICITY. The rule that governs the relationship between synaptic weight and the activities of the pre- and postsynaptic cells is called the HEBBIAN RULE.
Currently, there are a number of mathematical formulations of the Hebbian rule. None of them can fully account for all the different experimental results, so they all should be viewed as approximations of the real rule that operates in the cortex.
One of the basic versions of the Hebbian rule is the COVARIANCE RULE:
At time t, the change in the strength of the connection between source cell s and target cell t is computed as
Δw = (As – ⟨As⟩) * (At – ⟨At⟩) * RL,
where As is the activity of the source cell (e.g., instantaneous firing rate),
⟨As⟩ is its average activity (⟨·⟩ denotes an average over time),
At is the output activity of the target cell,
⟨At⟩ is its average activity, and
RL is the "rate of learning" scaling constant.
This is the "covariance" rule, because across many stimuli it computes the covariance of the coincident activities of the pre- and postsynaptic cells. The stronger the covariance (correlation in the behaviors of the two cells), the stronger the connection.
A neuron has a limited capacity for how many synapses it can maintain on its dendrites. In other words, there is an upper limit on the sum of all the connection weights a cell can receive from other cells. We will call this limit wmax.
To accommodate this limitation, we can extend the covariance rule this way:
At time t, compute Δw of all the connections on the target cell:
Δwi = (Asi – ⟨Asi⟩) * (At – ⟨At⟩) * RL
Next compute tentative new weights: w'i = wi + Δwi
If the sum of all w'i (Σw'i) is less than or equal to wmax, then the new weights are wi = w'i.
If Σw'i > wmax, then the new weights are wi = w'i * wmax / Σw'i.
This way, the sum of weights will never exceed wmax.
Text: pp. 680-681.
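A minimal sketch of this extended covariance rule in Python (the RL and wmax values are assumed placeholders):

# One step of the covariance rule with the wmax constraint (RL and wmax are
# assumed placeholder values). w, As, As_avg are lists over source cells;
# At, At_avg are the target cell's activity and its average.
RL, wmax = 0.01, 1.0

def covariance_update(w, As, At, As_avg, At_avg):
    dw = [(a - am) * (At - At_avg) * RL for a, am in zip(As, As_avg)]
    w_new = [wi + d for wi, d in zip(w, dw)]     # tentative new weights
    total = sum(w_new)
    if total > wmax:                             # rescale: sum never exceeds wmax
        w_new = [wi * wmax / total for wi in w_new]
    return w_new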
If a target cell receives synaptic connections from a number of source cells and these connections can change their weights by some version of Hebb Rule, then - depending on the particular mathematical version of Hebb Rule - the target cell can learn to extract different information from its inputs.
(1) Oja's version of Hebb Rule
At a given time moment the change in the weight of the connection from source cell i is:
Δwi = (Asi * At – At² * wi) * RL,
where wi is the current weight of the connection,
Asi is the activity of source cell i (e.g., instantaneous firing rate),
At is the output activity of the target cell, and
RL is the "rate of learning" scaling constant.
Under this learning rule, the target cell will learn to compute and represent by its activity the First Principal Component of its input. That is, it will perform "PCA" (Principal Component Analysis, which is a popular technique in data compression). The First Principal Component is the direction in the input space (defined by activities of all the source cells) that has the maximal variance. Thus, PCA-performing Hebb Rule can be used in groups of neurons to compress information coming to them from other brain regions, so that the same information will be represented by fewer neurons.
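A minimal sketch of Oja's rule on assumed toy data (one shared hidden factor plus noise); with a small learning rate the weight vector settles along the first principal component:

import random

# Oja's rule on assumed toy data: all source cells are driven by one shared
# hidden factor plus noise, so the first principal component has roughly
# equal loadings on every input.
RL, n = 0.01, 5
w = [random.random() for _ in range(n)]

for step in range(10000):
    x = random.gauss(0, 1)                         # shared hidden factor
    As = [x + 0.3 * random.gauss(0, 1) for _ in range(n)]
    At = sum(wi * a for wi, a in zip(w, As))       # linear target-cell activity
    w = [wi + (a * At - At * At * wi) * RL for wi, a in zip(w, As)]

print([round(wi, 2) for wi in w])   # roughly equal weights: the 1st PC direction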
(2) BCM version of Hebb Rule
Δwi = Asi * At * (At – ⟨At²⟩) * RL,
where ⟨At²⟩ is the average of the squared activity of the target cell.
This version of the Hebb Rule performs "ICA" (Independent Component Analysis). That is, suppose the set of source cells carries information about several important sensory variables x1 ... xn but, unfortunately, these variables are mixed together: the activity of each source cell is a linear sum of the variables – Asi = Σj wij * xj.
How can we extract x1 ... xn from As1 ... Asm? They can be extracted by ICA (see http://www.cs.ucf.edu/~kursun/fav_ica.doc).
Here x1 ... xn are called "Independent Components". The BCM version of the Hebb Rule will make the target cell learn to represent one of the variables x1 ... xn. If we have several target cells, then together they can learn to represent by their activities all the hidden variables x1 ... xn.
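A minimal sketch of the BCM rule on an assumed toy mixture of two hidden variables; the sliding threshold is a running average of At², and over training the target cell tends to settle on one variable rather than a mixture:

import random

# BCM rule on an assumed toy mixture: two hidden variables x1, x2 are linearly
# mixed into three source-cell activities; the sliding threshold theta tracks
# <At^2>. Over training the target cell tends to favor one hidden variable.
RL = 0.005
mix = [[1.0, 0.2], [0.3, 1.0], [0.8, 0.5]]   # assumed mixing weights
w = [0.1 * random.random() for _ in range(3)]
theta = 0.0                                   # running average of At^2

for step in range(20000):
    x1, x2 = random.random(), random.random() # independent hidden variables
    As = [m[0] * x1 + m[1] * x2 for m in mix]
    At = sum(wi * a for wi, a in zip(w, As))
    theta = 0.999 * theta + 0.001 * At * At
    w = [max(0.0, wi + a * At * (At - theta) * RL) for wi, a in zip(w, As)]

print([round(wi, 2) for wi in w])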
Hebbian rule makes presynaptic cells compete with each other for connection to the target cell:
(1) Suppose a target cell had connections from only 2 source cells. Then if these two presynaptic cells had similar behaviors (i.e., highly overlapping RFs), they will share the target cell equally and will give it a behavior that is the average of theirs.
(2) If the two presynaptic cells had different behaviors (i.e., nonoverlapping or just minimally overlapping RFs), then they will fight for connections until one leaves and the other takes all.
(3) If a neuron receives Hebbian synaptic connections from many other neurons, then the target cell will strengthen connections only with a limited subset of the source cells that all have correlated behaviors (prominently overlapping RF profiles) and will reduce the weights of all the other connections to 0. This subset – defined by having greater-than-0 connection weights on the target cell – is the "AFFERENT GROUP" of that target cell.
HOMEWORK ASSIGNMENT #4
MODELING PROJECT: AFFERENT GROUP SELECTION
Write a computer program to implement Hebbian learning in the network set up in the previous modeling project, i.e., the network of one target cell that receives excitatory inputs from 40 source cells. The source cells have somatotopically arranged RFs.
Task: apply point stimuli to the receptor surface (skin) and adjust all the connections according to the Hebbian rule.
Goal: develop a stable afferent group for the target cell.
Model parameters:
N = 40 - number of source cells
RFR = 3 - receptive field radius
D = 1 - spacing between receptive fields of neighboring neurons
Cex = 5 - excitatory scaling constant
RL = 0.00001 - rate of learning by the connections
Initial weights of connections of source cells to the target cell, w(i), should be chosen randomly and then normalized so that Σw(i) = 1.
Activity of source cell i is: As(i) = 1 - (|S - RFC(i)| / RFR),
where S is the stimulus location on the receptor surface and RFC(i) is the receptive field center of cell i. RFC(i) is computed as RFC(i) = RFR + (i – 1) * D.
As is the "instantaneous firing rate (or frequency)"; it is calculated as the inverse of the time interval between successive action potentials.
If As(i) < 0, set As(i) = 0.
The target cell is modeled as a point electrical circuit, made up of passive membrane conductance gm = 1 (E = -70) and net excitatory conductance Gex (E = 0).
Activity of the target cell is: ΔV = 70 * Gex / (Gex + gm)
In the steady state, Gex would be Gex = Cex * Σ w(i) As(i), where w(i) is the connection weight of source cell i on the target cell.
To reflect temporal behaviors of ion channels,
Gex = (1 – 1/τ) * Gex + (1/τ) * Cex * Σ w(i) As(i)
Use τ = 4.
The new feature of the model is that after each stimulus all the connection weights wi should be adjusted according to the Hebbian rule spelled out below.
Program flow:
Phase 1: Set all the parameters (RFR, RFC(i), w(i), D, N, τ, Cex). The new parameter is RL.
Set the average activity parameters of the source and target cells to 0: ⟨As(i)⟩ = 0, ⟨ΔV⟩ = 0.
Phase 2: Deliver 100000 point stimuli, picked in a random sequence anywhere on the entire receptor surface.
For each stimulus location:
- Compute activities of all the source cells, As(i)
- Set Gex = 0
- Do 20 time steps, updating Gex and ΔV
- Adjust connection weights:
  - compute for all cells w'(i) = w(i) + RL * (As(i) – ⟨As(i)⟩) * (ΔV – ⟨ΔV⟩)
  - if w'(i) < 0, set w'(i) = 0
  - compute SUM = Σ w'(i)
  - compute new values of connection weights: w(i) = w'(i) / SUM
- Update ⟨ΔV⟩ and ⟨As(i)⟩:
  ⟨ΔV⟩ = 0.99 * ⟨ΔV⟩ + 0.01 * ΔV
  ⟨As(i)⟩ = 0.99 * ⟨As(i)⟩ + 0.01 * As(i)
Phase 3: After 100000 stimuli a local afferent group should form.
Show it by (1) plotting w(i) as a function of source cell number, i, and (2) plotting the RF profile of the target cell (i.e., plot ΔV as a function of stimulus location on the receptor surface).
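A minimal Python sketch of this training loop (parameters as specified above; plotting left out):

import random

# Hebbian afferent-group selection (parameters as specified; plotting left out)
N, RFR, D = 40, 3.0, 1.0
TAU, Cex, gm, RL = 4.0, 5.0, 1.0, 0.00001

w = [random.random() for _ in range(N)]
s = sum(w)
w = [wi / s for wi in w]                  # initial weights normalized to sum 1

RFC = [RFR + i * D for i in range(N)]     # receptive field centers
As_avg = [0.0] * N                        # running averages <As(i)>
DV_avg = 0.0                              # running average <DV>
L = RFR + (N - 1) * D + RFR               # receptor surface length = 45

for stim in range(100000):
    S = random.uniform(0, L)              # random stimulus location
    As = [max(0.0, 1 - abs(S - c) / RFR) for c in RFC]
    Gex = 0.0
    for _ in range(20):
        Gex = (1 - 1/TAU) * Gex + (1/TAU) * Cex * sum(wi * a for wi, a in zip(w, As))
    DV = 70 * Gex / (Gex + gm)

    # covariance Hebbian adjustment, with clamping at 0 and normalization
    wp = [max(0.0, wi + RL * (a - am) * (DV - DV_avg))
          for wi, a, am in zip(w, As, As_avg)]
    s = sum(wp)
    w = [wi / s for wi in wp] if s > 0 else wp

    # update the running averages
    DV_avg = 0.99 * DV_avg + 0.01 * DV
    As_avg = [0.99 * am + 0.01 * a for am, a in zip(As_avg, As)]

# w now shows the afferent group; plot w(i) and the resulting RF profile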
Submit for grading:
- brief description of the model
- the two plots
- text of the program
If two cells, A and B, receive afferent connections from a set of source cells and these connections are Hebbian, then each target cell will strengthen connections with some subset of the source cells that have similar behaviors, thus choosing these cells as its afferent group. The two cells, A and B, will likely choose different subsets of the source cells for their afferent groups.
Next, if the target cells are also interconnected by LATERAL connections, then these connections will act on the cells' afferent groups, and the effect will depend on whether the lateral connection is excitatory or inhibitory.
An excitatory lateral connection from cell A to cell B will move the afferent group of cell B towards the afferent group of cell A, until the two afferent groups become identical.
An inhibitory lateral connection from cell A to cell B will move the afferent group of cell B away from the afferent group of cell A, until the RFs of the two afferent groups do not overlap anymore.
Thus, LATERAL EXCITATION = ATTRACTION of afferent groups.
LATERAL INHIBITION = REPULSION of the afferent groups.
CORTICAL ARCHITECTURE
The cerebral cortex is approximately 1.5 – 2 mm thick and is subdivided into 6 cortical layers.
Layer 1 is the topmost layer; it has almost no cell bodies, only dendrites and axons.
Layers 2-3 are called the UPPER LAYERS.
Layer 4 and the bottom part of layer 3 are called the MIDDLE LAYERS (thus upper and middle layers overlap according to this terminology).
Layers 5 and 6 are called the DEEP LAYERS.
Layer 4 is the INPUT LAYER.
Layers 2-3 and 5-6 are the OUTPUT LAYERS.
External (afferent) input to a cortical area comes to layer 4 (and that is why it is called the "input" layer).
This afferent input comes from the thalamic nucleus associated with the particular cortical area and from layers 2-3 of the preceding cortical areas (unless this is a primary sensory cortical area).
Layer 4 in turn distributes the input it receives to all other layers.
Layers 2-3 send their output to the deep layers and to other cortical areas.
Layer 5 sends its output outside the cortex, to the brainstem and spinal cord.
Layer 6 sends its output back to the thalamic nucleus from which this cortical locus got its afferent input, thus forming a feedback loop.
Optional reading: Favorov and Kelly (1994) Network's afferent connectivity is governed by neural mechanics. Society for Neuroscience Abstracts.
Excitatory neurons in layer 4 are called SPINY STELLATE CELLS. They are the cells that receive the afferent input and then distribute it radially among all other layers.
Axons of individual spiny stellate cells form a narrow bundle (they do not spread much sideways) and consequently they activate only a narrow column of cells in the upper and deep layers. Thus the afferent input is distributed preferentially vertically and much less horizontally.
As a result, cortical cells have more similar functional properties (e.g., RF location) in the vertical dimension than in the horizontal dimensions. Going from one cell to the next in a cortical area, RFs change much faster across the cortex horizontally than vertically.
That is, CORTEX IS ANISOTROPIC; it has COLUMNAR ORGANIZATION (i.e., more functional similarity vertically than horizontally).
In other words, a cortical area is organized in the form of CORTICAL COLUMNS.
Because cells in the radial (vertical) dimension have similar RFs, the topographic maps at all cortical depths are in register – there is a single topographic map for all cortical layers.
This topographic map is established in layer 4, because it is the input layer.
FUNCTION OF LAYER 4 IS TO ORGANIZE AFFERENT INPUTS TO A CORTICAL AREA.
The general layout of topographic maps is genetically determined (see lecture #11), preserving the topological relations of the source area in its projection onto the target area.
In addition, there is fine-tuning of topographic maps in layer 4.
The mechanism of this fine-tuning is based on Hebbian synapses of afferent connections and lateral connections in layer 4, adjusting afferent connections to produce an optimal map.
MEXICAN HAT refers to a pattern of lateral connections in a neural network in which each node of the network has excitatory lateral connections with its closest neighbors, but inhibitory lateral connections with more distant neighbors.
Role of the Mexican Hat pattern of lateral connections in layer 4: lateral interconnections move the afferent groups of spiny stellate cells in layer 4, fine-tuning the topographic map there.
The Mexican Hat spreads RFs evenly throughout the input space, making sure that no region of the input space (e.g., a skin region or a region of retina) is left unprocessed by the cortical area.
If some regions of the input space are used more than others and receive a rich variety of stimulus patterns (e.g., the hands of a pianist or a surgeon), the Mexican hat will act to devote more cortical territory to processing inputs from those regions. That is, in the cortical topographic map these regions will be increased in area (while underused regions of the input space will correspondingly shrink in size). Also, sizes of RFs in more used regions will become smaller, while in the underused regions RFs will become larger.
An extreme case of underuse of an input space region involves the somatosensory system – when a part of a limb is amputated. In this case, the cortical regions that originally received input from this part of the body will gradually establish new afferent connections with neighboring surviving parts of the limb.
To summarize, the task of the Mexican hat pattern of lateral connections in layer 4 is to provide an efficient allocation of cortical information-processing resources according to the individual's specific, idiosyncratic behavioral needs.
Reading: pp. 328-332, 335-337.
Florence and Kaas (1995) Somatotopy: plasticity of sensory maps. In: The Handbook of Brain Theory and Neural Networks, M.A. Arbib (ed), Bradford Books/MIT Press, pp. 888-891.
INVERTED MEXICAN HAT pattern of lateral connections
In this type of pattern of lateral connections, each node (locus) of the network has inhibitory lateral connections with its closest neighbors, but excitatory lateral connections with more distant neighbors.
The Inverted Mexican hat will drive immediate neighbors to move their afferent groups away from each other (due to their inhibitory interconnections), but will make them stay close to the afferent groups of the farther neighbors (due to their excitatory interconnections). As a result, RFs of cells in a local cortical region will become SHUFFLED – they will stay in the same local region of the input space, but will overlap only minimally among the closest cells.
The place for the Inverted Mexican hat in layer 4: cells in the cerebral cortex are organized into MINICOLUMNS. Each minicolumn is a radially oriented cord of neurons extending from the bottom (white matter/layer 6 border) to the top (layer 1/layer 2 border). A minicolumn is approximately 0.05 mm in diameter and is essentially a one-cell-wide cortical column.
Neurons within a minicolumn have very similar functional properties (very similar RFs), but adjacent minicolumns have dissimilar functional properties (their RFs overlap only minimally). As a result, neurons in adjacent minicolumns are only very weakly correlated in their behaviors. In local cortical regions RFs of minicolumns appear shuffled – across such local regions RFs shift back and forth in seemingly random directions. It is only on a larger spatial scale – looking across larger cortical distances – that the orderly topographic map becomes apparent.
Thus, a cortical area has a topographic map of the input space, but this map looks noisy on a fine spatial scale.
The likely mechanism responsible for shuffling RFs among local groups of minicolumns is an Inverted Mexican hat pattern of lateral connections: adjacent minicolumns inhibit each other, but excite more distant minicolumns.
The purpose of Inverted Mexican hat – to provide local cortical regions (groups of minicolumns) with diverse information about a local region of the input space.
Overall, the pattern of lateral connections in the cortical input layer (layer 4) apparently is a combination of a small-scale Inverted Mexican hat and a larger-scale Mexican hat.
That is, each minicolumn inhibits its immediate neighbors, excites the 1-2 next neighbors, and again inhibits more distant minicolumns.
ORGANIZATION OF CORTICAL OUTPUT LAYERS
Approximately 80% of cortical cells are PYRAMIDAL cells. These are excitatory cells.
The other 20% belong to several different cell classes:
- SPINY STELLATE cells; excitatory cells located in layer 4
- CHANDELIER cells; inhibitory cells in upper and deep layers, with synaptic connections on the initial segments of axons of pyramidal cells, and therefore in a position to exert inhibition most effectively
- BASKET cells; inhibitory cells in all layers, with synapses on somata and dendrites of pyramidal and other basket cells
- DOUBLE BOUQUET cells; inhibitory cells located in layer 2 and top layer 3, with synapses on dendrites of pyramidal and spiny stellate cells
- BIPOLAR cells; inhibitory cells
- SPIDER WEB cells; inhibitory cells in layers 2 and 4
(Note: the above list of targets of connections of different types of cells is not complete; it only mentions the most notable ones)
Cells in the output layers have dense and extensive connections with each other. Inhibitory cells send their axons only short distances horizontally, typically less than 0.3-0.5 mm.
Basket cells are an exception: they can spread their axons up to 1 mm away from the soma.
Pyramidal cells have a much more extensive horizontal spread of axons. Each pyramidal cell makes a lot of synapses locally (within an approx. 0.3-0.5 mm cortical column), but it also sends axon branches horizontally for several mm (2-8) away from the soma and forms sparse connections over such wide cortical territories. These connections are called LONG-RANGE HORIZONTAL CONNECTIONS.
Pyramidal cells, of course, also send some axon branches outside their cortical area (to other cortical areas, or brainstem, or thalamus; see lecture 14).
Thus, cortical columns separated by 1 mm or more in a cortical area are linked exclusively by excitatory connections (of pyramidal cells). However, because these connections are made on both excitatory cells and inhibitory cells, their net effect on the target cortical column can be either excitatory or inhibitory (depending on the relative strengths of these connections). In fact, inhibitory cells are more easily excitable than pyramidal cells, and as a result long-range horizontal connections evoke an initial excitation in the target column, quickly followed by a longer period of inhibition. This sequence of excitation-inhibition is produced by strong lateral input. When the lateral input is weak, it does not activate the inhibitory cells sufficiently, and as a result it evokes only an excitatory response in the target column.
Take-home test
Read the following three papers:
Shadlen and Newsome (1994) Noise, neural codes and cortical organization. Current Opinion in Neurobiology 4: 569-579
Softky (1995) Simple codes versus efficient codes. Current Opinion in Neurobiology 5: 239-247
Shadlen and Newsome (1995) Is there a signal in the noise? Current Opinion in Neurobiology 5: 248-250
Briefly summarize the question addressed by these papers, the alternative answers offered by the opponents, and the essence of their arguments in favor of their positions.
Due: April 8.
Before we turn to the information processing carried out in the output cortical layers, we should consider them as a NONLINEAR DYNAMICAL SYSTEM.
The reason is that dynamical systems (i.e., sets of interacting elements of whatever nature) that are described by nonlinear equations have a very strong tendency to generate complex dynamical behaviors, loosely referred to as CHAOTIC DYNAMICS. Henri Poincaré was the first to recognize this feature of nonlinear dynamical systems 100 years ago, but a more systematic study of such dynamics started in the 1960s with the work of Edward Lorenz.
From these studies we know that even very simple dynamical systems can be unstable and exhibit complex dynamical behaviors; more structurally complex systems are even more so.
The cortical network is structurally a very complex dynamical system and can be expected to generate complex dynamical behaviors whether we like them or not. Such "unplanned" behaviors are called EMERGENT BEHAVIORS or EMERGENT PHENOMENA.
Dynamical behaviors can be displayed as TIME SERIES or PHASE-SPACE PLOTS.
Phase-space plots can be constructed in a number of ways:
1) Use one variable (e.g., activity of one cell at time t) and plot it against itself some fixed time later, X(t) vs. X(t+Δt). More generally, you can make an N-dimensional phase-space plot by plotting X(t) vs. X(t+Δt1) vs. X(t+Δt2) vs. … X(t+Δt(N-1)).
2) Plot one variable vs. its first derivative (for a 2-D plot), or vs. its first and second etc. derivatives for higher-dimensional plots. For example, you can plot the position of a pendulum against its velocity to produce a 2-D phase-space plot.
3) Use N different variables (e.g. simultaneous activities of 2 or more different cells) and plot them against each other to produce N-dimensional plot.
Regardless of the particular approach to producing a phase-space plot, the resulting graphs will look qualitatively the same (but not quantitatively): they will all show a single point, or a closed loop, or a quasi-periodic figure, or a chaotic plot.
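As an illustration of method (1), here is a small sketch that delay-embeds an assumed quasi-periodic toy signal:

import math

# Delay embedding of an assumed quasi-periodic toy signal: two sine waves
# whose frequency ratio is irrational (sqrt(2)).
X = [math.sin(0.3 * t) + 0.5 * math.sin(0.3 * math.sqrt(2) * t)
     for t in range(2000)]
dt = 10                                     # embedding delay, in time steps
points = [(X[t], X[t + dt]) for t in range(len(X) - dt)]
# Plotting these points (e.g., with matplotlib) traces a quasi-periodic
# figure rather than a single closed loop.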
STEADY-STATE DYNAMICS
In the steady state the dynamical system has reached its DYNAMICAL ATTRACTOR.
The attractor might be:
FIXED POINT (a stable single state, showing up as a point in phase-space plots)
LIMIT CYCLE, or PERIODIC (periodic oscillations of the system's state, showing up as a closed loop in phase-space plots)
QUASI-PERIODIC (oscillations that look almost periodic, but not quite; this is due to the oscillations having two or more frequencies the ratio of which is an IRRATIONAL number)
CHAOTIC (non-periodic fluctuations)
Modeling studies of cortical networks demonstrate that they can easily produce complex dynamical behaviors, including chaotic ones. Depending on network parameters, they can have fixed-point, periodic, quasi-periodic, or chaotic attractors. Each and every parameter of the network (e.g., densities of excitatory and inhibitory connections, relative strengths of excitation and inhibition, different balances of fast and slow transmitter-gated ion channels, afterhyperpolarization, stimulus strength, etc.) has an effect on the complexity of the dynamics. Some increase the complexity, others decrease it, yet others have a non-monotonic effect.
To show how the complexity of dynamics varies with a particular network parameter, use BIFURCATION PLOTS.
A bifurcation plot is a plot whose horizontal axis represents the values of the studied parameter, while the vertical axis represents a section through the phase space, e.g., the values of some variable (such as the activity of a particular cell) at which the first derivative of that variable equals 0.
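As an illustration, here is a sketch of a bifurcation plot for the logistic map x → r*x*(1-x), a standard textbook system used here instead of the cortical network model:

# Bifurcation plot of the logistic map x -> r*x*(1-x): for each value of the
# control parameter r, discard the transient and record where the trajectory
# settles. Plotting x vs. r shows a fixed point, then period doubling, then
# chaos with embedded periodic windows.
bifurcation = []
for k in range(300):                 # sweep r from 2.5 to 4.0
    r = 2.5 + 0.005 * k
    x = 0.5
    for _ in range(500):             # discard the transient
        x = r * x * (1 - x)
    for _ in range(100):             # sample the attractor
        x = r * x * (1 - x)
        bifurcation.append((r, x))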
The progression of complexity of dynamics from fixed-point to chaotic is not monotonic.
With even a tiny change in the controlling parameter, dynamics can change from chaotic to quasi-periodic or periodic, and back and forth. Thus, for example, even two very similar stimuli can have attractors of very different types.
What changes monotonically is the probability of having dynamics of a certain type.
CHAOS is DETERMINISTIC DISORDER. A chaotic attractor is a nonperiodic attractor.
Chaotic dynamics is distinguished by its SENSITIVITY TO INITIAL CONDITIONS. That is, a chaotic attractor can be thought of as a ball of string of infinite length (because it never repeats its path) in a high-dimensional state space. Two distant points on this string can be very close to each other in the space (because the string is folded), but if we travel along the string starting from these two points, the paths will gradually diverge and can bring us to very distant locations in the state space. This is what is meant by "sensitivity to initial conditions" – even very similar initial conditions lead to very different outcomes.
This is also why it is impossible to make predictions about the future development of chaotic dynamical systems.
Based on modeling studies, cortical networks can readily generate chaotic dynamics. But there are also opposing factors:
- Complexity of dynamics is bounded; it does not grow with increases in the structural complexity of the network beyond an initial level.
- Stimulus strength reduces complexity of dynamics: the stronger the stimulus, the less chaotic the dynamics.
- Random noise (an unavoidable feature of biological systems) reduces the complexity of dynamics, converting chaotic dynamics to quasi-periodic-like ones.
Overall, it appears likely that cortical networks operate at the EDGE OF CHAOS.
TRANSIENT DYNAMICS
Cortical networks never reach their dynamical attractors, because they
deal only with TRANSIENTS. Steady-state dynamics is impossible
in cortical networks because of (1) the
constant variability of sensory inputs, (2) adaptation in sensory receptors
and in CNS neurons, and (3) the presence of long-term processes in neurons.
Transients are much more complex than steady-state dynamics. A dynamical
system might have a fixed-point attractor, but to get to it the system
might have to go through a very complex,
chaotic-looking temporal process.
Although transients in cortical networks look very chaotic, they are
quite ORDERLY in that there is an underlying WAVEFORM. This underlying
waveform is very stimulus-specific – even
a small change in the stimulus can greatly change its shape.
Conclusions
We draw a number of lessons from our studies of model
cortical networks. First, one major source of dynamics in cortical
networks is likely to be the sheer structural complexity of these
networks, regardless of specific details. This should be sufficient
for emergence of quasiperiodic or even chaotic dynamics, although it appears
from our studies that such spurious dynamical
behaviors will be greatly constrained in their complexity. This
constraint is fortunate, considering that a crucial requirement for perception,
and thus for the cortex, is the ability to attend to some
details of the perceived sensory patterns while at the same time ignoring
other, irrelevant details. High-dimensional dynamics, with its great
sensitivity to conditions – including
perceptually irrelevant ones – would present a major obstacle to such
detail-invariant information processing. In contrast, low-dimensional
dynamics might offer a degree of sensitivity to
sensory input details that is optimal for our ability to discriminate
among similar stimuli without being captives of irrelevant details.
Second, spurious dynamics is likely to contribute many
intriguing features to cortical stimulus-evoked behaviors. We tend
to expect specific mechanisms for specific dynamical behaviors, but
the presence of spurious dynamics should warn us that a clearly identifiable
dynamical feature does not necessarily imply a clearly identifiable cause:
the cause might be distributed – everywhere
and nowhere.
Finally, spurious dynamics has great potential to contribute
to cortical cells’ functional properties, constraining (and maybe in some
cases expanding) information-representational capabilities of cortical
networks. Some of these contributions might be functionally insignificant,
others might be useful, and yet others might be detrimental and thus require
cortical networks to develop special mechanisms to counteract them.
Reading: Eberhart (1989) Chaos theory for the biomedical
engineer. IEEE Engineering in Medicine and Biology 8: 41-45.
Favorov et al. (2002) Spurious dynamics in somatosensory cortex.
Optional: For a very nice source on chaos (with great graphics)
see hypertextbook.com/chaos
Also, a very popular book on chaos written for non-scientists is:
Gleick (1988) Chaos: Making a New Science. ISBN 0140092501
Neural networks can be set up to learn INPUT-TO-OUTPUT TRANSFER FUNCTIONS.
That is, the network is given a set of input channels IN1, IN2, …, INn.
A training set is specified, in which for each particular input pattern
(IN vector) there is a “desired” output, OUT (the output
might be a single variable or a vector, but here we will focus on a
single output). Formally, OUT = f(IN1, IN2, …, INn), and f is called
a TRANSFER FUNCTION. The network’s task is to
learn to produce the correct output for each input vector in the training
set. This is accomplished by presenting the network with a randomly
chosen sequence of training input-output pairs,
computing the network’s responses, and adjusting the weights of connections
in the network after each such presentation.
ERROR-CORRECTION LEARNING is a type of learning in neural networks in which
connection weights are adjusted as a function of the error between the network’s
desired and actual
outputs. This is SUPERVISED LEARNING.
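For a single linear unit, error-correction learning reduces to the classic delta rule; a minimal sketch (the 3-channel training set and learning rate here are made up for illustration):

    import numpy as np

    # One linear unit with 3 input channels; a made-up 2-pattern training set.
    w = np.zeros(3)                                   # connection weights
    RL = 0.05                                         # rate of learning (illustrative)
    training_set = [(np.array([1.0, 0.0, 1.0]),  1.0),
                    (np.array([0.0, 1.0, 1.0]), -1.0)]
    for sweep in range(200):
        for IN, OUTdesired in training_set:
            OUT = w @ IN                              # the unit's actual output
            w += RL * (OUTdesired - OUT) * IN         # adjust weights by the error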
ERROR-BACKPROPAGATION LEARNING is a version of error-correction learning used in nonlinear multi-layered networks.
BACKPROP NETS can vary in the number of input channels, the number of layers of hidden units, and the number of units in each hidden layer.
The power of the backprop algorithm is that it can, in principle, learn any
transfer function, given enough hidden layers and enough units in those
layers. In practice, however, more complex transfer functions take
more time to learn, and for even moderately complex nonlinear functions
the learning time quickly grows
to unacceptably long periods. Also, the network
might settle on a less-than-optimal solution.
Homework Assignment #5: ERROR-BACKPROPAGATION (“BACKPROP”) NEURAL NETWORK
Write a computer program to implement the Error-Backpropagation Learning
Algorithm in a backprop network made up of 2 input channels IN1 and IN2,
one layer of 10 hidden units, and 1
output unit.
The network's task is to learn the relationship between the activities of
input channels IN1 and IN2 and the desired output OUTdesired. An
input channel's activity can be either 0 or 1.
This relationship is: OUTdesired = IN1 exclusive-or IN2.
Thus there are only 4 possible input patterns:
IN1   IN2   OUTdesired
 0     0        0
 0     1        1
 1     0        1
 1     1        0
Activity of hidden unit i is: Hi = tanh(Win1i * IN1 + Win2i * IN2),
where tanh() is the hyperbolic tangent function, tanh(x) = (e^x - e^-x)/(e^x + e^-x).
Activity of the output unit is: OUT = tanh( Σi Whi * Hi ),
where Σi denotes the sum over all 10 hidden units.
Assign initial weights to all connections RANDOMLY:
-3 < Win < +3
-0.4 < Wh < +0.4
Present 1000 input patterns chosen randomly from the 4 possible ones.
For each input pattern compute all Hs and OUT, and then adjust the connection
weights according to these steps:
(1) Compute the error: ERROR = OUTdesired - OUT
(2) Compute the error signal: d = ERROR * (1 - OUT^2)
(3) Adjust the hidden-unit connections: Whi = Whi + d * Hi * RLh,
where RLh = 0.003 is the rate of learning.
(4) Backpropagate the error signal to each hidden unit i: di = d * Whi * (1 - Hi^2)
(5) Adjust the input connections: Winij = Winij + dj * INi * RLin,
where RLin = 6 is the rate of learning.
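A minimal Python sketch of this training loop, assuming NumPy (it follows steps (1)-(5) literally, including adjusting Wh in step (3) before using the adjusted values in step (4)):

    import numpy as np

    rng = np.random.default_rng(1)
    Win = rng.uniform(-3.0, 3.0, (2, 10))     # input -> hidden weights, -3 < Win < +3
    Wh = rng.uniform(-0.4, 0.4, 10)           # hidden -> output weights, -0.4 < Wh < +0.4
    RLh, RLin = 0.003, 6.0                    # rates of learning from the assignment

    patterns = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),   # the XOR training set
                ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]
    errors = []
    for trial in range(1000):
        IN, OUTdesired = patterns[rng.integers(4)]
        IN = np.asarray(IN)
        H = np.tanh(IN @ Win)                 # hidden-unit activities
        OUT = np.tanh(Wh @ H)                 # output-unit activity
        ERROR = OUTdesired - OUT              # step (1)
        d = ERROR * (1.0 - OUT**2)            # step (2): error signal
        Wh = Wh + d * H * RLh                 # step (3)
        dh = d * Wh * (1.0 - H**2)            # step (4): signals for hidden units
        Win = Win + RLin * np.outer(IN, dh)   # step (5): Winij += dj * INi * RLin
        errors.append(abs(ERROR))

Plotting the recorded |ERROR| values against the trial number produces the error curve requested below.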
Submit for grading:
- a brief description of the project
- a plot of | ERROR | as a function of training trial #
- the text of the program
Optional Reading: J. Anderson (1995) An Introduction to Neural Networks. ISBN 0-262-01144-1, chapter 9, pp. 239-280.
Lectures 25-26, April 15-17: SINBAD MODEL OF CORTICAL ORGANIZATION AND FUNCTION
MAIN READING:
Download from www.sinbad.info: Favorov
and Ryder. SINBAD: a neocortical mechanism for discovering environmental
variables and regularities hidden in sensory input.
Also Required Reading:
Clark and Thornton (1997) Trading
spaces: computation, representation, and the limits of uninformed learning.
Behavioral and Brain Sciences 20: 57-66.
Perlovsky (1998) Conundrum of combinatorial
complexity. IEEE Trans. on Pattern Analysis and Machine Intelligence 20:
666-670.
FUNCTIONAL ORGANIZATION OF CEREBRAL CORTEX (OVERVIEW)
Summary of functions of major components of thalamo-cortical network:
THALAMIC NUCLEUS - relays environmental variables from sensory receptors
to cortical input layer (layer 4)
CORTICAL INPUT LAYER - organizes sensory inputs and delivers them to
dendrites of pyramidal cells in the output layers
BASAL DENDRITES OF PYRAMIDAL CELLS - tune pyramidal cells to new orderly
environmental features (i.e., inferentially useful environmental variables)
LATERAL CONNECTIONS ON APICAL DENDRITES - learn inferential relations
among environmental variables
CORTICO-THALAMIC FEEDBACK - disambiguates and fills in missing sensory
information in thalamus
HIGHER-ORDER CORTICAL AREAS - pyramidal cells there tune to more complex
orderly features of the environment and learn more complex inferential
relations
CORTICO-CORTICAL FEEDBACK - disambiguates and fills in missing information
in lower-level cortical areas, making use of deeper understanding of the
situation by the higher-level
cortical areas
1. Describe how neurons communicate with each other (synaptic transmission).
2. Describe major subdivisions of the nervous system, partitioning of the cerebral cortex, and general organization of sensory pathways to the cerebral cortex. Indicate functional roles of the identified brain regions.
3. Give concise overview of the topographic organization of cerebral cortical areas: define the concept of a Receptive Field, describe basic properties of a typical topographic map, list (but do not explain how they work) the major mechanisms of topographic map formation and contributions of those mechanisms to cortical topography.
4. Describe Hebbian synaptic plasticity, and its consequences for the development of patterns of afferent connections to cortical neurons. Include discussion of Mexican Hat and Inverted Mexican Hat patterns of lateral connections, how they affect the afferent connections to cortical neurons, and the roles they play in formation of cortical topographic maps.
5. Describe cortical functional architecture, including laminar subdivisions, flow of information across layers, connectional organization of the output layers (lateral connections), and their interpretation in the context of the SINBAD model of the cortical network.
6. Describe nonlinear dynamical properties of the cortical network: define phase-space plots, bifurcation plots, the concept of a dynamical attractor, and the types of attractors. What distinguishes chaotic dynamics? What kinds of dynamics are present in the cortical network?
7. Discuss the importance of knowing the hidden, but influential environmental
variables. How can such hidden variables be discovered? Outline the SINBAD
model of the cortical pyramidal cell. How can a network of SINBAD cells
function as an INFERENTIAL model of the outside world? How can such an
internal model be used to generate successful behaviors?
---------------
During the exam, you will be given two of these questions. You will
have to write your answers without any help from notes, books, etc. The
third question will be either (1) to draw a compartmental model of a particular
neuron, or (2) to write equations describing a point model of a particular
neuron.
If you have any questions, you can see me any afternoon before the exam.
---------------
NEUROSCIENTIFIC TERMS THAT YOU NEED TO KNOW:
(This list of terms will be provided to you during the final
exam)
Neuron, Dendrites, Soma, Axon hillock, Axon, Axon terminals, Synapse
Action potential, Spike
Neuronal membrane, Ion pump, Ion channel, Na+/K+ ion pump
K+, Na+, and Cl- resting ion channels, GATED ION CHANNELS
Concentration gradient, Electrical gradient
MEMBRANE POTENTIAL, RESTING POTENTIAL, HYPERPOLARIZATION, DEPOLARIZATION
Excitation, Inhibition
EQUILIBRIUM POTENTIAL:
ENa+, the equilibrium potential of Na+ channels, is +40 mV.
EK+, the equilibrium potential of K+ channels, is -90 mV.
ECl-, the equilibrium potential of Cl- channels, is -70 mV.
Action-potential-generating voltage-gated Na+ channels
Action-potential-generating voltage-gated K+ channels
Action potential threshold = -55 mV
ABSOLUTE REFRACTORY PERIOD, RELATIVE REFRACTORY PERIOD
SYNAPTIC TRANSMISSION, PRESYNAPTIC NEURON, POSTSYNAPTIC NEURON
Synaptic cleft, Synaptic vesicles, TRANSMITTER, RECEPTORS
RECEPTOR-GATED ION CHANNELS
POSTSYNAPTIC POTENTIAL (PSP), EXCITATORY PSP (EPSP), INHIBITORY PSP (IPSP)
Inhibitory synapses, Excitatory synapses
SYNAPTIC INTEGRATION, SPATIAL SUMMATION, TEMPORAL SUMMATION
SYNAPTIC EFFICACY, Connection “weight” (or strength)
Passive membrane conductance, Membrane capacitance
Channel conductance in series with a battery
POINT NEURON MODEL, COMPARTMENTAL MODEL
ADAPTATION, Presynaptic adaptation, Postsynaptic receptor/channel desensitization
AFTERHYPERPOLARIZATION
NERVOUS SYSTEM, PERIPHERAL NERVOUS SYSTEM, CENTRAL NERVOUS SYSTEM (CNS)
SENSORY NERVES, MOTOR NERVES
BRAIN, SPINAL CORD, BRAINSTEM, CEREBELLUM, DIENCEPHALON, CEREBRAL HEMISPHERES
CEREBRAL CORTEX, LOBES: FRONTAL LOBE, PARIETAL LOBE, OCCIPITAL LOBE, TEMPORAL LOBE, INSULA, LIMBIC LOBE
CORTICAL AREAS: PRIMARY CORTICAL AREAS, SECONDARY CORTICAL AREAS, ASSOCIATIVE CORTICAL AREAS
SENSORY PATHWAYS: SENSORY RECEPTORS, PRIMARY AFFERENT NEURONS, SENSORY NUCLEUS, RELAY NUCLEUS in THALAMUS
MOTOR CONTROL SYSTEM: PREFRONTAL CORTEX, PREMOTOR CORTEX, PRIMARY MOTOR CORTICAL AREA, MOTOR NUCLEI, SKELETAL MUSCLES, BASAL GANGLIA
MOTIVATIONAL SYSTEM: HYPOTHALAMUS, AMYGDALA, LIMBIC CORTEX, RETICULAR FORMATION
CORTICAL TOPOGRAPHY: POINT-TO-POINT, SMALL AREA-TO-SMALL AREA, TOPOGRAPHIC MAPS, RECEPTIVE FIELD (RF), RECEPTOR SURFACE
Genetic mechanism of generating topographic maps in cortical areas
SYNAPTIC PLASTICITY, HEBBIAN SYNAPTIC PLASTICITY, HEBBIAN RULE
LONG-TERM POTENTIATION (LTP), LONG-TERM DEPRESSION (LTD)
AFFERENT GROUP
LATERAL EXCITATION = ATTRACTION of afferent groups
LATERAL INHIBITION = REPULSION of afferent groups
MEXICAN HAT, INVERTED MEXICAN HAT
Cortical layers: UPPER LAYERS, MIDDLE LAYERS, DEEP LAYERS, INPUT LAYER, OUTPUT LAYERS
COLUMNAR ORGANIZATION, CORTICAL COLUMNS, MINICOLUMNS
Locally shuffled receptive fields
PYRAMIDAL cells, SPINY STELLATE cells
LONG-RANGE HORIZONTAL CONNECTIONS
NONLINEAR DYNAMICAL SYSTEM, CHAOTIC DYNAMICS, EMERGENT BEHAVIORS
TIME SERIES, PHASE-SPACE PLOTS, BIFURCATION PLOTS
STEADY-STATE DYNAMICS, TRANSIENT DYNAMICS
DYNAMICAL ATTRACTOR: FIXED POINT, LIMIT CYCLE, PERIODIC, QUASI-PERIODIC, CHAOTIC
DETERMINISTIC DISORDER, SENSITIVITY TO INITIAL CONDITIONS
TRANSIENTS
INPUT-TO-OUTPUT TRANSFER FUNCTIONS
ERROR-CORRECTION LEARNING, SUPERVISED LEARNING
ERROR-BACKPROPAGATION LEARNING, BACKPROP NETS
BASAL DENDRITES OF PYRAMIDAL CELLS, APICAL DENDRITES
CORTICO-THALAMIC FEEDBACK, HIGHER-ORDER CORTICAL AREAS, CORTICO-CORTICAL FEEDBACK