1 Introduction to the book
2 Introduction to the book
  2.1 What is computational neuroscience?
  2.2 Marr’s levels
  2.3 Why computers?
  2.4 What is the real value gained?
  2.5 Why Python
  2.6 Getting started with Python
  2.7 Learning Python
3 Numpy and Matplotlib
  3.1 Learning Python
4 Intro to calculus
5 Intro to dynamical systems
  5.1 Differential equations
  5.2 Euler’s method
6 Simple neuron models
  6.1 Review of HH
  6.2 Leaky integrate and fire
  6.3 Quadratic integrate and fire
  6.4 Izhikevich neuron
7 Hodgkin-Huxley neuron model
  7.1 neuron_func_1
  7.2 neuron_func_2
  7.3 neuron_func_3
  7.4 neuron_func_4
  7.5 neuron_func_5
  7.6 neuron_func_6
  7.7 neuron_func_7
8 Neurotransmitter release
  8.1 Synaptic responses
  8.2 Spikes as a change in conductance
  8.3 Coupling conductance to PSP
  8.4 PSPs superimpose
  8.5 PSP model in the Izhikevich neuron
  8.6 Chaining neurons together
9 Synaptic Plasticity
  9.1 Python helper functions
  9.2 Hebbian Learning 1
  9.3 Hebbian Learning 2
  9.4 Hebbian Learning 3
  9.5 Hebbian STDP
10 Reinforcement Learning
11 Reinforcement learning rule
12 Example 1
13 Example 2
14 Example 3
  14.1 Reinforcement learning framework
  14.2 Estimating the state-value function
  14.3 Temporal difference (TD) learning
    14.3.1 TD RL model of classical conditioning
    14.3.2 TD RL as the learning signal in a spiking network
TD(0) for estimating
\(v_{\pi}\)
    14.4.1 TD in a simple 2-arm bandit task
    14.4.2 Action selection policy
  14.5 SARSA for estimating \(Q\)
  14.6 Q-learning for estimating \(\pi\)
    14.6.1 Q-learning applied to instrumental conditioning
    14.6.2 Q-learning applied to instrumental conditioning 2
  14.7 Dyna-Q: Model-based RL
    14.7.1 Dyna-Q applied to instrumental conditioning
15 Supervised Learning
16 Supervised Learning
Introduction to Computational Neuroscience