Nengo Modelling API

Nengo Objects
class nengo.Network(label=None, seed=None, add_to_container=None)

A network contains ensembles, nodes, connections, and other networks.
A network is primarily used for grouping together related objects and connections for visualization purposes. However, you can also use networks as a nice way to reuse network creation code.
To group together related objects that you do not need to reuse, you can create a new Network and add objects in a with block. For example:

    network = nengo.Network()
    with network:
        with nengo.Network(label="Vision"):
            v1 = nengo.Ensemble(100, dimensions=2)
        with nengo.Network(label="Motor"):
            sma = nengo.Ensemble(100, dimensions=2)
        nengo.Connection(v1, sma)
To reuse a group of related objects, you can create a new subclass of Network, and add objects in the __init__ method. For example:

    class OcularDominance(nengo.Network):
        def __init__(self):
            self.column = nengo.Ensemble(100, dimensions=2)

    network = nengo.Network()
    with network:
        left_eye = OcularDominance()
        right_eye = OcularDominance()
        nengo.Connection(left_eye.column, right_eye.column)
Parameters:
label : str, optional (Default: None)
Name of the network.
seed : int, optional (Default: None)
Random number seed that will be fed to the random number generator. Setting the seed makes the network’s build process deterministic.
add_to_container : bool, optional (Default: None)
Determines if this network will be added to the current container. If None, this network will be added to the network at the top of the Network.context stack unless the stack is empty.

Attributes:
connections (list): Connection instances in this network.
ensembles (list): Ensemble instances in this network.
label (str): Name of this network.
networks (list): Network instances in this network.
nodes (list): Node instances in this network.
probes (list): Probe instances in this network.
seed (int): Random seed used by this network.

all_objects : list
All objects in this network and its subnetworks.

all_ensembles : list
All ensembles in this network and its subnetworks.

all_nodes : list
All nodes in this network and its subnetworks.

all_networks : list
All networks in this network and its subnetworks.

all_connections : list
All connections in this network and its subnetworks.

all_probes : list
All probes in this network and its subnetworks.

config : Config
Configuration for this network.
class nengo.Ensemble(n_neurons, dimensions, radius=Default, encoders=Default, intercepts=Default, max_rates=Default, eval_points=Default, n_eval_points=Default, neuron_type=Default, gain=Default, bias=Default, noise=Default, label=Default, seed=Default)

A group of neurons that collectively represent a vector.

Parameters:
n_neurons : int
The number of neurons.
dimensions : int
The number of representational dimensions.
radius : int, optional (Default: 1.0)
The representational radius of the ensemble.
encoders : Distribution or (n_neurons, dimensions) array_like, optional (Default: UniformHypersphere(surface=True))
The encoders used to transform from representational space to neuron space. Each row is a neuron’s encoder; each column is a representational dimension.
intercepts : Distribution or (n_neurons,) array_like, optional (Default: nengo.dists.Uniform(-1.0, 1.0))
The point along each neuron's encoder where its activity is zero. If e is the neuron's encoder, then the activity will be zero when dot(x, e) <= c, where c is the given intercept.
max_rates : Distribution or (n_neurons,) array_like, optional (Default: nengo.dists.Uniform(200, 400))
The activity of each neuron when the input signal x is magnitude 1 and aligned with that neuron's encoder e; i.e., when dot(x, e) = 1.
eval_points : Distribution or (n_eval_points, dims) array_like, optional (Default: nengo.dists.UniformHypersphere(surface=True))
The evaluation points used for decoder solving, spanning the interval (-radius, radius) in each dimension, or a distribution from which to choose evaluation points.
n_eval_points : int, optional (Default: None)
The number of evaluation points to be drawn from the eval_points distribution. If None, then a heuristic is used to determine the number of evaluation points.
neuron_type : NeuronType, optional (Default: nengo.LIF())
The model that simulates all neurons in the ensemble (see NeuronType).
gain : Distribution or (n_neurons,) array_like (Default: None)
The gains associated with each neuron in the ensemble. If None, then the gain will be solved for using max_rates and intercepts.
bias : Distribution or (n_neurons,) array_like (Default: None)
The biases associated with each neuron in the ensemble. If None, then the bias will be solved for using max_rates and intercepts.
noise : Process, optional (Default: None)
Random noise injected directly into each neuron in the ensemble as current. A sample is drawn for each individual neuron on every simulation step.
label : str, optional (Default: None)
A name for the ensemble. Used for debugging and visualization.
seed : int, optional (Default: None)
The seed used for random number generation.
Attributes:
bias (Distribution or (n_neurons,) array_like or None): The biases associated with each neuron in the ensemble.
dimensions (int): The number of representational dimensions.
encoders (Distribution or (n_neurons, dimensions) array_like): The encoders, used to transform from representational space to neuron space. Each row is a neuron's encoder, each column is a representational dimension.
eval_points (Distribution or (n_eval_points, dims) array_like): The evaluation points used for decoder solving, spanning the interval (-radius, radius) in each dimension, or a distribution from which to choose evaluation points.
gain (Distribution or (n_neurons,) array_like or None): The gains associated with each neuron in the ensemble.
intercepts (Distribution or (n_neurons) array_like or None): The point along each neuron's encoder where its activity is zero. If e is the neuron's encoder, then the activity will be zero when dot(x, e) <= c, where c is the given intercept.
label (str or None): A name for the ensemble. Used for debugging and visualization.
max_rates (Distribution or (n_neurons,) array_like or None): The activity of each neuron when dot(x, e) = 1, where e is the neuron's encoder.
n_eval_points (int or None): The number of evaluation points to be drawn from the eval_points distribution. If None, then a heuristic is used to determine the number of evaluation points.
n_neurons (int or None): The number of neurons.
neuron_type (NeuronType): The model that simulates all neurons in the ensemble (see nengo.neurons).
noise (Process or None): Random noise injected directly into each neuron in the ensemble as current. A sample is drawn for each individual neuron on every simulation step.
radius (int): The representational radius of the ensemble.
seed (int or None): The seed used for random number generation.

neurons
A direct interface to the neurons in the ensemble.

probeable : tuple
Signals that can be probed on an ensemble.

size_in
The dimensionality of the ensemble.

size_out
The dimensionality of the ensemble.
class
nengo.ensemble.
Neurons
(ensemble)[source]¶ An interface for making connections directly to an ensemble’s neurons.
This should only ever be accessed through the
neurons
attribute of an ensemble, as a way to signal toConnection
that the connection should be made directly to the neurons rather than to the ensemble’s decoded value.-
ensemble
¶ (Ensemble) The ensemble these neurons are part of.
-
probeable
¶ (tuple) Signals that can be probed in the neuron population.
-
size_in
¶ (int) The number of neurons in the population.
-
size_out
¶ (int) The number of neurons in the population.
-
-
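For illustration, a sketch of connecting directly to an ensemble's neurons (the node value and transform are arbitrary); because the connection targets the neurons, the transform needs one row per neuron:

    import numpy as np
    import nengo

    with nengo.Network() as net:
        stim = nengo.Node(0.5)
        ens = nengo.Ensemble(100, dimensions=1)
        # Bypass the decoded value and drive each neuron's input current.
        nengo.Connection(stim, ens.neurons, transform=np.ones((100, 1)))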
class
nengo.
Node
(output=Default, size_in=Default, size_out=Default, label=Default, seed=Default)[source]¶ Provide non-neural inputs to Nengo objects and process outputs.
Nodes can accept input, and perform arbitrary computations for the purpose of controlling a Nengo simulation. Nodes are typically not part of a brain model per se, but serve to summarize the assumptions being made about sensory data or other environment variables that cannot be generated by a brain model alone.
Nodes can also be used to test models by providing specific input signals to parts of the model, and can simplify the input/output interface of a Network when used as a relay to/from its internal ensembles (see EnsembleArray for an example).

Parameters:
output : callable, array_like, or None
Function that transforms the Node inputs into outputs, a constant output value, or None to transmit signals unchanged.
size_in : int, optional (Default: 0)
The number of dimensions of the input data parameter.
size_out : int, optional (Default: None)
The size of the output signal. If None, it will be determined based on the values of output and size_in.
label : str, optional (Default: None)
A name for the node. Used for debugging and visualization.
seed : int, optional (Default: None)
The seed used for random number generation. Note: no aspects of the node are random, so currently setting this seed has no effect.
Attributes:
label (str): The name of the node.
output (callable, array_like, or None): The given output.
size_in (int): The number of dimensions for incoming connections.
size_out (int): The number of output dimensions.

probeable : tuple
Signals that can be probed on a node.
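For illustration, a sketch of the three common kinds of output (the particular values and functions are arbitrary):

    import numpy as np
    import nengo

    with nengo.Network() as net:
        constant = nengo.Node(output=[0.5, -0.5])          # fixed 2-D output
        stimulus = nengo.Node(output=lambda t: np.sin(t))  # function of time
        # With size_in > 0, the callable receives (t, x) and can relay or
        # transform the signals connected into the node.
        relay = nengo.Node(output=lambda t, x: 2 * x, size_in=2)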
class
nengo.
Connection
(pre, post, synapse=Default, function=Default, transform=Default, solver=Default, learning_rule_type=Default, eval_points=Default, scale_eval_points=Default, label=Default, seed=Default, modulatory=Unconfigurable)[source]¶ Connects two objects together.
The connection between the two objects is unidirectional, transmitting information from the first argument, pre, to the second argument, post.

Almost any Nengo object can act as the pre or post side of a connection. Additionally, you can use Python slice syntax to access only some of the dimensions of the pre or post object.

For example, if node has size_out=2 and ensemble has size_in=1, we could not create the following connection:

    nengo.Connection(node, ensemble)
But, we could create either of these two connections:
    nengo.Connection(node[0], ensemble)
    nengo.Connection(node[1], ensemble)

Parameters:
pre : Ensemble or Neurons or Node
The source Nengo object for the connection.
post : Ensemble or Neurons or Node or Probe
The destination object for the connection.
synapse : Synapse, optional (Default: nengo.synapses.Lowpass(tau=0.005))
Synapse model to use for filtering (see Synapse).
function : callable, optional (Default: None)
Function to compute across the connection. Note that pre must be an ensemble to apply a function across the connection.
transform : (post.size_in, pre.size_out) array_like, optional (Default: np.array(1.0))
Linear transform mapping the pre output to the post input. This transform is in terms of the sliced size; if either pre or post is a slice, the transform must be shaped according to the sliced dimensionality. Additionally, the function is applied before the transform, so if a function is computed across the connection, the transform must be of shape (post.size_in, len(function(np.zeros(pre.size_out)))).
solver : Solver, optional (Default: nengo.solvers.LstsqL2())
Solver instance to compute decoders or weights (see Solver). If solver.weights is True, a full connection weight matrix is computed instead of decoders.
learning_rule_type : LearningRuleType or iterable of LearningRuleType, optional (Default: None)
Modifies the decoders or connection weights during simulation.
eval_points : (n_eval_points, pre.size_out) array_like or int, optional (Default: None)
Points at which to evaluate function when computing decoders, spanning the interval (-pre.radius, pre.radius) in each dimension. If None, will use the eval_points associated with pre.
scale_eval_points : bool, optional (Default: True)
Indicates whether the evaluation points should be scaled by the radius of the pre Ensemble.
label : str, optional (Default: None)
A descriptive label for the connection.
seed : int, optional (Default: None)
The seed used for random number generation.
Attributes:
is_decoded (bool): True if and only if the connection is decoded. This will not occur when solver.weights is True or both pre and post are Neurons.
function (callable): The given function.
function_size (int): The output dimensionality of the given function. If no function is specified, function_size will be 0.
label (str): A human-readable connection label for debugging and visualization. If not overridden, incorporates the labels of the pre and post objects.
learning_rule_type (instance or list or dict of LearningRuleType, optional): The learning rule types.
post (Ensemble or Neurons or Node or Probe or ObjView): The given post object.
post_obj (Ensemble or Neurons or Node or Probe): The underlying post object, even if post is an ObjView.
post_slice (slice or list or None): The slice associated with post if it is an ObjView, or None.
pre (Ensemble or Neurons or Node or ObjView): The given pre object.
pre_obj (Ensemble or Neurons or Node): The underlying pre object, even if pre is an ObjView.
pre_slice (slice or list or None): The slice associated with pre if it is an ObjView, or None.
seed (int): The seed used for random number generation.
solver (Solver): The Solver instance that will be used to compute decoders or weights (see nengo.solvers).
synapse (Synapse): The Synapse model used for filtering across the connection (see nengo.synapses).
transform ((size_mid, size_out) array_like): Linear transform mapping the pre function output to the post input.
learning_rule : LearningRule or iterable
Connectable learning rule object(s).

size_in : int
The number of output dimensions of the pre object. Also the input size of the function, if one is specified.

size_mid : int
The number of output dimensions of the function, if specified. If the function is not specified, then size_in == size_mid.

size_out : int
The number of input dimensions of the post object. Also the number of output dimensions of the transform.
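For illustration, a sketch of a decoded connection that computes a function, and a sliced connection with an explicit synapse (ensemble sizes and the function are arbitrary):

    import nengo

    with nengo.Network() as net:
        a = nengo.Ensemble(100, dimensions=2)
        b = nengo.Ensemble(50, dimensions=1)
        # Decoded connection computing a function of the represented value.
        nengo.Connection(a, b, function=lambda x: x[0] * x[1])
        # Slicing connects only the first dimension of a.
        nengo.Connection(a[0], b, synapse=nengo.synapses.Lowpass(tau=0.01))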
class
nengo.connection.
LearningRule
(connection, learning_rule_type)[source]¶ An interface for making connections to a learning rule.
Connections to a learning rule are to allow elements of the network to affect the learning rule. For example, learning rules that use error information can obtain that information through a connection.
Learning rule objects should only ever be accessed through the
learning_rule
attribute of a connection.-
connection
¶ (Connection) The connection modified by the learning rule.
-
error_type
¶ (str) The type of information expected by the learning rule.
-
modifies
¶ (str) The quantity modified by the learning rule.
-
probeable
¶ (tuple) Signals that can be probed in the learning rule.
-
size_in
¶ (int) Dimensionality of the signal expected by the learning rule.
-
size_out
¶ (int) Cannot connect from learning rules, so always 0.
-
-
class
nengo.
Probe
(target, attr=None, sample_every=Default, synapse=Default, solver=Default, label=Default, seed=Default)[source]¶ A probe is an object that collects data from the simulation.
This is to be used in any situation where you wish to gather simulation data (spike data, represented values, neuron voltages, etc.) for analysis.
Probes do not directly affect the simulation.
All Nengo objects can be probed (except Probes themselves). Each object has different attributes that can be probed. To see what is probeable for each object, print its probeable attribute.

    >>> with nengo.Network():
    ...     ens = nengo.Ensemble(10, 1)
    >>> print(ens.probeable)
    ('decoded_output', 'input')

Parameters:
target : Ensemble, Neurons, Node, or Connection
The object to probe.
attr : str, optional (Default: None)
The signal to probe. Refer to the target's probeable list for details. If None, the first element in the probeable list will be used.
sample_every : float, optional (Default: None)
Sampling period in seconds. If None, the dt of the simulation will be used.
synapse : Synapse, optional (Default: None)
A synaptic model to filter the probed signal.
solver : Solver, optional (Default: ConnectionDefault)
Solver to compute decoders for probes that require them.
label : str, optional (Default: None)
A name for the probe. Used for debugging and visualization.
seed : int, optional (Default: None)
The seed used for random number generation.
Attributes:
attr (str or None): The signal that will be probed. If None, the first element of the target's probeable list will be used.
sample_every (float or None): Sampling period in seconds. If None, the dt of the simulation will be used.
solver (Solver or None): Solver to compute decoders. Only used for probes of an ensemble's decoded output.
synapse (Synapse or None): A synaptic model to filter the probed signal.
target (Ensemble, Neurons, Node, or Connection): The object to probe.

obj : Nengo object
The underlying Nengo object target.

size_in : int
Dimensionality of the probed signal.

size_out : int
Cannot connect from probes, so always 0.

slice : slice
The slice associated with the Nengo object target.
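For illustration, a sketch of probing an ensemble's decoded output and reading the collected data after a run (sizes, input, and filter constant are arbitrary):

    import nengo

    with nengo.Network() as net:
        stim = nengo.Node(lambda t: t)
        ens = nengo.Ensemble(50, dimensions=1)
        nengo.Connection(stim, ens)
        probe = nengo.Probe(ens, "decoded_output", synapse=nengo.Lowpass(0.01))

    with nengo.Simulator(net) as sim:
        sim.run(0.5)
    print(sim.data[probe].shape)  # (n_timesteps, dimensions)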
Neuron types
class nengo.neurons.NeuronType

Base class for Nengo neuron models.

Attributes:
probeable (tuple): Signals that can be probed in the neuron population.

gain_bias(max_rates, intercepts)

Compute the gain and bias needed to satisfy max_rates, intercepts.
This takes the neurons, approximates their response function, and then uses that approximation to find the gain and bias value that will give the requested intercepts and max_rates.
Note that this default implementation is very slow! Whenever possible, subclasses should override this with a neuron-specific implementation.
Parameters:
max_rates : ndarray(dtype=float64)
Maximum firing rates of neurons.
intercepts : ndarray(dtype=float64)
X-intercepts of neurons.
Returns:
gain : ndarray(dtype=float64)
Gain associated with each neuron. Sometimes denoted alpha.
bias : ndarray(dtype=float64)
Bias current associated with each neuron.
rates(x, gain, bias)

Compute firing rates (in Hz) for given vector input, x.

This default implementation takes the naive approach of running the step function for a second. This should suffice for most rate-based neuron types; for spiking neurons it will likely fail.

Parameters:
x : ndarray(dtype=float64)
Vector-space input.
gain : ndarray(dtype=float64)
Gains associated with each neuron.
bias : ndarray(dtype=float64)
Bias current associated with each neuron.
step_math(dt, J, output)

Implements the differential equation for this neuron type.

At a minimum, NeuronType subclasses must implement this method. That implementation should modify the output parameter rather than returning anything, for efficiency reasons.

Parameters:
dt : float
Simulation timestep.
J : ndarray(dtype=float64)
Input currents associated with each neuron.
output : ndarray(dtype=float64)
Output activities associated with each neuron.
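For illustration, a sketch of a custom neuron type that only overrides step_math; the class name and response function are invented for this example (the built-in RectifiedLinear provides this behaviour with a neuron-specific gain_bias), and the slow default gain_bias and rates implementations are relied on:

    import numpy as np
    import nengo

    class RectifiedRate(nengo.neurons.NeuronType):
        """Hypothetical rate neuron whose activity is max(J, 0)."""

        def step_math(self, dt, J, output):
            # Modify `output` in place rather than returning a value.
            output[...] = np.maximum(J, 0.0)

    # Usage sketch: pass an instance as an ensemble's neuron_type.
    # ens = nengo.Ensemble(50, dimensions=1, neuron_type=RectifiedRate())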
class nengo.Direct

Signifies that an ensemble should simulate in direct mode.
In direct mode, the ensemble represents and transforms signals perfectly, rather than through a neural approximation. Note that direct mode ensembles with recurrent connections can easily diverge; most other neuron types will instead saturate at a certain high firing rate.
class nengo.RectifiedLinear

A rectified linear neuron model.
Each neuron is modeled as a rectified line. That is, the neuron’s activity scales linearly with current, unless it passes below zero, at which point the neural activity will stay at zero.
class nengo.LIF(tau_rc=0.02, tau_ref=0.002, min_voltage=0)

Spiking version of the leaky integrate-and-fire (LIF) neuron model.

Parameters:
tau_rc : float
Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).
tau_ref : float
Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.
min_voltage : float
Minimum value for the membrane voltage. If -np.inf, the voltage is never clipped.
class nengo.LIFRate(tau_rc=0.02, tau_ref=0.002)

Non-spiking version of the leaky integrate-and-fire (LIF) neuron model.

Parameters:
tau_rc : float
Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).
tau_ref : float
Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.
class nengo.AdaptiveLIF(tau_n=1, inc_n=0.01, **lif_args)

Adaptive spiking version of the LIF neuron model.

Works as the LIF model, except with adaptation state n, which is subtracted from the input current. Its dynamics are:

    tau_n dn/dt = -n

where n is incremented by inc_n when the neuron spikes.

Parameters:
tau_n : float
Adaptation time constant. Affects how quickly the adaptation state decays to zero in the absence of spikes (larger = slower decay).
inc_n : float
Adaptation increment. How much the adaptation state is increased after each spike.
tau_rc : float
Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).
tau_ref : float
Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.
References
[R3] Koch, Christof. Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press, 1999. p. 339
class nengo.AdaptiveLIFRate(tau_n=1, inc_n=0.01, **lif_args)

Adaptive non-spiking version of the LIF neuron model.

Works as the LIF model, except with adaptation state n, which is subtracted from the input current. Its dynamics are:

    tau_n dn/dt = -n

where n is incremented by inc_n when the neuron spikes.

Parameters:
tau_n : float
Adaptation time constant. Affects how quickly the adaptation state decays to zero in the absence of spikes (larger = slower decay).
inc_n : float
Adaptation increment. How much the adaptation state is increased after each spike.
tau_rc : float
Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay).
tau_ref : float
Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.
References
[R4] Koch, Christof. Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press, 1999. p. 339
class nengo.Izhikevich(tau_recovery=0.02, coupling=0.2, reset_voltage=-65.0, reset_recovery=8.0)

Izhikevich neuron model.
This implementation is based on the original paper [R5]; however, we rename some variables for clarity. What was originally ‘v’ we term ‘voltage’, which represents the membrane potential of each neuron. What was originally ‘u’ we term ‘recovery’, which represents membrane recovery, “which accounts for the activation of K+ ionic currents and inactivation of Na+ ionic currents.” The ‘a’, ‘b’, ‘c’, and ‘d’ parameters are also renamed (see the parameters below).
We use default values that correspond to regular spiking (‘RS’) neurons. For other classes of neurons, set the parameters as follows.
- Intrinsically bursting (IB): reset_voltage=-55, reset_recovery=4
- Chattering (CH): reset_voltage=-50, reset_recovery=2
- Fast spiking (FS): tau_recovery=0.1
- Low-threshold spiking (LTS): coupling=0.25
- Resonator (RZ): tau_recovery=0.1, coupling=0.26

Parameters:
tau_recovery : float, optional (Default: 0.02)
(Originally 'a') Time scale of the recovery variable.
coupling : float, optional (Default: 0.2)
(Originally ‘b’) How sensitive recovery is to subthreshold fluctuations of voltage.
reset_voltage : float, optional (Default: -65.)
(Originally ‘c’) The voltage to reset to after a spike, in millivolts.
reset_recovery : float, optional (Default: 8.)
(Originally ‘d’) The recovery value to reset to after a spike.
References
[R5] E. M. Izhikevich, "Simple model of spiking neurons." IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569-1572. (http://www.izhikevich.org/publications/spikes.pdf)
Learning rule types
class nengo.learning_rules.LearningRuleType(learning_rate=1e-06)

Base class for all learning rule objects.

To use a learning rule, pass it as a learning_rule_type keyword argument to the Connection on which you want to do learning.

Each learning rule exposes two important pieces of metadata that the builder uses to determine what information should be stored.

The error_type is the type of the incoming error signal. Options are:

- 'none': no error signal
- 'scalar': scalar error signal
- 'decoded': vector error signal in decoded space
- 'neuron': vector error signal in neuron space

The modifies attribute denotes the signal targeted by the rule. Options are:

- 'encoders'
- 'decoders'
- 'weights'

Parameters:
learning_rate : float, optional (Default: 1e-6)
A scalar indicating the rate at which modifies will be adjusted.

Attributes:
error_type (str): The type of the incoming error signal. This also determines the dimensionality of the error signal.
learning_rate (float): A scalar indicating the rate at which modifies will be adjusted.
modifies (str): The signal targeted by the learning rule.
class nengo.PES(learning_rate=0.0001, pre_tau=0.005)

Prescribed Error Sensitivity learning rule.
Modifies a connection’s decoders to minimize an error signal provided through a connection to the connection’s learning rule.
Parameters:
learning_rate : float, optional (Default: 1e-4)
A scalar indicating the rate at which weights will be adjusted.
pre_tau : float, optional (Default: 0.005)
Filter constant on activities of neurons in pre population.
Attributes:
learning_rate (float): A scalar indicating the rate at which weights will be adjusted.
pre_tau (float): Filter constant on activities of neurons in pre population.
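For illustration, a sketch of the usual PES setup: the rule is attached to a connection, and an error population is connected to that connection's learning rule (sizes and learning rate are arbitrary example values):

    import nengo

    with nengo.Network() as net:
        pre = nengo.Ensemble(60, dimensions=1)
        post = nengo.Ensemble(60, dimensions=1)
        error = nengo.Ensemble(60, dimensions=1)

        conn = nengo.Connection(
            pre, post, learning_rule_type=nengo.PES(learning_rate=1e-4))

        # error = actual - target; here the target is simply pre's value.
        nengo.Connection(post, error)
        nengo.Connection(pre, error, transform=-1)

        # The error signal reaches the rule through a connection to the
        # connection's learning_rule attribute.
        nengo.Connection(error, conn.learning_rule)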
class
nengo.
BCM
(pre_tau=0.005, post_tau=None, theta_tau=1.0, learning_rate=1e-09)[source]¶ Bienenstock-Cooper-Munroe learning rule
Modifies connection weights as a function of the presynaptic activity and the difference between the postsynaptic activity and the average postsynaptic activity.
Parameters:
theta_tau : float, optional (Default: 1.0)
A scalar indicating the time constant for theta integration.
pre_tau : float, optional (Default: 0.005)
Filter constant on activities of neurons in pre population.
post_tau : float, optional (Default: None)
Filter constant on activities of neurons in post population. If None, post_tau will be the same as pre_tau.
learning_rate : float, optional (Default: 1e-9)
A scalar indicating the rate at which weights will be adjusted.
Attributes:
learning_rate (float): A scalar indicating the rate at which weights will be adjusted.
post_tau (float): Filter constant on activities of neurons in post population.
pre_tau (float): Filter constant on activities of neurons in pre population.
theta_tau (float): A scalar indicating the time constant for theta integration.
class
nengo.
Oja
(pre_tau=0.005, post_tau=None, beta=1.0, learning_rate=1e-06)[source]¶ Oja learning rule
Modifies connection weights according to the Hebbian Oja rule, which augments typicaly Hebbian coactivity with a “forgetting” term that is proportional to the weight of the connection and the square of the postsynaptic activity.
Parameters:
pre_tau : float, optional (Default: 0.005)
Filter constant on activities of neurons in pre population.
post_tau : float, optional (Default: None)
Filter constant on activities of neurons in post population. If None, post_tau will be the same as pre_tau.
beta : float, optional (Default: 1.0)
A scalar weight on the forgetting term.
learning_rate : float, optional (Default: 1e-6)
A scalar indicating the rate at which weights will be adjusted.
Attributes:
beta (float): A scalar weight on the forgetting term.
learning_rate (float): A scalar indicating the rate at which weights will be adjusted.
post_tau (float): Filter constant on activities of neurons in post population.
pre_tau (float): Filter constant on activities of neurons in pre population.
class
nengo.
Voja
(post_tau=0.005, learning_rate=0.01)[source]¶ Vector Oja learning rule.
Modifies an ensemble’s encoders to be selective to its inputs.
A connection to the learning rule will provide a scalar weight for the learning rate, minus 1. For instance, 0 is normal learning, -1 is no learning, and less than -1 causes anti-learning or “forgetting”.
Parameters:
post_tau : float, optional (Default: 0.005)
Filter constant on activities of neurons in post population.
learning_rate : float, optional (Default: 1e-2)
A scalar indicating the rate at which encoders will be adjusted.
Attributes:
learning_rate (float): A scalar indicating the rate at which encoders will be adjusted.
post_tau (float): Filter constant on activities of neurons in post population.
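For illustration, a sketch of applying Voja to the connection into an ensemble so that the ensemble's encoders become selective to its inputs; the optional control node uses the convention described above (0 for normal learning, -1 for no learning). Sizes and values are arbitrary:

    import nengo

    with nengo.Network() as net:
        stim = nengo.Node([0.5, -0.5])
        pre = nengo.Ensemble(100, dimensions=2)
        post = nengo.Ensemble(100, dimensions=2)
        nengo.Connection(stim, pre)

        conn = nengo.Connection(pre, post, learning_rule_type=nengo.Voja())

        # Scalar control signal: 0 is normal learning, -1 disables learning.
        control = nengo.Node(0)
        nengo.Connection(control, conn.learning_rule, synapse=None)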
Synapse models
class nengo.synapses.Synapse(default_size_in=1, default_size_out=None, default_dt=0.001, seed=None)

Abstract base class for synapse models.
Conceptually, a synapse model emulates a biological synapse, taking in input in the form of released neurotransmitter and opening ion channels to allow more or less current to flow into the neuron.
In Nengo, a synapse is implemented as a specific case of a Process in which the input and output shapes are the same. The input is the current across the synapse, and the output is the current that will be imparted in the postsynaptic neuron.

Synapses also contain the Synapse.filt and Synapse.filtfilt methods, which make it easy to use Nengo's synapse models outside of Nengo simulations.

Parameters:
default_size_in : int, optional (Default: 1)
The size_in used if not specified.
default_size_out : int (Default: None)
The size_out used if not specified. If None, will be the same as default_size_in.
default_dt : float (Default: 0.001 (1 millisecond))
The simulation timestep used if not specified.
seed : int, optional (Default: None)
Random number seed. Ensures random factors will be the same each run.
Attributes:
default_dt (float (Default: 0.001 (1 millisecond))): The simulation timestep used if not specified.
default_size_in (int (Default: 0)): The size_in used if not specified.
default_size_out (int (Default: 1)): The size_out used if not specified.
seed (int, optional (Default: None)): Random number seed. Ensures random factors will be the same each run.

filt(x, dt=None, axis=0, y0=None, copy=True, filtfilt=False)

Filter x with this synapse model.

Parameters:
x : array_like
The signal to filter.
dt : float, optional (Default: None)
The timestep of the input signal. If None, default_dt will be used.
axis : int, optional (Default: 0)
The axis along which to filter.
y0 : array_like, optional (Default: None)
The starting state of the filter output. If None, the initial value of the input signal along the axis filtered will be used.
copy : bool, optional (Default: True)
Whether to copy the input data, or simply work in-place.
filtfilt : bool, optional (Default: False)
If True, runs the process forward then backward on the signal, for zero-phase filtering (like Matlab's filtfilt).
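For illustration, a sketch of filtering a NumPy array offline with a synapse model, as mentioned above (the signal, timestep, and time constant are arbitrary):

    import numpy as np
    import nengo

    dt = 0.001
    t = np.arange(1000) * dt
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)

    synapse = nengo.Lowpass(tau=0.02)
    filtered = synapse.filt(x, dt=dt)                    # causal filtering
    zero_phase = synapse.filt(x, dt=dt, filtfilt=True)   # forward and backward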
filtfilt(x, **kwargs)

Zero-phase filtering of x using this filter.

Equivalent to filt(x, filtfilt=True, **kwargs).

make_step(shape_in, shape_out, dt, rng, y0=None, dtype=numpy.float64)

Create function to advance the synapse forward one time step.

At a minimum, Synapse subclasses must implement this method. That implementation should return a callable that will perform the synaptic filtering operation.

Parameters:
shape_in : tuple
Shape of the input signal to be filtered.
shape_out : tuple
Shape of the output filtered signal.
dt : float
The timestep of the simulation.
rng : numpy.random.RandomState
Random number generator.
y0 : array_like, optional (Default: None)
The starting state of the filter output. If None, each dimension of the state will start at zero.
dtype : numpy.dtype (Default: np.float64)
Type of data used by the synapse model. This is important for ensuring that certain synapses avoid or force integer division.
nengo.synapses.filt(signal, synapse, dt, axis=0, x0=None, copy=True)

Filter signal with synapse.

Note: Deprecated in Nengo 2.1.0. Use the Synapse.filt method instead.
nengo.synapses.filtfilt(signal, synapse, dt, axis=0, x0=None, copy=True)

Zero-phase filtering of signal using the synapse filter.

Note: Deprecated in Nengo 2.1.0. Use the Synapse.filtfilt method instead.
class nengo.LinearFilter(num, den, analog=True, **kwargs)

General linear time-invariant (LTI) system synapse.
This class can be used to implement any linear filter, given the filter’s transfer function. [R6]
Parameters:
num : array_like
Numerator coefficients of transfer function.
den : array_like
Denominator coefficients of transfer function.
analog : boolean, optional (Default: True)
Whether the synapse coefficients are analog (i.e. continuous-time), or discrete. Analog coefficients will be converted to discrete for simulation using the simulator dt.

References
[R6] http://en.wikipedia.org/wiki/Filter_%28signal_processing%29

Attributes:
analog (boolean): Whether the synapse coefficients are analog (i.e. continuous-time), or discrete. Analog coefficients will be converted to discrete for simulation using the simulator dt.
den (ndarray): Denominator coefficients of transfer function.
num (ndarray): Numerator coefficients of transfer function.
evaluate
(frequencies)[source]¶ Evaluate the transfer function at the given frequencies.
Examples
Using the
evaluate
function to make a Bode plot:synapse = nengo.synapses.LinearFilter([1], [0.02, 1]) f = numpy.logspace(-1, 3, 100) y = synapse.evaluate(f) plt.subplot(211); plt.semilogx(f, 20*np.log10(np.abs(y))) plt.xlabel('frequency [Hz]'); plt.ylabel('magnitude [dB]') plt.subplot(212); plt.semilogx(f, np.angle(y)) plt.xlabel('frequency [Hz]'); plt.ylabel('phase [radians]')
-
make_step
(shape_in, shape_out, dt, rng, y0=None, dtype=<type 'numpy.float64'>, method='zoh')[source]¶ Returns a
Step
instance that implements the linear filter.
class LinearFilter.NoDen(num, den, output)

An LTI step function for transfer functions with no denominator.
This step function should be much faster than the equivalent general step function.
class LinearFilter.Simple(num, den, output, y0=None)

An LTI step function for transfer functions with one num and den.
This step function should be much faster than the equivalent general step function.
class LinearFilter.General(num, den, output, y0=None)

An LTI step function for any given transfer function.
Implements a discrete-time LTI system using the difference equation [R7] for the given transfer function (num, den).
References
[R7] http://en.wikipedia.org/wiki/Digital_filter#Difference_equation
class nengo.Lowpass(tau, **kwargs)

Standard first-order lowpass filter synapse.
Parameters:
tau : float
The time constant of the filter in seconds.
Attributes:
tau (float): The time constant of the filter in seconds.

make_step(shape_in, shape_out, dt, rng, y0=None, dtype=numpy.float64, **kwargs)

Returns an optimized LinearFilter.Step subclass.
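For illustration, a sketch of passing a Lowpass synapse to a connection (the tau value is arbitrary); connections default to Lowpass(tau=0.005), so an explicit synapse is only needed to change the filtering:

    import nengo

    with nengo.Network() as net:
        a = nengo.Ensemble(50, dimensions=1)
        b = nengo.Ensemble(50, dimensions=1)
        # A slower synapse smooths the signal more, at the cost of a
        # longer effective delay.
        nengo.Connection(a, b, synapse=nengo.Lowpass(tau=0.05))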
class nengo.Alpha(tau, **kwargs)

Alpha-function filter synapse.
The impulse-response function is given by:
alpha(t) = (t / tau) * exp(-t / tau)
and was found by [R8] to be a good basic model for synapses.
Parameters:
tau : float
The time constant of the filter in seconds.
References
[R8] Mainen, Z.F. and Sejnowski, T.J. (1995). Reliability of spike timing in neocortical neurons. Science (New York, NY), 268(5216):1503-6.

Attributes:
tau (float): The time constant of the filter in seconds.

make_step(shape_in, shape_out, dt, rng, y0=None, dtype=numpy.float64, **kwargs)

Returns an optimized LinearFilter.Step subclass.
class
nengo.synapses.
Triangle
(t, **kwargs)[source]¶ Triangular finite impulse response (FIR) synapse.
This synapse has a triangular and finite impulse response. The length of the triangle is t seconds; thus the digital filter will have t / dt + 1 taps.

Parameters:
t : float
Length of the triangle, in seconds.
Attributes:
t (float): Length of the triangle, in seconds.
Decoder and connection weight solvers
class nengo.solvers.Solver

Decoder or weight solver.

__call__(A, Y, rng=None, E=None)

Call the solver.

Parameters:
A : (n_eval_points, n_neurons) array_like
Matrix of the neurons’ activities at the evaluation points
Y : (n_eval_points, dimensions) array_like
Matrix of the target decoded values for each of the D dimensions, at each of the evaluation points.
rng : numpy.random.RandomState, optional (Default: None)
A random number generator to use as required. If None, the numpy.random module functions will be used.
E : (dimensions, post.n_neurons) array_like, optional (Default: None)
Array of post-population encoders. Providing this tells the solver to return an array of connection weights rather than decoders.
Returns:
X : (n_neurons, dimensions) or (n_neurons, post.n_neurons) ndarray
(n_neurons, dimensions) array of decoders (if solver.weights is False) or (n_neurons, post.n_neurons) array of weights (if solver.weights is True).
info : dict
A dictionary of information about the solver. All dictionaries have an 'rmses' key that contains RMS errors of the solve. Other keys are unique to particular solvers.
mul_encoders(Y, E, copy=False)

Helper function that projects signal Y onto encoders E.

Parameters:
Y : ndarray
The signal of interest.
E : (dimensions, n_neurons) array_like or None
Array of encoders. If None, Y will be returned unchanged.
copy : bool, optional (Default: False)
Whether a copy of Y should be returned if E is None.
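For illustration, a sketch of choosing a solver on a connection; with weights=True a full connection weight matrix is computed instead of decoders, as described above (ensemble sizes are arbitrary):

    import nengo

    with nengo.Network() as net:
        a = nengo.Ensemble(100, dimensions=1)
        b = nengo.Ensemble(100, dimensions=1)
        nengo.Connection(a, b, solver=nengo.solvers.LstsqL2(weights=True))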
class nengo.solvers.Lstsq

Unregularized least-squares solver.

Parameters:
weights : bool, optional (Default: False)
If False, solve for decoders. If True, solve for weights.
rcond : float, optional (Default: 0.01)
Cut-off ratio for small singular values (see numpy.linalg.lstsq).

Attributes:
rcond (float): Cut-off ratio for small singular values (see numpy.linalg.lstsq).
weights (bool): If False, solve for decoders. If True, solve for weights.
class nengo.solvers.LstsqNoise(weights=False, noise=0.1, solver=Cholesky())

Least-squares solver with additive Gaussian white noise.

Parameters:
weights : bool, optional (Default: False)
If False, solve for decoders. If True, solve for weights.
noise : float, optional (Default: 0.1)
Amount of noise, as a fraction of the neuron activity.
solver : LeastSquaresSolver, optional (Default: Cholesky())
Subsolver to use for solving the least squares problem.

Attributes:
noise (float): Amount of noise, as a fraction of the neuron activity.
solver (LeastSquaresSolver): Subsolver to use for solving the least squares problem.
weights (bool): If False, solve for decoders. If True, solve for weights.
class nengo.solvers.LstsqMultNoise(weights=False, noise=0.1, solver=Cholesky())

Least-squares solver with multiplicative white noise.

Parameters:
weights : bool, optional (Default: False)
If False, solve for decoders. If True, solve for weights.
noise : float, optional (Default: 0.1)
Amount of noise, as a fraction of the neuron activity.
solver : LeastSquaresSolver, optional (Default: Cholesky())
Subsolver to use for solving the least squares problem.

Attributes:
noise (float): Amount of noise, as a fraction of the neuron activity.
solver (LeastSquaresSolver): Subsolver to use for solving the least squares problem.
weights (bool): If False, solve for decoders. If True, solve for weights.
class nengo.solvers.LstsqL2(weights=False, reg=0.1, solver=Cholesky())

Least-squares solver with L2 regularization.

Parameters:
weights : bool, optional (Default: False)
If False, solve for decoders. If True, solve for weights.
reg : float, optional (Default: 0.1)
Amount of regularization, as a fraction of the neuron activity.
solver : LeastSquaresSolver, optional (Default: Cholesky())
Subsolver to use for solving the least squares problem.

Attributes:
reg (float): Amount of regularization, as a fraction of the neuron activity.
solver (LeastSquaresSolver): Subsolver to use for solving the least squares problem.
weights (bool): If False, solve for decoders. If True, solve for weights.
class nengo.solvers.LstsqL2nz(weights=False, reg=0.1, solver=Cholesky())

Least-squares solver with L2 regularization on non-zero components.

Parameters:
weights : bool, optional (Default: False)
If False, solve for decoders. If True, solve for weights.
reg : float, optional (Default: 0.1)
Amount of regularization, as a fraction of the neuron activity.
solver : LeastSquaresSolver, optional (Default: Cholesky())
Subsolver to use for solving the least squares problem.

Attributes:
reg (float): Amount of regularization, as a fraction of the neuron activity.
solver (LeastSquaresSolver): Subsolver to use for solving the least squares problem.
weights (bool): If False, solve for decoders. If True, solve for weights.
class nengo.solvers.LstsqL1(weights=False, l1=0.0001, l2=1e-06)

Least-squares solver with L1 and L2 regularization (elastic net).
This method is well suited for creating sparse decoders or weight matrices.
Note: Requires scikit-learn.

Parameters:
weights : bool, optional (Default: False)
If False, solve for decoders. If True, solve for weights.
l1 : float, optional (Default: 1e-4)
Amount of L1 regularization.
l2 : float, optional (Default: 1e-6)
Amount of L2 regularization.
Attributes:
l1 (float): Amount of L1 regularization.
l2 (float): Amount of L2 regularization.
weights (bool): If False, solve for decoders. If True, solve for weights.
class nengo.solvers.LstsqDrop(weights=False, drop=0.25, solver1=LstsqL2nz(), solver2=LstsqL2nz())

Find sparser decoders/weights by dropping small values.

This solver first solves for coefficients (decoders/weights) with L2 regularization, drops those nearest to zero, and retrains the remaining coefficients.

Parameters:
weights : bool, optional (Default: False)
If False, solve for decoders. If True, solve for weights.
drop : float, optional (Default: 0.25)
Fraction of decoders or weights to set to zero.
solver1 : Solver, optional (Default: LstsqL2nz(reg=0.1))
Solver for finding the initial decoders.
solver2 : Solver, optional (Default: LstsqL2nz(reg=0.01))
Used for re-solving for the decoders after dropout.

Attributes:
drop (float): Fraction of decoders or weights to set to zero.
solver1 (Solver): Solver for finding the initial decoders.
solver2 (Solver): Used for re-solving for the decoders after dropout.
weights (bool): If False, solve for decoders. If True, solve for weights.
class nengo.solvers.Nnls(weights=False)

Non-negative least-squares solver without regularization.

Similar to Lstsq, except the output values are non-negative.

Note: Requires SciPy.

Parameters:
weights : bool, optional (Default: False)
If False, solve for decoders. If True, solve for weights.
Attributes:
weights (bool): If False, solve for decoders. If True, solve for weights.
class nengo.solvers.NnlsL2(weights=False, reg=0.1)

Non-negative least-squares solver with L2 regularization.

Similar to LstsqL2, except the output values are non-negative.

Note: Requires SciPy.

Parameters:
weights : bool, optional (Default: False)
If False, solve for decoders. If True, solve for weights.
reg : float, optional (Default: 0.1)
Amount of regularization, as a fraction of the neuron activity.
Attributes:
reg (float): Amount of regularization, as a fraction of the neuron activity.
weights (bool): If False, solve for decoders. If True, solve for weights.
class nengo.solvers.NnlsL2nz(weights=False, reg=0.1)

Non-negative least-squares with L2 regularization on nonzero components.

Similar to LstsqL2nz, except the output values are non-negative.

Note: Requires SciPy.

Parameters:
weights : bool, optional (Default: False)
If False, solve for decoders. If True, solve for weights.
reg : float, optional (Default: 0.1)
Amount of regularization, as a fraction of the neuron activity.
Attributes:
reg (float): Amount of regularization, as a fraction of the neuron activity.
weights (bool): If False, solve for decoders. If True, solve for weights.
Simulator
class nengo.simulator.Simulator(network, dt=0.001, seed=None, model=None)

Reference simulator for Nengo models.
The simulator takes a Network and builds internal data structures to run the model defined by that network. Run the simulator with the run method, and access probed data through the data attribute.

Building and running the simulation may allocate resources like files and sockets. To properly free these resources, call the Simulator.close method. Alternatively, Simulator.close will automatically be called if you use the with syntax:

    with nengo.Simulator(my_network) as sim:
        sim.run(0.1)
    print(sim.data[my_probe])

Note that the data attribute is still accessible even when a simulator has been closed. Running the simulator, however, will raise an error.

Parameters:
network : Network or None
A network object to be built and then simulated. If None, then a built Model must be provided instead.
dt : float, optional (Default: 0.001)
The length of a simulator timestep, in seconds.
seed : int, optional (Default: None)
A seed for all stochastic operators used in this simulator.
model : Model, optional (Default: None)
Attributes:
closed (bool): Whether the simulator has been closed. Once closed, it cannot be reopened.
data (ProbeDict): The ProbeDict mapping from Nengo objects to the data associated with those objects. In particular, each Probe maps to the data probed while running the simulation.
dg (dict): A dependency graph mapping from each Operator to the operators that depend on that operator.
model (Model): The Model containing the signals and operators necessary to simulate the network.
signals (SignalDict): The SignalDict mapping from Signal instances to NumPy arrays.

dt : float
The time step of the simulator.

n_steps : int
The current time step of the simulator.

time : float
The current time of the simulator.
close()

Closes the simulator.

Any call to Simulator.run, Simulator.run_steps, Simulator.step, or Simulator.reset on a closed simulator raises a SimulatorClosed exception.

reset(seed=None)

Reset the simulator state.

Parameters:
seed : int, optional
A seed for all stochastic operators used in the simulator. This will change the random sequences generated for noise or inputs (e.g. from processes), but not the built objects (e.g. ensembles, connections).
run(time_in_seconds, progress_bar=True)

Simulate for the given length of time.

Parameters:
time_in_seconds : float
Amount of time to run the simulation for.
progress_bar : bool or ProgressBar or ProgressUpdater, optional (Default: True)
Progress bar for displaying the progress of the simulation run. If True, the default progress bar will be used. If False, the progress bar will be disabled. For more control over the progress bar, pass in a ProgressBar or ProgressUpdater instance.
run_steps(steps, progress_bar=True)

Simulate for the given number of dt steps.

Parameters:
steps : int
Number of steps to run the simulation for.
progress_bar : bool or ProgressBar or ProgressUpdater, optional (Default: True)
Progress bar for displaying the progress of the simulation run. If True, the default progress bar will be used. If False, the progress bar will be disabled. For more control over the progress bar, pass in a ProgressBar or ProgressUpdater instance.
trange(dt=None)

Create a vector of times matching probed data.

Note that the range does not start at 0 as one might expect, but at the first timestep (i.e., dt).

Parameters:
dt : float, optional (Default: None)
The sampling period of the probe to create a range for. If None, the simulator's dt will be used.
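For illustration, a sketch of a typical build/run/plot loop using run, trange, and data; matplotlib is assumed, and the network itself is an arbitrary example:

    import matplotlib.pyplot as plt
    import numpy as np
    import nengo

    with nengo.Network() as net:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
        ens = nengo.Ensemble(100, dimensions=1)
        nengo.Connection(stim, ens)
        probe = nengo.Probe(ens, synapse=nengo.Lowpass(0.01))

    with nengo.Simulator(net, dt=0.001) as sim:
        sim.run(1.0)

    # trange() returns one time point per probed sample, starting at dt.
    plt.plot(sim.trange(), sim.data[probe])
    plt.xlabel("time [s]")
    plt.show()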