activation function
aggregation function
These are the attributes of a node. They determine the output of a node as follows: \(\operatorname{activation}(\text{bias} + \text{response} \times \operatorname{aggregation}(\text{inputs}))\). For available activation functions, see Overview of builtin activation functions; for adding new ones, see Customizing Behavior. For the available aggregation functions, see the aggregations module.
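The formula above can be sketched in Python; `sum` and `tanh` stand in for whichever aggregation and activation functions the configuration selects, and the function name `node_output` is illustrative, not the library's API:

```python
import math

def node_output(inputs, bias=0.0, response=1.0,
                aggregation=sum, activation=math.tanh):
    """activation(bias + response * aggregation(inputs))"""
    return activation(bias + response * aggregation(inputs))

# With the identity activation, the node simply scales and shifts
# the aggregated inputs:
identity = lambda z: z
node_output([1.0, 2.0], bias=0.5, activation=identity)  # -> 3.5
```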
attributes
These are the properties of a node (such as its activation function) or connection (such as whether or not it is enabled) that are determined by its associated gene (in the default implementation, in the attributes module in combination with the gene class).
compute node
Using the distributed module, genomes can be evaluated on multiple machines (including virtual machines) at once. Each such machine/host is called a compute node. These are of two types, primary nodes and secondary nodes.
connection
These connect nodes to one another, and give rise to the network in the term neural network. For non-loopback (directly recurrent) connections, they are equivalent to biological synapses. Connections have two attributes, their weight and whether or not they are enabled; both are determined by their gene. An example gene class for connections can be seen in genes.DefaultConnectionGene.
continuous-time
A discrete-time neural network (which should be assumed unless specified otherwise) proceeds in time steps: processing takes place at one node, signals travel along connections to other nodes, processing takes place at those nodes, and so on until the output is produced. A continuous-time neural network, such as the ctrnn (continuous-time recurrent neural network) implemented in NEAT-Python, simulates a continuous process via differential equations (or other methods).
crossover
The process in sexual reproduction in which two genomes are combined. This involves the combination of homologous genes and the copying (from the highest-fitness genome) of disjoint/excess genes. Along with mutation, it is one of the two sources of innovation in (classical) evolution.
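A minimal sketch of this combination step, assuming each parent genome is reduced to a dict mapping gene key to gene, with `parent1` the fitter parent (hypothetical structures, not the library's genome class):

```python
import random

def crossover(parent1, parent2):
    """Combine two gene dicts; parent1 is assumed to be the fitter."""
    child = {}
    for key, gene1 in parent1.items():
        if key in parent2:
            # Homologous genes (same key): inherit from either parent.
            child[key] = random.choice([gene1, parent2[key]])
        else:
            # Disjoint/excess genes: copied from the fitter parent only.
            child[key] = gene1
    return child
```

Disjoint/excess genes present only in the less-fit parent are dropped, so the child's gene keys match the fitter parent's.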
disjoint
excess
These are genes in NEAT not descended from a common ancestor - i.e., not homologous. This implementation of NEAT, like most, does not distinguish between disjoint and excess genes. For further discussion, see the NEAT Overview.
feedforward
A neural network that is not recurrent is feedforward - it has no loops. (Note that this means it has no memory - no ability to take past events into account.) It can thus be described as a DAG (Directed Acyclic Graph).
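Because a feedforward network is a DAG, its nodes can each be evaluated exactly once in a topological order. This sketch uses Kahn's algorithm over a hypothetical list of `(source, target)` connection pairs:

```python
from collections import defaultdict, deque

def topological_order(connections):
    """Return a node ordering in which every connection's source
    precedes its target; the result is shorter than the node count
    if the graph contains a cycle (i.e. the network is recurrent)."""
    indegree = defaultdict(int)
    successors = defaultdict(list)
    nodes = set()
    for src, dst in connections:
        successors[src].append(dst)
        indegree[dst] += 1
        nodes.update((src, dst))
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in successors[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return order

# Input node -1 feeds hidden node 1, which feeds output node 0:
topological_order([(-1, 1), (1, 0), (-1, 0)])  # -> [-1, 1, 0]
```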
gene
The information coding (in the current implementation) for a particular aspect (node or connection) of a neural network phenotype. Contains several attributes, varying depending on the type of gene. Example gene classes include genes.DefaultNodeGene, genes.DefaultConnectionGene, and iznn.IZNodeGene; all of these are subclasses of genes.BaseGene.
generation
This implementation of NEAT uses, like most, multiple semi-separated generations (some genomes may survive multiple generations via elitism). In terms of generations, the steps are as follows: generate the next generation from the current population; partition the new generation into species based on genetic similarity; evaluate the fitness of all genomes; check whether a termination criterion is satisfied; if not, repeat. (The ordering in the population module is somewhat different.) Generations are numbered, and a limit on the number of generations is one type of termination criterion.
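The per-generation steps can be sketched with genomes simplified to single floats (speciation is omitted for brevity, and all names here are illustrative, not the population module's API):

```python
import random

def evolve(population, fitness, goal, max_generations):
    """One simplified generational loop: reproduce, evaluate,
    check the termination criterion, repeat."""
    best = max(population, key=fitness)
    for generation in range(max_generations):
        # Generate the next generation from the current population
        # (truncation selection plus Gaussian mutation).
        parents = sorted(population, key=fitness, reverse=True)
        parents = parents[: max(2, len(parents) // 2)]
        population = [random.choice(parents) + random.gauss(0.0, 0.1)
                      for _ in range(len(population))]
        # Evaluate fitness of all genomes.
        best = max(population, key=fitness)
        # Check the termination criterion.
        if fitness(best) >= goal:
            break
    return generation, best
```

For example, evolving a float toward 1.0 with `fitness = lambda g: -abs(g - 1.0)` terminates once the best genome is within the goal tolerance or the generation limit is reached.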
genetic distance
The distance between two homologous genes, added up as part of the genomic distance. Also sometimes used as a synonym for genomic distance.
genome
The set of genes that together code for a (neural network) phenotype. Example genome objects can be seen in genome.DefaultGenome and iznn.IZGenome, and the object interface is described in Genome Interface.
genomic distance
An approximate measure of the difference between genomes, used in dividing the population into species. For further discussion, see the NEAT Overview.
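A hypothetical simplification of such a measure, with each genome reduced to a dict mapping gene key to a single numeric attribute (the real computation also weights the per-gene differences by coefficients from the configuration):

```python
def genomic_distance(genes1, genes2, disjoint_coefficient=1.0):
    """Count disjoint/excess genes, sum homologous attribute
    differences, and normalize by the larger genome's size."""
    shared = genes1.keys() & genes2.keys()
    disjoint = len(genes1.keys() ^ genes2.keys())
    homologous = sum(abs(genes1[k] - genes2[k]) for k in shared)
    n = max(len(genes1), len(genes2))
    return (disjoint_coefficient * disjoint + homologous) / n

# Two genomes sharing gene 1 exactly, each with one unshared gene:
genomic_distance({1: 0.5, 2: 1.0}, {1: 0.5, 3: 2.0})  # -> 1.0
```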
hidden node
These are the nodes other than input nodes and output nodes. In the original NEAT (NeuroEvolution of Augmenting Topologies) algorithm, networks start with no hidden nodes, and evolve more complexity as necessary - thus “Augmenting Topologies”.
homologous
Descended from a common ancestor; two genes in NEAT from different genomes are either homologous or disjoint/excess. In NEAT, two genes that are homologous will have the same key/id. For node genes, the key is an int incremented with each newly-created node; for connection genes, the key is a tuple of the keys of the nodes being connected. For further discussion, see the NEAT Overview.
key
Various of the objects used by the library are indexed by a key (id); for most, this is an int, which is unique either in the library as a whole (as with species and genomes) or within a genome (as with node genes). For connection genes, this is a tuple of two ints: the keys of the connected nodes. For input nodes (or input pins), it is the input’s (list or tuple) index plus one, then multiplied by negative one; for output nodes, it is equal to the output’s (list or tuple) index.
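The key scheme described above can be written out directly (the helper names are illustrative, not the library's API):

```python
def input_node_key(index):
    """Input node/pin keys: -(index + 1), so inputs get -1, -2, ..."""
    return -(index + 1)

def output_node_key(index):
    """Output node keys equal the output's index: 0, 1, 2, ..."""
    return index

def connection_key(source_key, target_key):
    """Connection gene keys are tuples of the connected nodes' keys."""
    return (source_key, target_key)

# A connection from the first input to the second output:
connection_key(input_node_key(0), output_node_key(1))  # -> (-1, 1)
```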
input node
These are the nodes through which the network receives inputs. They cannot be deleted (although connections from them can be), cannot be the output end of a connection, and have: no aggregation function; a fixed bias of 0; a fixed response multiplier of 1; and a fixed activation function of identity. Note: In the genome module, they are not in many respects treated as actual nodes, but simply as keys for input ends of connections. Sometimes known as an input pin.
mutation
The process in which the attributes of a gene (or the genes in a genome) are randomly altered, with likelihoods determined by configuration parameters. Along with crossover, it is one of the two sources of innovation in (classical) evolution.
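As a sketch, mutating a single connection-weight attribute might look like this (the rate and power parameter names are illustrative, loosely modeled on NEAT-style configuration parameters):

```python
import random

def mutate_weight(weight, replace_rate=0.1, perturb_rate=0.8,
                  perturb_power=0.5):
    """With probability replace_rate, draw a fresh weight; with
    probability perturb_rate, add Gaussian noise to the current
    weight; otherwise leave the weight unchanged."""
    r = random.random()
    if r < replace_rate:
        return random.gauss(0.0, 1.0)
    if r < replace_rate + perturb_rate:
        return weight + random.gauss(0.0, perturb_power)
    return weight
```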
node
Also known as a neuron (as in a neural network). They are of three types: input, hidden, and output. Nodes have one or more attributes, such as an activation function; all are determined by their gene. Classes of node genes include genes.DefaultNodeGene and iznn.IZNodeGene. (They should not be confused with compute nodes, host machines on which distributed evaluations of genomes are performed.)
output node
These are the nodes to which the network delivers outputs. They cannot be deleted (although connections to them can be) but can otherwise be mutated normally. The output of this node is connected to the corresponding output pin with an implicit weight-1, enabled connection.
pin
Point at which the network is effectively connected to the external world. Pins are either input (also known as input nodes) or output (connected to an output node with the same key as the output pin).
primary node
primary compute node
If using the distributed module, you will need one primary compute node and at least one secondary node. The primary node creates and mutates genomes, then distributes them to the secondary nodes for evaluation. (It does not do any evaluations itself; thus, at least one secondary node is required.)
recurrent
A recurrent neural network has cycles in its topology. The simplest case is a node with a connection back to itself, so that (for a discrete-time neural network) the prior time step’s output is provided to the node as one of its inputs. Longer cycles are also possible, such as output from node A going into node B (via a connection) and output from node B going (via another connection) into node A. (This gives the network a possibly-useful memory - an ability to take past events into account - unlike a feedforward neural network; however, it also makes the network harder to work with in some respects.)
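One discrete time step of such a network can be sketched as follows, with `prev_outputs` holding each node's output from the previous step and `connections` mapping each node to its incoming `(source, weight)` pairs (hypothetical structures, not the library's network classes):

```python
import math

def recurrent_step(prev_outputs, connections):
    """Compute every node's new output from the previous step's
    outputs; a self-loop feeds a node's own prior output back
    to it as one of its inputs."""
    return {
        node: math.tanh(sum(weight * prev_outputs.get(src, 0.0)
                            for src, weight in incoming))
        for node, incoming in connections.items()
    }

# Node 0 has a weight-1.0 connection back to itself:
state = {0: 0.5}
state = recurrent_step(state, {0: [(0, 1.0)]})  # -> {0: tanh(0.5)}
```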
secondary node
secondary compute node
If using the distributed module, you will need at least one secondary compute node, as well as a primary node. The secondary nodes evaluate genomes, distributed to them by the primary node.
species
Subdivisions of the population into groups of similar (by the genomic distance measure) individuals (genomes), which compete among themselves but share fitness relative to the rest of the population. This is, among other things, a mechanism to try to avoid the quick elimination of high-potential topological mutants that have a poor initial fitness prior to smaller “tuning” changes. For further discussion, see the NEAT Overview.
weight
These are the attributes of a connection. If a connection is enabled, then the input to it (from a node) is multiplied by the weight and sent to the output (to a node - possibly the same node, for a recurrent neural network). If a connection is not enabled, then the output is 0; genes for such connections are the equivalent of pseudogenes that, as in in vivo evolution, can be reactivated at a later time. TODO: Some versions of NEAT give a chance, such as 25%, that a disabled connection will be enabled during crossover; in the future, this should be an option.
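The behavior of the two attributes together, as a one-line sketch (the function name is illustrative):

```python
def connection_output(input_value, weight, enabled):
    """An enabled connection multiplies its input by its weight;
    a disabled connection contributes 0."""
    return input_value * weight if enabled else 0.0

connection_output(2.0, 0.5, enabled=True)   # -> 1.0
connection_output(2.0, 0.5, enabled=False)  # -> 0.0
```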
