#!/usr/bin/env python
# coding: utf-8

# # Introduction to Neurokernel's API

# This notebook illustrates how to define and connect local processing unit (LPU) models using Neurokernel.

# ### Background

# An LPU comprises two distinct populations of neurons [(Chiang et al., 2011)](#chiang_three-dimensional_2011): *local* neurons may only project to other neurons in the LPU, while *projection* neurons may project both to local neurons and to neurons in other LPUs. All synapses between neurons within an LPU are described by *internal connectivity* patterns. LPUs are linked by *inter-LPU connectivity* patterns that map one LPU's outputs to inputs in other LPUs. The general structure of an LPU is shown below:
# 
# *(Figure: general structure of an LPU.)*

# ### Defining an LPU Interface

# #### Interface Ports

# All communication between LPUs must pass through *ports* that are internally associated with modeling elements that must emit or receive external data. An LPU's *interface* is defined as the set of ports it exposes to other LPUs. Each port is defined by a unique identifier string and attributes that indicate whether
# 
# - it transmits *spikes* (i.e., boolean values) or *graded potentials* (i.e., floating point numbers) at each step of model execution and whether
# - it accepts *input* or emits *output*.
# 
# To facilitate management of a large number of ports, Neurokernel requires that port identifiers conform to a hierarchical format similar to that used to label files or [elements in structured documents](http://www.w3.org/TR/xpath/). Each identifier may comprise multiple *levels* joined by separators (``/`` and ``[]``). Neurokernel also defines an extended format for selecting multiple ports with a single *selector*; a selector that cannot be expanded to an explicit list of individual port identifiers is said to be *ambiguous*. Rather than define a formal grammar for this format, the following table depicts examples of how it may be used to refer to multiple ports (a code sketch follows the table):
# | Identifier/Selector       | Comments                                    |
# |---------------------------|---------------------------------------------|
# | ``/med/L1[0]``            | selects a single port                       |
# | ``/med/L1/0``             | equivalent to ``/med/L1[0]``                |
# | ``/med+/L1[0]``           | equivalent to ``/med/L1[0]``                |
# | ``/med[L1,L2][0]``        | selects two ports                           |
# | ``/med/L1[0,1]``          | another example of two ports                |
# | ``/med/L1[0],/med/L1[1]`` | equivalent to ``/med/L1[0,1]``              |
# | ``/med/L1[0:10]``         | selects a range of 10 ports                 |
# | ``/med/L1/*``             | selects all ports starting with ``/med/L1`` |
# | ``(/med/L1,/med/L2)+[0]`` | equivalent to ``/med/[L1,L2][0]``           |
# | ``/med/[L1,L2].+[0:2]``   | equivalent to ``/med/L1[0],/med/L2[1]``     |
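# The sketch below illustrates how selectors can be manipulated programmatically; it assumes the ``expand`` and ``is_ambiguous`` helpers of ``neurokernel.plsel.SelectorMethods`` (``count_ports`` is also used later in this notebook):

from neurokernel.plsel import SelectorMethods

# Expand a selector into the explicit list of ports it comprises;
# each port is returned as a tuple of identifier levels:
print(SelectorMethods.expand('/med/L1[0:3]'))
# e.g. [('med', 'L1', 0), ('med', 'L1', 1), ('med', 'L1', 2)]

# Count the number of ports comprised by a selector:
print(SelectorMethods.count_ports('/med[L1,L2][0]'))  # e.g. 2

# A selector containing a wildcard cannot be expanded to an
# explicit list of port identifiers and is therefore ambiguous:
print(SelectorMethods.is_ambiguous('/med/L1/*'))      # e.g. True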
# #### Inter-LPU Connectivity Patterns

# All connections between LPUs must be defined in inter-LPU connectivity patterns that map the output ports of one LPU to the input ports of another LPU. Since individual LPUs may internally implement multiplexing of input signals to a single destination in different ways, the LPU interface only permits fan-out from individual output ports to multiple input ports; connections from multiple output ports may not converge on a single input port. A single pattern may define connections in both directions.
# 
# A connectivity pattern between two LPUs is fully specified by the identifiers and attributes of the ports in its two interfaces and the directed graph of connections defined between them. An example of such a pattern defined between ports ``/lam[0:6]`` and ``/med[0:5]`` follows; note that the I/O column describes each port from the pattern's perspective, so an LPU's output ports appear as the pattern's inputs (a code sketch expressing this pattern appears after the two tables):
# | Port    | Interface | I/O | Port Type        |
# |---------|-----------|-----|------------------|
# | /lam[0] | 0         | in  | graded potential |
# | /lam[1] | 0         | in  | graded potential |
# | /lam[2] | 0         | in  | graded potential |
# | /lam[3] | 0         | out | spiking          |
# | /lam[4] | 0         | out | spiking          |
# | /lam[5] | 0         | out | spiking          |
# | /med[0] | 1         | out | graded potential |
# | /med[1] | 1         | out | graded potential |
# | /med[2] | 1         | out | graded potential |
# | /med[3] | 1         | in  | spiking          |
# | /med[4] | 1         | in  | spiking          |
# 
# | From    | To      |
# |---------|---------|
# | /lam[0] | /med[0] |
# | /lam[0] | /med[1] |
# | /lam[1] | /med[2] |
# | /med[3] | /lam[3] |
# | /med[4] | /lam[4] |
# | /med[4] | /lam[5] |
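# The tabulated pattern above can be expressed with the ``neurokernel.pattern.Pattern`` class demonstrated later in this notebook; the following is a minimal sketch (string selectors are assumed to be accepted wherever ``Selector`` instances are):

from neurokernel.pattern import Pattern

# Declare the pattern's two interfaces and the attributes of their ports;
# recall that the I/O attribute is relative to the pattern, so the ports
# through which LPU 0 emits data are marked 'in':
pat = Pattern('/lam[0:6]', '/med[0:5]')
pat.interface['/lam[0:3]'] = [0, 'in', 'gpot']
pat.interface['/lam[3:6]'] = [0, 'out', 'spike']
pat.interface['/med[0:3]'] = [1, 'out', 'gpot']
pat.interface['/med[3:5]'] = [1, 'in', 'spike']

# Define the connections; /lam[0] and /med[4] each fan out to two input
# ports, which is permitted, whereas convergence of multiple output ports
# on a single input port is not:
pat['/lam[0]', '/med[0]'] = 1
pat['/lam[0]', '/med[1]'] = 1
pat['/lam[1]', '/med[2]'] = 1
pat['/med[3]', '/lam[3]'] = 1
pat['/med[4]', '/lam[4]'] = 1
pat['/med[4]', '/lam[5]'] = 1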
# ### Using Neurokernel's API

# #### Setting up LPU Interfaces and Patterns

# Neurokernel provides Python classes for defining LPUs and for defining the connectivity patterns that link them together. The LPU classes (``neurokernel.core.Module`` for LPUs that don't access the GPU and ``neurokernel.core_gpu.Module`` for LPUs that do) require an LPU designer to implement all of the LPU's internals from the ground up; they place no explicit constraints upon how an LPU uses GPU resources. In order to enable independently implemented LPUs to communicate with each other, each LPU must implement a ``run_step()`` method that is invoked during each step of execution and that consumes incoming data from other LPUs and produces data for transmission to other LPUs. The example below generates random data in its ``run_step()`` method:

# In[1]:

import numpy as np
import pycuda.gpuarray as gpuarray

from neurokernel.core_gpu import Module

class MyModule(Module):

    # Process incoming data and set outgoing data:
    def run_step(self):
        super(MyModule, self).run_step()

        # Display input graded potential data:
        self.log_info('input gpot port data: '+str(self.pm['gpot'][self.in_gpot_ports]))

        # Display input spike data:
        self.log_info('input spike port data: '+str(self.pm['spike'][self.in_spike_ports]))

        # Output random graded potential data:
        out_gpot_data = gpuarray.to_gpu(np.random.rand(len(self.out_gpot_ports)))
        self.pm['gpot'][self.out_gpot_ports] = out_gpot_data
        self.log_info('output gpot port data: '+str(out_gpot_data))

        # Randomly select output ports to emit spikes:
        out_spike_data = gpuarray.to_gpu(np.random.randint(0, 2, len(self.out_spike_ports)))
        self.pm['spike'][self.out_spike_ports] = out_spike_data
        self.log_info('output spike port data: '+str(out_spike_data))

# Notice that every LPU instance must be associated with a unique identifier (``id``). An LPU contains a port-mapper attribute (``pm``) that maps input and output ports to a data array that may be accessed by the LPU's internal implementation; after each step of execution, the array associated with the port-mapper is updated with input data from source LPUs, while output data from the array is transmitted to destination LPUs.
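# To illustrate the port-mapper concept in isolation, here is a small sketch; the module path ``neurokernel.pm`` and the ``PortMapper(selector, data)`` constructor are assumptions based on Neurokernel's port-mapper API:

from neurokernel.pm import PortMapper

# Associate the three ports in '/a/out/gpot[0:3]' with a data array
# (constructor signature assumed as noted above):
port_mapper = PortMapper('/a/out/gpot[0:3]', np.array([0.1, 0.2, 0.3]))

# Retrieve the data value associated with a single port:
print(port_mapper['/a/out/gpot[1]'])    # e.g. [0.2]

# Update the data values associated with two of the ports:
port_mapper['/a/out/gpot[0:2]'] = np.array([0.5, 0.6])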
# One can define the port selectors needed to instantiate the above LPU class as follows:

# In[2]:

from neurokernel.plsel import Selector, SelectorMethods

m1_int_sel_in_gpot = Selector('/a/in/gpot[0:2]')
m1_int_sel_out_gpot = Selector('/a/out/gpot[0:2]')
m1_int_sel_in_spike = Selector('/a/in/spike[0:2]')
m1_int_sel_out_spike = Selector('/a/out/spike[0:2]')

m1_int_sel = m1_int_sel_in_gpot+m1_int_sel_out_gpot+\
             m1_int_sel_in_spike+m1_int_sel_out_spike
m1_int_sel_in = m1_int_sel_in_gpot+m1_int_sel_in_spike
m1_int_sel_out = m1_int_sel_out_gpot+m1_int_sel_out_spike
m1_int_sel_gpot = m1_int_sel_in_gpot+m1_int_sel_out_gpot
m1_int_sel_spike = m1_int_sel_in_spike+m1_int_sel_out_spike
N1_gpot = SelectorMethods.count_ports(m1_int_sel_gpot)
N1_spike = SelectorMethods.count_ports(m1_int_sel_spike)

m2_int_sel_in_gpot = Selector('/b/in/gpot[0:2]')
m2_int_sel_out_gpot = Selector('/b/out/gpot[0:2]')
m2_int_sel_in_spike = Selector('/b/in/spike[0:2]')
m2_int_sel_out_spike = Selector('/b/out/spike[0:2]')

m2_int_sel = m2_int_sel_in_gpot+m2_int_sel_out_gpot+\
             m2_int_sel_in_spike+m2_int_sel_out_spike
m2_int_sel_in = m2_int_sel_in_gpot+m2_int_sel_in_spike
m2_int_sel_out = m2_int_sel_out_gpot+m2_int_sel_out_spike
m2_int_sel_gpot = m2_int_sel_in_gpot+m2_int_sel_out_gpot
m2_int_sel_spike = m2_int_sel_in_spike+m2_int_sel_out_spike
N2_gpot = SelectorMethods.count_ports(m2_int_sel_gpot)
N2_spike = SelectorMethods.count_ports(m2_int_sel_spike)

# Using the ports in each of the above LPUs' interfaces, one can define a connectivity pattern between them as follows:

# In[3]:

from neurokernel.pattern import Pattern

pat12 = Pattern(m1_int_sel, m2_int_sel)

pat12.interface[m1_int_sel_out_gpot] = [0, 'in', 'gpot']
pat12.interface[m1_int_sel_in_gpot] = [0, 'out', 'gpot']
pat12.interface[m1_int_sel_out_spike] = [0, 'in', 'spike']
pat12.interface[m1_int_sel_in_spike] = [0, 'out', 'spike']
pat12.interface[m2_int_sel_in_gpot] = [1, 'out', 'gpot']
pat12.interface[m2_int_sel_out_gpot] = [1, 'in', 'gpot']
pat12.interface[m2_int_sel_in_spike] = [1, 'out', 'spike']
pat12.interface[m2_int_sel_out_spike] = [1, 'in', 'spike']

pat12['/a/out/gpot[0]', '/b/in/gpot[0]'] = 1
pat12['/a/out/gpot[1]', '/b/in/gpot[1]'] = 1
pat12['/b/out/gpot[0]', '/a/in/gpot[0]'] = 1
pat12['/b/out/gpot[1]', '/a/in/gpot[1]'] = 1
pat12['/a/out/spike[0]', '/b/in/spike[0]'] = 1
pat12['/a/out/spike[1]', '/b/in/spike[1]'] = 1
pat12['/b/out/spike[0]', '/a/in/spike[0]'] = 1
pat12['/b/out/spike[1]', '/a/in/spike[1]'] = 1

# ### A Simple Example: Creating an LPU

# To obviate the need to implement an LPU completely from scratch, the [Neurodriver](https://github.com/neurokernel/neurodriver) package provides a functional LPU class (``neurokernel.LPU.LPU.LPU``) that supports the following neuron and synapse models:
# 
# * Leaky Integrate-and-Fire (LIF) neuron (spiking neuron)
# * Morris-Lecar (ML) neuron (graded potential neuron)
# * Alpha function synapse
# * Conductance-based synapse (referred to as ``power_gpot_gpot``)
# 
# Note that although the ML model can in principle be configured as a spiking neuron model, the implementation in the LPU class is configured to output its membrane potential.
# 
# Alpha function synapses may be used to connect any type of presynaptic neuron to any type of postsynaptic neuron; the neuron presynaptic to a conductance-based synapse must be a graded potential neuron.
# 
# It should be emphasized that the above LPU implementation and the currently supported models are not necessarily optimal and may be replaced with improved implementations in the future.
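# As a point of reference, the LIF model named above is conventionally governed by the following membrane dynamics (a standard textbook form stated here for orientation; the parameter names mirror the GEXF attributes used below, though Neurodriver's exact formulation may differ):
# 
# $$C\frac{dV}{dt} = I(t) + \frac{V_{\mathrm{rest}} - V}{R}$$
# 
# where $C$ is the membrane capacitance, $R$ the membrane resistance, and $V_{\mathrm{rest}}$ the resting potential; when $V$ reaches the spike threshold, the neuron emits a spike and $V$ is reset to the reset potential.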
# The ``LPU`` class provided by Neurodriver may be instantiated with a graph describing its internal structure. The graph must be stored in [GEXF](http://gexf.net) file format with nodes and edges respectively corresponding to instances of the supported neuron and synapse models. To facilitate construction of an LPU, the [networkx](http://networkx.github.io) Python package may be used to set the parameters of the model instances. For example, the following code defines a simple network consisting of an LIF neuron with a single synaptic connection to an ML neuron; the synaptic current elicited by the LIF neuron's spikes is modeled by an alpha function:

# In[4]:

import numpy as np
import networkx as nx

G = nx.MultiDiGraph()

# Add a neuron node with the LeakyIAF model:
G.add_node('neuron0',                                   # UID
           **{'class': 'LeakyIAF',                      # component model
              'name': 'neuron_0',                       # component name
              'initV': np.random.uniform(-60.0, -25.0), # initial membrane voltage
              'reset_potential': -67.5489770451,        # reset voltage
              'threshold': -25.1355161007,              # spike threshold
              'resting_potential': 0.0,                 # resting potential
              'resistance': 1024.45570216,              # membrane resistance
              'capacitance': 0.0669810502993})          # membrane capacitance

# The above neuron is a projection neuron;
# create an output port for it:
G.add_node('neuron0_port',                       # UID
           **{'class': 'Port',                   # indicates it is a port
              'name': 'neuron_0_output_port',    # name of the port
              'selector': '/a[0]',               # selector of the port
              'port_io': 'out',                  # indicates it is an output port
              'port_type': 'spike'})             # indicates it is a spike port

# Connect the neuron node and its port:
G.add_edge('neuron0', 'neuron0_port')

# Add a second neuron node with the MorrisLecar model:
G.add_node('neuron1',
           **{'class': 'MorrisLecar',
              'name': 'neuron_1',
              'V1': 30.0,
              'V2': 15.0,
              'V3': 0.0,
              'V4': 30.0,
              'phi': 25.0,
              'offset': 0.0,
              'V_L': -50.,
              'V_Ca': 100.0,
              'V_K': -70.0,
              'g_Ca': 1.1,
              'g_K': 2.0,
              'g_L': 0.5,
              'initV': -52.14,
              'initn': 0.03})

# Add a synapse node with the AlphaSynapse model:
G.add_node('synapse_0_1',
           **{'class': 'AlphaSynapse',
              'name': 'synapse_0_1',
              'ar': 1.1*1e2,    # rise rate
              'ad': 1.9*1e3,    # decay rate
              'reverse': 65.0,  # reversal potential
              'gmax': 2*1e-3})  # maximum conductance

# Connect the presynaptic neuron to the synapse:
G.add_edge('neuron0', 'synapse_0_1')

# Connect the synapse to the postsynaptic neuron:
G.add_edge('synapse_0_1', 'neuron1')

# Export the graph to a GEXF file:
nx.write_gexf(G, 'simple_lpu.gexf.gz')

# We can prepare a simple pulse input and save it in an HDF5 file to pass to ``neuron_0`` as follows:

# In[5]:

import h5py

dt = 1e-4        # time resolution of model execution in seconds
dur = 1.0        # duration in seconds
Nt = int(dur/dt) # number of data points in time

start = 0.3
stop = 0.6

I_max = 0.6
t = np.arange(0, dt*Nt, dt)
I = np.zeros((Nt, 1), dtype=np.double)
I[np.logical_and(t>start, t<stop)] = I_max

# Save the pulse to an HDF5 file as a dataset named 'array'
# (the file and dataset names here are illustrative):
with h5py.File('simple_input.h5', 'w') as f:
    f.create_dataset('array', (Nt, 1), dtype=np.double, data=I)

# ### References

# <a name="chiang_three-dimensional_2011"></a>Chiang, A.-S., Lin, C.-Y., Chuang, C.-C., Chang, H.-M., Hsieh, C.-H., Yeh, C.-W., et al. (2011), Three-dimensional reconstruction of brain-wide wiring networks in *Drosophila* at single-cell resolution, *Current Biology*, 21(1), 1–11, doi:10.1016/j.cub.2010.11.056