Compute With Time, Not Over It: An Introduction to Spiking Neural Networks

Problem

Artificial Neural Networks (ANNs) have become the de facto standard tool for supervised, unsupervised, and reinforcement learning tasks. Their recent successes have built upon various algorithmic advances, but they have also relied heavily on the unprecedented availability of computing power and memory in data centers and cloud computing platforms. The resulting considerable energy requirements run counter to the constraints imposed by implementations on low-power mobile or embedded devices for applications such as personal health monitoring or neural prosthetics.

How can the human brain perform general and complex tasks at a minute fraction of the power required by state-of-the-art supercomputers and ANN-based models? Neurons in the human brain are different from those in an ANN: they process and communicate using sparse spiking signals over time, rather than real numbers, and they are dynamic devices, rather than static non-linearities (see Figure 1). Taking inspiration from this observation, Spiking Neural Networks (SNNs) have been introduced in the theoretical neuroscience literature as networks of dynamic spiking neurons that enable efficient on-line inference and learning. SNNs have the unique capability to process information encoded in the timing of spikes, with an energy cost as low as a few picojoules per spike. Proof-of-concept and commercial hardware implementations of SNNs (e.g., by Intel and IBM) have demonstrated orders-of-magnitude improvements in energy efficiency over ANNs.

Figure 1. Illustration of neural networks: (left) an ANN, where each neuron processes real numbers; and (right) an SNN, where dynamic spiking neurons process and communicate binary sparse spiking signals over time.

The most common SNN model consists of a network of neurons with deterministic dynamics, e.g., the leaky integrate-and-fire model, whereby a spike is emitted as soon as an internal state variable, known as the membrane potential, crosses a given threshold value. Learning problems should be formulated as the minimization of a loss function that directly accounts for the timing of the spikes emitted by the neurons. While this minimization can be carried out using Stochastic Gradient Descent (SGD), as for ANNs, it is made challenging by the non-differentiability of the behavior of spiking neurons with respect to the synaptic weights. In contrast to deterministic models, a probabilistic model for SNNs defines the outputs of all spiking neurons as jointly distributed binary random processes with a differentiable likelihood. A probabilistic viewpoint hence has significant analytic advantages, since flexible learning rules can be derived from principled learning criteria such as likelihood and mutual information.

Some Results

Our recent work, to be published in the IEEE Signal Processing Magazine (SPM) Special Issue on Learning Algorithms and Signal Processing for Brain-Inspired Computing, provides a review of probabilistic SNNs, with a specific focus on the most commonly used Generalized Linear Models (GLMs), covering probabilistic models, learning rules, and applications.

Figure 2. Illustration of a neuron with probabilistic dynamics, with exponential feedforward and feedback kernels.

As illustrated in Figure 2, in a GLM, each post-synaptic neuron i receives the signals emitted by pre-synaptic neurons through its synapses. Its internal state, which determines its probability of spiking, is the membrane potential: the sum of contributions from the incoming spikes of the pre-synaptic neurons, filtered by feedforward kernels, and from the past spiking behavior of the neuron itself, filtered by a feedback kernel. Under the GLM, the gradient of the log-likelihood of the spiking signals depends on the difference between the desired spiking behavior and the average behavior under the model.
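To make the GLM dynamics concrete, the following minimal Python sketch computes the spiking probability of a single post-synaptic neuron from spike histories filtered by exponential kernels. All names, kernel choices, and parameter values are illustrative placeholders rather than the exact quantities used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_kernel(tau, window):
    """Causal exponential kernel: recent spikes weigh more than old ones."""
    return np.exp(-np.arange(1, window + 1) / tau)

def glm_spike_prob(pre_spikes, post_spikes, weights, bias,
                   tau_ff=5.0, tau_fb=10.0):
    """Spiking probability of post-synaptic neuron i at the current time step.

    pre_spikes:  (num_pre, window) binary spike history of the pre-synaptic
                 neurons, most recent sample last.
    post_spikes: (window,) binary spike history of the neuron itself.
    """
    window = post_spikes.shape[0]
    k_ff = exp_kernel(tau_ff, window)[::-1]   # feedforward kernel (reversed so
    k_fb = exp_kernel(tau_fb, window)[::-1]   # the latest sample weighs most)
    # Membrane potential: weighted, filtered pre-synaptic traces plus the
    # filtered feedback trace of the neuron itself and a bias term.
    u = weights @ (pre_spikes @ k_ff) + post_spikes @ k_fb + bias
    return 1.0 / (1.0 + np.exp(-u))           # sigmoid link -> Bernoulli prob.

# Toy usage: three pre-synaptic neurons with random spike histories.
pre = rng.binomial(1, 0.2, size=(3, 20)).astype(float)
post = rng.binomial(1, 0.1, size=20).astype(float)
p = glm_spike_prob(pre, post, weights=np.array([0.5, -0.3, 0.8]), bias=-1.0)
next_spike = rng.binomial(1, p)               # sample the next output spike
```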

SNNs can be trained using supervised, unsupervised, and reinforcement learning by following a learning rule, which defines how the model parameters are updated on the basis of the available observations, either in a batch mode or in an on-line fashion. Our work derives Maximum Likelihood learning rules using SGD in both batch and on-line modes, for both fully observed and partially observed SNNs. The learning rules can be interpreted in light of the general form of the three-factor rule: the synaptic weight wj,i from pre-synaptic neuron j to post-synaptic neuron i is updated as wj,i ← wj,i + η × ℓ × pre(j) × post(i), where η is a learning rate; ℓ is a scalar global learning signal, which is absent in the case of fully observed SNNs; pre(j) is the filtered feedforward trace of pre-synaptic neuron j; and post(i) is the error term of post-synaptic neuron i, which appears in the gradient above. In the case of partially observed SNNs, variational inference is needed to approximate the true posterior distribution by means of a variational posterior. With a feedforward distribution for the variational posterior, we derive the learning rule using doubly stochastic gradient descent, whereby the global learning signal is obtained by sampling the spike signals of the unobserved neurons.
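As a rough illustration of how such a rule translates into code, the sketch below applies the generic three-factor update to a weight matrix. The traces and error terms are placeholders; for fully observed SNNs the global learning signal simply reduces to one.

```python
import numpy as np

def three_factor_update(w, lr, global_signal, pre_trace, post_error):
    """Three-factor rule: w[i, j] <- w[i, j] + lr * global * pre(j) * post(i).

    w:             (num_post, num_pre) synaptic weights
    global_signal: scalar learning signal (1.0 for fully observed SNNs)
    pre_trace:     (num_pre,) filtered feedforward traces of pre-synaptic neurons
    post_error:    (num_post,) error terms of the post-synaptic neurons, i.e.,
                   observed spike minus model spiking probability
    """
    return w + lr * global_signal * np.outer(post_error, pre_trace)

# Toy usage with placeholder traces and errors.
w = np.zeros((2, 3))
pre_trace = np.array([0.4, 0.0, 0.9])
post_error = np.array([1.0 - 0.7, 0.0 - 0.2])   # spike minus predicted probability
w = three_factor_update(w, lr=0.05, global_signal=1.0,
                        pre_trace=pre_trace, post_error=post_error)
```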

Figure 3. On-line prediction task based on an SNN with 9 visible and 2 hidden neurons; (left, top) real, analog time signal (dashed) and predicted, decoded signal (solid); (left, bottom) total number of spikes emitted by the SNN; and (right) spike raster plot of the SNN.

Experiments on an on-line prediction task allowed us to observe the potential of SNNs for ‘always-on’ event-driven applications. The SNN observes a time sequence and is trained to predict the next value of the sequence given the previous values, where the time sequence is encoded in the spike domain with ΔT spike samples per value of the sequence. In Figure 3, the SNN is seen to provide an accurate prediction (left, top), with the corresponding number of spikes (left, bottom) and the spikes emitted by the SNN (right). To demonstrate the efficiency benefits of SNNs that may arise from their unique time-encoding capabilities, we also compare the prediction error and the number of spikes under rate and time encoding schemes.
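For readers unfamiliar with spike-domain encoding, the snippet below gives a minimal example of one possible scheme, rate encoding, in which each analog value is mapped to ΔT binary samples whose spike probability grows with the value. The paper also considers time (latency) encoding, which is not shown here; the normalization range below is an arbitrary placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(value, delta_t, v_min=0.0, v_max=1.0):
    """Map a scalar to delta_t binary spike samples: the per-sample spike
    probability is proportional to the normalized value."""
    p = np.clip((value - v_min) / (v_max - v_min), 0.0, 1.0)
    return rng.binomial(1, p, size=delta_t)

def rate_decode(spikes, v_min=0.0, v_max=1.0):
    """Decode by counting spikes within the delta_t window."""
    return v_min + (v_max - v_min) * spikes.mean()

x = 0.65
spikes = rate_encode(x, delta_t=8)
x_hat = rate_decode(spikes)   # noisy estimate of x; accuracy grows with delta_t
```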

Please refer to the full paper for details.

Integrating Wireless Access and Edge Learning

Problem

Figure 1. Delay-constrained edge learning based on data received from a device.

The increasing number of connected devices has led to an explosion in the amounts of data being collected: smartphones, wearable devices and sensors generate data to an extent previously unseen. However, these devices often present power and computational capability constraints that do not allow them to make use of the data – for instance, to train Machine Learning (ML) models. In such circumstances, thanks to mobile edge computing, devices can rely on remote servers to perform the data processing (see Fig. 1). When the amount of data is large, or the access link slow, the amount of time required to transmit the data may be prohibitive. Given a delay constraint on the overall time available for both communication and learning, what is the joint communication-computation strategy that obtains the best performing ML model?

Pipelining communication and computation

Figure 2. Transmission and training protocol.

In a recent work to be published in IEEE Communications Letters, we propose to pipeline communication and computation with an optimized block size. We consider an Empirical Risk Minimization (ERM) problem, for which learning is carried out at the server side using Stochastic Gradient Descent (SGD). Training of the ML model can start as soon as the first data block arrives at the server, and it continues by fetching data from all the data blocks received thus far. To provide some intuition on the problem of optimizing the block size, note that communicating the entire data set as a single block reduces the bias of the training process, but it may not leave sufficient time for learning. Conversely, transmitting very few samples in each block will bias the model towards the samples sent in the first blocks, as many computation rounds will be based on these samples alone.
We determine an upper bound on the expected optimality gap at the end of the time limit, which indicates how far we are from an optimal model. We can then minimize this bound with respect to the communication block size to obtain an optimized value.
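The simulation sketch below illustrates the pipelining idea on a toy least-squares problem: blocks of samples arrive one after the other, and SGD steps run on all samples received so far until the deadline. The per-sample transmission time, per-step computation time, and learning rate are arbitrary placeholders, not the quantities used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pipelined_sgd(X, y, block_size, deadline,
                  t_per_sample=1.0, t_per_step=0.1, lr=0.01):
    """Pipelined communication/computation for a toy least-squares ERM problem.

    Blocks of `block_size` samples take block_size * t_per_sample time units to
    arrive; while the next block is in flight, SGD steps (t_per_step each) are
    run on all samples received so far, until the overall deadline.
    """
    n, d = X.shape
    w = np.zeros(d)
    received = 0
    t = block_size * t_per_sample                      # first block arrives
    while t < deadline:
        received = min(received + block_size, n)
        next_arrival = t + block_size * t_per_sample if received < n else deadline
        while t + t_per_step <= min(next_arrival, deadline):
            i = rng.integers(received)                 # sample from received data
            w -= lr * (X[i] @ w - y[i]) * X[i]         # SGD step on squared loss
            t += t_per_step
        t = next_arrival                               # idle until the next block
    return w

# Toy usage: compare a small and a large block size under the same deadline.
X = rng.standard_normal((1000, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(1000)
for B in (50, 500):
    w = pipelined_sgd(X, y, block_size=B, deadline=800.0)
    print(f"block size {B}: final training loss {np.mean((X @ w - y) ** 2):.3f}")
```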

Some results

Figure 3. Training loss versus training time for different values of the block size. Solid line: experimental and theoretical optima.

Numerical experiments allowed us to compare the optimal block size predicted by the bound with the optimal value determined numerically by running Monte Carlo experiments over all possible block sizes. Determining the optimal value through this extensive search allowed a gain of 3.8% in terms of the final training loss in one of our experiments (see Fig. 3). This small gain comes at the cost of a burdensome parameter optimization that took days on an HPC cluster, whereas minimizing the proposed bound takes seconds.
We further determined experimentally that our results, which were derived for convex loss functions satisfying the Polyak-Lojasiewicz condition, can be extended to non-convex models. As an example (not found in the paper), we studied the problem of training a multilayer perceptron with non-linear activations according to our scheme (see Fig. 4). Using the same dataset as described in the paper, we train a two-layer perceptron with a ReLU activation for the first layer and a linear activation for the second. The experiments show a behaviour similar to the convex example discussed in the main text. In particular, the derived bound accurately predicts the existence of an optimal value of the block size (see crosses).
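For reference, one way to write a two-layer perceptron of this kind is sketched below using PyTorch. The layer widths and the mini-batch are placeholders, not the actual configuration used for Fig. 4.

```python
import torch
from torch import nn

# Placeholder dimensions; the actual dataset dimensions differ.
input_dim, hidden_dim, output_dim = 10, 32, 1

mlp = nn.Sequential(
    nn.Linear(input_dim, hidden_dim),
    nn.ReLU(),                           # non-linear activation, first layer
    nn.Linear(hidden_dim, output_dim),   # linear activation, second layer
)

optimizer = torch.optim.SGD(mlp.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# One SGD step on a mini-batch drawn from the blocks received so far,
# as in the pipelined protocol described above (random placeholder data).
x, y = torch.randn(16, input_dim), torch.randn(16, output_dim)
optimizer.zero_grad()
loss_fn(mlp(x), y).backward()
optimizer.step()
```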

Figure 4. Training loss versus block size for different overhead sizes, for an MLP with non-linear activations.

The full paper can be found here.

Meta-learning: A new framework for few-pilot transmission in IoT networks

Problem

Fig. 1: Illustration of few-pilot training for an IoT system via meta-learning

For channels with an unknown model, or for which an optimal receiver of manageable complexity is not available, the design of demodulation and decoding can potentially benefit from a data-driven approach based on machine learning. Machine learning solutions, however, cannot be directly applied to Internet-of-Things (IoT) scenarios in which devices transmit sporadically using short packets with few pilot symbols. In fact, the few pilots do not provide enough data for training the receiver.

A Novel Solution based on Meta-learning

Fig. 2: MAML aims to find an initial value θ that minimizes the loss Lk(θ′k) for every device k after one step of update. In contrast, joint training carries out an optimization on the cumulative loss L1(θ) + L2(θ).

In a recent work to be presented at IEEE SPAWC 2019, we proposed a novel solution for demodulation in IoT networks based on the model-agnostic meta-learning (MAML) algorithm. The key idea is to use pilots from previous transmissions of other IoT devices as meta-training data in order to learn a demodulator that is able to quickly adapt to the end-to-end channel conditions of a new device from few pilots. MAML derives an inductive bias in the form of an initialization point for a neural network-based demodulator. As illustrated in Fig. 2, MAML seeks an initialization point such that the performance losses of the demodulators of all IoT devices, obtained after one update, are collectively minimized. In comparison, a more conventional approach to using meta-training data, namely joint training, pools together all the pilots received from the meta-training devices and minimizes the cumulative loss.
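To sketch the idea in code, the snippet below implements a first-order approximation of the MAML outer update for a toy neural-network demodulator. The paper's exact architecture, learning rates, and (second-order) update may differ; all names and sizes here are illustrative.

```python
import torch
from torch import nn

# Toy demodulator: maps a received sample (I/Q pair) to one of 4 PAM symbols.
def make_demodulator():
    return nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 4))

meta_model = make_demodulator()                 # shared initialization theta
meta_opt = torch.optim.SGD(meta_model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
inner_lr = 0.1

def maml_outer_step(tasks):
    """One first-order MAML step. Each task holds (adaptation pilots, labels,
    evaluation pilots, labels) for one meta-training device."""
    meta_opt.zero_grad()
    for x_tr, y_tr, x_ev, y_ev in tasks:
        # Inner loop: one SGD step starting from the shared initialization.
        adapted = make_demodulator()
        adapted.load_state_dict(meta_model.state_dict())
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        loss_fn(adapted(x_tr), y_tr).backward()
        inner_opt.step()
        inner_opt.zero_grad()
        # Outer loss L_k(theta'_k) after adaptation; first-order approximation:
        # its gradients are accumulated directly onto the initialization.
        loss_fn(adapted(x_ev), y_ev).backward()
        for p_meta, p_task in zip(meta_model.parameters(), adapted.parameters()):
            g = p_task.grad / len(tasks)
            p_meta.grad = g if p_meta.grad is None else p_meta.grad + g
    meta_opt.step()

# Toy usage: two meta-training devices, 8 pilots each (random placeholders).
tasks = [(torch.randn(8, 2), torch.randint(0, 4, (8,)),
          torch.randn(8, 2), torch.randint(0, 4, (8,))) for _ in range(2)]
maml_outer_step(tasks)
```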

Some Results

To give a taste of the results in the paper, we now provide an example.

Fig. 3: Probability of symbol error with respect to the number of pilots for the meta-test device (see paper).

In Fig. 3, we plot the probability of symbol error with respect to the number of pilots for a new IoT device. We adopt 4-PAM with 20 meta-training devices, each providing 8 pilots for meta-training. We compare the performance of the proposed MAML approach with: (i) a fixed initialization scheme, where data from the meta-training devices is not used; (ii) joint training with the meta-training dataset, as described above; (iii) an optimal ideal demodulator that assumes perfect channel state information.

MAML is seen to vastly outperform the baseline approaches (i) and (ii) by adapting to the channel of the meta-test device using only a few pilots. In particular, joint training fails to perform better than the fixed initialization. This confirms that, unlike conventional solutions, MAML can effectively transfer information from the meta-training devices to a new target device.

The full paper can be found here.

On the Interplay Between Coded Distributed Inference and Transmission in Mobile Edge Computing Systems

Problem

Introduced by the European Telecommunications Standards Institute (ETSI), the concept of mobile edge computing is by now established as a pillar of the 5G network architecture and an enabler of computation-intensive applications on mobile devices. As illustrated in the figure, with mobile edge computing, users offload local data to edge servers connected to wireless Edge Nodes (ENs). The ENs in turn carry out the necessary computations and return the desired output to the users on the wireless downlink.

As a baseline application, assume that each user wishes to compute a linear function Wx of a local data vector x, e.g., an image taken by the user’s camera, and a network-side model matrix W. Each EN acquires the users’ local data points x through uplink transmission at runtime, while the matrix W can be pre-stored at the ENs offline. Matrix W is generally large and hence it is split across the servers of multiple ENs. After the computing phase, the ENs transmit the computed outputs back to the users in the downlink.

Linear operations of the type illustrated above are of practical importance. For example, they underlie the implementation of recommendation systems based on collaborative filtering, and of similarity searches based on the cosine distance. In both cases, the user-side data is a vector x that embeds the user profile or a query, and the goal is to search through the matrix of all items on the basis of the inner products between the rows of matrix W and the user data x.

In the presence of storage redundancy, matrix W can be stored at the ENs in uncoded or coded form. In the first case, rows of the matrix are duplicated across different ENs. As a result, the ENs can transmit any shared computed output back to the users using cooperative transmission techniques. In contrast, with coding, no cooperative transmission is possible, but downlink transmission can start as soon as only a subset of ENs has completed its computations. The main question is: How should one balance the robustness to straggling ENs afforded by coding against the cooperative downlink transmission advantages of uncoded repetition storage in order to reduce the overall computation-plus-communication latency?
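To make the coded option concrete, the sketch below emulates MDS-coded storage of W over the reals with a random generator matrix, so that the product Wx can be recovered from the results of any k non-straggling ENs. This is only an illustrative toy construction; the storage and computing schemes analyzed in the paper are specified more carefully.

```python
import numpy as np

rng = np.random.default_rng(0)

def mds_encode(W, n_ens, k):
    """Split W row-wise into k blocks and form n_ens coded blocks with a random
    generator matrix G; any k rows of G are invertible with probability one,
    emulating an (n, k) MDS code over the reals."""
    blocks = np.array(np.split(W, k))               # (k, rows/k, cols)
    G = rng.standard_normal((n_ens, k))
    coded = np.einsum('nk,krc->nrc', G, blocks)     # one coded block per EN
    return coded, G

def mds_decode(partial_results, en_indices, G):
    """Recover W x from the coded partial products of any k finished ENs."""
    A = G[en_indices]                               # (k, k) sub-generator matrix
    coded_out = np.stack(partial_results)           # (k, rows/k)
    return np.linalg.solve(A, coded_out).reshape(-1)

# Toy usage: 4 ENs, an (n, k) = (4, 2) code, ENs 1 and 3 straggle.
W, x = rng.standard_normal((6, 5)), rng.standard_normal(5)
coded, G = mds_encode(W, n_ens=4, k=2)
results = {en: coded[en] @ x for en in (0, 2)}      # only ENs 0 and 2 finish
y = mds_decode(list(results.values()), list(results.keys()), G)
assert np.allclose(y, W @ x)                        # full output Wx recovered
```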

Some Results

Our work investigates three approaches: Uncoded Storage and Computing (UC), MDS coded Storage and Computing (MC), and a proposed Hybrid Scheme (HS) that concatenates an MDS code with a repetition code. The main contribution of this research is to demonstrate that HS is able to combine the robustness to stragglers afforded by MC and the cooperative downlink transmission advantages of UC.

To illustrate this point, consider the figure, where we plot the overall communication-plus-computation latency as a function of the ratio γ between the communication and computation latencies. The variability in the computing times is captured by a parameter η. It is observed that, as γ increases, the total latencies of both UC and MC grow linearly. When the variability in the computing times of the ENs is high, which here corresponds to η=0.8, MDS coding for the most part outperforms the UC scheme due to its robustness to stragglers. This is unless γ is large enough, in which case the downlink transmission latency becomes dominant and the UC scheme can benefit from redundant computations via cooperative EN transmission. In contrast, when the computing times have low variability, i.e., for η=8, MDS coding is uniformly outperformed by the UC scheme. The proposed hybrid coding strategy is seen to be effective in trading off computation and communication latencies by controlling the balance between robustness to stragglers and cooperative transmission opportunities.

The full paper can be found at ieeexplore (open access: arxiv)  

Combining Cloud and Edge Processing for Optimal Wireless Content Delivery

Problem

Content delivery is one of the most important use cases for mobile broadband services in 5G networks. As seen in Fig. 1, in 5G systems, content can potentially be stored at distributed units, or edge nodes (ENs), and hence closer to the users, with the aim of minimizing delivery latency and network congestion. Furthermore, a cloud processor, also known as the central unit, typically has access to the content library and connects to the ENs via finite-capacity fronthaul links. The central unit is not only necessary to enable content delivery when the overall edge cache capacity is insufficient, but it can also foster cooperative transmission from the ENs to the users by sharing common information with the ENs. However, any transmission from the cloud unit to the ENs comes at a latency cost due to the use of the fronthaul links. How should edge and fronthaul resources be optimally combined to minimize delivery latency?

In a recent work just published in the IEEE Transactions on Information Theory, we provided a conclusive answer to this question by taking an information-theoretic viewpoint and making the following simplifying assumptions:

1) only uncoded edge caching is allowed;
2) the cloud can only send fractions of contents via the fronthaul links;
3) the ENs are constrained to use standard linear precoding on the wireless channel;
4) the signal-to-noise ratio is sufficiently large.

Some Results

Our work derives a caching and delivery policy that offers a near-optimal trade-off between the fronthaul latency overhead and the downlink transmission latency from the ENs to the users. Two key regimes are identified, depending on system parameters such as the fronthaul capacity, the edge cache capacity, and the number of antennas per edge node:

1) When the overall cache capacity of the ENs is smaller than a given threshold, as illustrated in Fig. 2, it is necessary to use both fronthaul and edge caching resources in order to minimize latency. Importantly, even when the edge resources alone would be sufficient to deliver all requested contents, the policy generally makes use of fronthaul resources in order to foster cooperative EN transmission. In fact, when the fronthaul capacity is sufficiently large, the latency cost caused by the fronthaul delay does not offset the cooperative transmission gains in the downlink;

2) Otherwise, when the edge cache capacity is above the given threshold, as seen in Fig. 2, only edge caching should be used. Under this condition, the gains due to enhanced EN cooperation do not overcome the latency associated with fronthaul transmission. Interestingly, the threshold on the edge cache capacity increases with the number of antennas per EN, since edge processing becomes more effective when more antennas are deployed.

The full paper can be found at ieeexplore (open access: arxiv)

How can heterogeneous 5G services coexist on a shared Fog-Radio architecture?

Problem

Figure 1: A Fog-Radio Architecture with coexisting 5G services (URLLC and eMBB)

In 5G, Ultra-Reliable Low-Latency Communications (URLLC) – catering to use cases such as vehicular-to-cellular communications and Industry 4.0 – and enhanced Mobile Broadband (eMBB) – with its support of applications such as virtual reality – will share the same radio interface and network architecture. The 5G network architecture will be fog-like (see Fig. 1), enabling a flexible split of network functionalities between cloud and edge nodes. The cloud generally enables centralised processing, but at the cost of an increased latency for fronthaul transfer, while the edge can provide low-latency feedback but subject to the constraints of local processing.

This raises the following questions:

  • How should radio resources be shared between the two services?
  • How should the URLLC and eMBB network slices be configured?

A Novel Solution

In a recent work just published in IEEE Access, we proposed a novel solution, illustrated in Fig. 1, whereby

  • Baseband processing is carried out at the edge for the URLLC slice, hence ensuring low latency, and centrally at the Base Band Unit (BBU), as in a C-RAN, for the eMBB slice, with the aim of increasing spectral efficiency;
  • eMBB and URLLC services can share the same radio resources in a non-orthogonal fashion – an approach we define as Heterogeneous Non-Orthogonal Multiple Access (H-NOMA).

Towards the goal of managing the interference between URLLC and eMBB packets arising from H-NOMA, we consider a number of practical approaches in order of complexity. For the uplink, we have:

  • Treating URLLC interference as noise: each edge node forwards both the eMBB and URLLC signals to the BBU, where the eMBB signal is decoded while treating the URLLC signal as noise;
  • Puncturing: each edge node discards the received eMBB signal whenever a URLLC user is transmitting;
  • Successive Interference Cancellation (SIC): each edge node decodes and cancels the URLLC signal before transmitting only the eMBB signal to the cloud.

And for the downlink we consider:

  • Superposition coding: each edge node transmits a superposition of the eMBB and URLLC signals to the corresponding users;
  • Puncturing: each edge node discards the eMBB signal whenever a URLLC signal is generated at the edge node.

It is noted that there is no counterpart of successive interference cancellation for the downlink.
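As a very rough illustration of how the uplink options trade off, the toy calculation below compares the eMBB spectral efficiency of the three approaches for a single-cell scalar Gaussian model, with the URLLC activation probability acting as the traffic load. This is a simplified caricature, not the multi-cell fog-RAN model analyzed in the paper, and interference cancellation is assumed to be ideal.

```python
import numpy as np

def embb_rate_uplink(snr_embb, snr_urllc, p_urllc, scheme):
    """Toy per-symbol eMBB spectral efficiency (bits/s/Hz) under the three
    uplink interference-management options; p_urllc is the probability that a
    URLLC user is active on a given symbol."""
    if scheme == "treat-as-noise":
        sinr = snr_embb / (1.0 + snr_urllc)          # URLLC seen as extra noise
        return ((1 - p_urllc) * np.log2(1 + snr_embb)
                + p_urllc * np.log2(1 + sinr))
    if scheme == "puncturing":                       # hit symbols are discarded
        return (1 - p_urllc) * np.log2(1 + snr_embb)
    if scheme == "sic":                              # ideal cancellation at the EN
        return np.log2(1 + snr_embb)
    raise ValueError(scheme)

for scheme in ("treat-as-noise", "puncturing", "sic"):
    rate = embb_rate_uplink(snr_embb=10.0, snr_urllc=5.0, p_urllc=0.3,
                            scheme=scheme)
    print(f"{scheme}: {rate:.2f} bits/s/Hz")
```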

Some Results

Figure 2

To give a taste of the results in the paper, we now provide an example. In Fig. 2, we plot the eMBB average per-cell sum-rate (black curves) and the URLLC per-cell outage capacity (red curves) for the uplink as a function of the URLLC activation probability, which is a measure of the URLLC traffic load. In general, the results demonstrate the potential advantages of H-NOMA for both services, especially when the URLLC traffic load is sufficiently large and successive interference cancellation is enabled at the edge nodes.

Link to our paper: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8612914

Hello, world!

Welcome to the King’s Centre for Learning and Information Processing research blog.

We’re excited to share with you our findings in the future!