Combining Cloud and Edge Processing for Optimal Wireless Content Delivery

Problem

Content delivery is one of the most important use cases for mobile broadband services in 5G networks. As seen in Fig. 1, in 5G systems content can be stored at distributed units, or edge nodes (ENs), and hence closer to the users, with the aim of minimizing delivery latency and network congestion. Furthermore, a cloud processor, also known as the central unit, typically has access to the full content library and connects to the ENs via finite-capacity fronthaul links. The central unit is not only necessary to enable content delivery when the overall edge cache capacity is insufficient, but it can also foster cooperative transmission from the ENs to the users by sharing common information with the ENs. However, any transmission from the cloud to the ENs comes at a latency cost due to the use of the fronthaul links. How should edge and fronthaul resources be optimally combined to minimize delivery latency?

In a recent work published in the IEEE Transactions on Information Theory, we provide a conclusive answer to this question by taking an information-theoretic viewpoint and making the following simplifying assumptions:

1) only uncoded edge caching is allowed;
2) the cloud can only send fractions of contents via the fronthaul links;
3) the ENs are constrained to use standard linear precoding on the wireless channel;
4) the signal-to-noise ratio is sufficiently large.

Some Results

Our work derives a caching and delivery policy that offers a near-optimal trade-off between the fronthaul latency overhead and the downlink transmission latency from the ENs to the users. Two regimes are identified, depending on system parameters such as the fronthaul capacity, the edge cache capacity, and the number of antennas per edge node:

1) When the overall cache capacity of the ENs is smaller than a given threshold, as illustrated in Fig. 2, it is necessary to use both fronthaul and edge caching resources in order to minimize latency. Importantly, even when the edge resources alone would be sufficient to deliver all requested contents, the policy generally makes use of fronthaul resources in order to foster cooperative transmission among the ENs. In fact, when the fronthaul capacity is sufficiently large, the latency cost caused by the fronthaul delay is outweighed by the cooperative transmission gains in the downlink;

2) Otherwise, when the edge cache capacity is above this threshold, as seen in Fig. 2, only edge caching should be used. Under this condition, the gains due to enhanced EN cooperation do not overcome the latency associated with fronthaul transmission. Interestingly, the threshold on the edge cache capacity increases as the number of antennas at the ENs increases, since edge processing becomes more effective when more antennas are deployed. A toy numerical illustration of this threshold behavior is sketched below.
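To make the trade-off concrete, the following minimal Python sketch compares an edge-only policy with a cloud-aided one under a toy latency model. All formulas, parameter values, and function names (latency_edge_only, latency_cloud_aided, the fronthaul-load accounting) are illustrative assumptions, not the achievable schemes or latency expressions derived in the paper; the sketch only reproduces the qualitative threshold behavior described above.

```python
import numpy as np

# Toy system parameters (illustrative values, not from the paper)
N_EN, K = 3, 3        # number of edge nodes and of users
M = 1                 # antennas per edge node
C_F = 1.0             # normalized fronthaul capacity (toy value)

def latency_edge_only(mu):
    """Serve every request from the edge caches alone (no cloud help).
    Feasible only when the ENs jointly cache the whole library."""
    if N_EN * mu < 1.0:
        return np.inf
    return K / min(M, K)              # non-cooperative spatial gain only

def latency_cloud_aided(mu):
    """The cloud sends the uncached fraction plus duplicated content that
    enables joint (cooperative) precoding across ENs: toy accounting."""
    fh_load = max(1.0 - N_EN * mu, 0.0) + mu   # normalized fronthaul load
    t_fronthaul = K * fh_load / C_F            # fronthaul transfer latency
    t_downlink = K / min(N_EN * M, K)          # full-cooperation downlink
    return t_fronthaul + t_downlink

# Scan the per-EN fractional cache size mu and pick the better policy
for mu in np.linspace(0.0, 1.0, 11):
    t_edge, t_cloud = latency_edge_only(mu), latency_cloud_aided(mu)
    mode = "edge-only" if t_edge <= t_cloud else "cloud-aided"
    print(f"mu={mu:.1f}  latency={min(t_edge, t_cloud):.2f}  ({mode})")
```

With these toy numbers the crossover sits at a cache size of 2/3 of the library: below it the cloud-aided policy achieves lower latency, above it edge-only wins. Increasing C_F pushes the crossover upwards, mirroring the observation that a faster fronthaul makes cooperation worthwhile over a wider range of cache sizes.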

The full paper can be found at https://arxiv.org/pdf/1712.04266.pdf

How can heterogeneous 5G services coexist on a shared Fog-Radio architecture?

Problem

Figure 1: A Fog-Radio Architecture with coexisting 5G services (URLLC and eMBB)

In 5G, Ultra-Reliable Low-Latency Communications (URLLC) – catering to use cases such as vehicular-to-cellular communications and Industry 4.0 – and enhanced Mobile Broadband (eMBB) – with its support of applications such as virtual reality – will share the same radio interface and network architecture. The 5G network architecture will be fog-like (see Fig. 1), enabling a flexible split of network functionalities between the cloud and the edge nodes. The cloud generally enables centralised processing, but at the cost of increased latency due to fronthaul transfer, while the edge can provide low-latency feedback, subject however to the constraints of local processing.

This raises the following questions:

  • How should radio resources be shared between the two services?
  • How should the URLLC and eMBB network slices be configured?

A Novel Solution

In a recent work published in IEEE Access, we proposed a novel solution, illustrated in Fig. 1, whereby

  • Baseband processing is carried out at the edge for the URLLC slice, hence ensuring low latency, and centrally at the Baseband Unit (BBU), as in a C-RAN, for the eMBB slice, with the aim of increasing spectral efficiency;
  • eMBB and URLLC services can share the same radio resources in a non-orthogonal fashion – an approach we define as Heterogeneous Non-Orthogonal Multiple Access (H-NOMA).

Towards the goal of managing the interference between URLLC and eMBB packets arising from H-NOMA, we consider a number of practical approaches, in increasing order of complexity. For the uplink (see the toy sketch after this list), we have:

  • Treating URLLC interference as noise: each edge node forwards both the eMBB and the URLLC signals to the BBU, where the eMBB signal is decoded while treating the URLLC signal as noise;
  • Puncturing: each edge node discards the received eMBB signal whenever a URLLC user is transmitting;
  • Successive Interference Cancellation (SIC): each edge node decodes and cancels the URLLC signal before forwarding only the eMBB signal to the cloud.
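As a rough illustration of how these three options trade off, here is a minimal Python sketch of the average eMBB spectral efficiency under a toy Gaussian model. The SNR values, the activation probability, and the assumption of perfect cancellation under SIC are all illustrative; the paper's actual analysis (including the URLLC outage constraints) is considerably more refined.

```python
import numpy as np

# Toy parameters (illustrative, not from the paper)
snr_embb, snr_urllc = 10.0, 10.0   # linear-scale SNRs at the edge node
p_act = 0.3                        # URLLC activation probability

def rate(snr):
    """Shannon spectral efficiency in bit/s/Hz."""
    return np.log2(1.0 + snr)

# 1) Treat URLLC interference as noise (TIN): eMBB is always decoded,
#    but suffers interference whenever a URLLC user is active.
r_tin = (1 - p_act) * rate(snr_embb) + p_act * rate(snr_embb / (1.0 + snr_urllc))

# 2) Puncturing: eMBB symbols overlapping a URLLC transmission are discarded.
r_punct = (1 - p_act) * rate(snr_embb)

# 3) SIC: the URLLC signal is decoded first (treating eMBB as noise) and
#    cancelled, leaving the eMBB signal interference-free (idealised).
r_sic = rate(snr_embb)

print(f"eMBB rate  TIN: {r_tin:.2f}  puncturing: {r_punct:.2f}  SIC: {r_sic:.2f} bit/s/Hz")
```

Even this crude model captures the basic intuition: SIC dominates for the eMBB rate, TIN recovers part of the rate that puncturing throws away, and the gaps widen as the URLLC load p_act grows.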

And for the downlink we consider:

  • Superposition coding: each edge node transmits a superposition of the eMBB and URLLC signals to the corresponding users;
  • Puncturing: each edge node discards the eMBB signal whenever a URLLC signal is generated at the edge node.

Note that successive interference cancellation has no counterpart in the downlink. A toy comparison of the two downlink options is sketched below.
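In the same spirit as the uplink sketch, the following Python snippet compares the two downlink options under a toy single-cell Gaussian model. The power-split parameter alpha and all other values are illustrative assumptions rather than quantities from the paper.

```python
import numpy as np

# Toy parameters (illustrative, not from the paper)
snr = 10.0     # downlink SNR at both users (linear scale)
p_act = 0.3    # probability a URLLC packet is generated in a slot
alpha = 0.2    # power fraction reserved for the URLLC layer

def rate(snr):
    """Shannon spectral efficiency in bit/s/Hz."""
    return np.log2(1.0 + snr)

# Superposition coding: when a URLLC packet is present, each user decodes
# its own layer treating the other layer as noise (conservative model).
r_embb_sup = (1 - p_act) * rate(snr) \
           + p_act * rate((1 - alpha) * snr / (1.0 + alpha * snr))
r_urllc_sup = rate(alpha * snr / (1.0 + (1 - alpha) * snr))

# Puncturing: the URLLC packet takes the whole slot at full power,
# and the overlapping eMBB transmission is discarded.
r_embb_punct = (1 - p_act) * rate(snr)
r_urllc_punct = rate(snr)

print(f"eMBB  superposition: {r_embb_sup:.2f}  puncturing: {r_embb_punct:.2f} bit/s/Hz")
print(f"URLLC superposition: {r_urllc_sup:.2f}  puncturing: {r_urllc_punct:.2f} bit/s/Hz")
```

The sketch exposes the basic tension: superposition preserves some eMBB rate during URLLC activity at the price of a reduced URLLC rate, whereas puncturing fully protects URLLC but wipes out the overlapping eMBB symbols.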

Some Results

Figure 2: Uplink eMBB average per-cell sum-rate (black) and URLLC per-cell outage capacity (red) versus the URLLC activation probability.

To give a taste of the results in the paper, we now provide an example. In Fig. 2, we plot the eMBB average per-cell sum-rate (black curves) and the URLLC per-cell outage capacity (red curves) for the uplink as a function of the URLLC activation probability, which measures the URLLC traffic load. Overall, the results demonstrate the potential advantages of H-NOMA for both services, especially when the URLLC traffic load is sufficiently large and successive interference cancellation is enabled at the edge nodes.

Link to our paper: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8612914

Hello, world!

Welcome to the research blog of King’s Centre for Learning and Information Processing.

We’re excited to share our findings with you in the posts to come!