Quantile Learn-Then-Test: Quantile-Based Risk Control for Hyperparameter Optimization

Motivation

Hyperparameter optimization (HPO) is essential in tuning artificial intelligence (AI) models for practical engineering applications, as it governs model performance across varied deployment scenarios. Conventional HPO techniques such as random search and Bayesian optimization often focus on optimizing average performance without providing statistical guarantees, which can be limiting in high-stakes engineering tasks where system reliability is crucial. The learn-then-test (LTT) method [1], introduced in recent research, offers statistical guarantees on the average risk associated with selected hyperparameters. However, in fields like wireless networks and real-time systems, designers frequently need assurance that a specified quantile of performance will meet reliability thresholds.

To address this need, our proposed method, Quantile Learn-Then-Test (QLTT), extends LTT to offer statistical guarantees on quantiles of risk rather than just the average. This quantile-based approach provides greater robustness in real-world applications where it’s critical to control risk-aware objectives, ensuring that the system meets performance goals in a specified fraction of scenarios.

Quantile Learn-Then-Test (QLTT)

LTT, as introduced in [1], guarantees that the average risk remains within a defined threshold with high probability. However, many real-world applications require tighter control over performance measures. For instance, in cellular network scheduling, system designers may need to ensure that key performance indicators (KPIs) like latency and throughput stay within acceptable limits for a majority of users, not just on average.

Our approach, QLTT, extends LTT to provide guarantees on any specified quantile of risk. Specifically, QLTT selects hyperparameters that ensure a predefined quantile of the risk distribution meets a target threshold. This probabilistic guarantee, based on quantile risk control, better aligns with the needs of applications where performance variability is critical.

Methodology

QLTT builds on LTT’s multiple hypothesis testing framework, incorporating a quantile-specific confidence interval, obtained using [2], to achieve guarantees on the desired quantile of risk. The method takes a set of hyperparameter candidates and identifies those that meet the desired quantile threshold with high probability, enhancing reliability beyond what is possible through average risk control alone. This quantile-based approach enables QLTT to adapt to varying risk tolerance levels, making it versatile for different engineering contexts.
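To make the testing step concrete, the following minimal Python sketch illustrates the flavor of the approach under simplifying assumptions: it replaces the confidence sequences of [2] with a fixed-sample, distribution-free order-statistic bound on the quantile, and uses a Bonferroni correction across candidates to control the family-wise error rate. It is a sketch of the idea, not the exact QLTT procedure.

```python
import numpy as np
from scipy.stats import binom

def quantile_ucb(losses, q, delta):
    """Distribution-free upper confidence bound on the q-quantile:
    returns the k-th order statistic, with k chosen so that it upper
    bounds the true q-quantile with probability at least 1 - delta."""
    x = np.sort(np.asarray(losses))
    n = len(x)
    for k in range(1, n + 1):
        # P(X_(k) >= q-quantile) >= P(Binomial(n, q) <= k - 1)
        if binom.cdf(k - 1, n, q) >= 1 - delta:
            return x[k - 1]
    return np.inf  # sample too small to certify the quantile

def qltt_select(candidate_losses, q, alpha, delta):
    """Keep hyperparameters whose q-quantile of risk is certified to be
    at most alpha; Bonferroni across candidates gives FWER <= delta."""
    m = len(candidate_losses)
    return [i for i, losses in enumerate(candidate_losses)
            if quantile_ucb(losses, q, delta / m) <= alpha]

# Toy usage: three candidates with increasingly heavy-tailed latency.
rng = np.random.default_rng(0)
candidates = [rng.exponential(s, size=2000) for s in (0.5, 1.0, 2.0)]
print(qltt_select(candidates, q=0.9, alpha=2.0, delta=0.1))
```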

Experiments

To demonstrate QLTT’s effectiveness, we applied it to a radio access scheduling problem in wireless communication [3]. Here, the task was to allocate limited resources among users with different quality of service (QoS) requirements, ensuring that latency requirements were met for the vast majority of users in real-time.

Our experimental results highlight QLTT’s advantage over LTT with respect to quantile control. While both methods controlled the average risk effectively, only QLTT managed to limit the higher quantiles of the risk distribution, reducing instances where latency exceeded critical thresholds.

The following figure compares the distributions of packet delays for conventional LTT and QLTT for a test run of the simulation. While LTT shows considerable variance, with some instances exceeding the desired threshold, QLTT consistently meets the reliability requirements by providing tighter control over risk quantiles.

Conclusion

QLTT extends the applicability of LTT by providing hyperparameter sets with guarantees on quantiles of a risk measure, thus offering a more rigorous approach to HPO for risk-sensitive engineering applications. Our experiments confirm that QLTT effectively addresses scenarios where quantile risk control is required, providing a robust solution to ensure high-confidence performance across diverse conditions.

Future work may explore expanding QLTT to more complex settings, such as other types of risk functionals and broader engineering challenges. By advancing risk-aware HPO, QLTT represents a significant step toward reliable, application-oriented AI optimization in critical industries.

References

[1] Angelopoulos, A.N., Bates, S., Candès, E.J., Jordan, M.I., & Lei, L. (2021). Learn then test: Calibrating predictive algorithms to achieve risk control. arXiv preprint arXiv:2110.01052.

[2] Howard, S.R., & Ramdas, A. (2022). Sequential estimation of quantiles with applications to A/B testing and best-arm identification. Bernoulli, 28(3), 1704–1728.

[3] De Sant Ana, P.M., & Marchenko, N. (2020). Radio Access Scheduling using CMA-ES for Optimized QoS in Wireless Networks. IEEE Globecom Workshops (GC Wkshps), pp. 1-6.

Statistically Valid Information Bottleneck via Multiple Hypothesis Testing

Motivation

In machine learning, the information bottleneck (IB) problem [1] is a critical framework used to extract compressed features that retain sufficient information for downstream tasks. However, a major challenge lies in selecting hyperparameters that ensure the learned features comply with information-theoretic constraints. Current methods rely on heuristic tuning without providing guarantees that the chosen features satisfy these constraints. This lack of rigor can lead to suboptimal models. For example, in the context of language model distillation, failing to enforce these constraints may result in the distilled model losing important information from the teacher model.

Our proposed method, “IB via Multiple Hypothesis Testing” (IB-MHT), addresses this issue by introducing a statistically valid solution to the IB problem. We ensure that the features learned by any IB solver meet the IB constraints with high probability, regardless of the dataset size. IB-MHT builds on Pareto testing [2] and learn-then-test (LTT) [3] methods to wrap around existing IB solvers, providing statistical guarantees on the information bottleneck constraints. This approach offers robustness and reliability compared to conventional methods that may not meet these constraints in practice.

IB-MHT

In the traditional IB framework, we aim to minimize the mutual information between the input data X and a compressed representation T, while ensuring that T retains sufficient information about a target variable Y. This is expressed mathematically as minimizing I(X;T) under the constraint that I(T;Y) exceeds a certain threshold. In practice, though, solving this problem often relies on tuning a Lagrange multiplier or hyperparameters to balance the compression of T and the information retained about Y. These approaches do not guarantee that the solution will meet the required information-theoretic constraints.
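For concreteness, the constrained problem and the Lagrangian relaxation that is usually solved in practice can be written as follows (a textbook statement of the IB problem, included here for reference):

```latex
% Constrained IB problem: compress X into T while retaining
% at least alpha bits of information about Y
\min_{p(t|x)} \; I(X;T) \quad \text{subject to} \quad I(T;Y) \geq \alpha

% Lagrangian relaxation commonly solved in practice,
% with multiplier \beta > 0 tuned heuristically
\min_{p(t|x)} \; I(X;T) - \beta \, I(T;Y)
```

Tuning β does not by itself certify that the constraint I(T;Y) ≥ α is satisfied by the returned solution, which is precisely the gap that IB-MHT closes.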

To overcome this, IB-MHT introduces a probabilistic approach where we wrap around any existing IB solver to ensure that the learned features satisfy the IB constraint with high probability. By leveraging Pareto testing, IB-MHT identifies the optimal hyperparameters through a family-wise error rate (FWER) testing mechanism, ensuring that the final solution is statistically sound.
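As a rough illustration of the FWER-controlled selection step, the sketch below tests a pre-ordered sequence of hyperparameter candidates with fixed-sequence testing, stopping at the first failure. The p-value is a Hoeffding bound for a bounded per-sample loss, which is assumed here to act as a proxy for the IB constraint violation; this is a simplified stand-in for the Pareto testing procedure of [2], not its implementation.

```python
import numpy as np

def hoeffding_pvalue(losses, alpha):
    """p-value for H0: E[loss] > alpha, for per-sample losses in [0, 1]."""
    n, mean = len(losses), float(np.mean(losses))
    return float(np.exp(-2.0 * n * max(alpha - mean, 0.0) ** 2))

def fixed_sequence_test(ordered_cal_losses, alpha, delta):
    """Test candidates in a pre-specified order (e.g., ordered along an
    estimated Pareto front, as in Pareto testing) and stop at the first
    non-rejection. All candidates rejected before that point are
    certified, with family-wise error rate at most delta."""
    certified = []
    for i, losses in enumerate(ordered_cal_losses):
        if hoeffding_pvalue(losses, alpha) <= delta:
            certified.append(i)
        else:
            break
    return certified
```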

Experiments

To validate the effectiveness of IB-MHT, we conducted experiments on both classical and deterministic IB [4] formulations. One experiment was performed on the MNIST dataset, where we applied IB-MHT to ensure that the learned representations met the IB constraints with high probability. In another experiment, we applied IB-MHT to the task of distilling language models, transferring knowledge from a large teacher model to a smaller student model. We demonstrated that IB-MHT successfully guarantees that the compressed features retain sufficient information about the target variable. Compared to conventional IB methods, IB-MHT showed significant improvements in both the reliability and consistency of the learned representations, with reduced variability in the mutual information estimates.

The following figure illustrates the difference between the performance of conventional IB solvers and IB-MHT in a classical IB setup. While the conventional solver shows a wide variance in the mutual information values, IB-MHT provides tighter control, ensuring that the learned representation T meets the desired information-theoretic constraints.

Conclusion

IB-MHT introduces a reliable, statistically valid solution to the IB problem, addressing the limitations of heuristic hyperparameter tuning in existing methods. By guaranteeing that the learned features meet the required information-theoretic constraints with high probability, IB-MHT enhances the robustness and performance of IB solvers across a range of applications. Future work can explore extending IB-MHT to continuous variables and applying similar techniques to other information-theoretic objectives such as convex divergences.

References

[1] Naftali Tishby, Fernando Pereira, and William Bialek. The information bottleneck method. Proceedings of the 37th Allerton Conference on Communication, Control, and Computing, 1999.

[2] Bracha Laufer-Goldshtein, Adam Fisch, Regina Barzilay, and Tommi Jaakkola. Efficiently controlling multiple risks with Pareto testing. International Conference on Learning Representations, 2023.

[3] Anastasios N. Angelopoulos, Stephen Bates, Emmanuel J. Candès, Michael I. Jordan, and Lihua Lei. Learn then test: Calibrating predictive algorithms to achieve risk control. arXiv preprint arXiv:2110.01052, 2021.

[4] Strouse, Daniel, and David Schwab. The deterministic information bottleneck. Neural Computation, 2017.

Neuromorphic Wireless Split Computing with Wake-Up Radios

Context and Motivations

Neuromorphic processing units (NPUs), such as Intel’s Loihi or BrainChip’s Akida, leverage the sparsity of temporal data to reduce processing energy by activating a small subset of neurons and synapses at each time step. When deployed for split computing in edge-based systems, remote NPUs, each carrying out part of the computation, can reduce the communication power budget by communicating asynchronously using sparse impulse radio (IR) waveforms [1-2], a form of ultra-wide bandwidth (UWB) spread-spectrum signaling.

However, the power savings afforded by sparse transmitted signals are limited to the transmitter’s side, which emits impulsive waveforms only at the times of synaptic activations. The main contributor to the overall energy consumption remains the power required to keep the main radio on.

Architecture

To address this architectural problem, as seen in the figure above, our recent work [3-4] proposes a novel architecture that integrates a wake-up radio mechanism within a split computing system consisting of remote, wirelessly connected NPUs. In the proposed architecture, the NPU at the transmitter side remains idle until a signal of interest is detected by the signal detection module. Subsequently, a wake-up signal (WUS) is transmitted by the wake-up transmitter over the channel to the wake-up receiver, which activates the main receiver. The IR transmitter then modulates the encoded signals from the NPU and sends them to the main receiver. Finally, the NPU at the receiver side decodes the received signals and makes an inference decision.

Digital twin-aided design methodology with reliability guarantee

A key challenge in the design of wake-up radios is the selection of the thresholds for sensing, WUS detection, and decision making (the three λ’s in the figure above). A conventional solution would be to calibrate the thresholds via on-air testing, i.e., by trying out different thresholds on the actual physical system. On-air calibration is, however, expensive in terms of spectral resources, and there is generally no guarantee that the selected thresholds would provide the desired performance levels for the end application.

To address this design problem, as illustrated in the figure below, this work proposes a novel methodology, dubbed DT-LTT, that leverages a digital twin, i.e., a simulator, of the physical system, coupled with a sequential statistical testing approach that provides theoretical reliability guarantees. Specifically, the digital twin is leveraged to pre-select a sequence of hyperparameters to be tested via on-air calibration using learn then test (LTT) [5]. The proposed DT-LTT calibration procedure is proved to guarantee the reliability of the receiver’s decisions irrespective of the fidelity of the digital twin and of the data distribution.
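A minimal sketch of this flow is given below, under illustrative assumptions: the digital twin supplies a cheap simulated risk that is used only to rank candidate threshold triplets, and the on-air losses are assumed bounded in [0, 1] so that a Hoeffding p-value applies. The exact DT-LTT statistics are derived in [3]; `simulated_risk` and `on_air_losses` are hypothetical callables standing in for the twin and the physical system.

```python
import numpy as np

def hoeffding_pvalue(losses, alpha):
    """p-value for H0: E[loss] > alpha, for on-air losses in [0, 1]."""
    n, mean = len(losses), float(np.mean(losses))
    return float(np.exp(-2.0 * n * max(alpha - mean, 0.0) ** 2))

def dt_ltt(candidates, simulated_risk, on_air_losses, alpha, delta):
    """DT-LTT-flavored calibration: the digital twin ranks candidates
    (no statistical validity is required from the twin), then on-air
    measurements are tested in that fixed order, stopping at the first
    failure. The guarantee comes from the on-air test alone, so twin
    fidelity affects efficiency, not reliability."""
    order = sorted(range(len(candidates)), key=lambda i: simulated_risk(i))
    certified = []
    for i in order:
        if hoeffding_pvalue(on_air_losses(i), alpha) <= delta:
            certified.append(candidates[i])
        else:
            break  # fixed-sequence testing: stop to preserve FWER control
    return certified  # each certified to have risk <= alpha, FWER <= delta
```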

Experiment

We compare the proposed DT-LTT calibration method with conventional neuromorphic wireless communications without wake-up radio, conventional LTT without a digital twin, and DT-LTT with an always-on main radio. As shown in the figure below, the conventional calibration scheme fails to meet the reliability requirement, while the basic LTT scheme selects overly conservative hyperparameters, often including all classes in the predicted set, which trivially yields zero expected loss at the cost of uninformative decisions. In contrast, the proposed DT-LTT schemes are guaranteed to meet the probabilistic reliability requirement.

References

[1] J. Chen, N. Skatchkovsky and O. Simeone, “Neuromorphic Wireless Cognition: Event-Driven Semantic Communications for Remote Inference,” IEEE Transactions on Cognitive Communications and Networking, vol. 9, no. 2, pp. 252-265, April 2023.

[2] J. Chen, N. Skatchkovsky and O. Simeone, “Neuromorphic Integrated Sensing and Communications,” in IEEE Wireless Communications Letters, vol. 12, no. 3, pp. 476-480, March 2023.

[3] J. Chen, S. Park, P. Popovski, H. V. Poor and O. Simeone, “Neuromorphic Split Computing with Wake-Up Radios: Architecture and Design via Digital Twinning,” in IEEE Transactions on Signal Processing, Early Access, 2024.

[4] J. Chen, S. Park, P. Popovski, H. V. Poor and O. Simeone, “Neuromorphic Semantic Communications with Wake-Up Radios,” Proc. IEEE 25th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Lucca, Italy, pp. 91-95, 2024.

[5] Angelopoulos, Anastasios N., et al. “Learn then test: Calibrating predictive algorithms to achieve risk control,” arXiv preprint arXiv:2110.01052, 2021.

Localized Adaptive Risk Control

Motivation

In many online decision-making settings, ensuring that predictions are well-calibrated is crucial for the safe operation of systems. One way to achieve calibration is through adaptive risk control, which adjusts the uncertainty estimates of a machine learning model based on past feedback [1]. This method guarantees that the calibration error over an arbitrary sequence is controlled and that, in the long run, the model becomes statistically well-calibrated if the data points are independently and identically distributed [2]. However, these schemes only ensure calibration when averaged across the entire input space, raising concerns about fairness and robustness. For instance, consider the figure below, which depicts a tumor segmentation model calibrated to identify potentially cancerous areas. If the model is calibrated using images from different datasets, marginal calibration may be achieved by prioritizing certain subpopulations at the expense of others.

A tumor segmentation model is calibrated using data from two sources to ensure that the marginal false negative rate (FNR) is controlled. However, as shown on the right, the error rate for one source is significantly lower than for the other, leading to unfair performance across subpopulations.

Localized Adaptive Risk Control

To address this issue, our recent work at NeurIPS 2024 proposes a method to localize uncertainty estimates by leveraging the connection between online learning in reproducing kernel Hilbert spaces (RKHS) [3] and online calibration methods. The key idea behind our approach is to use feedback to adjust a model’s confidence levels only in regions of the input space that are near observed data points. This allows for localized calibration, tailoring uncertainty estimates to specific areas of the input space. We demonstrate that, for adversarial sequences, the number of mistakes can be controlled. More importantly, the scheme provides asymptotic guarantees that are localized, meaning that they remain valid under a wide range of covariate shifts, for instance those induced by considering certain subpopulations of the data.
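The update rule can be sketched in a few lines: the calibration parameter becomes a function in an RKHS, represented as a kernel expansion over past inputs, and each feedback signal nudges the function up or down only locally around the observed point. This is a schematic rendering of the idea, with a Gaussian kernel and a fixed learning rate chosen purely for illustration.

```python
import numpy as np

def gauss_kernel(x1, x2, h=1.0):
    return float(np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * h ** 2)))

class LocalizedARC:
    """Localized online calibration sketch: the threshold used to build
    prediction sets is a kernel expansion over past inputs, so feedback
    at x_t only affects the calibration level near x_t."""
    def __init__(self, alpha, eta=0.1, h=1.0):
        self.alpha, self.eta, self.h = alpha, eta, h
        self.points, self.coefs = [], []

    def threshold(self, x):
        return sum(c * gauss_kernel(p, x, self.h)
                   for p, c in zip(self.points, self.coefs))

    def update(self, x, err):
        # err = 1 if the set failed to cover the label at x, else 0.
        # Raising the local threshold after an error enlarges future
        # sets near x, steering local miscoverage toward alpha.
        self.points.append(np.asarray(x, dtype=float))
        self.coefs.append(self.eta * (err - self.alpha))
```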

Experiments

Comparison between the coverage map obtained using adaptive risk control (on the left) and localized adaptive risk control (on the right). Adaptive risk control is unable to deliver uniform coverage across the deployment areas, leading to large regions where the SNR level is unsatisfactory. In contrast, localized adaptive risk control is capable of guaranteeing a more uniform SNR level, improving the overall system coverage.

To demonstrate the fairness improvements of our algorithm, we conducted a series of experiments using standard machine learning benchmarks as well as wireless communication problems. Specifically, in the wireless domain, we considered the problem of beam selection based on contextual information. Here, a base station must select a subset of communication beam vectors to guarantee a level of signal-to-noise ratio (SNR) across a deployment area. Standard calibration methods like adaptive risk control (on the left) result in substantial SNR variation across the area, creating regions where communication is impossible. In contrast, our localized adaptive risk control scheme (on the right) enables the base station to calibrate the beam selection algorithm to match the local uncertainty, providing more uniform coverage throughout the deployment area.


References

[1] Isaac Gibbs and Emmanuel Candes. Adaptive conformal inference under distribution shift. Advances in Neural Information Processing Systems, 34 (2021).

[2] Anastasios Nikolas Angelopoulos, Rina Barber, Stephen Bates. Online conformal prediction with decaying step sizes. Proceedings of the 41st International Conference on Machine Learning. (2024).

[3] Jyrki Kivinen, Alex Smola and Robert C. Williamson. Online Learning with Kernels. Advances in Neural Information Processing Systems, 14 (2001).

Bayesian Optimization with Formal Safety Guarantees via Online Conformal Prediction

Motivation

In the general formulation of black-box optimization problems, a designer sequentially attempts candidate solutions, receiving noisy feedback on the value of each attempt from the system. As illustrated in Fig. 1, we consider scenarios in which feedback is also provided on the safety of the attempted solution, and the optimizer is constrained to limit the number of unsafe solutions tried throughout the optimization process [1] [2]. Focusing on methods based on Bayesian optimization (BO), prior works provide a safety guarantee that any unsafe solution is excluded with a controllable probability with respect to the feedback noise. This theoretical guarantee is, however, only valid if the optimizer has access to information about the constraint function, e.g., a bound on its reproducing kernel Hilbert space (RKHS) norm. In practice, specifying such information may be difficult, since the constraint function is a priori unknown.

Fig. 1. Illustration of black-box optimization with safety constraints. We provide a formal safety guarantee on keeping the fraction of unsafe solutions attempted during the optimization process below a tolerated threshold.


Safe-BO via Online Conformal Prediction

In our recent work, to appear in the IEEE Journal of Selected Topics in Signal Processing, we study for the first time the use of online conformal prediction (CP) to provide assumption-free guarantees on the safety level of the attempted candidate solutions, while allowing for any non-zero target safety violation level. As shown in Fig. 2, we introduce Safe-BOCP, which models the objective and constraint functions using independent Gaussian processes (GPs) as surrogate models, and which adaptively calibrates the credible intervals used to construct safe sets based on the observation history via online CP [3] [4]. The key mechanism is to use safety feedback, in the form of a well-designed safety error signal on the reliability of past decisions, to adjust the post-processing of the probabilistic surrogate model’s outputs. In contrast to previous safe BO methods, which assume RKHS properties of the constraint function to ensure a strict safety guarantee, Safe-BOCP adopts a “caution-increasing” back-off strategy that compensates for the uncertainty on the boundaries of the safe regions without any such assumptions.
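A schematic of the key recursion is shown below; the variable names and the simple additive back-off are illustrative choices, not the paper’s exact formulation.

```python
import numpy as np

def update_backoff(backoff, unsafe, alpha, eta=0.05):
    """Online-CP-style update (cf. [3], [4]): after each attempted
    solution, increase the back-off if the attempt was unsafe and
    decrease it otherwise, steering the long-run fraction of unsafe
    attempts toward the target violation level alpha."""
    return backoff + eta * (float(unsafe) - alpha)

def pessimistic_safe_set(mu_c, sigma_c, threshold, backoff, kappa=2.0):
    """Indices of candidates whose inflated upper credible bound on the
    constraint (GP posterior mean mu_c, std sigma_c, as arrays over a
    candidate grid) stays below the safety threshold; the back-off
    widens the margin when past feedback indicates the surrogate model
    was overconfident."""
    return np.where(mu_c + kappa * sigma_c + backoff <= threshold)[0]
```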

Fig. 2. Block diagram of the main steps: safe set creation, which produces the safe set, and acquisition, which selects the next iterate.


Experiments

We compare Safe-BOCP with the state-of-the-art SAFEOPT on a safe movie recommendation problem and a plug flow reactor (PFR) optimization problem. Fig. 3 plots the histograms of the ratings across all movies selected during the optimization procedure for varying target violation rates, showing that SAFEOPT does not meet the safety requirement (red dashed line), while D-SAFE-BOCP correctly controls the fraction of unsafe movies. As shown in Fig. 4, P-SAFE-BOCP meets the target reliability level irrespective of the observation noise power, while SAFEOPT only achieves it when the observation noise power is sufficiently large.

Fig. 3. Histograms of the ratings of recommended movies by SAFEOPT, as well by D-SAFE-BOCP under different target violation rates.

Fig. 4. Probability of excessive violation rate (top) and optimality ratio (bottom) as a function of constraint observation noise power.


References

[1] Y. Sui, A. Gotovos, J. Burdick, and A. Krause, “Safe exploration for optimization with Gaussian processes,” in Proceedings of International Conference on Machine Learning, Lille, France, 2015.
[2] F. Berkenkamp, A. Krause, and A. P. Schoellig, “Bayesian optimization with safety constraints: Safe and automatic parameter tuning in robotics,” Machine Learning, pp. 1–35, 2021.
[3] I. Gibbs and E. Candes, “Adaptive conformal inference under distribution shift,” in Proceedings of Advances in Neural Information Processing Systems, Virtual, 2021.
[4] S. Feldman, L. Ringel, S. Bates, and Y. Romano, “Achieving risk control in online learning settings,” Transactions on Machine Learning Research, 2023.

Cross-Validation Conformal Risk Control

Motivation

Conformal risk control (CRC) [1] [2] is a recently proposed technique that is applied post-hoc to a conventional point predictor to provide calibration guarantees. Generalizing conformal prediction (CP) [3], CRC ensures calibration for a set predictor that is extracted from the point predictor so as to control a risk function, such as the probability of miscoverage or the false negative rate. The original CRC requires the available data set to be split between training and validation data sets, which can be problematic when data availability is limited, resulting in inefficient set predictors. In [4], a novel CRC method is introduced that is based on cross-validation, rather than on validation as in the original CRC. The proposed cross-validation CRC (CV-CRC) allows for the control of a broader range of risk functions, and it is proved to offer theoretical guarantees on the average risk of the set predictor, while reducing the average set size with respect to CRC when the available data are limited.

Cross-Validation Conformal Risk Control

The objective of CRC is to design a set predictor with a mean risk no larger than a predefined level α, i.e.,

E[ ℓ(y, Γ(x | D)) ] ≤ α,     (1)

where the expectation is taken over the test input-label pair (x, y) and over the set D of N data pairs, and where the risk ℓ is defined between the true label y and a predictive set Γ of labels.

VB-CRC generalizes VB-CP [2] in the sense that it allows the risk to take an arbitrary form under technical conditions, such as boundedness and monotonicity in the set. VB-CRC reduces to VB-CP when specialized to the miscoverage risk ℓ(y, Γ) = 1(y ∉ Γ).

In this work, we introduce CV-CRC, a cross-validation-based version of VB-CRC. In a manner similar to how CV-CP [5] generalizes VB-CP, CV-CRC generalizes VB-CRC. See Fig. 1 for an illustration.

Fig. 1. (Top) Validation-based CRC (VB-CRC); (bottom) the proposed method, CV-CRC.

In the top panel of Fig. 2, VB-CRC is illustrated: the available data are split into training data and validation data. The former is used to train a model, while the latter is used to post-process the model’s outputs and to set a threshold λ. Upon observing a test input x, a predictive set Γ of labels y is formed. In the bottom panel, CV-CRC is illustrated as a generalization: the available data are split into K ≤ N folds, and K leave-fold-out models are trained. K predictive sets are then formed and merged via a threshold that is set using the trained models and the held-out folds.

Fig. 2. (Top) Validation-based CRC (VB-CRC); (bottom) the proposed method, CV-CRC.
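A simplified sketch of the CV-CRC threshold selection is shown below. It assumes losses bounded by B and non-increasing in λ, and it uses a generic B/(N+1) inflation of the cross-validated empirical risk in place of the exact correction term derived in [4].

```python
import numpy as np

def cv_crc_threshold(fold_losses, lambdas, alpha, B=1.0):
    """fold_losses[k] is an (n_k x L) array: the loss of the
    leave-fold-k-out model on its held-out points, for each of the L
    candidate thresholds in `lambdas` (sorted increasingly; losses
    assumed non-increasing in lambda and bounded by B). Returns the
    smallest lambda whose inflated cross-validated risk is <= alpha."""
    losses = np.concatenate(fold_losses, axis=0)   # N x L
    N = losses.shape[0]
    inflated = (N * losses.mean(axis=0) + B) / (N + 1)
    ok = inflated <= alpha
    return lambdas[int(np.argmax(ok))] if ok.any() else None
```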

Experiments

To illustrate the main theorem, namely that the risk guarantee (1) is met while the average set sizes are reduced, two experiments were conducted. The first is a vector regression problem based on maximum-likelihood learning, shown in Fig. 3.

Fig. 3. VB-CRC and CV-CRC for the vector regression problem.

The second problem is temporal point process prediction, where a point-process set predictor aims to produce sets that contain the future events of a temporal process with a false negative rate no larger than a predefined level α. As can be seen, in both problems CV-CRC is more data-efficient in the small-data regime, while satisfying the risk condition (1).


Fig. 4. VB-CRC and CV-CRC for the temporal point process prediction problem.

Full details can be found in the ISIT preprint [4].

References

[1] A. N. Angelopoulos, S. Bates, A. Fisch, L. Lei, and T. Schuster, “Conformal Risk Control,” in The Twelfth International Conference on Learning Representations, 2024.

[2] S. Feldman, L. Ringel, S. Bates, and Y. Romano, “Achieving Risk Control in Online Learning Settings,” Transactions on Machine Learning Research, 2023.

[3] V. Vovk, A. Gammerman, and G. Shafer, Algorithmic Learning in a Random World. Springer, New York, 2005.

[4] K. M. Cohen, S. Park, O. Simeone, and S. Shamai Shitz, “Cross-Validation Conformal Risk Control,” accepted to the IEEE International Symposium on Information Theory (ISIT 2024), July 2024.

[5] R. F. Barber, E. J. Candes, A. Ramdas, and R. J. Tibshirani, “Predictive Inference with the Jackknife+,” The Annals of Statistics, vol. 49, no. 1, pp. 486–507, 2021.

Generalization and Informativeness of Conformal Prediction

Motivation

When using a machine learning model to make important decisions, as in healthcare, finance, or engineering, we not only need accurate predictions but also want to know how sure the model is about its answers [1-3]. Conformal prediction (CP) offers a practical solution for generating certified “error bars”, i.e., certified ranges of uncertainty, by post-processing the outputs of a fixed, pre-trained base predictor. This is crucial for safety and reliability. At the upcoming ISIT 2024 conference, we will present our research work, which relates the generalization properties of the base predictor to the expected size, also known as the informativeness, of the set predictions produced by CP. Understanding the informativeness of CP is particularly relevant, as it can usually only be assessed at test time.

Conformal prediction

Figure 1: Conformal prediction (CP) set predictors (gray areas) obtained by calibrating a base predictor with a higher generalization error on the left and a lower generalization error on the right. Thanks to CP, both set predictors satisfy a user-defined coverage guarantee, but the inefficiency, i.e., the average prediction set size, is larger when the generalization error of the base predictor is larger.

The most practical form of CP, known as inductive CP, divides the available data into a training set and a calibration set [4]. We use the training data to train a base model, and the calibration data to determine the prediction sets around the decisions made by the base model. As shown in Figure 1, a more accurate base predictor, which generalizes better outside the training set, tends to produce more informative sets when CP is applied.
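For readers unfamiliar with the mechanics, here is a minimal split-CP routine for classification; the softmax-based nonconformity score is one common choice among many.

```python
import numpy as np

def split_cp_sets(probs_cal, y_cal, probs_test, alpha):
    """Inductive (split) CP: calibrate a score threshold on held-out
    data so that prediction sets cover the true label with probability
    at least 1 - alpha (marginally, for exchangeable data)."""
    n = len(y_cal)
    # Nonconformity score: one minus the model probability of the label.
    scores = 1.0 - probs_cal[np.arange(n), y_cal]
    level = min(1.0, np.ceil((n + 1) * (1.0 - alpha)) / n)
    qhat = np.quantile(scores, level, method="higher")
    return [np.flatnonzero(1.0 - p <= qhat) for p in probs_test]
```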

Results

Figure 2: Bound on the average set size for different values of the training and calibration data set sizes as a function of the target reliability level. Increasing the number of calibration data points causes the bound to converge exponentially fast to a function (black line) that is increasing in the target reliability level and decreasing in the amount of training data.

Our work’s main contribution is a high-probability bound on the expected size of the predicted sets. The bound relates the informativeness of CP to the generalization properties of the base model and to the amount of available training and calibration data. As illustrated in Figure 2, our bound predicts that, as the amount of calibration data increases, CP’s efficiency converges rapidly to a quantity determined by the coverage level, the size of the training set, and the predictor’s generalization performance. However, for a finite amount of calibration data, the bound is also influenced by the discrepancy between the target reliability and the empirical reliability measured over the training data set. Overall, the bound justifies a common practice: allocating more data to training the base model than to calibrating it.

Figure 3: Normalized empirical CP set size for a multi-class classification problem on the MNIST data set as a function of the reliability level and for different sizes of the calibration and training data sets.

Since what really proves the worth of a theory is how well it holds up in real-world testing, we also compared our theoretical findings with numerical evaluations, considering both classification and regression tasks. We ran CP with various splits of calibration and training data and measured the average efficiency. As shown in Figure 3, the empirical results from our experiments match what our theory predicts in Figure 2.

References

[1] A. L. Beam and I. S. Kohane, “Big data and machine learning in health care,” JAMA, vol. 319, no. 13, pp. 1317–1318, 2018.

[2] J. W. Goodell, S. Kumar, W. M. Lim, and D. Pattnaik, “Artificial intelligence and machine learning in finance: Identifying foundations, themes, and research clusters from bibliometric analysis,” Journal of Behavioral and Experimental Finance, vol. 32, p. 100577, 2021.

[3] L. Hewing, K. P. Wabersich, M. Menner, and M. N. Zeilinger, “Learning-based model predictive control: Toward safe learning in control,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 3, pp. 269–296, 2020.

[4] V. Vovk, A. Gammerman, and G. Shafer, Algorithmic learning in a random world, vol. 29. Springer, 2005.

Empowering Wireless Digital Twins with Ray Tracing Simulations

At the crossroads between simulation and machine learning, digital twin systems are envisioned to bridge the theoretical guarantees of model-based approaches with the flexibility of data-driven methods. However, one major concern is whether insights drawn from simulation still apply to the real world. Embodying both the opportunities and challenges of simulation intelligence, we believe that ray tracing will drive the understanding of signal propagation in the next generation of wireless digital twins, while relying on machine learning to cope with the diversity of real-world materials and with inaccuracies in the available geometry.

Wireless Reliable Federated Inference

Written by Meiyi Zhu during her visit to KCLIP.

Motivation

Consider a wireless federated inference scenario in which the devices and a server share a machine learning model pre-trained, e.g., via federated learning. The server wishes to make an inference on its own new input based on this pre-trained model. Note that the server has no access to the data, which is only present at the devices. This scenario is common in practice. For example, a personal healthcare system would first train the respective model via federated learning, without acquiring personal data from the end users; then, once a trained healthcare model is available, it would wish to provide useful answers to new users. We will assume that new users submit queries to the central server, although the general conclusions of this article hold even in the case in which the new user has its own access to the pre-trained model.

However, depending on the quality of the pre-trained model, e.g., due to a lack of data, the answers provided by the pre-trained model may be wrong. More importantly, such a model is likely to yield unreliable decisions; see, e.g., our previous post ‘Is Accuracy Sufficient for AI in 6G? (No, Calibration is Equally Important)’. As reliability plays an important role in fields such as healthcare monitoring and autonomous vehicle navigation, it is important to find ways to make federated inference reliable. But how can we do so when the central server has no access to the data at all?

Recent work has introduced federated conformal prediction (CP), which improves the reliability of the server’s decisions by utilizing held-out local data at each device, without, of course, requiring the central server to access such data. The goal of federated CP is to provide a guaranteed interval or set of potential outputs that contains the correct answer at a predefined reliability level [1, 2]. As a state-of-the-art solution, reference [1] proposed a quantile-of-quantiles (QQ) scheme, referred to as FedCP-QQ, whereby each device computes and communicates a pre-determined quantile of its local losses. However, existing work assumed noise-free communication between the server and the devices, whereby each device can communicate a single real number to the server.

Wireless Federated Conformal Prediction

In our recent work, to appear in Transactions on Signal Processing, we study for the first time federated CP in a wireless setting, as illustrated in Fig. 1. Specifically, we introduce a novel protocol, termed wireless federated conformal prediction (WFCP), which builds on type-based multiple access (TBMA) and on a novel quantile correction scheme.

Fig. 1. Illustration of the wireless reliable federated inference problem under study.

TBMA is a multiple access scheme that aims at recovering aggregated statistics, rather than individual messages [3]. Noting that federated CP also requires an aggregated statistic across the devices, namely a quantile, we propose to apply TBMA to WFCP. More precisely, as illustrated in Fig. 2, TBMA enables the estimation of the global histogram of the data available across all devices, without having to separately estimate the histograms of the individual devices. Specifically, each histogram bin is assigned an orthogonal codeword, and the server can estimate the global histogram thanks to the superposition property of wireless transmission. In this way, WFCP enables a direct estimate of the global quantile at the server without imposing bandwidth requirements that scale linearly with the number of active devices, as FedCP-QQ does. Rather, the communication requirements of WFCP are dictated only by the precision with which the signals are represented for transmission to the server, i.e., by the length of each codeword.

Fig. 2. Illustration of the TBMA enabled communication model.
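The core TBMA estimate can be sketched in a few lines; fading, power control, and the quantile correction step are omitted here for simplicity, so this is an illustrative AWGN toy model rather than the WFCP protocol itself.

```python
import numpy as np

def tbma_estimate(device_bins, codebook, noise_std=0.1):
    """TBMA sketch: each device transmits the codeword indexed by its
    local histogram bin; the channel superimposes all transmissions,
    and the server matched-filters the noisy sum against the codebook
    to estimate the global histogram in one shot. Bandwidth is set by
    the codeword length, not by the number of devices."""
    L, M = codebook.shape                      # codeword length x bins
    rx = sum(codebook[:, b] for b in device_bins)
    rx = rx + noise_std * np.random.randn(L)   # AWGN channel
    return codebook.T @ rx                     # histogram estimate

# Toy usage: 30 devices, 8 quantile bins, orthonormal codewords.
codebook = np.eye(8)
bins = np.random.randint(0, 8, size=30)
hist_hat = tbma_estimate(bins, codebook)
```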

The other key technical challenge tackled in our work is the derivation of a novel quantile correction approach that ensures the reliability of the set predictor despite the presence of channel noise.

Experiments

We evaluate the proposed WFCP on the CIFAR-10 data set over Rayleigh fading channels. We show here one of the results, which illustrates the performance gains of WFCP in the presence of limited communication resources. In Fig. 3, we evaluate the performance of WFCP and of our implementation of the existing FedCP-QQ (DQQ) over wireless channels, using finite-blocklength information theory, as a function of the SNR. As the SNR increases, both WFCP and DQQ maintain the target reliability level, while offering a decreasing prediction set size. Across all SNRs, WFCP generates a more informative prediction set than DQQ, approaching the performance of centralized CP. Please refer to our paper for more details.


Fig. 3. Empirical coverage and normalized empirical inefficiency of centralized CP, WFCP, and digital implementation of existing FedCP-QQ [1].

References

[1] P. Humbert, B. Le Bars, A. Bellet, and S. Arlot, “One-shot federated conformal prediction,” ICML 2023

[2] C. Lu and J. Kalpathy-Cramer, “Distribution-free federated learning with conformal predictions,” arXiv:2110.07661, 2021

[3] G. Mergen and L. Tong, “Type based estimation over multiaccess channels,” IEEE Transactions on Signal Processing, 2006.

Safe Model Predictive Control via Reliable Time-Series Forecasting

Motivation

The control of dynamical systems is the backbone of modern technologies, ranging from industrial processes to autonomous vehicles. In many of these scenarios, systems must be controlled while satisfying a set of safety and reliability constraints with respect to the unknown evolution of a target process. For example, as illustrated in Figure 1, autonomous vehicles or unmanned aerial vehicles (UAVs) must plan their trajectory while maintaining a safe distance from other vehicles or obstacles. To this end, predictions about the future evolution of the system must be used. In this context, a primary challenge is to ensure safety and reliability in the face of predictions that are often uncertain.

Figure 1: UAV tracking problem, an example of model predictive control in which the UAV must plan its path based on the unknown evolution of the object to be tracked.

Probabilistic Time Series-Conformal Risk Prediction

To support the deployment of reliable control mechanisms for dynamical systems, in our recent work [3] we have proposed probabilistic time series-conformal risk prediction (PTS-CRC). PTS-CRC is a novel post-hoc calibration procedure that operates on the predictions produced by any pre-designed probabilistic forecaster to yield reliable time series prediction sets. As illustrated in Figure 2, PTS-CRC generates predictive sets based on an ensemble of multiple prototype trajectories sampled from the probabilistic model, supporting the efficient representation of forking uncertainties. This contrasts with previous solutions that apply conformal prediction [1] to deterministic predictors (TS-CP) [2], which are limited to producing compact, single-interval prediction sets and thus cannot capture multimodal uncertainty. Furthermore, the sets produced by PTS-CRC can be calibrated to satisfy a wide array of reliability definitions, beyond the standard notion of coverage.

Figure 2: Construction of a prototype-based set predictor based on 3 prototypical sequences.
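In pseudocode terms, the construction can be sketched as follows; the interval shape around each prototype and the names are illustrative, and the radius λ would be calibrated on held-out sequences via conformal risk control.

```python
import numpy as np

def pts_crc_set(sampler, x, K, lam):
    """Draw K prototype trajectories from the probabilistic forecaster
    and return them together with a radius lam: the prediction set at
    each step is the union of intervals of half-width lam around the
    K prototypes, so it can be disjoint (multimodal). `sampler` is a
    hypothetical callable returning one trajectory per call."""
    prototypes = np.stack([sampler(x) for _ in range(K)])  # K x horizon
    return prototypes, lam

def covers(prototypes, lam, y_future):
    """Coverage event used when calibrating lam: the realized trajectory
    stays within lam of some prototype at every future step."""
    dist = np.min(np.abs(prototypes - np.asarray(y_future)), axis=0)
    return bool(np.all(dist <= lam))
```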

PTS-CRC Based Model Predictive Control

Based on the reliability properties of PTS-CRC predictions, we devise a novel model predictive control (MPC) framework that addresses open-loop and closed-loop control problems under general average constraints on the quality or safety of the control policy. The key idea is to derive the control policy by replacing the constraints that depend on the unknown dynamics of the target process with constraints that depend on the predictive sets output by PTS-CRC. The reliability requirements on PTS-CRC predictions then translate into reliability requirements for the original control problem.
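The substitution is easy to state for the power-control setting of the experiments below; the sketch enforces an interference constraint against the worst case within the predictive set, using an illustrative linear interference model (names and model are assumptions, not the paper’s exact formulation).

```python
import numpy as np

def robust_power_allocation(prototypes, lam, I_max, P_max):
    """prototypes: K x horizon samples of the unknown (positive)
    cross-channel gain; the predictive set at each step is the union of
    intervals [proto - lam, proto + lam]. Enforcing the interference
    constraint g * P <= I_max for every gain g in the set reduces to
    checking the largest gain, which yields the largest transmit power
    compatible with the constraint at each step."""
    worst_gain = prototypes.max(axis=0) + lam
    return np.minimum(P_max, I_max / worst_gain)
```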

Experiments

We apply PTS-CRC-based MPC to wireless networking problems, focusing on a scenario in which a base station must modulate its future power allocation based on the unknown evolution of the channel conditions. In particular, we address the challenge of controlling the transmit power so as to maximize the communication rate of an unlicensed user, while adhering to a safety requirement expressed as a maximum level of interference experienced by a licensed user. By employing PTS-CRC, we can replace the unknown system evolution with efficient multimodal predictive sets that capture the multimodal channel evolution more effectively than TS-CP (Figure 3). As exemplified in Figure 4, PTS-CRC-based power control leads to power allocations that achieve a higher communication rate than TS-CP.

Figure 3: Comparison between the prediction sets of TS-CP and PTS-CRC for the problem of channel gain evolution forecasting.

Figure 4: Comparison between the power control solution obtained using PTS-CRC and TS-CP based MPC.

References

[1] Vovk, Vladimir, Alexander Gammerman, and Glenn Shafer. “Algorithmic learning in a random world,” Vol. 29. New York: Springer, 2005.

[2] Stankeviciute, Kamile, Ahmed M Alaa, and Mihaela van der Schaar. “Conformal time-series forecasting.” Advances in neural information processing systems 34, 2021.

[3] Zecchin, Matteo, Sangwoo Park, and Osvaldo Simeone. “Forking Uncertainties: Reliable Prediction and Model Predictive Control with Sequence Models via Conformal Risk Control.” arXiv preprint arXiv:2310.10299, 2023.
