This seminar series is held online roughly every two weeks. The seminars cover the broad areas of communication and information sciences.
If you are interested in participating virtually in the seminar, as a speaker or an attendee, send an email to samir.perlaza@inria.fr and/or padakand@eurecom.fr.
Talk Schedule
Date | Title | Speaker | Video |
Nov 24, 2023 | Towards Practical Massive Random Access | Maxime Guillaud | Not Recorded |
Dec 14, 2023 | Loosing control and interfering | Laurent Clavier | Public |
Jan 12, 2024 | Topological Algebra for Wireless Systems | Laurent Decreusefond | Public |
Jan 26, 2024 | Distributed Computation over Networks | Derya Malak | Public |
Feb 08, 2024 | Copulas and Compression | Malcolm Egan | Public |
Mar 01, 2024 | On the computation of the rate-distortion-perception function | Fotios Stavrou | Public |
Mar 29, 2024 | Communication Models for Intelligent Metasurfaces Using Multiport Network Theory | Marco Di Renzo | Public |
Apr 4, 2024 | Information-Theoretic Proofs based on Change of Measure Arguments | Michèle Wigger | Public |
May 17, 2024 | Spatial Network Calculus and Performance Guarantees in Wireless Networks | Ke Feng | Public |
June 28, 2024 | Distributed Binary Hypothesis Testing | Mireille Sarkiss | Public |
October 4, 2024 | Stochastic Geometry and Queuing Dynamics in Large Wireless Networks | François Baccelli | Public |
October 11, 2024 | Decision making via End-to-End Lossy Distributed Wireless Networks: A Distributed Hypothesis Testing based Formulation | Matsumoto Tadashi | Public |
October 25, 2024 | Generalization Error of Machine Learning Algorithms | Samir Perlaza | Public |
Nov 15, 2024 | Pointwise maximal leakage | Giulia Cervia | |
Dec 06, 2024 | On the Linearization of optimal rates in zero error source and channel coding problems | Maël Le Treust | |
Jan 17, 2025 | | Petros Elia | |
Jan 24, 2025 | How can alpha-information theory formally prove that your sensitive circuits are protected against side-channel attacks? | Olivier Rioul | |
Talk Abstracts
- Jan 24, 2025 at 1400 Hrs : Olivier Rioul
- Title : How can alpha-information theory formally prove that your sensitive circuits are protected against side-channel attacks?
- Abstract : Cryptographic algorithms are ubiquitous in our digital society. Principles (such as Kerckhoffs’) and mathematical techniques for securing data against cryptanalysis are well established, even against future quantum computers: the best attacks of this type are essentially brute force, which would take many times the age of the universe. However, one real threat is that algorithm implementations are vulnerable to side-channel attacks, which exploit sensitive information leaks to recover the secret in a “divide and conquer” approach. Some attacks only require a few queries (leakage measurements). Thus, the question is not whether an implementation is secure, since breaking it is only a matter of time; the question is how secure it can be, e.g., with a protected implementation that uses data masking. For that, we need a formal evaluation. In this talk, I present such a formal evaluation using alpha-information theory, based on the Rényi alpha-divergence and alpha-entropy, and on Sibson’s alpha-information. The parameter alpha can be positive or negative, and the limiting case alpha = minus infinity is related to the important notion of the Doeblin coefficient, which can be used to reduce the noisy leakage model to a random probing model. Fano and data processing inequalities, as well as Mrs. Gerber’s lemma in the case of additive masking, are used to establish lower bounds on the number of queries that any attacker has to make to achieve a given level of success. In this way, it is possible to be proactive, for example with ephemeral keys, to maintain the security of an implementation.
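As a reading aid for this abstract, the two central quantities can be recalled as follows (standard definitions for finite alphabets and alpha in (0,1) or (1,∞); the extensions to negative alpha mentioned above are part of the talk and are not reproduced here):

```latex
% Renyi alpha-divergence and Sibson's alpha-information (finite alphabets)
D_\alpha(P \| Q) = \frac{1}{\alpha - 1}\,\log \sum_{x} P(x)^{\alpha} Q(x)^{1-\alpha},
\qquad
I_\alpha(X;Y) = \min_{Q_Y} D_\alpha\!\left(P_{XY} \,\middle\|\, P_X \times Q_Y\right)
             = \frac{\alpha}{\alpha - 1}\,\log \sum_{y} \Big( \sum_{x} P_X(x)\, P_{Y|X}(y\mid x)^{\alpha} \Big)^{1/\alpha}.
```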
- Jan 17, 2025 at 1400 Hrs : Petros Elia
- December 06, 2024 at 1400 Hrs : Maël Le Treust
- Title : On the Linearization of optimal rates in zero error source and channel coding problems
- Abstract : Zero-error coding encompasses a variety of source and channel problems where the probability of error must be exactly zero. This condition is stricter than that of the vanishing error regime, where the error probability goes to zero as the code blocklength goes to infinity. In general, zero-error coding is an open combinatorial question. We investigate two unsolved zero-error problems: the source coding problem with side information and the channel coding problem. We focus our attention on families of independent problems for which the distribution decomposes into a product of distributions, corresponding to solved zero-error problems. A crucial step is the linearization property of the optimal rate, which does not always hold in the zero-error regime, unlike in the vanishing error regime. We derive a condition under which the linearization properties of the complementary graph entropy H for the AND product of graphs and for the disjoint union of graphs are equivalent. Then we establish the connection with a recent result obtained by Wigderson and Zuiddam and by Schrijver, for the zero-error capacity C0. As a consequence, we provide new single-letter characterizations of H and C0, for example when the graph is a product of perfect graphs, which is not perfect in general, and for the class of graphs obtained by the product of a perfect graph G with the pentagon graph C5. By building on Haemers' result for C0, we also show that the linearization of H does not hold for the product of the Schläfli graph with its complementary graph. https://arxiv.org/abs/2407.02281
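For orientation, the zero-error capacity mentioned above can be written in graph terms as follows (standard conventions; $\alpha(\cdot)$ denotes the independence number and $\boxtimes$ the strong graph product). The linearization question asks, roughly, whether the optimal rate of a product of independent problems equals the sum of the individual optimal rates, e.g. whether $C_0(G \boxtimes H) = C_0(G) + C_0(H)$:

```latex
% Shannon zero-error capacity of a (confusability) graph G, in logarithmic form
C_0(G) = \lim_{n \to \infty} \frac{1}{n} \log \alpha\!\left(G^{\boxtimes n}\right),
\qquad \text{e.g. } C_0(C_5) = \tfrac{1}{2}\log 5 \ \text{(Lovász)}.
```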
- November 15, 2024 at 1400 Hrs : Giulia Cervia
- Title : Pointwise maximal leakage
- Abstract : The increase in data traffic poses ambitious challenges, while concerns over the confidentiality of information are mounting. Amongst the many significant challenges, it is essential to maintain a balance between confidentiality and integrity. However, while there is broad global agreement on the necessity of data protection, no universally accepted definition of privacy measures exists. Recently, information theorists have begun exploring the privacy aspects of machine learning algorithms, offering a fresh perspective on privacy measures. In this talk, we present an information-theoretic privacy measure called pointwise maximal leakage (PML), which extends the concept of maximal leakage. Pointwise maximal leakage serves as a robust and practically meaningful measure of privacy, capturing the maximum information leaked about a secret X to adversaries attempting to infer arbitrary (potentially randomized) functions of X. Finally, we explore several characteristics of pointwise maximal leakage, such as its behavior with multiple outcomes and its sensitivity to pre- and post-processing.
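As an illustrative sketch (ours, not from the talk): for finite alphabets and a secret X with full support, the pointwise maximal leakage of an outcome y reduces to a Rényi divergence of order infinity between the posterior and the prior on X, and maximal leakage is recovered by averaging its exponential over Y. The joint distribution below is a hypothetical toy example.

```python
import numpy as np

# Hypothetical joint pmf P_{XY}: rows index the secret X, columns the observation Y.
P_XY = np.array([[0.30, 0.10, 0.05],
                 [0.05, 0.25, 0.25]])

P_X = P_XY.sum(axis=1)            # marginal of the secret X
P_Y = P_XY.sum(axis=0)            # marginal of the observation Y
P_X_given_Y = P_XY / P_Y          # each column y holds the posterior P_{X|Y=y}

# Pointwise maximal leakage of each outcome y (in bits):
#   PML(X -> y) = log max_x P_{X|Y}(x|y) / P_X(x)
pml = np.log2((P_X_given_Y / P_X[:, None]).max(axis=0))

# Maximal leakage is the log of the Y-average of the exponentiated PML,
# which equals log sum_y max_x P_{Y|X}(y|x).
max_leakage = np.log2(np.exp2(pml) @ P_Y)

print("PML per outcome y (bits):", pml)
print("Maximal leakage   (bits):", max_leakage)
```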
- October 25, 2024 at 1400 Hrs : Samir Perlaza (INRIA) — See Video
- Title : Generalization Error of Machine Learning Algorithms
- Abstract : In this talk, the method of gaps, a technique for deriving closed-form expressions for the generalization error of machine learning algorithms in terms of information measures, is introduced. The method relies on two central observations: (a) the generalization error is an average of the variation of the expected empirical risk with respect to changes in the probability measure (used for the expectation); and (b) these variations, also referred to as gaps, exhibit closed-form expressions in terms of information measures. The expectation of the empirical risk can be taken either with respect to a measure on the models (with a fixed dataset) or with respect to a measure on the datasets (with a fixed model), which results in two variants of the method of gaps. The first variant, which focuses on the gaps of the expected empirical risk with respect to a measure on the models, appears to be the most general, as no assumptions are made on the distribution of the datasets. The second variant develops under the assumption that datasets are made of independent and identically distributed data points. All existing exact expressions for the generalization error of machine learning algorithms can be obtained with the proposed method. This method also yields numerous new exact expressions, which improve the understanding of the generalization error, establish connections with other areas of statistics, e.g., hypothesis testing, and might potentially guide algorithm design.
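For concreteness, under one standard convention (notation ours, not necessarily the one used in the talk), the generalization error whose closed-form expressions are discussed above is the following gap between population and empirical risks:

```latex
% Expected generalization error of an algorithm P_{W|S} trained on S = (Z_1, ..., Z_n)
\overline{\mathrm{gen}} \;=\; \mathbb{E}\!\left[ L_{P_Z}(W) - L_{S}(W) \right],
\qquad
L_{S}(w) = \frac{1}{n}\sum_{i=1}^{n} \ell(w, Z_i),
\qquad
L_{P_Z}(w) = \mathbb{E}_{Z \sim P_Z}\!\left[\ell(w, Z)\right],
```
with the outer expectation taken over the joint distribution of the dataset $S$ and the model $W$.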
- October 11, 2024 at 1400 Hrs : Matsumoto Tadashi — See Video
- Title : Decision making via End-to-End Lossy Distributed Wireless Networks: A Distributed Hypothesis Testing based Formulation
- Abstract : The goal of this seminar is to provide the participants with the knowledge of decision-making theory by Distributed Hypothesis Testing (DHT) with lossy correlated source observations via end-to-end distributed lossy communications. The seminar starts with a brief background introduction to the instructor’s research topics and shows that they are all related to a common mathematical and information-theoretic framework. It then introduces analytical methods based on this framework, together with numerical results evaluating their performance in terms of the corresponding rate-distortion functions and outage probabilities in fading channels. The very simple toy scenario assumed in this seminar is one where there are two sources, X and Y. X and Y are correlated, and the correlation is expressed by random bit flipping Bern(p0) (if p0 = 0.0, X and Y are fully correlated). X and Y are lossy-compressed at rates Rx and Ry, respectively. With this simple model, we formulate the working principles of (1) Wireless Lossy Communication networks (WLC), (2) DHT, (3) training for Machine Learning systems (ML), and (4) Semantic Communications (SC). It is shown that the mathematical principle connecting the WLC, DHT, ML, and SC technologies is the Wyner-Ziv theorem. The seminar finally applies Asymmetric DHT (ADHT) to decision making via a wireless lossy network, which is the core part of this seminar. Some mathematical and numerical results obtained quite recently on hypothesis testing against independence are presented. The course slides will be available at: https://dspace.jaist.ac.jp/dspace/bitstream/10119/19040/1/Tutorial.pdf
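For readers new to the toy scenario above, the bit-flipping correlation model, and the Wyner-Ziv rate-distortion function that goes with it under Hamming distortion, can be recalled as follows (our recollection of the classical doubly symmetric binary source result, given only for orientation):

```latex
% Correlation model: Y = X \oplus N, with N ~ Bern(p_0) independent of X ~ Bern(1/2).
R_{\mathrm{WZ}}(D) \;=\; \mathrm{l.c.e.}\big\{\, g(D),\ (p_0, 0) \,\big\},
\qquad
g(D) = h_b(p_0 \star D) - h_b(D),
\qquad
p_0 \star D = p_0(1-D) + D(1-p_0),
```
where $h_b$ is the binary entropy function and l.c.e. denotes the lower convex envelope over $0 \le D \le p_0$.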
- October 4, 2024 at 1400 Hrs : François Baccelli (ENS Paris) — See Video
- Title : Stochastic Geometry and Queuing Dynamics in Large Wireless Networks
- Abstract : Stochastic Geometry allows one to analyze contention for spectrum in large wireless networks. It is based on the computation of spatial averages over a snapshot of the network state. The aim of this lecture is to survey past and ongoing research on the interconnection of Stochastic Geometry and queuing-like dynamics in this wireless networking context. The simplest dynamics in this class is that of a device-to-device network where users arrive according to a Poisson rain process on the Euclidean plane and leave with a stochastic intensity proportional to their instantaneous Shannon rate. The first part of the survey will cover results obtained on this model, both in the compact and in the infinite state space cases. The second part will survey further results and ongoing research on more complex networks, in particular cellular networks and CSMA-CA-based networks.
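To fix notation (ours, as a reading aid), the device-to-device dynamics described above is a spatial birth-and-death process: users arrive as a space-time Poisson process of intensity $\lambda\,\mathrm{d}x\,\mathrm{d}t$, and a user located at $x_i$ with receiver $y_i$ departs with an instantaneous intensity proportional to its Shannon rate

```latex
R_i(t) \;=\; \log\!\Big( 1 + \frac{P\,\ell(|x_i - y_i|)}{N_0 + \sum_{j \neq i} P\,\ell(|x_j - y_i|)} \Big),
```
where $\ell(\cdot)$ is the path-loss function and the sum runs over the other users present at time $t$.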
- June 28, 2024 at 1400 Hrs : Mireille Sarkiss (Telecom SudParis) — See Video
- Title: Distributed Binary Hypothesis Testing
- Abstract: The distributed binary hypothesis testing (DHT) problem has many applications in the IoT and future networks, such as distributed decision and surveillance systems for security, health monitoring, intelligent car control, etc. In these systems, devices and sensors collect data to help the decision centers distinguish between the normal situation (null hypothesis) and the alert situation (alternative hypothesis). The hypotheses are assumed to determine the joint probability distribution underlying the data observed at the various nodes. The objective in such problems is to maximize the exponential decay of the miss-detection (type-II error) probabilities while keeping the false-alarm (type-I error) probabilities below given thresholds; the optimal exponent is referred to as the Stein exponent by Ahlswede and Csiszár. Yet, security is a critical concern in these systems, in the sense that eavesdroppers and intruders should not be able to learn the data or decisions, or even to detect the presence of communications. In this talk, we focus on a DHT problem with a single sensor, a single decision center, and an eavesdropper, all having their own source observations. We first consider testing against independence over a rate-limited noiseless channel. We characterize the largest possible type-II error exponent at the legitimate receiver under constraints on the legitimate receiver’s type-I error probability and on the equivocation (uncertainty) measured at the eavesdropper about the sensor’s observations. Then, we consider DHT over a discrete memoryless channel under the stronger covertness constraint: we impose in this case that an eavesdropper should not be able to determine whether communication is ongoing or not. The main result here is an upper bound on the largest possible Stein exponent, showing that it cannot exceed the largest exponent achievable under zero-rate communication over a noise-free link.
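As background, the classical non-secure benchmark behind the exponents above is the Ahlswede-Csiszár result for testing against independence with a rate-$R$ noiseless link: the optimal type-II error (Stein) exponent is

```latex
\theta(R) \;=\; \max_{P_{U|X}:\ I(U;X) \,\le\, R} I(U;Y),
```
where $U$ is an auxiliary description of the sensor observation $X$ and $Y$ is the observation at the decision center; the secrecy and covertness constraints considered in the talk can only further restrict this exponent.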
- May 17, 2024 at 1400 Hrs : Ke Feng (LINCS) — See Video
- Title: Spatial Network Calculus and Performance Guarantees in Wireless Networks
- Abstract: Network calculus is a methodology allowing one to provide performance guarantees in queuing networks subject to regulated traffic arrivals and service guarantees. It is a key design tool for latency-critical wireline communication networks where it allows one to guarantee bounds on the end-to-end latency of all transmitted packets. In wireless networks, service guarantees are more intricate as electromagnetic signals propagate in a heterogeneous medium and interfere with each other. In this talk, we present a novel approach toward performance guarantees for all links in arbitrarily large wireless networks. We introduce spatial regulation properties for stationary spatial point processes, which model transmitter and receiver locations, and develop the first steps of a calculus for this type of regulation. This can be seen as an extension to space of the (classical) network calculus developed with respect to time. Using this approach, we derive reliability, rate, and latency guarantees for all links in spatially regulated wireless networks. Such guarantees do not exist in networks without spatial regulations, e.g., Poisson networks.
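Schematically, and only as an analogy (the precise definitions are given in the talk), a classical arrival curve constrains the cumulative arrivals over any time window, while a spatial regulation constrains the number of transmitters of a point process $\Phi$ falling in any ball:

```latex
A(t) - A(s) \;\le\; \sigma + \rho\,(t - s) \quad \forall\, s \le t,
\qquad\qquad
\Phi\big(B(x, r)\big) \;\le\; \bar{\sigma} + \bar{\rho}\,\pi r^2 \quad \forall\, x,\ r \ge 0.
```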
- April 4, 2024 : Michèle Wigger (Telecom Paris) — See Video
- Title: Information-Theoretic Proofs based on Change of Measure Arguments
- Abstract : We will be presenting information-theoretic proofs based on change of measure arguments for basic source coding and hypothesis testing problems, and if time permits also for channel coding. For the basic source and channel coding setups, the proofs only use a change of measure argument on the strongly typical set and are established solely by analyzing the asymptotic behaviour of the new measure. This proof method is also extended to a source coding setup under the relaxed expected-rate constraint, in which case the minimum compression rate depends on the allowed probability of error $\epsilon$ and an $\epsilon$-dependent converse is required. In the second part of the talk we present converse proof methods for hypothesis testing setups, where in addition asymptotic Markov Chains need to be established.
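The basic change-of-measure step alluded to above can be sketched as follows (our shorthand): restrict the source distribution to the strongly typical set and renormalize,

```latex
\tilde{P}_{X^n}(x^n) \;=\; \frac{P_{X^n}(x^n)\,\mathbf{1}\{x^n \in \mathcal{T}^{(n)}_{\epsilon}\}}{P_{X^n}\big(\mathcal{T}^{(n)}_{\epsilon}\big)},
```
and then derive the converse by analyzing rates and entropies under $\tilde{P}$, using the fact that $P_{X^n}(\mathcal{T}^{(n)}_{\epsilon}) \to 1$ as $n \to \infty$.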
- Mar 29, 2024 : Marco Di Renzo (Paris-Saclay University – CNRS and CentraleSupelec, Paris, France) — See Video
- Title : Communication Models for Intelligent Metasurfaces Using Multiport Network Theory
- Abstract : Intelligent metasurfaces are an emerging technology that enhances the reliability of data transmission by appropriately shaping the propagation of electromagnetic waves in the wave domain, turning radio propagation environments into smart radio propagation environments. Despite the potential performance gains and applications that this technology may provide in future wireless networks, a major limiting factor preventing information, communication, and signal processing theorists from realizing its full potential and unveiling its ultimate performance limits lies in understanding the electromagnetic and physical properties and limitations that underpin it. In this talk, we present an electromagnetically consistent approach for modeling and optimizing metasurface-aided systems based on multiport network theory.
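As a rough pointer to the kind of model meant here (our heavily simplified paraphrase of the impedance-based formulation; the talk uses more complete expressions): collecting the tunable metasurface loads in a diagonal matrix $Z_{\mathrm{RIS}}$ and writing $Z_{AB}$ for the mutual-impedance blocks between the port groups (T transmitter, R receiver, S metasurface), the end-to-end channel takes a cascaded form such as

```latex
H_{\mathrm{e2e}} \;\approx\; Z_{RT} \;-\; Z_{RS}\,\big(Z_{SS} + Z_{\mathrm{RIS}}\big)^{-1} Z_{ST},
```
and the metasurface is optimized through the loads collected in $Z_{\mathrm{RIS}}$.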
- Mar 01, 2024 : Fotios Stavrou (EURECOM) — See Video
- Title: On the computation of the rate-distortion-perception function
- Abstract : In this talk, we study methods for computing a generalization of the rate-distortion function called the rate-distortion-perception function (RDPF). This generalization of the rate-distortion problem was introduced in the machine learning community in recent years to give a mathematical intuition for the observation that “low distortion” is not a synonym for “high perceptual quality” and that, from an optimization standpoint, there is a tradeoff between distortion and perception. The RDPF is also a relevant semantics-based metric in compression. For discrete sources, with a single-letter distortion and a perception constraint belonging to the family of f-divergences, we compute the RDPF by proposing an approximate alternating minimization approach, in the spirit of the Blahut-Arimoto algorithm, that converges to a globally optimal point. For scalar Gaussian sources with squared-error distortion and a perception constraint that can be either the KL divergence, the squared Wasserstein distance, the squared Hellinger distance, or the geometric Jensen-Shannon divergence, we derive closed-form expressions under the assumption of jointly Gaussian processes. Our closed-form solutions are also supported by their corresponding test-channel forward realizations, the first of their kind. For vector Gaussian sources, we propose a generic alternating minimization algorithm based on a block-coordinate descent method that can optimally compute the vector Gaussian RDPF. For the extreme case of the perfect-realism vector Gaussian RDPF, that is, when the Gaussian distribution of the source and that of the reconstruction are the same, we derive a closed-form solution that reveals a new adaptive reverse water-filling structure. We corroborate our findings with various simulation studies.
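For reference, the rate-distortion-perception function studied above is commonly defined as follows, where $d$ is the distortion measure and $d_p$ the divergence (or distance) acting as the perception constraint:

```latex
R(D, P) \;=\; \min_{P_{\hat{X}|X}:\ \ \mathbb{E}[d(X,\hat{X})] \le D,\ \ d_p(P_X, P_{\hat{X}}) \le P} I(X; \hat{X}).
```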
- February 08, 2024 : Malcolm Egan (INRIA, Lyon) — See Video
- Title : Copulas and Compression
- Abstract : A key property of Gaussian models is the exponential decay of the probability density function. In many applications, large values occur with much higher probabilities, leading to heavy-tailed probability distributions. In this talk, we consider the family of regularly varying distributions, which captures many popular heavy-tailed models such as alpha-stable and Student-t distributions. This family also includes shot noise models which can arise as interference models in wireless communications. By exploiting copula models of statistical dependence, we study (distributed) compression of such non-Gaussian data, highlighting the role of extremal dependence measures and their connection to information measures.
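Two standard definitions make the abstract self-contained (recalled here for convenience): Sklar's theorem, which separates the marginals from the dependence structure captured by the copula $C$, and regular variation of the tail with index $\alpha > 0$:

```latex
F(x_1,\dots,x_d) \;=\; C\big(F_1(x_1),\dots,F_d(x_d)\big),
\qquad\qquad
\lim_{t \to \infty} \frac{\Pr(X > t\,x)}{\Pr(X > t)} \;=\; x^{-\alpha} \quad \text{for all } x > 0.
```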
- January 26, 2024 : Derya Malak (EURECOM) — See Video
- Title : Distributed Computation over Networks
- Abstract : Large-scale distributed computing systems, such as MapReduce, Spark, or distributed deep networks, are critical for parallelizing the execution of computational tasks. Nevertheless, a struggle between computation and communication complexity lies at the heart of distributed computing. There has recently been a substantial effort to address this problem for a class of functions, such as distributed matrix multiplication, distributed gradient coding, linearly separable functions, etc. The optimal cost has been achieved under some constraints, based mainly on ideas of linear separability of the tasks and linear space intersections. Motivated by the same challenge, we propose a novel distributed computing framework where a master seeks to compute an arbitrary function of distributed datasets in an asymptotically lossless manner. Our approach exploits the notion of characteristic graphs, which have been widely utilized by Shannon, Körner, and Witsenhausen to derive the rate lower bounds for computation, and later by Alon-Orlitsky, Orlitsky-Roche, Doshi-Shah-Médard, and Feizi-Médard, to resolve some well-known distributed coding and communication problems, allowing for lowered communication complexity and even for a) correlated data, b) a broad class of functions, and c) well-known topologies. The novelty of our approach lies in accurately capturing the communication-computation cost tradeoff by melding the notions of characteristic graphs and distributed placement, to provide a natural generalization of distributed linear function computation, thus elevating distributed gradient coding and distributed linear transform to the realm of distributed computing of any function.
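As a pointer to the characteristic-graph machinery mentioned above (a standard statement, as we recall it): in the Orlitsky-Roche setting, the minimum rate at which $X$ must be described so that a receiver knowing $Y$ can compute $f(X,Y)$ losslessly is the conditional graph entropy of the characteristic graph $G_X$,

```latex
H_{G_X}(X \mid Y) \;=\; \min_{W:\ X \in W \in \Gamma(G_X),\ \ W - X - Y} I(W; X \mid Y),
```
where $\Gamma(G_X)$ is the set of independent sets of $G_X$ and $W - X - Y$ denotes a Markov chain.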
- January 12, 2024: Laurent Decreusefond (Telecom Paris) — See Video
- Title: Topological Algebra for Wireless Systems
- Abstract : We show how the mathematical theory called topological algebra can be used to assess and optimize the performance of cellular networks.
- December 14, 2023: Laurent Clavier (IMT Atlantique) — See Video
- Title: Loosing control and interfering
- Abstract : In dense networks, with objects that are expected to operate for several years without maintenance, access to resources can no longer be provided in a globally coordinated way. Nor is it possible or efficient to completely orthogonalize communications. Releasing control, however, will mean an increase in interference, at least some of which will have to be treated as noise. We are interested in the statistical properties of this noise. A Gaussian model is not suitable. Instead, we propose heavy-tailed distributions, which are particularly well-suited to representing the rare but highly penalizing cases of strong interference. The dependence between the different elements of an interference vector is then a key characteristic that needs to be modelled, copulas being a possible way forward.
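A minimal simulation sketch (entirely illustrative, not from the talk, with hypothetical parameter values) of why a Poisson field of interferers with power-law path loss produces heavy-tailed aggregate interference:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy experiment: aggregate interference power measured at the origin, created by a
# Poisson field of interferers in a disc of radius R, with singular path loss r**(-eta).
lam, R, eta, n_runs = 1.0, 20.0, 4.0, 5000

samples = np.empty(n_runs)
for k in range(n_runs):
    n = rng.poisson(lam * np.pi * R**2)   # number of interferers in the disc
    r = R * np.sqrt(rng.random(n))        # distances of points uniform in the disc
    samples[k] = np.sum(r ** (-eta))      # aggregate interference power

# The empirical tail decays polynomially (roughly like x**(-2/eta)), not exponentially
# as a Gaussian interference model would suggest.
for x in (10.0, 100.0, 1000.0, 10000.0):
    print(f"P(I > {x:g}) = {np.mean(samples > x):.4f}")
```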
- November 24, 2023: Maxime Guillaud (INRIA)
- Title: Towards Practical Massive Random Access
- Abstract : In multi-user wireless communications, random access departs from the classical setup of multiple access in that the number and identity of the active transmitters are unknown to the receiver (typically because the sporadic nature of the traffic does not allow for coordination between transmitters). This is a significant departure from the well-understood multiple access schemes (including non-orthogonal multiple access, NOMA). Random access arises, e.g., in massive internet-of-things, ultra-reliable low-latency communications, and non-terrestrial network applications. This talk will outline why, compared to multiple access, multi-user decoding in random access scenarios is markedly more difficult and requires revisiting some of the basic assumptions that underpin modern multi-user communication systems, such as the pre-existence of synchronization and timing-offset compensation, or the centralized assignment of pilot sequences. We will focus on massive random access, which explicitly considers the high-contention regime, i.e., the case where the number of simultaneously active transmitters can be very large, and discuss some of the practical waveforms and coding approaches that have been proposed to solve this problem.
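For orientation (our notation; the talk covers several formulations), the unsourced view of massive random access can be summarized as follows: a random subset $\mathcal{A}$ of users, of unknown size and identity, transmits simultaneously over a shared channel, all users employ a common codebook, and performance is measured by the per-user probability of error,

```latex
Y \;=\; \sum_{k \in \mathcal{A}} X_{k} \;+\; Z,
\qquad |\mathcal{A}| = K_a \ \text{unknown (or only statistically known) to the receiver},
```
which is why user identification, synchronization, and pilot assignment become part of the decoding problem rather than given side information.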