Schedule for: 18w5162 - Intersection of Information Theory and Signal Processing: New Signal Models, their Information Content and Acquisition Complexity
Beginning on Sunday, October 28 and ending on Friday, November 2, 2018
All times in Banff, Alberta time, MDT (UTC-6).
Sunday, October 28 | |
---|---|
16:00 - 17:30 | Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre) |
17:30 - 19:30 |
Dinner ↓ A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
20:00 - 22:00 | Informal gathering (Corbett Hall Lounge (CH 2110)) |
Monday, October 29 | |
---|---|
07:00 - 08:45 |
Breakfast ↓ Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
08:45 - 09:00 |
Introduction and Welcome by BIRS Staff ↓ A brief introduction to BIRS with important logistical information, technology instruction, and opportunity for participants to ask questions. (TCPL 201) |
09:00 - 10:00 |
Venkat Chandrasekaran: Learning Regularizers from Data ↓ Regularization techniques are widely employed in the solution of inverse problems in data analysis and scientific computing owing to their effectiveness in addressing difficulties due to ill-posedness. In their most common manifestation, these methods take the form of penalty functions added to the objective in optimization-based approaches for solving inverse problems. The purpose of the penalty function is to induce a desired structure in the solution, and these functions are specified based on prior domain-specific expertise. We consider the problem of learning suitable regularization functions from data in settings in which precise domain knowledge is not directly available; the objective is to identify a regularizer to promote the type of structure contained in the data. The regularizers obtained using our framework are specified as convex functions that can be computed efficiently via semidefinite programming. Our approach for learning such semidefinite regularizers combines recent techniques for rank minimization problems along with the Operator Sinkhorn procedure. (Joint work with Yong Sheng Soh) (TCPL 201) |
10:00 - 10:30 | Coffee Break (TCPL Foyer) |
10:30 - 11:00 |
Arian Maleki: Comparing Signal Recovery Algorithms: Phase Transition Analysis and Beyond ↓ A recent surge of interest in improving imaging/sampling schemes has offered numerous techniques for solving linear and nonlinear inverse problems. The potential user now has a variety of ideas and suggestions that might be helpful, but this inevitably raises a major question that every user, even an expert, should address before applying them: which algorithm offers the best performance? One of the standard tools used extensively in the literature is the phase transition diagram, which illustrates the success probability of an algorithm in terms of the number of measurements and the sparsity level (or other quantitative measures of structure). In this talk, through some simple and intuitive examples, we will show that phase transition diagrams have several major limitations. To obtain better alternatives, we will prove that phase transition analysis can be seen as a first-order expansion of the mean square error. This interpretation enables us to explore higher-order expansions. Such expansions, which are very different from standard expansions such as Taylor series, are capable of explaining and resolving the main limitations of phase transitions. If time permits, we will also go over some other expansions that have been inspired by our framework and can provide useful information in other data regimes. (TCPL 201) |
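As background for this talk: empirical phase transition diagrams of the kind discussed here are typically produced by Monte Carlo simulation. The minimal numpy sketch below computes one slice of such a diagram (empirical success probability versus the number of measurements at fixed sparsity), using orthogonal matching pursuit as a stand-in recovery algorithm; the problem sizes, the solver, and the success threshold are illustrative choices, not taken from the talk.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse signal."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, k, trials = 100, 8, 50
for m in (20, 30, 40, 60):
    successes = 0
    for _ in range(trials):
        A = rng.normal(size=(m, n)) / np.sqrt(m)
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        xhat = omp(A, A @ x, k)
        successes += np.linalg.norm(xhat - x) < 1e-6 * np.linalg.norm(x)
    print(f"m = {m:3d}  empirical success probability = {successes / trials:.2f}")
```

Sweeping both m and k on a grid and recording the success frequency per cell yields the full diagram the abstract refers to.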
11:00 - 11:30 |
Aaron Berk: Parameter instability regimes in proximal denoising ↓ Compressed sensing is a provably stable and robust technique for simultaneous data acquisition and compression, whose implementation is commonly tied to one of three convex l1 programs: constrained Lasso, unconstrained Lasso, and quadratically constrained basis pursuit. Each program gives rise to a one-parameter family of solutions, and for each there exists a parameter value bestowing minimax optimal recovery error. Stability of the recovery error with respect to variation about the optimal choice of the governing parameter is crucial, as the optimal parameter value is unknown in practice. In this talk we demonstrate the existence of regimes giving rise to "parameter instability" for each of the aforementioned l1 programs, restricted to the setting of proximal denoising. Specifically, we prove the existence of asymptotic phase transitions of the recovery error for the proximal denoising problem, and support the theory with numerical simulations. Finally, we discuss how these results extend to their Lasso counterparts. (TCPL 201) |
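For context, proximal denoising with the l1 norm reduces to entrywise soft thresholding. The sketch below, with illustrative signal and noise parameters, sweeps the governing parameter and records the recovery error; how sharply the error degrades away from the best value is exactly the stability question the talk analyzes.

```python
import numpy as np

def soft_threshold(y, lam):
    # Proximal operator of lam * ||.||_1: the proximal denoiser for the
    # unconstrained-Lasso formulation.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

rng = np.random.default_rng(1)
n, k, sigma = 1000, 20, 0.1              # dimension, sparsity, noise level
x = np.zeros(n)
x[:k] = rng.normal(size=k)
y = x + sigma * rng.normal(size=n)       # noisy observation of a sparse signal

# Sweep the governing parameter and record the recovery error.
for lam in sigma * np.array([0.5, 1.0, 2.0, 3.0, 4.0]):
    err = np.linalg.norm(soft_threshold(y, lam) - x)
    print(f"lambda = {lam:.2f}   recovery error = {err:.3f}")
```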
11:30 - 13:00 |
Lunch ↓ Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
13:00 - 14:00 |
Guided Tour of The Banff Centre ↓ Meet in the Corbett Hall Lounge for a guided tour of The Banff Centre campus. (Corbett Hall Lounge (CH 2110)) |
14:00 - 14:20 |
Group Photo ↓ Meet in foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo! (TCPL 201) |
14:30 - 15:00 |
John Murray-Bruce: Beyond Binomial and Negative Binomial: Adaptation in Bernoulli Parameter Estimation ↓ In this talk, we address the problem of estimating the parameter, p, of a Bernoulli process. This problem arises in many applications, including photon-efficient active imaging, where each illumination period is regarded as a single Bernoulli trial. Motivated by acquisition efficiency when multiple Bernoulli processes are of interest, we formulate the allocation of trials under a constraint on the mean number of trials as an optimal resource allocation problem.
We first explore an oracle-aided trial allocation, which demonstrates that there can be a significant advantage from varying the allocation for different processes and inspires a simple trial allocation gain quantity. Motivated by realizing this gain without an oracle, we present a trellis-based framework for representing and optimizing data-dependent stopping rules. Considering the case of Beta priors, three implementable stopping rules with similar performances are explored, and the simplest of these is shown to asymptotically achieve the oracle-aided trial allocation. These approaches are further extended to estimating functions of a Bernoulli parameter. In simulations inspired by realistic active imaging scenarios, we demonstrate significant mean-squared error improvements: up to 4.36 dB for the estimation of p and up to 1.80 dB for the estimation of log p. (TCPL 201) |
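A back-of-the-envelope illustration of the oracle-aided trial allocation gain mentioned above: for the empirical-mean estimator, the MSE of estimating p_i from m_i trials is p_i(1-p_i)/m_i, and allocating trials proportionally to sqrt(p_i(1-p_i)) minimizes the total MSE under a fixed budget (by Lagrange multipliers). The parameter values below are illustrative; the talk's trellis-based stopping rules aim to realize such gains without the oracle.

```python
import numpy as np

p = np.array([0.01, 0.02, 0.5])   # unknown Bernoulli parameters (illustrative)
M = 3000                          # total trial budget across the processes

# MSE of the empirical-mean estimate of p_i from m_i Bernoulli trials.
mse = lambda p_i, m_i: p_i * (1 - p_i) / m_i

# Uniform allocation versus an oracle-aided allocation m_i ~ sqrt(p_i(1-p_i)),
# which minimizes the total MSE subject to sum(m_i) = M.
m_uniform = np.full_like(p, M / len(p))
w = np.sqrt(p * (1 - p))
m_oracle = M * w / w.sum()

gain_db = 10 * np.log10(mse(p, m_uniform).sum() / mse(p, m_oracle).sum())
print(f"oracle trial-allocation gain: {gain_db:.2f} dB")
```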
15:00 - 15:30 | Coffee Break (TCPL Foyer) |
15:30 - 16:30 |
Alon Kipnis: Information efficient data acquisition using analog to digital compression ↓ The disproportionately large amount of raw data available compared to the resources for processing it poses major challenges in many modern applications. For example, such challenges arise in processing sensor information in self-driving cars and in speech-to-text transcribers based on artificial neural networks. In such applications, reductions both in terms of the signals’ dimension (sampling) and information (lossy compression) are necessary in the acquisition stage. Traditionally, these two operations are considered separately: the sampler is designed to minimize information loss due to sampling based on characteristics of the high-dimensional input, while the quantizer is designed to represent the samples as accurately as possible subject to a constraint on the number of bits that can be used in the representation. The goal of this talk is to revisit this paradigm by illuminating the dependency between these two operations. In particular, we explore the requirements on the sampling system subject to constraints on the available number of bits for storing, communicating, or processing the original data. We conclude that a joint optimization of the sampler and lossy compressor can lead to a signal representation that is optimal both in terms of its dimension and its information content. (TCPL 201) |
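A stylized numerical illustration of the sampling/quantization tradeoff the talk revisits, under assumptions not taken from the talk: a toy Gaussian signal with decaying coefficient variances and the textbook Gaussian rate-distortion formula D = s^2 * 2^(-2b). Under a fixed total bit budget, retaining more samples leaves fewer bits per sample, and the best operating point balances the two losses, which is the joint design message of the abstract.

```python
import numpy as np

# Toy signal: n Gaussian coefficients with decaying variances; the "sampler"
# keeps the first m coefficients, the quantizer spends b = R/m bits on each.
n, R = 64, 32                                  # coefficients, total bit budget
var = 1.0 / (1 + np.arange(n)) ** 2            # coefficient variances

for m in (2, 4, 8, 16, 32):
    b = R / m                                  # bits per retained coefficient
    d_sampling = var[m:].sum()                 # energy discarded by the sampler
    d_quant = var[:m].sum() * 2.0 ** (-2 * b)  # Gaussian rate-distortion loss
    print(f"m = {m:2d} samples at {b:4.1f} bits: distortion = {d_sampling + d_quant:.4f}")
```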
16:50 - 17:20 |
Wenda Zhou: Compressed Sensing in the Presence of Speckle Noise ↓ Speckle noise is a form of multiplicative noise that is ubiquitous in coherent sensing applications, most notably synthetic aperture radar (SAR) and ultrasound.
We study the problem of compressed sensing, that is, acquiring and reconstructing a signal from undersampled linear measurements, under the speckle noise model. We are thus faced with the problem of solving an underdetermined linear system where, instead of additive noise on the measurements, we have multiplicative noise on the signal.
We propose a constrained likelihood approach: we derive the likelihood and establish its concentration properties. We establish the convergence rate of the constrained maximum likelihood estimator under a low-dimensional signal hypothesis. Finally, we provide some directions for implementing such an estimator in practice. (TCPL 201) |
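A minimal sketch of the measurement model described above, with illustrative dimensions and noise levels: speckle enters multiplicatively on the signal before the underdetermined linear map, in contrast with the usual additive-noise compressed sensing model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 200, 80, 5                       # ambient dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0
A = rng.normal(size=(m, n)) / np.sqrt(m)   # undersampled linear operator

# Speckle: multiplicative noise applied to the signal, not the measurements.
w = 0.3 * rng.normal(size=n)               # illustrative speckle fluctuation
y_speckle = A @ (x * (1 + w))

# Contrast with the standard additive-noise compressed sensing model.
y_additive = A @ x + 0.05 * rng.normal(size=m)
print(y_speckle[:3], y_additive[:3])
```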
17:30 - 19:30 |
Dinner ↓ A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
Tuesday, October 30 | |
---|---|
07:00 - 09:00 | Breakfast (Vistas Dining Room) |
09:00 - 10:00 |
Michelle Effros: On a New Approach to Random Access Communication ↓ The challenge of random access communication systems like WiFi hotspots and cell phone towers is that the number of communicating devices transmitting to a single receiver can vary widely over time. This talk considers a new approach to random access communication that in some cases allows performance approaching the optimal rate for the multiple access channel (MAC) in operation, despite the fact that neither the transmitters nor the receiver knows which MAC that is. The proposed technique suggests a number of potential connections and analogies in signal processing, which are also briefly explored. (TCPL 201) |
10:00 - 10:30 | Coffee Break (TCPL Foyer) |
10:30 - 11:00 |
Armeen Taeb: False Discovery and Its Control in Low Rank Estimation ↓ Models specified by low-rank matrices are ubiquitous in contemporary applications. In many of these problem domains, the row/column space structure of a low-rank matrix carries information about some underlying phenomenon, and it is of interest in inferential settings to evaluate the extent to which the row/column spaces of an estimated low-rank matrix signify discoveries about the phenomenon. However, in contrast to variable selection, we lack a formal framework to assess true/false discoveries in low-rank estimation; in particular, the key source of difficulty is that the standard notion of a discovery is a discrete one that is ill-suited to the smooth structure underlying low-rank matrices. We address this challenge via a geometric reformulation of the concept of a discovery, which then enables a natural definition in the low-rank case. We describe and analyze a generalization of the Stability Selection method of Meinshausen and Bühlmann to control for false discoveries in low-rank estimation, and we demonstrate its utility compared to previous approaches via numerical experiments. (TCPL 201) |
11:00 - 11:30 |
Eric Lybrand: Quantization for Low-Rank Matrix Recovery ↓ We study Sigma-Delta (ΣΔ) quantization methods coupled with appropriate reconstruction algorithms for digitizing randomly sampled low-rank matrices. We will show that the reconstruction error associated with our methods decays polynomially with the oversampling factor, and we leverage our results to obtain root-exponential accuracy by optimizing over the choice of quantization scheme. Additionally, we will show that a random encoding scheme, applied to the quantized measurements, yields a near-optimal exponential bit-rate. As an added benefit, our schemes are robust both to noise and to deviations from the low-rank assumption. In short, we fully generalize analogous results, obtained in the classical setting of bandlimited function acquisition and more recently in the finite-frame and compressed sensing settings, to the case of low-rank matrices sampled with sub-Gaussian linear operators. (TCPL 201) |
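For readers unfamiliar with Sigma-Delta quantization, the sketch below implements the classical greedy first-order scheme for a scalar sequence (not the low-rank-matrix setting of the talk): each sample plus the carried state is rounded to the nearest level and the rounding error is fed back, so a crude low-pass reconstruction improves with oversampling. The input signal, alphabet, and filter length are illustrative.

```python
import numpy as np

def sigma_delta_1st_order(y, levels):
    """Greedy first-order Sigma-Delta: quantize y_i + state, carry the error."""
    u, q = 0.0, np.empty_like(y)
    for i, yi in enumerate(y):
        v = u + yi
        q[i] = levels[np.argmin(np.abs(levels - v))]  # nearest quantization level
        u = v - q[i]                                  # bounded accumulated error
    return q

y = 0.4 * np.sin(np.linspace(0, 4 * np.pi, 256))      # bounded input signal
q = sigma_delta_1st_order(y, levels=np.array([-1.0, 1.0]))

# Noise shaping: a simple moving-average reconstruction from the 1-bit stream.
L = 16
rec = np.convolve(q, np.ones(L) / L, mode="same")
print("max reconstruction error:", np.max(np.abs(rec - y)))
```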
11:30 - 13:30 | Lunch (Vistas Dining Room) |
13:30 - 14:30 |
Ozgur Yilmaz: Near-optimal sample complexity for convex tensor completion ↓ We study the problem of estimating a low-rank tensor when we have noisy observations of a subset of its entries. A rank-$r$, order-$d$, $N \times N \times \cdots \times N$ tensor where $r=O(1)$ has $O(dN)$ free variables. On the other hand, prior to our work, the best sample complexity achieved in the literature was $O(N^{\frac{d}{2}})$, obtained by solving a tensor nuclear-norm minimization problem. In this talk, we consider the "M-norm", an atomic norm whose atoms are rank-1 sign tensors. We also consider a generalization of the matrix max-norm to tensors, which results in a quasi-norm that we call the "max-qnorm". We prove that solving an M-norm constrained least squares problem results in nearly optimal sample complexity for low-rank tensor completion. A similar result holds for the max-qnorm as well. Furthermore, we show that these bounds are nearly minimax rate-optimal. In the last part of the talk, we address the 1-bit tensor completion problem and show that our results in the first part can be generalized to this case: the sample complexity of learning a low-rank tensor from noisy, 1-bit measurements of a subset of its entries still scales linearly with the number of free variables. This is joint work with Navid Ghadermarzy and Yaniv Plan. (TCPL 201) |
14:30 - 15:00 |
Xiaowei Li: Concentration for Euclidean Norm of Random Vectors ↓ We present a new Bernstein’s inequality for sums of mean-zero, independent sub-exponential random variables with bounded first absolute moments. We use this to prove a tight concentration bound for the Euclidean norm of sub-gaussian random vectors. We then apply the result to sub-gaussian random matrices on geometric sets, where the bounded first absolute moment condition arises naturally from the isotropy of the random matrices. As an application, we discuss the implications for dimensionality reduction and Johnson-Lindenstrauss transforms. Lastly, we will talk about the possibility of extending this new Bernstein’s inequality to second-order chaos (the Hanson-Wright inequality). (TCPL 201) |
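The concentration phenomenon in question can be seen numerically: for a standard Gaussian vector in R^n, the Euclidean norm concentrates around sqrt(n) with fluctuations of constant order, independent of n. A quick Monte Carlo check with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(4)
for n in (16, 256, 4096):
    # 2000 independent draws of a standard Gaussian vector in R^n.
    norms = np.linalg.norm(rng.normal(size=(2000, n)), axis=1)
    print(f"n = {n:5d}   mean ||g|| = {norms.mean():8.3f}   "
          f"sqrt(n) = {np.sqrt(n):8.3f}   std = {norms.std():.3f}")
```

The standard deviation stays near 1/sqrt(2) across all n, while the mean grows like sqrt(n); this dimension-free fluctuation is what a tight concentration bound captures.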
15:00 - 15:30 | Coffee Break (TCPL Foyer) |
15:30 - 16:00 |
Nir Shlezinger: Hardware-limited task-based quantization ↓ Quantization plays a critical role in digital signal processing systems. Quantizers are typically designed to obtain an accurate digital representation of the input signal, operating independently of the system task, and are commonly implemented using serial scalar analog-to-digital converters (ADCs). This talk is concerned with hardware-limited task-based quantization, where a system utilizing a serial scalar ADC is designed to provide a suitable representation in order to allow the recovery of a parameter vector underlying the input signal. We propose hardware-limited task-based quantization systems for a fixed and finite quantization resolution, and characterize their achievable distortion. Our results illustrate the benefits of properly taking into account the underlying task in the design of the quantization scheme. (TCPL 201) |
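A toy numerical contrast between task-ignorant and task-based quantization, under assumptions not taken from the talk: the task is a known linear functional theta = h^T x, the ADC is a uniform scalar quantizer, and a fixed total bit budget is either spread over the n entries of x or spent entirely on the analog-combined task variable.

```python
import numpy as np

rng = np.random.default_rng(9)
n, total_bits = 8, 8                      # fixed overall bit budget (illustrative)

def uniform_quantize(v, bits, lo=-4.0, hi=4.0):
    # Scalar uniform quantizer with 2**bits levels on [lo, hi].
    step = (hi - lo) / 2 ** bits
    idx = np.clip(np.floor((v - lo) / step), 0, 2 ** bits - 1)
    return lo + step * (idx + 0.5)

h = rng.normal(size=n) / np.sqrt(n)       # task: recover theta = x @ h
x = rng.normal(size=(100_000, n))
theta = x @ h

# Task-ignorant: spend the budget on x itself (total_bits // n bits per entry),
# then compute the task digitally from the quantized entries.
est_ignorant = uniform_quantize(x, total_bits // n) @ h
# Task-based: reduce to the task variable in analog, spend all bits on it.
est_task = uniform_quantize(theta, total_bits)
print("task-ignorant MSE:", np.mean((est_ignorant - theta) ** 2))
print("task-based    MSE:", np.mean((est_task - theta) ** 2))
```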
16:00 - 16:30 |
Kaiming Shen: Fractional Programming for Communication Systems ↓ This talk proposes a new transform technique for solving fractional programming (FP) problems, i.e., optimization problems involving ratio terms. Classic FP techniques, such as the Charnes-Cooper method and Dinkelbach’s method, typically deal with a single ratio and cannot be extended to the multiple-ratio case. This talk will introduce our recent progress in solving FP with multiple ratios and matrix ratios, along with a broad range of applications in full-duplex cellular networks, energy efficiency enhancement, uplink user scheduling, and multiple-input multiple-output (MIMO) device-to-device communications. Furthermore, the talk will discuss the connections of FP to the minorization-maximization, fixed-point iteration, and weighted minimum mean-square error algorithms. (TCPL 201) |
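For reference, the classic single-ratio Dinkelbach's method mentioned above iterates lam <- f(x*)/g(x*), where x* maximizes the parametrized subproblem f(x) - lam * g(x). A minimal sketch with a grid-search inner step and an illustrative energy-efficiency-style objective (not an example from the talk):

```python
import numpy as np

# Dinkelbach's method for a single-ratio problem max f(x)/g(x) with g > 0:
# solve max_x f(x) - lam * g(x), then update lam to the achieved ratio.
x_grid = np.linspace(0.01, 5.0, 5001)    # grid search stands in for the solver
f = lambda x: np.log(1 + x)              # e.g., an achievable rate
g = lambda x: 1.0 + x                    # e.g., consumed power

lam = 0.0
for _ in range(50):
    x_star = x_grid[np.argmax(f(x_grid) - lam * g(x_grid))]
    lam_new = f(x_star) / g(x_star)
    if abs(lam_new - lam) < 1e-12:
        break
    lam = lam_new
print(f"optimal ratio ~ {lam:.6f} attained near x ~ {x_star:.4f}")  # ~1/e at e-1
```

The multiple-ratio setting of the talk is precisely where this update no longer applies directly, motivating the new transform.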
16:30 - 17:00 |
Wei Yu: Spatial Deep Learning for Wireless Scheduling ↓ The optimal scheduling of interfering links in a dense wireless network with full frequency reuse is a challenging task. In this talk, we first propose a novel fractional programming method to solve this problem, and then point out that the traditional optimization approach of first estimating all the interfering channel strengths and then optimizing the scheduling based on the model is not always practical, because channel estimation is resource-intensive, especially in dense networks. To address this issue, we investigate the possibility of using a deep learning approach to bypass channel estimation and to schedule links efficiently based solely on the geographic locations of transmitters and receivers. This is accomplished by using locally optimal schedules generated using fractional programming for randomly deployed device-to-device networks as training data, and by using a novel neural network architecture that takes the geographic spatial convolutions of the interfering or interfered neighboring nodes as input over multiple feedback stages to learn the optimum solution. The resulting neural network gives good performance for sum-rate maximization and is capable of generalizing to larger deployment areas and to deployments of different link densities. Further, we propose a novel approach of utilizing the sum-rate optimal scheduling heuristics over judiciously chosen subsets of links to provide fair scheduling across the network, thereby showing the promise of using deep learning to solve discrete optimization problems in wireless networking. (TCPL 201) |
17:30 - 19:30 | Dinner (Vistas Dining Room) |
Wednesday, October 31 | |
---|---|
07:00 - 09:00 | Breakfast (Vistas Dining Room) |
09:00 - 10:00 |
Lizhong Zheng: Local Geometric Analysis and Applications to Learning Algorithms ↓ Local geometric analysis is a method to define a coordinate system in a small neighborhood in the space of distributions and the functional space over a given alphabet. It is a powerful technique since the notions of distance, projection, and inner product defined this way are useful in optimization problems involving distributions, such as regressions. It has been used in many places in the literature, such as correlation analysis and correspondence analysis. In this talk, we will go through some of the basic setups and properties, and discuss a specific problem we call "universal feature selection", which has close connections to popular learning algorithms such as matrix completion and deep learning. We will use this problem to motivate definitions of new information metrics for partial information and the relevance to specific queries. (TCPL 201) |
10:00 - 10:30 | Coffee Break (TCPL Foyer) |
10:30 - 11:00 |
Miguel Rodrigues: On Deep Learning for Inverse Problems ↓ Deep neural networks (DNNs) – which consist of a series of non-linear transformations whose parameters are learned from training data – have been achieving state-of-the-art results in a wide range of applications such as computer vision, automatic speech recognition, automatic speech translation, natural language processing, and more. However, these remarkable practical successes have not been accompanied by foundational contributions that provide a rationale for the performance of this class of algorithms.
This talk concentrates on the characterization of the generalization properties of deep neural network architectures. In particular, the key ingredient of our analysis is the so-called Jacobian matrix of the deep neural network that defines how distances are preserved between points at the input and output of the network.
Our analysis – which applies to a wide range of network architectures – shows how the properties of the Jacobian matrix affect the generalization properties of deep neural networks; it also inspires new regularization strategies for deep neural networks. Finally, our contributions also bypass some of the limitations of other characterizations of the generalization error of deep neural networks in the literature. (TCPL 201) |
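To make the central object concrete: for a ReLU network the input-output Jacobian at a point is available in closed form, and its spectral norm measures how much local input perturbations can be amplified. A minimal two-layer sketch with illustrative sizes (not the architectures analyzed in the talk):

```python
import numpy as np

rng = np.random.default_rng(10)
d_in, d_h, d_out = 8, 16, 4
W1 = rng.normal(size=(d_h, d_in)) / np.sqrt(d_in)
W2 = rng.normal(size=(d_out, d_h)) / np.sqrt(d_h)

def net(x):
    return W2 @ np.maximum(W1 @ x, 0.0)   # two-layer ReLU network

def jacobian(x):
    # For a ReLU network the input-output Jacobian is piecewise constant:
    # J(x) = W2 @ diag(1{W1 x > 0}) @ W1.
    mask = (W1 @ x > 0).astype(float)
    return W2 @ (mask[:, None] * W1)

x = rng.normal(size=d_in)
J = jacobian(x)
# A large spectral norm means small input perturbations can be amplified,
# which is the kind of distance (non-)preservation the analysis tracks.
print("spectral norm of Jacobian at x:", np.linalg.norm(J, 2))
```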
11:00 - 11:30 |
Salman Salamatian: Principal Inertia Components & Applications ↓ We will discuss Principal Inertia Components (PICs), a theoretical framework for finely decomposing the joint distribution between two random variables X and Y. The PICs, under different guises, can be traced back to the works of Hirschfeld (1935), Gebelein (1941), and Rényi (1959). We show how the PICs connect and extend various concepts in statistics and information theory, such as maximal correlation, spectral clustering of probability graphs, and common information. We then present applications of this technique to problems in privacy against inference, correspondence analysis at scale, and black-box model comparisons. This is joint work with Ali Makhdoumi, Muriel Médard (MIT), Hsiang Hsu, and Flavio Calmon (Harvard). (TCPL 201) |
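Computationally, the PICs of a finite joint pmf are singular values: writing the joint distribution as a matrix P with marginals p_X and p_Y, the PICs are the singular values of diag(p_X)^{-1/2} P diag(p_Y)^{-1/2}; the top value is always 1, and the second is the Hirschfeld-Gebelein-Rényi maximal correlation. A minimal sketch with an illustrative joint distribution:

```python
import numpy as np

# Joint pmf P[x, y] over a 3-value X and a 2-value Y (sums to 1, illustrative).
P = np.array([[0.30, 0.10],
              [0.05, 0.25],
              [0.10, 0.20]])
px, py = P.sum(axis=1), P.sum(axis=0)

# PICs = singular values of the normalized joint distribution matrix.
Q = np.diag(px ** -0.5) @ P @ np.diag(py ** -0.5)
s = np.linalg.svd(Q, compute_uv=False)
print("singular values:", s)          # s[0] == 1 always
print("maximal correlation:", s[1])
```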
11:30 - 11:45 | Discussion of afternoon activities (TCPL 201) |
12:00 - 13:30 | Lunch (Vistas Dining Room) |
13:30 - 17:30 | Free Afternoon (Banff National Park) |
17:30 - 19:30 | Dinner (Vistas Dining Room) |
Thursday, November 1 | |
---|---|
07:00 - 09:00 | Breakfast (Vistas Dining Room) |
09:00 - 10:00 |
Shirin Jalali: Using compression codes for efficient data acquisition ↓ With more than a century of research and development, data compression is a relatively mature field, with impressive practical results on one hand and a solid information-theoretic background on the other. Commercial image and video compression codes are carefully designed algorithms that take advantage of intricate structures that exist in natural images or video files to encode them as efficiently as possible. On the other hand, exploiting a signal's structure to design more efficient data acquisition systems, as done in compressed sensing, phase retrieval, or snapshot compressed sensing, is a relatively new research endeavor whose actual impact is starting to emerge in a few industries. However, comparing the signal structures used by typical data compression codes with those used by modern data acquisition systems readily reveals the big gap between the two. This motivates us to ask the following question: can we design a data acquisition algorithm that employs an existing compression code as a mechanism to define and impose structure? An affirmative answer to this question potentially leads to much more efficient data acquisition systems that exploit complex structures, far beyond those already used in such systems. In this talk, we focus on addressing this question and show that not only is the answer theoretically positive, but there also exist efficient compression-based recovery algorithms that can achieve state-of-the-art performance, for instance in imaging systems. (TCPL 201) |
10:00 - 10:30 | Coffee Break (TCPL Foyer) |
10:30 - 11:00 |
Rayan Saab: New and Improved Binary Embeddings of Data (and Quantization for Compressed Sensing with Structured Random Matrices) ↓ We discuss two related problems that arise in the acquisition and processing of high-dimensional data. First, we consider distance-preserving fast binary embeddings. Here we propose fast methods to replace points from a subset of R^N with points in a lower-dimensional cube {±1}^m, which we endow with an appropriate function to approximate Euclidean distances in the original space. Second, we consider a problem in the quantization (i.e., digitization) of compressed sensing measurements. Here, we deal with measurements arising from the so-called bounded orthonormal systems and partial circulant ensembles, which arise naturally in compressed sensing applications. In both of these problems we show state-of-the-art error bounds, and to our knowledge, some of our results are the first of their kind. This is joint work with Thang Huynh. (TCPL 201) |
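A classical (non-fast) baseline for the first problem: taking signs of a Gaussian projection maps points into {±1}^m, and the normalized Hamming distance between two embedded points concentrates around their angle divided by pi. The talk's fast constructions and Euclidean-distance guarantees go beyond this sketch; the dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
N, m = 128, 4000
A = rng.normal(size=(m, N))        # dense Gaussian projection (not "fast")

def embed(v):
    # One-bit embedding into {-1, +1}^m via signs of the projection.
    return np.sign(A @ v)

x, y = rng.normal(size=N), rng.normal(size=N)
hamming = np.mean(embed(x) != embed(y))     # normalized Hamming distance
angle = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
print(f"Hamming distance = {hamming:.4f}   angle / pi = {angle / np.pi:.4f}")
```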
11:00 - 11:30 |
Laurent Jacques: Dithered quantized compressive sensing with arbitrary RIP matrices ↓ Quantized compressive sensing (QCS) deals with the problem of coding compressive measurements of low-complexity signals (e.g., sparse vectors in a given basis, low-rank matrices) with quantized, finite-precision representations, a mandatory step in any practical sensing model. While the resolution of this quantization clearly impacts the quality of signal reconstruction, there also exist incompatible combinations of quantization functions and sensing matrices that preclude arbitrarily low reconstruction error even as the number of measurements increases.
In this talk, we will see that a large class of random matrix constructions, namely those known to satisfy the restricted isometry property (RIP) in the compressive sensing literature, can be made "compatible" with a simple scalar and uniform quantizer (e.g., a rescaled rounding operation). This compatibility is ensured simply by the addition of a uniform random vector, or random "dither", to the compressive signal measurements before quantization.
As a result, for these sensing matrices there exists (at least) one method, projected back projection (PBP), capable of estimating low-complexity signals from their quantized compressive measurements with an error that decays as the number of measurements increases. Our analysis is developed around the limited projection distortion (LPD), a new property satisfied with high probability by dithered quantized random mappings, which enables both uniform and non-uniform (i.e., fixed-signal) reconstruction error guarantees for PBP. (TCPL 201) |
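A minimal sketch of the pipeline described above, with illustrative parameters and a subtractive-dither variant of the quantizer: compressive measurements are uniformly quantized after adding a uniform random dither (which the decoder knows and removes), and projected back projection simply back-projects the quantized data and projects onto k-sparse vectors.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, k, delta = 400, 2000, 10, 0.5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n))                  # RIP-type random sensing matrix

# Uniform scalar quantization of A @ x with a random dither (subtractive form).
u = rng.uniform(0, delta, size=m)
q = delta * (np.floor((A @ x + u) / delta) + 0.5) - u

# Projected back projection: back-project, then project onto k-sparse vectors.
bp = A.T @ q / m
xhat = np.zeros(n)
top = np.argsort(np.abs(bp))[-k:]
xhat[top] = bp[top]
print("relative reconstruction error:", np.linalg.norm(xhat - x) / np.linalg.norm(x))
```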
11:30 - 13:30 | Lunch (Vistas Dining Room) |
13:30 - 14:30 |
Waheed Bajwa: Sample complexity bounds for dictionary learning from vector- and tensor-valued data ↓ During the last decade, dictionary learning has emerged as one of the most powerful methods for data-driven extraction of features from data. While the initial focus on dictionary learning had been from an algorithmic perspective, recent years have seen an increasing interest in understanding the theoretical underpinnings of dictionary learning. Many such results rely on the use of information-theoretic analytical tools and help understand the fundamental limitations of different dictionary learning algorithms. This talk focuses on the theoretical aspects of dictionary learning and summarizes existing results that deal with dictionary learning from both vector-valued data and tensor-valued (i.e., multiway) data, which are defined as data having multiple modes. These results are primarily stated in terms of lower and upper bounds on the sample complexity of dictionary learning, defined as the number of samples needed to identify or reconstruct the true dictionary underlying data from noiseless or noisy samples, respectively. In addition to highlighting the effects of different parameters on the sample complexity of dictionary learning, this talk also brings out the potential advantages of dictionary learning from tensor data and concludes with a set of open problems that remain unaddressed for dictionary learning.
(This talk is based on a book chapter with the same title that is slated to appear in the edited volume "Information-Theoretic Methods in Data Science," edited by M. R. D. Rodrigues and Y. C. Eldar.) (TCPL 201) |
14:30 - 15:00 |
Maxim Goukhshtein: Distributed Coding of Compressively Sensed Sources ↓ In this work we propose a new method for compressing multiple correlated sources with a very low-complexity encoder in the presence of side information. Our approach uses ideas from compressed sensing and distributed source coding. At the encoder, syndromes of the quantized compressively sensed sources are generated and transmitted. The decoder uses side information to predict the compressed sources. The predictions are then used to recover the quantized measurements via a two-stage decoding process consisting of bitplane prediction and syndrome decoding. Finally, guided by the structure of the sources and the side information, the sources are reconstructed from the recovered measurements. As a motivating example, we consider the compression of multispectral images acquired on board satellites, where resources, such as computational power and memory, are scarce. Our experimental results exhibit a significant improvement in the rate-distortion trade-off when compared against approaches with similar encoder complexity. (TCPL 201) |
15:00 - 15:30 | Coffee Break (TCPL Foyer) |
15:40 - 16:10 |
Vincent Schellekens: Compressive Learning with quantized embedding of datasets ↓ In a context where learning from voluminous datasets is more and more common, the time and memory efficiency of learning algorithms is critical. Compressive learning is a framework where such large datasets are first compressed to a small summary (a sketch vector), so that subsequent learning tasks performed on the sketch are much more resource-efficient. However, this dataset sketch must first be computed in software. In this work, we propose a new sketch function inspired by universal 1-bit embeddings to facilitate the sketch computation, and we show how to learn parameters from this new quantized dataset sketch. (TCPL 201) |
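For orientation, a classical compressive-learning sketch averages random Fourier features of the data; a quantized variant in the spirit of universal 1-bit embeddings replaces the complex exponential with a dithered binary periodic feature map. The snippet below is an illustrative guess at the construction, not the talk's exact sketch function, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
d, m, n = 2, 64, 10000
X = rng.normal(size=(n, d)) + np.array([3.0, 0.0])   # illustrative dataset
Omega = rng.normal(size=(m, d))                      # random frequencies

# Classical sketch: the empirical average of random Fourier features,
# i.e., m numbers summarizing the whole dataset.
sketch = np.exp(1j * X @ Omega.T).mean(axis=0)

# Quantized variant: a dithered binary (square-wave) periodic feature map
# replaces the complex exponential before averaging (illustrative guess).
xi = rng.uniform(0, 2 * np.pi, size=m)
sketch_1bit = np.sign(np.cos(X @ Omega.T + xi)).mean(axis=0)
print(sketch[:3])
print(sketch_1bit[:3])
```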
16:10 - 16:40 |
Xiugang Wu: Minimax Learning for Remote Prediction ↓ The classical problem of supervised learning is to infer an accurate predictor of a target variable $Y$ from a measured variable $X$ by using a finite number of labeled training samples. Motivated by the increasingly distributed nature of data and decision making, in this talk we consider a variation of this classical problem in which the prediction is performed remotely based on a rate-constrained description $M$ of $X$. Upon receiving $M$, the remote node computes an estimate $\hat Y$ of $Y$. We follow the recent minimax approach to study this learning problem and show that it corresponds to a one-shot minimax noisy source coding problem. We then establish information theoretic bounds on the risk-rate Lagrangian cost, which is approximately given by a robust generalized information bottleneck function. This leads to a general method for designing near-optimal descriptor-estimator pairs, which can be viewed as a rate-constrained analog to the maximum conditional entropy principle used in the classical minimax learning problem. Interestingly, here we show that a naive estimate-compress scheme for rate-constrained prediction is not in general optimal.
Joint work with Cheuk Ting Li, Ayfer Ozgur and Abbas El Gamal. (TCPL 201) |
17:30 - 19:30 | Dinner (Vistas Dining Room) |
Friday, November 2 | |
---|---|
07:00 - 09:00 | Breakfast (Vistas Dining Room) |
09:00 - 10:00 | Discussion: Fostering community interactions and developing connections between topics. (TCPL 201) |
10:00 - 10:30 | Coffee Break (TCPL Foyer) |
10:30 - 11:30 |
Discussion: Interesting directions and open problems in Information Theory ↓ Miguel Rodrigues will lead the discussion (TCPL 201) |
11:30 - 12:00 |
Checkout by Noon ↓ 5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to checkout of the guest rooms by 12 noon. (Front Desk - Professional Development Centre) |
12:00 - 13:30 | Lunch from 11:30 to 13:30 (Vistas Dining Room) |
13:30 - 15:00 | Structured open discussion (TBA) (TCPL 201) |