Schedule for: 23w5042 - Leveraging Model- and Data-Driven Methods in Medical Imaging
Beginning on Sunday, June 25 and ending on Friday, June 30, 2023
All times in UBC Okanagan, Canada time, PDT (UTC-7).
Sunday, June 25 | |
---|---|
16:00 - 23:00 | Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk Nechako Residence) |
20:00 - 22:00 | Informal gathering (TBA) |
Monday, June 26 | |
---|---|
08:00 - 09:00 | Breakfast (Main Meeting Room) |
10:30 - 10:45 | Introduction and Welcome by BIRS-UBCO Staff (Main Meeting Room) |
10:45 - 11:00 | Opening (Main Meeting Room) |
11:00 - 11:30 | Coffee Break (ARTS 112) |
11:30 - 12:15 |
Thomas Pock: Posterior-Variance-Based Error Quantification for Inverse Problems in Imaging ↓ In this work, a method for obtaining pixel-wise error bounds in Bayesian regularization of inverse imaging problems is introduced. The proposed method employs estimates of the posterior variance together with techniques from conformal prediction to obtain coverage guarantees for the error bounds, without making any assumption on the underlying data distribution. It is generally applicable to Bayesian regularization approaches, independently of, e.g., the concrete choice of the prior. Furthermore, the coverage guarantees can also be obtained when only approximate sampling from the posterior is possible. In particular, this allows the proposed framework to incorporate any learned prior in a black-box manner. Guaranteed coverage without assumptions on the underlying distributions is only achievable because the magnitude of the error bounds is, in general, unknown in advance. Nevertheless, experiments with multiple regularization approaches presented in the paper confirm that in practice the obtained error bounds are rather tight. To realize the numerical experiments, a novel primal-dual Langevin algorithm for sampling from non-smooth distributions is also introduced in this work. (Main Meeting Room) |
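The calibration step behind such guarantees can be illustrated with a minimal sketch. This is a generic split-conformal construction on synthetic data, not the paper's algorithm: nonconformity scores normalize the reconstruction error by the posterior standard deviation, and a conformal quantile rescales the variance estimate into an error bound with finite-sample coverage. All array names and sizes are invented for illustration.

```python
import numpy as np

def conformal_error_bounds(means, stds, truths, alpha=0.1):
    """Calibrate pixel-wise bounds of the form |x - mean| <= q * std.

    means, stds, truths: arrays of shape (n_images, n_pixels) from a
    calibration set; mean/std would come from (approximate) posterior sampling.
    Returns the conformal scale q giving ~(1 - alpha) image-wise coverage.
    """
    scores = np.abs(truths - means) / stds          # nonconformity scores
    s = np.max(scores, axis=1)                      # image-wise worst case
    n = len(s)
    k = int(np.ceil((n + 1) * (1 - alpha)))         # conformal rank
    return np.sort(s)[min(k, n) - 1]                # empirical quantile

# Toy check: a Gaussian "posterior" centered near the truth.
rng = np.random.default_rng(0)
truths = rng.normal(size=(200, 64))
means = truths + 0.1 * rng.normal(size=(200, 64))
stds = 0.1 * np.ones((200, 64))
q = conformal_error_bounds(means, stds, truths, alpha=0.1)
covered = np.mean(np.all(np.abs(truths - means) <= q * stds, axis=1))
```

By construction, at least a (1 - alpha) fraction of calibration images fall inside the calibrated bounds, regardless of the data distribution.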
12:30 - 14:15 | Lunch (Sunshine - Administration Building) |
14:15 - 15:00 |
Matteo Santacesaria: Continuous generative neural networks for inverse problems ↓ Generative models are a large class of deep learning architectures, trained to describe a subset of a high dimensional space with a small number of parameters. Popular models include variational autoencoders, generative adversarial networks, normalizing flows and, more recently, score-based diffusion models. In the context of inverse problems, generative models can be used to model prior information on the unknown with a higher level of accuracy than classical regularization methods.
In this talk we will present a new data-driven approach to solving inverse problems based on generative models. Taking inspiration from well-known convolutional architectures, we construct and explicitly characterize a class of injective generative models defined on infinite-dimensional function spaces. The construction is based on wavelet multiresolution analysis: one of the key theoretical novelties is the generalization of the strided convolution between discrete signals to an infinite-dimensional setting. After an offline training of the generative model, the proposed reconstruction method consists of an iterative scheme in the low-dimensional latent space. The main advantages are the faster iterations and the reduced ill-posedness, which is shown with new Lipschitz stability estimates. We also present numerical simulations validating the theoretical findings. (Main Meeting Room) |
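The latent-space iteration can be sketched as follows. This is a toy stand-in, not the architecture from the talk: the trained injective generator is replaced by a random linear map `G`, so the scheme reduces to gradient descent on the measurement misfit over the low-dimensional latent code rather than over the full image.

```python
import numpy as np

rng = np.random.default_rng(7)
k, n, m = 5, 200, 40                       # latent dim, image dim, measurements
G = rng.normal(size=(n, k))                # placeholder "generator" (assumed trained)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # forward operator
z_true = rng.normal(size=k)
y = A @ (G @ z_true)                       # noiseless measurements of a generated image

J = A @ G                                  # forward model seen from latent space
z = np.zeros(k)
step = 1.0 / np.linalg.norm(J, 2) ** 2     # 1/L for the quadratic misfit
for _ in range(500):
    z -= step * J.T @ (J @ z - y)          # gradient step on 0.5*||A G(z) - y||^2
x_rec = G @ z                              # decode the recovered latent code
err = np.linalg.norm(x_rec - G @ z_true) / np.linalg.norm(G @ z_true)
```

Since the iterations live in a 5-dimensional latent space instead of the 200-dimensional image space, each step is cheap and the problem is far better conditioned, which is the mechanism the abstract's stability estimates make precise.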
15:00 - 15:45 |
Allan Greenleaf: A Cubic Correction in EIT Imaging ↓ Virtual Hybrid Edge Detection (VHED) is a proposed method for applying analysis of complex principal type operators to voltage-to-current data in EIT. This allows one to obtain 2D images which, while still low resolution, highlight discontinuities of the electrical conductivity, and is potentially useful, e.g., in continuous monitoring of stroke patients where higher quality CT or MRI imaging
is not feasible. It also appears useful for pre-processing EIT data before applying machine learning algorithms (work of Agnelli et al.).
The original VHED was based on just the linear term in a Neumann expansion of Astala–Päivärinta-type solutions; I will describe a possible improvement using third-order correction terms. (Main Meeting Room) |
15:45 - 16:15 | Coffee Break (ARTS 112) |
16:15 - 17:00 |
Filippo De Mari: Unitarization of the Radon transform ↓ We consider the Radon transform associated to pairs $(X,\Xi)$, a variant of Helgason's notion of dual pair, where $X=G/K$ and $\Xi=G/H$, $G$ being a locally compact group and $K$ and $H$ closed subgroups thereof. Under some technical assumptions, we prove that if the quasi-regular representations of $G$ acting on $L^2(X)$ and $L^2(\Xi)$ are irreducible, then the Radon transform admits a unitarization intertwining the two representations. If, in addition, the representations are square integrable, we provide an inversion formula for the Radon transform based on the voice transform associated to these representations.
The general assumptions (in particular, irreducibility and square integrability of the representations) fail when $X$ is either a noncompact symmetric space or a homogeneous tree and $\Xi$ is the corresponding space of horocycles. Nonetheless, a unitarization theorem holds true in both cases, and the resulting unitary operator does intertwine the quasi-regular representations.
This is joint work with G. Alberti, F. Bartolucci, E. De Vito, M. Monti and F. Odone. (Main Meeting Room) |
17:30 - 19:00 | Dinner (Sunshine/ADM) |
Tuesday, June 27 | |
---|---|
08:00 - 09:00 | Breakfast (Main Meeting Room) |
09:30 - 10:15 |
Giovanni S. Alberti: Compressed sensing for the sparse Radon transform ↓ Compressed sensing allows for the recovery of sparse signals from few measurements, whose number is proportional, up to logarithmic factors, to the sparsity of the unknown signal. The classical theory mostly considers either random linear measurements or subsampled isometries. In particular, the case with the subsampled Fourier transform finds applications to undersampled magnetic resonance imaging. In this talk, I will show how the theory of compressed sensing can also be rigorously applied to the sparse Radon transform, in which only a finite number of angles are considered. One of the main novelties consists in the fact that the Radon transform is associated to an ill-posed inverse problem, and the result follows from a new theory of compressed sensing for abstract inverse problems. (Main Meeting Room) |
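The recovery principle behind the abstract can be illustrated with a standard toy experiment. A Gaussian matrix stands in for the subsampled Radon transform (the sketch does not implement the Radon transform itself), and a sparse signal is recovered from far fewer measurements than its ambient dimension by iterative soft-thresholding (ISTA) applied to the lasso problem.

```python
import numpy as np

def ista(A, y, lam, steps):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of lam*||.||_1
    return x

rng = np.random.default_rng(1)
n, m, s = 128, 50, 5                          # ambient dim, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)      # stand-in for the subsampled operator
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
y = A @ x_true                                # m << n noiseless measurements
x_hat = ista(A, y, lam=0.02, steps=3000)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The point of the talk is that such guarantees can be made rigorous even when the sensing operator is the (ill-posed) sparse-angle Radon transform rather than a random matrix.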
10:15 - 11:00 |
Jurgen Frikel: Modelling data incompleteness in tomography ↓ In this talk, we consider the reconstruction problem of X-ray tomography where only incomplete data are available. We review the known mathematical characterizations of limited data reconstructions and explain the impact of data incompleteness on reconstruction quality. In particular, we discuss why certain features of the searched object cannot be reconstructed reliably from incomplete tomographic data without imposing strong a priori assumptions or without the use of machine learning techniques. We will also explain why and what kind of artifacts can be generated during the reconstruction process, and how the classical models can be modified to mitigate artifact generation. Finally, we discuss how these theoretical insights can be used in practice, e.g. to design dedicated reconstruction techniques or to provide information about reliably reconstructed features. (Main Meeting Room) |
11:00 - 11:30 | Coffee Break (ARTS 112) |
11:30 - 12:00 |
Maximilian Kiss: Using the 2DeteCT data collection as training data for data-driven methods in medical imaging ↓ Recent research in computational imaging largely focuses on developing machine learning (ML) techniques for image reconstruction, which require large-scale training datasets consisting of measurement data and ground-truth images. For many important imaging modalities such as X-ray Computed Tomography (CT), especially in the field of medical CT, suitable experimental datasets are scarce, and many methods are developed and evaluated on simulated data only.
To overcome this challenge, some data-driven methods employ prior knowledge through mathematical and/or physical models; others train on more abundant image datasets such as ImageNet and use transfer learning on a smaller subset of medical training images to reach high performance. We propose the use of a more adequate large training dataset that contains 2D CT images instead of natural images.
We acquired a versatile, open 2D CT dataset suitable for developing ML techniques for image reconstruction tasks such as low-dose reconstruction, limited or sparse angular sampling, beam-hardening artifact reduction, super-resolution, region-of-interest tomography, or segmentation.
For this we designed a sophisticated, semi-automatic scan procedure that utilizes a highly flexible laboratory X-ray CT set-up. A diverse mix of samples with high natural variability in shape and density, resembling abdominal CT scans, was scanned slice-by-slice in a 2D fan-beam geometry. Each of the 5000 slices was scanned with very high angular and spatial resolution and three different beam characteristics: a high-fidelity mode, a low-dose mode, and a beam-hardening-inflicted mode.
In addition, 850 out-of-distribution slices were scanned with sample and beam variations. The total scanning time was 850 hours. We provide the complete image reconstruction pipeline: raw projection data, pre-processing and reconstruction scripts using open software, and reference reconstructions and segmentations. (Main Meeting Room) |
12:00 - 12:30 |
Tim Roith: Bregman Iterations for sparse neural networks and architecture search ↓ I will present a novel learning framework based on stochastic Bregman iterations. It allows one to train sparse neural networks with an inverse-scale-space approach, starting from a very sparse network and gradually adding significant parameters. Furthermore, I will provide a sparse parameter initialization strategy and a stochastic convergence analysis of the loss decay, with additional convergence proofs in the convex regime. It turns out that the Bregman learning framework can also be applied to neural architecture search: it can, for instance, unveil an autoencoder structure for denoising or deblurring problems, which can be further applied to biomedical imaging. By additionally introducing learnable skip connections, the framework can learn a U-Net-like architecture. (Main Meeting Room) |
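The inverse-scale-space mechanism can be sketched with a deterministic linearized Bregman iteration on a toy sparse least-squares problem (a drastic simplification of the stochastic neural-network setting in the talk; the problem sizes and the full-gradient oracle are illustrative). Parameters start at exactly zero and are activated one group at a time as their subgradient variable crosses the threshold.

```python
import numpy as np

def shrink(v, lam):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linearized_bregman(grad, n, lam, tau, steps):
    """Linearized Bregman iteration: parameters start at zero and are
    activated only once the subgradient variable |v_i| exceeds lam."""
    v = np.zeros(n)                 # subgradient (dual) variable
    w = shrink(v, lam)              # the actual sparse parameters
    for _ in range(steps):
        v -= tau * grad(w)          # plain gradient step on the dual variable
        w = shrink(v, lam)          # sparse parameters re-derived from v
    return w

# Toy sparse least-squares problem standing in for network training.
rng = np.random.default_rng(2)
A = rng.normal(size=(60, 100)) / np.sqrt(60)
w_true = np.zeros(100)
w_true[:4] = [2.0, -1.5, 1.0, 2.5]
y = A @ w_true
w = linearized_bregman(lambda u: A.T @ (A @ u - y), 100, lam=0.5, tau=0.2, steps=3000)
sparsity = np.mean(w != 0)
rel_res = np.linalg.norm(A @ w - y) / np.linalg.norm(y)
```

Unlike pruning, which removes weights from a dense network after training, this scheme grows the support from below, which is what makes it attractive for architecture search.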
12:30 - 14:15 | Lunch (Sunshine - Administration Building) |
12:30 - 12:45 | Group Photo (Main Meeting Room) |
14:15 - 15:00 |
Luca Ratti: Learned variational regularization for linear inverse problems ↓ Variational regularization is a well-established technique to tackle instability of inverse problems, and it requires solving a minimization problem in which a mismatch functional is endowed with a suitable regularization term. The choice of such a functional is a crucial task, and it usually relies on theoretical suggestions as well as a priori information on the desired solution.
A promising approach to this task is provided by data-driven strategies, based on the statistical learning paradigm: supposing that the exact solution and the measurements are distributed according to a joint probability distribution, which is partially known thanks to a suitable training sample, we can take advantage of this statistical model to design operators.
In this talk, I will consider linear inverse problems (associated with relevant applications, e.g., in signal processing and in medical imaging), and aim at learning the optimal regularization operator, among the ones belonging to some classes described by suitable parameters. I will first focus on the family of generalized Tikhonov regularizers, for which it is possible to prove theoretical properties of the optimal operator and error bounds for its approximation as the size of the sample grows, both with a supervised-learning strategy and with an unsupervised-learning one. Finally, I will discuss the extension to different families of regularization functionals, with a particular interest in sparsity-promotion.
This is based on joint work with G. S. Alberti, E. De Vito, M. Santacesaria (University of Genoa), and M. Lassas (University of Helsinki) (Main Meeting Room) |
15:00 - 15:45 |
Tan Bui-Thanh: Towards real-time solutions for inverse and imaging problems with uncertainty quantification ↓ Deep Learning (DL) by design is purely data-driven and in general does not require physics. This is the strength of DL but also one of its key limitations. DL methods in their original forms are not capable of respecting the underlying mathematical models or achieving desired accuracy even in big-data regimes. On the other hand, many data-driven science and engineering problems, such as inverse problems, typically have limited experimental or observational data, and DL would overfit the data in this case. Leveraging information encoded in the underlying mathematical models not only compensates for missing information in low-data regimes but also provides opportunities to equip DL methods with the underlying physics and hence obtain higher accuracy. This talk introduces a Tikhonov Network (TNet) that is capable of learning Tikhonov-regularized inverse problems. We rigorously show that our TNet approach can learn information encoded in the underlying mathematical models, and thus can produce consistent or equivalent inverse solutions, while naive purely data-based counterparts cannot. Furthermore, we theoretically study the error estimate between TNet and Tikhonov inverse solutions and the conditions under which they are the same. Extension to statistical inverse problems will also be presented. (Main Meeting Room) |
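For a linear forward model, the Tikhonov-regularized solution that such a network is trained to emulate has a familiar closed form. The sketch below (a toy ill-conditioned operator with invented sizes, not the talk's PDE-based setting) contrasts it with the naive unregularized solve, which amplifies noise catastrophically.

```python
import numpy as np

def tikhonov(A, d, alpha):
    """Closed-form Tikhonov solution x = (A^T A + alpha I)^{-1} A^T d."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ d)

rng = np.random.default_rng(3)
G = rng.normal(size=(30, 30))
A = G @ np.diag(0.5 ** np.arange(30))     # severely ill-conditioned forward model
x_true = rng.normal(size=30)
d = A @ x_true + 0.01 * rng.normal(size=30)  # noisy data

x_reg = tikhonov(A, d, alpha=1e-3)        # regularized inversion
x_naive = np.linalg.solve(A, d)           # naive inversion amplifies the noise
err_reg = np.linalg.norm(x_reg - x_true)
err_naive = np.linalg.norm(x_naive - x_true)
```

A purely data-driven network fitted to (d, x) pairs has no reason to reproduce this model-consistent map; training the network against the Tikhonov objective itself is, roughly, what ties its output to the underlying physics.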
15:45 - 16:15 | Coffee Break (ARTS 112) |
16:15 - 17:00 |
Carlos Esteve-Yagüe: Spectral decomposition of atomic structures in heterogeneous cryo-EM ↓ In this talk I will present a recent work in collaboration with Willem Diepeveen, Ozan Öktem and Carola-Bibiane Schönlieb. We consider the problem of recovering the three-dimensional atomic structure of a flexible macromolecule from a heterogeneous cryo-EM dataset. Our method combines prior biological knowledge about the macromolecule of interest with the cryo-EM images. The goal is to determine the deformation of the atomic structure in each image with respect to a specific conformation, which is assumed to be known. The prior biological knowledge is used to parametrize the space of possible atomic structures. The parameters corresponding to each conformation are then estimated as a linear combination of the leading eigenvectors of a graph Laplacian, constructed by means of the cryo-EM dataset, which approximates the spectral properties of the manifold of conformations of the underlying macromolecule. (Main Meeting Room) |
17:30 - 19:00 | Dinner (Sunshine/ADM) |
Wednesday, June 28 | |
---|---|
08:00 - 09:00 | Breakfast (Main Meeting Room) |
09:30 - 10:15 |
Bart Goossens: Efficient Region-of-interest CT Reconstruction Using Near-Orthogonal Shearlet-based Discrete Projection Transforms with Effective Pre- and Post-Conditioning Schemes ↓ In a previous work, we introduced a CT reconstruction algorithm that leverages the robust width property to achieve high numerical accuracy under relaxed data consistency conditions. This algorithm jointly operates on projection and image data and has shown promising results. To further enhance its computational efficiency and reconstruction quality, here we investigate specific joint projection and image data transforms that are orthogonal, namely, we consider a class of composed Radon-shearlet transforms endowed with an intertwining property and having (near) orthogonal basis functions. However, when the continuous Radon transform is replaced by discrete parallel beam/fan beam projectors, orthogonality is lost. To manage this situation, we introduce an effective CG pre- and postconditioning scheme to take advantage of near-orthogonal composed transforms. (Main Meeting Room) |
10:15 - 11:00 |
Sören Dittmer: Reinterpreting survival analysis in the universal approximator age ↓ In this talk, we will explore the intersection of survival analysis and deep learning. While survival analysis has been an essential part of statistics for a long time, it only recently gained attention from the deep learning community. This is likely in part due to the COVID-19 pandemic. We discuss how to fully harness the potential of survival analysis in deep learning. On the one hand, we discuss how survival analysis connects to classification and regression. On the other hand, we present technical tools: a new loss function, evaluation metrics, and the first universal approximating network that provably produces survival curves without numeric integration. We show that the loss function and model outperform other approaches on medical data and seamlessly integrate image data. (Main Meeting Room) |
11:00 - 11:30 | Coffee Break (ARTS 112) |
11:30 - 12:00 |
Johannes Hertrich: The Power of Patches for Training Normalizing Flows ↓ In this talk we introduce two kinds of data-driven patch priors learned from very few images. First, the Wasserstein patch prior penalizes the Wasserstein-2 distance between the patch distribution of the reconstruction and that of a possibly small reference image. Such a reference image is available, for instance, when working with material microstructures or textures. The second regularizer learns the patch distribution using a normalizing flow. Since even a small image contains a large number of patches, this enables us to train the regularizer based on very few training images. For both regularizers, we show that they indeed induce a probability distribution, so that they can be used within a Bayesian setting. We demonstrate the performance of patch priors for MAP estimation and posterior sampling within Bayesian inverse problems. For both approaches, we observe numerically that only very few clean reference images are required to achieve high-quality results and to obtain stability with respect to small perturbations of the problem. (Main Meeting Room) |
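The key quantity of the Wasserstein patch prior can be sketched as follows. This uses the sliced approximation, in which one-dimensional Wasserstein-2 distances have a closed form via sorting; the talk's actual regularizer and all names below are illustrative, not reproduced from the paper.

```python
import numpy as np

def patches(img, p=4):
    """All overlapping p x p patches of a 2D image, flattened to rows."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def sliced_w2(P, Q, n_proj=50, seed=0):
    """Sliced Wasserstein-2 distance between two patch clouds (rows = patches).
    Along each random direction, 1D W2 is computed by matching sorted samples."""
    rng = np.random.default_rng(seed)
    d, n = P.shape[1], min(len(P), len(Q))
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        a, b = np.sort(P @ theta), np.sort(Q @ theta)
        a = a[np.linspace(0, len(a) - 1, n).astype(int)]  # equalize sample sizes
        b = b[np.linspace(0, len(b) - 1, n).astype(int)]
        total += np.mean((a - b) ** 2)
    return np.sqrt(total / n_proj)

rng = np.random.default_rng(4)
ref = rng.random((32, 32))                  # placeholder "reference texture"
same = ref + 0.01 * rng.normal(size=(32, 32))
other = np.zeros((32, 32))                  # very different patch statistics
d_same = sliced_w2(patches(ref), patches(same))
d_other = sliced_w2(patches(ref), patches(other))
```

A single 32 x 32 image already yields 841 overlapping 4 x 4 patches, which is why a patch-level distance can be estimated from one small reference image.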
12:00 - 12:30 |
Yolanne Lee: Generalizing PINNs to Complex Geometries ↓ Partial differential equations (PDEs) are ubiquitous in the world around us, modelling phenomena from heat and sound to quantum systems, residing in Euclidean domains and on complex geometries. Defining such laws implicitly through neural network architectures motivates the concept of physics-informed neural networks (PINNs), which use PDEs as soft constraints. While PINNs have been proposed to solve PDEs in non-Euclidean domains, current methods fundamentally discretize the domain. To date, there is no clear method to inform PINNs about the continuous topology of the PDE domain. Implicit neural representations (INRs) have emerged as a method to learn a continuous function on its entire domain. In this work, the INR framework is extended to propose Manifold-PINNs, a modified INR which can incorporate PINN constraints to approximate the solution of PDEs on embedded manifolds within the domain. (Main Meeting Room) |
12:30 - 14:15 | Lunch (Sunshine - Administration Building) |
14:15 - 19:00 |
Free Afternoon ↓ Kelowna (Main Meeting Room) |
17:30 - 19:00 | Dinner (Sunshine/ADM) |
Thursday, June 29 | |
---|---|
08:00 - 09:00 | Breakfast (Main Meeting Room) |
09:30 - 10:15 |
Andreas Mang: Shape Classification through the Lens of Geodesic Flows of Diffeomorphisms ↓ We present work on statistical analysis on infinite-dimensional shape spaces $\mathcal{S}$. Our goal is to provide a mathematical framework for automatic classification and clustering of $k$-dimensional shapes $s \in \mathcal{S}$ in $\mathbb{R}^3$. The applications of our work are in biomedical imaging; we target the discrimination of clinically distinct patient groups through the lens of geodesic flows of diffeomorphisms.
In a Riemannian setting, we can express the similarity between two shapes $s_0, s_1 \in \mathcal{S}$ in terms of an energy-minimizing diffeomorphism $y \in \mathcal{Y}$ that maps $s_0$ to $s_1$, i.e., $y \cdot s_0 = s_1$. We will discuss different variational formulations to compute $y$, and showcase effective numerical algorithms for their solution. We will see that this is an ill-posed inverse problem, resulting in high-dimensional, ill-conditioned optimality systems that are challenging to solve efficiently. We will assess their performance in terms of computational complexity, rate of convergence, time-to-solution, and inversion accuracy. In addition, we will assess the discriminative power of machine learning techniques applied to several features derived from the computed map $y$ to classify clinical data. (Main Meeting Room) |
10:15 - 11:00 |
Jeff Fessler: Dynamic MRI Reconstruction with Locally Low-Rank Regularizers ↓ Many dynamic image reconstruction problems involve models that assume an image or image sequence satisfies low-rank or locally low-rank properties. These models often lead to optimization problems involving nuclear norms or Schatten p-norms, so that the dynamics are learned from the data. Many machine learning problems, like robust PCA, also involve such regularizers. First-order proximal optimization methods like FISTA and POGM have worst-case convergence rates that are slower than the asymptotic convergence rates of smooth optimization algorithms, such as (limited-memory) quasi-Newton methods that bring in second-order information. Furthermore, first-order methods are not easily applicable to locally low-rank models whose regularizers sum numerous nuclear norms of overlapping patches, because such regularizers are not prox-friendly. This work-in-progress explores the use of smooth approximations to nuclear norms to facilitate gradient-based optimization methods for regularizers based on global and local low-rank models. (Main Meeting Room) |
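One standard smoothing of this kind (illustrative; not necessarily the specific approximation used in this work) replaces each singular value $\sigma_i$ by $\sqrt{\sigma_i^2 + \varepsilon^2}$. The surrogate is differentiable everywhere, and its gradient is available from the SVD, so plain gradient descent applies where the nuclear-norm prox would be needed otherwise.

```python
import numpy as np

def smoothed_nucnorm_and_grad(X, eps=0.1):
    """Smoothed nuclear norm sum_i sqrt(s_i^2 + eps^2) and its gradient.

    For a spectral function sum_i h(s_i), the gradient is U diag(h'(s)) V^T;
    here h'(s) = s / sqrt(s^2 + eps^2), which vanishes smoothly at s = 0.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    val = np.sum(np.sqrt(s ** 2 + eps ** 2))
    grad = (U * (s / np.sqrt(s ** 2 + eps ** 2))) @ Vt
    return val, grad

# Toy low-rank denoising: min_X 0.5*||X - Y||_F^2 + lam * smooth_nucnorm(X),
# solved by plain gradient descent instead of singular value thresholding.
rng = np.random.default_rng(5)
L0 = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 20))   # rank-3 ground truth
Y = L0 + 0.1 * rng.normal(size=(20, 20))
lam = 0.5
X = Y.copy()
for _ in range(200):
    _, g = smoothed_nucnorm_and_grad(X)
    X -= 0.1 * (X - Y + lam * g)      # step < 2/(1 + lam/eps), so GD is stable
err = np.linalg.norm(X - L0) / np.linalg.norm(L0)
```

The same gradient extends directly to sums of smoothed nuclear norms over overlapping patches, which is exactly the locally low-rank case where proximal methods break down.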
11:00 - 11:30 | Coffee Break (ARTS 112) |
11:30 - 12:15 |
Rashmi Murthy: Combining deep learning with Electrical Impedance Tomography to classify stroke ↓ Electrical impedance tomography (EIT) is an imaging method based on probing an unknown conductive body with electrical currents. Voltages resulting from the current feeds are measured at the surface, and the conductivity distribution inside is reconstructed. This is a promising technique in medical imaging, as various organs and tissues have different conductivities. The motivation of this talk arises from classifying the two different kinds of stroke in the brain, ischemic or haemorrhagic. Typical EIT images are not optimal for stroke-EIT because they are blurred. In this talk we present a neural network approach to classifying stroke using EIT boundary measurements. We first approximate the idealized boundary condition, that is, the Dirichlet-to-Neumann (DN) map, and use this approximation to extract robust features called Virtual Hybrid Edge Detection (VHED) functions, which have a geometric interpretation and whose computation from EIT data does not involve calculating a full image of the conductivity. We report the measures of accuracy for stroke prediction using VHED functions on datasets that differ from the data used to train the neural network. (Online - UBCO) |
12:30 - 14:15 | Lunch (Sunshine - Administration Building) |
14:15 - 15:00 |
Clarice Poon: Smooth over-parametrized solvers for non-smooth structured optimisation ↓ Non-smooth optimization is a core ingredient of many imaging or machine learning pipelines. Non-smoothness encodes structural constraints on the solutions, such as sparsity, group sparsity, low rank and sharp edges. It is also the basis for the definition of robust loss functions such as the square-root lasso. Standard approaches to deal with non-smoothness leverage either proximal splitting or coordinate descent. The effectiveness of their usage typically depends on proper parameter tuning, preconditioning or some sort of support pruning.
In this work, we advocate and study a different route. By over-parameterization and marginalising on certain variables (Variable Projection), we show how many popular non-smooth structured problems can be written as smooth optimization problems. The result is that one can then take advantage of quasi-Newton solvers such as L-BFGS and this, in practice, can lead to substantial performance gains. Another interesting aspect of our proposed solver is its efficiency when handling imaging problems that arise from fine discretizations (unlike proximal methods such as ISTA, whose convergence is known to have exponential dependency on dimension). On a theoretical level, one can connect gradient descent on our over-parameterized formulation with mirror descent with a varying Hessian metric. This observation can then be used to derive dimension-free convergence bounds and explains the efficiency of our method in the fine-grid regime. (Main Meeting Room) |
15:00 - 15:45 |
Manabu Machida: Nonlinear Rytov Approximation as a Practical Inversion Scheme for Optical Tomography ↓ The Rytov approximation has been commonly used for optical tomography. It is known that the Rytov approximation often gives better reconstructed images than the Born approximation. In the conventional Rytov approximation, however, nonlinear inverse problems must be linearized. In this talk, nonlinear reconstruction with the inverse Rytov series will be discussed. (Main Meeting Room) |
15:45 - 16:15 | Coffee Break (ARTS 112) |
17:30 - 19:00 | Dinner (Sunshine/ADM) |
Friday, June 30 | |
---|---|
08:00 - 09:00 | Breakfast (Main Meeting Room) |
09:30 - 10:15 |
Andrea Aspri: Data-driven regularization by projection ↓ In this talk I will speak about some recent results on the study of linear inverse problems under the premise that the forward operator is not at hand but given indirectly through some input-output training pairs. We show that regularisation by projection and variational regularisation can be formulated by using the training data only and without making use of the forward operator. We will provide some information regarding convergence and stability of the regularized solutions. Moreover, we show, analytically and numerically, that regularisation by projection is indeed capable of learning linear operators.
This is a joint work with Leon Frischauf (University of Vienna), Yury Korolev (University of Cambridge) and Otmar Scherzer (University of Vienna and RICAM). (Online - UBCO) |
10:15 - 10:30 | Closing remarks (Main Meeting Room) |
10:30 - 11:00 | Checkout by 11AM (Front Desk Nechako Residence) |
11:00 - 11:30 | Coffee Break (Main Meeting Room) |
11:30 - 13:00 | Lunch (Sunshine/ADM) |