Schedule for: 17w5072 - Computational Uncertainty Quantification
Beginning on Sunday, October 8 and ending Friday October 13, 2017
All times in Banff, Alberta time, MDT (UTC-6).
Sunday, October 8 | |
---|---|
16:00 - 17:30 | Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre) |
17:30 - 19:30 |
Dinner ↓ A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
20:00 - 22:00 | Informal gathering (Corbett Hall Lounge (CH 2110)) |
Monday, October 9 | |
---|---|
07:00 - 08:45 |
Breakfast ↓ Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
08:45 - 09:00 | Introduction and Welcome by BIRS Station Manager (TCPL 201) |
09:00 - 09:20 |
Habib Najm: Limited data challenges in uncertainty quantification ↓ In many problems of practical relevance, uncertainty quantification (UQ) studies are strongly challenged by the lack of data. This includes both forward and inverse UQ problems, and the context of both experimental and computational data. In this talk, I will go over some of these challenges in specific UQ problem scenarios, and will discuss recent and ongoing algorithmic developments for dealing with associated difficulties. (TCPL 201) |
09:20 - 09:40 |
Raúl Tempone: Multilevel and Multi-index Monte Carlo methods for the McKean-Vlasov equation ↓ Our goal here is to approximate functionals of a system of a large number of particles, described by a coupled system of Itô stochastic differential equations (SDEs). To this end, our Monte Carlo simulations use systems with finite numbers of particles and the Euler-Maruyama time-stepping scheme. In this case, there are two discretization parameters: the number of time steps and the number of particles. Based on these two discretization parameters, we consider different variants of the Monte Carlo and Multilevel Monte Carlo (MLMC) methods and show that the optimal work complexity of MLMC to estimate a given smooth functional in a standard setting with an error tolerance of $\mathrm{tol}$ is $O(\mathrm{tol}^{-3})$. We also propose a partitioning estimator that applies our novel Multi-index Monte Carlo method and show an improved work complexity in the same typical setting of $O(\mathrm{tol}^{-2}\log(\mathrm{tol})^2)$. Our numerical results with a Kuramoto system of oscillators agree completely with the outlined theory. (TCPL 201) |
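A minimal numpy sketch of the MLMC telescoping estimator underlying this kind of result, shown here for a single scalar SDE with coupled Euler-Maruyama paths (the SDE, payoff `g`, level count, and per-level sample sizes are illustrative assumptions; the particle-system coupling and the partitioning estimator of the talk are not modeled):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, X0, T = 0.05, 0.2, 1.0, 1.0        # illustrative SDE dX = a X dt + b X dW
g = lambda x: np.maximum(x - 1.0, 0.0)   # illustrative functional of X_T

def level_diff(l, n, M=2):
    """Samples of g(X_T^l) - g(X_T^{l-1}) from coupled Euler-Maruyama paths."""
    nf, dt_f = M**l, T / M**l            # fine-level steps and step size
    Xf = np.full(n, X0)
    if l == 0:
        for _ in range(nf):
            dW = rng.normal(0.0, np.sqrt(dt_f), n)
            Xf += a * Xf * dt_f + b * Xf * dW
        return g(Xf)
    Xc, dt_c = np.full(n, X0), M * dt_f
    for _ in range(nf // M):             # one coarse step per M fine steps
        dWc = np.zeros(n)
        for _ in range(M):
            dW = rng.normal(0.0, np.sqrt(dt_f), n)
            Xf += a * Xf * dt_f + b * Xf * dW
            dWc += dW                    # coarse path reuses the fine noise
        Xc += a * Xc * dt_c + b * Xc * dWc
    return g(Xf) - g(Xc)

L = 5                                    # telescoping sum over levels
estimate = sum(level_diff(l, 4000).mean() for l in range(L + 1))
```

A production estimator would choose the per-level sample sizes from estimated level variances and costs, which is where the $O(\mathrm{tol}^{-3})$ complexity analysis enters.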
09:40 - 10:00 |
Omar Knio: Data enabled approaches to sensitivity analysis in general circulation models ↓ This talk discusses the exploitation of large databases of model realizations for assessing model sensitivities to uncertain inputs and for calibrating physical parameters. Attention is focused on databases of individual realizations of ocean general circulation models, built through efficient sampling approaches. The realizations are exploited to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. We illustrate the implementation of these techniques through extreme-scale applications, including inference of physical parametrizations and quantitative assessment and visualization of forecast uncertainties. (TCPL 201) |
10:00 - 10:30 | Coffee Break (TCPL Foyer) |
10:30 - 10:50 |
Alireza Doostan: Uncertainty quantification using low-fidelity data ↓ The use of model reduction has become widespread as a means to reduce computational cost for uncertainty quantification of PDE systems. In this work we present a model reduction technique that exploits the low-rank structure of the solution of interest, when it exists, for fast propagation of high-dimensional uncertainties. To construct this low-rank approximation, the proposed method utilizes models with lower fidelities (hence cheaper to simulate) than the intended high-fidelity model. After obtaining realizations of the lower-fidelity models, a reduced basis and an interpolation rule are identified and applied to a small set of high-fidelity realizations to obtain this low-rank, bi-fidelity approximation. In addition to the construction of this bi-fidelity approximation, we present convergence analysis and numerical results.
This is a joint work with Hillary Fairbanks (CU Boulder), Jerrad Hampton (CU Boulder), and Akil Narayan (U of Utah). (TCPL 201) |
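A toy numpy sketch of the bi-fidelity construction described above, with hypothetical stand-in solvers `u_lo` and `u_hi` (greedy column selection on low-fidelity snapshots, then reuse of the low-fidelity interpolation rule on a few high-fidelity runs):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)                       # "spatial" grid
thetas = rng.uniform(0.5, 2.0, size=50)          # candidate parameter samples

u_lo = lambda t: np.sin(t * np.pi * x)                                     # cheap model
u_hi = lambda t: np.sin(t * np.pi * x) + 0.05 * np.sin(5 * t * np.pi * x)  # expensive model

A = np.column_stack([u_lo(t) for t in thetas])   # low-fidelity snapshot matrix

# Greedy (pivoted Gram-Schmidt) selection of r informative parameter samples
r, cols, R = 5, [], A.copy()
for _ in range(r):
    j = int(np.argmax(np.linalg.norm(R, axis=0)))
    cols.append(j)
    q = R[:, j] / np.linalg.norm(R[:, j])
    R -= np.outer(q, q @ R)                      # deflate the selected direction

H = np.column_stack([u_hi(thetas[j]) for j in cols])  # only r high-fidelity runs

def bifi(theta_new):
    # interpolation coefficients computed on the cheap model...
    c, *_ = np.linalg.lstsq(A[:, cols], u_lo(theta_new), rcond=None)
    return H @ c                                 # ...applied to high-fidelity data

rel_err = np.linalg.norm(bifi(1.3) - u_hi(1.3)) / np.linalg.norm(u_hi(1.3))
```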
10:50 - 11:10 |
Anthony Nouy: Principal component analysis for the approximation of high-dimensional functions in tree-based tensor formats ↓ We present an algorithm for the approximation of high-dimensional functions using tree-based low-rank approximation formats (tree tensor networks). A multivariate function is here considered as an element of a Hilbert tensor space of functions defined on a product set equipped with a probability measure. The algorithm only requires evaluations of functions on a structured set of points which is constructed adaptively. The algorithm is a variant of higher-order singular value decomposition which constructs a hierarchy of subspaces associated with the different nodes of a dimension partition tree and a corresponding hierarchy of interpolation operators. Optimal subspaces are estimated using empirical principal component analysis of interpolations of partial random evaluations of the function. The algorithm is able to provide an approximation in any tree-based format with either a prescribed rank or a prescribed relative error, with a number of evaluations of the order of the storage complexity of the approximation format.
Reference:
A. Nouy. Higher-order principal component analysis for the approximation of tensors in tree-based low rank formats. arXiv preprint arXiv:1705.00880, 2017. (TCPL 201) |
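In the simplest (two-variable, i.e. matrix) case, the empirical-PCA step reduces to an SVD of partial random evaluations; a toy sketch with a hypothetical test function:

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = np.linspace(0, 1, 100)                                  # grid for the first variable
f = lambda a, b: np.exp(-b) * np.sin(np.pi * a) + b * a**2   # rank-2 test function

x2_samples = rng.uniform(0, 1, 20)                   # random partial evaluations
F = np.column_stack([f(x1, s) for s in x2_samples])  # fibers f(., x2_j)
U, S, _ = np.linalg.svd(F, full_matrices=False)

tol = 1e-10                                          # prescribed relative error
rank = int(np.searchsorted(np.cumsum(S**2) / np.sum(S**2), 1 - tol**2)) + 1
basis = U[:, :rank]   # empirical principal subspace for the x1-direction
```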
11:10 - 11:30 |
Rebecca Morrison: Beyond normality: Learning sparse probabilistic graphical models in the non-Gaussian setting ↓ In this talk, I will present an algorithm to identify sparse dependence structure in continuous and non-Gaussian probability distributions, given a corresponding set of data. The conditional independence structure of an arbitrary distribution can be represented as an undirected graph (or Markov random field), but most algorithms for learning this structure are restricted to the discrete or Gaussian cases. Our new approach allows for more realistic and accurate descriptions of the distribution in question, and in turn better estimates of its sparse Markov structure. The algorithm relies on exploiting the connection between the sparsity of the graph and the sparsity of transport maps, which deterministically couple one probability measure to another. This is joint work with Ricardo Baptista and Youssef Marzouk. (TCPL 201) |
11:30 - 13:00 |
Lunch ↓ Lunch is served daily between 11:30am and 1:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
11:30 - 11:50 |
Olivier Zahm: Dimension reduction of the input parameter space of vector-valued functions ↓ Approximation of multivariate functions is a difficult task when the number of input parameters is large. Identifying the directions in which the function does not vary significantly is a key step for complexity reduction. Among other dimension reduction techniques, the Active Subspace method uses gradients of a scalar-valued function to reduce the parameter space. In this talk, we extend this methodology to vector-valued functions, e.g. functions with multiple scalar outputs or functions taking values in function spaces. Numerical examples reveal the importance of the choice of the metric used to measure errors. (TCPL 201) |
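A sketch of the gradient-based construction in the vector-valued case, assuming the reduction matrix $H = \mathbb{E}[J(X)^\top J(X)]$ (identity output metric) estimated by Monte Carlo with finite-difference Jacobians; the test map is a hypothetical example with hidden two-dimensional structure:

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, n = 10, 3, 200                         # input dim, output dim, MC samples
W = rng.normal(size=(m, 2))
P = rng.normal(size=(2, d))                  # hidden 2-dimensional structure
f = lambda x: np.tanh(W @ (P @ x))           # vector-valued test map R^d -> R^m

def jacobian_fd(x, h=1e-6):                  # forward-difference Jacobian (m x d)
    fx, J = f(x), np.empty((m, d))
    for i in range(d):
        e = np.zeros(d); e[i] = h
        J[:, i] = (f(x + e) - fx) / h
    return J

H = np.zeros((d, d))                         # H = E[ J(X)^T J(X) ], X ~ N(0, I)
for _ in range(n):
    J = jacobian_fd(rng.normal(size=d))
    H += J.T @ J / n
eigval, eigvec = np.linalg.eigh(H)           # ascending eigenvalues
active = eigvec[:, -2:]                      # span of the dominant directions
```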
11:50 - 12:10 |
Jianbing Chen: The probability density evolution method for uncertainty quantification and global reliability of complex civil structures ↓ The distinguishing properties of stochastic systems in civil structures subjected to dynamic disastrous actions include: (1) randomness involved both in structural properties, which are essentially random fields, and in external dynamic excitations such as strong earthquakes and wind, which are essentially stochastic processes; (2) strong nonlinearity of the restoring force, including strength degradation and stiffness degradation, which cannot be described by polynomials and should be captured by elastoplastic damage mechanics; and (3) large numbers of degrees of freedom, on the order of millions or more. The coupling of randomness and nonlinearity in such large systems makes uncertainty quantification and global reliability assessment of real-world civil structures very difficult, and hinders design that effectively trades off safety against economic efficiency.
In this presentation, the probability density evolution method (PDEM) will be outlined. In this method, by combining the principle of preservation of probability with the underlying physical mechanism, a state-variable-decoupled generalized density evolution equation (GDEE) can be derived. This equation reveals that the change of probabilistic information of the response is determined by the change of the underlying physical state. The technically most appealing property of this partial differential equation is that its dimension depends only on the number of quantities of interest, rather than on the dimension of the embedded system. Consequently, combining the embedded deterministic analyses, from which the mechanism of propagation of uncertainty is captured, with the solution of the GDEE yields the instantaneous probability density function of the quantity of interest. Applications to the seismic response and global reliability of real-world civil structures will be exemplified. Challenging problems still to be resolved, as well as the most recent advances, will be discussed. (TCPL 201) |
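For reference, in the single-quantity-of-interest case the GDEE takes the following well-known one-dimensional form (a standard statement of the PDEM; notation assumed here, with $\Theta$ the random parameters and $Z(t)$ the response quantity):

$$\frac{\partial p_{Z\Theta}(z,\theta,t)}{\partial t} + \dot{Z}(\theta,t)\,\frac{\partial p_{Z\Theta}(z,\theta,t)}{\partial z} = 0, \qquad p_Z(z,t)=\int p_{Z\Theta}(z,\theta,t)\,\mathrm{d}\theta,$$

where $\dot{Z}(\theta,t)$ is supplied by the embedded deterministic analyses.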
14:00 - 14:20 |
Group Photo ↓ Meet in foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo! (TCPL Foyer) |
15:00 - 15:30 | Coffee Break (TCPL Foyer) |
15:30 - 17:30 | Discussion (small groups) (TCPL 201) |
17:30 - 19:30 |
Dinner ↓ A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
Tuesday, October 10 | |
---|---|
07:00 - 09:00 | Breakfast (Vistas Dining Room) |
09:00 - 09:20 |
Claude Le Bris: Coarse approximation of highly oscillatory, possibly random elliptic problems ↓ We approximate an elliptic problem with oscillatory coefficients using a problem of the same type, but with constant coefficients. We deliberately take an engineering perspective, where the information on the oscillatory coefficients in the equation can be incomplete. A theoretical foundation of the approach in the limit of infinitely small oscillations of the coefficients is provided, using the classical theory of homogenization. We present a comprehensive study of the implementation aspects of our method, and a set of numerical tests and comparisons that show the potential practical interest of the approach.
This is joint work with Frédéric Legoll (Ecole des Ponts and Inria) and other collaborators, in interaction with Albert Cohen (Université Pierre & Marie Curie, Paris). Reference: https://arxiv.org/abs/1612.05807 (TCPL 201) |
09:20 - 09:40 |
Mohammad Motamed: Hybrid fuzzy-stochastic predictive modeling and computation ↓ Predictive computational science is an emerging discipline concerned with assessing the predictability of mathematical and computational tools, particularly in the presence of inevitable uncertainty and limited information. In this talk, I will present a new comprehensive predictive methodology embedded in a new hybrid fuzzy-stochastic framework to predict physical events described by partial differential equations (PDEs) and subject to both random (aleatoric) and non-random (epistemic) uncertainty. In the new framework the uncertain parameters will be characterized by random fields with fuzzy moments. This will result in a new class of PDEs with hybrid fuzzy-stochastic parameters, coined fuzzy-stochastic PDEs, for which forward and inverse problems need to be solved. I will demonstrate the importance and feasibility of the new methodology by applying it to a complex problem: prediction of the response of materials with hierarchical microstructure to external forces. This model problem will serve as an illustrative example, one that cannot be tackled by today’s UQ methodologies. (TCPL 201) |
09:40 - 10:00 |
Régis Cottereau: Fully scalable implementation of a volume coupling scheme for the modeling of random polycrystalline materials ↓ This contribution presents a new implementation of a multi-scale, multi-model stochastic-deterministic coupling algorithm, with a proposed parallelization scheme for the construction of the coupling terms between the models. This allows one to study such problems with a fully scalable algorithm on large computer clusters, even when the models and/or the coupling have a high number of degrees of freedom. As an application example, we will consider a system composed of a homogeneous, macroscopic elasto-plastic model and a stochastic heterogeneous polycrystalline material model, with a volume coupling based on the Arlequin framework. (TCPL 201) |
10:00 - 10:30 | Coffee Break (TCPL Foyer) |
10:30 - 10:50 |
Gabriela Malenova: A sparse stochastic collocation technique for high-frequency wave propagation with uncertainty ↓ We consider high-frequency waves, i.e. solutions to the scalar wave equation with highly oscillatory initial data. The speed of propagation as well as the initial data are considered to be uncertain, described by a finite number of independent random variables with known probability distributions. To compute quantities of interest (QoI) of the solution and their statistics, we combine two methods: the Gaussian beam method to treat the high frequencies and sparse stochastic collocation to deal with the (possibly high-dimensional) uncertainty. The numerical steepest descent method is finally used to evaluate the QoI efficiently. (TCPL 201) |
10:50 - 11:10 |
Olof Runborg: Stochastic regularity of a quadratic observable of high frequency waves ↓ We consider uncertainty quantification for high frequency waves. A crucial assumption for stochastic collocation methods to converge rapidly is the stochastic regularity of the QoI. In the high frequency regime, the derivatives in the stochastic variable should preferably be bounded independently of the wavelength. We show that, despite the highly oscillatory character of the waves, QoIs defined as local averages of the squared modulus of the wave solution, approximated by Gaussian beams, indeed have this property. (TCPL 201) |
11:10 - 11:30 |
Shi Jin: Uncertainty quantification for multiscale kinetic equations with uncertain coefficients ↓ In this talk we will study the generalized polynomial chaos-stochastic Galerkin (gPC-SG) approach to kinetic equations with uncertain coefficients/inputs, and multiple time or space scales, and show that they can be made asymptotic-preserving, in the sense that the gPC-SG scheme preserves various asymptotic limits in the discrete space. This allows the implementation of the gPC methods for these problems without numerically resolving (spatially, temporally or by gPC modes) the small scales. Rigorous analysis, based on hypocoercivity of the collision operator, will be provided for both linear transport and nonlinear Vlasov-Poisson-Fokker-Planck system to study the regularity and long-time behavior (sensitivity analysis) of the solution in the random space, and to prove that these schemes are stochastically asymptotic preserving. (TCPL 201) |
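For a flavor of the gPC-SG construction (though not of the kinetic or asymptotic-preserving aspects of the talk), a minimal sketch for the toy random ODE $u' = -\kappa(\xi)u$ with $\kappa(\xi) = 1 + 0.5\,\xi$, $\xi \sim U(-1,1)$, and Legendre chaos:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

N = 5                                   # gPC order
nodes, weights = leggauss(30)           # Gauss-Legendre quadrature on [-1, 1]
weights = weights / 2                   # account for the uniform density 1/2

def psi(i, x):                          # Legendre polys, orthonormal under U(-1,1)
    c = np.zeros(i + 1); c[i] = 1.0
    return legval(x, c) * np.sqrt(2 * i + 1)

kappa = 1.0 + 0.5 * nodes               # random decay rate at quadrature nodes
A = np.array([[np.sum(weights * kappa * psi(i, nodes) * psi(j, nodes))
               for j in range(N + 1)] for i in range(N + 1)])

u = np.zeros(N + 1); u[0] = 1.0         # deterministic initial condition u(0) = 1
dt, T = 1e-3, 1.0
for _ in range(int(T / dt)):            # explicit Euler on the Galerkin system
    u = u - dt * (A @ u)
mean, var = u[0], np.sum(u[1:]**2)      # statistics of u(T) from gPC coefficients
```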
11:30 - 13:30 | Lunch (Vistas Dining Room) |
11:30 - 11:50 |
Liu Liu: Hypocoercivity and exponential decay to global equilibrium for collisional kinetic models with random inputs and multiple scales--Analysis and numerics ↓ Based on previous work by Mouhot and Neumann (2006) and Briant (2015), the author first reviews the hypocoercivity assumptions for general deterministic kinetic models, in particular the Boltzmann and Landau equations. The multiple scalings considered include the compressible Euler and incompressible Navier-Stokes regimes. Uncertainties, arising for example from measurement errors, can enter the system through the initial data and collision kernels. It is crucial to study how randomness affects the solution as time goes to infinity. The regularity of the solution in the random space, as well as the exponential decay of the solution to the global equilibrium, are established in standard Sobolev norms, uniformly with respect to the scaling parameters.
In the second part, the author discusses the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method for solving kinetic equations with random inputs. The uniform spectral accuracy of the gPC-SG method is established, and numerical examples validate the efficiency and accuracy of the proposed method.
This is a joint work with Shi Jin. (TCPL 201) |
11:50 - 12:10 |
Håkon Hoel: Numerical methods for stochastic conservation laws ↓ Stochastic conservation laws (SCL) with quasilinear multiplicative rough path dependence arise in modeling of mean field games. An impressive collection of theoretical results has been developed for SCL in recent years by Gess, Lions, Perthame, and Souganidis. In this talk we present numerical methods for pathwise solutions of scalar SCL with, for instance, Gaussian randomness in rough fluxes. We provide convergence rates for the numerical methods and show how rough path oscillations may lead to cancellations in the flow map solution operator, which again leads to more efficient numerical methods. (TCPL 201) |
13:30 - 15:00 | Discussion (small groups) (TCPL 201) |
15:00 - 15:30 | Coffee Break (TCPL Foyer) |
15:30 - 17:30 | Discussion (small groups) (TCPL 201) |
17:30 - 19:30 | Dinner (Vistas Dining Room) |
Wednesday, October 11 | |
---|---|
07:00 - 09:00 | Breakfast (Vistas Dining Room) |
09:00 - 09:20 |
Fabio Nobile: Convergence analysis of Padé approximations for Helmholtz problems with parametric/stochastic wavenumber ↓ The present work concerns the approximation of the solution map S associated to the parametric Helmholtz boundary value problem, i.e., the map which associates to each (real) wavenumber belonging to a given interval of interest the corresponding solution of the Helmholtz equation. We introduce a least squares rational Padé-type approximation technique applicable to any meromorphic Hilbert space-valued univariate map, and we prove the uniform convergence of the Padé approximation error on any compact subset of the interval of interest that excludes any pole. This general result is then applied to the Helmholtz solution map S, which is proven to be meromorphic in ℂ, with a pole of order one in every (single or multiple) eigenvalue of the Laplace operator with the considered boundary conditions. Numerical tests are provided that confirm the theoretical upper bound on the Padé approximation error for the Helmholtz solution map. The Padé-type approximation can then be used to compute the probability distribution of quantities of interest associated to the Helmholtz problem with random wavenumber. (TCPL 201) |
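A hedged sketch of a generic linearized least-squares rational fit (minimize $\sum_i |Q(k_i)f(k_i) - P(k_i)|^2$ with $q_0 = 1$), not the authors' exact least-squares Padé construction, on a toy meromorphic map with poles just outside the sampled interval:

```python
import numpy as np

k = np.linspace(1.0, 4.0, 80)                 # sampled (real) wavenumbers
f = 1.0 / ((k**2 - 0.8) * (k**2 - 17.0))      # toy map; poles outside [1, 4]

m, n = 4, 4                                   # degrees of P and Q
Vp = np.vander(k, m + 1, increasing=True)     # columns 1, k, ..., k^m
Vq = np.vander(k, n + 1, increasing=True)

# Unknowns p_0..p_m, q_1..q_n with q_0 = 1; residual Q(k) f(k) - P(k):
Alin = np.hstack([-Vp, Vq[:, 1:] * f[:, None]])
coef, *_ = np.linalg.lstsq(Alin, -f, rcond=None)  # the q_0 f term moves to the rhs
p, q = coef[:m + 1], np.concatenate([[1.0], coef[m + 1:]])

def rational(x):
    x = np.atleast_1d(x)
    num = np.vander(x, m + 1, increasing=True) @ p
    den = np.vander(x, n + 1, increasing=True) @ q
    return num / den
```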
09:20 - 09:40 |
Olivier Le Maître: A domain decomposition method for stochastic elliptic differential equations ↓ In this talk I will discuss the use of a domain decomposition method to reduce the computational complexity of classical problems arising in uncertainty quantification and stochastic partial differential equations. The first problem concerns the determination of the Karhunen-Loeve decomposition of a stochastic process given its covariance function. We propose to solve the decomposition problem independently over a set of subdomains, each with low complexity cost, and subsequently assemble a reduced problem to determine the global solution. We propose error estimates to control the resulting approximation error. Second, these ideas are extended to construct an efficient sampling approach for elliptic problems with stochastic coefficients expanded in KL form. Here, we rely on the resolution of low-complexity local stochastic elliptic problems to exhibit contributions to the condensed stochastic problem for the unknown boundary values at the internal subdomain boundaries. By relying intensively on local resolutions, which can be performed independently, the proposed approaches are naturally suited to parallel implementation, and we will provide scalability results. (TCPL 201) |
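As a reference point for the localized construction, a sketch of the global (single-domain) discretized KL decomposition via the eigendecomposition/Nyström route, with an assumed exponential covariance kernel:

```python
import numpy as np

n = 400
x = np.linspace(0, 1, n)
h = x[1] - x[0]
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # assumed exponential kernel

# Discretized Fredholm eigenproblem: integral of C(x,y) phi(y) = lambda phi(x)
eigval, eigvec = np.linalg.eigh(cov * h)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order] / np.sqrt(h)  # L2-normalized modes

r = int(np.searchsorted(np.cumsum(eigval) / np.sum(eigval), 0.99)) + 1
xi = np.random.default_rng(4).normal(size=r)
sample = eigvec[:, :r] @ (np.sqrt(eigval[:r]) * xi)    # truncated KL realization
```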
09:40 - 10:00 |
Lorenzo Tamellini: Uncertainty quantification of geochemical and mechanical compaction in layered sedimentary basins ↓ In this work we propose an Uncertainty Quantification methodology for the evolution of sedimentary basins undergoing mechanical and geochemical compaction processes, which we model as a coupled, time-dependent, non-linear, monodimensional (depth-only) system of PDEs with uncertain parameters.
Specifically, we consider multi-layered basins, in which each layer is characterized by a different material. The multi-layered structure gives rise to discontinuities in the dependence of the state variables on the uncertain parameters. Because of these discontinuities, an appropriate treatment is needed for surrogate modeling techniques such as sparse grids to be effective.
To this end, we propose a two-step methodology which relies on a change of coordinate system to align the discontinuities of the target function within the random parameter space. Once this alignment has been computed, a standard sparse grid approximation of the state variables can be performed. The effectiveness of this procedure is due to the fact that the physical locations of the interfaces among layers depend smoothly on the random parameters and are therefore amenable to sparse grid polynomial approximation.
We showcase the capabilities of our numerical methodologies through some synthetic test cases.
References:
Ivo Colombo, Fabio Nobile, Giovanni Porta, Anna Scotti, Lorenzo Tamellini, Uncertainty Quantification of geochemical and mechanical compaction in layered sedimentary basins, Computer Methods in Applied Mechanics and Engineering, https://doi.org/10.1016/j.cma.2017.08.049, 2017. (TCPL 201) |
10:00 - 10:30 | Coffee Break (TCPL Foyer) |
10:30 - 10:50 |
Michael Eldred: Multilevel-multifidelity methods for uncertainty quantification and design under uncertainty with deployment to computational fluid dynamics ↓ In the simulation of complex physics, multiple model forms of varying fidelity and resolution are commonly available. In computational fluid dynamics, for example, common model fidelities include potential flow, inviscid Euler, Reynolds-averaged Navier-Stokes, and large-eddy simulation, each potentially supporting a variety of spatio-temporal resolution/discretization settings. While we seek results that are consistent with the highest fidelity, the computational cost of directly applying UQ in high random dimensions quickly becomes prohibitive. In this presentation, we focus on the development and deployment of multilevel-multifidelity algorithms that fuse information from multiple model fidelities and resolutions in order to reduce the overall computational burden.
For forward uncertainty quantification, we are developing multilevel-control variate approaches for variance reduction in Monte Carlo methods as well as multilevel emulator approaches that employ compressed sensing and tensor trains to exploit sparse and low rank structure. The latter emulator-based approaches also enable multilevel Bayesian inference by accelerating the MCMC process using MAP pre-solves and Hessian-based proposal covariance. Finally, similar concepts are being explored for multilevel-multifidelity design optimization, using multigrid optimization and recursive trust-region model management. These techniques are being demonstrated on both model problems and engineered systems such as aircraft nozzles and scramjets. (TCPL 201) |
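One of the simplest multifidelity building blocks mentioned above is the two-model control-variate Monte Carlo estimator; a sketch with hypothetical stand-ins `f_hi` and `f_lo`:

```python
import numpy as np

rng = np.random.default_rng(5)
f_hi = lambda x: np.sin(x) + 0.1 * x**2   # hypothetical expensive model
f_lo = lambda x: np.sin(x)                # correlated cheap surrogate

N_hi, N_lo = 100, 100_000
x_hi = rng.normal(size=N_hi)              # few paired high-fidelity runs
y_hi, y_lo_paired = f_hi(x_hi), f_lo(x_hi)

C = np.cov(y_hi, y_lo_paired)
alpha = C[0, 1] / C[1, 1]                 # variance-optimal control weight

mu_lo = f_lo(rng.normal(size=N_lo)).mean()           # cheap, accurate LF mean
estimate = y_hi.mean() + alpha * (mu_lo - y_lo_paired.mean())
```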
10:50 - 11:10 |
Abdul-Lateef Haji-Ali: MLMC for value-at-risk ↓ This talk looks at Monte Carlo methods to estimate the Value-at-Risk (VaR) of a portfolio, which is a measure of the value and probability of the expected total loss of the portfolio in some short time horizon. It turns out that estimating VaR involves approximating a nested expectation where the outer expectation is taken with respect to stock values at the risk horizon and the inner expectation is taken with respect to the option index and stock values at some final time.
Following (Giles, 2008), our approach is to use MLMC to approximate the outer expectation, where deeper levels use more samples in the Monte Carlo estimate of the inner expectation. We look at various control variates to reduce the variance of such an estimate. We also explore using an adaptive strategy from (Broadie et al., 2011) to determine the number of samples used in estimating the inner expectation. Finally, we discuss using unbiased MLMC (Rhee & Glynn, 2015) when simulating stocks requires time discretization. Our results show that MLMC can approximate VaR to an error tolerance of $\varepsilon$ with an optimal complexity of approximately $\mathcal O(\varepsilon^{-2})$ which, for a sufficiently large number of options, is independent of the number of options.
This is joint work with Mike Giles. (TCPL 201) |
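A toy sketch of the nested expectation at the heart of the problem, using a plain double-loop Monte Carlo estimator (dynamics, loss, and threshold are illustrative; the talk's MLMC variant refines the inner sample count across levels):

```python
import numpy as np

rng = np.random.default_rng(6)
N_outer, N_inner, K = 2000, 200, 0.15

S = 1.0 + 0.1 * rng.normal(size=N_outer)       # stock value at the risk horizon

def inner_loss(s):
    # inner Monte Carlo estimate of the conditional expected loss E[L | S = s]
    ST = s * np.exp(-0.02 + 0.2 * rng.normal(size=N_inner))
    return np.maximum(1.0 - ST, 0.0).mean()    # put-like loss at the final time

losses = np.array([inner_loss(s) for s in S])
prob_large_loss = (losses > K).mean()          # P(conditional expected loss > K)
# An MLMC version uses levels with increasing N_inner, coupled across levels,
# instead of this fixed-inner-sample double loop.
```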
11:10 - 11:30 |
Sören Wolfers: Multilevel weighted least squares approximation ↓ Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method. (TCPL 201) |
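A single-level sketch of weighted least squares polynomial approximation in 1D, sampling from the Chebyshev (arcsine) density and reweighting toward the uniform measure (standard practice for Legendre approximation; the multilevel method combines such approximations computed at different sample accuracies):

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(7)
f = lambda x: np.exp(x) * np.cos(3 * x)          # hypothetical target function

deg, n = 10, 60                                  # polynomial degree, sample count
x = np.cos(np.pi * rng.uniform(size=n))          # samples from the arcsine density
V = legvander(x, deg) * np.sqrt(2 * np.arange(deg + 1) + 1)   # orthonormal basis

w = 0.5 * np.pi * np.sqrt(1 - x**2)              # uniform density / arcsine density
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)
```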
11:30 - 13:30 | Lunch (Vistas Dining Room) |
11:30 - 11:50 |
Kody Law: Multilevel Monte Carlo methods for Bayesian inference ↓ For half a century computational scientists have been numerically simulating complex systems. Uncertainty is recently becoming a requisite consideration in complex applications which have been classically treated deterministically. This has led to an increasing interest in recent years in uncertainty quantification (UQ). Another recent trend is the explosion of available data. Bayesian inference provides a principled and well-defined approach to the integration of data into an a priori known distribution. The posterior distribution, however, is known only point-wise (possibly with an intractable likelihood) and up to a normalizing constant. Monte Carlo methods have been designed to sample such distributions, such as Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) samplers. Recently, the multilevel Monte Carlo (MLMC) framework has been extended to some of these cases, so that numerical approximation error can be optimally balanced with statistical sampling error, and ultimately the Bayesian inverse problem can be solved for the same asymptotic cost as solving the deterministic forward problem. This talk will concern the recent development of various MLMC algorithms for Bayesian inference problems. (TCPL 201) |
11:50 - 12:10 |
Paul Constantine: Parameter space dimension reduction for forward and inverse uncertainty quantification ↓ Scientists and engineers use computer simulations to study relationships between a physical model's input parameters and its output predictions. However, thorough parameter studies---e.g., constructing response surfaces, optimizing, or averaging---are challenging, if not impossible, when the simulation is expensive and the model has several inputs. To enable parameter studies in these cases, the engineer may attempt to reduce the dimension of the model's input parameter space. I will (i) describe computational methods for discovering low-dimensional structures in the parameter-to-quantity-of-interest map, (ii) propose strategies for exploiting the low-dimensional structures to enable otherwise infeasible parameter studies, and (iii) review results from several science and engineering applications. For more information, visit activesubspaces.org (TCPL 201) |
13:30 - 17:30 | Free Afternoon (Banff National Park) |
17:30 - 19:30 | Dinner (Vistas Dining Room) |
Thursday, October 12 | |
---|---|
07:00 - 09:00 | Breakfast (Vistas Dining Room) |
09:00 - 09:20 |
Omar Ghattas: Efficient and scalable methods for large-scale stochastic PDE-constrained optimal control ↓ We consider optimal control problems governed by PDEs with uncertain parameter fields, and in particular those with objective functions given by the mean and variance of the control objective. To make the problem tractable, we invoke a quadratic Taylor approximation of the objective with respect to the uncertain parameter field. This enables deriving explicit expressions for the mean and variance of the control objective in terms of its gradient and Hessian with respect to the uncertain parameter. The stochastic optimal control problem is then formulated as a PDE-constrained optimization problem with constraints given by the forward and adjoint PDEs defining these gradients and Hessians. The expressions for the mean and variance of the control objective under the quadratic approximation involve the trace of the (preconditioned) Hessian, and are thus prohibitive to evaluate for (discretized) infinite-dimensional parameter fields. To overcome this difficulty, we employ a randomized eigensolver to extract the dominant eigenvalues of the decaying spectrum. The resulting objective functional can now be readily differentiated using adjoint methods along with eigenvalue sensitivity analysis to obtain its gradient with respect to the controls. Along with the quadratic approximation and truncated spectral decomposition, this ensures that the cost of computing the objective and its gradient with respect to the control--measured in the number of PDE solves--is independent of the (discretized) parameter and control dimensions, leading to an efficient quasi-Newton method for solving the optimal control problem. Finally, the quadratic approximation can be employed as a control variate for accurate evaluation of the objective at greatly reduced cost relative to sampling the original objective. Several applications with high-dimensional uncertain parameter spaces will be presented. This work is joint with Peng Chen and Umberto Villa (UT Austin). (TCPL 201) |
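The trace computations above rest on randomized low-rank eigendecomposition, which needs only matrix-vector products with the Hessian; a generic sketch (here H is an explicit synthetic SPD matrix with decaying spectrum, standing in for a matrix-free PDE Hessian):

```python
import numpy as np

rng = np.random.default_rng(8)
n, k, p = 500, 20, 10                      # dimension, target rank, oversampling
U0, _ = np.linalg.qr(rng.normal(size=(n, n)))
H = U0 @ np.diag(1.0 / (1 + np.arange(n))**2) @ U0.T   # synthetic SPD, fast decay

Omega = rng.normal(size=(n, k + p))        # random probing block
Y = H @ Omega                              # only matvecs with H are required
Q, _ = np.linalg.qr(Y)                     # orthonormal range approximation
T = Q.T @ (H @ Q)                          # small (k+p) x (k+p) projection
eigs = np.linalg.eigvalsh(T)[::-1][:k]     # dominant eigenvalues of H
trace_est = eigs.sum()                     # trace(H) up to the discarded tail
```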
09:20 - 09:40 |
Guillaume Bal: Long time propagation of stochasticity by dynamical PCEs ↓ This talk concerns the long-time evolution of stochastic ODEs and PDEs with random coefficients and white-noise forcing by the method of dynamical polynomial chaos expansions (PCEs). (TCPL 201) |
09:40 - 10:00 | Sonjoy Das: Accurate and efficient estimation of probability of failure in design of large-scale systems (TCPL 201) |
10:00 - 10:30 | Coffee Break (TCPL Foyer) |
10:30 - 10:50 |
Youssef Marzouk: Inference via low-dimensional couplings ↓ Integration against an intractable probability measure is among the fundamental challenges of statistical inference, particularly in the Bayesian setting. A principled approach to this problem seeks a deterministic coupling of the measure of interest with a tractable "reference" measure (e.g., a standard Gaussian). This coupling is induced by a transport map, and enables direct simulation from the desired measure simply by evaluating the transport map at samples from the reference. Yet characterizing such a map---e.g., representing, constructing, and evaluating it---grows challenging in high dimensions.
We use the conditional independence structure of the target measure to establish the existence of certain low-dimensional couplings, induced by transport maps that are sparse or decomposable. We also describe conditions, common in Bayesian inverse problems, under which transport maps have a particular low-rank structure. Our analysis not only facilitates the construction of couplings in high-dimensional settings, but also suggests new inference methodologies. For instance, in the context of nonlinear and non-Gaussian state space models, we will describe new variational algorithms for nonlinear smoothing and sequential parameter estimation. We will also outline a new class of nonlinear filters induced by local couplings, for inference in high-dimensional spatiotemporal processes with chaotic dynamics.
This is joint work with Alessio Spantini and Daniele Bigoni. (TCPL 201) |
10:50 - 11:10 |
Tim Wildey: A consistent Bayesian approach for stochastic inverse problems ↓ Uncertainty is ubiquitous in computational science and engineering. Often, parameters of interest cannot be measured directly and must be inferred from observable data. The mapping between these parameters and the measurable data is often referred to as the forward model, and the goal is to use the forward model to gain knowledge about the parameters given the observations on the data. Bayesian inference is the most common approach for incorporating stochastic data into probabilistic descriptions of the input parameters. We have recently developed an alternative Bayesian solution to the stochastic inverse problem based on measure-theoretic principles. We prove that this approach, which we call consistent Bayesian inference, produces a posterior distribution that is consistent in the sense that the push-forward probability density of the posterior through the model will match the distribution on the observable data, i.e., the posterior is consistent with the model and the data. Our approach only requires approximating the push-forward probability measure/density of the prior through the computational model, which is fundamentally a forward propagation of uncertainty. Numerical results will be presented to highlight various aspects of this consistent Bayesian approach and to compare with the standard Bayesian formulation. (TCPL 201) |
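A toy sketch of the consistent-Bayes update via accept/reject, with a hypothetical one-dimensional map `Q`, a KDE estimate of the push-forward of the prior, and an assumed observed density:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(9)
Q = lambda x: x**3 + x                      # hypothetical 1D forward map

x_prior = rng.normal(size=20000)            # standard normal prior samples
q = Q(x_prior)
pushforward = gaussian_kde(q)               # density of Q under the prior
observed = norm(loc=1.0, scale=0.3)         # assumed density on the data

ratio = observed.pdf(q) / pushforward(q)    # consistent-Bayes update factor
u = rng.uniform(size=q.size)
x_post = x_prior[u < ratio / ratio.max()]   # max-normalized accept/reject
```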
11:10 - 11:30 |
Joakim Beck: Bayesian optimal experimental design using Laplace-based importance sampling ↓ In this talk, the focus is on optimizing strategies for the efficient computation of the inner loop of the classical double-loop Monte Carlo estimator for Bayesian optimal experimental design. We propose the use of the Laplace approximation as an effective means of importance sampling, leading to a substantial reduction in computational work. This approach also efficiently mitigates the risk of numerical underflow. Optimal values for the method parameters are derived, minimizing the average computational cost subject to a desired error tolerance. We demonstrate the computational efficiency of our method, as well as that of a more recent approach that approximates the return value of the inner loop using the Laplace method. Finally, we present a set of numerical examples showing the efficiency of our method. The first example is a scalar problem that is linear in the uncertain parameter. The second is a nonlinear scalar problem. The last deals with sensor placement in electrical impedance tomography to recover the fiber orientation in laminate composites. (TCPL 201) |
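For context, a sketch of the classical double-loop Monte Carlo estimator of expected information gain whose inner loop the talk accelerates; the forward model, noise level, and design variable are hypothetical (the evidence term in the inner loop is exactly where numerical underflow can occur):

```python
import numpy as np

rng = np.random.default_rng(10)
sigma = 0.1
g = lambda theta, d: np.sin(d * theta)      # hypothetical forward model

def eig(d, N=500, M=500):
    theta = rng.normal(size=N)                           # outer prior draws
    y = g(theta, d) + sigma * rng.normal(size=N)         # synthetic data
    loglike = -(y - g(theta, d))**2 / (2 * sigma**2)     # constants cancel below
    theta_in = rng.normal(size=(N, M))                   # inner prior draws
    evidence = np.exp(-(y[:, None] - g(theta_in, d))**2
                      / (2 * sigma**2)).mean(axis=1)     # underflow-prone term
    return np.mean(loglike - np.log(evidence))

print(eig(d=2.0))
```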
11:30 - 13:30 | Lunch (Vistas Dining Room) |
11:30 - 11:50 |
Prasanth Nair: Bayesian learning of governing equations from measurement data ↓ We present a Bayesian framework for learning governing equations from noisy dynamic measurement data. The central idea is to specify a Gaussian process prior for the dynamic response together with a hierarchical prior over a dictionary of candidate terms in the governing equations. The model hyperparameters are subsequently estimated either by maximizing the marginal likelihood or through variational inference. We show that when equipped with appropriate priors, the present approach enables the estimation of a parsimonious governing equation with random coefficients. As a result, it becomes possible to construct statistical error bars for predictions made using the identified governing equations. Numerical results for some test cases involving the learning of systems of ordinary and partial differential equations from time-series and spatio-temporal datasets will be presented to illustrate the proposed methodology. (TCPL 201) |
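A deliberately non-Bayesian sketch of the dictionary idea: recovering a sparse ODE right-hand side by sequential thresholded least squares over candidate terms (the talk's method is Bayesian, with Gaussian process priors and hierarchical sparsity; this is only the simplest analogue):

```python
import numpy as np

rng = np.random.default_rng(11)
dt, t_end = 0.01, 10.0
t = np.arange(0.0, t_end, dt)

def simulate(x0):                               # forward Euler on dx/dt = x - x^3
    x = np.empty_like(t); x[0] = x0
    for k in range(len(t) - 1):
        x[k + 1] = x[k] + dt * (x[k] - x[k]**3)
    return x

trajs = [simulate(x0) for x0 in (-2.0, 0.5, 2.0)]
x = np.concatenate(trajs)
dxdt = np.concatenate([np.gradient(tr, dt) for tr in trajs])
dxdt += 0.001 * rng.normal(size=dxdt.size)      # measurement noise

Theta = np.column_stack([np.ones_like(x), x, x**2, x**3, np.sin(x)])  # dictionary
coef, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
for _ in range(10):                             # sequential thresholded least squares
    small = np.abs(coef) < 0.05
    coef[small] = 0.0
    keep = ~small
    coef[keep], *_ = np.linalg.lstsq(Theta[:, keep], dxdt, rcond=None)
# expect coef close to [0, 1, 0, -1, 0], i.e. dx/dt = x - x^3
```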
11:50 - 12:10 |
Gabriel Terejanu: Fast Bayesian filtering for high dimensional nonlinear dynamical systems ↓ A general and fast Bayesian filtering framework for high-dimensional nonlinear dynamical systems is presented. Currently, real-time data assimilation techniques are overwhelmed by data volume and velocity and by the increased complexity of computational models. The importance of this novel framework is that it is agnostic to the physical model and does not require derivatives, so it can easily be integrated with legacy computational codes. The algorithm does not require computing high-dimensional sample covariance matrices, which provides a significant computational speed-up. Furthermore, completed simulations are processed incrementally and asynchronously, which makes it possible to respond to real-time situations where one cannot wait for all computations to finish. This property also makes the algorithm inherently fault tolerant. (TCPL 201) |
13:30 - 15:00 | Discussion (small groups) (TCPL 201) |
15:00 - 15:30 | Coffee Break (TCPL Foyer) |
15:30 - 17:30 | Discussion with all participants (TCPL 201) |
17:30 - 19:30 | Dinner (Vistas Dining Room) |
Friday, October 13 | |
---|---|
07:00 - 09:00 | Breakfast (Vistas Dining Room) |
09:00 - 10:00 | Discussion (small groups) (TCPL 201) |
10:00 - 10:30 | Coffee Break (TCPL Foyer) |
10:30 - 11:30 | Discussion (small groups) and report writing (TCPL 201) |
11:30 - 12:00 |
Checkout by Noon ↓ 5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to checkout of the guest rooms by 12 noon. (Front Desk - Professional Development Centre) |
12:00 - 13:30 | Lunch from 11:30 to 13:30 (Vistas Dining Room) |