Schedule for: 22w5079 - Combining Causal Inference and Extreme Value Theory in the Study of Climate Extremes and their Causes
Beginning on Sunday, June 26 and ending on Friday, July 1, 2022
All times in UBC Okanagan, Canada time, PDT (UTC-7).
Sunday, June 26 | |
---|---|
16:00 - 23:59 | Nechako Residence Check-in time 4pm (Nechako Residence) |
20:00 - 21:00 | Informal gathering for on-site participants (Meeting point outside the Nechako Residence (or in the lobby in case of rain)) |
Monday, June 27 | |
---|---|
07:30 - 08:30 | Breakfast (Sunshine Café / Starbucks / Tim Hortons) |
08:30 - 08:45 | Welcome (Arts Building room 386) |
08:45 - 09:45 | Aurélien Ribes: Overview of Climatology (Arts Building room 386 (ZOOM)) |
09:45 - 10:00 | Break (ART 218) |
10:00 - 11:00 | Anthony Davison: Overview of Extreme Value Theory (Arts Building room 386 (ZOOM)) |
11:00 - 11:15 | Break (ART 218) |
11:15 - 12:15 | Linbo Wang: Overview of Causal Inference (Arts Building room 386 (IN PERSON)) |
12:15 - 13:15 | Lunch (Sunshine Café) |
13:15 - 14:00 | Campus tour for on-site participants (Meeting point outside the Sunshine Café) |
14:00 - 16:00 | Brainstorming Session on Site (ART 386 / ASC 301A) |
Tuesday, June 28 | |
---|---|
07:30 - 08:15 | Breakfast (Tim Hortons) |
08:15 - 08:45 |
Gabi Hegerl: Past and future changes in the probability of extreme temperature events ↓ This talk considers changes in temperature extremes over the historical period and into the future. Historical regional temperature extremes show strong variability in the past that links to past impacts, even though their frequency and intensity have clearly increased with the warming signal. Over many regions, climate-model-simulated future changes in extreme temperature events seem to most clearly show a shift towards warmer extremes without much evidence for a change in other aspects of the distribution, yet there appear to be some exceptions. The first is in high latitudes, with a substantial change in the annual maximum temperature distribution near ice-covered regions. Perhaps more interestingly, there also appear to be some regions where the distribution is expected to widen in several climate models, among these the Amazon region and Central and Eastern Europe. These regions show diverse changes across different climate models, with clear differences between some models and climatology and a diverse response into the future. We have experimented with observational constraints on changes in extremes, and results suggest that selecting more realistic models can influence simulated future changes, particularly in the tropics. Use of observational constraints, both on performance in simulating climatology and processes and on simulated trends, will be important to better predict future extremes. Furthermore, historical examples highlight the potential for strong change in extremes due to compound events or human-induced changes in the land surface. (Arts Building room 386 (ZOOM)) |
08:45 - 09:15 |
Manuela Brunner: Classification reveals varying drivers of severe and moderate hydrological droughts in Europe ↓ Streamflow droughts are generated by a variety of processes including rainfall deficits and anomalous snow availability or evapotranspiration. The importance of different driver sequences may vary with event severity; however, it is as yet unclear how. To study the variation of driver importance with event severity, we propose a formal classification scheme for streamflow droughts and apply it to a large sample of catchments in Europe. The scheme assigns events to one of eight drought types – each characterized by a set of compounding drivers – using information about seasonality, precipitation deficits, and snow availability. Our findings show that drought driver importance varies regionally, seasonally, and by event severity. First, we show that rainfall deficit droughts are the dominant drought type in western Europe while northern Europe is most often affected by cold snow season droughts. Second, we show that rainfall deficit and cold snow season droughts are important from autumn to spring, while snowmelt and wet-to-dry-season droughts are important in summer. Last, we demonstrate that moderate droughts are mainly driven by rainfall deficits while severe events are mainly driven by snowmelt deficits in colder climates and by streamflow deficits transitioning from the wet to the dry season in warmer climates. This high importance of snow-influenced and evapotranspiration-influenced droughts for severe events suggests that these potentially high-impact events might undergo the strongest changes in a warming climate because of their close relationship to temperature. The proposed classification scheme provides a template that can be expanded to include other climatic regions and human influences. (Arts Building room 386 (ZOOM)) |
09:15 - 09:30 | Break (ART 218) |
09:30 - 10:00 |
Maud Thomas: Non-asymptotic bounds for probability weighted moment estimators ↓ In hydrology and other applied fields, probability weighted moments (PWM) have been frequently used to estimate the parameters of classical extreme value distributions (see [de Haan and Ferreira, 2006]). This method-of-moments technique can be applied when second moments are finite, a reasonable assumption in hydrology. Two advantages of PWM estimators are their ease of implementation and their close connection to the well-studied class of U-statistics. Consequently, precise asymptotic properties can be deduced. In practice, sample sizes are always finite and, depending on the application at hand, the sample length can be small; e.g., a sample of only 30 years of maxima of daily precipitation is quite common in some regions of the globe. In such a context, asymptotic theory is on shaky ground and it is desirable to get non-asymptotic bounds. Deriving such bounds from off-the-shelf techniques (Chernoff method) requires exponential moment assumptions, which are unrealistic in many settings. To bypass this hurdle, we propose a new estimator for PWM, inspired by the median-of-means framework of Devroye et al. [2016]. This estimator is then shown to satisfy a sub-Gaussian inequality, with only second moment assumptions. This allows us to derive non-asymptotic bounds for the estimation of the parameters of extreme value distributions, and of extreme quantiles. This is joint work with Anna Ben-Hamou and Philippe Naveau. (Arts Building room 386 (ZOOM)) |
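A minimal sketch of the median-of-means idea applied to PWM-based GEV fitting, in the spirit of the abstract above; it is not the estimator of Thomas, Ben-Hamou and Naveau. The block splitting, the Hosking-type inversion of the first three PWMs, and all numerical choices below are illustrative assumptions.

```python
import numpy as np
from math import gamma, log
from scipy.stats import genextreme

def sample_pwm(x, r):
    """Unbiased sample PWM b_r = E[X F(X)^r], computed from order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    w = np.ones(n)
    for s in range(1, r + 1):
        w *= (j - s) / (n - s)
    return np.mean(w * x)

def mom_pwm(x, r, n_blocks=5, seed=0):
    """Median-of-means flavoured PWM: median of block-wise PWM estimates."""
    rng = np.random.default_rng(seed)
    x = rng.permutation(np.asarray(x, dtype=float))
    return np.median([sample_pwm(b, r) for b in np.array_split(x, n_blocks)])

def gev_from_pwm(b0, b1, b2):
    """Hosking-type inversion of (b0, b1, b2) to GEV parameters (mu, sigma, xi)."""
    c = (2 * b1 - b0) / (3 * b2 - b0) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2          # Hosking's approximation
    sigma = (2 * b1 - b0) * k / (gamma(1 + k) * (1 - 2 ** (-k)))
    mu = b0 + sigma * (gamma(1 + k) - 1) / k
    return mu, sigma, -k                       # xi = -k in the usual convention

# A short record of 30 annual maxima, as mentioned in the abstract.
maxima = genextreme.rvs(c=-0.2, loc=10, scale=2, size=30, random_state=1)
b = [mom_pwm(maxima, r) for r in range(3)]
print(gev_from_pwm(*b))
```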
10:00 - 10:30 |
Jenny Wadsworth: Statistical inference for multivariate extremes via a geometric approach ↓ A geometric representation for multivariate extremes, based on the shapes of sample clouds in light-tailed margins and their limit sets, has recently been shown to connect several existing extremal dependence concepts. However, these results are purely probabilistic, and the geometric approach itself has not been exploited for statistical inference. We outline a method for parametric estimation of the limit set shape, which includes a useful non/semi-parametric estimate as a pre-processing step. More fundamentally, our approach allows for extrapolation further into the tail of the distribution via simulation from the fitted model, and such models can accommodate any combination of simultaneous / non-simultaneous extremes through appropriate parametric forms for the limit set. (Arts Building room 386 (ZOOM)) |
10:30 - 11:00 |
Thordis Thorarinsdottir: Consistent estimation of extreme precipitation and flooding across multiple durations ↓ Infrastructure design commonly requires assessments of extreme quantiles of precipitation and flooding, with different types of infrastructure requiring estimates for different durations. This requires consistent estimates across multiple durations to ensure that, e.g., the 0.99 quantile of annual maxima of 2-hour precipitation is larger than that for 1-hour precipitation. We discuss alternative approaches to ensure this consistency, both parametric and semi-parametric, which all assume that the annual maxima of a given duration follow a generalized extreme value (GEV) distribution. (Arts Building room 386 (ZOOM)) |
11:00 - 11:15 | Break (ART 218) |
11:15 - 11:45 |
Gloria Buriticá: Assessing time dependencies for heavy rainfall modeling ↓ Heavy rainfall distributional modeling is essential in any impact study linked to the water cycle, e.g., flood risks. Still, statistical analyses taking into account extreme rainfall's temporal and multivariate nature are rare, and often a complex de-clustering step is needed to make extreme rainfall temporally independent. A natural question is how to bypass this de-clustering step in a multivariate context. To address this issue, we introduce the stable sums method. Our goal is to thoughtfully incorporate the temporal and spatial dependencies in the analysis of heavy tails. To reach our goal, we build on large deviations of regularly varying stationary time series. Our novel approach enhances return level inference and is robust with respect to time dependencies. We implement it on both independent and dependent observations to obtain accurate confidence intervals for return levels. (Arts Building room 386 (ZOOM)) |
11:45 - 12:15 |
Jonathan Jalbert: Frequency analysis of projected discharges on ungauged river sections using a large set of hydrological simulations ↓ Following the exceptional floods of 2017 and 2019 in the province of Québec, the provincial government has launched a vast project to update the mapping of flood zones for more than 13,000 river sections in southern Québec. For almost all of these sections, no discharge measurements are available, but discharges simulated by several configurations of a hydrological model are available. A model was developed to study the extreme values of this set of simulations in order to take into account the uncertainty associated with the fact that discharges are not directly measured. In addition to allowing the estimation of extreme values, the developed model also allows the estimation of the true series of annual maxima that would have been observed, by combining the information from the different hydrological simulations. This model is then used in a statistical post-processing procedure to estimate future extreme flows for all river sections. (Arts Building room 386 (ZOOM)) |
12:15 - 12:45 |
Dáithí Stone: The effect of experiment conditioning on estimates of human influence on extreme weather ↓ There are many experiment designs for assessing the role of anthropogenic emissions in specific extreme weather events, ranging from methods based on free-running atmosphere-ocean climate models through to methods based on highly constrained numerical weather forecasts. While technically these different experiment designs each address different particular questions, in practice some methods may be more feasible or scientifically defensible than others. It would be helpful, then, if one experiment design could be substituted for another without much loss of potential accuracy. How transferable are conclusions based on different experiment designs, allowing substitution of one experiment design for another? Here we examine event attribution metrics for five different extreme events occurring over Aotearoa New Zealand during the past four years, using atmosphere-ocean climate model experiments, atmosphere-only model experiments, numerical weather forecast experiments, and reanalysis experiments. We conclude that there is a strong dependence on experiment design for extreme hot events, but that any dependence for extreme wet events is small in relation to various sampling uncertainties. This is joint work with Suzanne Rosier, Sapna Rana, Steven Stuart, Luke Harrington, and Sam Dean. (Arts Building room 386 (IN PERSON)) |
12:45 - 14:00 | Lunch (Sunshine Café) |
14:00 - 16:00 | Brainstorming Session on Site (ART 386 / ASC 301A / ASC 307) |
Wednesday, June 29 | |
---|---|
07:30 - 08:15 | Breakfast (Tim Hortons) |
08:15 - 08:45 |
Anna Kiriliouk: Estimating failure probabilities for high-dimensional extremes ↓ An important problem in extreme-value theory is the estimation of the probability that a d-dimensional random vector X falls into a given extreme "failure set". If d is large, non-parametric methods suffer from the curse of dimensionality and parametric tail dependence models can be hard to estimate because of a large number of parameters. We focus on a generalisation of the so-called tail pairwise dependence matrix (TPDM), which gives a partial summary of tail dependence for all pairs of components of X. It has a close connection to the max-linear model: Cooley and Thibaud (2019) showed that a completely positive decomposition of the TPDM of X gives the parameter matrix of a max-linear model whose TPDM is equal to that of X. Unfortunately, exact algorithms for obtaining completely positive decompositions tend to be computationally heavy. We propose an algorithm to obtain an approximate positive decomposition of the TPDM. The decomposition is easy to compute and applicable to dimensions in the order of hundreds. When X follows a max-linear model with a triangular coefficient matrix, the decomposition is exact; in all other cases, it is exact for off-diagonal entries of the TPDM but may overestimate its diagonal entries. We apply the proposed decomposition algorithm to maximal wind speeds in the Netherlands. (Arts Building room 386 (ZOOM)) |
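As a rough illustration of the decomposition step mentioned above (and not the algorithm of the talk), the sketch below computes a generic approximate completely positive factorisation $\Sigma \approx B B^\top$ with $B \ge 0$, the operation that links a TPDM to a max-linear parameter matrix in Cooley and Thibaud (2019). The optimiser, the rank q and the toy matrix are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def approx_cp_decomposition(sigma, q, seed=0):
    """Find B (d x q), B >= 0, minimising ||sigma - B B^T||_F (generic sketch)."""
    d = sigma.shape[0]
    rng = np.random.default_rng(seed)
    b0 = rng.uniform(0.1, 1.0, size=d * q)

    def loss(b_flat):
        B = b_flat.reshape(d, q)
        return np.sum((sigma - B @ B.T) ** 2)

    res = minimize(loss, b0, bounds=[(0, None)] * (d * q), method="L-BFGS-B")
    return res.x.reshape(d, q)

# Toy 3x3 "TPDM" generated from a known nonnegative factor, for checking.
A_true = np.array([[1.0, 0.5, 0.0],
                   [0.0, 1.0, 0.3],
                   [0.2, 0.0, 1.0]])
sigma = A_true @ A_true.T
B = approx_cp_decomposition(sigma, q=3)
print(np.round(sigma - B @ B.T, 3))      # near-zero reconstruction error
```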
08:45 - 09:15 |
Claudia Klüppelberg: Max-linear Bayesian networks ↓ Graphical models can represent multivariate distributions in an intuitive way and, hence, facilitate statistical analysis of high-dimensional data. Such models are usually modular so that high-dimensional distributions can be described and handled by careful combination of lower dimensional factors. Furthermore, graphs are natural data structures for algorithmic treatment. Conditional independence and Markov properties are essential features for graphical models. Moreover, graphical models can allow for causal interpretation, often provided through a recursive system on a directed acyclic graph (DAG) and the max-linear model we introduced in 2018 is a specific example. In this talk I present some conditional independence properties of max-linear Bayesian networks and exemplify the difference to linear networks. (Arts Building room 386 (ZOOM)) |
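For readers unfamiliar with the model class, the following sketch simulates a recursive max-linear structural equation model on a small DAG, $X_i = \max(\max_{j \in {\rm pa}(i)} c_{ij} X_j,\, c_{ii} Z_i)$ with heavy-tailed innovations, following the definition of Gissibl and Klüppelberg (2018). The edge weights, graph and Pareto innovations are arbitrary illustrative choices.

```python
import numpy as np

def simulate_max_linear(edges, n, alpha=2.0, seed=0):
    """edges: dict {child: {parent: weight}}; node labels assumed in topological order."""
    rng = np.random.default_rng(seed)
    nodes = sorted(set(edges) | {p for ps in edges.values() for p in ps})
    X = {}
    for i in nodes:                           # topological order assumed
        z = rng.pareto(alpha, size=n) + 1.0   # Pareto(alpha) innovation Z_i
        x = z.copy()                          # c_ii = 1 for simplicity
        for j, c in edges.get(i, {}).items():
            x = np.maximum(x, c * X[j])       # max-linear propagation along edges
        X[i] = x
    return X

# DAG: 1 -> 2 -> 3 and 1 -> 3, with illustrative edge weights.
X = simulate_max_linear({2: {1: 0.8}, 3: {2: 0.6, 1: 0.4}}, n=5)
print({k: np.round(v, 2) for k, v in X.items()})
```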
09:15 - 09:45 |
Mario Krali: Detecting max-linear structural equation models in extremes ↓ Recursive max-linear vectors provide models for the causal dependence between large values of observed random variables using a directed acyclic graph (DAG), but the standard assumption that all nodes of such a DAG are observed is unrealistic. We provide necessary and sufficient conditions for the recovery of a recursive max-linear model (RMLM) from a partially observed DAG whose node variables have regularly varying tails. We use a scaling technique and causal dependence relations between pairs of nodes to propose a new method to detect the presence of hidden confounders. The method relies on regular variation, and on the properties of the minimal representation of a max-linear DAG, which only takes max-weighted paths into account. (Arts Building room 386 (ZOOM)) |
09:45 - 10:00 | Coffee Break (ART 218) |
10:00 - 10:30 |
Andreas Gerhardus: Numerical study of constraint-based time series causal discovery algorithms on synthetic data with heavy-tailed noise distributions ↓ Time series data are ubiquitous and of great importance to many fields of science and beyond. Recent years have therefore seen an increasing interest in adaptations and generalizations of iid causal discovery algorithms to the temporal setting. We here consider the PCMCI, PCMCI+ and LPCMCI algorithms for time series causal discovery, which employ the constraint-based approach and can thus be flexibly combined with different conditional independence tests in order to adapt to different types of functional relationships. In this talk we present a numerical study of these algorithms on artificially generated data with heavy-tailed noise distributions. The goal is to understand whether and how the performance of said algorithms is influenced by the presence of extreme values. (Arts Building room 386 (ZOOM)) |
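The kind of synthetic data described in the abstract can be generated, for instance, as a linear vector-autoregressive structural causal model with heavy-tailed innovations; the sketch below is one such illustrative generator (the coefficients, lags and Student-t degrees of freedom are arbitrary assumptions). In practice the resulting series would be passed to PCMCI / PCMCI+ / LPCMCI, e.g. via the tigramite package, to score causal recovery.

```python
import numpy as np

def simulate_var_scm(n=2000, df=2.5, seed=0):
    """Three-variable lag-1 linear SCM with heavy-tailed (Student-t) noise."""
    rng = np.random.default_rng(seed)
    X = np.zeros((n, 3))
    eps = rng.standard_t(df, size=(n, 3))      # heavy-tailed innovations
    for t in range(1, n):
        X[t, 0] = 0.7 * X[t - 1, 0] + eps[t, 0]
        X[t, 1] = 0.5 * X[t - 1, 1] + 0.6 * X[t - 1, 0] + eps[t, 1]   # X0 -> X1
        X[t, 2] = 0.5 * X[t - 1, 2] + 0.4 * X[t - 1, 1] + eps[t, 2]   # X1 -> X2
    return X

data = simulate_var_scm()
print(data[:3])
```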
10:30 - 11:00 |
Leonard Henckel: HSIC-X: an estimator exploiting independent instruments ↓ Instrumental variables may allow us to identify the causal function between a treatment and an outcome, even in the presence of unobserved confounding. Most of the existing estimators assume that the error term in the outcome and the hidden confounders are uncorrelated with the instruments. This is often motivated by a graphical separation, an argument that also justifies independence. Positing an independence restriction, however, leads to strictly stronger identifiability results. We provide HSIC-X, a practical method for exploiting independent instruments. We also see that taking into account higher moments may yield better finite sample results. This is joint work with Sorawit Saengkyongam, Niklas Pfister, and Jonas Peters. (Arts Building room 386 (IN PERSON)) |
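A toy illustration of the underlying idea of exploiting independent (rather than merely uncorrelated) instruments: choose the treatment effect that makes the outcome residual independent of the instrument, with dependence measured by a biased HSIC statistic. This one-parameter sketch is not the HSIC-X estimator itself; the Gaussian kernel, bandwidth and data-generating model are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gaussian_gram(x, bw):
    """Gaussian kernel Gram matrix for a 1-d sample."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * bw ** 2))

def hsic(u, v, bw=1.0):
    """Biased HSIC statistic (1/n^2) tr(K H L H)."""
    n = len(u)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = gaussian_gram(u, bw), gaussian_gram(v, bw)
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                       # instrument
h = rng.normal(size=n)                       # hidden confounder
x = z + h + 0.3 * rng.normal(size=n)         # treatment
y = 2.0 * x + h + 0.3 * rng.normal(size=n)   # outcome; true causal effect = 2

res = minimize_scalar(lambda th: hsic(y - th * x, z), bounds=(0, 4),
                      method="bounded")
print(res.x)   # should be close to 2
```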
11:00 - 11:15 | Coffee break (ART 218) |
11:15 - 11:45 |
Dan Cooley: Transformed Linear Prediction for Extremes ↓ First, we consider the problem of performing prediction when observed values are at extreme levels. We assume the framework of regular variation in the positive orthant in order to better focus on the upper tail, which is assumed to be the direction of interest. We develop a transformed-linear approach, which is analogous to linear prediction in the non-extreme setting, but which is based on tail-dependence measures rather than covariances, and which performs its linear operations in the positive orthant. We begin by constructing an inner product space of nonnegative random variables from transformed-linear combinations of independent regularly varying random variables.
The matrix of inner products provides the pairwise information necessary to find the best transformed-linear predictor, which turns out to have the same form as the BLUP in the non-extreme setting. Under the reasonable modeling assumption that our variables arise from a positive subset of our inner product space, the inner product matrix corresponds to the "tail pairwise dependence matrix", a matrix of known extreme dependence summaries. Because the geometry of regular variation is not elliptical, uncertainty quantification must be done differently than in the non-extreme case. We find the 2x2 TPDM between $X_{p+1}$ and $\hat X_{p+1}$ and use it to construct an angular measure which captures this dependence. The advantages of our method are that it is relatively simple and that it is similar to prediction methods in the non-extreme setting. We apply our method to air pollution data in Washington DC and to financial data. (Arts Building room 386 (ZOOM)) |
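A hedged sketch of the prediction recipe described above, under the assumption that transformed-linear combinations act linearly on $t^{-1}(X)$ with the softplus transform $t(y) = \log(1 + e^y)$, and that the predictor weights are obtained from the TPDM exactly as BLUP weights are from a covariance matrix. The TPDM entries and observations below are made up; this is not the authors' fitted model, and the uncertainty quantification step is omitted.

```python
import numpy as np

def t(y):          # softplus, maps R to (0, inf)
    return np.log1p(np.exp(y))

def t_inv(x):      # inverse softplus
    return np.log(np.expm1(x))

def transformed_linear_predict(b, X):
    """Predictor t(sum_i b_i * t^{-1}(X_i)) for positive data X (n x p)."""
    return t(t_inv(X) @ b)

# Made-up TPDM of (X_1, ..., X_p, X_{p+1}); last row/column is the predictand.
Sigma = np.array([[1.0, 0.4, 0.6],
                  [0.4, 1.0, 0.5],
                  [0.6, 0.5, 1.0]])
b = np.linalg.solve(Sigma[:2, :2], Sigma[:2, 2])    # "BLUP-like" weights
X_obs = np.array([[3.2, 2.1], [5.0, 4.4]])          # two large observations
print(transformed_linear_predict(b, X_obs))
```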
11:45 - 12:15 |
Emma Simpson: Capturing varied extremal dependence structures via mixtures of conditional extremes models ↓ In multivariate extreme value modelling, we often focus on distinguishing between asymptotic independence and asymptotic dependence. However, data may exhibit more complicated extremal dependence structures that are not fully described by these two cases, where different subsets of variables can be simultaneously large while others are of smaller order. A variety of methods have recently been developed to estimate this structure, which can be implemented as a preliminary step to aid subsequent model selection. Conditional extremes models, originating with Heffernan and Tawn (2004), offer a flexible option to capture either asymptotic independence or asymptotic dependence. The idea is to condition on one variable being above some high threshold and model the corresponding behaviour of the remaining ones. In this work, we propose to construct mixtures of conditional extremes models to capture the different possible extremal dependence structures that may arise. Conditioning on one variable being extreme, each mixture component allows for a different subset of the remaining variables to be simultaneously large while the others are small. Combining information across all possible conditioning variables allows us to fully describe the extremal dependence structure. We develop a model selection technique to determine the mixture components that should be used, ensuring consistency across the models selected via different conditioning variables. This allows us to simultaneously estimate the extremal dependence structure and construct a model for the extremes.
In this talk, we will focus mainly on the bivariate case, but also briefly discuss extensions for modelling additional variables, where consistency across dimension becomes an important consideration. (Arts Building room 386 (IN PERSON)) |
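For concreteness, the sketch below simulates from the Heffernan and Tawn (2004) conditional extremes representation that each mixture component builds on: in Laplace-type margins, given $Y_1 = y$ above a high threshold, $Y_2 = a y + y^b Z$ for a residual $Z$. The parameter values and the Gaussian residual are illustrative assumptions, not fitted quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, u = 0.6, 0.3, 4.0                 # dependence parameters and threshold (assumed)
y1 = u + rng.exponential(1.0, 1000)     # Laplace-scale exceedances of u
z = rng.normal(0.0, 1.0, 1000)          # residual distribution (assumed Gaussian)
y2 = a * y1 + y1 ** b * z               # conditional extremes representation
print(np.corrcoef(y1, y2)[0, 1])
```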
12:15 - 12:45 |
Richard Smith: Modeling Trends in Spatial Extremes and their Causal Determination ↓ Extreme weather events such as Hurricane Harvey raise questions of a broad nature about the spatial distribution of extreme events. Harvey itself led to extreme precipitation in a region around the city of Houston, Texas, but the potential for this kind of extreme covers a far wider area, which leads to questions about how to characterize extreme event probabilities and return levels spatially across a wide region. Another problem of the same nature concerns sea level surges from Atlantic hurricanes along the US east coast. One technique for such problems is to treat some global meteorological indicator as a covariate, e.g., global mean surface temperature or mean sea surface temperature over a relevant area. Since a lot of work has been done on causal determination for these large-scale meteorological variables, this creates a conceptual methodology for causal inference about more localized extreme events as well. These ideas will be illustrated by some previous analyses concerned with Hurricane Harvey, and an ongoing study looking at east coast US sea level surges. (Arts Building room 386 (IN PERSON)) |
12:45 - 13:00 | Group photo for on-site participants (Arts Building room 386 (IN PERSON)) |
13:00 - 14:00 | Lunch (Sunshine Café) |
14:00 - 18:00 | Excursion for on-site participants (Outdoors) |
Thursday, June 30 | |
---|---|
07:30 - 08:15 | Breakfast (Sunshine Café) |
08:15 - 08:45 |
Raphael Huser: Identifying US wildfire drivers using partially-interpretable neural networks for high-dimensional extreme quantile regression ↓ Risk management for extreme wildfires requires an understanding of the associations and causal mechanisms that drive both ignition and spread. Useful metrics for quantifying such risk are extreme quantiles of aggregated burnt area conditioned on predictor variables that describe climate, biosphere and environmental states, as well as the abundance of fuel. Typically, these quantiles lie outside the range of observable data and so, for estimation, require specification of parametric extreme value models within a regression framework. Classical approaches in this context utilize linear or additive relationships between predictor and response variables and suffer in either their predictive capabilities or computational efficiency; moreover, their simplicity is unlikely to capture the truly complex structures that lead to the creation of extreme wildfires. In this talk, we propose a new methodological framework for performing extreme quantile regression using artificial neural networks, which are able to capture complex non-linear relationships and scale well to high-dimensional data. The "black box" nature of neural networks means that they lack the desirable trait of interpretability often favored by practitioners; thus, we combine aspects of linear, and additive, models with deep learning to create partially interpretable neural networks that can be used for statistical inference but retain high prediction accuracy. To complement this methodology, we further propose a novel point process model for extreme values which overcomes the finite lower-endpoint problem associated with the generalized extreme value class of distributions. Our approach is applied to U.S. wildfire data with a high-dimensional predictor set and we illustrate vast improvements in predictive performance over linear and spline-based regression techniques. Our method is able to identify key drivers causing devastating wildfires and may thus be used for future wildfire risk assessment and mitigation. Joint work with Jordan Richards. (Arts Building room 386 (ZOOM)) |
08:45 - 09:15 |
Yan Gong: Partial tail correlation coefficient applied to extremal network learning ↓ In this talk, we propose a novel extremal dependence measure, called the partial tail correlation coefficient (PTCC), which is an analogue of the partial correlation coefficient in the non-extreme setting. The construction of this coefficient is based on the framework of multivariate regular variation and transformed-linear algebra operations. We show that this coefficient is helpful for identifying "partial asymptotic independence" relationships between variables. Unlike other recently introduced asymptotic independence frameworks for extremes, our proposed coefficient relies on minimal modeling assumptions and can thus be used generally in exploratory analyses to learn extremal graphical models. Moreover, thanks to its links to traditional graphical models (whose edges are obtained as the non-zero entries of a precision matrix), classical inference methods for high-dimensional data, such as the graphical LASSO with Laplacian spectral constraints, can also be exploited here to efficiently learn extremal networks via the PTCC. We apply these new tools to assess risks in two different network applications, namely extreme river discharges in the upper Danube basin, and historical global currency price data. (Arts Building room 386 (ZOOM)) |
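The analogy with partial correlation can be illustrated as follows: compute "partial tail correlations" from the inverse of a given TPDM exactly as partial correlations are computed from an inverse covariance matrix, and read a graph off the (near-)zero pattern. The matrix and threshold below are invented for illustration; the actual PTCC construction and the Laplacian-constrained graphical LASSO of the talk are not reproduced.

```python
import numpy as np

def partial_tail_corr(tpdm):
    """Partial-correlation-style coefficients computed from a TPDM (sketch)."""
    P = np.linalg.inv(tpdm)                            # "precision" of the TPDM
    d = np.sqrt(np.diag(P))
    return -P / np.outer(d, d) + 2 * np.eye(len(P))    # set the diagonal to 1

tpdm = np.array([[1.0, 0.5, 0.4],
                 [0.5, 1.0, 0.2],
                 [0.4, 0.2, 1.0]])
ptcc = partial_tail_corr(tpdm)
edges = (np.abs(ptcc) > 0.05) & ~np.eye(3, dtype=bool)  # crude thresholding
print(np.round(ptcc, 2), edges, sep="\n")
```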
09:15 - 09:45 |
Juraj Bodík: Causal inference for extreme dependence ↓ A classical approach to modelling non-stationarity in EVT is to allow the parameters to vary with covariates, for example by assuming $Y|X \sim {\rm GEV}(\mu(X), \sigma(X), \xi(X))$ for some unknown functions $\mu$, $\sigma$, $\xi$. In the multivariate case, the dependence structure of extremes is characterized by the Pickands dependence function $A$, and can similarly be modelled as a function of covariates, $A(\cdot \mid X)$. Our main question is the following: which of the covariates are causal? What is a causal driving factor of extreme events? Inferring this is (in most cases) impossible from just a random sample. However, if we perturb the system and observe different environments, we can observe patterns of invariance that lead to the causal covariates. We will show why and how this can be used for better prediction of extreme events. (Arts Building room 386 (ZOOM)) |
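A minimal sketch of the covariate-dependent GEV setup quoted in the abstract, here with a linear location $\mu(X) = a + bX$ and constant scale and shape, fitted by maximum likelihood with scipy. The data-generating values, the shape parameterisation and the optimiser settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=200)                       # covariate
y = genextreme.rvs(c=-0.1, loc=1.0 + 2.0 * x, scale=1.0,
                   size=len(x), random_state=rng)     # Y | X ~ GEV(mu(X), sigma, xi)

def nll(theta):
    """Negative log-likelihood for (a, b, log sigma, xi); scipy uses c = -xi."""
    a, b, log_s, xi = theta
    return -np.sum(genextreme.logpdf(y, c=-xi, loc=a + b * x,
                                     scale=np.exp(log_s)))

fit = minimize(nll, x0=np.array([0.0, 0.0, 0.0, 0.1]), method="Nelder-Mead")
print(fit.x)     # estimates of (a, b, log sigma, xi)
```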
09:45 - 10:00 | Break (ART 218) |
10:00 - 10:30 |
Sebastian Engelke: Estimation and Inference of Extremal Quantile Treatment Effects ↓ Causal inference for rare events has important applications in many fields such as medicine, climate science and finance. We introduce an extremal quantile treatment effect as the difference of extreme quantiles of the potential outcome distributions. Estimation of this effect is based on extrapolation results from extreme value theory in combination with a new counterfactual Hill estimator that uses propensity scores as adjustment. We establish the asymptotic theory of this estimator and propose a variance estimation procedure that allows for valid statistical inference. Our method is applied to analyze the effect of college education on high wages. (Arts Building room 386 (IN PERSON)) |
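A deliberately naive illustration of an extremal quantile treatment effect, computed as the difference of Weissman-extrapolated extreme quantiles in treated and control samples using a plain Hill estimator in each group. The counterfactual Hill estimator with propensity-score adjustment proposed in the talk is not reproduced; confounding adjustment is omitted and all tuning choices are assumptions.

```python
import numpy as np

def hill(x, k):
    """Hill estimate of the tail index alpha from the k largest observations."""
    xs = np.sort(x)
    return 1.0 / np.mean(np.log(xs[-k:] / xs[-k - 1]))

def weissman_quantile(x, p, k):
    """Extrapolated quantile at exceedance probability p using the Hill estimate."""
    xs = np.sort(x)
    n = len(x)
    return xs[-k - 1] * (k / (n * p)) ** (1.0 / hill(x, k))

rng = np.random.default_rng(0)
control = rng.pareto(3.0, 5000) + 1          # lighter upper tail
treated = (rng.pareto(2.0, 5000) + 1) * 1.2  # heavier upper tail
p, k = 1e-4, 200                             # target exceedance prob., tail fraction
eqte = weissman_quantile(treated, p, k) - weissman_quantile(control, p, k)
print(round(eqte, 2))
```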
10:30 - 11:00 |
Nicola Gnecco: Causal discovery in heavy-tailed models ↓ Causal questions are omnipresent in many scientific problems. While much progress has been made in analyzing causal relationships between random variables, these methods are not well suited if the causal mechanisms only manifest themselves in extremes. This work aims to connect the two fields of causal inference and extreme value theory. We define the causal tail coefficient that captures asymmetries in the extremal dependence of two random variables. In the population case, the causal tail coefficient reveals the causal structure if the distribution follows a linear structural causal model. This holds even with latent common causes with the same tail index as the observed variables. Based on a consistent estimator of the causal tail coefficient, we propose a computationally highly efficient algorithm that estimates the causal structure. We prove that our method consistently recovers the causal order, and we compare it to other well-established and non-extremal approaches in causal discovery on synthetic and real data. The code is available as an open-access R package. (Arts Building room 386 (IN PERSON)) |
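A short sketch of the causal tail coefficient idea: estimate $\Gamma_{12} = E[F_2(X_2) \mid X_1 \text{ extreme}]$ by averaging the rank transform of $X_2$ over the k largest values of $X_1$, and compare it with $\Gamma_{21}$. The linear SCM, the Pareto noise and the choice of k below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import rankdata

def causal_tail_coef(x1, x2, k):
    """Average of the empirical cdf of x2 over the k largest values of x1."""
    F2 = rankdata(x2) / len(x2)
    idx = np.argsort(x1)[-k:]
    return np.mean(F2[idx])

rng = np.random.default_rng(0)
n = 10_000
noise = rng.pareto(2.5, size=(2, n)) + 1          # positive, regularly varying noise
x1 = noise[0]                                     # cause
x2 = 0.8 * x1 + noise[1]                          # linear SCM: X1 -> X2
k = 40                                            # number of tail observations
print(causal_tail_coef(x1, x2, k), causal_tail_coef(x2, x1, k))
# The first value is close to 1; the second is noticeably smaller,
# indicating the causal direction X1 -> X2.
```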
11:00 - 11:30 |
Mila Sun: Principal stratification for quantile causal effects under partial compliance ↓ Within the principal stratification framework in causal inference, the majority of the literature has focused on binary compliance with an intervention and modelling means. Yet in some research areas, compliance is partial, and research questions – and hence analyses – are concerned with causal effects on (possibly high) quantiles rather than on shifts in average outcomes. Modelling partial compliance is challenging because it can suffer from lack of identifiability. We develop an approach to estimate quantile causal effects within a principal stratification framework, where principal strata are defined by the bivariate vector of (partial) compliance to the two levels of a binary intervention. We propose a conditional copula approach to impute the missing potential compliance and estimate the principal quantile treatment effect surface at high quantiles, allowing the copula association parameter to vary with the covariates. A bootstrap procedure is used to estimate the parameter to account for inflation due to imputation of missing compliance. Moreover, we describe precise assumptions on which the proposed approach is based, and investigate the finite sample behaviour of our method by a simulation study. The proposed approach is used to study the 90th principal quantile treatment effect of executive stay-at-home orders on mitigating the risk of COVID-19 transmission in the United States. (Arts Building room 386 (IN PERSON)) |
11:30 - 11:45 | Break (ART 218) |
11:45 - 12:15 |
Jevgenijs Ivanovs: Graphical models for extremes and Lévy processes - a unified framework ↓ Measures exploding at the origin are fundamental in the study of extremes and stochastic processes. For example, they appear in the construction of max-infinitely divisible distributions and Lévy processes. We define a notion of conditional independence for such measures and establish a number of its equivalent characterizations. In particular, it includes the recent notion of extremal conditional independence for multivariate Pareto distributions without a density assumption. Furthermore, structural max-linear models can be put into this framework as well. This is joint work with Sebastian Engelke, Kirstin Strokorb and Jakob Thøstesen. (Arts Building room 386 (IN PERSON)) |
12:15 - 12:45 |
Stanislav Volgushev: Learning graphical models for extremes ↓ Extremal graphical models are sparse statistical models for multivariate extreme events. The underlying graph encodes conditional independencies and enables a visual interpretation of the complex extremal dependence structure. In practice, the graph is usually unknown and needs to be learned from data. We discuss methods for doing this in several settings of differing generality: for data in the domain of attraction of general multivariate Pareto distributions that form graphical models on trees, and for general graphs when the data are in the domain of attraction of a Hüsler-Reiss distribution. (Arts Building room 386 (IN PERSON)) |
12:45 - 14:00 | Lunch (Sunshine Café) |
14:00 - 16:00 | Brainstorming Session on Site (ART 386 / ASC 301A / ASC 307) |
Friday, July 1 | |
---|---|
07:30 - 08:30 | Breakfast (Sunshine Café) |
08:30 - 09:30 |
Linda Mhalla: Mentoring Panel ↓ Career advice panel addressing topics such as publishing, grant writing, student supervision, networking and collaborative work.
With Valérie Chavez-Demoulin, Anthony Davison, Christian Genest and Claudia Klüppelberg. (Arts Building room 386 (ZOOM)) |
09:30 - 09:45 | Break (ART 218) |
09:45 - 10:15 | Jakob Runge: Presentation of the CauseMe Platform (Arts Building room 386 (ZOOM)) |
10:15 - 11:00 | Break and check-out for on-site participants (ART 218) |
11:00 - 11:45 |
Johanna Neslehova: Roundtable Discussion on Future Challenges ↓ Round table discussion addressing future challenges in causal inference, climatology, extreme-value analysis, and causal inference for extremes.
With Sebastian Engelke, Leonard Henckel, Emma Simpson, and Dáithí Stone. (Arts Building room 386 (IN PERSON)) |
11:45 - 12:00 | Closing Remarks (Arts Building room 386) |