Schedule for: 21w5239 - Geometry & Learning from Data (Online)
Beginning on Sunday, October 24 and ending on Friday, October 29, 2021
All times in Oaxaca, Mexico time, CDT (UTC-5).
Monday, October 25 | |
---|---|
08:50 - 09:00 | Introduction and Welcome (Zoom) |
09:00 - 09:45 |
Juergen Jost: Geometry and topology of data ↓ We introduce concepts from metric geometry to provide a new perspective on topological data analysis. (Zoom) |
10:00 - 10:45 |
Anna Seigal: Groups and symmetries in Gaussian graphical models ↓ We can use groups and symmetries to define new statistical models, and to investigate them. In this talk, I will discuss two families of multivariate Gaussian models:
1. RDAG models: graphical models on directed graphs with coloured vertices and edges,
2. Gaussian group models: multivariate Gaussian models that are parametrised by a group.
I will focus on maximum likelihood estimation, an optimisation problem to obtain parameters in the model that best fit observed data. For RDAG models and Gaussian group models, the existence of the maximum likelihood estimate relates to linear algebra conditions and to stability notions from invariant theory. This talk is based on joint work with Carlos Améndola, Kathlén Kohn, Visu Makam, and Philipp Reichenbach. (Zoom) |
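For orientation, the likelihood problem referred to above takes the following form in the unrestricted, centered multivariate Gaussian case (standard background, not the RDAG or group-model parametrisation of the talk). Up to an additive constant, for samples \(x_1,\dots,x_n\in\mathbb{R}^m\),
\[
\ell(\Sigma) \;=\; -\frac{n}{2}\Big(\log\det\Sigma \;+\; \operatorname{tr}\big(\Sigma^{-1} S\big)\Big),
\qquad
S \;=\; \frac{1}{n}\sum_{i=1}^{n} x_i x_i^{\top}.
\]
In the unrestricted model the maximiser is \(\hat\Sigma = S\) whenever \(S\) is positive definite; in restricted models such as those in the talk, one maximises over the model's covariance (or concentration) matrices, and existence of the maximiser depends on the sample size and on the graph or group structure.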
11:00 - 11:45 |
Shantanu Joshi: Aligning Shape Data from Brain Imaging: applications to fMRI time series, diffusion tractography ↓ This talk consists of three parts. We will present ideas and applications for aligning shape data from brain imaging.
The first part consists of alignment of functional magnetic resonance time series data. We achieve temporal alignment of both amplitude and phase of the functional magnetic resonance imaging (fMRI) time course and spectral densities. The second part is a recent approach for matching collections of shapes. In particular we present an idea for aligning tractography representations from diffusion weighted imaging. The third part of the talk ties both of them together and presents a recent approach for accelerating the alignment process using deep learning in a fully unsupervised manner. (Zoom) |
12:00 - 12:40 | Lunch (Zoom/Gathertown) |
12:50 - 13:00 | Group Photo (Zoom) |
13:00 - 13:45 |
Nancy Arana-Daniel: Environmental object mapping using geometric algebra and machine learning ↓ In this session, we will talk about an algorithm that solves the robotic mapping problem by combining geometric algebra and machine learning to obtain environmental maps. We tackle one crucial challenge of this problem: obtaining a map that is information-rich (i.e., a map that preserves the main structures of the environment and object shapes) yet still has a low memory cost. A new object-mapping algorithm will be presented that approximates point clouds with multiple ellipsoids and other quadratic surfaces. We show that this algorithm creates maps that are rich in information yet low in memory cost and that have features suitable for other robotics problems such as navigation and pose estimation. (Zoom) |
14:00 - 14:45 |
Benjamin Sanchez-Lengeling: Evaluating attribution with Graph Neural Networks ↓ This talk is about how different interpretability techniques can fail when data or labels are perturbed. We focus on attribution techniques, graph data and neural networks on graphs (GNNs). Attribution is one approach to interpretability, which highlights input dimensions that are influential to a neural network’s prediction. Evaluation of these methods is largely qualitative for image and text models, because acquiring ground truth attributions requires expensive and unreliable human judgment. Attribution has been comparatively understudied for GNNs, a model class of growing importance that makes predictions on arbitrarily-sized graphs. Graph-valued data offer an opportunity to quantitatively benchmark attribution methods, because challenging synthetic graph problems have computable ground-truth attributions. We evaluate commonly-used attribution methods for GNNs using the axes of attribution accuracy, stability, faithfulness and consistency. (Zoom) |
15:00 - 16:00 |
Break ↓ Freedom. (Zoom) |
16:00 - 16:30 |
Discussion session in Gathertown ↓ Open space for interaction and collaboration. (Gathertown) |
16:30 - 18:00 |
Poster session in Gathertown ↓ Participants will present a poster of their work.
(Gathertown)
1.- Renata Turkes, "Noise robustness of persistent homology on greyscale images across filtrations and signatures"
2.- Pradeep Kr Banerjee, "PAC-Bayes and Information Complexity"
3.- Stefan Schonsheck, "Chart Auto-Encoders for Manifold Structured Data"
4.- Hui Jin, "Asymptotics of the generalization error for GP regression"
5.- Hanna Tseran, "On the Expected Complexity of Maxout Networks"
6.- Marzieh Eidi, "Topological Learning from Dynamics on Data"
7.- Johannes Müller (Max Planck Institute), "The Geometry of Memoryless Stochastic Policy Optimization in Infinite-Horizon Partially Observable Markov Decision Processes"
8.- Emery Pierson, "A Riemannian Framework for Analysis of Human Body Surface"
9.- Miguel Evangelista, "Topologically representative datasets of 3D point clouds and meshes"
10.- José C, "Areas on the space of smooth probability density functions on S2"
11.- Miguel Evangelista, "Computational Poisson Geometry"
12.- Ben Bowman, "Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks"
13.- Wu Lin, "Tractable Structured Natural-Gradient Descent Using Local Parameterizations"
14.- Rahul Ramesh, "Model Zoo: A Growing Brain That Learns Continually"
15.- Yu Guang Wang, "How Framelets Enhance Graph Neural Networks" |
Tuesday, October 26 | |
---|---|
09:00 - 09:45 |
Sophie Achard: Learning from brain data ↓ Noninvasive neuroimaging of the functioning brain is providing very promising data sets for studying the complex organisation of brain areas. It is possible not only to identify responses of brain areas to a cognitive stimulus but also to model the interactions between brain areas. The human brain can be modelled as a network or graph where brain areas are the nodes and interactions between pairs of areas are the edges. Brain connectivity networks are small-world, with a combination of segregation and integration characteristics. In this talk, I will present recent advances in understanding and comparing brain data using learning approaches, with a particular focus on the reliability of the methods. Finally, examples from various pathologies will highlight possible alterations and the resilience of the brain network. (Zoom) |
10:00 - 10:45 |
Nihat Ay: On the invariance of the natural gradient for learning in deep neural networks ↓ Information geometry suggests the Fisher-Rao metric for learning in deep neural networks, leading to the so-called natural gradient method. There are two natural geometries associated with such learning systems consisting of visible and hidden units. One geometry is related to the full system, the other one to the visible sub-system. In principle, these two geometries imply different natural gradients. We compare them and prove an invariance property that distinguishes the Fisher-Rao metric from other Riemannian metrics based on Chentsov’s classical characterisation theorem. (Zoom) |
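As a reference point for the abstract above, the generic natural gradient update (standard background, not the talk's specific visible/hidden construction) replaces the Euclidean gradient by its Fisher-Rao preconditioned version:
\[
\theta_{t+1} \;=\; \theta_t \;-\; \eta\, G(\theta_t)^{-1}\,\nabla_\theta L(\theta_t),
\qquad
G_{ij}(\theta) \;=\; \mathbb{E}_{x\sim p_\theta}\!\left[\partial_{\theta_i}\log p_\theta(x)\;\partial_{\theta_j}\log p_\theta(x)\right],
\]
where \(G\) is the Fisher information matrix of the model. The talk compares the metrics obtained from the full (visible plus hidden) system and from the visible sub-system, which in principle yield different such updates.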
11:00 - 11:45 |
Facundo Memoli: The ultrametric Gromov-Wasserstein distance ↓ We investigate compact ultrametric measure spaces, which form a subset Uw of the collection Mw of all metric measure spaces. Similar to the case of the ultrametric Gromov-Hausdorff distance on the collection U of ultrametric spaces, we define ultrametric versions of two metrics on Mw, namely of Sturm's distance of order p and of the Gromov-Wasserstein distance of order p. We study the basic topological and geometric properties of these distances as well as their relation, and we derive for p=∞ a polynomial-time algorithm for their calculation. Further, several lower bounds for both distances are derived and some of our results are generalized to the case of finite ultra-dissimilarity spaces. (Zoom) |
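For readers new to the topic, the classical (non-ultrametric) Gromov-Wasserstein distance of order p between metric measure spaces \((X,d_X,\mu_X)\) and \((Y,d_Y,\mu_Y)\), which the abstract builds on, is
\[
d_{GW,p}(X,Y) \;=\; \tfrac{1}{2}\,\inf_{\mu\in\mathcal{C}(\mu_X,\mu_Y)}
\left(\iint \big|d_X(x,x') - d_Y(y,y')\big|^{p}\, d\mu(x,y)\, d\mu(x',y')\right)^{1/p},
\]
where \(\mathcal{C}(\mu_X,\mu_Y)\) denotes the set of couplings of the two measures. The ultrametric versions studied in the talk modify this construction so that the comparison respects the ultrametric structure.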
12:00 - 13:00 | Lunch (Zoom/Gathertown) |
13:00 - 13:45 |
Ruriko Yoshida: Tree Topologies along a Tropical Line Segment ↓ Tropical geometry with the max-plus algebra has been applied to statistical learning models over tree spaces because geometry with the tropical metric over tree spaces has some nice properties, such as convexity in terms of the tropical metric. One of the challenges in applications of tropical geometry to tree spaces is the difficulty of interpreting outcomes of statistical models with the tropical metric. We focus on the combinatorics of tree topologies along a tropical line segment, an intrinsic geodesic with the tropical metric, between two phylogenetic trees over the tree space, and we show some properties of a tropical line segment between two trees. Specifically, we show that the probability that a tropical line segment between two randomly chosen trees goes through the origin (the star tree) is zero if the number of leaves is greater than four, and we also show that if two given trees differ by only one nearest neighbor interchange (NNI) move, then the tree topology of a tree in the tropical line segment between them is the same as the tree topology of one of the given two trees, with possibly zero branch lengths. This is joint work with Shelby Cox. (Zoom) |
14:00 - 14:45 |
Jun Zhang: Information Geometry: A Tutorial ↓ Information Geometry is the differential geometric study of the set of all probability distributions on a given sample space, modeled as a differentiable manifold where each point represents one probability distribution, with its parameter serving as local coordinates. Such a manifold is equipped with a natural Riemannian metric (the Fisher-Rao metric) and a family of affine connections (alpha-connections) that define parallel transport of score functions as tangent vectors. Starting from the motivating examples of the family of univariate normal distributions on a continuous support and of the probability simplex as a family on a discrete support, I will explain how divergence functions (or contrast functions) measuring directed distance on a manifold, e.g., Kullback-Leibler divergence, Bregman divergence, f-divergence, etc., are tied to Legendre duality and convex analysis, and how they in turn generate the underlying dualistic geometry of what is known as the "statistical manifold". The case of maximum entropy (or minimum divergence) inference will be highlighted, since it is linked to the exponential family and the dually-flat (Hessian) geometric structure, the simplest and best-understood example of information geometry. If time permits, I will introduce new developments, including the state-of-the-art understanding of deformation models, in which a generalized entropy (for instance, Tsallis entropy, Renyi entropy, phi-entropy) replaces Shannon entropy and deformed divergences replace the KL and Bregman divergences. Deformed exponential families reveal an "escort statistics" and a "gauge freedom" that are buried in the standard exponential family. This tutorial attempts to give a gentle introduction to information geometry for a non-geometric audience. (Zoom) |
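Two of the divergence functions mentioned above, written out for concreteness (standard definitions):
\[
D_{\mathrm{KL}}(p\,\|\,q) \;=\; \int p(x)\,\log\frac{p(x)}{q(x)}\,dx,
\qquad
B_\phi(x,y) \;=\; \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y\rangle,
\]
with \(\phi\) a strictly convex potential. The KL divergence is the Bregman divergence generated by negative entropy on the probability simplex, which is one concrete entry point into the Legendre duality discussed in the tutorial.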
15:00 - 16:00 |
Break ↓ Freedom. (Zoom) |
16:00 - 16:30 |
Discussion session in Gathertown ↓ Open space for interactions and conversations. (Gathertown) |
16:30 - 17:30 |
Yalbi Itzel Balderas-Martinez: Panel: AI & Public Institutions, with Dr. Eduardo Ulises Moya, Dra. Paola Villarreal, and Dra. Yalbi Itzel Balderas Martinez ↓ A conversation with public actors and stakeholders, with a focus on AI use cases in Mexican public institutions (government, science planning, and healthcare). With Dr. Eduardo Ulises Moya, Dra. Paola Villarreal, and Dra. Yalbi Itzel Balderas Martinez.
Dr. Yalbi Balderas trained as a computer technician, completed her degree in biology at the Universidad Veracruzana, and later earned her Ph.D. in Biomedical Sciences in the Computational Genomics Program of the Genomic Sciences Center (UNAM). In Dr. Julio Collado's lab, she analyzed the regulatory network of the bacterium Escherichia coli K-12, contributing to the RegulonDB database. Later, she carried out postdoctoral research, supported by a DGAPA grant, at the Faculty of Sciences of UNAM in Dr. Annie Pardo's lab, applying text mining to scientific articles related to Idiopathic Pulmonary Fibrosis. After the postdoctoral stay, she was selected for a Cátedra CONACYT, working with bioinformatic models in the study of chronic lung diseases with Dr. Moisés Selman. She is currently a researcher at the National Institute of Respiratory Diseases. She is a member of the National System of Researchers in Mexico (Level 1) and participates in different research societies on respiratory diseases and bioinformatics.
Dr. Eduardo Ulises Moya Sanchez is currently the Artificial Intelligence director of the Jalisco government, the first director of this area in public administration in Mexico. He has a PhD from CINVESTAV, with a research stay in the applied mathematics for imaging laboratory of the University of La Rochelle, a master's degree in medical physics from UNAM, and a physics degree from the University of Guadalajara, and he is a member of the National System of Researchers of CONACYT (Level 1). He is a founding partner of Nética, a company that seeks to train young talents in STEAM. He recently collaborated with the high-performance artificial intelligence group at the Barcelona Supercomputing Center, working on deep learning. In 2019 he was awarded the Fulbright García-Robles scholarship to collaborate with the Quantitative Bioimaging Laboratory of the University of Texas at Dallas and the University of Texas Southwestern Medical Center.
Paola Villarreal is a data scientist and full-stack developer with over 22 years of international experience leading multidisciplinary teams in the public and non-profit sectors. She is the former head of data science and engineering at the National Council for Science and Technology of the Government of Mexico, where she helped coordinate the Covid-19 data efforts. For her work in the field of public-interest data science she has been recognized as one of MIT's Innovators Under 35 LATAM and one of the BBC's 100 Inspiring Women, and she was awarded a fellowship at the Berkman Klein Center for Internet and Society at Harvard University. (Zoom) |
Wednesday, October 27 | |
---|---|
09:00 - 09:45 |
Maks Ovsjanikov: Efficient learning on curved surfaces via diffusion ↓ In this talk I will describe several approaches for learning on curved surfaces, represented as point clouds or triangle meshes. I will first give a brief overview of geodesic convolutional neural networks (GCNNs) and their variants, and then present a recent approach that replaces this paradigm with an efficient framework based on diffusion. The key property of this approach is that it avoids potentially error-prone and costly operations, such as local patch discretization, in favour of robust and efficient building blocks based on learned diffusion and gradient computation. I will then show several applications, ranging from RNA surface segmentation to non-rigid shape correspondence, while highlighting the invariance of this technique to sampling and triangle mesh structure. (Zoom) |
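A diffusion building block of the kind referred to above can be written spectrally (a generic heat-diffusion sketch, not necessarily the exact operator used in the talk): given Laplace-Beltrami eigenpairs \((\lambda_i,\phi_i)\) of the surface, diffusing a feature function \(f\) for time \(t\) gives
\[
h_t(f) \;=\; \sum_{i} e^{-\lambda_i t}\,\langle f, \phi_i\rangle\, \phi_i,
\]
and making the diffusion time \(t\) a learned, per-channel parameter yields an operation that is largely insensitive to how the surface is sampled or triangulated.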
10:00 - 10:45 |
Xavier Pennec: Curvature effects in geometric statistics: empirical Fréchet mean and parallel transport accuracy ↓ Two fundamental tools for statistics on objects living in non-linear manifolds are the Fréchet mean and parallel transport. In this talk we present new results, based on Gavrilov's tensorial series expansions, that allow us to quantify the accuracy of these two fundamental tools and to put forward the impact of the manifold's curvature.
A central limit theorem for the empirical Fréchet mean was established in Riemannian manifolds by Bhattacharya & Patrangenaru in 2005. We propose an asymptotic development, valid in the Riemannian and affine cases, which better explains the role of the curvature in the concentration of the empirical Fréchet mean towards the population mean with a finite number of samples. We also establish a new non-asymptotic (small-sample) expansion in high-concentration conditions which shows a statistical bias on the empirical mean in the direction of the average gradient of the curvature. These curvature effects become important with large curvature and can drastically modify the estimation of the mean. They could partly explain the phenomenon of sticky means recently observed in stratified spaces with negative curvature, and of smeary means in positive curvature.
Parallel transport is a second major tool, for instance to transport longitudinal deformation trajectories from each individual towards a template brain shape before performing group-wise statistics in longitudinal analyses. More generally, parallel transport should be the natural geometric formulation for domain adaptation in machine learning in non-linear spaces. In previous works, we built on the Schild's ladder principle to engineer a more symmetric discrete parallel transport scheme based on iterated geodesic parallelograms, called the pole ladder (a one-rung numerical sketch follows the references below). This scheme is surprisingly exact in only one step on symmetric spaces, which makes it quite interesting for many applications involving simple symmetric manifolds. For general manifolds, Schild's and pole ladders were thought to be of first order with respect to the number of steps, similarly to other schemes based on Jacobi fields. However, the literature was lacking a real convergence analysis when the scheme is iterated. We show that the pole ladder naturally converges with quadratic speed, and that Schild's ladder can be modified to perform identically even when geodesics are approximated by numerical schemes. This contrasts with Jacobi field approximations, which are bound to linear convergence. The extra computational cost of ladder methods is thus easily compensated by a drastic reduction of the number of steps needed to achieve the requested accuracy.
* Xavier Pennec. Curvature effects on the empirical mean in Riemannian and affine manifolds: a non-asymptotic high-concentration expansion in the small-sample regime. Preprint, June 2019. arXiv:1906.07418
* Nicolas Guigui and Xavier Pennec. Numerical Accuracy of Ladder Schemes for Parallel Transport on Manifolds. Foundations of Computational Mathematics, June 2021. arXiv:2007.07585 (Zoom) |
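Below is a minimal, self-contained sketch of a single pole-ladder rung on the unit sphere (where, as a symmetric space, one rung is exact up to numerical error). The sphere exponential/logarithm maps and the function names are mine, written for illustration only; this is not code from the papers above.

```python
import numpy as np

def sphere_exp(x, v):
    """Riemannian exponential on the unit sphere: follow the geodesic from x with velocity v."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return x
    return np.cos(norm) * x + np.sin(norm) * (v / norm)

def sphere_log(x, y):
    """Riemannian logarithm on the unit sphere: tangent vector at x pointing towards y."""
    cos_theta = np.clip(np.dot(x, y), -1.0, 1.0)
    u = y - cos_theta * x              # project y onto the tangent space at x
    norm_u = np.linalg.norm(u)
    if norm_u < 1e-12:
        return np.zeros_like(x)
    return np.arccos(cos_theta) * (u / norm_u)

def pole_ladder_step(x0, x1, v, exp=sphere_exp, log=sphere_log):
    """Transport the tangent vector v from x0 to x1 with one pole-ladder rung:
    reflect the endpoint exp_x0(v) through the midpoint of the geodesic x0 -> x1."""
    m = exp(x0, 0.5 * log(x0, x1))     # midpoint of the main geodesic
    p = exp(x0, v)                     # endpoint of the 'rung' geodesic at x0
    q = exp(m, -log(m, p))             # geodesic symmetry of p about m
    return -log(x1, q)                 # transported vector, read off at x1

# Example: transport a tangent vector along the equator of the sphere.
x0 = np.array([1.0, 0.0, 0.0])
x1 = np.array([0.0, 1.0, 0.0])
v = 0.3 * np.array([0.0, 0.0, 1.0])    # tangent vector at x0, normal to the equatorial plane
print(pole_ladder_step(x0, x1, v))      # expected: approximately [0, 0, 0.3]
```

On a general manifold one would replace the closed-form sphere maps with numerical geodesics and iterate the rung along the curve, which is exactly the regime whose convergence rate the second reference analyses.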
11:00 - 11:45 |
Chris Connell: Tensor decomposition based network embedding algorithms for prediction tasks on dynamic networks. ↓ Classical network embeddings create a low dimensional representation of the learned relationships between features across nodes. Such embeddings are important for tasks such as link prediction and node classification. We consider low dimensional embeddings of “dynamic networks” -- a family of time varying networks where there exist both temporal and spatial link relationships between nodes. We present novel embedding methods for a dynamic network based on higher order tensor decompositions for tensorial representations of the dynamic network. Our embeddings are analogous to certain classical spectral embedding methods for static networks. We demonstrate the effectiveness of our approach by comparing our algorithms' performance on the link prediction task against an array of current baseline methods across three distinct real-world dynamic networks. Finally, we provide a mathematical rationale for this effectiveness in the regime of small incremental changes. This is joint work with Yang Wang. (Zoom) |
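As a rough illustration of embedding a dynamic network through a tensor representation, the sketch below stacks the adjacency matrices over time into a third-order tensor and takes an SVD of its node-mode unfolding to obtain node embeddings. This is a generic unfolding-based sketch under my own naming, not the decomposition proposed in the talk.

```python
import numpy as np

def dynamic_network_embedding(adj_snapshots, dim):
    """Embed the nodes of a dynamic network.

    adj_snapshots: array of shape (T, n, n), one adjacency matrix per time step.
    Returns an (n, dim) embedding from the SVD of the node-mode unfolding of the tensor.
    """
    T, n, _ = adj_snapshots.shape
    # Mode-1 (node) unfolding: each node gets a row collecting its links across all time steps.
    unfolding = adj_snapshots.transpose(1, 0, 2).reshape(n, T * n)
    U, s, _ = np.linalg.svd(unfolding, full_matrices=False)
    return U[:, :dim] * s[:dim]        # scale the singular vectors, as in spectral embeddings

# Toy example: 5 snapshots of a random 20-node undirected network.
rng = np.random.default_rng(0)
snapshots = []
for _ in range(5):
    a = (rng.random((20, 20)) < 0.2).astype(float)
    snapshots.append(np.triu(a, 1) + np.triu(a, 1).T)   # symmetrize to an undirected graph
snapshots = np.array(snapshots)

emb = dynamic_network_embedding(snapshots, dim=4)
print(emb.shape)  # (20, 4)
```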
12:00 - 13:00 | Lunch (Zoom/Gathertown) |
13:00 - 13:45 |
Nina Miolane: Geomstats: a Python Package for Riemannian Geometry in Statistics and Machine Learning ↓ We introduce Geomstats, an open-source Python package for computations and statistics on nonlinear manifolds that appear in machine learning applications, such as: hyperbolic spaces, spaces of symmetric positive definite matrices, Lie groups of transformations, and many more. We provide object-oriented and extensively unit-tested implementations. Manifolds come equipped with families of Riemannian metrics with associated exponential and logarithmic maps, geodesics, and parallel transport. Statistics and learning algorithms provide methods for estimation, clustering, and dimension reduction on manifolds. All associated operations provide support for different execution backends --- namely NumPy, Autograd, PyTorch, and TensorFlow. This talk presents the package, compares it with related libraries, and provides relevant code examples. We show that Geomstats provides reliable building blocks to both foster research in differential geometry and statistics and democratize the use of Riemannian geometry in statistics and machine learning. The source code is freely available under the MIT license at https://github.com/geomstats/geomstats. (Zoom) |
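A minimal usage sketch of the package described above, on the 2-sphere. Exact argument names and module layout can vary slightly between Geomstats versions, so treat this as indicative rather than canonical.

```python
import numpy as np
from geomstats.geometry.hypersphere import Hypersphere

sphere = Hypersphere(dim=2)                      # unit sphere S^2 embedded in R^3
points = sphere.random_uniform(n_samples=2)      # two random points on the sphere
base, target = points[0], points[1]

# Riemannian log: tangent vector at `base` pointing towards `target`.
tangent_vec = sphere.metric.log(target, base_point=base)

# Riemannian exp: shooting the geodesic recovers the target point.
recovered = sphere.metric.exp(tangent_vec, base_point=base)
print(np.allclose(recovered, target))            # True, up to numerical tolerance

# Geodesic distance between the two points.
print(sphere.metric.dist(base, target))
```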
14:00 - 14:45 |
Katy Craig: A Blob Method for Diffusion and Applications to Sampling and Two Layer Neural Networks ↓ Given a desired target distribution and an initial guess of that distribution, composed of finitely many samples, what is the best way to evolve the locations of the samples so that they accurately represent the desired distribution? A classical solution to this problem is to allow the samples to evolve according to Langevin dynamics, a stochastic particle method for the Fokker-Planck equation. In today’s talk, I will contrast this classical approach with a deterministic particle method corresponding to the porous medium equation. This method corresponds exactly to the mean-field dynamics of training a two layer neural network for a radial basis function activation function. We prove that, as the number of samples increases and the variance of the radial basis function goes to zero, the particle method converges to a bounded entropy solution of the porous medium equation. As a consequence, we obtain both a novel method for sampling probability distributions as well as insight into the training dynamics of two layer neural networks in the mean field regime. (Zoom) |
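For contrast with the deterministic blob method described in the talk, here is the classical baseline it is compared against: an (unadjusted) Langevin particle scheme for sampling a target density proportional to exp(-U). This is a textbook sketch with my own toy target, not code from the talk.

```python
import numpy as np

def langevin_sample(grad_U, x0, step=1e-2, n_steps=5000, rng=None):
    """Unadjusted Langevin dynamics: x <- x - step * grad U(x) + sqrt(2 * step) * noise."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Toy target: standard Gaussian in 2D, with U(x) = |x|^2 / 2 so that grad U(x) = x.
samples = np.array([langevin_sample(lambda x: x, np.zeros(2), rng=np.random.default_rng(i))
                    for i in range(200)])
print(samples.mean(axis=0), samples.var(axis=0))   # roughly zero mean, unit variance
```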
15:00 - 16:00 |
Break ↓ Freedom. (Zoom) |
16:00 - 16:30 |
Discussion session in Gathertown ↓ Open space for conversation and collaboration. (Gathertown) |
16:30 - 17:30 |
Panel: Professional Development, with Prof. Tina Eliassi-Rad and Prof. Jesús de Loera ↓ Tina Eliassi-Rad is a Professor of Computer Science at Northeastern University. She is also a core faculty member at Northeastern's Network Science Institute and the Institute for Experiential AI. In addition, she is an external faculty member at the Santa Fe Institute and the Vermont Complex Systems Center. Prior to joining Northeastern, Tina was an Associate Professor of Computer Science at Rutgers University; and before that she was a Member of Technical Staff and Principal Investigator at Lawrence Livermore National Laboratory. Tina earned her Ph.D. in Computer Sciences (with a minor in Mathematical Statistics) at the University of Wisconsin-Madison. Her research is at the intersection of data mining, machine learning, and network science. She has over 100 peer-reviewed publications (including a few best paper and best paper runner-up awards); and has given over 200 invited talks and 14 tutorials. Tina's work has been applied to personalized search on the World-Wide Web, statistical indices of large-scale scientific simulation data, fraud detection, mobile ad targeting, cyber situational awareness, and ethics in machine learning. Her algorithms have been incorporated into systems used by the government and industry (e.g., IBM System G Graph Analytics) as well as open-source software (e.g., Stanford Network Analysis Project). In 2017, Tina served as the program co-chair for the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (a.k.a. KDD, which is the premier conference on data mining) and as the program co-chair for the International Conference on Network Science (a.k.a. NetSci, which is the premier conference on network science). In 2020, she served as the program co-chair for the International Conference on Computational Social Science (a.k.a. IC2S2, which is the premier conference on computational social science). Tina received an Outstanding Mentor Award from the Office of Science at the US Department of Energy in 2010; became a Fellow of the ISI Foundation (in Turin Italy) in 2019; and was named one of the 100 Brilliant Women in AI Ethics for 2021.
Jesús A. De Loera is Professor of Mathematics and Chair of the Graduate Group in Applied Mathematics at the University of California, Davis. For his contributions to discrete mathematics, optimization, and algorithms, he was elected a fellow of both the American Mathematical Society and the Society for Industrial and Applied Mathematics. In 2020 he won the Farkas Prize of the INFORMS Optimization Society for his work in optimization algorithms. He received the 2018 Distinguished Teaching Award of the College of Letters and Science and the 2013 Chancellor's Award in undergraduate research mentoring. In 2017 he won the Golden Section Teaching Award from the Mathematical Association of America. He has directed 15 Ph.D. dissertations and over 60 undergraduate theses. He is a proud alum of UNAM (¡vivan los pumas!) (Zoom) |
Thursday, October 28 | |
---|---|
09:00 - 09:45 |
Alex Cloninger: Learning with Optimal Transport ↓ Discriminating between distributions is an important problem in a number of scientific fields. This motivated the introduction of Linear Optimal Transportation (LOT), which has a number of benefits when it comes to speed of computation and to determining classification boundaries. We characterize a number of settings in which the LOT embeds families of distributions into a space in which they are linearly separable. This is true in arbitrary dimensions, and for families of distributions generated through a variety of actions on a fixed distribution. We also establish results on discrete spaces using Entropically Regularized Optimal Transport, and establish results about active learning with a small number of labels in the space of LOT embeddings. This is joint work with Caroline Moosmueller (UCSD). (Zoom) |
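A bite-sized illustration of the linear optimal transport (LOT) embedding in one dimension, where optimal maps are compositions of quantile functions: each sample set is embedded by its Monge map from a common reference, and Euclidean distances between embeddings recover Wasserstein distances. The naming and the restriction to 1D are mine; the talk treats the general setting.

```python
import numpy as np

def lot_embedding(samples, reference):
    """Embed an empirical 1D distribution via its optimal transport map from `reference`.

    In 1D the optimal map is T = F_samples^{-1} o F_reference, so evaluating the
    quantile function of `samples` at the reference's ranks gives the embedding.
    """
    ranks = (np.argsort(np.argsort(reference)) + 0.5) / len(reference)  # F_reference at each ref point
    return np.quantile(samples, ranks)

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=500)         # fixed reference measure (samples)
mu = rng.normal(2.0, 1.0, size=400)                # two shifted/scaled Gaussian sample sets
nu = rng.normal(-1.0, 0.5, size=300)

e_mu, e_nu = lot_embedding(mu, reference), lot_embedding(nu, reference)
# Euclidean (L2) distance between embeddings approximates the W_2 distance between mu and nu.
print(np.sqrt(np.mean((e_mu - e_nu) ** 2)))
```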
10:00 - 10:45 |
Ron Kimmel: On Geometry and Learning ↓ Geometry means understanding in the sense that it involves finding the most basic invariants or Ockham’s razor explanation for a given phenomenon. At the other end, modern Machine Learning has little to do with explanation or interpretation of solutions to a given problem.
I’ll try to give some examples of the relation between learning and geometry, focusing on learning geometry: starting with the most basic notion of planar shape invariants, moving to efficient distance computation on surfaces, and treating surfaces as metric spaces within a deep learning framework. I will introduce some links between these two seemingly orthogonal philosophical directions. (Zoom) |
11:00 - 11:45 |
Pratik Chaudhari: Does the Data Induce Capacity Control in Deep Learning? ↓ Deep networks are mysterious. These over-parametrized machine learning models, trained with rudimentary optimization algorithms on non-convex landscapes in millions of dimensions, have defied attempts to put a sound theoretical footing beneath their impressive performance.
This talk will shed light upon some of these mysteries. The first part of the talk will employ ideas from thermodynamics and optimal transport to paint a picture of the training process of deep networks and unravel a number of peculiar properties of algorithms like stochastic gradient descent. The second part of the talk will argue that these peculiarities observed during training, as well as the anomalous generalization, may be coming from the data that we train upon. This part will discuss how typical datasets are "sloppy", i.e., the data correlation matrix has a strong structure and consists of a large number of small eigenvalues that are distributed uniformly over an exponentially large range. This structure is completely mirrored in a trained deep network: a number of quantities, such as the Hessian, the Fisher Information Matrix, activation correlations, and Jacobians, are also sloppy. This talk will develop these concepts to demonstrate analytical, non-vacuous generalization bounds. (A small numerical illustration of such a sloppy spectrum is sketched after the paper references below.)
This talk will discuss work from the following two papers.
1. Rubing Yang, Jialin Mao, and Pratik Chaudhari. Does the data induce capacity control in deep learning? arXiv preprint, 2021. https://arxiv.org/abs/2110.14163
2. Pratik Chaudhari and Stefano Soatto. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. ICLR 2018. https://arxiv.org/abs/1710.11029 (Zoom) |
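A small numerical illustration of what a "sloppy" correlation spectrum looks like, using synthetic data whose eigenvalues are spread over many orders of magnitude. This is purely illustrative, with parameters of my own choosing; it is not the analysis from the papers above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, dim = 2000, 100

# Synthetic 'sloppy' data: directions whose scales decay geometrically, so the
# correlation eigenvalues are spread roughly uniformly on a log scale.
scales = np.logspace(0, -4, dim)
X = rng.standard_normal((n_samples, dim)) * scales

corr = X.T @ X / n_samples                 # data correlation (second-moment) matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues, largest first

print("largest/smallest eigenvalue ratio:", eigvals[0] / eigvals[-1])
print("decades spanned by the spectrum:", np.log10(eigvals[0] / eigvals[-1]))
```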
12:00 - 13:00 | Lunch (Zoom/Gathertown) |
13:00 - 13:45 |
David Alvarez Melis: Principled Data Manipulation with Optimal Transport ↓ Success stories in machine learning seem to be ubiquitous, but they tend to be concentrated on 'ideal' scenarios where clean, homogeneous, labeled data are abundant. But machine learning in practice is rarely so 'pristine'. In most real-life applications, clean data are scarce, collected from multiple heterogeneous sources, and often only partially labeled. Thus, successful application of ML in practice often requires substantial effort in terms of dataset preprocessing, such as augmenting, merging, mixing, or reducing datasets. In this talk I will present some recent work that seeks to formalize all these forms of dataset 'manipulation' under a unified approach based on the theory of Optimal Transport. Through applications in machine translation, transfer learning, and dataset shaping, I will show that besides enjoying sound theoretical footing, these approaches yield efficient, flexible, and high-performing algorithms. This talk is based on joint work with Tommi Jaakkola, Stefanie Jegelka, Nicolo Fusi, Youssef Mroueh, and Yair Schiff. (Zoom) |
14:00 - 14:45 |
Elizabeth Gross: Learning phylogenetic networks using invariants ↓ Phylogenetic networks provide a means of describing the evolutionary history of sets of species believed to have undergone hybridization or gene flow during the course of their evolution. The mutation process for a set of such species can be modeled as a Markov process on a phylogenetic network. Previous work has shown that site-pattern probability distributions from a Jukes-Cantor phylogenetic network model must satisfy certain algebraic invariants. As a corollary, aspects of the phylogenetic network are theoretically identifiable from site-pattern frequencies. In practice, because of the probabilistic nature of sequence evolution, the phylogenetic network invariants will rarely be satisfied exactly, even for data generated under the model. Thus, using network invariants for inferring phylogenetic networks requires some means of interpreting the residuals, or deviations from zero, when observed site-pattern frequencies are substituted into the invariants. In this work, we propose a machine learning algorithm utilizing invariants to infer small, level-one phylogenetic networks. Given a data set, the algorithm is trained on model data to learn the patterns of residuals corresponding to different network structures in order to classify the network that produced the data. This is joint work with Travis Barton, Colby Long, and Joseph Rusinko. (Zoom) |
15:00 - 15:45 |
Break ↓ Freedom. (Zoom) |
15:45 - 16:30 |
Soledad Villar: Equivariant machine learning structured like classical physics ↓ There has been enormous progress in the last few years in designing neural networks that respect the fundamental symmetries and coordinate freedoms of physical law. Some of these frameworks make use of irreducible representations, some make use of high-order tensor objects, and some apply symmetry-enforcing constraints. Different physical laws obey different combinations of fundamental symmetries, but a large fraction (possibly all) of classical physics is equivariant to translation, rotation, reflection (parity), boost (relativity), and permutations. In this talk we show that it is simple to parameterize universally approximating polynomial functions that are equivariant under these symmetries, or under the Euclidean, Lorentz, and Poincare groups, at any dimensionality d. The key observation is that nonlinear O(d)-equivariant (and related-group-equivariant) functions can be expressed in terms of a lightweight collection of scalars---scalar products and scalar contractions of the scalar, vector, and tensor inputs. Our numerical results show that our approach can be used to design simple equivariant deep learning models for classical physics with good scaling. (Zoom) |
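The construction mentioned above, building equivariant vector outputs from invariant scalar products, is easy to sketch numerically. Below, a toy O(d)-equivariant function outputs a combination of the input vectors with coefficients that depend only on their Gram matrix; the particular nonlinearity is an arbitrary choice of mine, used only to check equivariance.

```python
import numpy as np

def equivariant_fn(vectors):
    """O(d)-equivariant map R^{n x d} -> R^d.

    Coefficients depend only on pairwise inner products (O(d) invariants),
    and the output is a linear combination of the input vectors, so
    f(V Q^T) = f(V) Q^T for any orthogonal Q.
    """
    gram = vectors @ vectors.T                  # invariant scalars <v_i, v_j>
    coeffs = np.tanh(gram).sum(axis=1)          # any scalar function of the invariants
    return coeffs @ vectors                     # equivariant combination sum_i c_i v_i

rng = np.random.default_rng(0)
V = rng.standard_normal((5, 3))                 # five vectors in R^3

# Random orthogonal matrix Q via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

out_rotated_input = equivariant_fn(V @ Q.T)     # rotate the inputs first
rotated_output = equivariant_fn(V) @ Q.T        # rotate the output instead
print(np.allclose(out_rotated_input, rotated_output))   # True: the map is equivariant
```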
16:30 - 17:30 |
Ilke Demir: Panel: AI & Industry, with Dr. Juan Carlos Catana, Dr. Ilke Demir, and Dr. David Alvarez Melis ↓ A conversation with several actors and researchers about their roles in AI & industry, with Ilke Demir (Intel), Juan Carlos Catana (HP Labs Mx), and David Alvarez Melis (Microsoft Research). (Zoom) |
Friday, October 29 | |
---|---|
09:00 - 09:45 |
Michael Bronstein: Neural diffusion PDEs, differential geometry, and graph neural networks ↓ In this talk, I will make connections between Graph Neural Networks (GNNs) and non-Euclidean diffusion equations. I will show that drawing on methods from the domain of differential geometry, it is possible to provide a principled view on such GNN architectural choices as positional encoding and graph rewiring as well as explain and remedy the phenomena of oversquashing and bottlenecks. (Zoom) |
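The basic connection alluded to above can be stated with the graph heat equation (standard background, not the specific architectures of the talk): for node features X(t) and graph Laplacian L,
\[
\frac{\partial X(t)}{\partial t} \;=\; -\,L\,X(t),
\qquad
X(t) \;=\; e^{-tL}\,X(0),
\]
and an explicit Euler discretisation \(X^{(k+1)} = X^{(k)} - \tau L X^{(k)}\) is a simple (linear) message-passing layer. The GNNs discussed in the talk arise from richer, learnable, possibly nonlinear diffusion operators and from modifying the underlying geometry, e.g., through graph rewiring.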
10:00 - 10:45 |
Nina Otter: A topological perspective on weather regimes ↓ In this talk I will discuss recent and ongoing work on using topology to define and study weather regimes. The talk is based on joint work with K. Strommen, M. Chantry and J. Dorrington, with preprint available at https://arxiv.org/abs/2104.03196 (Zoom) |
11:00 - 11:45 |
Joe Kileel: Structure in point clouds by tensor decompositions ↓ In this talk, I will discuss how higher-order tensor decompositions can be used to identify geometric structure in noisy point clouds in Euclidean space. I will present methods that start with a point cloud, construct an associated tensor, and then suitably decompose that tensor to obtain information about the original point cloud. Special attention will be paid to the big challenges surrounding tensor methods – non-convexity and high-dimensionality. Real-world applications include protein structure determination in cryo-electron microscopy and rigid motion segmentation in computer vision. (Zoom) |
12:00 - 13:00 | Lunch (Zoom/Gathertown) |
13:00 - 13:45 |
Eliza O'Reilly: Random Tessellation Features and Forests ↓ The Mondrian process in machine learning is a recursive partition of space with random axis-aligned cuts used to build random forests and Laplace kernel approximations. The construction allows for efficient online algorithms, but the restriction to axis-aligned cuts does not capture dependencies between features. By viewing the Mondrian as a special case of the stable under iteration (STIT) process in stochastic geometry, we resolve open questions about the generalization of cut directions. We utilize the theory of stationary random tessellations to show that STIT processes approximate a large class of stationary kernels and that STIT forests achieve minimax rates for Lipschitz functions (forests and trees) and C^2 functions (forests only). This work opens many new questions at the novel intersection of stochastic geometry and machine learning. Based on joint work with Ngoc Tran. (Zoom) |
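For concreteness, here is a compact sampler for the Mondrian process on an axis-aligned box, following the standard generative recursion: exponential waiting times with rate equal to the sum of side lengths, cut axis chosen proportionally to side length, and cut position uniform along that side. Function and variable names are mine, for illustration only.

```python
import numpy as np

def sample_mondrian(lower, upper, budget, rng):
    """Recursively sample a Mondrian partition of the box [lower, upper] with lifetime `budget`.

    Returns a list of leaf boxes (lower, upper). The time to the next cut is exponential
    with rate equal to the sum of side lengths; the cut axis is chosen with probability
    proportional to side length and the cut position uniformly along that side.
    """
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    sides = upper - lower
    rate = sides.sum()
    cost = rng.exponential(1.0 / rate) if rate > 0 else np.inf
    if cost > budget:
        return [(lower, upper)]                       # no cut within the remaining budget: leaf box
    axis = rng.choice(len(sides), p=sides / rate)     # cut dimension, proportional to side length
    position = rng.uniform(lower[axis], upper[axis])  # cut location, uniform on that side
    left_upper, right_lower = upper.copy(), lower.copy()
    left_upper[axis], right_lower[axis] = position, position
    remaining = budget - cost
    return (sample_mondrian(lower, left_upper, remaining, rng)
            + sample_mondrian(right_lower, upper, remaining, rng))

rng = np.random.default_rng(0)
leaves = sample_mondrian([0.0, 0.0], [1.0, 1.0], budget=5.0, rng=rng)
print(len(leaves), "cells in the sampled Mondrian partition of the unit square")
```

The STIT generalization discussed in the talk replaces the axis-aligned cut directions with draws from a general directional distribution while keeping the same recursive lifetime structure.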
14:00 - 14:45 |
Carina Curto: Topological analysis of symmetric matrices and applications to neural data ↓ Betti curves of symmetric matrices are a class of matrix invariants that depend only on the relative ordering of matrix entries. These invariants are computed using persistent homology, and can be used to detect underlying structure in biological data that may otherwise be obscured by monotone nonlinearities. Here we review some previous applications of Betti curves to hippocampal and olfactory data. We then show some new theorems that characterize Betti curves of rank-1 symmetric matrices, and illustrate how these Betti curve signatures arise in natural data obtained from calcium imaging of neural activity in zebrafish. (Zoom) |
15:00 - 15:45 |
Concluding Remarks and Conversations in Zoom/Gathertown ↓ Thanks to everyone! (Zoom/Gathertown) |
16:00 - 16:30 |
Open sessions in Zoom/Gathertown ↓ Open spaces for conversation and collaboration. (Zoom) |