Schedule for: 16w5044 - Computational Complexity
Beginning on Sunday, September 4 and ending on Friday, September 9, 2016
All times in Banff, Alberta time, MDT (UTC-6).
Sunday, September 4 | |
---|---|
16:00 - 17:30 | Check-in begins at 16:00 on Sunday and is open 24 hours (Front Desk - Professional Development Centre) |
17:30 - 19:30 |
Dinner ↓ A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
20:00 - 22:00 | Informal gathering (Corbett Hall Lounge (CH 2110)) |
Monday, September 5 | |
---|---|
07:00 - 08:45 |
Breakfast ↓ Breakfast is served daily between 7 and 9am in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
08:45 - 09:00 | Introduction and Welcome by BIRS Station Manager (TCPL 201) |
09:00 - 09:50 |
Gil Cohen: Recent advances in randomness extractors and applications ↓ We will survey recent developments in randomness extractors, in particular non-malleable extractors and their applications to two-source extractors and privacy amplification, as well as their inner workings, which are based on two new pseudorandom objects: independence-preserving mergers and correlation breakers. (TCPL 201) |
09:50 - 10:10 | Coffee Break (TCPL Foyer) |
10:10 - 11:00 |
David Zuckerman: Explicit Two-Source Extractors and Resilient Functions ↓ We explicitly construct an extractor for two independent sources on n bits, each with min-entropy at least log^C n for a large enough constant C. Our extractor outputs one bit and has error n^{-\Omega(1)}. The best previous extractor, by Bourgain, required each source to have min-entropy .499n.
A key ingredient in our construction is an explicit construction of a monotone, almost-balanced boolean function on n bits that is resilient to coalitions of size n^{1-\delta}, for any \delta>0. In fact, our construction is stronger in that it gives an explicit extractor for a generalization of non-oblivious bit-fixing sources on n bits, where some unknown n-q bits are chosen almost polylog(n)-wise independently, and the remaining q=n^{1-\delta} bits are chosen by an adversary as an arbitrary function of the n-q bits. The best previous construction, by Viola, achieved q=n^{1/2-\delta}.
Our explicit two-source extractor directly implies an explicit construction of a 2^{(\log\log N)^{O(1)}}-Ramsey graph over N vertices, improving bounds obtained by Barak et al. and matching independent work by Cohen.
Joint work with Eshan Chattopadhyay. (TCPL 201) |
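For readers outside the area, the central object above has a standard definition (background, not part of the abstract): a function Ext : {0,1}^n × {0,1}^n → {0,1} is a two-source extractor for min-entropy k with error \varepsilon if

```latex
% Two-source extractor with one output bit (standard definition);
% in the result above, k = log^C n and the error is n^{-Omega(1)}.
\[
  \Bigl| \Pr_{X,Y}\bigl[\mathrm{Ext}(X,Y) = 1\bigr] - \tfrac{1}{2} \Bigr| \le \varepsilon
  \qquad \text{for all independent } X, Y \text{ with } H_\infty(X),\, H_\infty(Y) \ge k .
\]
```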
11:05 - 11:55 |
Eshan Chattopadhyay: Alternating Extraction, and its applications to Non-Malleable Extractors ↓ Alternating Extraction was introduced by Dziembowski and Pietrzak as a technique in cryptography, and has since found many applications in explicit constructions of extractors and related pseudorandom objects.
I will present this technique and show its applications in constructing non-malleable extractors, based on work with Goyal and Li (STOC '16), and a follow-up that is joint work with Xin Li (FOCS '16). (TCPL 201) |
12:00 - 13:00 | Lunch (Vistas Dining Room) |
13:10 - 14:00 |
Avi Wigderson: Theory and applications of operator scaling (I) ↓ In these talks we will explain and explore the "non-commutative symbolic singularity problem", and its myriad incarnations in commutative and non-commutative algebra, computational complexity, optimization, quantum information theory, analytic inequalities and other areas. We will describe two efficient algorithms solving all these related problems, and how their analysis combines ideas from all these areas.
The problem these algorithms solve is non-convex, and we hope they will have many other applications.
Based on two joint works with Leonid Gurvits and Rafael Oliveira, as well as works of Derksen-Makam and Ivanyos-Qiao-Subrahmanyam. (TCPL 201) |
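For orientation, a standard formulation of the problem named above (background, not from the abstract): given matrices A_1, …, A_m, decide whether the symbolic matrix below is singular when the variables x_i do not commute (equivalently, over the free skew field); over commuting variables this is the classical symbolic determinant identity testing problem.

```latex
% The (non-commutative) symbolic singularity problem concerns the matrix
\[
  A(x) \;=\; x_1 A_1 + x_2 A_2 + \cdots + x_m A_m .
\]
```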
14:00 - 14:20 |
Group Photo ↓ Meet in foyer of TCPL to participate in the BIRS group photo. The photograph will be taken outdoors, so dress appropriately for the weather. Please don't be late, or you might not be in the official group photo! (TCPL Foyer) |
14:20 - 15:10 |
Ankit Garg: Theory and applications of operator scaling (II) ↓ In these talks we will explain and explore the "non-commutative symbolic singularity problem", and its myriad incarnations in commutative and non-commutative algebra, computational complexity, optimization, quantum information theory, analytic inequalities and other areas. We will describe two efficient algorithms solving all these related problems, and how their analysis combines ideas from all these areas.
The problem these algorithms solve is non-convex, and we hope they will have many other applications.
Based on two joint works with Leonid Gurvits and Rafael Oliveira, as well as works of Derksen-Makam and Ivanyos-Qiao-Subrahmanyam. (TCPL 201) |
15:10 - 15:30 | Coffee Break (TCPL Foyer) |
15:30 - 15:55 |
Benjamin Rossman: The Formula Complexity of Subgraph Isomorphism ↓ I will report recent progress on the formula complexity of the (colorful) subgraph isomorphism problem.
(1) In previous work with Li and Razborov (FOCS 2014), we showed that the AC0-circuit complexity of G-subgraph isomorphism is n^{\tilde{\Theta}(treewidth(G))}. In new work, I show that the AC0-formula complexity of G-subgraph isomorphism is n^{poly(treedepth(G))} via a generalization of the "pathset technique" of Rossman (STOC 2014). This result has connections to finite model theory (a new proof of an improved homomorphism preservation theorem) and graph minor theory (a new excluded-minor approximation of bounded treedepth, joint with Ken-ichi Kawarabayashi).
(2) Another extension of the "pathset technique" gives optimal size-depth trade-offs, including a tight n^{\Omega(dk^{1/d})} formula-size lower bound for distance-k st-connectivity for small d and k, improving recent results of Chen et al. (STOC 2016). (TCPL 201) |
16:00 - 16:25 |
Venkatesan Guruswami: What fraction of worst-case bit deletions are correctable? ↓ We will discuss recent code constructions for recovery from a large fraction of worst-case bit deletions. Specifically, we will construct a family of binary codes of positive rate allowing efficient recovery from a fraction of deletions approaching \sqrt{2}-1 > 0.414 (though we might focus on a simpler construction for deletion fraction 1/3 in the talk). Previously, even non-constructively, the largest deletion fraction known to be correctable with positive rate was around 0.17. For alphabet size k, we construct codes to correct a deletion fraction exceeding (k-1)/(k+1), with (k-1)/k being a trivial upper limit. Whether a deletion fraction approaching 1/2 is correctable by binary codes remains a tantalizing open question. (Joint work with Boris Bukh and Johan Håstad) (TCPL 201) |
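A quick arithmetic sanity check of the bounds quoted above at alphabet size k = 2 (an editorial note, not part of the abstract):

```latex
% At k = 2 the general bounds specialize to the binary numbers in the abstract:
\[
  \frac{k-1}{k+1} \;=\; \frac{1}{3},
  \qquad
  \frac{k-1}{k} \;=\; \frac{1}{2},
\]
% matching the simpler 1/3-deletion construction and the open question
% of approaching deletion fraction 1/2 for binary codes.
```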
17:30 - 19:30 |
Dinner ↓ A buffet dinner is served daily between 5:30pm and 7:30pm in the Vistas Dining Room, the top floor of the Sally Borden Building. (Vistas Dining Room) |
20:00 - 21:00 | Open Problems session (TCPL 201) |
Tuesday, September 6 | |
---|---|
07:00 - 09:00 | Breakfast (Vistas Dining Room) |
09:00 - 09:50 |
Boaz Barak: Sum of squares and computational Bayesianism ↓ The Sum of Squares (SOS) algorithm (Parrilo, Lasserre) is a powerful convex programming hierarchy that captures many of the best-known techniques for various algorithmic problems.
In this talk I will discuss how we can interpret the state of this algorithm as modelling the beliefs of a computationally bounded agent about some unknown quantity via Bayesian "pseudo probabilities" and moreover how this viewpoint helps in obtaining new upper and lower bounds for this algorithm.
In particular I will outline how we can use this approach to show that, for every \epsilon > 0, the SOS algorithm cannot distinguish in polynomial time between a random Erdos-Renyi G(n,1/2) graph and such a graph in which a clique of size n^{1/2-\epsilon} was planted.
I will also show how the approach can be used to connect the Best Separable State problem (also known as the QMA(2) verification probability problem) of finding a non-entangled state maximizing a certain measurement to questions related to the log-rank conjecture from communication complexity. We use this connection, together with the techniques underlying Lovett's bounds on the log-rank question, to obtain a better-than-brute-force algorithm for the Best Separable State problem.
The talk will be based on joint works with Sam Hopkins, Jon Kelner, Pravesh Kothari, Ankur Moitra, Aaron Potechin and David Steurer.
It will be a high-level talk assuming no prior knowledge of SOS. If there is interest, I (or my co-authors) could offer an afternoon talk with the full proofs and more details. (TCPL 201) |
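As standard background for the talk above (not specific to it): the Bayesian "pseudo-probabilities" are usually packaged as a degree-d pseudo-expectation, a linear functional on polynomials of degree at most d that the SOS algorithm can compute and that satisfies

```latex
% Degree-d pseudo-expectation: normalized and nonnegative on squares,
% mimicking a true expectation up to degree d.
\[
  \widetilde{\mathbb{E}}[1] = 1,
  \qquad
  \widetilde{\mathbb{E}}\bigl[p^2\bigr] \ge 0
  \quad \text{for every polynomial } p \text{ with } \deg(p) \le d/2 .
\]
```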
09:50 - 10:10 | Coffee Break (TCPL Foyer) |
10:10 - 11:00 |
David Steurer: Polynomial-time tensor decomposition with sum-of-squares ↓ Tensor decomposition is at the heart of many computational problems that arise in machine learning and other kinds of data analyses.
However, unlike for matrices, it is NP-hard to compute good decompositions for higher-order tensors.
Nevertheless, a sequence of results shows that, under mild assumptions on the instances, good tensor decompositions can be found in polynomial time.
I will present a recent framework for tensor decomposition based on the sum-of-squares method that unifies and extends these results.
As a consequence, we obtain the first polynomial-time algorithm to decompose overcomplete 4-tensors in the smoothed analysis setting, even in the presence of a polynomial amount of noise.
Based on a joint work with Tengyu Ma and Jonathan Shi (to appear at FOCS 2016). (TCPL 201) |
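To fix notation (standard background; the symbols here are not taken from the abstract): in the simplest symmetric setting, tensor decomposition asks to recover unknown components a_1, …, a_m in R^n from

```latex
% Order-3 symmetric tensor decomposition; "overcomplete" means m > n,
% in which case the components cannot all be orthogonal.
\[
  T \;=\; \sum_{i=1}^{m} a_i \otimes a_i \otimes a_i \;+\; E ,
  \qquad E \;=\; \text{noise} .
\]
```

The 4-tensor result mentioned above is the analogous problem with fourth tensor powers a_i^{\otimes 4}.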
11:05 - 11:55 |
Tselil Schramm: Strongly refuting random constraint satisfaction problems below the spectral threshold ↓ Random CSPs are known to be unsatisfiable with high probability when the number of clauses is at least linear in the number of variables. However, the task of refuting random CSPs is hypothesized to be computationally hard when there are O(N) clauses; in fact, the best known efficient algorithms for the strong refutation of most k-CSPs require instances with at least \Omega(N^{k/2}) clauses, a polynomial factor beyond the unsatisfiability threshold at \Theta(N).
In this talk, I will describe a new spectral algorithm that strongly refutes random CSPs with fewer clauses, given more time. Our algorithm interpolates between the efficient algorithms at the spectral threshold and brute-force search at the unsatisfiability threshold.
This result also implies tight upper bounds on the value of sum-of-squares relaxations for random CSPs, as well as on the value of sum-of-squares relaxations for random polynomial maximization over the unit sphere.
Based on joint work with Prasad Raghavendra and Satish Rao. (TCPL 201) |
12:00 - 13:00 | Lunch (Vistas Dining Room) |
13:10 - 14:00 |
Mika Göös: Extension Complexity of Independent Set Polytopes ↓ We exhibit an $n$-node graph whose independent set polytope requires extended formulations of size exponential in $\Omega(n/\log n)$. Previously, no explicit examples of $n$-dimensional $0/1$-polytopes were known with extension complexity larger than exponential in $\Theta(\sqrt{n})$. Our construction is inspired by a relatively little-known connection between extended formulations and (monotone) circuit depth.
Joint work with Rahul Jain and Thomas Watson. (TCPL 201) |
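For reference, the measure in the result above has a standard definition (background, not from the abstract): the extension complexity of a polytope P is the minimum number of facets of any higher-dimensional polytope that linearly projects onto P,

```latex
% Extension complexity: the smallest lifted linear description of P.
\[
  \mathrm{xc}(P) \;=\; \min\bigl\{\, \#\text{facets}(Q) \;:\; Q \text{ a polytope with } \pi(Q) = P \text{ for some linear map } \pi \,\bigr\}.
\]
```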
14:05 - 14:55 |
Robert Robere: Exponential Lower Bounds for Monotone Computation by Algebraic Gaps ↓ In 1990, following up on the (now renowned) work of Karchmer and Wigderson connecting communication complexity and formula size, Razborov introduced a simple rank-based complexity measure and used it to give a nice proof of quasipolynomial monotone formula lower bounds for a monotone function in NP. It turns out that the rank measure can be used to lower bound both undirected switching network size and span program size. Despite this, essentially the only known lower bounds against the rank measure follow from Razborov's original result.
In this talk, we will discuss our recent results providing the first exponential lower bounds on the rank measure for functions in monotone P. This has a number of corollaries in monotone complexity theory, including: the first exponential lower bounds for monotone span programs (indeed, the first superpolynomial lower bounds for monotone span programs computing a function in monotone P); a new and arguably simpler proof of the beautiful lower bounds by Potechin and Chan/Potechin against monotone switching networks; and the first lower bounds against monotone comparator circuits. Our lower bound is obtained by relating the rank measure to a new algebraic complexity measure on search problems that we call the algebraic gap complexity, which may be of independent interest. This work is joint with Stephen Cook, Toniann Pitassi, and Ben Rossman. (TCPL 201) |
15:00 - 15:30 | Coffee Break (TCPL Foyer) |
15:30 - 15:55 |
Raghu Meka: Approximating CSPs requires sub-exponential size linear programs ↓ We show that linear programming relaxations need sub-exponential size to beat trivial random guessing for approximately satisfying constraint satisfaction problems. In fact, we show that for such problems, sub-exponential size relaxations are as powerful as n^{\Omega(1)} rounds of the Sherali-Adams hierarchy. Previously, only super-polynomial (roughly n^{\Omega(\log n)}) bounds were known for any CSP [CLRS13].
Our bounds are obtained by exploiting and extending the recent progress in communication complexity for "lifting" query lower bounds to communication problems. The core of our results is a new decomposition theorem for "high-entropy rectangles" into structurally nice distributions and may be of independent interest in communication complexity.
Joint work with Pravesh Kothari and Prasad Raghavendra. (TCPL 201) |
16:00 - 16:25 |
Dana Moshkovitz: Candidate Hard Unique Game ↓ We propose a candidate reduction for ruling out polynomial-time algorithms for unique games, either under plausible complexity assumptions, or unconditionally for Lasserre semidefinite programs with a constant number of rounds. We analyze the completeness and Lasserre solution of our construction, and provide a soundness analysis in a certain setting of interest. Addressing general settings is tightly connected to a question on Gaussian isoperimetry.
Our construction is based on our previous work on the complexity of approximately solving a system of linear equations over the reals, which we suggested as an avenue towards a (positive) resolution of the Unique Games Conjecture.
The construction employs a new encoding scheme that we call the {\em real code}. The real code has two useful properties: like the long code, it has a unique local test, and like the Hadamard code, it has the so-called {\em sub-code covering} property. (TCPL 201) |
17:30 - 19:30 | Dinner (Vistas Dining Room) |
20:00 - 20:50 |
David Steurer: Subexponential time algorithm for the best separable state problem ↓ We give an algorithm for the problem of finding a rank-1 n×n matrix close to a given linear subspace. The running time is exponential in the square root of n.
The algorithm extends to the problem of finding the best separable state from quantum information theory and is based on a generalization of Lovett's log rank bound.
Joint work with Boaz Barak and Pravesh Kothari. (TCPL 201) |
Wednesday, September 7 | |
---|---|
07:00 - 09:00 | Breakfast (Vistas Dining Room) |
09:00 - 09:50 |
Ryan Williams: Polynomial Representations of Threshold Functions and Algorithmic Applications (and Yes, Complexity-Theoretic Applications Too) ↓ We design new polynomials for representing threshold functions in three different regimes: probabilistic polynomials of low degree, which need far less randomness than previous constructions, polynomial threshold functions (PTFs) with "nice" threshold behavior and degree almost as low as the probabilistic polynomials, and a new notion of probabilistic PTFs where we combine the above techniques to achieve even lower degree with similar "nice" threshold behavior. Utilizing these polynomial constructions, we design faster algorithms for a variety of problems.
For this complexity-theoretic crowd, I will focus more on the implications for SAT algorithms and new lower bounds for circuits with linear threshold functions. In particular, we give a satisfiability algorithm for $AC^0[m]\circ LTF\circ LTF$ circuits with a subquadratic number of unbounded linear threshold gates on the bottom layer, and a subexponential number of gates on the other layers, that runs in deterministic $2^{n-n^\epsilon}$ time. This also implies nearly-quadratic-size circuit lower bounds for threshold circuits. We also give a randomized $2^{n-n^\epsilon}$-time SAT algorithm for subexponential-size $MAJ\circ AC^0\circ LTF\circ AC^0\circ LTF$ circuits, where the top $MAJ$ gate and middle $LTF$ gates have $O(n^{6/5-\delta})$ fan-in.
Joint work with Josh Alman (Stanford) and Timothy M. Chan (Waterloo) (TCPL 201) |
09:50 - 10:10 | Coffee Break (TCPL Foyer) |
10:10 - 11:00 |
Li-Yang Tan: Derandomized search for CNF satisfying assignments in almost polynomial time ↓ We consider the fundamental derandomization problem of deterministically finding a satisfying assignment to a CNF formula that has many satisfying assignments. We give a deterministic algorithm which, given an n-variable poly(n)-clause CNF formula F that has |F^{-1}(1)| \geq \eps 2^n, runs in time n^{\tilde{O}((\log\log n)^2)} for \eps \ge 1/\mathrm{polylog}(n) and outputs a satisfying assignment of F. Prior to our work the fastest known algorithm for this problem was simply to enumerate over all seeds of a pseudorandom generator for CNFs; using the best known PRGs for CNFs [DETT10], this takes time n^{\tilde{\Omega}(\log n)} even for constant \eps.
Joint work with Rocco Servedio. (TCPL 201) |
11:05 - 11:55 |
Amir Shpilka: Proof complexity lower bounds from algebraic circuit complexity ↓ Proof complexity studies the complexity of mathematical proofs, with the aim of exhibiting (true) statements whose proofs are always necessarily long. One well-known proof system is Hilbert’s Nullstellensatz, which shows that if the family F={f1,…,fm} of n-variate polynomials has no common solution to the system f1=…=fm=0, then there is a proof of this fact of the following form: there are polynomials G={g1,…,gm} such that f1·g1+…+fm·gm=1 is an identity. From the perspective of computer science, it is most natural to ask how succinctly one can express the proof G.
Substantial work on the Nullstellensatz system has measured the complexity of G in terms of their degree or sparsity, and obtained the desired lower bounds for these measures. Grochow and Pitassi have recently argued that it is natural to measure the complexity of G by the size needed to express them as algebraic circuits. They called the resulting system the Ideal Proof System (IPS), and showed that it captures the power of well-known strong proof systems such as the Frege proof system, as well as showing that certain natural lower bounds for the size of IPS proofs would imply VP≠VNP, an algebraic analogue of P≠NP. This is in contrast to other proof systems, where direct ties to computational lower bounds are often lacking.
Motivated by their work, we study the IPS proof system further. We first show that weak subsystems of IPS can be quite powerful. We then consider lower bounds for subclasses of IPS. We obtain certain extensions to existing lower bound techniques, obtaining "functional lower bounds" as well as "lower bounds for multiples". Using these extensions, we show that variants of the subset-sum axiom require super-polynomially large proofs of their unsatisfiability when the algebraic circuits are restricted to read-once algebraic branching programs, sums of powers of low-degree polynomials, or multilinear formulas. No specific knowledge of proof complexity or algebraic circuit complexity will be assumed.
Joint work with Michael Forbes, Iddo Tzameret, and Avi Wigderson. (TCPL 201) |
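A toy instance of the Nullstellensatz identity described in the abstract above (chosen for illustration; it does not appear in the talk): the system f_1 = x, f_2 = x-1 has no common root, and the polynomials g_1 = 1, g_2 = -1 certify this:

```latex
% Minimal Nullstellensatz certificate for the unsatisfiable system x = 0, x - 1 = 0.
\[
  f_1 g_1 + f_2 g_2 \;=\; x \cdot 1 \;+\; (x-1)\cdot(-1) \;=\; 1 .
\]
% The IPS question is how small an algebraic circuit suffices to write such g's.
```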
12:00 - 14:00 | Lunch (Vistas Dining Room) |
13:30 - 17:30 | Free Afternoon (Banff National Park) |
17:30 - 19:30 | Dinner (Vistas Dining Room) |
Thursday, September 8 | |
---|---|
07:00 - 09:00 | Breakfast (Vistas Dining Room) |
09:00 - 09:50 |
Ran Raz: Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity Learning ↓ We prove that any algorithm for learning parities requires either a memory of quadratic size or an exponential number of samples. This proves a recent conjecture of Steinhardt, Valiant and Wager and shows that for some learning problems a large storage space is crucial.
More formally, in the problem of parity learning, an unknown string $x \in \{0,1\}^n$ is chosen uniformly at random. A learner tries to learn $x$ from a stream of samples $(a_1, b_1), (a_2, b_2), \ldots$, where each $a_t$ is uniformly distributed over $\{0,1\}^n$ and $b_t$ is the inner product of $a_t$ and $x$, modulo 2. We show that any algorithm for parity learning that uses less than $n^2/25$ bits of memory requires an exponential number of samples.
Previously, there was no non-trivial lower bound on the number of samples needed, for any learning problem, even if the allowed memory size is $O(n)$ (where $n$ is the space needed to store one sample).
We also give an application of our result in the field of bounded-storage cryptography. We show an encryption scheme that requires a private key of length $n$, as well as time complexity of $n$ per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses less than $n^2/25$ memory bits and the scheme is used at most an exponential number of times. Previous works on bounded-storage cryptography assumed that the memory size used by the attacker is at most linear in the time needed for encryption/decryption. (TCPL 201) |
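To make the memory bound concrete, here is a minimal sketch (an editorial illustration, not code from the talk) of the sample stream together with the classical learner that the lower bound is tight against: it stores about n linearly independent equations, roughly n^2 bits in total, and solves for x by Gaussian elimination over GF(2).

```python
import random

def parity_stream(x, n, rng=random):
    # One sample per iteration: a_t uniform in {0,1}^n, b_t = <a_t, x> mod 2.
    while True:
        a = [rng.randrange(2) for _ in range(n)]
        yield a, sum(ai * xi for ai, xi in zip(a, x)) % 2

def learn_by_elimination(stream, n):
    # Gaussian elimination over GF(2). Each stored row is an (n+1)-bit
    # integer (label in bit 0, coefficients in bits 1..n), so holding n
    # rows costs about n^2 bits -- the memory regime the theorem addresses.
    pivots = {}  # pivot column -> fully reduced row with a 1 in that column
    while len(pivots) < n:
        a, b = next(stream)
        row = b
        for i, ai in enumerate(a):
            row |= ai << (i + 1)
        for col, prow in pivots.items():  # reduce against known pivots
            if (row >> (col + 1)) & 1:
                row ^= prow
        if row >> 1:  # row is linearly independent of those stored so far
            col = (row >> 1).bit_length() - 1
            for c, prow in list(pivots.items()):  # keep all rows fully reduced
                if (prow >> (col + 1)) & 1:
                    pivots[c] = prow ^ row
            pivots[col] = row
    # Fully reduced system: the row pivoting on column i encodes x_i in bit 0.
    return [pivots[i] & 1 for i in range(n)]

n = 32
secret = [random.randrange(2) for _ in range(n)]
assert learn_by_elimination(parity_stream(secret, n), n) == secret
```

The theorem says this quadratic footprint is essentially unavoidable: any learner using fewer than n^2/25 bits of memory needs exponentially many samples.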
09:50 - 10:10 | Coffee Break (TCPL Foyer) |
10:10 - 11:00 |
Marco Carmosino: Learning algorithms from natural proofs ↓ Circuit analysis algorithms such as learning, SAT, minimum circuit size, and compression imply circuit lower bounds. A natural question is: given that these algorithms are sufficient for circuit lower bounds, to what degree are they necessary? We make progress on this question by showing a generic implication in the opposite direction: from natural proofs of circuit lower bounds (in the sense of Razborov and Rudich) to randomized learning and compression algorithms. This is the first such implication outside of the derandomization setting. As an application, we use known natural lower bounds for AC0[p] circuits (due to Razborov and Smolensky) to get the first quasi-polynomial time algorithm for learning AC0[p] functions, in the PAC model over the uniform distribution, with membership queries. (TCPL 201) |
11:05 - 11:55 |
Rahul Santhanam: Stronger Connections between Learning, Lower Bounds and Pseudorandomness ↓ We prove several results giving new and stronger connections between learning theory, circuit complexity and pseudorandomness, including:
1. (Learning Speedups) Let C be any "typical" class of Boolean circuits, and C[s(n)] denote n-variable C-circuits of size at most s(n). If C[poly] admits a randomized weak learning algorithm under the uniform distribution with membership queries that runs in time 2^n/n^{\omega(1)}, then for every k and \varepsilon > 0 the class C[n^k] can be learned to high accuracy in time O(2^{n^\varepsilon}).
2. (Equivalences) There is \varepsilon > 0 such that C[2^{n^{\varepsilon}}] can be learned in time 2^n/n^{\omega(1)} if and only if C[poly] can be learned in time 2^{O((\log n)^c)}. Furthermore, our proof technique yields equivalences between various randomized learning models in the exponential and sub-exponential time settings, such as learning with membership queries, learning with membership and equivalence queries, and the related framework of truth-table compression.
3. (Learning versus Pseudorandom Functions) In the non-uniform setting, there is non-trivial learning for C[poly(n)] if and only if there are no secure pseudorandom function families computable in C[poly].
4. (Lower Bounds from Nontrivial Learning Algorithms) Let C be any class of Boolean circuits closed under projection. If for each k, C[n^k] admits a randomized weak learning algorithm with membership queries under the uniform distribution that runs in time 2^n/n^{\omega(1)}, then for each k, BPE is not contained in C[n^k]. If there are P-natural proofs useful against C[2^{n^{o(1)}}], then ZPEXP is not contained in C[poly].
Among the proof techniques we use are the uniform hardness-randomness tradeoffs of [Impagliazzo-Wigderson, Trevisan-Vadhan], the easy witness method of [Kabanets, Impagliazzo-Kabanets-Wigderson], the interpretation of the Nisan-Wigderson generator as a pseudo-random function generator due to [Carmosino-Impagliazzo-Kabanets-Kolokolova] and the approximate min-max theorem of [Althofer, Lipton-Young].
This is joint work with Igor Carboni Oliveira. (TCPL 201) |
12:00 - 13:00 | Lunch (Vistas Dining Room) |
13:10 - 14:00 |
Daniel Kane: A New Approach to Distribution Testing ↓ We study the problem of determining whether or not a discrete distribution has a given property from a small number of samples. We present a new technique in this field that operates by reducing many such problems in a black-box way to a simple L^2 tester. We show how this new technique can recover simple proofs of several known, optimal results, and how it can be extended to provide optimal solutions to a number of previously unsolved problems. (TCPL 201) |
14:05 - 14:55 |
Shachar Lovett: Structure of protocols for XOR functions ↓ Let f be a boolean function on n variables. Its associated XOR function is the two-party function F(x, y) = f(x xor y). If f has a parity decision tree of depth d, then it gives a natural deterministic protocol for F which sends 2d bits. We establish the reverse direction: any deterministic protocol for F that sends c bits implies a parity decision tree for f of depth poly(c). The proof combines a novel technique of entropy reduction for protocols with existing techniques in Fourier analysis and additive combinatorics.
Joint work with Hamed Hatami and Kaave Hosseini. (TCPL 201) |
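The forward direction stated in the abstract, spelled out (a standard observation): each parity query made by the tree splits across the two players, because

```latex
% A parity of x XOR y on coordinates indexed by s decomposes player-by-player:
\[
  \langle s,\, x \oplus y \rangle \;\equiv\; \langle s, x\rangle \oplus \langle s, y\rangle \pmod{2},
\]
% so Alice announces <s, x>, Bob announces <s, y>, both take the XOR and
% follow the tree; a depth-d parity decision tree thus gives a 2d-bit protocol.
```

The talk's contribution is the much harder converse, from protocols back to parity decision trees.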
15:00 - 15:30 | Coffee Break (TCPL Foyer) |
15:30 - 15:55 |
Anindya De: Learning "Sums of independent commonly supported integer random variables" ↓ Let S be a fixed subset of the natural numbers and consider the family of random variables which can be generated as sums of independent random variables each supported on S (abbreviated as SICSIRVs on S). So, for example, if S = {0,1}, then the SICSIRVs on S are the sums of independent Bernoulli random variables (a.k.a. Poisson binomial random variables).
Recent work by Daskalakis et al. (STOC 12, FOCS 13) has shown that if S is a subset of [-k, ..., k], then the complexity of learning SICSIRVs on S is poly(k/\epsilon) (and hence independent of n, the number of summands!). We investigate the problem of learning SICSIRVs on S for arbitrary S. Our results are as follows:
(a) If |S| <= 3, then the complexity of learning SICSIRVs on S is poly(1/\epsilon) (and independent of S).
(b) There exists S = {a_1, a_2, a_3, a_4} such that the complexity of learning SICSIRVs on S is \Omega(\log\log \max_i a_i), and this is tight.
Thus, our results show a sharp transition in the complexity of learning SICSIRVs when the size of S goes from 3 to 4. The upper bounds are based on a new central limit theorem, while the lower bound exploits delicate constructions based on continued fractions.
Joint work with Phil Long and Rocco Servedio. (TCPL 201) |
16:00 - 16:50 |
Oded Regev: The Reverse Minkowski Theorem ↓ Informally, Minkowski's first theorem states that lattices that are globally dense (have small determinant) are also locally dense (have lots of points in a small ball around the origin). This fundamental result dates back to 1891 and has a very wide range of applications.
Here we prove a reverse form of Minkowski's theorem, showing that locally dense lattices are also globally dense (in the appropriate sense).
We also discuss the many applications of this result. In particular, it resolves a conjecture by Saloff-Coste on the behavior of random walks on flat tori, has implications to the complexity of lattice problems, and also makes progress on conjectures by Kannan and Lovasz, and by Shapira and Weiss that are closely related to a celebrated conjecture by Minkowski.
Based on joint works with Daniel Dadush and Noah Stephens-Davidowitz. (TCPL 201) |
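For reference, one standard quantitative form of Minkowski's first theorem mentioned above (background, not a statement from the talk): every rank-n lattice satisfies

```latex
% Minkowski's first theorem: small determinant (global density) forces a
% short nonzero vector (local density); lambda_1 is the shortest vector length.
\[
  \lambda_1(\mathcal{L}) \;\le\; \sqrt{n}\,\det(\mathcal{L})^{1/n} .
\]
% The talk concerns the reverse implication, from local to global density.
```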
17:30 - 19:30 | Dinner (Vistas Dining Room) |
20:00 - 20:50 |
Scott Aaronson: Complexity-Theoretic Foundations of Quantum Supremacy Experiments ↓ Depending on time, I'll discuss some subset of the following results.
(1) There exists an oracle relative to which classical and quantum computers can solve the same approximate sampling problems in polynomial time, and yet the polynomial hierarchy is infinite. This means that any "BosonSampling-type theorem" will need to be non-relativizing.
(2) Can P be separated from BQP by an efficiently-computable oracle (one in P/poly)? Yes if quantum-resistant one-way functions exist; no if P=PSPACE. The latter builds on recent results in quantum query complexity by myself and Shalev Ben-David.
(3) How hard is it to approximately sample the output distribution of a random quantum circuit? One can do it (and even calculate the probabilities) in polynomial space and m^{O(n)} time, where m is the number of gates and n is the number of qubits. But if that's essentially optimal for estimating probabilities, then approximate sampling also requires exponential time.
Joint work with Lijie Chen (Tsinghua University) (TCPL 201) |
Friday, September 9 | |
---|---|
07:00 - 08:30 | Breakfast (Vistas Dining Room) |
08:30 - 08:55 |
Shubhangi Saraf: High rate locally-correctable and locally-testable codes with sub-polynomial query complexity ↓ We study locally correctable and locally testable codes in the high rate regime. The tradeoff between the rate of a code and the locality/efficiency of its decoding and testing algorithms has been studied extensively in the past decade, with numerous applications to complexity theory and pseudorandomness.
In this talk I will discuss some recent results giving efficient sub-polynomial query decoding and testing algorithms for high rate error correcting codes. I will also highlight some of the most interesting challenges that remain.
Based on joint work with Swastik Kopparty, Or Meir and Noga Ron-Zewi (TCPL 201) |
09:00 - 09:25 |
Swastik Kopparty: Locally testable and locally correctable codes approaching the Gilbert-Varshamov bound ↓ We show that there exist binary locally testable codes (for all rates) and locally correctable codes (for low rates) with rate and distance approaching the Gilbert-Varshamov bound (which is the best rate-distance tradeoff known for general binary error-correcting codes).
Our constructions use a number of ingredients: Thommesen's random concatenation technique, the Guruswami-Sudan-Indyk strategy for list-decoding concatenated codes, the Alon-Edmonds-Luby distance amplification method, and the local list-decodability and local testability of Reed-Muller codes. Interestingly, this seems to be the first time that local testability is used in the construction of locally correctable codes.
Joint work with Sivakanth Gopi, Rafael Oliveira, Noga Ron-Zewi and Shubhangi Saraf (TCPL 201) |
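For context (standard background, not from the abstract): the Gilbert-Varshamov bound guarantees, non-constructively, binary codes of relative distance \delta and rate

```latex
% Gilbert-Varshamov bound, with H_2 the binary entropy function:
\[
  R \;\ge\; 1 - H_2(\delta),
  \qquad
  H_2(\delta) \;=\; -\delta \log_2 \delta \;-\; (1-\delta)\log_2(1-\delta) .
\]
% The result above approaches this trade-off with local testability/correctability.
```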
09:30 - 10:30 | Informal discussions (Corbett Hall Lounge (CH 2110)) |
11:30 - 12:00 |
Checkout by Noon ↓ 5-day workshop participants are welcome to use BIRS facilities (BIRS Coffee Lounge, TCPL and Reading Room) until 3 pm on Friday, although participants are still required to check out of the guest rooms by 12 noon. (Front Desk - Professional Development Centre) |
12:00 - 13:30 | Lunch from 11:30 to 13:30 (Vistas Dining Room) |