Talk Abstracts

Limits on All Known (and Some Unknown) Approaches to Matrix Multiplication

Josh Alman (MIT)

We study the known techniques for designing Matrix Multiplication (MM) algorithms. The two main approaches are the Laser method of Strassen and the Group-theoretic approach of Cohn and Umans. We define a generalization based on zeroing outs, which we call the Solar method, that subsumes these two approaches, and an even more general method based on monomial degenerations, which we call the Galactic method.

We then design a suite of techniques for proving lower bounds on the value of omega, the exponent of MM, that can be achieved by applying the Galactic method to many different tensors T. Our main result is that there is a universal constant c > 2 such that a large class of tensors generalizing the Coppersmith-Winograd tensor CW_q cannot be used within the Galactic method to show a bound on omega better than c, for any q.
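
For context (standard background, not specific to this talk), the exponent of matrix multiplication is defined as

$$\omega = \inf\{\tau : \text{two } n\times n \text{ matrices can be multiplied using } O(n^{\tau}) \text{ arithmetic operations}\},$$

so trivially 2 <= omega <= 3; Strassen's algorithm shows omega <= log_2 7, roughly 2.81, and the best bound currently known is omega < 2.373, obtained via the Laser method applied to Coppersmith-Winograd tensors.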

In this talk, I’ll begin by giving a high-level overview of the algorithmic techniques involved in the best known algorithms for MM, and then I will tell you about our lower bounds. No prior knowledge of MM algorithms will be assumed.

Joint work with Virginia Vassilevska Williams, to appear in FOCS 2018.

Bio:

I’m a PhD student at MIT, co-advised by Ryan Williams and Virginia Vassilevska Williams. I’m interested in algorithms and complexity, and especially how to apply algebraic tools to both areas.

 


 

Parameterized Inapproximability

Pasin Manurangsi (UC Berkeley)

The area of parameterized complexity seeks a more fine-grained understanding of NP-hard problems by relaxing the notion of efficient algorithms to so-called fixed-parameter tractability (FPT). After more than two decades of work, the parameterized complexity of many fundamental problems has been classified; that is, each problem is either shown to be in FPT or shown to be hard for some complexity class that is believed not to be in FPT. However, for most problems not in FPT, their approximability status remains open. Specifically, there were few techniques for proving hardness of approximation in the parameterized regime. This has somewhat changed in the last few years, as more tools have been developed to tackle such problems. In this talk, I will survey some of these recent developments and interesting questions that remain open.
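
As a reminder of the standard definition (background, not specific to this talk), a problem with parameter $k$ is in FPT if it admits an algorithm running in time

$$f(k)\cdot n^{O(1)} \quad \text{for some computable function } f;$$

for example, Vertex Cover parameterized by solution size is solvable in time $2^k\cdot n^{O(1)}$, whereas $k$-Clique is believed to require time $n^{\Omega(k)}$.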

Bio:

Pasin is a fourth-year PhD student at the University of California, Berkeley, where he is advised by Prof. Luca Trevisan and Prof. Prasad Raghavendra. His main research interest is hardness of approximation.

 


 

Approximation Algorithms for Scheduling under Precedence Constraints

Shi Li (University at Buffalo)

In this talk, I will present some recent results on approximation algorithms for the classic problems of scheduling precedence-constrained jobs on identical machines, results that improve upon several 20-year-old state-of-the-art bounds. I will focus on two technical themes from these results: time-indexed LP relaxations for designing algorithms for the weighted completion time objective, and the use of the Sherali-Adams hierarchy for the setting where the number of machines is a constant.
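
To give a flavor of the first theme, here is a generic time-indexed LP relaxation for precedence-constrained scheduling with the weighted completion time objective (a textbook-style relaxation; the exact relaxations used in the results above may differ in details). With $x_{j,t}$ indicating that job $j$ of length $p_j$ completes at time $t$, on $m$ machines:

$$\min \sum_j w_j \sum_t t\,x_{j,t} \quad \text{s.t.} \quad \sum_t x_{j,t}=1 \ \forall j, \qquad \sum_j \sum_{t' \in (t-p_j,\,t]} x_{j,t'} \le m \ \forall t, \qquad \sum_{t'\le t} x_{j',t'} \le \sum_{t'\le t-p_{j'}} x_{j,t'} \ \forall j \prec j',\ \forall t, \qquad x \ge 0.$$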

Based on the work of [Li, FOCS 2017], [Kulkarni, Garg, Li 2018] and my ongoing work with Kulkarni, Tarnawski and Ye.

Bio:

Dr. Li has been an assistant professor in the Department of Computer Science and Engineering at the University at Buffalo since 2015. Before joining the university, he was a research assistant professor at the Toyota Technological Institute at Chicago. Dr. Li received his Ph.D. in 2013 from the Department of Computer Science at Princeton University, supervised by Prof. Moses Charikar. He received his Bachelor’s degree in 2008 from the Department of Computer Science and Technology at Tsinghua University, where he was also a student of Andrew Chi-Chih Yao’s Special Pilot Class. Dr. Li’s main research focus is on designing efficient approximation algorithms for classic NP-hard combinatorial problems. He has been honored with a Princeton University Wallace Memorial Fellowship, the early career researcher award and early career teaching award of the Department of Computer Science and Engineering at the University at Buffalo, and a Computer and Information Science and Engineering Research Initiation Initiative (CRII) award from the U.S. National Science Foundation (NSF). He was also a recipient of an ICALP best student paper award in 2011, a FOCS best paper award in 2012, and a COCOON best paper award in 2018.

 


 

An Almost-linear Time Algorithm for Uniform Random Spanning Tree Generation

Aaron Schild (UC Berkeley)

We give an $m^{1+o(1)}\beta^{o(1)}$-time algorithm for generating uniformly random spanning trees in weighted graphs with max-to-min weight ratio $\beta$. In the process, we illustrate how fundamental tradeoffs in graph partitioning can be overcome by eliminating vertices from a graph using Schur complements of the associated Laplacian matrix.

Our starting point is the Aldous-Broder algorithm, which samples a random spanning tree using a random walk. As in prior work, we use fast Laplacian linear system solvers to shortcut the random walk from a vertex $v$ to the boundary of a set of vertices assigned to $v$ called a “shortcutter.” We depart from prior work by introducing a new way of employing Laplacian solvers to shortcut the walk. To bound the amount of shortcutting work, we show that most random walk steps occur far away from an unvisited vertex. We apply this observation by charging uses of a shortcutter $S$ to random walk steps in the Schur complement obtained by eliminating all vertices in $S$ that are not assigned to it.
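
For intuition, here is a minimal Python sketch of the classical Aldous-Broder algorithm itself, in the unweighted case and without any of the shortcutting machinery that is the subject of the talk (names are illustrative, not from the paper):

    import random

    def aldous_broder(graph, start):
        # graph: dict mapping each vertex to a list of its neighbors.
        # Run a random walk; the edge used to enter each vertex for the
        # first time is a tree edge, and the resulting tree is uniform.
        visited = {start}
        tree = set()
        u = start
        while len(visited) < len(graph):
            v = random.choice(graph[u])  # one step of the random walk
            if v not in visited:         # first entry: keep this edge
                visited.add(v)
                tree.add((u, v))
            u = v
        return tree

The cover time, and hence this sketch's running time, is O(mn) in the worst case; the talk's contribution is bringing this down to m^{1+o(1)} by shortcutting the walk.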

Bio:

Aaron Schild is an EECS PhD student at UC Berkeley, co-advised by Satish Rao and Nikhil Srivastava. His research interests revolve around graph algorithms and their applications. In the past, he has worked on spectral graph theory and planar graph algorithms.

 


 

Dynamic Ordered Sets with Approximate Queries

Or Zamir (Tel Aviv University)

We consider word RAM data structures for maintaining ordered sets of integers whose select and rank operations are allowed to return approximate results, i.e., ranks, or items whose rank differs by at most D from the exact answer, where D = D(n) is an error parameter. While exact select and rank have identical operation times, both in comparison-based and word RAM implementations, an interesting separation emerges between the approximate versions of these operations in the word RAM model: it turns out that approximate select is much easier than approximate rank. We also consider approximate nearest neighbors and approximate priority queues. We present optimal bounds for all these approximate problems, with matching cell-probe lower bounds for select, rank, and nearest neighbors, and an equivalence to a well-studied static problem for approximate priority queues.
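
To make the approximate semantics concrete, here is a toy Python sketch (illustrative only, not the data structure from the talk): storing every D-th element of the sorted set already answers both operations with additive rank error D, using O(n/D) keys.

    import bisect

    class ApproxOrderedSet:
        # Toy static structure: keep only every D-th element of the
        # sorted input; both operations are correct up to rank error D.
        def __init__(self, sorted_items, D):
            self.D = D
            self.samples = sorted_items[::D]  # elements of rank 0, D, 2D, ...

        def select(self, i):
            # Return an item whose true rank is within D of i.
            return self.samples[min(i // self.D, len(self.samples) - 1)]

        def rank(self, x):
            # Count samples <= x and rescale; within D of the true rank.
            return bisect.bisect_right(self.samples, x) * self.D

The interesting questions, addressed in the talk, concern dynamic updates and how fast select and rank can be made in the word RAM model.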

Bio:

A 22-year-old graduate student at Tel Aviv University, working mostly on algorithms and data structures.

 


 

Non-Malleable Codes for Small-Depth Circuits

Marshall Ball (Columbia University)

We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by small-depth circuits. For constant-depth circuits of polynomial size (i.e. AC0 tampering functions), our codes have codeword length n=k^{1+o(1)} for a k-bit message. This is an exponential improvement of the previous best construction due to Chattopadhyay and Li (STOC 2017), which had codeword length 2^O(k^{1/2}). Our construction remains efficient for circuit depths as large as Θ(log(n)/loglog(n)) (indeed, our codeword length remains n≤k^{1+ϵ}), and extending our result beyond this would require separating P from NC1.

We obtain our codes via a new efficient non-malleable reduction from small-depth tampering to split-state tampering. A novel aspect of our work is the incorporation of techniques from unconditional derandomization into the framework of non-malleable reductions. In particular, a key ingredient in our analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC 2013), a derandomization of the influential switching lemma from circuit complexity; the randomness-efficiency of this switching lemma translates into the rate-efficiency of our codes via our non-malleable reduction.

This is based on joint work with Dana Dachman-Soled, Siyao Guo, Tal Malkin, and Li-Yang Tan.

Bio:

I am a PhD Candidate at Columbia University under Tal Malkin. I am also currently a visiting researcher at IDC Herzliya. My primary research interests are in the foundations of cryptography and complexity theory.

 


 

Secret Sharing and CDS

Tianren Liu (MIT)

We study secret sharing schemes for general (non-threshold) access structures. A general secret sharing scheme for $n$ parties is associated to a monotone function $F:\{0,1\}^n\to\{0,1\}$. In such a scheme, a dealer distributes shares of a secret $s$ among $n$ parties. Any subset of parties $T \subseteq [n]$ should be able to put together their shares and reconstruct the secret $s$ if $F(T)=1$, and should have no information about $s$ if $F(T)=0$. One of the major long-standing questions in information-theoretic cryptography is to minimize the (total) size of the shares in a secret-sharing scheme for arbitrary monotone functions $F$.

There is a large gap between the known lower and upper bounds for secret sharing. The best known scheme for general $F$ has shares of size $2^{n-o(n)}$, but the best lower bound is $\Omega(n^2/\log n)$. The exponential share size is a direct result of the fact that in all known secret-sharing schemes, the share size grows with the size of a circuit (or formula, or monotone span program) for $F$. Indeed, several researchers have suggested the existence of a {\em representation size barrier}, which would imply that the right answer is closer to the upper bound, namely $2^{n-o(n)}$.
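
To illustrate why share size tracks representation size, here is a minimal Python sketch of the classical monotone-formula scheme of Benaloh and Leichter (standard background, not the new scheme from the talk): an AND gate splits its value into random XOR-summands, an OR gate replicates it, so each party's total share size equals the number of formula leaves labeled by that party.

    import secrets as rnd

    def bl_share(formula, s, shares=None):
        # formula: ('var', party) | ('and', f, g) | ('or', f, g), monotone.
        # s: the secret, a 32-bit integer; sharing is XOR-based.
        if shares is None:
            shares = {}
        if formula[0] == 'var':
            shares.setdefault(formula[1], []).append(s)
        elif formula[0] == 'and':
            r = rnd.randbits(32)                 # random summand
            bl_share(formula[1], r, shares)
            bl_share(formula[2], s ^ r, shares)
        elif formula[0] == 'or':
            bl_share(formula[1], s, shares)      # replicate to both sides
            bl_share(formula[2], s, shares)
        return shares

    # Example: parties A and B together, or C alone, can reconstruct.
    print(bl_share(('or', ('and', ('var', 'A'), ('var', 'B')), ('var', 'C')), 7))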

We overcome this barrier by constructing a secret sharing scheme for any access structure with shares of size $2^{0.994n}$, and a linear secret sharing scheme for any access structure with shares of size $2^{0.999n}$. As a contribution of independent interest, we also construct a secret sharing scheme with shares of size $2^{\tilde{O}(\sqrt{n})}$ for a doubly exponential number of different monotone access structures.

Our secret sharing schemes build on our new protocol for conditional disclosure of secrets (CDS), where two parties want to disclose a secret to a third party if and only if their respective inputs satisfy some predicate. For general predicates where the input size is $n$ for both parties, the best protocols prior to our work required communication complexity $O(2^{n/2})$.

We present the first protocol with $2^{O(\sqrt{n\log n})}$ communication complexity. To obtain these results, we draw upon techniques for non-linear reconstruction developed in the context of information-theoretic private information retrieval (PIR).

Based on joint works with Vinod Vaikuntanathan and Hoeteck Wee.

 

Bio:

I’m a fifth-year student at MIT studying cryptography, advised by Prof. Vinod Vaikuntanathan. Before graduate school, I studied in the Yao Class until 2014. My research focuses on information-theoretically secure cryptography, including secret sharing and secure computation. I also study the impossibility of basing cryptography on NP-hardness.

 


 

Simple Optimal Hitting Sets for Small-Success RL

William Hoza (UT Austin)

We give a simple explicit hitting set generator for read-once branching programs of width w and length r with known variable order. When r = w, our generator has seed length O(log^2 r + log(1/eps)). When r = polylog w, our generator has optimal seed length O(log w + log(1/eps)). For intermediate values of r, our generator’s seed length smoothly interpolates between these two extremes.

Our generator’s seed length improves on recent work by Braverman, Cohen, and Garg (STOC ’18). In addition, our generator and its analysis are dramatically simpler than the work by Braverman et al. Our generator’s seed length improves on all the classic generators for space-bounded computation (Nisan Combinatorica ’92; Impagliazzo, Nisan, and Wigderson STOC ’94; Nisan and Zuckerman JCSS ’96) when eps is small.

As a corollary of our construction, we show that every RL algorithm that uses r random bits can be simulated by an NL algorithm that uses only O(r/log^c n) nondeterministic bits, where c is an arbitrarily large constant. Finally, we show that any RL algorithm with small success probability eps can be simulated deterministically in space O(log^{3/2} n + log n log log(1/eps)). This improves on work by Saks and Zhou (JCSS ’99), who gave an algorithm that runs in space O(log^{3/2} n + sqrt(log n) log(1/eps)).

Joint work with David Zuckerman.

Bio:

I’m a third-year PhD student at UT Austin advised by David Zuckerman. I’m especially interested in the L vs. BPL problem. More generally, I’m interested in pseudorandomness, derandomization, and complexity theory.

 


 

Tight Hardness for Shortest Cycles and Paths in Sparse Graphs

Andrea Lincoln (MIT)

Fine-grained reductions have established equivalences between many core problems with \tilde{O}(n^3)-time algorithms on n-node weighted graphs, such as Shortest Cycle, All-Pairs Shortest Paths (APSP), Radius, Replacement Paths, Second Shortest Paths, and so on. These problems also have \tilde{O}(mn)-time algorithms on m-edge n-node weighted graphs, and such algorithms have wider applicability. Are these mn bounds optimal when m << n^2?

Starting from the hypothesis that the minimum weight (2ℓ+1)-Clique problem in edge-weighted graphs requires n^{2ℓ+1-o(1)} time, we prove that for all sparsities of the form m = \Theta(n^{1+1/ℓ}), there is no O(n^2 + mn^{1-ϵ})-time algorithm, for any ϵ > 0, for any of the problems below:

  • Minimum Weight (2ℓ+1)-Cycle in a directed weighted graph,
  • Shortest Cycle in a directed weighted graph,
  • APSP in a directed or undirected weighted graph,
  • Radius (or Eccentricities) in a directed or undirected weighted graph,
  • Wiener index of a directed or undirected weighted graph,
  • Replacement Paths in a directed weighted graph,
  • Second Shortest Path in a directed weighted graph,
  • Betweenness Centrality of a given node in a directed weighted graph.

That is, we prove hardness for a variety of sparse graph problems from the hardness of a dense graph problem. Our results also lead to new conditional lower bounds, from several related hypotheses, for unweighted sparse graph problems including k-Cycle, Shortest Cycle, Radius, Wiener index, and APSP.
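
A rough sanity check of the parameters (my gloss, not verbatim from the paper): if a $(2\ell+1)$-Clique instance on $N$ nodes is mapped to a sparse instance with $n \approx N^{\ell}$ nodes and $m = \Theta(n^{1+1/\ell})$ edges, then

$$m \cdot n = n^{2+1/\ell} = \big(N^{\ell}\big)^{2+1/\ell} = N^{2\ell+1},$$

so an $O(mn^{1-\epsilon})$-time algorithm for the sparse problem would contradict the $N^{2\ell+1-o(1)}$ hypothesis for Clique.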

Joint work with Virginia Vassilevska Williams and Ryan Williams.

Bio:

I am a PhD student at MIT. I am lucky to have Virginia Vassilevska Williams as my advisor. I hope to build and refine theoretical models that crystallize concerns about complex systems. Correct choices of models lead to strong theoretical statements that capture properties of interest in real systems.

Research Interests:

Fine Grained Complexity, Algorithms, Lower Bounds, Space Complexity

 


 

Complexity Theory of the LOCAL Model

Seth Pettie (University of Michigan)

Linial (1992) defined the LOCAL model of distributed computing in order to study the locality of information needed to solve symmetry-breaking problems. In the intervening years nearly all research efforts in this area were dedicated to understanding specific symmetry-breaking problems (colorings, matchings, MIS, ruling sets, packing/covering LPs, etc.). Relatively little attention was paid to the most basic complexity-theoretic questions that one should ask when confronted with a new computational model. Among them:

* Does more time always let you solve more problems? (Is there a “time hierarchy” analogous to Turing machines?)

* Does randomness help? Is derandomization possible, and at what cost?

* Are there “complete” problems for natural complexity classes?

A flurry of research from 2016-18 has answered all of these questions, and has given us an almost perfect understanding of the complexity landscape of the LOCAL model on bounded degree graphs. In this talk I will survey developments from the last 3 years, and highlight a few of the hardest remaining open problems.

Bio:

Seth Pettie is a Professor and Associate Chair of Computer Science and Engineering at the University of Michigan. He received his Ph.D. in Computer Science from the University of Texas at Austin in 2003, and was an Alexander von Humboldt Postdoctoral Fellow at the Max Planck Institute for Computer Science in Saarbrücken from 2003 to 2006.


 

How to Match When All Vertices Arrive Online

Yuhao Zhang (University of Hong Kong)

We introduce a fully online model of maximum cardinality matching in which all vertices arrive online. On the arrival of a vertex, its incident edges to previously-arrived vertices are revealed. Each vertex has a deadline that is after all its neighbors’ arrivals. If a vertex remains unmatched until its deadline, the algorithm must then irrevocably either match it to an unmatched neighbor or leave it unmatched. The model generalizes the existing one-sided online model and is motivated by applications including ride-sharing platforms, real-estate agencies, etc.

We show that the Ranking algorithm by Karp et al.\ (STOC 1990) is $0.5211$-competitive in our fully online model for general graphs. Our analysis brings a novel charging mechanism into the randomized primal-dual technique of Devanur et al.\ (SODA 2013), allowing a vertex other than the two endpoints of a matched edge to share the gain. To our knowledge, this is the first analysis of Ranking that beats $0.5$ on general graphs in an online matching problem, a first step towards resolving the open problem of Karp et al.\ (STOC 1990) about the optimality of Ranking on general graphs. If the graph is bipartite, we show that the competitive ratio of Ranking is between $0.5541$ and $0.5671$.
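
For reference, here is a minimal Python sketch of Ranking adapted to the fully online setting, as I read the model (an illustrative rendering, not the authors' code): each vertex draws a uniform rank on arrival, and at its deadline an unmatched vertex is matched to its unmatched neighbor of smallest rank, if any.

    import random

    def fully_online_ranking(events, neighbors):
        # events: sequence of ('arrive', v) and ('deadline', v) pairs,
        # with each deadline occurring after all of v's neighbors arrive.
        # neighbors: dict v -> list of neighbors (revealed on arrival).
        rank, match = {}, {}
        for kind, v in events:
            if kind == 'arrive':
                rank[v] = random.random()        # uniform random rank
            elif v not in match:                 # deadline of unmatched v
                free = [u for u in neighbors[v]
                        if u in rank and u not in match]
                if free:
                    u = min(free, key=rank.get)  # best-ranked free neighbor
                    match[v], match[u] = u, v
        return match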

Finally, we prove that the fully online model is strictly harder than the previous model: no online algorithm can be $0.6317$-competitive in our model, even for bipartite graphs (note that $0.6317 < 1-\frac{1}{e} \approx 0.6321$).

Bio:

PhD student (2016–present) at the University of Hong Kong, supervised by Dr. Zhiyi Huang. My research area is theoretical computer science; my current research interest is online algorithms.

 


 

Optimal Streaming and Tracking Distinct Elements with High Probability

Jarosław Błasiok (Harvard University)

The distinct elements problem is one of the fundamental problems in streaming algorithms: given a stream of integers in the range $\{1,\ldots,n\}$, we wish to provide a $(1+\varepsilon)$-approximation to the number of distinct elements in the input. After a long line of research, an optimal solution for this problem with constant probability of success, using $\mathcal{O}(\frac{1}{\varepsilon^2}+\log n)$ bits of space, was given by Kane, Nelson, and Woodruff in 2010.

The standard approach to achieving a low failure probability $\delta$ is to run $\log \delta^{-1}$ parallel repetitions of the original algorithm and report the median of the computed answers. We show that such a multiplicative space blow-up is unnecessary: we provide an optimal algorithm using $\mathcal{O}(\frac{\log \delta^{-1}}{\varepsilon^2} + \log n)$ bits of space, matching known lower bounds for this problem. That is, the $\log\delta^{-1}$ factor does not multiply the $\log n$ term. This completely settles the space complexity of the distinct elements problem with respect to all standard parameters.
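
As a concrete illustration of this standard approach (a toy sketch of background material, not the new algorithm from the talk), here is the median trick wrapped around a crude bottom-k distinct-elements estimator:

    import math, random, statistics

    def f0_once(stream, k, seed):
        # Bottom-k estimator: map items to [0,1) and keep the k smallest
        # distinct values; if the k-th smallest is h, estimate (k-1)/h.
        rng = random.Random(seed)
        a, b = rng.random(), rng.random()
        vals = sorted({(hash(x) * a + b) % 1.0 for x in stream})
        return (k - 1) / vals[k - 1] if len(vals) >= k else len(vals)

    def f0_median(stream, k, delta):
        # Median of O(log(1/delta)) independent copies boosts the success
        # probability to 1 - delta, at a multiplicative cost in space.
        stream = list(stream)
        reps = max(1, math.ceil(math.log(1.0 / delta)))
        return statistics.median(f0_once(stream, k, s) for s in range(reps))

The result in the talk shows that this multiplicative blow-up in space is avoidable.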

We also consider the \emph{strong tracking} (or \emph{continuous monitoring}) variant of the distinct elements problem, where we want an algorithm that provides an approximation of the number of distinct elements seen so far, at all times in the stream. We show that this variant can be solved using $\mathcal{O}(\frac{\log \log n + \log \delta^{-1}}{\varepsilon^2} + \log n)$ bits of space, which we also show to be optimal.

Bio:

Jarosław Błasiok is a PhD student at Harvard University, advised by Jelani Nelson. He has broad interests ranging across various subareas of theoretical computer science and mathematics. His main research focus is streaming algorithms and related communication complexity lower bounds, but he has also made contributions in coding theory, differential privacy, and machine learning theory.

 


 

Beyond Kasteleyn: A New Polynomial-time Algorithm for Six-Vertex Models and a Complexity Trichotomy

Shuai Shao (University of Wisconsin-Madison)

We discover a new polynomial-time algorithm for the partition functions of six-vertex models on planar graphs, for parameter settings that had no known P-time algorithm before. These settings are distinct from those derivable from (either directly or by a reduction to) Kasteleyn’s algorithm for counting planar perfect matchings, which can be used for certain parameter settings. The new algorithm is obtained by a non-local connection to counting constraint satisfaction problems (#CSP) using topological means. We then prove that this algorithm together with Kasteleyn’s algorithm constitutes a complete list: they exhaust all polynomial-time computable six-vertex models on planar graphs (among those that are #P-hard on general graphs, assuming FP ≠ #P). Combined with a known classification on general graphs, we obtain the following exact complexity classification: for every setting of the parameters in C for the six-vertex model, the computation of the partition function is either (1) computable in polynomial time for every graph, or (2) #P-hard for general graphs but computable in polynomial time for planar graphs, or (3) #P-hard even for planar graphs. The classification has an explicit criterion. Problems of type (2) come in exactly two subtypes: one subtype consists of problems computable by Kasteleyn’s algorithm (possibly after a holographic transformation). The other subtype is obtained by our new algorithm and provably cannot be subsumed by the first subtype.

Bio:

Shuai is a third-year PhD student at UW-Madison. His research interests lie in theoretical computer science. Currently, he is working on the complexity classification of counting problems defined over graphs, especially those in the Holant framework. A central goal of this area is to prove dichotomy or trichotomy theorems. He received his B.Sc. degree in June 2014 from the University of Science and Technology of China.

 


 

Torpid Mixing of Markov Chains for the Six-vertex Model on Z^2

Tianyu Liu (University of Wisconsin-Madison)

The six-vertex model originates in statistical mechanics, as an abstraction of crystal lattices with hydrogen bonds, e.g. water ice. States of the six-vertex model on Z^2 are orientations of the edges satisfying the ice-rule — every vertex has two incoming edges and two outgoing edges — namely Eulerian orientations. There are six possible ways to arrange directions of the edges around a vertex, and hence the name of the six-vertex model. Since it was introduced by Linus Pauling in 1935, the six-vertex model has attracted enormous interest in many disciplines of science, and become one of the most fundamental models defined on the square lattice.

Markov chain Monte Carlo (MCMC) is the most popular tool for studying the six-vertex model. In a recent work, we investigate the mixing time of two widely used Markov chain algorithms for the six-vertex model, Glauber dynamics and the directed-loop algorithm, on the square lattice Z^2. We prove, for the first time, that on finite regions of the square lattice these Markov chains are torpidly mixing under parameter settings in the ferroelectric phase and the anti-ferroelectric phase. In this talk, I will outline the main results and techniques of this work.

Bio:

Tianyu Liu is a PhD candidate at the Computer Sciences Department of the University of Wisconsin-Madison, advised by Professor Jin-Yi Cai. His research interest lies in theoretical computer science, especially in the computational complexity of approximate counting and sampling.

 


 

A General Theory of Sample Complexity for Multi-Item Profit Maximization

Ellen Vitercik (CMU)

The design of profit-maximizing multi-item mechanisms is a notoriously challenging problem with tremendous real-world impact. The mechanism designer’s goal is to field a mechanism with high expected profit on the distribution over buyers’ values. Unfortunately, if the set of mechanisms he optimizes over is complex, a mechanism may have high empirical profit over a small set of samples but low expected profit. This raises the question: how many samples are sufficient to ensure that the empirically optimal mechanism is nearly optimal in expectation? We uncover structure shared by a myriad of pricing, auction, and lottery mechanisms that allows us to prove strong sample complexity bounds: for any set of buyers’ values, profit is a piecewise linear function of the mechanism’s parameters. We prove new bounds for mechanism classes not yet studied in the sample-based mechanism design literature and match or improve over the best known guarantees for many classes. The profit functions we study are significantly different from well-understood functions in machine learning, so our analysis requires a sharp understanding of the interplay between mechanism parameters and buyer values. This is based on joint work with Nina Balcan and Tuomas Sandholm.

Bio:

Ellen Vitercik is a PhD student in computer science at Carnegie Mellon University advised by Nina Balcan and Tuomas Sandholm. Her primary research interests are theoretical machine learning and computational economics. Her honors include a National Science Foundation Graduate Research Fellowship. She graduated summa cum laude and Phi Beta Kappa with a bachelor’s degree in math from Columbia University.

 


 

Provably Efficient Algorithms for Learning Simple Convolutional Neural Networks

Surbhi Goel (UT Austin)

Developing provably efficient algorithms for learning commonly used neural network architectures continues to be a core challenge in machine learning. The underlying difficulty arises from the highly non-convex nature of the optimization problems posed by neural networks. Obtaining provable guarantees for learning even very basic architectures remains open.

In the first part of the talk, I will present our algorithm “Convotron” for efficiently learning a one hidden layer convolutional network. Convotron uses a simple, iterative update rule that is stochastic in nature and tolerant to noise. In contrast to gradient descent, Convotron requires no special initialization or learning-rate tuning and converges to the global optimum. Additionally, we prove that our algorithm works for commonly used schemes from computer vision, including one-dimensional and two-dimensional “patch and stride” convolutions.
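
A stylized Python sketch of an update of this flavor (my paraphrase of the description above, with details simplified; the exact patch weighting and prediction rule in the paper may differ): for one filter with ReLU activations and average pooling, move the filter in the direction of the prediction error times the average patch, taking no gradient through the nonlinearity.

    import numpy as np

    def convotron_like_step(w, patches, y, lr):
        # patches: (k, d) array of the k patches of one example; y: label.
        y_hat = np.mean(np.maximum(patches @ w, 0.0))  # forward pass
        # Perceptron-style update: no special initialization or
        # learning-rate schedule is needed.
        return w + lr * (y - y_hat) * patches.mean(axis=0)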

In the second part of the talk, I will discuss a follow-up work that extends the above-mentioned guarantees to handle unknown weights in the second layer. This work combines ideas from landscape analysis with the Convotron algorithm to give a layer-by-layer recovery algorithm.

Joint work with Simon Du, Adam Klivans and Raghu Meka.


 

Yes, There is an Oblivious RAM Lower Bound!

Kasper Green Larsen (Aarhus University)

An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky [JACM’96], is a (possibly randomized) RAM for which the memory access pattern reveals no information about the operations performed. The main performance metric of an ORAM is the bandwidth overhead, i.e., the multiplicative factor of extra memory blocks that must be accessed to hide the operation sequence. In their seminal paper introducing the ORAM, Goldreich and Ostrovsky proved an amortized Omega(log n) bandwidth overhead lower bound for ORAMs with memory size n. Their lower bound is very strong in the sense that it applies to the “offline” setting, in which the ORAM knows the entire sequence of operations ahead of time.

However, as pointed out by Boyle and Naor [ITCS’16] in the paper “Is there an oblivious RAM lower bound?”, there are two caveats to the lower bound of Goldreich and Ostrovsky: (1) it only applies to “balls in bins” algorithms, i.e., algorithms where the ORAM may only shuffle blocks around and not apply any sophisticated encoding of the data, and (2) it only applies to statistically secure constructions. Boyle and Naor showed that removing the “balls in bins” assumption would result in super-linear lower bounds for sorting circuits, a long-standing open problem in circuit complexity. As a way of circumventing this barrier, they also proposed a notion of an “online” ORAM, which is an ORAM that remains secure even if the operations arrive in an online manner. They argued that most known ORAM constructions work in the online setting as well.

In this talk, we present an Omega(log n) lower bound on the bandwidth overhead of any online ORAM, even if we require only computational security and allow arbitrary representations of data, thus greatly strengthening the lower bound of Goldreich and Ostrovsky in the online setting. The lower bound applies to ORAMs with memory size n and any word size w >= log n. The bound therefore asymptotically matches the known upper bounds when w = Omega(log^2 n) or when m=Omega(n^eps) for any constant eps>0.

This work received the CRYPTO’18 Best Paper Award and is joint with Jesper Buus Nielsen from Aarhus University.

Bio:

Kasper Green Larsen (born 1986) is an Associate Professor at the Department of Computer Science at Aarhus University, Denmark. Kasper received his Ph.D. from Aarhus University in 2014. He is a member of the Young Academy under the Royal Danish Academy of Sciences and Letters. Kasper is also a recipient of the Hartmann Foundation’s Diploma prize (2017), the Aarhus University Ph.D. prize (2014), the Danish Minister of Science’s EliteForsk travel scholarship (2011), and a Google European Doctoral Fellowship in Search and Information Retrieval (2010). Kasper has received multiple best paper awards for his research: the CRYPTO’18 best paper award, the STOC’12 best paper and best student paper awards, and the FOCS’11 best student paper award. His research interests span many areas of theoretical computer science, including data structures, lower bounds, range searching, dimensionality reduction, streaming algorithms, and fine-grained complexity.



Selling Partially-Ordered Items: Exploring the Space between Homogeneous and Heterogeneous Mechanism Design

Ariel Schvartzman (Princeton University)

In this talk, I will present recent work that aims to characterize optimal mechanisms and worst-case instances for multi-item auctions that lie between fully homogeneous and fully heterogeneous settings.

We will begin with a quick overview of classical results in the AGT community: selling a single item or multiple identical items is simple, whereas optimally selling multiple distinct items is complex and unlikely to have a closed-form solution. In view of these results, [Fiat et al. 2016] introduced a setting where there are multiple items that are neither identical nor incomparable: they form a total ordering. They characterized optimal mechanisms and showed that the menu complexity of this problem is at most exponential in the number of types (and this was later shown by us to be tight [Saxena et al. 2017]). In upcoming work [Devanur et al. 2018], we go a step further and consider a setting where the items form a partial ordering. For this setting, we provide algorithms to find optimal mechanisms and exhibit instances for which, in the worst case, the menu complexity is unbounded but finite!

Together, these results paint a nice landscape: going from fully homogeneous to fully heterogeneous settings, the menu complexity goes from 1 (homogeneous) to exponential (totally ordered) to unbounded but finite (partially ordered) to infinite (heterogeneous). The techniques used to show upper and lower bounds for both intermediate settings come from Lagrangian duality, a framework that has become increasingly useful to the mechanism design community.

Bio:

Ariel is a fourth year graduate student at Princeton University working under the supervision of Matt Weinberg. His work relates to designing optimal or approximately optimal mechanisms in settings that are more complex than single-dimensional settings yet simple enough that some structure can be recovered. In particular, most of his work in this area revolves around the “menu complexity” of a problem, which is the least number of options a seller must present a bidder in order to extract optimal or nearly optimal revenue. He is also interested more broadly in tournament design, mechanism design for cryptocurrencies and online matching problems.

 


 

Linear Algebraic Analogues of the Graph Isomorphism Problem and the Erdős-Rényi Model

Yinan Li (Centrum Wiskunde & Informatica)

A classical difficult isomorphism testing problem is to test isomorphism of p-groups of class 2 and exponent p in time polynomial in the group order. It is known that this problem can be reduced to solving the alternating matrix space isometry problem over a finite field in time polynomial in the underlying vector space size. We propose an avenue of attack for the latter problem by viewing it as a linear algebraic analogue of the graph isomorphism problem. This viewpoint leads us to explore the possibility of transferring techniques for graph isomorphism to this long-believed bottleneck case of group isomorphism. More precisely, we devise an average-case efficient algorithm for the alternating matrix space isometry problem over a key range of parameters, inspired by the first average-case efficient algorithm for graph isomorphism [Babai, Erdős, Selkow, SIAM J Computing, 1980]. Our average-case analysis is performed in a random model of alternating matrix spaces in the vein of the Erdős–Rényi model of random graphs. For this, we develop a linear algebraic analogue of the classical individualisation technique, a technique belonging to a set of combinatorial techniques that have been critical for progress on the worst-case time complexity of graph isomorphism, but that was missing in the group isomorphism context. As a consequence of the main algorithm, we establish a weaker linear algebraic analogue of Erdős–Rényi’s classical result that most graphs have the trivial automorphism group.

The talk is based on a joint work (arXiv:1708.04501) with Youming Qiao.

Bio:

Yinan Li joined the Research Center for Quantum Software (QuSoft) and Centrum Wiskunde & Informatica (CWI), the Netherlands, as a postdoctoral researcher in 2018. He obtained his PhD at the Centre for Quantum Software and Information at the University of Technology Sydney, under the supervision of Prof. Runyao Duan and Prof. Youming Qiao. Yinan is broadly interested in quantum information and complexity theory, with emphases on entanglement theory, zero-error information theory, and combinatorial & algebraic isomorphism problems.

 


 

Prediction with a Short Memory

Vatsal Sharan (Stanford University)

We consider the problem of predicting the next observation given a sequence of past observations, and consider the extent to which accurate prediction requires complex algorithms that explicitly leverage long-range dependencies. Perhaps surprisingly, our positive results show that for a broad class of sequences, there is an algorithm that predicts well on average, and bases its predictions only on the most recent few observations together with a set of simple summary statistics of the past observations.

Specifically, we show that for any distribution over observations, if the mutual information between past observations and future observations is upper bounded by H, then a simple Markov model over the most recent H/epsilon observations can obtain KL error epsilon with respect to the optimal predictor with access to the entire past. For a Hidden Markov Model with n hidden states, H is bounded by log n, a quantity that does not depend on the mixing time, and we show that the trivial prediction algorithm based on the empirical frequencies of length-O(log n/epsilon) windows of observations achieves this error. We also show, based on computational complexity conjectures, that the sample complexity of this Markov model is nearly optimal among polynomial-time algorithms.
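
The "trivial" algorithm in question is easy to state; here is a minimal Python sketch of prediction from empirical frequencies of length-k windows, with k playing the role of O(log n / epsilon) (illustrative code, not from the paper):

    from collections import Counter, defaultdict

    def kgram_predictor(history, k):
        # Count which symbol followed each length-k window in the history.
        counts = defaultdict(Counter)
        for i in range(len(history) - k):
            counts[tuple(history[i:i + k])][history[i + k]] += 1

        def predict(last_k):
            # Empirical distribution of the next symbol given the last k.
            c = counts[tuple(last_k)]
            total = sum(c.values())
            return {sym: n / total for sym, n in c.items()} if total else {}

        return predict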

This is based on joint work (https://arxiv.org/abs/1612.02526) with Sham Kakade, Percy Liang and Greg Valiant.

Bio:

Vatsal Sharan is a graduate student at Stanford, advised by Greg Valiant. He is interested in machine learning theory, and particularly in algorithms and lower bounds for machine learning under resource constraints, such as memory constraints.



Dispersion for Data-Driven Algorithm Design, Online Learning, and Private Optimization

Travis Dick (CMU)

In this work we provide online and differentially private algorithms for maximizing the sum of piecewise Lipschitz functions, an important family of non-convex optimization problems. The discontinuities of these functions make them challenging to optimize online and privately, since even a small perturbation can lead to severe suboptimality. We introduce a sufficient and general condition on collections of piecewise Lipschitz functions, called dispersion, that enables strong online and private utility guarantees. When the functions are dispersed, we give matching upper and lower bounds on regret under full information, and regret bounds under bandit feedback. Similarly, we give matching upper and lower bounds on suboptimality when privately maximizing dispersed functions in the batch setting.

Our analysis applies to a variety of problems in algorithm configuration (parameter tuning) and computational economics. Under mild (or even no) assumptions, these problems satisfy dispersion. Specifically, we consider tuning the parameters of greedy algorithms for subset selection problems, such as the knapsack problem, and of SDP-rounding schemes for problems formulated as integer quadratic programs, such as max-cut and correlation clustering. We also analyze dispersion for multi-item second price auctions with reserves and multi-item posted price mechanisms.

This is joint work with Nina Balcan and Ellen Vitercik to appear in FOCS 2018.

Bio:

Travis Dick is a Ph.D. student in the Computer Science Department of Carnegie Mellon University, advised by Nina Balcan. He is interested in the theoretical foundations of machine learning, and especially in questions about the impact of machine learning on society. His recent research has focused on incorporating social values like privacy and fairness into machine learning algorithms and includes projects on envy-free classification, and differentially private clustering, parameter tuning, and pricing problems. He is also actively working on data-driven algorithm configuration/parameter tuning. He has worked on distributed learning, actor-critic methods for reinforcement learning, online learning, and label efficient multi-class learning.

 


 

A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization

Zhize Li (Tsinghua University)

Due to the increasing popularity of deep learning, nonconvex optimization has attracted significant attention. In this talk, I focus on the more general nonsmooth nonconvex case. First, I review several classical algorithms (e.g., proximal gradient descent (ProxGD), ProxSGD, ProxSVRG, etc.) and their convergence results for obtaining an $\epsilon$-accurate solution. Then, I show that our ProxSVRG+ algorithm recovers these existing convergence results and improves/generalizes them. In particular, ProxSVRG+ prefers a moderate minibatch size, which is more suitable for parallelism and statistical generalization, and it can further automatically exploit local curvature to obtain faster convergence.
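
For concreteness, here is the common scaffold shared by ProxGD/ProxSGD-type methods, specialized to $h = \lambda\|\cdot\|_1$ (a generic sketch; it omits the variance-reduction estimator that distinguishes ProxSVRG+):

    import numpy as np

    def soft_threshold(x, tau):
        # Proximal operator of tau * ||.||_1 (soft-thresholding).
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def prox_sgd_step(x, stoch_grad, lr, lam):
        # x_{t+1} = prox_{lr*h}(x_t - lr * g_t), where g_t is an unbiased
        # (minibatch) gradient estimate of the smooth nonconvex part.
        return soft_threshold(x - lr * stoch_grad, lr * lam)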

Bio:

Zhize Li is currently a 5th-year PhD student at the Institute for Interdisciplinary Information Sciences (IIIS), Tsinghua University (advisor: Prof. Jian Li). His research interests lie in the broad area of theoretical computer science and machine learning, in particular optimization algorithms, algorithms and data structures, and game theory.


 

Counting Hypergraph Colorings in the Local Lemma Regime

Chao Liao (Shanghai Jiao Tong University)

Hypergraph coloring is a classic and important topic in combinatorics. The Lovász local lemma implies that for $q$ colors and a $k$-uniform hypergraph $H$ with maximum degree $\Delta$, if $q > C\Delta^{\frac{1}{k-1}}$ for some constant $C$ then $H$ has at least one proper coloring. In this talk, we consider the problem of approximately counting proper colorings in $k$-uniform hypergraphs. Previously, the best result required a condition that is much stronger than the condition implied by the local lemma. We close this gap and obtain the first efficient algorithms for both approximate counting and almost uniform sampling in the local lemma regime.
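
To see where the $\Delta^{\frac{1}{k-1}}$ form comes from (a standard calculation, included for context): under a uniformly random coloring, a fixed hyperedge is monochromatic with probability $p = q^{1-k}$, and it intersects at most $d \le k(\Delta-1)$ other hyperedges, so the symmetric local lemma condition $e\,p\,(d+1) \le 1$ reads

$$e\,q^{1-k}\big(k(\Delta-1)+1\big) \le 1 \iff q^{k-1} \ge e\big(k(\Delta-1)+1\big),$$

which holds whenever $q \ge (ek\Delta)^{\frac{1}{k-1}}$, i.e., for $q > C\Delta^{\frac{1}{k-1}}$ with a suitable constant $C$.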

Joint work with Heng Guo, Pinyan Lu and Chihao Zhang.

Bio:

I am a PhD student at Shanghai Jiao Tong University since Fall 2016. I have a broad interest in various aspects of theoretical computer science and mathematics. Currently, I focus on approximate counting algorithms.


The Lovász number and beyond: when graph meets quantum

Runyao Duan (Baidu INC)

In 1979 Lovász introduced a novel upper bound on the zero-error capacity of a graph, thus resolving an open problem posed by Shannon in 1956. However, a complete interpretation of this Lovász number has remained a mystery since then. Motivated by quantum information theory, we continue to explore the behavior of this number in the context of zero-error communication. We first show a new property of this number under the strong graph product: for any graph G, we can always find another graph H so that the ratio between the independence number of GxH (strong product) and its Lovász number is arbitrarily close to 1. We then define a generalized Lovász number for non-commutative graphs, and find that it shares many properties with its classical counterpart. Finally, if we allow physical resources beyond quantum entanglement, such as no-signaling correlations, in communication, the classical Lovász number of a graph is exactly its zero-error classical capacity.
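
For reference, one standard SDP formulation of the Lovász number (textbook material, not specific to this talk) is

$$\vartheta(G) = \max\Big\{\sum_{i,j} B_{ij} : B \succeq 0,\ \mathrm{tr}(B)=1,\ B_{ij}=0 \text{ for all } \{i,j\}\in E(G)\Big\},$$

and it satisfies the sandwich $\alpha(G) \le \Theta(G) \le \vartheta(G)$, where $\alpha$ is the independence number and $\Theta$ the Shannon (zero-error) capacity.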

Bio:

Dr. Runyao Duan has been the Director of the Baidu Quantum Computing Institute since 7 March 2018. Earlier he was a Professor and the Founding Director of the Centre for Quantum Software and Information at the University of Technology Sydney. He obtained his BE and PhD, both from the Department of Computer Science and Technology at Tsinghua University. He has been working in the field of quantum computing since 2001, and is a winner of the 2006 China Computer Federation (CCF) Outstanding PhD Dissertation Award. He aims to become a SIMPLE and reliable person, and is in a quantum superposition of “Scientist, Interdisciplinary researcher, Marathon amateur, Professor, Leader, and Engineer”.



PPP-Completeness with Connections to Cryptography

Manolis Zampetakis (MIT)

The Polynomial Pigeonhole Principle (PPP) is an important subclass of TFNP with profound connections to the complexity of fundamental cryptographic primitives: collision-resistant hash functions and one-way permutations. In contrast to most of the other subclasses of TFNP, no complete problem was known for PPP. Our work identifies the first PPP-complete problem without any circuit or Turing machine given explicitly in the input, and thus we answer a longstanding open question from [Papadimitriou 1994]. Specifically, we show that constrained-SIS, a generalized version of the well-known Short Integer Solution problem (SIS) from lattice-based cryptography, is PPP-complete.
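
For context, the flavor of SIS and its pigeonhole structure (standard background, stated informally): given a uniformly random $A \in \mathbb{Z}_q^{n\times m}$, find a nonzero short vector $z$, e.g.

$$Az \equiv 0 \pmod{q}, \qquad z \in \{-1,0,1\}^m \setminus \{0\}.$$

When $m > n\log_2 q$, the map $x \mapsto Ax \bmod q$ on $\{0,1\}^m$ is compressing, so a collision $Ax = Ax'$ exists by pigeonhole and yields the short solution $z = x - x'$; this is why SIS-style problems sit so naturally inside PPP.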

In order to give intuition behind our reduction for constrained-SIS, we identify another PPP-complete problem with a circuit in the input but closely related to lattice problems. We call this problem BLICHFELDT and it is the computational problem associated with Blichfeldt’s fundamental theorem in the theory of lattices.

Building on the inherent connection of PPP with collision-resistant hash functions, we use our completeness result to construct the first natural hash function family that captures the hardness of all collision-resistant hash functions in a worst-case sense, i.e., that is natural and universal in the worst case. The close resemblance of our hash function family to SIS leads us to the first candidate collision-resistant hash function that is both natural and universal in an average-case sense.

Finally, our results enrich our understanding of the connections between PPP, lattice problems and other concrete cryptographic assumptions, such as the discrete logarithm problem over general groups.

This is a joint work with Katerina Sotiraki and Giorgos Zirdelis.

Bio:

My name is Emmanouil (Manolis) Zampetakis and I am a PhD student in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT). I am very fortunate to be advised by Constantinos Daskalakis! I joined the graduate program of MIT in September 2014, in the Theory of Computation Group at CSAIL. With my advisor Constantinos Daskalakis, I work on a wide range of problems in theoretical machine learning, learning theory, complexity theory, and algorithmic game theory. Before MIT, I was an undergraduate student at the Department of Electrical Engineering and Computer Science at the National Technical University of Athens, where I completed my Diploma degree. During my time there, I was fortunate to work with Dimitris Fotakis on mechanism design with verification and on scheduling MapReduce jobs. My research interests include: theoretical machine learning, computational statistics, computational complexity, sublinear algorithms, and algorithmic game theory. Website: http://www.mit.edu/~mzampet/.


Polar Codes for Compressing Hidden Markov Sources

Preetum Nakkiran (Harvard University)

Using a mild variant of polar codes, we design linear compression schemes for compressing Hidden Markov sources (where the source is a Markov chain, but its state is not necessarily observable from its output), and for decoding from Hidden Markov channels (where the channel has a state and the error introduced depends on the state). We give the first polynomial-time algorithms that manage to compress and decompress (or encode and decode) at input lengths that are polynomial both in the gap to capacity and in the mixing time of the Markov chain. Prior work achieved capacity only asymptotically in the limit of large lengths, and finite polynomial bounds were not available with respect to either the gap to capacity or the mixing time.

Our results operate in the setting where the source (or the channel) is known. If the source is unknown then compression at such short lengths would lead to effective algorithms for learning parity with noise — thus our results are the first to suggest a separation between the complexity of the problem when the source is known versus when it is unknown.

Joint work with Venkatesan Guruswami and Madhu Sudan.

Bio:

Preetum Nakkiran is a PhD student at Harvard, advised by Madhu Sudan. He did his undergrad in EECS at UC Berkeley. Preetum is generally interested in many areas of theory, science, and theory-of-science. He has worked in areas including coding theory, complexity, machine-learning theory, and machine-learning practice.

 


 

Matrix Product States for Quantum Stochastic Modeling

Chengran Yang (Nanyang Technological University)

In stochastic modeling, there has been a significant effort towards finding predictive models that predict a stochastic process’ future using minimal information from its past. Meanwhile, in condensed matter physics, matrix product states (MPS) are known as a particularly efficient representation of 1D spin chains. In this work, we associate each stochastic process with a suitable quantum state of a spin chain. We then show that the optimal predictive model for the process leads directly to an MPS representation of the associated quantum state. Conversely, MPS methods offer a systematic construction of the best known quantum predictive models. This connection yields an improved method for computing the quantum memory needed for generating optimal predictions. We prove that this memory coincides with the entanglement of the associated spin chain across the past/future bipartition.
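
For readers unfamiliar with the formalism, a matrix product state on $n$ sites with physical indices $s_i$ and bond dimension $D$ has the standard form

$$|\psi\rangle = \sum_{s_1,\ldots,s_n} \mathrm{tr}\big(A^{[1]}_{s_1} A^{[2]}_{s_2} \cdots A^{[n]}_{s_n}\big)\,|s_1 s_2 \cdots s_n\rangle,$$

where each $A^{[i]}_{s_i}$ is a $D\times D$ matrix; the bond dimension controls how much correlation the state can carry across any cut.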

Bio:

Chengran Yang is a PhD student in the School of Physical and Mathematical Sciences, Nanyang Technological University. He received his B.S. in math from Zhejiang University in 2014. He studied at the Institute for Interdisciplinary Information Sciences, Tsinghua University, from 2014 to 2016. His research interests include complex systems, tensor networks, and quantum machine learning.

 


 

Quantifying the Coherence of Operations

Thomas Theurer (Ulm University)

To describe certain facets of non-classicality, it is enlightening to quantify properties of operations instead of states. This is the case if one wants to quantify how well an operation detects non-classicality, which is a necessary prerequisite for its use in quantum technologies. To do this rigorously, we build resource theories on the level of operations, exploiting the concept of resource-destroying maps. We discuss the two basic ingredients of these resource theories: the free operations and the free super-operations, which are sequential and parallel concatenations with free operations. This leads to defining properties of functionals that are well suited to quantify the resources of operations. We introduce these concepts using the example of coherence. In particular, we present two measures quantifying the ability of an operation to detect, i.e. to use, coherence, and provide programs to evaluate them. One of them quantifies how well the map can be simulated by a stochastic process on the populations.

 

Bio:

Contact

Institute of Theoretical Physics and IQST, Universität Ulm

Albert-Einstein-Allee 11

D-89069 Ulm

Germany

thomas.theurer@uni-ulm.de

Education

02/2017 – present: Ph.D. student at Universität Ulm, advised by Prof. Martin B. Plenio

10/2011 – 12/2016: B.Sc. and M.Sc. in Physics at Universität Ulm

09/2013 – 12/2013: Exchange semester at Université Rennes 2 (France)

Research interests

Quantum Information

Quantum Resource Theories

Coherence Theory

Non-classicality

Publications

Theurer, T., Egloff, D., Zhang, L., Plenio, M. B. (2018). Quantifying the Coherence of Operations. arXiv:1806.07332.

Egloff, D., Matera, J. M., Theurer, T., Plenio, M. B. (2018). Of Local Operations and Physical Wires. Phys. Rev. X 8, 031005.

Theurer, T., Streltsov, A., Plenio, M. B. (2018). Stochastic Coherence Theory for Qubits. arXiv:1804.09467.

Theurer, T., Killoran, N., Egloff, D., Plenio, M. B. (2017). Resource Theory of Superposition. Phys. Rev. Lett. 119, 230401.