Home

Keynote Talks

Christos Papadimitriou – Berkeley

Computational Insights and the Theory of Evolution

I shall discuss recent work (much of it joint with biologists Adi Livnat and Marcus Feldman) on some central problems in Evolution that was inspired and informed by computational ideas. Considerations about the performance of genetic algorithms led to a novel theory on the role of sex in Evolution based on the concept of mixability. And a natural random process on Boolean functions can help us better understand Waddington’s genetic assimilation phenomenon, in which an acquired trait becomes genetic.

Multiplicative Updates in Coordination Games and the Theory of Evolution (Paper)
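For readers unfamiliar with the update rule named in the paper title above, here is a minimal sketch of a generic multiplicative-weights step (illustrative only; the paper's precise population-genetics model is more specific, and the function name here is hypothetical):

```python
# Generic multiplicative-weights update: each weight is scaled by a factor
# (1 + eta * gain) and the weights are renormalized, so options with higher
# gains grow multiplicatively. Illustrative sketch, not the paper's model.

def multiplicative_update(weights, gains, eta=0.1):
    updated = [w * (1 + eta * g) for w, g in zip(weights, gains)]
    total = sum(updated)
    return [w / total for w in updated]

# Example: three strategies with current frequencies and per-round gains.
print(multiplicative_update([0.25, 0.25, 0.5], gains=[1.0, 0.0, 0.5]))
```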

Vijay Vazirani – Georgia Tech

Dispelling an Old Myth about an Ancient Algorithm

Myth — and grapevine — has it that the Micali-Vazirani maximum matching algorithm is “too complicated”.

The purpose of this talk is to show the stunningly beautiful structure of minimum length alternating paths and to convince the audience that our algorithm exploits this structure in such a simple and natural manner that the entire algorithm can fit into one’s mind as a single thought.

For all practical purposes, the Micali-Vazirani algorithm, discovered in 1980, is still the most efficient known algorithm (for very dense graphs, slight asymptotic improvement can be obtained using fast matrix multiplication). Yet, no more than a handful of people have understood this algorithm; we hope to change this.

Based on the following two papers:

http://www.cc.gatech.edu/~vazirani/MV.pdf

http://www.cc.gatech.edu/~vazirani/Theory-of-matching.pdf

 

Eric Allender – Rutgers, The State University of New Jersey

The strange link between incompressibility and computational complexity

This talk centers around some audacious conjectures that attempt to forge firm links between computational complexity classes and the study of Kolmogorov complexity.

More specifically, let R denote the set of Kolmogorov-random strings. Two familiar complexity classes are:

* BPP: the class of problems that can be solved with negligible error in polynomial time, and

* NEXP: the class of problems solvable in nondeterministic exponential time.

Conjecture 1: NEXP = NP^R.

Conjecture 2: BPP is the class of problems non-adaptively polynomial-time reducible to R.

The first thing to note is that these conjectures are not only audacious; they are obviously false! R is not a decidable set, and thus it is absurd to suggest that the class of problems reducible to it constitutes a complexity class.

The absurdity fades if, for example, we interpret “NP^R” to be “the class of problems that are NP-Turing reducible to R, no matter which universal machine we use in defining Kolmogorov complexity”. The lecture will survey the body of work (some of it quite recent) that suggests that, when interpreted properly, the conjectures may actually be true.
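For concreteness, the standard definitions behind these statements (background, not new material from the talk): fixing a universal machine U, the Kolmogorov complexity C(x) is the length of a shortest program that makes U output x, and the set of Kolmogorov-random strings is R = { x : C(x) >= |x| }. As noted above, the conjectures should then be read as quantifying over all admissible choices of U.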

 

Talks

Richard Peng – CMU

Algorithm Design using Spectral Graph Theory

Spectral Graph Theory is the interplay between linear algebra and combinatorial graph theory. Laplace’s equation and its discrete form, the Laplacian matrix, appear ubiquitously in mathematical physics. Due to the recent discovery of very fast solvers for these equations, they are also becoming increasingly popular in combinatorial optimization, computer vision, computer graphics and machine learning.

This talk will give an overview of some of the applications, then describe the state-of-the-art algorithms for solving these linear systems. At the heart of these algorithms is a routine that efficiently finds a chain of similar but increasingly sparse graphs, which may be of independent interest.

Topics in this talk represent joint work with Guy Blelloch, Hui Han Chin, Anupam Gupta, Jon Kelner, Yiannis Koutis, Aleksander Madry, Gary Miller and Kanat Tangwongsan.
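As a toy illustration of the Laplacian systems mentioned above (assuming standard NumPy/SciPy; this is a generic sparse solve, not the fast solvers described in the talk):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import cg

# Adjacency matrix of a small path graph (hypothetical example).
A = sp.csr_matrix(np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float))

# Graph Laplacian L = D - A.
L = laplacian(A)

# A right-hand side with zero sum, so the singular system L x = b is consistent.
b = np.array([1.0, 0.0, 0.0, -1.0])

# Conjugate gradient applies because L is symmetric positive semidefinite.
x, info = cg(L, b)
print(x, info)
```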

 

Chris Wilkens – Berkeley

Single-Call Mechanisms

Truthfulness is fragile and demanding. It is oftentimes computationally harder than solving the original problem. Even worse, truthfulness can be utterly destroyed by small uncertainties in a mechanism’s outcome. One obstacle is that truthful payments depend precisely on outcomes other than the one realized, such as the lengths of non-shortest paths in a shortest-path auction. Single-call mechanisms are a powerful tool that circumvents this obstacle — they implicitly charge truthful payments, guaranteeing truthfulness in expectation using only the outcome realized by the mechanism. The cost of such truthfulness is a trade-off between the expected quality of the outcome and the risk of large payments.
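For orientation, in the single-parameter setting this dependence can be made explicit via Myerson's lemma (standard background, not a result of this paper): with the usual normalization that a zero bid pays zero, a monotone allocation rule $x_i$ is truthful exactly with payments $p_i(b) = b_i\, x_i(b) - \int_0^{b_i} x_i(z, b_{-i})\, dz$, which integrate the allocation over reports $z$ other than the realized bid $b_i$.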

We largely settle when and to what extent single-call mechanisms are possible. The first single-call construction was discovered by Babaioff, Kleinberg, and Slivkins [BKS10] in single-parameter domains. They give a transformation that turns any monotone, single-parameter allocation rule into a truthful-in-expectation single-call mechanism. Our first result is a natural complement to [BKS10]: we give a new transformation that produces a single-call VCG mechanism from any allocation rule for which VCG payments are truthful. Second, in both the single-parameter and VCG settings, we precisely characterize the possible transformations, showing that a wide variety of transformations are possible but that all take a very simple form. Finally, we study the inherent trade-off between the expected quality of the outcome and the risk of large payments. We show that our construction and [BKS10] simultaneously optimize a variety of metrics in their respective domains.

Our study is motivated by settings where uncertainty in a mechanism renders other known techniques untruthful. As an example, we analyze pay-per-click advertising auctions, where the truthfulness of the standard VCG-based auction is easily broken when the auctioneer’s estimated click-through-rates are imprecise.

 

Alistair Stewart – University of Edinburgh

Polynomial time algorithms for Branching Markov (Decision) Processes

We give polynomial-time algorithms for approximating the extinction probability of multi-type branching processes, and the optimal extinction probability for branching Markov decision processes, where the controller is trying to either maximize or minimize the extinction probability. That is, we can approximate these probabilities to within ϵ > 0 in time polynomial in log(1/ϵ) and the encoding size of the model.

Branching processes (BPs) are a classic family of stochastic processes, with applications in several fields (including population biology and nuclear physics). Branching Markov Decision Processes (BMDPs) are a generalisation of multi-type BPs, and form a family of infinite-state MDPs.

These models give rise to systems of equations – a system of probabilistic polynomials for BPs, and a system of (Bellman) equations containing max or min, as well as probabilistic polynomials, for BMDPs. Computing their optimal extinction probabilities to within the desired precision is equivalent to computing the least fixed-point solution of the corresponding equations. We also consider some applications of these results to stochastic context-free grammars.
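A minimal value-iteration sketch of the fixed-point view described above, for a single-type branching process (hypothetical numbers; note that plain iteration does not by itself give the polynomial dependence on log(1/ϵ) that the talk establishes):

```python
# The extinction probability of a branching process is the least fixed point
# of its probability generating function f(s) = sum_k p_k * s^k; iterating
# s <- f(s) from s = 0 converges to it from below.

def extinction_probability(offspring_probs, iterations=10000):
    """offspring_probs[k] = probability that an individual has k children."""
    s = 0.0
    for _ in range(iterations):
        s = sum(p * s**k for k, p in enumerate(offspring_probs))
    return s

# Example: p0 = 0.25, p1 = 0.25, p2 = 0.5 (mean offspring 1.25).
print(extinction_probability([0.25, 0.25, 0.5]))  # ~0.5
```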

Gil Cohen – Weizmann

Non-Malleable Extractors and Applications to Privacy Amplification

Motivated by the classical problem of privacy amplification, Dodis and Wichs (STOC ’09) introduced the notion of a non-malleable extractor, significantly strengthening the classical notion of a strong extractor. A non-malleable extractor is an efficiently computable function that takes as input two binary strings: a sample from a weak source and an independent seed S. The output of the extractor is a string that is nearly uniform given the seed S as well as the output of the extractor computed on any seed S’ different from S, where S’ may be chosen as an arbitrary function of S.
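In symbols (the standard formalization from Dodis and Wichs, with parameters suppressed): nmExt : {0,1}^n × {0,1}^d → {0,1}^m is a non-malleable extractor for min-entropy k if, for every source X with min-entropy at least k and every function A with A(s) ≠ s for all s, the distribution (nmExt(X,S), nmExt(X,A(S)), S) is statistically close to (U_m, nmExt(X,A(S)), S), where S is a uniformly random seed.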

In this talk we introduce the notion of a non-malleable extractor, discuss its motivating application, and present an explicit construction of a non-malleable extractor.

The talk is based on joint work with Ran Raz and Gil Segev. No prior knowledge is assumed.

 

Sanjam Garg – UCLA

Adaptively Secure Multi-Party Computation with Dishonest Majority

Secure multi-party computation (MPC) allows a group of n mutually distrustful parties to jointly compute any functionality f in such a manner that the honest parties obtain correct outputs and no group of malicious parties learns anything beyond their inputs and prescribed outputs. In this setting we can consider an adversary that adaptively corrupts parties throughout the protocol execution, depending on its view during the execution. Canetti, Feige, Goldreich and Naor [STOC96] constructed the first adaptively secure MPC protocol; however, it assumes the presence of an honest majority. The question of doing so in the setting of a dishonest majority was left open.

It is a folklore belief that this question can be resolved by composing known protocols from the literature. We show that this belief is, in fact, fundamentally mistaken. We prove that no round-efficient protocol can adaptively securely realize a (natural) n-party functionality with a black-box simulator. Additionally, we show that this impossibility result can be circumvented using non-black-box techniques.

(Joint work with Amit Sahai)

 

Karl Bringmann – MPI Saarbrucken

Efficient Sampling Methods for Discrete Distributions

We study the fundamental problem of the exact and efficient generation of random values from a finite and discrete probability distribution. Suppose that we are given n distinct events with associated probabilities p_1,…,p_n. We consider the problem of sampling a subset, which includes the i-th event independently with probability p_i, and the problem of sampling from the distribution, where the i-th event has a probability proportional to p_i. For both problems, and for two different classes of inputs — sorted and general probabilities — we present efficient preprocessing algorithms that allow for asymptotically optimal querying, and we prove (almost) matching lower bounds on their complexity.
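As background for the proportional-sampling variant described above, here is Walker's classical alias method, a textbook baseline with O(n) preprocessing and O(1) time per query (this is not the preprocessing/query trade-off studied in the talk):

```python
import random

# Walker's alias method: O(n) preprocessing, O(1) time per sample, drawing
# index i with probability proportional to weights[i]. A classical baseline,
# not the algorithms of the talk.

def build_alias_table(weights):
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias):
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

prob, alias = build_alias_table([0.1, 0.2, 0.3, 0.4])
counts = [0] * 4
for _ in range(100000):
    counts[sample(prob, alias)] += 1
print(counts)  # roughly proportional to the weights
```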

Li-Yang Tan – Columbia

Attribute-efficient learning and weight-degree tradeoffs for polynomial threshold functions

We study the challenging problem of learning decision lists attribute-efficiently, giving both positive and negative results.

Our main positive result is a new tradeoff between the running time and mistake bound for learning length-k decision lists over n Boolean variables. When the allowed running time is relatively high, our new mistake bound improves significantly on the mistake bound of the best previous algorithm of Klivans and Servedio.

Our main negative result is a new lower bound on the weight of any degree-d polynomial threshold function (PTF) that computes a particular decision list over k variables. This lower bound establishes strong limitations on the effectiveness of the Klivans and Servedio approach and suggests that it may be difficult to improve on our positive result. The main tool used in our lower bound is a new variant of Markov’s classical inequality which may be of independent interest; it provides a bound on the derivative of a univariate polynomial in terms of both its degree and the size of its coefficients.

Joint work with Rocco Servedio and Justin Thaler

 

 

Kasper Green Larsen – Aarhus University

The Cell Probe Complexity of Dynamic Range Counting

In this talk we present a new technique for proving lower bounds on the update time and query time of dynamic data structures in the cell probe model. With this technique, we prove the highest lower bound to date for any explicit problem, namely a lower bound of $t_q=\Omega((\lg n/\lg(wt_u))^2)$. Here $n$ is the number of update operations, $w$ the cell size, $t_q$ the query time and $t_u$ the update time. In the most natural setting of cell size $w=\Theta(\lg n)$, this gives a lower bound of $t_q=\Omega((\lg n/\lg \lg n)^2)$ for any polylogarithmic update time. This bound is almost a quadratic improvement over the highest previous lower bound of $\Omega(\lg n)$, due to Patrascu and Demaine [SICOMP'06].

We prove our lower bound for the fundamental problem of weighted orthogonal range counting. In this problem, we are to support insertions of two-dimensional points, each assigned a $\Theta(\lg n)$-bit integer weight. A query to this problem is specified by a point $q=(x,y)$, and the goal is to report the sum of the weights assigned to the points dominated by $q$, where a point $(x',y')$ is dominated by $q$ if $x' < x$ and $y' < y$. In addition to being the highest cell probe lower bound to date, our lower bound is also tight for data structures with update time $t_u = \Omega(\lg^{2+\epsilon}n)$, where $\epsilon>0$ is an arbitrarily small constant.
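To make the query semantics concrete, here is a brute-force version of the weighted dominance query (an O(n) scan per query; the talk, of course, concerns lower bounds for data structures that do far better while also supporting insertions):

```python
# Brute-force weighted orthogonal range (dominance) counting: sum the weights
# of all points strictly dominated by the query point. Illustration only.

def dominance_sum(points, q):
    """points: iterable of (x, y, weight); q: query point (qx, qy)."""
    qx, qy = q
    return sum(w for x, y, w in points if x < qx and y < qy)

print(dominance_sum([(1, 1, 5), (2, 3, 7), (4, 4, 2)], (3, 4)))  # 12
```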

This work was presented at STOC’12 where it received both the Best Paper Award and the Best Student Paper Award.

Bruno Loff – CWI

Hardness of approximation for the knapsack problem

It is well known that the knapsack problem has a fully polynomial-time approximation scheme: in particular, if eps >= 1/poly(n), then we can approximate the optimal solution with error ratio (1 + eps) in polynomial time. In 30 minutes we will prove that, under sufficient hardness assumptions, the knapsack problem cannot be approximated any better. For example, under the exponential-time hypothesis, there is no polynomial-time algorithm that approximates knapsack with error ratio 1 + eps, where eps < 2^(-(log n)^3).
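For reference, a sketch of the classical FPTAS that this hardness result complements (standard profit-scaling dynamic programming; function and variable names here are illustrative):

```python
# Knapsack FPTAS sketch: round profits down to multiples of K = eps*vmax/n,
# run the exact DP over scaled profits, and return (a lower bound on) the
# value achieved. Guarantees value at least (1 - eps) * OPT.

def knapsack_fptas(values, weights, capacity, eps):
    n = len(values)
    K = eps * max(values) / n
    scaled = [int(v // K) for v in values]
    V = sum(scaled)
    INF = float("inf")
    # min_weight[p] = minimum weight needed to reach scaled profit exactly p
    min_weight = [0] + [INF] * V
    for i in range(n):
        for p in range(V, scaled[i] - 1, -1):
            if min_weight[p - scaled[i]] + weights[i] < min_weight[p]:
                min_weight[p] = min_weight[p - scaled[i]] + weights[i]
    best = max(p for p in range(V + 1) if min_weight[p] <= capacity)
    return best * K

print(knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.1))  # 220.0
```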

 

Klim Efremenko – Tel Aviv University

From Irreducible Representations to Locally Decodable Codes

A q-query Locally Decodable Code (LDC) is an error-correcting code that allows one to recover any particular symbol of the message by reading only q symbols of the codeword.

In this talk we present a new approach for the construction of LDCs.

We show that if there exist an irreducible representation (\rho, V) of a group G and q elements g_1, g_2, …, g_q in G such that some linear combination of the matrices \rho(g_i) has rank one, then we can construct a q-query Locally Decodable Code C: V -> F^G.

We show the potential of this approach by constructing constant query LDCs of sub-exponential length matching the parameters of the best known constructions.

In this talk we will not assume any previous knowledge in representation theory.

 

Yuli Ye – University of Toronto

A Fairy Tale of Greedy Algorithms

Despite their wide applications in practice, greedy algorithms have received little attention in the theory community. Some consider them too easy (or is it too hard?) to be analyzed; some believe that they are too simple and almost never give a good performance guarantee; some think there are always better alternatives.

In this talk, we will provide some insights for designing a good greedy algorithm, explore the flexibility and power of this algorithmic paradigm, and show you the beauty and mystery of greedy algorithms. There will be no proofs in this talk, but plenty of examples. In particular, we are going to give greedy algorithms for vertex cover, subset sum and the max-sum diversification problem, with different approximation guarantees.
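As one textbook example in the spirit of the talk (not necessarily one of the algorithms presented in it), here is the classic greedy 2-approximation for vertex cover:

```python
# Greedy 2-approximation for vertex cover: repeatedly take an uncovered edge
# and add both of its endpoints. Every edge gets covered, and the cover is at
# most twice the optimum, since the chosen edges form a matching.

def greedy_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# On the path 1-2-3-4 this returns {1, 2, 3, 4}, while the optimum is {2, 3}.
print(greedy_vertex_cover([(1, 2), (2, 3), (3, 4)]))
```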

 

Chengu Wang – Tsinghua University

Communication Complexity of Some Graph Problems

Let’s consider the following communication complexity problem. Alice and Bob each hold a graph on the same vertex set. They want to communicate as little as possible in order to compute some properties of the union of the two graphs. The graph properties we will discuss are connectivity, bipartiteness, girth, and approximation of the clique number. I will show non-trivial randomized lower bounds for these problems. They also imply space lower bounds for randomized algorithms in the data streaming model. The lower bounds for connectivity, bipartiteness and girth are proved by reductions from the cycle-counting problem, which counts the number of cycles in the product of two permutations. The clique number approximation problem is reduced from the set-disjointness problem, using combinatorial constructions.

This is joint work with Magnus M. Halldorsson, Xiaoming Sun, Mario Szegedy and Wei Yu.

Hao Song – Tsinghua University

Space-Bounded Communication Complexity

While communication complexity is a popular tool for proving lower bounds in diverse areas of computational complexity, it is a well-known fact that no super-linear lower bound can be proven in the classical communication complexity model, and the lower bounds it implies in other areas suffer somewhat from this limitation.

Putting a space bound on the classical communication complexity model sounds like a possible way to overcome this limitation while preserving the generality of the original model. Lam et al. and Beame et al. explored this possibility more than two decades ago by adapting techniques from research on arithmetic/Boolean circuits and branching programs to the more powerful communication setting, and achieved several space-communication results.

In this talk, we will take a different approach. We will start with a conceptually simple and purely information-theoretic model, and explore its variants. We will look at some of the basic questions in this model, such as memory hierarchy theorems. We will show some incomputability results for specific non-Boolean functions. And we will show that, by studying how memory is used in the communication/computation process, we can prove some interesting differences in the way problems like equality and inner product are solved in the space-bounded communication model.

This talk is based on joint work with Joshua Brody, Shiteng Chen, Periklis Papakonstantinou and Xiaoming Sun.

Chris Beck – Princeton

Time Space Tradeoffs in Proof Complexity

For modern SAT solvers based on DPLL with clause learning, the two major bottlenecks are the time and memory used by the algorithm. This raises the question of whether the memory bottleneck is inherent to Resolution-based approaches or an artifact of the particular search heuristics currently used in practice. There is a well-known correspondence between these algorithms and the Resolution proof system, in which these resources correspond to the length and space of proofs. While every tautology has a linear-space proof, this proof is in general of exponential size, raising the issue of size-space tradeoffs: perhaps, in high space, there is a short proof, but with constrained space only much longer proofs exist. Space complexity and time-space tradeoffs have been the subject of much recent work in proof complexity, but until our recent work no such bound applied to super-linear amounts of space. In recent work with Paul Beame and Russell Impagliazzo, we obtained strong time-space tradeoff lower bounds beyond the linear space threshold — for instance, for each k, we give a sequence of formulas with a polynomial-size proof using space n^k, but for which any proof in space n^{k/2} has superpolynomial size. Thus, on these instances, if you want to run in polynomial time, you need a large polynomial amount of space. In fact, we obtain formulas at all levels of hardness for which the proof space must be within a polynomial factor of the optimal proof size, or else the proof size blows up quasipolynomially.

This result was recently simplified and extended in work by myself, Jakob Nordström, and Bangsheng Tang; among other results, we obtain the same tradeoffs in Polynomial Calculus Resolution. Polynomial Calculus Resolution is an extension of Resolution relevant to algebraic SAT solvers using Gröbner basis methods; our results imply that there are some formulas whose time-space tradeoff characteristics are not significantly improved by adding algebraic reasoning on top of Resolution in this way.

 

Aris Filos-Ratsikas – Aarhus University

An Improved 2-Agent Kidney Exchange Mechanism

We study a mechanism design version of matching computation in graphs that models the game played by hospitals participating in pairwise kidney exchange programs. We present a new randomized matching mechanism for two agents which is truthful in expectation and has an approximation ratio of 3/2 to the maximum cardinality matching. This is an improvement over a recent upper bound of 2 [Ashlagi et al., EC 2010] and, furthermore, our mechanism is the first to beat the lower bound on the approximation ratio of deterministic truthful mechanisms. We complement our positive result with new lower bounds. Among other statements, we prove that the weaker incentive compatibility property of truthfulness in expectation used by our mechanism is necessary; universally truthful mechanisms that have an inclusion-maximality property have an approximation ratio of at least 2.

Joint work with Ioannis Caragiannis and Ariel D. Procaccia


Rasmus Ibsen-Jensen – Aarhus University

A Faster Algorithm for Solving One-Clock Priced Timed Games

One-clock priced timed games form a class of two-player, zero-sum, continuous-time games that was defined and thoroughly studied in previous works. We show that one-clock priced timed games can be solved in time m 12^n n^O(1), where n is the number of states and m is the number of actions. The best previously known time bound for solving one-clock priced timed games was 2^O(n^2+m), due to Rutkowski. For our improvement, we introduce and study a new algorithm for solving one-clock priced timed games, based on the sweep-line technique from computational geometry and the strategy iteration paradigm from the algorithmic theory of Markov decision processes. As a corollary, we also improve the analysis of previous algorithms due to Bouyer, Cassez, Fleury, and Larsen; and Alur, Bernadsky, and Madhusudan.

 

The talk will focus on the class of simple priced timed games, which is equivalent to priced timed games up to a polynomial factor.

 

Jan Bulanek – Czech Academy of Sciences

Tight lower bounds for the online labeling and file maintenance problems

The online labeling problem is a task where the input is a stream of N items with a defined linear ordering, and you have to assign one of M integer labels to each item so that the ordering of the labels respects the ordering of the items. However, since you do not know the stream in advance, it may sometimes be necessary to reassign some of the already assigned labels. The goal is to make as few reassignments as possible. In the file maintenance problem, M=O(N) and the items, instead of being labeled, are inserted into an array which is kept sorted.
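To make the problem concrete, here is a naive strategy (illustrative only, and far from the optimal algorithms the lower bounds below are compared against): give each new item the midpoint label between its neighbors, and when no label is available, spread all labels out evenly and count the resulting reassignments.

```python
import bisect

# Naive online labeling: items arrive one by one and are kept in sorted order;
# each gets the midpoint of its neighbors' labels in [1, M], and when two
# neighboring labels are adjacent we relabel everything evenly and count it.

def online_labeling(stream, M):
    items, labels, relabelings = [], [], 0
    for x in stream:
        pos = bisect.bisect_left(items, x)
        lo = labels[pos - 1] if pos > 0 else 0
        hi = labels[pos] if pos < len(labels) else M + 1
        items.insert(pos, x)
        if hi - lo > 1:
            labels.insert(pos, (lo + hi) // 2)
        else:
            labels.insert(pos, 0)  # placeholder, fixed by the global relabel
            n = len(items)
            labels = [(i + 1) * M // (n + 1) for i in range(n)]
            relabelings += n
    return relabelings

print(online_labeling(range(100, 0, -1), M=1024))
```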

In this talk we present our recent results, in which we found tight lower bounds for all M. For example, in the most interesting case, when M=O(N), we obtain the tight lower bound Ω(N log^2 N); for M=O(N^{1+c}), where c>1, we obtain Ω(N log N).

This is joint work with M. Babka, V. Čunát, M. Koucký and M. Saks.

 

Radu Curticapean – Universität des Saarlandes

Parameterized counting complexity: Weighted matchings and parameterized holographic reductions

After a general introduction to the complexity of parameterized counting problems, we take a step towards proving the conjecture that counting matchings of size k is hard for the parameterized counting class #W[1]. More precisely, we show that weighted counting of k-matchings is #W[1]-hard in edge-weighted graphs if the weight of a matching M is given by the product of the weights of the edges in M.

Next, we give a brief introduction to the field of holographic algorithms and observe that holographic reductions, which were introduced for the study of classical counting problems, also admit applications in parameterized counting complexity. As an example of such an application, we obtain a strange correspondence between matchings and path-cycle covers of cubic graphs.

Pablo Azar – MIT

Rational Proofs

We study a new type of proof system, where an unbounded prover and a polynomial-time verifier interact, on input a string x and a function f, so that the verifier may learn f(x). The novelty of our setting is that there are no longer “good” or “malicious” provers, but only rational ones. In essence, the verifier has a budget c and gives the prover a reward r ∈ [0,c] determined by the transcript of their interaction; the prover wishes to maximize his expected reward, and his reward is maximized only if the verifier correctly learns f(x). Rational proof systems are as powerful as their classical counterparts for polynomially many rounds of interaction, but are much more powerful when we only allow a constant number of rounds. Indeed, we prove that if f ∈ #P, then f is computable by a one-round rational Merlin-Arthur game, where, on input x, Merlin’s single message actually consists of sending just the value f(x). Further, we prove that CH, the counting hierarchy, coincides with the class of languages computable by a constant-round rational Merlin-Arthur game. Our results rely on a basic and crucial connection between rational proof systems and proper scoring rules, a tool developed to elicit truthful information from experts.
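As a small illustration of the scoring-rule connection mentioned above (a generic sketch of why proper scoring rules elicit probabilities truthfully, not the paper's exact protocol):

```python
# Brier's quadratic scoring rule for a binary event: a prover who reports
# probability q and is rewarded according to the realized outcome maximizes
# expected reward only by reporting the true probability.

def brier_reward(q, outcome):
    return 2 * (q if outcome else 1 - q) - q**2 - (1 - q)**2

def expected_reward(q, p):
    return p * brier_reward(q, 1) + (1 - p) * brier_reward(q, 0)

p_true = 0.3
best_q = max((q / 1000 for q in range(1001)), key=lambda q: expected_reward(q, p_true))
print(best_q)  # 0.3: truthful reporting maximizes the expected reward
```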

This is joint work with Silvio Micali.

Ruta Mehta – IIT Bombay

Efficient Nash Equilibrium Computation in Two Player Rank-1 Games

A game with two players, each with finitely many strategies, can be represented by two payoff matrices (A,B): a bimatrix game. Kannan and Theobald (2007) defined a rank-based hierarchy of bimatrix games, where the rank of a game (A,B) is defined as rank(A+B). Rank-0 (zero-sum) games form the base of this hierarchy, and a Nash equilibrium of such a game can be computed in polynomial time. Solving a rank-1 game, the smallest extension of rank-0, can be formulated as a rank-1 QP, for which no polynomial-time algorithm was known. Moreover, solving a rank-1 QP is known to be NP-hard in general.
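For reference (standard definitions, stated here for readers new to bimatrix games): a mixed Nash equilibrium of (A,B) is a pair of mixed strategies (x,y) such that x^T A y >= x'^T A y for every row strategy x' and x^T B y >= x^T B y' for every column strategy y'; zero-sum games are exactly those with B = -A, i.e., rank(A+B) = 0.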

Given a rank-1 bimatrix game, we construct a suitable linear subspace of the rank-1 game space and characterize the structure of its Nash equilibrium correspondence. Using this characterization we give the first polynomial time algorithm for computing an exact Nash equilibrium of a rank-1 bimatrix game. This settles an open question posed by Kannan and Theobald (2007). In addition, we give a novel algorithm to enumerate all the Nash equilibria of a rank-1 game and show that a similar technique may also be applied to find a Nash equilibrium of any bimatrix game.

Our approach provides new proofs of important classical results such as the existence and oddness of Nash equilibria, and the index theorem (Shapley 1976) for bimatrix games. Further, we extend the rank-1 characterization to a fixed-rank game space, and give a fixed-point formulation on [0,1]^k for solving a rank-k game. The fixed-point formulation is piecewise linear with some monotonicity properties, and may provide insights into the complexity of fixed-rank games.

This talk is based on joint work with Bharat Adsul, Jugal Garg and Milind Sohoni.

Jugal Garg – IIT Bombay

Linear Complementarity Problem and Market Equilibria

Linear complementarity problems (LCPs) arise in engineering, economics and optimization due to their ability to capture interesting cases of quadratic programming: not only are LCPs powerful enough to capture tractable problems such as linear programming and convex quadratic programming, but several hard problems, such as Nash equilibrium in 2-player games and Knapsack, can also be formulated as LCPs. In spite of their ability to capture very general problems, there is a rich mathematical theory around them: in particular a method due to Lemke aimed at solving any LCP. The trouble with Lemke’s method is that it may not terminate in general, let alone in polynomial time, and hence it can be dismissed at first glance. However, I will show that for the problem of computing equilibrium prices in several markets, for which one can derive LCP-like formulations, the insights from this machinery can be used to obtain novel structural and complexity-theoretic results and, surprisingly, extremely efficient algorithms in various settings.
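For reference (the standard form, not specific to this talk): an LCP is given by a matrix M in R^{n x n} and a vector q in R^n, and asks for a vector z >= 0 such that y = Mz + q >= 0 and z^T y = 0, i.e., z and y are componentwise nonnegative and complementary.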

No background in either LCPs or market equilibrium will be assumed.

This talk is based on joint work with Ruta Mehta, Milind Sohoni, Vijay Vazirani and Nisheeth Vishnoi.

Kevin Lewi – Stanford

Preventing Unraveling in Social Networks: The Anchored k-Core Problem

We consider a model of user engagement in social networks, where each player incurs a cost of remaining engaged but derives a benefit proportional to the number of its engaged neighbors. The natural equilibrium of this model corresponds to the k-core of the social network — the maximal induced subgraph with minimum degree at least k.

We study the problem of “anchoring” a small number of vertices to maximize the size of the corresponding anchored k-core — the maximal induced subgraph in which every non-anchored vertex has degree at least k. This problem corresponds to preventing “unraveling” — a cascade of iterated withdrawals. We provide polynomial-time algorithms for general graphs with k=2, and for bounded-treewidth graphs with arbitrary k. We prove strong non-approximability results for general graphs and k >= 3.
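A straightforward peeling sketch for the definitions above (hypothetical names; given a fixed anchor set it computes the anchored k-core, whereas the optimization problem in the talk is choosing which vertices to anchor):

```python
from collections import deque

# Compute the anchored k-core: repeatedly delete non-anchored vertices whose
# degree has dropped below k; anchored vertices are never deleted.

def anchored_k_core(adj, k, anchors):
    """adj: dict mapping each vertex to the set of its neighbors."""
    alive = set(adj)
    degree = {v: len(adj[v]) for v in adj}
    queue = deque(v for v in adj if v not in anchors and degree[v] < k)
    while queue:
        v = queue.popleft()
        if v not in alive:
            continue
        alive.remove(v)
        for u in adj[v]:
            if u in alive:
                degree[u] -= 1
                if u not in anchors and degree[u] < k:
                    queue.append(u)
    return alive

# A 4-cycle {1,2,3,4} with a pendant vertex 5 attached to vertex 1:
adj = {1: {2, 4, 5}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}, 5: {1}}
print(anchored_k_core(adj, k=2, anchors=set()))  # {1, 2, 3, 4}: vertex 5 unravels
print(anchored_k_core(adj, k=2, anchors={5}))    # {1, 2, 3, 4, 5}: anchoring 5 keeps it
```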

 

Pavel Hubacek – Aarhus University

Cryptographic Cheap Talk

We contribute to the study of the value of cheap talk in two-party games. Cheap talk consists of actions which do not have any direct influence on the outcome of the game, except that they serve as signals to the other agent to facilitate coordination. This topic is closely related to implementing correlated equilibria of two-party normal-form games using cryptography, one of the first problems studied within the scope of rational cryptography. The original approach by Dodis et al., however, achieves only a computational Nash equilibrium, a weak solution concept in the context of cryptographic protocols modeled as extensive-form games. More recent work of Gradwohl et al. explicitly treats the insufficiency of Nash equilibria, but it is restricted to a very limited class of correlated equilibria. We extend the work of Gradwohl et al. to implement other interesting correlated equilibria using techniques from rational secret sharing.

This is joint work with Jesper Buus Nielsen.