Talks and Abstracts

Keynote Talks

Dana Ron – Tel Aviv University

Sublinear algorithms for approximating graph parameters

Giulio Chiribella – Tsinghua University

Quantum Replication at the Heisenberg Limit

No process in nature can perfectly clone an arbitrary quantum state. But is it possible to engineer processes that replicate quantum information with vanishingly small error? Here we demonstrate the possibility of probabilistic super-replication phenomena, where N equally prepared quantum systems are transformed into a much larger number M of nearly perfect replicas, with an error that rapidly vanishes whenever M is small compared to the square of N. The quadratic replication rate is the ultimate limit imposed by quantum mechanics and is fundamentally linked with the Heisenberg limit of quantum metrology.

John Steinberger – Tsinghua University

Information-Theoretic Indistinguishability and Patarin’s H-coefficient Technique

We will discuss the basic problem of upper bounding the advantage of an information-theoretic distinguisher D at distinguishing between two oracles A and B. (Simplest example: A is a random permutation of {0,1}^n that can only be queried in the forward direction, while B is a random *function* from {0,1}^n to {0,1}^n. Etc.) In general, D is adaptive and can ask some prescribed number of queries q to its oracle, and the objective is to upper bound D’s distinguishing advantage as a function of q. We will very briefly review the popular game-playing technique in this context, and then move on to the lesser-known (so-called) “H-coefficient technique” pioneered by Patarin. The H-coefficient technique is the current record-holder for various state-of-the-art bounds. We will illustrate the technique in the specific case of key-alternating ciphers, for which tight security bounds were only recently obtained by myself and Shan Chen. No prerequisites.
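For orientation, the quantity to be bounded is the standard distinguishing advantage

    $\mathrm{Adv}(D) = \left| \Pr[D^A = 1] - \Pr[D^B = 1] \right|$,

and for the permutation-versus-function example above, a classical birthday-style argument shows that any q-query distinguisher achieves advantage at most $q(q-1)/2^{n+1}$; bounds of this shape are exactly what the techniques in the talk are designed to prove.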

 

Kousha Etessami – University of Edinburgh

The complexity of analyzing infinite-state recursive Markov chains, Markov decision processes, and stochastic games

I will discuss recent results on the computational complexity of key analysis problems for several important classes of finitely-presented, countably infinite-state, recursive Markov chains, Markov decision processes, and stochastic games. Such recursive stochastic models arise in a variety of fields, and have been studied by various different communities. Examples of recursive stochastic models include multi-type branching processes, stochastic context-free grammars, quasi-birth-death processes, backbutton processes, and probabilistic pushdown & one-counter automata.

I will, in particular, discuss recent joint works with Alistair Stewart and Mihalis Yannakakis (in papers that appeared at STOC’12, ICALP’12, ICALP’13, and CAV’13) in which we have obtained the first polynomial time algorithms for some of the central problems in this area.

A number of key problems in this area can be boiled down to computing/approximating the least-fixed-point (LFP) solution of certain multi-variate monotone polynomial (min/max) systems of equations. Our algorithms for approximating the LFP of such monotone systems of equations combine variants and generalizations of Newton’s method with other techniques, including spectral methods and linear programming. The algorithms are fairly easy to implement, but analyzing their worst-case running time is mathematically quite involved.
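As a one-dimensional caricature of the equation systems involved (an illustration of mine, not the algorithms from the papers): the system $x = \frac{1}{4} + \frac{3}{4}x^2$ is a monotone polynomial system whose least fixed point, $1/3$, is the extinction probability of a branching process in which an individual has no children with probability 1/4 and two children with probability 3/4. Newton's method started from 0 converges to the LFP:

    def newton_lfp(f, df, x0=0.0, iters=30):
        # Newton iteration for the least fixed point of x = f(x),
        # started from 0, which lies below the LFP in the monotone setting.
        x = x0
        for _ in range(iters):
            x = x + (f(x) - x) / (1.0 - df(x))  # solve the linearization at x
        return x

    # x = 1/4 + (3/4) x^2 has fixed points 1/3 and 1; the LFP is 1/3.
    print(newton_lfp(lambda x: 0.25 + 0.75 * x * x, lambda x: 1.5 * x))

In the multivariate min/max setting each step is substantially more delicate (this is where the variants, generalizations, and linear programming mentioned above enter), and bounding the number of iterations needed in the worst case is the mathematically involved part.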

Joint work with Alistair Stewart (U. Edinburgh) and Mihalis Yannakakis (Columbia U.).

Oded Goldreich – The Weizmann Institute of Science

Property Testing: Sublinear-Time Approximate Decisions

 

Talks

Angela Zottarel – Aarhus University

On the Connection between Leakage Tolerance and Adaptive Security

We revisit the notion of leakage-tolerance for interactive protocols as defined by Bitansky, Canetti and Halevi (TCC 2012), comparing it to adaptive security.

We show that any n-party protocol tolerates leakage from one party at the end of the protocol execution, if and only if the protocol has passive adaptive security against an adaptive corruption of one party at the end of the protocol execution. This shows that as soon as a little leakage is tolerated, leakage tolerance becomes as expensive as adaptive security.

We give evidence that for leakage from more than one party, equivalence with adaptive security is connected to the leakage being a joint function of the secret states of all the parties.

This is a joint work with Jesper Buus Nielsen and Daniele Venturi.

Aris Filos-Ratsikas – Aarhus University

Truthful Approximations to Range Voting

We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare maximization in this setting. With m being the number of alternatives, we exhibit a randomized truthful-in-expectation ordinal mechanism implementing an outcome whose expected social welfare is at least an Ω(m^(-3/4)) fraction of the social welfare of the socially optimal alternative. On the other hand, we show that for sufficiently many agents and any truthful-in-expectation ordinal mechanism, there is a valuation profile where the mechanism achieves at most an O(m^(-2/3)) fraction of the optimal social welfare in expectation. We get tighter bounds for the natural special case of m = 3, and in that case furthermore obtain separation results concerning the approximation ratios achievable by natural restricted classes of truthful-in-expectation mechanisms. In particular, we show that for m = 3 and a sufficiently large number of agents, the best mechanism that is ordinal as well as mixed-unilateral has an approximation ratio between 0.610 and 0.611, the best ordinal mechanism has an approximation ratio between 0.616 and 0.641, while the best mixed-unilateral mechanism has an approximation ratio bigger than 0.660. In particular, the best mixed-unilateral non-ordinal (i.e., cardinal) mechanism strictly outperforms all ordinal ones, even the non-mixed-unilateral ordinal ones.

Joint work with Peter Bro Miltersen.

 

Avishay Tal – Weizmann

Properties and Applications of Boolean Function Composition

This talk focuses on a simple but useful tool for separating complexity measures: Boolean function composition. For Boolean functions f and g defined on n and m input variables respectively, the composition of f and g is the function on nm variables obtained by evaluating f on n inputs, each of which is the value of g on a distinct set of m Boolean variables. Previous works have used this tool to achieve some of the best separations between complexity measures of Boolean functions such as sensitivity, block sensitivity, certificate complexity, degree, decision tree complexity and quantum query complexity. This was due to the multiplicative nature of these measures with respect to composition.
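Concretely, the operator can be phrased in a few lines (a sketch; the function names are mine):

    from itertools import product

    def compose(f, g, n, m):
        # f o g: a Boolean function on n*m variables that applies g to each
        # of n disjoint blocks of m variables and feeds the n results into f.
        def h(bits):  # bits: tuple of n*m values in {0, 1}
            inner = tuple(g(bits[i * m:(i + 1) * m]) for i in range(n))
            return f(inner)
        return h

    # Example: OR_2 composed with AND_2 is a 4-variable Boolean function.
    OR = lambda b: int(any(b))
    AND = lambda b: int(all(b))
    h = compose(OR, AND, 2, 2)
    print([h(b) for b in product((0, 1), repeat=4)])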

We use this multiplicative behavior to establish several new applications.

First, using composition, we give a counter-example to a conjecture made by Adam Kalai in 2004 which, if true, would have implied a better algorithm for the junta learning problem.

Second, we show an unusual behavior of block sensitivity with respect to composition that had not been noticed before. We define a new complexity measure called “fractional block sensitivity” which better explains this behavior.

Last, we show how composition can tighten relations between complexity measures, improving a result of Nisan and Szegedy on the relation between block sensitivity and degree as a polynomial.

 

Bryan Wilkinson – Aarhus University

Concurrent Range Reporting in Two-Dimensional Space

In the concurrent range reporting (CRR) problem, the input is L sets S_1, …, S_L of points in R^d with a total of N points. The goal is to preprocess the sets into a structure such that, given a query range r and an arbitrary subset Q of {1, …, L}, we can efficiently report all the points of S_i that lie in r for each i in Q. The problem was studied as early as 1986 by Chazelle and Guibas and has recently re-emerged when studying higher-dimensional complexity of orthogonal range reporting.

We focus on the one- and two-dimensional cases of the problem. We prove that in the pointer-machine model (as well as comparison models such as the real RAM model), answering queries requires Omega(|Q| log (L/|Q|) + log N + K) time in the worst case. In one dimension, we achieve this query time with a linear-space dynamic data structure that requires optimal O(log N) time to update. We also achieve this query time in the static case for dominance and halfspace queries in the plane. For three-sided ranges, we get close to within an inverse Ackermann (alpha(.)) factor: we answer queries in O(|Q| log (L/|Q|) alpha(L) + log N + K) time, improving the best previously known query times of O(|Q| log (N/|Q|) + K) and O(2^L L + log N + K).
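For calibration, the naive one-dimensional solution keeps each S_i in sorted order and binary-searches separately for every i in Q, paying a full logarithm per queried set; the structures above squeeze this to log(L/|Q|) per set plus a single additive log N. A sketch of the baseline (class name is mine):

    import bisect

    class NaiveCRR1D:
        # Naive 1-D concurrent range reporting: one sorted array per set,
        # one binary search per queried set, so O(|Q| log N + K) per query.
        def __init__(self, sets):  # sets: list of L lists of numbers
            self.sets = [sorted(s) for s in sets]

        def query(self, lo, hi, Q):  # report points of S_i in [lo, hi], i in Q
            out = []
            for i in Q:
                s = self.sets[i]
                lo_j = bisect.bisect_left(s, lo)
                hi_j = bisect.bisect_right(s, hi)
                out.extend((i, s[j]) for j in range(lo_j, hi_j))
            return out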

This is joint work with Peyman Afshani, Cheng Sheng, and Yufei Tao.

 

Chenggang Wu – Tsinghua University

Testing Surface Area

We consider the problem of estimating the surface area of an unknown n-dimensional set F given membership oracle access. In contrast to previous work, we do not assume that F is convex, and in fact make no assumptions at all about F. By necessity this means that we work in the property testing model; we seek an algorithm which, given parameters $A$ and $\eps$, satisfies:

(1) [Completeness] If the surface area of F is at most $A$, then the algorithm accepts (whp);

(2) [Soundness] If F is not $\eps$-close to some set $G$ with surface area at most $\kappa A$, then the algorithm rejects (whp).

We call $\kappa \geq 1$ the “approximation factor” of the testing algorithm.

The $n = 1$ case (in which surface area is the number of endpoints) was introduced by Kearns and Ron, who solved the problem with $\kappa = 1/\eps$ and $O(1/\eps)$ oracle queries. Later, Balcan et al. [BBBY12] solved it with $\kappa = 1$ and $O(1/\eps^4)$ queries.

We give the first result for higher dimensions n. Perhaps surprisingly, our algorithm completely evades the “curse of dimensionality”: for any n and any $\kappa > \frac{4}{\pi} \approx 1.27$, we give a test that uses $O(1/\eps)$ queries. For small $n$ we have improved bounds. For n = 1 we can achieve $\kappa = 1$ with $O(1/\eps^{3.5})$ queries (slightly improving [BBBY12]), or any $\kappa > 1$ with $O(1/\eps)$ queries (improving [KR98]). For n = 2, 3 we obtain $\kappa \approx 1.08, 1.125$ respectively, with $O(1/\eps)$ queries. Getting an arbitrary $\kappa > 1$ for n > 1 remains an open problem.
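The dimension-independence has a Buffon-needle flavor: the probability that a short random segment has one endpoint inside F and one outside scales with the needle length times the surface area, with no dependence on n. The sketch below is a caricature of a tester built on that heuristic, not the algorithm or constants from the paper; the function names, the torus wrap-around, and the threshold constant C are placeholders of mine:

    import math
    import random

    def needle_test(member, n, A, ell=1e-3, trials=100000, C=1.0):
        # Throw `trials` random needles of length `ell` in [0,1)^n and count
        # how often the two endpoints disagree on membership in F; accept iff
        # the crossing rate is consistent with surface area at most ~ A.
        crossings = 0
        for _ in range(trials):
            x = [random.random() for _ in range(n)]
            g = [random.gauss(0, 1) for _ in range(n)]
            norm = math.sqrt(sum(t * t for t in g))
            y = [(xi + ell * gi / norm) % 1.0 for xi, gi in zip(x, g)]
            crossings += member(x) != member(y)
        return crossings / trials <= C * ell * A

    # Example: a halfspace in 10 dimensions, boundary area 1 inside the cube.
    print(needle_test(lambda x: x[0] < 0.5, n=10, A=1.0))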

Finally, motivated by the learning results from [KOS08], we extend our techniques to obtain a similar tester for Gaussian surface area for any n, with query complexity $O(1/\eps)$ and any approximation factor $\kappa> \frac{4}{\pi} \approx 1.27$.

Joint work with Pravesh Kothari, Amir Nayyeri and Ryan O’Donnell.

 

Christoph Berkholz – Aachen

The Complexity of K-Consistency and Bounded Width Resolution

One approach to attack NP-hard satisfiability problems such as 3-SAT or the Constraint Satisfaction Problem (CSP) is to design algorithms that run in polynomial time but do not always succeed. A prominent family of such algorithms are the k-consistency tests that try to detect local inconsistencies in unsatisfiable CSP instances. Applied to the 3-SAT problem (that can be seen as a special CSP), the k-consistency test is equivalent to the search for resolution refutations of bounded width.

The k-consistency test and the search for width-k resolution refutations can be implemented in time n^O(k), hence are in polynomial time for every fixed k. We show that this cannot be done faster: Deciding whether a given 3-CNF formula has a resolution refutation of width k requires time n^{ck} for an absolute constant c>0. This unconditional complexity lower bound is ultimately obtained from the deterministic time hierarchy theorem using a reduction via known pebble games. In addition, the problem is EXPTIME-complete if k is part of the input.
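The n^{O(k)} upper bound is witnessed by a straightforward saturation procedure: derive all resolvents of width at most k until nothing new appears, and report a refutation iff the empty clause shows up. Since there are only n^{O(k)} clauses of width at most k, the loop below runs in polynomial time for fixed k; the talk's lower bound says this exponent cannot be substantially improved. (A sketch; the representation is mine.)

    from itertools import combinations

    def width_k_refutable(cnf, k):
        # cnf: iterable of clauses; a clause is a collection of nonzero ints,
        # with -v denoting the negation of variable v. For a 3-CNF input and
        # k >= 3, every initial clause already fits the width bound.
        derived = {frozenset(c) for c in cnf if len(set(c)) <= k}
        changed = True
        while changed:
            changed = False
            for c1, c2 in combinations(list(derived), 2):
                for lit in c1:
                    if -lit in c2:
                        res = (c1 - {lit}) | (c2 - {-lit})
                        if not res:
                            return True  # empty clause: refutation found
                        if len(res) <= k and res not in derived:
                            derived.add(res)
                            changed = True
        return False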

 

Darrel Hoy – Northwestern

Simple Auctions for Risk-Averse Agents

We study simple and approximately optimal auctions for agents with a particular form of risk-averse preferences. We show that, for symmetric agents, the optimal revenue (given a prior distribution over the agent preferences) can be approximated by the first-price auction (which is prior independent), and, for asymmetric agents, the optimal revenue can be approximated by an auction with simple form. These results are based on two technical methods. The first is for upper-bounding the revenue from a risk-averse agent. The second gives a payment identity for mechanisms with pay-your-bid semantics.

 

Elad Haramaty – Technion

Absolutely Sound Testing of Reed Muller Codes (and Lifted Codes)

We consider the problem of testing whether a given function f is close to an n-variate degree-d polynomial over the finite field of q elements. The natural, low-query test for this property is to pick the smallest dimension t = t(q,d) ≈ d/q such that every function of degree greater than d reveals its high degree on some t-dimensional affine subspace, and to test that f, when restricted to a random t-dimensional affine subspace, is a polynomial of degree at most d on this subspace. Such a test makes only q^t queries, independent of n. Previous works, by Alon et al., Kaufman and Ron, and Jutla et al., showed that this natural test rejects functions that are Ω(1)-far from degree-d polynomials with probability at least Ω(q^{-t}).

In this work we give an optimal analysis of this test, showing that the natural test rejects functions that are Ω(1)-far from degree-d polynomials with probability Ω(1). Moreover, we extend this result to general lifted codes.

Joint work with Noga Ron-Zewi, Amir Shpilka and Madhu Sudan.

Huy L. Nguyen – Princeton

Approximate Near Neighbor Search: Beyond Locality Sensitive Hashing

The approximate near neighbor search problem is defined as follows: given a set P of n points in a d-dimensional space, build a data structure that, given a query point q for which there exists a point within distance r, reports a point within distance cr from q. Here c is the approximation factor of the algorithm.

We present a new data structure for the c-approximate near neighbor problem (ANN) in the Euclidean space. For n points in R^d, our algorithm achieves O(dn^ρ) query time and O(n^{1+ρ} + nd) space, where ρ ≤ 7/(8c^2) + O(1/c^3) + o(1). This is the first improvement over the result by Andoni and Indyk (FOCS 2006) and the first data structure that bypasses a locality-sensitive hashing lower bound proved by O’Donnell, Wu and Zhou (ITCS 2011). By a standard reduction we obtain a data structure for the Hamming space and ℓ1 norm with ρ ≤ 7/(8c) + O(1/c^{3/2}) + o(1), which is the first improvement over the result of Indyk and Motwani (STOC 1998).
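For context, the classical locality-sensitive hashing baseline for the Hamming cube (bit sampling, going back to Indyk and Motwani) achieves exponent ρ = 1/c, and is the kind of scheme such results improve upon. A minimal sketch of that baseline, with illustrative parameter choices of mine, not the new data structure from the talk:

    import random
    from collections import defaultdict

    def hamming(p, q):
        return sum(a != b for a, b in zip(p, q))

    class BitSamplingLSH:
        # Each of `tables` hash tables keys points by k sampled coordinates;
        # an r-near pair collides in some table with good probability, while
        # far points rarely do, giving exponent rho = 1/c for Hamming space.
        def __init__(self, points, d, k, tables):
            self.samples = [random.sample(range(d), k) for _ in range(tables)]
            self.buckets = [defaultdict(list) for _ in range(tables)]
            for p in points:
                for s, bkt in zip(self.samples, self.buckets):
                    bkt[tuple(p[i] for i in s)].append(p)

        def query(self, q, r, c):
            for s, bkt in zip(self.samples, self.buckets):
                for p in bkt[tuple(q[i] for i in s)]:
                    if hamming(p, q) <= c * r:  # any cr-near point suffices
                        return p
            return None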

The talk is based on joint work with Alexandr Andoni, Piotr Indyk, and Ilya Razenshteyn.

 

Inbal Talgam-Cohen – Stanford

Optimal and Near-Optimal Mechanism Design with Interdependent Values

We study optimal and approximately-optimal mechanism design questions in the interdependent values model, which generalizes the standard setting of independent and private values. We focus our attention on ex post incentive compatible and individually rational mechanisms, and develop an analog of Myerson’s optimal auction theory that applies to many interdependent settings of interest. We demonstrate two applications for specific interdependent settings: First, a parallel result to the well-known optimality of the second-price auction with reserve for i.i.d. bidders, where the English auction replaces the second-price one. Second, we identify good prior-independent auctions — auctions with near-optimal expected revenue across a wide range of priors — for certain interdependent value settings.

Joint work with Tim Roughgarden.

Jesper A. S. Nielsen – Aarhus University

Expected Linear Time Sorting for Large Integers

Sorting $n$ integers in the word-RAM model is a fundamental problem, and a long-standing open question is whether we can sort in linear time when the word size is $\omega(\log n)$. In this talk we give an algorithm for sorting integers in expected linear time when the word size is $\Omega(\log^2 n \log \log n)$. The algorithm uses a new packed sorting algorithm with expected running time $O((n/b)(\log n + \log^2 b))$, where $b$ is the number of integers packed in a word.
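The "packed" model can be illustrated by the classical word-parallel comparison primitive: with a spare sentinel bit per field, a single subtraction compares b integers at once. This is the kind of building block packed sorting rests on (a sketch of mine, not the talk's algorithm):

    def pack(vals, w):
        # Store each value in a (w+1)-bit field whose top (sentinel) bit is 0.
        word = 0
        for i, v in enumerate(vals):
            word |= v << (i * (w + 1))
        return word

    def packed_geq(X, Y, b, w):
        # Returns a word whose i-th sentinel bit is 1 iff field i of X is
        # >= field i of Y: setting the sentinels of X and subtracting Y
        # leaves each sentinel intact exactly when no borrow occurs.
        H = sum(1 << (i * (w + 1) + w) for i in range(b))
        return ((X | H) - Y) & H

    X, Y = pack([5, 2, 7], 3), pack([3, 4, 7], 3)
    print(bin(packed_geq(X, Y, b=3, w=3)))  # sentinels: 5>=3, not 2>=4, 7>=7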

 

Magnus Find – SDU

On Cancellation-Free Linear Circuits

Cancellation-free circuits are a subclass of Boolean linear circuits (XOR circuits) with the restriction that the circuit cannot use the identity $a \oplus a = 0$.
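For a small worked example of what the restriction forbids: a general linear circuit may compute $g = x_1 \oplus x_2 \oplus x_3$ with two gates and then reuse it to obtain $g \oplus x_2 = x_1 \oplus x_3$, the two occurrences of $x_2$ cancelling; in a cancellation-free circuit, every variable wired (directly or through other gates) into a gate must survive in that gate's output, so this kind of reuse is unavailable.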

So for any matrix, the smallest corresponding cancellation-free circuit is at least as large as the smallest linear circuit; however, it is not a priori clear how much larger it needs to be in the worst case.

In the talk I will introduce cancellation-free circuits, argue why this is a subclass worth studying, and show an almost optimal separation of $\Theta(n/\log^2 n)$. The proof uses communication complexity in a slightly unexpected way.

Joint work with Joan Boyar.

 

Marvin Künnemann – MPI

A Quantization Framework for Smoothed Analysis on Euclidean Optimization Problems

In this talk, we consider the smoothed analysis of Euclidean optimization problems. Here, input points are sampled according to density functions that are bounded by a sufficiently small smoothness parameter \phi. For such smoothed inputs, we provide a general and systematic approach that allows us to design linear-time approximation algorithms whose output is asymptotically optimal, both in expectation and with high probability.

Applications of this framework include maximum matching, maximum TSP, and the classical problems of k-means clustering and bin packing. Apart from generalizing corresponding average-case analyses, the framework recovers a polynomial-time probable approximation scheme on smoothed instances for multidimensional bin packing (Karger and Onak, SODA 2007). Both techniques and applications of our rounding-based approach are orthogonal to the only other framework for smoothed analysis on Euclidean problems we are aware of (Bläser et al., Algorithmica 2012).

This is joint work with Radu Curticapean.

 

Matt Weinberg – MIT

Understanding Incentives: Reducing Mechanism- to Algorithm-Design

We provide a computationally efficient black-box reduction from mechanism design to algorithm design in very general settings. Specifically, we give an approximation-preserving reduction from truthfully maximizing any objective under arbitrary feasibility constraints with arbitrary bidder types to (not necessarily truthfully) maximizing the same objective plus virtual welfare (under the same feasibility constraints). Our reduction is based on a fundamentally new approach: we describe a mechanism’s behavior indirectly only in terms of the expected value it awards bidders for certain behavior, and never directly access the allocation rule at all. Applying our new approach to revenue, we exhibit settings where our reduction holds both ways. That is, we also provide an approximation-sensitive reduction from (non-truthfully) maximizing virtual welfare to (truthfully) maximizing revenue, and therefore the two problems are computationally equivalent. With this equivalence in hand, we show that both problems are NP-hard to approximate within any polynomial factor, even for a single monotone submodular bidder.

We further demonstrate the applicability of our reduction by providing a truthful mechanism maximizing fractional max-min fairness.

 

Navid Talebanfard – Aarhus University

Exponential Lower Bounds for the PPSZ k-SAT Algorithm

In 1998, Paturi, Pudlak, Saks, and Zane presented PPSZ, an elegant randomized algorithm for k-SAT. Fourteen years on, this algorithm is still the fastest known worst-case algorithm. They proved that its expected running time on k-CNF formulas with n variables is at most 2^{(1 – epsilon_k)n}, where epsilon_k\in Omega(1/k). So far, no exponential lower bounds at all have been known.
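For intuition, here is a sketch of the simpler ancestor PPZ (Paturi, Pudlak and Zane): process the variables in random order, setting each forced variable accordingly and flipping a coin otherwise. PPSZ strengthens the "forced" rule, also fixing a variable when its value is implied by a small sub-formula, but the random-order skeleton is the same. (Representation and names are mine.)

    import random

    def ppz_try(clauses, n):
        # One round of PPZ on a CNF over variables 1..n (literals are signed
        # ints). Returns a satisfying assignment or None; repeating for
        # roughly 2^((1 - eps)n) rounds yields the algorithm's guarantee.
        clauses = [set(c) for c in clauses]
        assign = {}
        for v in random.sample(range(1, n + 1), n):
            forced = None
            for c in clauses:
                if c == {v}:
                    forced = True
                elif c == {-v}:
                    forced = False
            val = forced if forced is not None else (random.random() < 0.5)
            assign[v] = val
            lit = v if val else -v
            clauses = [c - {-lit} for c in clauses if lit not in c]
            if any(not c for c in clauses):
                return None  # derived the empty clause; retry
        return assign

    # (x1 or x2) and (not x1 or x2): repeats may be needed, but most runs
    # of ppz_try([[1, 2], [-1, 2]], 2) return an assignment with x2 = True.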

In this talk I outline the construction of hard instances for PPSZ on which the expected running time is at least 2^{(1 – epsilon_k)n}, for epsilon_k \in O(log^2 k / k).

Joint work with Shiteng Chen, Dominik Scheder and Bangsheng Tang.

 

Nir Bitansky – Tel Aviv University

Getting Inside the Adversary’s Head: The Non-Black-Box Way

The concept of “knowledge” in a computationally bounded world is quite elusive: Is knowing an encryption of a secret x the same as knowing x itself? Is knowing how to prove an NP statement the same as knowing a witness? Intuitively, the answer to these questions depends on our ability to efficiently extract such knowledge. In this talk, we will examine the question of “knowledge extraction” from a cryptographic perspective. Indeed, extracting useful knowledge from the adversary is often the primary challenge in the security analysis of cryptographic schemes and protocols.

A main question in the context of knowledge extraction is “how is *the code* of the adversary used in the security analysis?” The traditional answer would be “it is only run as a black-box”. However, it turns out that black-box techniques are simply not enough for certain (rather desirable) cryptographic goals. But can non-black-box knowledge extraction really achieve more? What can one do with the adversary’s code except run it?

We will sneak a quick peek into the story of non-black-box techniques in cryptography, overview some of the flavors of non-black-box knowledge extraction, recent developments and applications, and main open challenges.

 

Pavel Hubacek – Aarhus University

Rational Arguments: Non-interactive Delegation with Sublinear Verification

Rational proofs, recently introduced by Azar and Micali (STOC 2012), are a variant of interactive proofs in which the prover is neither honest nor malicious, but rather rational. The advantage of rational proofs over their classical counterparts is that they allow for extremely low communication and verification time.

We consider rational arguments, in which the prover is additionally restricted to be computationally bounded, and investigate their applicability to search problems computable by log-space uniform threshold circuits. The goal is to obtain a non-interactive delegation scheme with sublinear verification time. While this would provide a weaker (yet arguably meaningful) guarantee of soundness, such rational protocols would compare favourably with each of the known delegation schemes in at least one aspect: simplicity, reliance on standard complexity hardness assumptions, a correctness guarantee for all instances, or the avoidance of pre-processing.

This is a joint work with Siyao Guo, Alon Rosen, and Margarita Vald.

 

Pravesh Kothari – Austin

Structural Properties of Submodular Functions with Applications to Representation, Learning and Privacy

Submodular functions have been extensively studied in Combinatorial Optimization, Machine Learning and Algorithmic Game Theory and generalize well known functions such as Graph Cuts, Matroid-Rank and Set-Cover. The problem of learning submodular functions has been of considerable recent interest and has uncovered several interesting properties of these functions when the inputs come from fixed distributions.

In this talk, I will present useful structural properties of submodular functions under product distributions, based on their Fourier spectrum. I will also present an approximate representation of this class by shallow decision trees. Finally, I will describe applications of these structural results to problems in learning and privacy.

This is based on joint work with Mahdi Cheraghchi, Adam Klivans, Homin Lee, Vitaly Feldman and Jan Vondrak.

 

Reut Levi – Tel Aviv University

A Quasi-Polynomial Time Partition Oracle for Graphs with an Excluded Minor

Motivated by the problem of testing planarity and related properties, we study the problem of designing efficient partition oracles.

A partition oracle is a procedure that, given access to the incidence-lists representation of a bounded-degree graph $G = (V,E)$ and a parameter $\eps$, when queried on a vertex $v \in V$, returns the part (subset of vertices) to which $v$ belongs in a partition of all graph vertices. The partition should be such that all parts are small, each part is connected, and if the graph has certain properties, the total number of edges between parts is at most $\eps |V|$.
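Spelled out, all of the oracle's answers must be consistent with one global partition satisfying three checkable conditions. The checker below makes that contract concrete (names are mine); the whole point of a partition oracle is to answer per-vertex queries consistently with such a partition without ever materializing it:

    def is_valid_partition(adj, parts, eps, max_part_size):
        # adj: dict vertex -> set of neighbors (bounded-degree graph);
        # parts: disjoint nonempty vertex sets covering all vertices.
        # Checks: parts are small, each part is connected, and at most
        # eps * |V| edges cross between different parts.
        n = sum(len(p) for p in parts)
        where = {v: i for i, p in enumerate(parts) for v in p}
        for p in parts:
            if len(p) > max_part_size:
                return False
            seen, stack = set(), [next(iter(p))]  # DFS for connectivity
            while stack:
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack.extend(u for u in adj[v] if u in p)
            if seen != set(p):
                return False
        crossing = sum(where[u] != where[v] for u in adj for v in adj[u]) // 2
        return crossing <= eps * n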

In this work we give a partition oracle for graphs with excluded minors whose query complexity is quasi-polynomial in $1/\eps$, thus improving on the result of Hassidim et al. ({\em Proceedings of FOCS 2009}) who gave a partition oracle with query complexity exponential in $1/\eps$. This improvement implies corresponding improvements in the complexity of testing planarity and other properties that are characterized by excluded minors as well as sublinear-time approximation algorithms that work under the promise that the graph has an excluded minor.

 

Sangxia Huang – Stockholm

Improved Hardness of Approximating Chromatic Number

A vertex coloring of a graph G is an assignment of colors to its vertices such that no two adjacent vertices receive the same color. The minimum number of colors needed for such a coloring is called the chromatic number of G. We mainly consider the approximation of the chromatic number, that is, given a K-colorable graph, we would like to color it with as few colors as possible in polynomial time. As a classical combinatorial optimization problem, vertex coloring is closely related to other problems such as finding maximum independent sets, Max CSPs, and probabilistically checkable proofs (PCPs) with certain special properties. In addition to being an important theoretical challenge, graph coloring also has a number of applications such as scheduling and register allocation.

Building on a recent breakthrough in the construction of efficient PCPs, we show that it is NP-hard to distinguish a K-colorable graph from a graph that has chromatic number at least 2^{Omega(K^{1/3})} for sufficiently large K. This improves the previous best known lower bound of 2^{Omega((log K)^2)}.

 

Sebastian Tavenas – Paris

Improved Bounds for Reduction to Depth $4$ and Depth $3$

Koiran showed that if an $n$-variate polynomial of degree $d$ (with $d=n^{O(1)}$) is computed by a circuit of size $s$, then it is also computed by a homogeneous circuit of depth four and of size $2^{O(\sqrt{d}\log(d)\log(s))}$. Using this result, Gupta, Kamath, Kayal and Saptharishi gave an $\exp\left(O\left(\sqrt{d\log(d)\log(n)\log(s)}\right)\right)$ upper bound for the size of the smallest depth three circuit computing an $n$-variate polynomial of degree $d=n^{O(1)}$ given by a circuit of size $s$.

We will see in this talk how to improve Koiran’s bound. Indeed, we show that if we reduce an arithmetic circuit to depth four, then the size becomes $\exp\left(O\left(\sqrt{d\log(ds)\log(n)}\right)\right)$. Mimicking Gupta, Kamath, Kayal and Saptharishi’s proof, it also implies the same upper bound for depth three circuits.

 

Sergey Gorbunov – Toronto

Attribute-based Encryption for Circuits

In a predicate encryption scheme, a ciphertext is associated with an L-bit public index PIND (in addition to a message), and a secret key is associated with a predicate P. The secret key can decrypt the ciphertext iff P(PIND) = 1. Moreover, the scheme should be secure against collusions of users. That is, given secret keys for polynomially many predicates, an adversary learns nothing about the message if none of the secret keys can individually decrypt the ciphertext.
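Purely at the level of functionality (security aside), the syntax is easy to pin down; the toy sketch below conveys only the interface and the correctness condition P(PIND) = 1, hides nothing, and is emphatically not a construction:

    from dataclasses import dataclass
    from typing import Callable, Optional, Tuple

    Index = Tuple[int, ...]              # an L-bit public index PIND
    Predicate = Callable[[Index], bool]  # P: {0,1}^L -> {0,1}

    @dataclass
    class Ciphertext:
        pind: Index   # public by design
        payload: str  # stands in for the (actually encrypted) message

    def encrypt(pind: Index, message: str) -> Ciphertext:
        return Ciphertext(pind, message)  # toy: no hiding whatsoever

    def keygen(P: Predicate) -> Predicate:
        return P                          # toy: the secret key "is" P

    def decrypt(sk: Predicate, ct: Ciphertext) -> Optional[str]:
        # Correctness: decryption succeeds exactly when P(PIND) = 1.
        return ct.payload if sk(ct.pind) else None

    majority = keygen(lambda x: sum(x) > len(x) // 2)
    print(decrypt(majority, encrypt((1, 1, 0), "msg")))  # 'msg'
    print(decrypt(majority, encrypt((0, 1, 0), "msg")))  # None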

We present predicate encryption schemes for circuits of arbitrary polynomial size, where the public parameters and ciphertext grow linearly with the depth of the circuit. Our construction is secure under the standard learning with errors (LWE) assumption. Previous constructions of predicate encryption were for Boolean formulas, captured by the complexity class NC1. In the course of our construction, we present a new framework for constructing predicate encryption schemes. Our techniques lead to the first construction of (weakly) reusable garbled circuits, a problem that was open for 25 years.

Joint work with Vinod Vaikuntanathan and Hoeteck Wee.

 

Simina Branzei – Aarhus University

Equilibria of cake cutting protocols

Classic cake cutting protocols, which fairly allocate a divisible good among agents with heterogeneous preferences, are susceptible to manipulation. Do their strategic outcomes still guarantee fairness? To answer this question we adopt a novel algorithmic approach, proposing a concrete computational model and reasoning about the game-theoretic properties of algorithms that operate in this model. Specifically, we show that each protocol in the class of generalized cut and choose (GCC) protocols, which includes the most important discrete cake cutting protocols, is guaranteed to have an epsilon-equilibrium for all epsilon > 0. Moreover, we observe that the (approximate) equilibria of proportional protocols, which guarantee each of the n agents a 1/n-fraction of the cake, must be (approximately) proportional, and we design a GCC protocol where all equilibrium outcomes satisfy the stronger fairness notion of envy-freeness. Finally, we show that under an obliviousness restriction (which still admits the computation of approximately envy-free allocations), GCC protocols are guaranteed to have exact Nash equilibria.
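For readers who have not seen the simplest member of the GCC class, here is two-agent cut-and-choose with numerically integrated valuation densities (the discretization and names are mine). Under truthful play each agent receives a piece worth at least 1/2 to her; the talk is about what fairness survives when the agents act strategically:

    def cut_and_choose(v1, v2, grid=10**5):
        # v1, v2: valuation densities on [0, 1], each integrating to 1.
        xs = [(i + 0.5) / grid for i in range(grid)]
        acc, cut = 0.0, 1.0
        for x in xs:                 # agent 1 cuts at her value midpoint
            acc += v1(x) / grid
            if acc >= 0.5:
                cut = x
                break
        left2 = sum(v2(x) / grid for x in xs if x <= cut)
        pick = "left" if left2 >= 0.5 else "right"
        return cut, pick             # agent 2 takes `pick`, agent 1 the rest

    uniform = lambda x: 1.0
    rightward = lambda x: 2 * x      # agent 2's value sits toward the right
    print(cut_and_choose(uniform, rightward))  # cut ~ 0.5, agent 2 picks right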

Virginie Lerays – Paris

Lower bounds on information complexity via zero-communication protocols

In the communication complexity setting, we have two players: Alice, who knows x, and Bob, who knows y. They want to compute together the value of f(x,y) using as little communication as possible.

In this complexity model we count the number of bits transmitted during the conversation, but one may ask whether each bit carries the same amount of information. This is why another measure, called information complexity, has been defined to study the amount of information carried in the conversation.
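For orientation, with inputs $(X, Y) \sim \mu$ and protocol transcript $\Pi$, the (internal) information complexity of a protocol is commonly defined as

    $\mathrm{IC}_\mu(\pi) = I(X ; \Pi \mid Y) + I(Y ; \Pi \mid X)$,

the amount each player learns about the other's input from the conversation. It never exceeds the number of bits exchanged, which is what makes it a tool for proving communication lower bounds.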

Information complexity is a powerful tool for studying communication complexity, and one important open question is whether these quantities are in fact equivalent. We show here that almost all known lower bound methods for communication complexity are also lower bounds for information complexity. To do that, we define a relaxed version of the partition bound technique of Jain and Klauck, which subsumes all rectangle and norm-based techniques, and we show that it lower bounds the information complexity.

Our result uses a connection between rectangle techniques and zero-communication protocols where players can abort (Laplante, Lerays, Roland 2012). More precisely, the maximum achievable probability that the protocol does not abort, called its efficiency, gives a lower bound on communication complexity which corresponds to the relaxed partition bound. We then use compression techniques to relate information complexity to efficiency.

Based on joint works with Iordanis Kerenidis, Sophie Laplante, Jérémie Roland and David Xiao.


Zeyu Guo – Caltech

Randomness-Efficient Curve Samplers

Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.
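The restriction property is easiest to see for degree-1 curves (lines): composing a polynomial of total degree d with the affine map t -> a + tb yields a univariate polynomial of degree at most d in t. A small sanity check over a prime field, using the fact that degree <= d is equivalent to the (d+1)-st finite difference vanishing (valid here since d + 1 < p); the actual samplers use higher-degree curves and correlated randomness, and names and parameters are mine:

    import random

    p, m, d = 13, 3, 2
    def poly(x):  # a fixed trivariate polynomial of total degree 2
        return (x[0] * x[1] + 3 * x[2] * x[2] + x[0] + 5) % p

    a = [random.randrange(p) for _ in range(m)]
    b = [random.randrange(p) for _ in range(m)]
    vals = [poly([(ai + t * bi) % p for ai, bi in zip(a, b)]) for t in range(p)]

    def diff(seq):  # finite difference over F_p
        return [(y - x) % p for x, y in zip(seq, seq[1:])]

    for _ in range(d + 1):
        vals = diff(vals)
    print(all(v == 0 for v in vals))  # True: the restriction has degree <= 2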

The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using $O(\log N+\log(1/\delta))$ random bits exist, where $N$ is the domain size and $\delta$ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in Ta-Shma and Umans (FOCS 2006), where they obtained curve samplers with near-optimal randomness complexity.

We present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor), sampling curves of degree $(m\log_q(1/\delta))^{O(1)}$ in $\mathbb{F}_q^m$. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.


Zihe Wang – Tsinghua University

Optimal Mechanisms with Simple Menus

We consider optimal mechanism design in the one-buyer, two-item setting where the buyer’s valuations for the two items are independent and additive. Even for this simple setting, the optimal mechanism is unknown for general valuation distributions. Let the two items’ density functions be f(x) and g(y). The mechanism is represented as a menu of allocation-payment pairs. We prove that:

1. When xf'(x)/f(x) + yg'(y)/g(y) >= -3, the optimal menu can be sorted.

2. When xf'(x)/f(x) + yg'(y)/g(y) <= -3, the optimal menu is simple.