Talks and Abstracts

Keynote Talks

Ryan Williams – Stanford University

Algorithms for Circuits and Circuits for Algorithms

Recently, strong connections have been developed between the existence of non-trivial circuit-analysis algorithms and proofs of circuit size lower bounds. Algorithms that can check whether a given circuit from some class satisfies a simple property (such as computing the all-zeroes function) can be used to prove limitations on that circuit class, provided the algorithms run slightly faster than exhaustive search. More precisely, a non-trivial SAT algorithm for a circuit class can be used to find functions computable in nondeterministic exponential time that cannot be computed by that circuit class. Informally, this connection can be interpreted as saying “good algorithms for circuits imply there are no good circuits for some algorithms.” This talk will explore the ideas behind this theme, review what we have learned so far, and survey what further progress can be made.

Mikkel Thorup – University of Copenhagen
The Power of Tabulation Hashing

Randomized algorithms are often enjoyed for their simplicity, but the hash functions used to yield the desired theoretical guarantees are often neither simple nor practical. Here we show that the simplest possible tabulation hashing provides unexpectedly strong guarantees. The scheme itself dates back to Carter and Wegman (STOC’77). Keys are viewed as consisting of c characters. We initialize c tables T1,…,Tc mapping characters to random hash codes. A key x=(x1,…,xc) is hashed to T1[x1]⊕⋯⊕Tc[xc], where ⊕ denotes xor. While this scheme is not even 4-independent, we show that it provides many of the guarantees that are normally obtained via higher independence, e.g., Chernoff-type concentration, min-wise hashing for estimating set intersection, and cuckoo hashing. We shall also discuss a twist to simple tabulation that leads to extremely robust performance for linear probing with small buffers.
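For concreteness, here is a minimal Python sketch of the scheme as described above; the specific parameters (32-bit keys split into four 8-bit characters, 32-bit hash codes) are illustrative choices, not taken from the talk.

```python
import random

# Simple tabulation: a key is split into c characters and hashed to the
# xor of c independent random table lookups.
C = 4            # number of characters per key (assumed 32-bit keys)
BITS = 8         # bits per character
MASK = (1 << BITS) - 1

tables = [[random.getrandbits(32) for _ in range(1 << BITS)] for _ in range(C)]

def tab_hash(x):
    """Hash a 32-bit key: h(x) = T1[x1] xor ... xor Tc[xc]."""
    h = 0
    for i in range(C):
        h ^= tables[i][(x >> (BITS * i)) & MASK]
    return h

print(hex(tab_hash(0xDEADBEEF)))
```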

Thore Husfeldt – IT University of Copenhagen
Exponential time algorithms


Yuval Ishai – Technion

The Complexity of Cryptography

How efficient can cryptography be? I will survey some of what we know, what we want to know, and how this question is related to problems from other domains.

Harry Buhrman – CWI

Position-based cryptography

Position-based cryptography uses the geographic position of a party as its sole credential; normally, digital keys or biometric features are used. A central building block in position-based cryptography is position-verification, where the goal is to prove to a set of verifiers that one is at a certain geographical location. Protocols typically assume that messages cannot travel faster than the speed of light. By responding to a verifier in a timely manner, one can guarantee that one is within a certain distance of that verifier. Quite recently it was shown that position-verification protocols based only on this relativistic principle can be broken by two attackers who simulate being at the claimed position while physically residing elsewhere in space. Because of the no-cloning property of quantum information (qubits), it was believed that with the use of quantum messages one could devise protocols resistant to such collaborative attacks. Several schemes were proposed that later turned out to be insecure. Finally, it was shown that no unconditionally secure scheme is possible in the quantum case either. We will review the field of position-based quantum cryptography and highlight some of the research currently going on to develop protocols that are secure in practice, under reasonable assumptions on the capabilities of the attackers.

Talks

Ilya Volkovich – Technion

Black-Box Identity Testing of Depth-4 Multilinear Circuits

A central problem in algebraic complexity theory and algorithm design is Polynomial Identity Testing (PIT): given an arithmetic circuit $C$ over a field $\F$, with input variables $x_1, x_2, … , x_n$, determine whether $C$ computes the identically zero polynomial. Numerous applications and connections to other algorithmic and number-theoretic problems further emphasize the significance of PIT; examples include algorithms for finding perfect matchings in graphs \cite{Lovasz79,MVV87}, primality testing \cite{AKS04}, and many more. We study PIT for multilinear $\Sigma \Pi \Sigma \Pi (k)$ circuits, i.e. multilinear depth-4 circuits with fan-in $k$ at the top $+$ gate, and give the first polynomial-time deterministic PIT algorithm for such circuits. Our results also hold in the black-box setting. The importance of this model arises from \cite{AgrawalVinay08}, where it was shown that derandomizing black-box identity testing for general depth-4 circuits implies a derandomization of PIT for general arithmetic circuits. We obtain our results by showing a strong structural result for multilinear $\Sigma \Pi \Sigma \Pi (k)$ circuits that compute the zero polynomial.
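As background, the randomized baseline that such derandomization results target is the classical Schwartz-Zippel test: evaluate the circuit at a random point of a large field. A minimal sketch, assuming the circuit is given as a black-box evaluation function (the interface below is hypothetical):

```python
import random

def schwartz_zippel_pit(poly, n, prime=2**61 - 1, trials=20):
    """Randomized identity test. `poly` is a black box taking n field
    elements and returning the circuit's value mod `prime`. A nonzero
    polynomial of total degree d vanishes at a uniformly random point
    with probability at most d/prime, so surviving `trials` independent
    rounds makes zero-ness overwhelmingly likely."""
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(n)]
        if poly(point) % prime != 0:
            return False      # witness found: definitely not the zero polynomial
    return True               # probably the identically zero polynomial

# x1*x2 - x2*x1 is identically zero; x1*x2 + 1 is not.
print(schwartz_zippel_pit(lambda x: x[0] * x[1] - x[1] * x[0], n=2))   # True
print(schwartz_zippel_pit(lambda x: x[0] * x[1] + 1, n=2))             # False
```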

Ramprasad Saptharishi – IIT, Kanpur

Jacobian hits all circuits

We present a unified approach to polynomial identity testing that generalizes all known black-box PITs for constant-depth formulae over characteristic zero fields. In particular, we give generalizations of the bounded fan-in depth-3 PIT of [Kayal-Saxena], and generalizations of PITs for read-k bounded-depth formulae [Saraf-Volkovich, Anderson-van Melkebeek-Volkovich]. The approach for all of these results is via a powerful tool called the Jacobian.

Bangsheng Tang – Tsinghua University

Width-parameterized SAT: Time-Space Tradeoffs

Width parameterizations of SAT, such as tree-width and path-width, enable the study of computationally more tractable and practical SAT instances. We give two simple algorithms: one that runs simultaneously in time-space $(O^*(2^{2tw(\phi)}), O^*(2^{tw(\phi)}))$, and one that runs in time-space $(O^*(3^{tw(\phi)\log{|\phi|}}),|\phi|^{O(1)})$, where $tw(\phi)$ is the tree-width of a formula $\phi$ with $|\phi|$ many clauses and variables. This partially answers a question of Alekhnovich and Razborov, who gave algorithms exponential in both time and space and asked whether the space can be made smaller. We conjecture that every algorithm for this problem that runs in time $2^{tw(\phi)\cdot o(\log{|\phi|})}$ necessarily blows up the space to exponential in $tw(\phi)$. We introduce a novel way to combine the two simple algorithms that allows us to trade \emph{constant} factors in the exponents between running time and space. Our technique gives rise to a family of algorithms controlled by two parameters. By fixing one parameter we obtain an algorithm that runs in time-space $(O^*(3^{1.441(1-\epsilon)tw(\phi)\log{|\phi|}}), O^*(2^{2\epsilon tw(\phi)}))$, for every $0<\epsilon<1$. We systematically study the limitations of this technique and show that these algorithmic results are the best achievable using it. We also study further the computational complexity of width parameterizations of SAT: we prove non-sparsification lower bounds for formulas of path-width $\omega(\log|\phi|)$, and a separation between the complexity of path-width- and tree-width-parameterized SAT modulo plausible complexity assumptions.

Timon Hertli – ETH Zurich

3-SAT Faster and Simpler – Unique-SAT Bounds for PPSZ Hold in General

The PPSZ algorithm by Paturi, Pudlák, Saks, and Zane [1998] is the fastest known algorithm for Unique k-SAT, where the input formula has at most one satisfying assignment. For k >= 5, the same bounds hold for general k-SAT. We show that this is also the case for k = 3, 4, using a slightly modified PPSZ algorithm. We carry out the analysis by defining a cost for satisfiable CNF formulas, which we prove decreases by a certain amount in each PPSZ step. This improves our previous best bounds with Moser and Scheder [2011] for 3-SAT to O(1.308^n) and for 4-SAT to O(1.469^n).

Yuan Zhou – CMU

Approximation Algorithms and Hardness of the k-Route Cut Problem

We study the k-route cut problem: given an undirected edge-weighted graph G=(V,E), a collection {(s_1,t_1),(s_2,t_2),…,(s_r,t_r)} of source-sink pairs, and an integer connectivity requirement k, the goal is to find a minimum-weight subset E’ of edges to remove, such that the connectivity of each pair (s_i, t_i) falls below k. Specifically, in the edge-connectivity version, EC-kRC, the requirement is that there are at most (k-1) edge-disjoint paths connecting s_i to t_i in G \ E’, while in the vertex-connectivity version, VC-kRC, the same requirement applies to vertex-disjoint paths. Prior to our work, a poly-logarithmic approximation algorithm was known for the special case k=2, but no non-trivial approximation algorithms were known for any value k>2, except in the single-source setting. We give an O(k log^{3/2} r)-approximation algorithm for EC-kRC with uniform edge weights, and several polylogarithmic bi-criteria approximation algorithms for EC-kRC and VC-kRC in which the connectivity requirement k is violated by a constant factor. Our results improve the previously known result for the case k=2. We also prove that VC-kRC is hard to approximate to within a factor of k^{\epsilon} for some fixed \epsilon>0. We then turn to a simpler version of VC-kRC, where only one source-sink pair is present, and give a simple bi-criteria approximation algorithm for this case. We then show evidence that even this restricted version of the problem may be hard to approximate; for example, we show that the single source-sink pair version of VC-kRC has no constant-factor approximation, assuming Feige’s Random k-AND assumption.

Yuichi Yoshida – Kyoto University

Testing Juntas of Symmetric Functions

We consider testing properties of Boolean functions, that is, distinguishing the case in which an input function satisfies a predetermined property from the case in which it is “far” from satisfying the property. A function on n variables is called a k-Junta when it depends on at most k variables. We first give a new proof that being a k-Junta is testable with the same query complexity as the previous best. Our proof is combinatorial and greatly simplifies previous proofs, which heavily use Fourier analysis; the key ingredient is a new connection with intersecting families. Secondly, we introduce a new class of functions, called k-Juntas of Symmetric Functions (k-JoSF), which generalizes both k-Juntas and symmetric functions: a function is a k-JoSF if it depends only on at most k variables and on the Hamming weight of the remaining variables. Extending our new proof for k-Juntas, we show that being a k-JoSF is also testable. Finally, we give an algorithm that tests whether an input function is isomorphic to a predetermined k-JoSF. We stress that k-JoSFs include all functions for which isomorphism is currently known to be testable, and our result unifies them all.

Elisa Celis – University of Washington

Balls and Bins: Smaller Hash Families and Faster Evaluation

A fundamental fact in the analysis of randomized algorithms is that when n balls are hashed into n bins independently and uniformly at random, with high probability each bin contains at most O(log n / log log n) balls. In various applications, however, the assumption that a truly random hash function is available is not always valid, and explicit functions are required. In this talk we consider the size (equivalently, description length) and evaluation time of families of functions that guarantee a maximal load of O(log n / log log n) with high probability. We construct two such families based on two different approaches, which exhibit a trade-off between size and evaluation time. The first construction uses “gradually increasing independence”, resulting in functions that are described using O(log n log log n) bits and evaluated in time O(log n log log n). The second construction is based on derandomization techniques for space-bounded computations combined with a tailored construction of a pseudorandom generator, resulting in functions that are described using O(log^{3/2} n) bits and evaluated in time O(sqrt(log n)).
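As a quick illustration of the benchmark the talk refers to, here is a hedged simulation sketch; a truly random hash function is simulated directly, and the parameters are arbitrary.

```python
import random
from collections import Counter
from math import log

def max_load(n, trials=5):
    """Empirically check the classical balls-into-bins fact: hashing n
    balls into n bins uniformly at random gives a maximum load of
    O(log n / log log n) with high probability."""
    worst = 0
    for _ in range(trials):
        bins = Counter(random.randrange(n) for _ in range(n))
        worst = max(worst, max(bins.values()))
    return worst

n = 1_000_000
print("observed max load:  ", max_load(n))
print("log n / log log n:  ", log(n) / log(log(n)))
```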

Bernhard Haeupler – MIT

Analyzing Network Coding Gossip Made Easy

This talk will introduce projection analysis – a new technique to analyze the stopping time of gossip protocols that are based on random linear network coding (RLNC). Projection analysis drastically simplifies, extends and strengthens previous results. It shows, in most settings for the first time, that RLNC gossip converges with high probability in order-optimal time. Most stopping times are of the form O(k + T), where k is the number of messages to be distributed and T is the time it takes to disseminate one message; this means RLNC gossip achieves “perfect pipelining”. The analysis directly extends to highly dynamic networks in which the topology can change completely at any time. This remains true even if the network dynamics are controlled by a fully adaptive adversary that knows the complete network state. Virtually nothing besides simple O(kT) sequential flooding protocols was previously known for such a setting. While RLNC gossip works in this wide variety of networks, the projection analysis remains the same and extremely simple.

Mahmoud Fouz – Saarland University

Social Networks Spread Rumors in Sublogarithmic Time

With the prevalence of social networks, it has become increasingly important to understand their features and limitations. It has been observed that information spreads extremely fast in social networks. We study the performance of randomized rumor spreading protocols on graphs in the preferential attachment model. The well-known random phone call model of Karp et al. (FOCS 2000) is a push-pull strategy in which, in each round, each vertex chooses a random neighbor and exchanges information with it. We prove the following:
• The push-pull strategy delivers a message to all nodes within O(log n) rounds with high probability. The best bound known so far was O(log^2 n).
• If we slightly modify the protocol so that contacts are chosen uniformly from all neighbors except the one contacted in the previous round, then this time reduces to O(log n / log log n), which matches the diameter of the graph.
This is the first time a sublogarithmic broadcast time has been proven for a natural setting, and the first time that avoiding double contacts reduces the running time to a smaller order of magnitude.
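A minimal simulation sketch of the push-pull protocol on a preferential attachment graph follows; networkx is my choice of library, and the graph size and parameters are illustrative, not taken from the paper.

```python
import random
import networkx as nx

def push_pull_rounds(G, start):
    """Simulate the random phone call (push-pull) protocol: in each round
    every node contacts a uniformly random neighbor, and informed nodes
    exchange the rumor in both directions. Returns the number of rounds
    until all nodes are informed (assumes G is connected)."""
    informed = {start}
    rounds = 0
    while len(informed) < G.number_of_nodes():
        contacts = {v: random.choice(list(G.neighbors(v))) for v in G}
        newly = set()
        for v, u in contacts.items():
            if v in informed:   # push: v tells u
                newly.add(u)
            if u in informed:   # pull: v learns from u
                newly.add(v)
        informed |= newly
        rounds += 1
    return rounds

# Preferential attachment graph, as studied in the talk.
G = nx.barabasi_albert_graph(10_000, 3)
print(push_pull_rounds(G, start=0))
```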

Tiancheng Lou – Tsinghua University

Who Will Follow You Back? Reciprocal Relationship Prediction

We study the extent to which the formation of a two-way relationship can be predicted in a dynamic social network. A two-way (reciprocal) relationship, usually developed from a one-way (parasocial) relationship, represents a more trustful relationship between people. Understanding the formation of two-way relationships can provide insights into the micro-level dynamics of the social network, such as the underlying community structure and how users influence each other. Employing Twitter as a source for our experimental data, we propose a learning framework that formulates the problem of reciprocal relationship prediction as a graphical model, incorporating social theories into a machine learning model. We demonstrate that it is possible to accurately infer 90% of reciprocal relationships in a dynamic network. Our study provides strong evidence of the existence of structural balance among reciprocal relationships. In addition, we have some interesting findings, e.g., the likelihood of two ‘elite’ users creating a reciprocal relationship is nearly 8 times higher than the likelihood of two ordinary users doing so. More importantly, our findings have potential implications such as how social structures can be inferred from individuals’ behaviors.

George Pierrakos – Berkeley

On Optimal Single-Item Auctions

We revisit the problem of designing the profit-maximizing single-item auction, solved by Myerson in his seminal paper for the case in which bidder valuations are independently distributed. We focus on general joint distributions, seeking the optimal deterministic incentive compatible auction. We give a geometric characterization of the optimal auction through a duality theorem, resulting in an efficient algorithm for finding the optimal deterministic auction in the two-bidder case and an inapproximability result for three or more bidders.

Simina Branzei – Aarhus University

Matching games with compact externalities

In this talk we present a compact model of matchings with externalities. Our formulation achieves tractability of the representation at the expense of full expressivity. We formulate a general stability concept, in which a potentially deviating group evaluates the reaction of the outside players before deciding whether to deviate or not. Then we consider several solution concepts that fall under this general notion of stability, namely the neutral, optimistic, and pessimistic setwise-stable sets. We study the properties of these stable sets in many-to-many and one-to-one matchings, show how they are related to each other, analyze their complexity, and provide polynomial-time algorithms where applicable.

Liwen Xu – Tsinghua University

Major Coefficients Recovery: a Compressed Scheme for Data Gathering in Wireless Sensor Network

For large-scale sensor networks deployed for data gathering, energy efficiency is critical, and eliminating data correlation is a promising technique for achieving it. Compressive Data Gathering (CDG), which employs distributed coding to compress correlated data, is an important approach in this area. However, the CDG scheme uses a uniform data transmission pattern, in which all nodes transmit the same amount of data regardless of their hop distance to the sink, making it inefficient at saving transmission costs in 2-D networks. In this paper, we propose the Major Coefficient Recovery (MCR) scheme, in which the Discrete Cosine Transform (DCT) is applied to the original sensed data in a distributed fashion. A non-uniform data transmission pattern is obtained by exploiting the energy concentration property of the DCT together with QR decomposition techniques, so that sensors with larger hop counts can transmit fewer messages, improving network energy efficiency. The sink node recovers only the major coefficients of the DCT to reconstruct the original data accurately. MCR reduces the transmission overhead to O(kn - k^2), an improvement by O(log n) over CDG in both the 1-D and 2-D cases. The recovery performance of MCR is verified by extensive simulations.
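To illustrate the energy-concentration property that MCR exploits (not the distributed in-network protocol itself), here is a small sketch using SciPy's DCT; the test signal and the choice of k are made up for illustration.

```python
import numpy as np
from scipy.fft import dct, idct

def major_coefficient_reconstruction(readings, k):
    """Sensor readings are typically smooth, so their DCT energy
    concentrates in a few "major" coefficients; keeping only the k
    largest ones already reconstructs the data accurately."""
    coeffs = dct(readings, norm='ortho')
    major = np.argsort(np.abs(coeffs))[-k:]      # indices of the k largest
    truncated = np.zeros_like(coeffs)
    truncated[major] = coeffs[major]
    return idct(truncated, norm='ortho')

# Smooth, correlated signal as a stand-in for sensor data.
x = np.linspace(0, 4 * np.pi, 512)
readings = np.sin(x) + 0.3 * np.cos(3 * x)
approx = major_coefficient_reconstruction(readings, k=10)
print("relative error:", np.linalg.norm(readings - approx) / np.linalg.norm(readings))
```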

Shayan Oveis Gharan – Stanford University

A Randomized Rounding Approach to the Traveling Salesman Problem

We give a randomized rounding approach for both the symmetric and asymmetric versions of the traveling salesman problem. As in Christofides’ algorithm, our algorithm finds a spanning tree whose cost is upper bounded by the optimum, and then finds the minimum-cost Eulerian augmentation of that tree. The main difference is in the selection of the spanning tree: except in certain cases where the solution of the LP is nearly integral, we select the spanning tree randomly by sampling from a maximum entropy distribution defined by the linear programming relaxation. Despite the simplicity of the algorithm, the analysis builds on a variety of ideas, such as properties of random spanning trees and strongly Rayleigh measures from probability theory, graph-theoretic results on the structure of near-minimum cuts, and the integrality of the T-join polytope and the integral circulation polytope from polyhedral theory.

Roy Schwartz – Technion

A Unified Continuous Greedy Algorithm for Submodular Maximization

The study of combinatorial problems with a submodular objective function has attracted much attention in recent years, and is partly motivated by the importance of such problems to economics, algorithmic game theory and combinatorial optimization. Classical works on these problems are mostly combinatorial in nature. Recently, however, many results based on continuous algorithmic tools have emerged. The main bottleneck of such continuous techniques is how to approximately solve a non-convex relaxation for the submodular problem at hand. Thus, the efficient computation of better fractional solutions immediately implies improved approximations for numerous applications. A simple and elegant method, called “continuous greedy”, successfully tackles this issue for monotone submodular objective functions, however, only much more complex tools are known to work for general non-monotone submodular objectives. In this work we present a new unified continuous greedy algorithm which finds approximate fractional solutions for both the non-monotone and monotone cases, and improves on the approximation ratio for many applications. For general non-monotone submodular objective functions, our algorithm achieves an improved approximation ratio of about $1/e$. For monotone submodular objective functions, our algorithm achieves an approximation ratio that depends on the density of the polytope defined by the problem at hand, which is always at least as good as the previously known best approximation ratio of $1 - 1/e$. Some notable immediate implications are an improved $1/e$-approximation for maximizing a non-monotone submodular function subject to a matroid or $O(1)$-knapsack constraints, and information-theoretic tight approximations for Submodular-Max-SAT and Submodular-Welfare with $k$ players, for {\em any} number of players $k$.
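For intuition, here is a minimal sketch of the (monotone) continuous greedy method the abstract builds on, specialized to a cardinality constraint; step counts, sample counts and the toy coverage function are illustrative assumptions, and the fractional output would still need to be rounded to an actual set.

```python
import random

def continuous_greedy(ground, f, k, steps=100, samples=30):
    """Move a fractional point x through the polytope
    {x : sum x_i <= k, 0 <= x_i <= 1}, each step following the best
    vertex for a sampled estimate of the gradient of the multilinear
    extension F(x) = E[f(R(x))], where R(x) contains each element e
    independently with probability x[e]."""
    x = {e: 0.0 for e in ground}

    def sample_set():
        return {e for e in ground if random.random() < x[e]}

    for _ in range(steps):
        # Estimate each coordinate of the gradient by sampling marginal gains.
        grad = {e: 0.0 for e in ground}
        for _ in range(samples):
            R = sample_set()
            base = f(R)
            for e in ground:
                grad[e] += (f(R | {e}) - base) / samples
        # Best feasible direction: indicator of the k largest gradient entries.
        for e in sorted(ground, key=lambda e: grad[e], reverse=True)[:k]:
            x[e] = min(1.0, x[e] + 1.0 / steps)
    return x  # fractional solution; rounding (e.g. swap rounding) comes after

# Toy monotone submodular function: coverage.
covers = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'d'}, 4: {'a'}}
f = lambda S: len(set().union(*(covers[i] for i in S))) if S else 0
print(continuous_greedy(list(covers), f, k=2))
```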

Pushkar Tripathi – Georgia Tech

Matching with Commitments

Consider the following stochastic optimization problem, first introduced by Chen et al. We are given the vertex set of a random graph in which each possible edge is present with probability p_e. We do not know which edges are actually present unless we scan/probe an edge. However, whenever we probe an edge and find it to be present, we are committed to picking it, and both its endpoints are deleted from the graph. We wish to find a maximum matching in this model. We compare our results against the optimal omniscient algorithm that knows the edges of the graph, and present a 0.573-factor algorithm using a novel sampling technique. We also prove that no algorithm can attain a factor better than 0.896 in this model.
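A toy simulation of the commitment model follows; the probing order used here (by decreasing p_e) is just an illustrative heuristic, not the paper's 0.573-factor algorithm.

```python
import random

def greedy_probe_matching(edge_probs, probe_order=None):
    """Edges are probed in some order; a probed edge that turns out to be
    present must be taken into the matching, and its endpoints then leave
    the graph."""
    matched, matching = set(), []
    order = probe_order or sorted(edge_probs, key=lambda e: -edge_probs[e])
    for (u, v) in order:
        if u in matched or v in matched:
            continue                              # an endpoint is already used
        if random.random() < edge_probs[(u, v)]:  # probe: the edge is present
            matching.append((u, v))               # commitment: we must take it
            matched.update((u, v))
    return matching

# edge_probs: {(u, v): presence probability p_e}
edge_probs = {(0, 1): 0.9, (1, 2): 0.5, (2, 3): 0.8, (0, 3): 0.3}
print(greedy_probe_matching(edge_probs))
```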

Allison Lewko – University of Texas at Austin

The Contest Between Simplicity and Efficiency in Asynchronous Byzantine Agreement

In the wake of the decisive impossibility result of Fischer, Lynch, and Paterson for deterministic consensus protocols in the asynchronous model with just one failure, Ben-Or and Bracha demonstrated that the problem could be solved with randomness, even for Byzantine failures. Both protocols are natural and intuitive to verify, and Bracha’s achieves optimal resilience. However, the expected running time of these protocols is exponential in general. Recently, Kapron, Kempe, King, Saia, and Sanwalani presented the first efficient Byzantine agreement algorithm in the asynchronous, full information model, running in polylogarithmic time. Their algorithm is Monte Carlo and drastically departs from the simple structure of Ben-Or and Bracha’s Las Vegas algorithms. We begin an investigation of the question: to what extent is this departure necessary? Might there be a much simpler and more intuitive Las Vegas protocol that runs in expected polynomial time? We will show that the exponential running time of Ben-Or and Bracha’s algorithms is no mere accident of their specific details, but rather an unavoidable consequence of their general symmetry and round structure. We define a natural class of “fully symmetric round protocols” for solving Byzantine agreement in an asynchronous setting and show that any such protocol can be forced to run in expected exponential time by an adversary in the full information model. We assume the adversary controls $t$ Byzantine processors for $t = cn$, where $c$ is an arbitrary positive constant $< \frac{1}{3}$. We view our result as a step toward identifying the level of complexity required for a polynomial-time algorithm in this setting, and also as a guide in the search for new efficient algorithms.


Abhishek Jain – UCLA

Leakage-Resilient Protocols

Traditionally, in cryptography, we assume that the secret keys are completely hidden from the adversary. Unfortunately, in reality, there are several physical attacks, such as timing, power, and acoustic attacks, that an adversary can perform to leak information about the secret keys. To address this, the last few years have seen the advent of “leakage-resilient cryptography”, where the goal is to build cryptosystems that are secure against leakage attacks. However, almost all prior work deals only with “stateless” primitives such as encryption and signatures. In this talk, I will focus on the goal of building leakage-resilient “interactive protocols”. In the first part of the talk, I will consider the setting of Zero-Knowledge (ZK) proofs, where a cheating verifier is allowed to obtain arbitrary bounded leakage on the entire state (including the witness and the random coins) of the prover during the entire protocol execution. Here, we give a new definition of leakage-resilient ZK proofs that intuitively guarantees that “the protocol does not yield anything beyond the validity of the statement and the leakage obtained by the verifier.” Further, we give constructions of leakage-resilient ZK (in both the interactive and non-interactive settings) under standard assumptions, and also demonstrate some applications of our new notions. In the second part of the talk, I will move to the more general setting of secure multiparty computation, where an adversary is allowed to obtain arbitrary bounded leakage on the entire states of the honest parties. Here, I will discuss some new results on leakage-resilient multiparty computation.

Sigurd Melgaard – Aarhus University

Oblivious RAM in the standard model

When storing data at an untrusted server, one can hide the contents by encrypting it. But still the access pattern to the data might reveal some information. An oblivious RAM (ORAM) is a protocol for implementing a RAM functionality leaking only the size of the stored data given a constant-size local memory and access to an external RAM that leaks the access pattern. In this talk we study the problem of ORAM, and present an algorithm for implementing a secure ORAM where the access pattern is perfectly hidden in the information theoretic sense, without assuming that the CPU has access to a random oracle or other cryptographic primitives.

Jon Ullman – Harvard University

Privately Releasing Conjunctions and the Statistical Query Barrier

Suppose we would like to know the answers to a set of statistical queries C on a data set, up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better?
* We show that the number of statistical queries necessary and sufficient for this task is, up to polynomial factors, equal to the agnostic learning complexity of C in Kearns’ statistical query (SQ) model. This gives a complete answer to the question when running time is not a concern.
* We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions.
While interesting from a learning-theoretic point of view, our main applications are in privacy-preserving data analysis. Here, our second result leads to an algorithm that efficiently releases differentially private answers to all Boolean conjunctions with 1% average error, which represents significant progress on a key open problem in privacy-preserving data analysis. Our first result, on the other hand, gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms, can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ model as a new barrier in the design of differentially private algorithms.

Jakob Funder – Aarhus University

Superposition Attacks on Cryptographic Protocols

Attacks on classical cryptographic protocols are usually modeled by allowing an adversary to ask queries to an oracle. Security is then defined by requiring that, as long as the queries satisfy some constraint, there is some problem the adversary cannot solve, such as computing a certain piece of information. We introduce a fundamentally new model of quantum attacks on classical cryptographic protocols, where the adversary is allowed to ask several classical queries in quantum superposition. This is a strictly stronger attack than the standard one, and we consider the security of several cryptographic primitives in this model.

Thomas D. Hansen – Aarhus University

Subexponential lower bounds for randomized pivoting rules for the simplex algorithm

The simplex algorithm is among the most widely used algorithms for solving linear programs in practice. Most deterministic pivoting rules are known, however, to need an exponential number of steps to solve some linear programs (Klee-Minty examples). No non-polynomial lower bounds on the expected number of steps were known, prior to this work, for randomized pivoting rules. We provide the first subexponential (i.e., of the form 2^(Omega(n^alpha)), for some alpha>0) lower bounds for the two most natural, and most studied, randomized pivoting rules suggested to date.

Jop Briet – CWI

Grothendieck Inequalities for Semidefinite Programs with Rank Constraint

Grothendieck inequalities are fundamental inequalities which are frequently used in many areas of mathematics and computer science. They can be interpreted as upper bounds for the integrality gap between two optimization problems: a difficult semidefinite program with a rank-1 constraint and its easy semidefinite relaxation where the rank constraint is dropped. For instance, the integrality gap of the Goemans-Williamson approximation algorithm for MAX CUT can be seen as a Grothendieck inequality. In this paper we consider Grothendieck inequalities for ranks greater than 1 and we give one application in statistical mechanics: approximating ground states in the n-vector model. This talk is based on joint work with Fernando de Oliveira-Filho and Frank Vallentin.

Yuval Filmus – University of Toronto

A combinatorial algorithm for maximum coverage over a matroid

We present an optimal (1-1/e)-approximation algorithm for maximum coverage over a matroid constraint, using non-oblivious local search. Calinescu, Chekuri, Pál and Vondrák have given an optimal (1-1/e)-approximation algorithm for the more general problem of monotone submodular optimization over a matroid constraint. Our algorithm is much simpler and faster, and is completely combinatorial. We are working on extending our algorithm to handle arbitrary monotone submodular functions.

Daniel Dadush – Georgia Tech

Enumerative Lattice Algorithms in any Norm via M-ellipsoid Coverings

We give a novel algorithm for enumerating lattice points in any convex body, and give applications to several classic lattice problems, including the Shortest and Closest Vector Problems (SVP and CVP, respectively) and Integer Programming (IP). Our enumeration technique relies on a classical concept from asymptotic convex geometry known as the M-ellipsoid, and uses as a crucial subroutine the recent algorithm of Micciancio and Voulgaris (STOC 2010) for lattice problems in the l_2 norm. As a main technical contribution, which may be of independent interest, we build on the techniques of Klartag (Geometric and Functional Analysis, 2006) to give an expected 2^O(n)-time algorithm for computing an M-ellipsoid for any n-dimensional convex body. As applications, we give deterministic 2^{O(n)}-time and -space algorithms for solving exact SVP, and exact CVP when the target point is sufficiently close to the lattice, on n-dimensional lattices in any (semi-)norm given an M-ellipsoid of the unit ball. In many norms of interest, including all l_p norms, an M-ellipsoid is computable in deterministic poly(n) time, in which case these algorithms are fully deterministic. Here our approach may be seen as a derandomization of the “AKS sieve” for exact SVP and CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). As a further application of our SVP algorithm, we derive an expected O(f*(n))^n-time algorithm for Integer Programming, where f*(n) denotes the optimal bound in the so-called “flatness theorem,” which satisfies f*(n) = O(n^{4/3} \polylog(n)) and is conjectured to be f*(n)=\Theta(n). Our runtime improves upon the previous best of O(n^{2})^{n} by Hildebrand and Koppe (2010).

Casper Kejlberg-Rasmussen – Aarhus University
Cache-Oblivious Implicit Predecessor Dictionaries with the Working-Set Property

In this talk we present an implicit dynamic dictionary with the working-set property, supporting \op{insert}($e$) and \op{delete}($e$) in $\O(\log n)$ time, \op{predecessor}($e$) in $\O(\log \ws{\pred(e)})$ time, \op{successor}($e$) in $\O(\log \ws{\succ(e)})$ time and \op{search}($e$) in $\O(\log \min(\ws{\pred(e)},\ws{e}, \ws{\succ(e)}))$ time, where $n$ is the number of elements stored in the dictionary, $\ws{e}$ is the number of distinct elements searched for since element~$e$ was last searched for and $\pred(e)$ and $\succ(e)$ are the predecessor and successor of $e$, respectively. The time-bounds are all worst-case. The dictionary stores the elements in an array of size~$n$ using \emph{no} additional space. In the cache-oblivious model the $\log$ is base $B$ and the cache-obliviousness is due to our black box use of an existing cache-oblivious implicit dictionary. This is the first implicit dictionary supporting predecessor and successor searches in the working-set bound. Previous implicit structures required $\O(\log n)$ time.