The following is a list of the papers accepted to ICS 2010, with abstracts.
- Title: Cryptography by Cellular Automata or How Fast Can Complexity Emerge in Nature?
Authors: Benny Applebaum, Yuval Ishai, Eyal Kushilevitz
Abstract: Computation in the physical world is restricted by the following spatial locality constraint: In a single unit of time, information can only travel a bounded distance in space. A simple computational model which captures this constraint is a cellular automaton: A discrete dynamical system in which cells are placed on a grid and the state of each cell is updated via a local deterministic rule that depends only on the few cells within its close neighborhood. Cellular automata are commonly used to model real world systems in nature and society. Cellular automata were shown to be capable of highly complex behavior. However, it is not clear how fast this complexity can evolve and how common it is with respect to all possible initial configurations. We examine this question from a computational perspective, identifying “complexity” with computational intractability. More concretely, we consider an n-cell automaton with a random initial configuration, and study the minimal number of computation steps t = t(n) after which the following problems can become computationally hard:
• The inversion problem. Given the configuration y at time t, find an initial configuration x which leads to y in t steps.
• The prediction problem. Given an arbitrary sequence of ℓ > n intermediate values of cells in the computation, predict some value in the sequence based on the previous values with a significant advantage over guessing.
These two problems capture the natural goals of inferring the past from the present and predicting the future based on partial observations of the past. Our main results show that, under widely believed conjectures, there are cellular automata for which both problems become hard even after a single computation step. This is done by constructing cryptographic one-way functions and pseudorandom generators which are computed by a single step of a cellular automaton. Our results support the view that computational forms of complexity can emerge from simple local interactions in a very common and immediate way. Our results build on and strengthen previous results of Applebaum et al. (FOCS 2004, CRYPTO 2007) on the parallel complexity of cryptography. These previous works implement cryptographic primitives by circuits with constant depth, constant fan-in and constant fan-out, but inherently fail to satisfy the strong spatial locality requirement.
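To make the locality constraint concrete, here is a minimal Python sketch (a toy rule chosen for illustration, not the paper's cryptographic construction) of a single cellular-automaton step together with the statement of the inversion problem.

```python
import random

def ca_step(x, rule):
    """One synchronous step of a 1D cellular automaton with wraparound.
    Each cell's new state depends only on its 3-cell neighborhood."""
    n = len(x)
    return [rule(x[(i - 1) % n], x[i], x[(i + 1) % n]) for i in range(n)]

def toy_rule(left, center, right):
    # An arbitrary local rule for illustration, not the paper's construction.
    return left ^ (center & right)

n = 32
x = [random.randint(0, 1) for _ in range(n)]   # random initial configuration
y = ca_step(x, toy_rule)

# Inversion problem (informal): given y and the rule, find any x' with
# ca_step(x', toy_rule) == y.  The paper shows that, under standard
# conjectures, there are local rules for which even this one-step inversion
# is as hard as inverting a cryptographic one-way function.
```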
- Title: Breaking and Making Quantum Money: Toward a New Quantum Cryptographic Protocol
Authors: Andrew Lutomirski, Scott Aaronson, Edward Farhi, David Gosset
Abstract: Public-key quantum money is a cryptographic protocol in which a bank can create quantum states which anyone can verify but no one except possibly the bank can clone or forge. There are no secure public-key quantum money schemes in the literature; as we show in this paper, the only previously published scheme is insecure. We introduce a category of quantum money protocols which we call collision-free. For these protocols, even the bank cannot prepare multiple identical-looking pieces of quantum money. We present a blueprint for how such a protocol might work as well as a concrete example which we believe may be insecure.
- Title: Analytical Tools for Natural Algorithms
Authors: Bernard Chazelle
Abstract: We introduce an analytical tool to study the convergence of bidirectional multiagent agreement systems and use it to sharpen the analysis of various natural algorithms, including flocking, opinion consensus, and synchronization systems. We also improve classic bounds about colored random walks and discuss the usefulness of algorithmic proofs.
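As one concrete instance of the bidirectional agreement systems analyzed here, the sketch below simulates Hegselmann-Krause-style bounded-confidence opinion dynamics; it illustrates the setting only, not the paper's analytical tool.

```python
def hk_step(opinions, r):
    """One round of Hegselmann-Krause opinion dynamics: each agent moves to
    the average of all opinions within confidence radius r (including its
    own).  Influence is mutual, so the system is bidirectional."""
    new = []
    for x in opinions:
        neighbors = [y for y in opinions if abs(y - x) <= r]
        new.append(sum(neighbors) / len(neighbors))
    return new

opinions = [0.05, 0.1, 0.4, 0.5, 0.55, 0.9]
for _ in range(20):
    opinions = hk_step(opinions, r=0.2)
print(opinions)   # opinions collapse into clusters; the interesting question
                  # is how quickly, which is what the paper's tool bounds
```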
- Title: A New Approach to Strongly Polynomial Linear Programming
Authors: Mihály Bárász and Santosh Vempala
Abstract: We present an affine-invariant approach for solving linear programs. Unlike previous approaches, the potential strong polynomiality of the new approach does not require that graphs of polytopes have polynomial diameter (the Hirsch conjecture or weaker versions). We prove that two natural realizations of the approach work efficiently for deformed products [AZ99], a class of polytopes that generalizes all known difficult examples for variants of the simplex method, e.g., the Klee-Minty [KM72] and Goldfarb-Sit [GS79] cubes.
- Title: Computational Complexity and Information Asymmetry in Financial Products (Extended Abstract)
Authors: Sanjeev Arora, Boaz Barak, Markus Brunnermeier, Rong Ge
Abstract: This paper introduces notions from computational complexity into the study of financial derivatives. Traditional economics argues that derivatives, like CDOs and CDSs, ameliorate the negative costs imposed due to asymmetric information between buyers and sellers. This is because securitization via these derivatives allows the informed party to find buyers for the information-insensitive part of the cash flow stream of an asset (e.g., a mortgage) and retain the remainder. In this paper we show that this viewpoint may need to be revised once computational complexity is brought into the picture. Assuming reasonable complexity-theoretic conjectures, we show that derivatives can actually amplify the costs of asymmetric information instead of reducing them. We prove our results both in the worst-case setting and in the more realistic average-case setting. In the latter case, to argue that our constructions result in derivatives that “look like” real-life derivatives, we use the notion of computational indistinguishability à la cryptography.
- Title: Pan-Private Streaming Algorithms
Authors: Cynthia Dwork, Moni Naor, Toniann Pitassi, Guy N. Rothblum, Sergey Yekhanin
Abstract: Collectors of confidential data, such as governmental agencies, hospitals, or search engine providers, can be pressured to permit data to be used for purposes other than that for which they were collected. To support the data curators, we initiate a study of pan-private algorithms; roughly speaking, these algorithms retain their privacy properties even if their internal state becomes visible to an adversary. Our principal focus is on streaming algorithms, where each datum may be discarded immediately after processing.
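To see what pan-privacy asks of the internal state, the following sketch keeps one noisy bit per user, in the spirit of a randomized-response density estimator; it is an illustration under assumed parameters, not a construction quoted from the paper, and a full pan-private algorithm would also randomize its published output.

```python
import random

class PanPrivateDensity:
    """Sketch of a pan-private density estimator: the internal state is one
    noisy bit per user, distributed almost identically whether or not that
    user ever appeared, so seeing the state reveals little about any user."""
    def __init__(self, universe, eps=0.25):
        self.eps = eps
        # Initialize every bit to an unbiased coin flip.
        self.state = {u: random.random() < 0.5 for u in universe}

    def process(self, user):
        # On seeing a user, resample its bit with a slight bias toward 1.
        self.state[user] = random.random() < 0.5 + self.eps

    def estimate(self):
        # Fraction of 1-bits is about 1/2 + eps * density; invert that.
        # (A fully pan-private algorithm would also add noise here.)
        frac = sum(self.state.values()) / len(self.state)
        return max(0.0, min(1.0, (frac - 0.5) / self.eps))

est = PanPrivateDensity(universe=range(1000))
for u in [1, 2, 2, 5, 7, 7, 7]:
    est.process(u)
print(est.estimate())
```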
- Title: Robustly Leveraging Collusion in Combinatorial Auctions
Authors: Jing Chen, Silvio Micali and Paul Valiant
Abstract: Because of its devastating effects in auctions and other mechanisms, collusion is prohibited and legally prosecuted. Yet, colluders have always existed, and may continue to exist. We thus raise the following question for mechanism design:
What desiderata are achievable, and by what type of mechanisms, when any set of players who wish to collude are free to do so without any restrictions on the way in which they cooperate and coordinate their actions?
In response to this question we put forward and exemplify the notion of a collusion-leveraging mechanism. In essence, this is a mechanism aligning its desiderata with the incentives of all its players, including colluders, to a significant and mutually beneficial extent. Of course such mechanisms may exist only for suitable desiderata.
In unrestricted combinatorial auctions, where classical mechanisms essentially guarantee 0 social welfare and 0 revenue in the presence of just two colluders, we prove that it is possible for collusion-leveraging mechanisms to guarantee that the sum of social welfare and revenue is always high, even when all players are collusive.
To guarantee better performance, collusion-leveraging mechanisms in essence “welcome” collusive players, rather than pretending they do not exist, raising a host of new questions at the intersection of cooperative and non-cooperative game theory.
- Title: Robust Perfect Revenue From Perfectly Informed Players
Authors: Jing Chen, Avinatan Hassidim and Silvio Micali
Abstract: Maximizing revenue in the presence of perfectly informed players is a well known goal in mechanism design. Yet, all current mechanisms for this goal are vulnerable to equilibrium selection, collusion, privacy and complexity problems, and therefore far from guaranteeing that maximum revenue will be obtained. In this paper we both clarify and rectify this situation by
• Proving that no weakly dominant-strategy mechanism (traditionally considered immune to equilibrium selection) can guarantee even an arbitrarily small fraction of the maximum possible revenue; and, more importantly,
• Constructing a new mechanism, of extensive form and with a unique subgame-perfect equilibrium, which
(a) guarantees a fraction arbitrarily close to 1 of the maximum possible revenue;
(b) is provably robust against equilibrium selection, collusion, complexity, and privacy problems; and
(c) works for any number of players n > 1, and without relying on special conditions for the players’ utilities.
- Title: Playing Games without Observing Payoffs
Authors: Michal Feldman, Adam Kalai and Moshe Tennenholtz
Abstract: Optimization under uncertainty is central to a number of fields, and it is by now well-known that one can learn to play a repeated game without a priori knowledge of the game, as long as one observes the payoffs (even if one does not see the opponent’s play). We consider the complementary scenario in which one does not observe the payoffs but does observe the opponent’s play. Curiously, we show that for an interesting class of games one can still learn to play well. In particular, for any symmetric two-person game, one can nearly match the payoff of the opponent (who may have full knowledge of the game), without ever observing a single payoff. The approach employed is a familiar one: attempt to mimic the opponent’s play. However, one has to be careful about how one mimics an opponent who may know that they are being mimicked.
This paper has two contributions: it (a) extends our understanding of optimization under uncertainty by modeling a new setting in which one can play games optimally; and (b) introduces a new algorithm for being a copycat, one which is strategically sound even against an opponent with a superior informational advantage.
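A toy version of the setting is easy to simulate: the payoff matrix below is hidden from the row player, who sees only the opponent's moves. The naive copycat shown here is only an illustration; the paper's algorithm mimics more carefully, precisely because a predictable copycat can be exploited.

```python
import random

# Payoff matrix of a symmetric two-player zero-sum game (rock-paper-scissors
# with arbitrary hidden stakes); the mimicking player never observes it.
HIDDEN_PAYOFF = [[0, -1, 3],
                 [1, 0, -2],
                 [-3, 2, 0]]   # antisymmetric: P[j][i] == -P[i][j]

def naive_copycat(opp_history):
    """Toy mimicking strategy: repeat the opponent's previous action."""
    if not opp_history:
        return random.randrange(3)
    return opp_history[-1]

me_total, opp_total, opp_hist = 0, 0, []
for t in range(1000):
    my_move = naive_copycat(opp_hist)
    opp_move = random.randrange(3)      # stand-in for an informed opponent
    me_total += HIDDEN_PAYOFF[my_move][opp_move]
    opp_total += HIDDEN_PAYOFF[opp_move][my_move]
    opp_hist.append(opp_move)
print(me_total, opp_total)   # goal: nearly match the opponent's total payoff
```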
- Title: Adversarial Leakage in Games
Authors: Noga Alon, Yuval Emek, Michal Feldman and Moshe Tennenholtz
Abstract: The maximin strategy has become the standard, and most agreed-upon, solution for decision-making in adversarial settings, as discussed in game theory, computer science and other disciplines; its power arises from the use of mixed strategies, a.k.a. probabilistic algorithms. Nevertheless, in adversarial settings we face the risk of information leakage about the actual strategy instantiation. Hence, truly robust algorithms should take information leakage into account. To address this fundamental issue, we introduce the study of adversarial leakage in games. We consider two models of leakage. In both of them the adversary is able to learn the value of b binary predicates about the strategy instantiation. In one of the models these predicates are selected after the decision-maker announces its probabilistic algorithm and in the other one they are decided in advance. We give tight results about the effects of adversarial leakage in general zero-sum games with binary payoffs as a function of the level of leakage captured by b in both models. We also compare the power of adversarial leakage in the two models and the robustness of the original maximin strategies of games to adversarial leakage. Finally, we study the computation of optimal strategies for adversarial leakage models. Together, our study introduces a new framework for robust decision-making, and provides a rigorous fundamental understanding of its properties.
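A two-action example shows why even b = 1 matters: in matching pennies the maximin strategy guarantees a win with probability 1/2, but a single binary predicate of the realized action can reveal the action outright, dropping the guarantee to 0. The snippet below merely checks this arithmetic and is not taken from the paper.

```python
# Matching pennies with 0/1 payoffs for the row player.
WIN = [[1, 0],
       [0, 1]]

def value_without_leakage(p):
    """Row player's guaranteed payoff when the adversary sees only p."""
    return min(sum(p[i] * WIN[i][j] for i in range(2)) for j in range(2))

def value_with_one_bit_leak(p):
    """The adversary additionally learns the realized row action (one binary
    predicate suffices with two actions) and best-responds to it."""
    return sum(p[i] * min(WIN[i][j] for j in range(2)) for i in range(2))

p = (0.5, 0.5)                       # the classical maximin strategy
print(value_without_leakage(p))      # 0.5
print(value_with_one_bit_leak(p))    # 0.0
```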
- Title: Game Theory with Costly Computation: Formulation and Application to Protocol Security
Authors: Joseph Y. Halpern and Rafael Pass
Abstract: We develop a general game-theoretic framework for reasoning about strategic agents performing possibly costly computation. In this framework, many traditional game-theoretic results (such as the existence of a Nash equilibrium) no longer hold. Nevertheless, we can use the framework to provide psychologically appealing explanations of observed behavior in well-studied games (such as finitely repeated prisoner’s dilemma and rock-paper-scissors). Furthermore, we provide natural conditions on games sufficient to guarantee that equilibria exist. As an application of this framework, we develop a definition of protocol security relying on game-theoretic notions of implementation. We show that a natural special case of this definition is equivalent to a variant of the traditional cryptographic definition of protocol security; this result shows that, when taking computation into account, the two approaches used for dealing with “deviating” players in two different communities—Nash equilibrium in game theory and zero-knowledge “simulation” in cryptography—are intimately related.
- Title: Bounding Rationality by Discounting Time
Authors: Lance Fortnow and Rahul Santhanam
Abstract: Consider a game where Alice generates an integer and Bob wins if he can factor that integer. Traditional game theory tells us that Bob will always win this game even though in practice Alice will win given our usual assumptions about the hardness of factoring.
We define a new notion of bounded rationality, where the payoffs of players are discounted by the computation time they take to produce their actions. We use this notion to give a direct correspondence between the existence of equilibria where Alice has a winning strategy and the hardness of factoring. Namely, under a natural assumption on the discount rates, there is an equilibrium where Alice has a winning strategy iff there is a linear-time samplable distribution with respect to which Factoring is hard on average.
We also give general results for discounted games over countable action spaces, including showing that any game with bounded and computable payoffs has an equilibrium in our model, even if each player is allowed a countable number of actions. It follows, for example, that the Largest Integer game has an equilibrium in our model though it has no Nash equilibria or ε-Nash equilibria.
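The discounting idea can be made concrete with a toy version of the Alice/Bob game (illustrative parameters only, not the paper's formal model): Bob's payoff for a correct factorization shrinks geometrically in the number of computation steps he spends.

```python
def factor_with_step_count(n):
    """Trial division, returning a nontrivial factor of n and the number of
    'computation steps' (divisions attempted)."""
    steps, d = 0, 2
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return d, steps
        d += 1
    return None, steps          # n is prime: no nontrivial factor

def bobs_discounted_payoff(n, delta=0.999):
    """Payoff 1 for factoring n, discounted by delta per computation step."""
    factor, steps = factor_with_step_count(n)
    return (delta ** steps) if factor is not None else 0.0

# A small semiprime is factored quickly, so the discount barely bites; a
# product of two ~5-digit primes forces thousands of steps and the payoff
# collapses toward 0, which is what lets Alice "win" against slow factoring.
print(bobs_discounted_payoff(15))              # close to 1
print(bobs_discounted_payoff(10007 * 10009))   # tiny
```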
- Title: Market Equilibrium under Separable, Piecewise-Linear, Concave Utilities
Authors: Vijay V. Vazirani and Mihalis Yannakakis
Abstract:
We consider Fisher and Arrow-Debreu markets under additively-separable, piecewise-linear, concave utility functions, and obtain the following results:
• For both market models, if an equilibrium exists, there is one that is rational and can be written using polynomially many bits.
• There is no efficiently checkable necessary and sufficient condition for the existence of an equilibrium: The problem of checking for existence of an equilibrium is NP-complete for both market models; the same holds for existence of an ε-approximate equilibrium, for ε = O(n^{-5}).
• Under standard (mild) sufficient conditions, the problem of finding an exact equilibrium is in PPAD for both market models. We note that this is the first result showing membership in PPAD for a market model defined by an important, broad class of utility functions.
• Finally, building on the techniques of [CDDT09] we prove that under these sufficient conditions, finding an equilibrium for Fisher markets is PPAD-hard.
- Title: Beyond Equilibria: Mechanisms for Repeated Combinatorial Auctions
Authors: Brendan Lucier
Abstract: We study the design of mechanisms in combinatorial auction domains. We focus on settings where the auction is repeated, motivated by auctions for licenses or advertising space. We consider models of agent behaviour in which agents either apply common learning techniques to minimize the regret of their bidding strategies, or apply short-sighted best-response strategies. We ask: when can a black-box approximation algorithm for the base auction problem be converted into a mechanism that approximately preserves the original algorithm’s approximation factor on average over many iterations? We present a general reduction for a broad class of algorithms when agents minimize external regret. We also present a mechanism for the combinatorial auction problem that attains an O(√m) approximation on average when agents apply best-response dynamics.
- Title: A New Look at Selfish Routing
Authors: Christos Papadimitriou and Gregory Valiant
Abstract: We revisit price of anarchy in network routing, in a new model in which routing decisions are made by self-interested components of the network, as opposed to by the flows as in [12]. This significant departure from previous work on the problem seeks to model Internet routing more accurately. We propose two models: the latency model in which the network edges seek to minimize the average latency of the flow through them on the basis of knowledge of latency conditions in the whole network, and the pricing model in which network edges advertise pricing schemes to their neighbors and seek to maximize their profit.
We show two rather surprising results: the price of stability in the latency model is unbounded — Ω(n^{1/60}) — even with linear latencies (as compared with 4/3 in [12] for the case in which routing decisions are made by the flows themselves). However, in the pricing model in which edges advertise pricing schemes—how the price varies as a function of the total amount of flow—we show that, under a condition ruling out monopolistic situations, all Nash equilibria have optimal flows; that is, the price of anarchy in this model is one, in the case of linear latencies with no constant term.
- Title: Local Algorithms for Finding Interesting Individuals in Large Networks
Authors: Mickey Brautbar and Michael Kearns
Abstract: We initiate the study of local, sublinear time algorithms for finding vertices with extreme topological properties — such as high degree or clustering coefficient — in large social or other networks. We introduce a new model, called the Jump and Crawl model, in which algorithms are permitted only two graph operations. The Jump operation returns a randomly chosen vertex, and is meant to model the ability to discover “new” vertices via keyword search in the Web, shared hobbies or interests in social networks such as Facebook, and other mechanisms that may return vertices that are distant from all those currently known. The Crawl operation permits an algorithm to explore the neighbors of any currently known vertex, and has clear analogues in many modern networks.
We give both upper and lower bounds in the Jump and Crawl model for the problems of finding vertices of high degree and high clustering coefficient. We consider both arbitrary graphs, and specializations in which some common assumptions are made on the global topology (such as power law degree distributions or generation via preferential attachment). We also examine local algorithms for some related vertex or graph properties, and discuss areas for future investigation.
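The access model is simple enough to write down directly. The sketch below wraps a graph in the two permitted operations and runs a naive jump-only heuristic for the high-degree problem; it illustrates the model, not the paper's algorithms or bounds.

```python
import random

class JumpAndCrawl:
    """The two access operations of the Jump and Crawl model, wrapped around
    an adjacency-list graph.  The algorithm never sees the whole graph."""
    def __init__(self, adj):
        self._adj = adj
        self.queries = 0

    def jump(self):
        self.queries += 1
        return random.choice(list(self._adj))     # uniformly random vertex

    def crawl(self, v):
        self.queries += 1
        return list(self._adj[v])                 # neighbors of a known vertex

def high_degree_by_jumping(g, samples):
    """Naive strategy (an illustration, not the paper's algorithm): jump
    repeatedly and keep the vertex of largest observed degree."""
    best, best_deg = None, -1
    for _ in range(samples):
        v = g.jump()
        deg = len(g.crawl(v))
        if deg > best_deg:
            best, best_deg = v, deg
    return best, best_deg

# Example: a star graph.  Jumping alone usually lands on low-degree leaves,
# which is why crawling from sampled vertices' neighborhoods also matters.
adj = {0: set(range(1, 200))}
for v in range(1, 200):
    adj.setdefault(v, set()).add(0)
print(high_degree_by_jumping(JumpAndCrawl(adj), samples=20))
```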
- Title: Circumventing the Price of Anarchy: Leading Dynamics to Good Behavior
Authors: Maria-Florina Balcan and Avrim Blum and Yishay Mansour
Abstract: Many natural games can have a dramatic difference between the quality of their best and worst Nash equilibria, even in pure strategies. Yet, nearly all work to date on dynamics shows only convergence to some equilibrium, especially within a polynomial number of steps. In this work we study how agents with some knowledge of the game might be able to quickly (within a polynomial number of steps) find their way to states of quality close to the best equilibrium. We consider two natural learning models in which players choose between greedy behavior and following a proposed good but untrusted strategy and analyze two important classes of games in this context, fair cost-sharing and consensus games. Both games have extremely high Price of Anarchy and yet we show that behavior in these models can efficiently reach low-cost states.
- Title: Reaching Consensus on Social Networks
Authors: Elchanan Mossel and Grant Schoenebeck
Abstract: Research in sociology studies the effectiveness of social networks in achieving computational tasks. Typically the agents who are supposed to achieve a task are unaware of the underlying social network beyond their immediate friends. They have limited memory, communication, and coordination. These limitations result in computational obstacles to solving otherwise trivial computational problems.
One of the simplest problems studied in the social sciences involves reaching a consensus among players between two alternatives which are otherwise indistinguishable. In this paper we formalize the computational model of social networks. We then analyze the consensus problem as well as the problem of reaching a consensus which is identical to the majority of the original signals. In both models we seek to minimize the time it takes players to reach a consensus.
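A minimal example of the kind of local, memory-limited process in question is synchronous majority dynamics, sketched below under assumed dynamics for illustration; the paper's protocols and analysis are more refined.

```python
import random

def majority_dynamics(adj, signals, rounds=50):
    """Each player holds one bit and sees only its neighbors' current bits.
    In every round it switches to the local majority (ties keep its bit)."""
    state = dict(signals)
    for _ in range(rounds):
        nxt = {}
        for v, nbrs in adj.items():
            ones = sum(state[u] for u in nbrs)
            if 2 * ones > len(nbrs):
                nxt[v] = 1
            elif 2 * ones < len(nbrs):
                nxt[v] = 0
            else:
                nxt[v] = state[v]
        state = nxt
    return state

# Small ring network with random initial signals.
n = 12
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
signals = {v: random.randint(0, 1) for v in range(n)}
final = majority_dynamics(adj, signals)
print(final, "consensus:", len(set(final.values())) == 1)
# On many graphs this process can stall short of consensus or flip the
# original majority, which is exactly the difficulty the paper quantifies.
```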
- Title: Robustness of the Learning with Errors Assumption
Authors: Shafi Goldwasser, Yael Kalai, Chris Peikert and Vinod Vaikuntanathan
Abstract: Starting with the work of Ishai-Sahai-Wagner and Micali-Reyzin, a new goal has been set within the theory of cryptography community: to design cryptographic primitives that are secure against large classes of side-channel attacks. Recently, many works have focused on designing various cryptographic primitives that are robust (retain security) even when the secret key is “leaky”, under various intractability assumptions. In this work we propose to take a step back and ask a more basic question: which of our cryptographic assumptions (rather than cryptographic schemes) are robust in the presence of leakage of their underlying secrets?
Our main result is that the hardness of the learning with errors (LWE) problem implies its hardness with leaky secrets. More generally, we show that the standard LWE assumption implies that LWE is secure even if the secret is taken from an arbitrary distribution with sufficient entropy, and even in the presence of hard-to-invert auxiliary inputs. We exhibit various applications of this result.
1. Under the standard LWE assumption, we construct a symmetric-key encryption scheme that is robust to secret key leakage, and more generally maintains security even if the secret key is taken from an arbitrary distribution with sufficient entropy (and even in the presence of hard-to-invert auxiliary inputs).
2. Under the standard LWE assumption, we construct a (weak) obfuscator for the class of point functions with multi-bit output. We note that in most schemes that are known to be robust to leakage, the parameters of the scheme depend on the maximum leakage the system can tolerate, and hence the efficiency degrades with the maximum anticipated leakage, even if no leakage occurs at all! In contrast, the fact that we rely on a robust assumption allows us to construct a single symmetric-key encryption scheme, with parameters that are independent of the anticipated leakage, that is robust to any leakage (as long as the secret key has sufficient entropy left over). Namely, for any k < n (where n is the size of the secret key), if the secret key has only entropy k, then the security relies on the LWE assumption with secret size roughly k.
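For concreteness, an LWE sample has the form (a, ⟨a, s⟩ + e mod q). The sketch below uses toy parameters (far too small to be secure) and draws the secret from a skewed, non-uniform distribution, the kind of "weak secret" the robustness result addresses.

```python
import random

q, n = 257, 16          # toy parameters; real LWE uses much larger dimensions

def sample_weak_secret():
    """Secret drawn from a non-uniform, low-entropy-per-coordinate
    distribution rather than uniformly from Z_q^n."""
    return [random.choices([0, 1, 2], weights=[6, 3, 1])[0] for _ in range(n)]

def lwe_sample(s):
    """One LWE sample (a, <a, s> + e mod q) with small Gaussian-ish noise."""
    a = [random.randrange(q) for _ in range(n)]
    e = round(random.gauss(0, 2))
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

s = sample_weak_secret()
samples = [lwe_sample(s) for _ in range(8)]
# The paper's claim: under the standard LWE assumption, such samples remain
# pseudorandom as long as the secret's distribution has enough min-entropy,
# even given hard-to-invert auxiliary information about s.
```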
- Title: Distribution-Specific Agnostic Boosting
Authors: Vitaly Feldman
Abstract: We consider the problem of boosting the accuracy of weak learning algorithms in the agnostic learning framework of Haussler (1992) and Kearns et al. (1992). Known algorithms for this problem (Ben-David et al., 2001; Gavinsky, 2002; Kalai et al., 2008) follow the same strategy as boosting algorithms in the PAC model: the weak learner is executed on the same target function but over different distributions on the domain. Application of such boosting algorithms usually requires a distribution-independent weak agnostic learner. Here we demonstrate boosting algorithms for the agnostic learning framework that only modify the distribution on the labels of the points (or, equivalently, modify the target function). This allows boosting a distribution-specific weak agnostic learner to a strong agnostic learner with respect to the same distribution. Our algorithm achieves the same guarantees on the final error as the boosting algorithms of Kalai et al. (2008) but is substantially simpler and more efficient. When applied to the weak agnostic parity learning algorithm of Goldreich and Levin (1989) our algorithm yields a simple PAC learning algorithm for DNF and an agnostic learning algorithm for decision trees over the uniform distribution using membership queries. These results substantially simplify Jackson’s famous DNF learning algorithm (1994) and the recent result of Gopalan et al. (2008). We also strengthen the connection to hard-core set constructions discovered by Klivans and Servedio (1999) by demonstrating that hard-core set constructions that achieve the optimal hard-core set size (given by Holenstein (2005) and Barak et al. (2009)) imply distribution-specific agnostic boosting algorithms. Conversely, our boosting algorithm gives a simple hard-core set construction with an (almost) optimal hard-core set size.
- Title: Space-Efficient Estimation of Robust Statistics and Distribution Testing
Authors: Steve Chien, Katrina Ligett and Andrew McGregor
Abstract: The generic problem of estimation and inference given a sequence of i.i.d. samples has been extensively studied in the statistics, property testing, and learning communities. A natural quantity of interest is the sample complexity of the particular learning or estimation problem being considered. While sample complexity is an important component of the computational efficiency of the task, it is also natural to consider the space complexity: do we need to store all the samples as they are drawn, or is it sufficient to use memory that is significantly sublinear in the sample complexity? Surprisingly, this aspect of the complexity of estimation has received significantly less attention in all but a few specific cases. While space-bounded, sequential computation is the purview of the field of data-stream computation, almost all of the literature on the algorithmic theory of data-streams considers only “empirical problems”, where the goal is to compute a function of the data present in the stream rather than to infer something about the source of the stream.
Our contributions are two-fold. First, we provide results connecting space efficiency to the estimation of robust statistics from a sequence of i.i.d. samples. Robust statistics are a particularly interesting class of statistics in our setting because, by definition, they are resilient to noise or errors in the sampled data. We show that this property is enough to ensure that very space-efficient stream algorithms exist for their estimation. In contrast, the numerical value of a “non-robust” statistic can change dramatically with additional samples, and this limits the utility of any finite length sequence of samples. Second, we present a general result that captures a trade-off between sample and space complexity in the context of distributional property testing.
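A simple instance of the phenomenon: the median is a robust statistic, and it can be tracked from i.i.d. samples with a constant number of words of memory by a stochastic-approximation update. The sketch below illustrates the space-efficiency claim and is not an algorithm from the paper.

```python
import random

def streaming_median(samples, step=0.01):
    """Constant-space median estimate: nudge the estimate up or down by a
    small step depending on which side of it the next i.i.d. sample falls."""
    est = 0.0
    for x in samples:
        if x > est:
            est += step
        elif x < est:
            est -= step
    return est

random.seed(0)
stream = (random.gauss(5.0, 2.0) for _ in range(200_000))
print(streaming_median(stream))   # close to the true median, 5.0
```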
- Title: Cryptographic Complexity Classes and Computational Intractability Assumptions
Authors: Hemanta K. Maji, Manoj Prabhakaran and Mike Rosulek
Abstract: Which computational intractability assumptions are inherent to cryptography? We present a broad framework to pose and investigate this question.
We first aim to understand the “cryptographic complexity” of various tasks, independent of any computational assumptions. In our framework the cryptographic tasks are modeled as multi-party computation functionalities. We consider a universally composable secure protocol for one task given access to another task as the most natural complexity reduction between the two tasks. Some of these cryptographic complexity reductions are unconditional, others are unconditionally impossible, but the vast majority appear to depend on computational assumptions; it is this relationship with computational assumptions that we study.
In our detailed investigation of large classes of 2-party functionalities, we find that every reduction we are able to classify turns out to be unconditionally true or false, or else equivalent to the existence of one-way functions (OWF) or of semi-honest (equivalently, standalone-secure) oblivious transfer protocols (sh-OT). This leads us to conjecture that there are only a small finite number of distinct computational assumptions that are inherent among the infinite number of different cryptographic reductions in our framework.
If indeed only a few computational intractability assumptions manifest in this framework, we propose that they are of an extraordinarily fundamental nature, since the framework contains a large variety of cryptographic tasks, and was formulated without regard to any of the prevalent computational intractability assumptions.
- Title: Hard Instances for Satisfiability and Quasi-one-way Functions
Authors: Andrej Bogdanov, Kunal Talwar and Andrew Wan
Abstract: We give an efficient algorithm that takes as input any (probabilistic) polynomial time algorithm A which purports to solve SAT and finds, for infinitely many input lengths, SAT formulas Φ and witnesses w such that A claims Φ is unsatisfiable, but w is a satisfying assignment for Φ (assuming NP ⊄ BPP). This solves an open problem posed in the work of Gutfreund, Shaltiel, and Ta-Shma (CCC 2005). Following Gutfreund et al., we also extend this to give an efficient sampling algorithm (a “quasi-hard” sampler) which generates hard instance/witness pairs for all algorithms running in some fixed polynomial time. We ask how our sampling algorithm relates to various cryptographic notions. We show that our sampling algorithm gives a simple construction of quasi-one-way functions, a weakened notion of standard one-way functions. We also investigate the possibility of obtaining pseudorandom generators from our quasi-one-way functions and show that a large class of reductions that work in the standard setting must fail.
- Title: On the Construction of One-Way Functions from Average Case Hardness
Authors: Noam Livne
Abstract: In this paper we study the possibility of proving the existence of one-way functions based on average case hardness. It is well-known that if there exists a polynomial-time sampler that outputs instance-solution pairs such that the distribution on the instances is hard on average, then one-way functions exist. We study the possibility of constructing such a sampler based on the assumption that there exists a sampler that outputs only instances, where the distribution on the instances is hard on the average. Namely, we study the possibility of "modifying" an ordinary sampler $S$ that outputs (only) hard instances of some search problem $R$, to a sampler that outputs instance-solution pairs of the same search problem $R$. We show that under some restriction, not every sampler can be so modified. That is, we show that if some hard problem with certain properties exists (which, in particular is implied by the existence of one-way permutations), then for every polynomial $\lambda$, there exists a search problem $R$ and a polynomial-time sampler $S$ such that (1) $R$ is hard under the distribution induced by $S$, and (2) there is no sampler $S^*$ with randomness complexity bounded by $\lambda$, that outputs instance-solution pairs of $R$, where the distribution on the instances of $S^*$ is closely related to that of $S$ (i.e., dominates it). A possible interpretation of our result is that a generic approach for transforming samplers of instances to samplers of instance-solution pairs cannot succeed.
- Title: Proof-Carrying Data and Hearsay Arguments from Signature Cards
Authors: Alessandro Chiesa and Eran Tromer
Abstract: Design of secure systems can often be expressed as ensuring that some property is maintained at every step of a distributed computation among mutually-untrusting parties. Special cases include integrity of programs running on untrusted platforms, various forms of confidentiality and side-channel resilience, and domain-specific invariants.
We propose a new approach, proof-carrying data (PCD), which circumnavigates the threat of faults and leakage by reasoning about properties of the output data, independently of the preceding computation. In PCD, the system designer prescribes the desired properties of the computation’s outputs. Corresponding proofs are attached to every message flowing through the system, and are mutually verified by the system’s components. Each such proof attests that the message’s data and all of its history comply with the specified properties.
We construct a general protocol compiler that generates, propagates and verifies such proofs of compliance, while preserving the dynamics and efficiency of the original computation. Our main technical tool is the cryptographic construction of short non-interactive arguments (computationally-sound proofs) for statements whose truth depends on “hearsay evidence”: previous arguments about other statements. To this end, we attain a particularly strong proof of knowledge. We realize the above, under standard cryptographic assumptions, in a model where the prover has blackbox access to some simple functionality— essentially, a signature card.
- Title: Are Stable Instances Easy?
Authors: Yonatan Bilu and Nathan Linial
Abstract: We introduce the notion of a stable instance for a discrete optimization problem, and argue that in many practical situations only sufficiently stable instances are of interest. The question then arises whether stable instances of NP–hard problems are easier to solve. In particular, whether there exist algorithms that solve correctly and in polynomial time all sufficiently stable instances of some NP–hard problem. The paper focuses on the Max–Cut problem, for which we show that this is indeed the case.
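The stability notion can be made concrete as follows. The brute-force check below assumes the common multiplicative formulation of γ-stability (the optimal cut must remain strictly optimal when each edge weight is scaled by any factor in [1, γ]) and is meant only as an illustration on tiny instances.

```python
from itertools import combinations

def cut_edges(edges, side):
    return {(u, v) for (u, v, w) in edges if (u in side) != (v in side)}

def weight(edges, subset):
    return sum(w for (u, v, w) in edges if (u, v) in subset)

def is_gamma_stable(vertices, edges, gamma):
    """Brute-force gamma-stability check for Max-Cut on a tiny instance: the
    optimal cut must beat every other cut even under the worst multiplicative
    perturbation with factors in [1, gamma] (which boosts the rival's edges)."""
    vertices = list(vertices)
    cuts = [frozenset(side) for k in range(len(vertices) + 1)
            for side in combinations(vertices, k)]
    best = max(cuts, key=lambda s: weight(edges, cut_edges(edges, s)))
    best_cut = cut_edges(edges, best)
    for s in cuts:
        if s in (best, frozenset(vertices) - best) or not s or s == frozenset(vertices):
            continue
        other_cut = cut_edges(edges, s)
        gain = weight(edges, best_cut - other_cut)   # edges only the optimum cuts
        loss = weight(edges, other_cut - best_cut)   # edges only the rival cuts
        if gain <= gamma * loss:
            return False
    return True

edges = [(0, 1, 10), (0, 2, 10), (1, 2, 1)]          # triangle, one light edge
print(is_gamma_stable({0, 1, 2}, edges, gamma=5))    # True
print(is_gamma_stable({0, 1, 2}, edges, gamma=20))   # False: boosting edge (1, 2)
                                                     # enough makes another cut win
```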
- Title: A New Approximation Technique for Resource-Allocation Problems
Authors: Barna Saha and Aravind Srinivasan
Abstract: We develop a rounding method based on random walks in polytopes, which leads to improved approximation algorithms and integrality gaps for several assignment problems that arise in resource allocation and scheduling. In particular, it generalizes the work of Shmoys & Tardos on the generalized assignment problem in two different directions, where the machines have hard capacities, and where some jobs can be dropped. We also outline possible applications and connections of this methodology to discrepancy theory and iterated rounding.
- Title: Global Alignment of Molecular Sequences via Ancestral State Reconstruction
Authors: Alexandr Andoni, Constantinos Daskalakis, Avinatan Hassidim and Sebastien Roch
Abstract: Molecular phylogenetic techniques do not generally account for such common evolutionary events as site insertions and deletions (known as indels). Instead tree building algorithms and ancestral state inference procedures typically rely on substitution-only models of sequence evolution. In practice these methods are extended beyond this simplified setting with the use of heuristics that produce global alignments of the input sequences---an important problem which has no rigorous model-based solution. In this paper we open a new direction on this topic by considering a version of the multiple sequence alignment in the context of stochastic indel models. More precisely, we introduce the following {\em trace reconstruction problem on a tree} (TRPT): a binary sequence is broadcast through a tree channel where we allow substitutions, deletions, and insertions; we seek to reconstruct the original sequence from the sequences received at the leaves of the tree. We give a recursive procedure for this problem with strong reconstruction guarantees at low mutation rates, providing also an alignment of the sequences at the leaves of the tree. The TRPT problem without indels has been studied in previous work (Mossel 2004, Daskalakis et al. 2006) as a bootstrapping step towards obtaining information-theoretically optimal phylogenetic reconstruction methods. The present work sets up a framework for extending these works to evolutionary models with indels. In the TRPT problem we begin with a random sequence $x_1, \ldots, x_k$ at the root of a $d$-ary tree. If vertex $v$ has the sequence $y_1, \ldots y_{k_{v}}$, then each one of its $d$ children will have a sequence which is generated from $y_1, \ldots y_{k_{v}}$ by flipping three biased coins for each bit. The first coin has probability $p_s$ for Heads, and determines whether this bit will be substituted or not. The second coin has probability $p_d$, and determines whether this bit will be deleted, and the third coin has probability $p_i$ and determines whether a new random bit will be inserted. The input to the procedure is the sequences of the $n$ leaves of the tree, as well as the tree structure (but not the sequences of the inner vertices) and the goal is to reconstruct an approximation to the sequence of the root (the DNA of the ancestral father). For every $\epsilon > 0$, we present a deterministic algorithm which outputs an approximation of $x_1, \ldots, x_k$ if $p_i + p_d < O(1/k^{2/3} \log n)$ and $(1 - 2 p_s)^2 > O(d^{-1} \log d)$. To our knowledge, this is the first rigorous trace reconstruction result on a tree in the presence of indels.
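The broadcast process itself is fully specified by the three coins, so it is easy to simulate; the sketch below generates the leaf sequences of the TRPT channel under a simplified reading of the model. The reconstruction procedure is the paper's contribution and is not reproduced here.

```python
import random

def indel_channel(seq, p_s, p_d, p_i):
    """Pass a binary sequence through one tree edge: per position, insert a
    fresh random bit with prob p_i, delete the bit with prob p_d, and
    substitute it with prob p_s (a simplified reading of the three coins)."""
    out = []
    for bit in seq:
        if random.random() < p_i:
            out.append(random.randint(0, 1))   # insertion
        if random.random() < p_d:
            continue                           # deletion
        if random.random() < p_s:
            bit ^= 1                           # substitution
        out.append(bit)
    return out

def broadcast(seq, depth, arity, p_s, p_d, p_i):
    """Recursively broadcast the root sequence down a complete d-ary tree and
    return the sequences observed at the leaves."""
    if depth == 0:
        return [seq]
    leaves = []
    for _ in range(arity):
        child = indel_channel(seq, p_s, p_d, p_i)
        leaves.extend(broadcast(child, depth - 1, arity, p_s, p_d, p_i))
    return leaves

root = [random.randint(0, 1) for _ in range(50)]
leaf_seqs = broadcast(root, depth=3, arity=2, p_s=0.02, p_d=0.01, p_i=0.01)
# The TRPT task: given only leaf_seqs and the tree shape, approximate `root`.
```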
- Title: Effectively Polynomial Simulations
Authors: Toniann Pitassi and Rahul Santhanam
Abstract: We introduce a more general notion of efficient simulation between proof systems, which we call effectively-p simulation. We argue that this notion is more natural from a complexity-theoretic point of view, and by revisiting standard concepts in this light we obtain some surprising new results. First, we give several examples where effectively-p simulations are possible between different propositional proof systems, but where p-simulations are impossible (sometimes under complexity assumptions). Secondly, we prove that the rather weak proof system G0 for quantified propositional logic (QBF) can effectively-p simulate any proof system for QBF. Thus our definition sheds new light on the comparative power of proof systems. We also give some evidence that with respect to Frege and Extended Frege systems, an effectively-p simulation may not be possible. Lastly, we prove new relationships between effectively-p simulations, automatizability, and the P versus NP question.
- Title: Circuit Lower Bounds, Help Functions, and the Remote Point Problem
Authors: V. Arvind and Srikanth Srinivasan
Abstract: We investigate the power of Algebraic Branching Programs (ABPs) augmented with help polynomials, and constant-depth Boolean circuits augmented with help functions. We relate the problem of proving explicit lower bounds in both these models to the Remote Point Problem (introduced in [3]). More precisely, proving lower bounds for ABPs with help polynomials is related to the Remote Point Problem w.r.t. the rank metric, and for constant-depth circuits with help functions it is related to the Remote Point Problem w.r.t. the Hamming metric. For algebraic branching programs with help polynomials with some degree restrictions we show exponential size lower bounds for explicit polynomials.
- Title: Derandomizing Algorithms on Product Distributions and Other Applications of Order-Based Extraction
Authors: Ariel Gabizon and Avinatan Hassidim
Abstract: Getting the deterministic complexity closer to the best known randomized complexity is an important goal in algorithms and communication protocols. In this work, we investigate the case where instead of one input, the algorithm/protocol is given multiple inputs sampled independently from an arbitrary unknown distribution. We show that in this case a strong and generic derandomization result can be obtained by a simple argument.
Our method relies on extracting randomness from “same-source” product distributions, which are distributions generated from multiple independent samples from the same source. The extraction process succeeds even for arbitrarily low min-entropy, and is based on the order of the values and not on the values themselves (this may be seen as a generalization of the classical method of von Neumann [26], extended by Elias [8], for extracting randomness from a biased coin). The tools developed in the paper are generic, and can be used in several other problems. We present applications to streaming algorithms, and to implicit probe search [9]. We also refine our method to handle product distributions, where the i’th sample comes from one of several arbitrary unknown distributions. This requires creating a new set of tools, which may also be of independent interest.
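The classical von Neumann trick and the order-based idea it generalizes are short enough to state in code; the sketch below is for intuition only, and the paper's extractors are more general and come with formal guarantees.

```python
import random

def von_neumann(bits):
    """Classical extractor for a biased coin: look at disjoint pairs and
    output 0 for the pattern 01, 1 for 10, discarding 00 and 11."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(0 if (a, b) == (0, 1) else 1)
    return out

def order_based(samples):
    """Order-based analogue for i.i.d. samples from an unknown continuous
    distribution: only the relative order within each disjoint pair is used,
    never the values themselves."""
    out = []
    for a, b in zip(samples[::2], samples[1::2]):
        if a != b:
            out.append(1 if a < b else 0)
    return out

print(von_neumann([random.random() < 0.9 for _ in range(40)]))
print(order_based([random.expovariate(1.0) for _ in range(40)]))
```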
- Title: Symmetric LDPC Codes and Local Testing
Authors: Tali Kaufman and Avi Wigderson
Abstract: Coding-theoretic and complexity-theoretic considerations naturally lead to the question of generating symmetric, sparse, redundant linear systems. This paper provides new constructions with better parameters, as well as new lower bounds.
Low Density Parity Check (LDPC) codes are linear codes defined by short constraints (a property essential for local testing of a code). Some of the best codes used in theory and in practice are LDPC. Symmetric codes are those in which all coordinates “look the same”, namely there is some transitive group acting on the coordinates which preserves the code. Some of the most commonly used locally testable codes (especially in PCPs and other proof systems), including all “low-degree” codes, are symmetric. Requiring that a symmetric binary code of length n has large (linear or near-linear) distance seems to suggest a “conflict” between 1/rate and density (constraint length). In known constructions, if one is constant then the other is almost the worst possible: n/poly(log n). Our main positive result simultaneously achieves symmetric, low-density, constant-rate codes generated by a single constraint. We present an explicit construction of a symmetric and transitive binary code of length n, near-linear distance n/(log log n)^2, of constant rate and with constraints of length (log n)^4. The construction is in the spirit of Tanner codes, namely the codewords are indexed by the edges of a sparse regular expander graph. The main novelty is in our construction of a transitive (non-Abelian!) group acting on these edges which preserves the code. Our construction is one instantiation of a framework we call Cayley Codes developed here, that may be viewed as extending the zig-zag product to symmetric codes.
Our main negative result is that the parameters obtained above cannot be significantly improved, as long as the acting group is solvable (like the one we use). More specifically, we show that in constant-rate and linear-distance codes (aka “good” codes) invariant under solvable groups, the density (length of generating constraints) cannot go down to a constant, and is bounded below by log^{(Ω(l))} n if the group has a derived series of length l. This negative result precludes natural local tests with constantly many queries for such solvable “good” codes.
- Title: Weight Distribution and List-Decoding Size of Reed-Muller Codes
Authors: Tali Kaufman, Shachar Lovett and Ely Porat
Abstract: We study the weight distribution and list-decoding size of Reed-Muller codes. Given a weight parameter, we are interested in bounding the number of Reed-Muller codewords with a weight of up to the given parameter. Additionally, given a received word and a distance parameter, we are interested in bounding the size of the list of Reed-Muller codewords that are within that distance from the received word. In this work, we make a new connection between computer science techniques used for studying low-degree polynomials and these coding theory questions. Using this connection we progress significantly towards resolving both the weight distribution and the list-decoding problems.
Obtaining tight bounds for the weight distribution of Reed-Muller codes has been a long standing open problem in coding theory, dating back to 1976 and seemingly resistant to the common coding theory tools. The best results to date are by Azumi, Kasami and Tokura [1], which provide bounds on the weight distribution that apply only up to 2.5 times the minimal distance of the code. We provide asymptotically tight bounds for the weight distribution of the Reed-Muller code that apply to all distances.
List-decoding has numerous theoretical and practical applications in various fields. To name a few: hardness amplification in complexity [15], constructing hard-core predicates from one-way functions in cryptography [4], and learning parities with noise in learning theory [10]. Many algorithms for list-decoding such as [4, 6] as well as [15] have the crux of their analysis lying in bounding the list-decoding size. The case for Reed–Muller codes is similar, and Gopalan et al. [7] gave a list-decoding algorithm whose complexity is determined by the list-decoding size. Gopalan et al. provided bounds on the list-decoding size of Reed–Muller codes which apply only up to the minimal distance of the code. We provide asymptotically tight bounds for the list-decoding size of Reed–Muller codes which apply for all distances.
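For intuition about what the weight distribution is, the snippet below enumerates the small code RM(2, 4) by brute force and tallies codeword weights; this toy computation is only feasible because the code is tiny, and the paper's bounds address the regimes where enumeration is hopeless.

```python
from itertools import product, combinations
from collections import Counter

def rm_weight_distribution(r, m):
    """Enumerate RM(r, m): evaluations over F_2^m of all multilinear
    polynomials of degree <= r, counting how many codewords have each
    Hamming weight."""
    points = list(product([0, 1], repeat=m))
    monomials = [c for d in range(r + 1) for c in combinations(range(m), d)]
    dist = Counter()
    for coeffs in product([0, 1], repeat=len(monomials)):
        word = []
        for x in points:
            val = 0
            for coef, mono in zip(coeffs, monomials):
                if coef and all(x[i] for i in mono):
                    val ^= 1
            word.append(val)
        dist[sum(word)] += 1
    return dist

print(rm_weight_distribution(2, 4))
# Minimum nonzero weight is 2^(m-r) = 4; the interesting question is how the
# counts grow at weights between the minimum distance and n/2.
```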
- Title: Non-Malleable Codes
Authors: Stefan Dziembowski, Krzysztof Pietrzak and Daniel Wichs
Abstract: We introduce the notion of “non-malleable codes” which relaxes the notions of error-correction and error-detection. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. In contrast to error-correction and error-detection, non-malleability can be achieved for very rich classes of modifications. We construct an efficient code that is non-malleable with respect to modifications that affect each bit of the codeword arbitrarily (i.e. leave it untouched, flip it or set it to either 0 or 1), but independently of the value of the other bits of the codeword. Using the probabilistic method, we also show a very strong and general statement: there exists a non-malleable code for every “small enough” family F of functions via which codewords can be modified. Although this probabilistic method argument does not directly yield efficient constructions, it gives us efficient non-malleable codes in the random-oracle model for very general classes of tampering functions—e.g. functions where every bit in the tampered codeword can depend arbitrarily on any 99% of the bits in the original codeword.
As an application of non-malleable codes, we show that they provide an elegant algorithmic solution to the task of protecting functionalities implemented in hardware (e.g. signature cards) against “tampering attacks”. In such attacks, the secret state of a physical system is tampered with, in the hope that future interaction with the modified system will reveal some secret information. This problem was previously studied in the work of Gennaro et al. in 2004 under the name “algorithmic tamper proof security” (ATP). We show that non-malleable codes can be used to achieve important improvements over the prior work. In particular, we show that any functionality can be made secure against a large class of tampering attacks.
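The bitwise tampering family is easy to state explicitly. The sketch below spells out that family and the experiment non-malleability refers to; the encoder and decoder shown are placeholders, not the paper's construction.

```python
import random

# The bitwise tampering family: each position is independently kept, flipped,
# or set to a constant, regardless of the other bits of the codeword.
def keep(b):  return b
def flip(b):  return 1 - b
def set0(b):  return 0
def set1(b):  return 1

def random_bitwise_tamper(length):
    """Sample one tampering function from the family: an independent choice
    of keep/flip/set0/set1 for every coordinate."""
    choices = [random.choice([keep, flip, set0, set1]) for _ in range(length)]
    return lambda codeword: [g(b) for g, b in zip(choices, codeword)]

def tampering_experiment(encode, decode, message, tamper):
    """Non-malleability asks that the decoding of a tampered codeword is
    either the original message or something unrelated to it."""
    return decode(tamper(encode(message)))

# Placeholder "code" (the identity), which is of course NOT non-malleable:
# flipping one bit yields a closely related message.
encode = lambda m: list(m)
decode = lambda c: c
msg = [1, 0, 1, 1]
print(tampering_experiment(encode, decode, msg, random_bitwise_tamper(4)))
```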
- Title: Interactive Proofs For Quantum Computations
Authors: Dorit Aharonov, Michael Ben-Or and Elad Eban
Abstract: The widely held belief that BQP strictly contains BPP raises fundamental questions: Upcoming generations of quantum computers might already be too large to be simulated classically. Is it possible to experimentally test that these systems perform as they should, if we cannot efficiently compute predictions for their behavior? Vazirani has asked [Vaz07]: If computing predictions for Quantum Mechanics requires exponential resources, is Quantum Mechanics a falsifiable theory? In cryptographic settings, an untrusted future company wants to sell a quantum computer or perform a delegated quantum computation. Can the customer be convinced of correctness without the ability to compare results to predictions?
To provide answers to these questions, we define Quantum Prover Interactive Proofs (QPIP). Whereas in standard Interactive Proofs [GMR85] the prover is computationally unbounded, here our prover is in BQP, representing a quantum computer. The verifier models our current computational capabilities: it is a BPP machine, with access to a few qubits. Our main theorem can be roughly stated as: “Any language in BQP has a QPIP, and moreover, a fault-tolerant one” (providing a partial answer to a challenge posted in [Aar07]). We provide two proofs. The simpler one uses a new (possibly of independent interest) quantum authentication scheme (QAS) based on random Clifford elements. This QPIP, however, is not fault tolerant. Our second protocol uses a polynomial-codes QAS due to Ben-Or, Crépeau, Gottesman, Hassidim, and Smith [BOCG+06], combined with quantum fault tolerance and secure multiparty quantum computation techniques. A slight modification of our constructions makes the protocol “blind”: the quantum computation and input remain unknown to the prover.
- Title: On the Power of a Unique Quantum Witness
Authors: Rahul Jain, Iordanis Kerenidis,Greg Kuperberg, Miklos Santha, Or Sattath and Shengyu Zhang
Abstract: In a celebrated paper, Valiant and Vazirani [29] raised the question of whether the difficulty of NP-complete problems was due to the wide variation of the number of witnesses of their instances. They gave a strong negative answer by showing that distinguishing between instances having zero or one witnesses is as hard as recognizing NP, under randomized reductions.
We consider the same question in the quantum setting and investigate the possibility of reducing quantum witnesses in the context of the complexity class QMA, the quantum analogue of NP. The natural way to quantify the number of quantum witnesses is the dimension of the witness subspace W in some appropriate Hilbert space H. We present an efficient deterministic procedure that reduces any problem where the dimension d of W is bounded by a polynomial to a problem with a unique quantum witness. The main idea of our reduction is to consider the alternating subspace of the tensor power H^{⊗d}. Indeed, the intersection of this subspace with W^{⊗d} is one-dimensional, and therefore can play the role of the unique quantum witness.
- Title: Bounds on the Quantum Satisfiability Threshold
Authors: Sergey Bravyi, Cristopher Moore and Alexander Russell
Abstract: Quantum k-SAT is the problem of deciding whether there is an n-qubit state which is perpendicular to a set of vectors, each of which lies in the Hilbert space of k qubits. Equivalently, the problem is to decide whether a particular type of local Hamiltonian has a ground state with zero energy. We consider random quantum k-SAT formulas with n variables and m = αn clauses, and ask at what value of α these formulas cease to be satisfiable. We show that the threshold for random quantum 3-SAT is at most 3.594. For comparison, convincing arguments from statistical physics suggest that the classical 3-SAT threshold is α_c ≈ 4.267. For larger k, we show that the quantum threshold is a constant factor smaller than the classical one. Our bounds work by determining the generic rank of the satisfying subspace for certain gadgets, and then using the technique of differential equations to analyze various algorithms that partition the hypergraph into a collection of these gadgets.
- Title: Memory Consistency Conditions for Self-Assembly Programming
Authors: Aaron Sterling
Abstract: Perhaps the two most significant theoretical questions about the programming of self-assembling agents are: (1) necessary and sufficient conditions to produce a unique terminal assembly, and (2) error correction. We address both questions, by reducing two well-studied models of tile assembly to models of distributed shared memory (DSM), in order to obtain results from the memory consistency conditions induced by tile assembly systems when simulated in the DSM setting. The Abstract Tile Assembly Model (aTAM) can be simulated by a DSM system that obeys causal consistency, and the locally deterministic tile assembly systems in the aTAM correspond exactly to the concurrent-write-free programs that simulate tile assembly in such a model. Thus, the detection of the failure of local determinism (which had formerly been an open problem) reduces to the detection of data races in simulating programs. Further, the Kinetic Tile Assembly Model can be simulated by a DSM system that obeys GWO, a memory consistency condition defined by Steinke and Nutt. (To our knowledge, this is the first natural example of a DSM system that obeys GWO, but no stronger consistency condition.) We combine these results with the observation that self-assembly algorithms are local algorithms, and there exists a fast conversion of deterministic local algorithms into deterministic self-stabilizing algorithms. This provides an “immediate” generalization of a theorem by Soloveichik et al. about the existence of tile assembly systems that simultaneously perform two forms of self-stabilization: proofreading and self-healing. Our reductions and proof techniques can be extended to the programming of self-assembling agents in a variety of media, not just DNA tiles, and not just two-dimensional surfaces.
- Title: Cache Replacement Policies for Multicore Processors
Authors: Avinatan Hassidim
Abstract: Almost all modern computers use multiple cores, and the number of cores is expected to increase as hardware prices go down and Moore’s law fails to hold. Most of the theoretical algorithmic work so far has focused on the setting where multiple cores are performing the same task. Indeed, one is tempted to assume that when the cores are independent then the current design performs well. This work challenges that assumption by showing that even when the cores run completely independent tasks, there exist dependencies arising from running on the same chip and using the same cache. These dependencies cause the standard caching algorithms to underperform. To address the new challenge, we revisit some aspects of the classical caching design.
More specifically, we focus on the page replacement policy of the first cache shared between all the cores (usually the L2 cache). We make the simplifying assumption that since the cores are running independent tasks, they are accessing disjoint memory locations (in particular this means that maintaining coherency is not an issue). We show that, even under this simplifying assumption, the multicore case is fundamentally different than the single-core case. In particular
1. LRU performs poorly, even with resource augmentation.
2. The offline version of the caching problem is NP-complete.
Any attempt to design an efficient cache for a multicore machine in which the cores may access the same memory has to perform well also in this simpler setting. We provide some intuition as to what an efficient solution could look like, by
1. Partly characterizing the offline solution, showing that it is determined by the part of the cache which is devoted to each core at every timestep.
2. Presenting a PTAS for the offline problem, for some range of the parameters.
In recent years, multicore caching has been the subject of extensive experimental research. The conclusion of some of these works is that LRU is inefficient in practice. The heuristics they propose to replace it are based on dividing the cache between cores and handling each part independently. Our work can be seen as a theoretical explanation of the results of these experiments.
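The interference effect is easy to reproduce in a toy simulation (an illustration of the phenomenon, not an analysis from the paper): a core with a small reusable working set shares one LRU cache with a purely streaming core, and its pages are repeatedly evicted.

```python
from collections import OrderedDict

class SharedLRU:
    """A single LRU cache shared by all cores (a toy stand-in for a shared L2)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()

    def access(self, page):
        """Return 1 on a miss, 0 on a hit."""
        if page in self.cache:
            self.cache.move_to_end(page)
            return 0
        self.cache[page] = True
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the least recently used page
        return 1

def core0_misses(shared):
    cache = SharedLRU(capacity=8)
    misses = 0
    for t in range(10_000):
        misses += cache.access(("core0", t % 6))   # core 0: small reusable loop
        if shared:
            cache.access(("core1", t))             # core 1: pure streaming scan
    return misses

print("core 0 alone:        ", core0_misses(shared=False))  # 6 compulsory misses
print("core 0 with streamer:", core0_misses(shared=True))   # core 0's working set
                                                            # is repeatedly evicted
```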