Lower Bounds for Oblivious Data Structures
Riko Jacob
Abstract: An oblivious data structure is a data structure where the memory access pattern reveals no information about the operations performed on it. Such data structures were introduced by Wang et al. [ACM SIGSAC’14] and are intended for situations where one wishes to store the data structure at an untrusted server. One way to obtain an oblivious data structure is simply to run a classic data structure on an oblivious RAM (ORAM). Until very recently, this resulted in an overhead of $\omega(\lg n)$ for the most natural setting of parameters. Moreover, a recent lower bound for ORAMs by Larsen and Nielsen [CRYPTO’18] shows that they always incur an overhead of at least $\Omega(\lg n)$ if used in a black-box manner. To circumvent the $\omega(\lg n)$ overhead, researchers have instead studied classic data structure problems more directly and have obtained efficient solutions for many such problems, including stacks, queues, deques, priority queues and search trees. However, none of these data structures process operations faster than $\Theta(\lg n)$, leaving open the question of whether even faster solutions exist. In this paper, we rule out this possibility by proving $\Omega(\lg n)$ lower bounds for oblivious stacks, queues, deques, priority queues and search trees.
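As background, the simplest oblivious solution is a linear scan: touch every memory cell in a fixed order regardless of the target, so the physical access pattern is independent of the operation. A minimal Python sketch of this idea (a toy illustration of the obliviousness requirement, not any construction from the paper):

    def oblivious_read(memory, target):
        # Every cell is read in the same fixed order on every call, so an
        # observer of the access pattern learns only the array length,
        # never the secret target index.
        result = None
        for i in range(len(memory)):
            value = memory[i]              # each index is always accessed
            if i == target:                # on a real machine this select
                result = value             # must itself be branchless
        return result

This trivial approach costs $\Theta(n)$ per operation; the lower bounds above say that even the best possible oblivious data structures cannot beat $\Omega(\lg n)$.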
Is there an Oblivious RAM Lower Bound for Online Reads?
Mor Weiss
Abstract: The question of the overhead of Oblivious RAM (ORAM) has been studied since the inception of ORAM by Goldreich and Ostrovsky (JACM 1996). They gave a lower bound of $\Omega(\log n)$ overhead per access for ORAM schemes that operate in a “balls and bins” model, where memory blocks can only be shuffled between different locations but not manipulated otherwise. But can we get lower bounds for general ORAM, beyond “balls and bins”?
Boyle and Naor (ITCS ’16) showed this is unlikely in the offline setting (where all the accesses to be performed need to be specified ahead of time), by constructing an offline ORAM with $o(\log n)$ overhead assuming the existence of small sorting circuits. Though we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO ’18) shows that there indeed is an $\Omega(\log n)$ lower bound for general online ORAM.
The current state of affairs still leaves the question open for online read-only ORAM, or for read/write ORAM where we want very small overhead for the read operations. In this talk, I will show that a lower bound in these settings is also unlikely, by constructing an online ORAM where reads (but not writes) have an $o(\log n)$ overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes. Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.
Joint work with Daniel Wichs.
Bio: Mor is a postdoctoral researcher at the FACT center at IDC Herzliya. Prior to that, she was a postdoctoral researcher at Northeastern University in Boston. She holds a PhD from the Technion, where she was advised by Prof. Yuval Ishai. Her research is in the field of cryptography.
Private Stateful Information Retrieval
Kevin Yeo
Abstract: Private information retrieval (PIR) is a fundamental tool for preserving query privacy when accessing outsourced data. All previous PIR constructions have significant costs preventing widespread use. In this work, we present private stateful information retrieval (PSIR), an extension of PIR that allows clients to be stateful and maintain information between multiple queries. Our design of the PSIR primitive maintains three important properties of PIR: multiple clients may query simultaneously without complex concurrency primitives, query privacy is maintained even if the server colludes with other clients, and new clients can enroll into the system by interacting exclusively with the server. We present a PSIR framework that reduces an online query to performing one single-server PIR on a sub-linear number of database records. All other operations beyond the single-server PIR consist of cryptographic hashes or plaintext operations. In practice, the dominant cost comes from the public-key operations involved in PIR. By shrinking the database input to the PIR, we limit expensive computation and avoid transmitting large ciphertexts. We show that various instantiations of PSIR reduce server CPU by up to 10x and online network costs by up to 10x over the previous best PIR construction. This is joint work with Sarvar Patel and Giuseppe Persiano.
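As background on the PIR building block: in a classic single-server PIR, the client sends an encrypted selection vector under an additively homomorphic scheme, and the server homomorphically selects the requested record without learning the index. A minimal Python sketch with toy Paillier parameters (a generic illustration, not the PSIR protocol; real keys use primes thousands of bits long):

    import random
    from math import gcd

    # toy Paillier keypair (illustration only; insecure parameter sizes)
    p, q = 1000003, 1000033
    n, n2 = p * q, (p * q) ** 2
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                           # valid since g = n + 1

    def enc(m):
        r = random.randrange(2, n)                 # assume gcd(r, n) == 1
        return pow(n + 1, m, n2) * pow(r, n, n2) % n2

    def dec(c):
        return (pow(c, lam, n2) - 1) // n * mu % n

    db = [7, 13, 42, 99]                           # the server's records
    i = 2                                          # the client's secret index

    query = [enc(1 if j == i else 0) for j in range(len(db))]  # to server
    reply = 1
    for c, rec in zip(query, db):                  # server: encrypt <e_i, db>
        reply = reply * pow(c, rec, n2) % n2
    assert dec(reply) == db[i]                     # client recovers db[2]

The query here is linear in the database size; the point of PSIR's client state is to shrink the database the expensive PIR step actually runs over.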
Bio: I am a member of the Applied Cryptography team in Google NYC where I work on both the theoretical and practical aspects of cryptography. My research focuses mainly on privacy-preserving storage protocols. Prior to joining Google, I completed my undergraduate studies at the University of Waterloo.
Meltdown, Spectre and Foreshadow: How a Small Side Channel Leakage Becomes a Big Problem
Daniel Genkin
Abstract: The security of a system is only as good as its weakest link. Even if the system’s software is perfectly secure, security threats originating from the system’s hardware are far from being properly understood. Side channel attacks extract secret information by exploiting delicate interactions between the system’s software and hardware components (such as instruction timing and electromagnetic radiation). Despite being an active research area since the 1990s, the systematic exploration of side channel leakage from complex devices has only begun recently, often with devastating security consequences.
In this talk I will cover the recent Spectre, Meltdown and Foreshadow attacks as well as their implications on the security of computer systems. The talk will be self-contained and include live demonstrations.
Data Oblivious ISA Extensions for Side Channel-Resistant and High Performance Computing
Chris Fletcher
Abstract: Blocking microarchitectural (digital) side channels is one of the most pressing challenges in hardware security today. Recently, there has been a surge of effort that attempts to block these leakages by writing programs data obliviously. In this model, programs are written to avoid placing sensitive data-dependent pressure on shared resources. Despite recent efforts, however, running data oblivious programs on modern machines today is both insecure and slow. First, writing programs obliviously assumes that certain instructions in today’s ISAs will not leak private information, whereas today’s ISAs and hardware provide no such guarantees. Second, writing programs to avoid data-dependent behavior inherently incurs a high performance overhead.
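To make the data oblivious programming model concrete, the basic primitive such programs use in place of a secret-dependent branch is a constant-time select. A generic sketch in Python (for nonnegative integers and cond in {0, 1}; whether the corresponding machine instructions actually execute in constant time is exactly the guarantee today's ISAs fail to give):

    def ct_select(cond, a, b):
        # Return a if cond == 1 else b, with no secret-dependent branch:
        # mask is all ones when cond == 1 and all zeros when cond == 0.
        mask = -cond
        return (a & mask) | (b & ~mask)

    assert ct_select(1, 7, 9) == 7 and ct_select(0, 7, 9) == 9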
This talk will present a new ISA extension called a “Data Oblivious ISA” to tackle both the security and performance aspects of this problem. On the security side, we present ISA design principles to block microarchitectural side channels, and embody these ideas in a concrete ISA capable of safely executing existing data oblivious programs. On the performance side, we extend the ISA to provide explicit, efficient support for core data oblivious operations, such as memory oblivious computation, and show how the ISA allows modern hardware optimizations, e.g., out-of-order speculative execution, to remain enabled in the common case.
Time permitting, the talk will summarize our hardware prototype, built on top of the RISC-V out-of-order, speculative BOOM processor, formal analysis of our ISA applied to an abstract BOOM-style machine, and performance evaluation using existing “constant time” and data oblivious codes.
OptORAMa: Optimal Oblivious RAM
Wei-Kai Lin
Abstract: Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC ’87 and J. ACM ’96), is a technique for provably obfuscating programs’ access patterns, such that the access patterns leak no information about the programs’ secret inputs. To compile a general program to an oblivious counterpart, it is well-known that Ω(log N) amortized blowup is necessary, where N is the size of the logical memory. This was shown in Goldreich and Ostrovsky’s original ORAM work for statistical security and in a somewhat restricted model (the so-called balls-and-bins model), and recently by Larsen and Nielsen (CRYPTO ’18) for computational security.
A long-standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large memory word assumptions, and assuming a constant number of CPU registers). In this talk, I will present the first secure ORAM with O(log N) amortized blowup, assuming one-way functions. This result is inspired by and non-trivially improves on the recent beautiful work of Patel et al. (FOCS ’18), who gave a construction with O(log N · loglog N) amortized blowup, assuming one-way functions.
One of our building blocks of independent interest is a linear-time deterministic oblivious algorithm for tight compaction: given an array of n elements where some elements are marked, we permute the elements in the array so that all marked elements end up in the front of the array. I will present our O(n) algorithm for tight compaction, which improves on the previous best deterministic and randomized algorithms, whose running times are O(n · log n) and O(n · loglog n), respectively.
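For contrast with the new linear-time result, the classical O(n · log n) oblivious baseline for tight compaction sorts by the mark bit using a data-independent sorting network. A small Python sketch with a bitonic network (array length a power of two; the sequence of compared positions depends only on n, never on the data, which is what makes it oblivious):

    def oblivious_compact(marked, items):
        # Sort (flag, item) pairs with a bitonic sorting network; flag 0
        # for marked and 1 for unmarked, so marked items move to the front.
        n = len(items)                       # n must be a power of two
        a = [(0 if m else 1, x) for m, x in zip(marked, items)]
        k = 2
        while k <= n:                        # standard iterative bitonic sort
            j = k // 2
            while j > 0:
                for i in range(n):
                    l = i ^ j
                    if l > i:
                        ascending = (i & k) == 0
                        if (a[i] > a[l]) == ascending:
                            a[i], a[l] = a[l], a[i]
                j //= 2
            k *= 2
        return [x for _, x in a]

    print(oblivious_compact([1, 0, 1, 0], list("abcd")))  # ['a', 'c', 'b', 'd']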
This is a joint work with Gilad Asharov, Ilan Komargodski, Kartik Nayak, Enoch Peserico, and Elaine Shi.
Bio: Wei-Kai is a third-year PhD student at Cornell, advised by Prof. Elaine Shi. His research focuses on oblivious algorithms, and he is interested in theoretical cryptography in general. Before Cornell, Wei-Kai received his bachelor’s and master’s degrees from National Taiwan University and worked at MStar Semiconductor. He was later supervised by Dr. Kai-Min Chung as a research assistant at Academia Sinica.
Scaling ORAM for Secure Computation
Jack Doerner
Abstract: We design and implement a Distributed Oblivious Random Access Memory (DORAM) data structure that is optimized for use in two-party secure computation protocols. We improve upon the access time of previous constructions by a factor of up to ten, their memory overhead by a factor of one hundred or more, and their initialization time by a factor of thousands. We are able to instantiate ORAMs that hold 2^{34} bytes, and perform operations on them in seconds, which was not previously feasible with any implemented scheme.
Unlike prior ORAM constructions based on hierarchical hashing, permutation, or trees, our Distributed ORAM is derived from the new Function Secret Sharing scheme introduced by Boyle, Gilboa and Ishai. This significantly reduces the amount of secure computation required to implement an ORAM access, albeit at the cost of O(n) efficient local memory operations.
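To see what FSS buys, consider the naive two-server scheme it compresses: share the point function selecting index i as two random bit vectors whose XOR is the unit vector e_i, and let each server XOR together the blocks its share selects. A toy Python sketch (background intuition only; actual FSS replaces the length-n shares with short keys, and the full DORAM construction involves much more):

    import secrets

    db = [b"blockA", b"blockB", b"blockC", b"blockD"]  # held by both servers
    i = 2                                              # client's secret index

    # Shares of the unit vector e_i: each share alone looks uniformly random.
    share0 = [secrets.randbelow(2) for _ in range(len(db))]
    share1 = [s ^ (1 if j == i else 0) for j, s in enumerate(share0)]

    def answer(share):                                 # run by each server
        acc = bytes(len(db[0]))
        for bit, block in zip(share, db):
            if bit:
                acc = bytes(x ^ y for x, y in zip(acc, block))
        return acc

    # XORing the two answers cancels everything except db[i].
    rec = bytes(x ^ y for x, y in zip(answer(share0), answer(share1)))
    assert rec == db[i]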
We implement our construction and find that, despite its poor O(n) asymptotic complexity, it still outperforms the fastest previously known constructions, Circuit ORAM and Square-root ORAM, for datasets that are 32 KiB or larger, and outperforms prior work on applications such as stable matching or binary search by factors of two to ten.
InvisiSpec: Making Speculative Execution Invisible in the Cache Hierarchy
Mengjia Yan
Abstract: Hardware speculation offers a major surface for micro-architectural covert and side channel attacks. Unfortunately, defending against speculative execution attacks is challenging. The reason is that speculations destined to be squashed execute incorrect instructions, outside the scope of what programmers and compilers reason about. Further, any change to micro-architectural state made by speculative execution can leak information.
In this paper, we propose InvisiSpec, a novel strategy to defend against hardware speculation attacks in multiprocessors by making speculation invisible in the data cache hierarchy. InvisiSpec blocks micro-architectural covert and side channels through the multiprocessor data cache hierarchy due to speculative loads. In InvisiSpec, unsafe speculative loads read data into a speculative buffer, without modifying the cache hierarchy. When the loads become safe, InvisiSpec makes them visible to the rest of the system. InvisiSpec identifies loads that might have violated memory consistency and, at this time, forces them to perform a validation step. We propose two InvisiSpec designs: one to defend against Spectre-like attacks and another to defend against futuristic attacks, where any speculative load may pose a threat. Our simulations with 23 SPEC and 10 PARSEC workloads show that InvisiSpec is effective. Under TSO, using fences to defend against Spectre attacks slows down execution by 74% relative to a conventional, insecure processor; InvisiSpec reduces the execution slowdown to only 21%. Using fences to defend against futuristic attacks slows down execution by 208%; InvisiSpec reduces the slowdown to 72%.
Efficient Zero-Knowledge Protocols: The Modular Approach
Yuval Ishai
Abstract: I will discuss the following modular approach for the design of efficient zero-knowledge protocols: (1) design an “information-theoretic” probabilistic proof system; (2) apply a cryptographic compiler to obtain a zero-knowledge protocol whose security is based on cryptographic assumptions. Most efficient zero-knowledge protocols from the literature can be cast into the above framework. The talk will give an overview of different types of proof systems and cryptographic compilers that give rise to a wide range of efficiency and security tradeoffs.
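A tiny end-to-end instance of the recipe: take the Schnorr sigma protocol for knowledge of a discrete logarithm as the information-theoretic proof system, and apply the Fiat-Shamir transform, hashing the transcript to derive the challenge, as the cryptographic compiler. A Python sketch with toy group parameters (illustration only; real deployments use cryptographically sized groups):

    import hashlib, random

    p, q, g = 2039, 1019, 4        # toy order-q subgroup of Z_p*, p = 2q + 1

    def challenge(*vals):          # Fiat-Shamir: transcript hash as challenge
        data = b"|".join(str(v).encode() for v in vals)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    x = random.randrange(1, q)     # prover's witness
    y = pow(g, x, p)               # public statement: y = g^x mod p

    # Prover: commit, derive the challenge, respond.
    k = random.randrange(1, q)
    a = pow(g, k, p)               # sigma-protocol commitment
    c = challenge(g, y, a)         # replaces the verifier's random coins
    z = (k + c * x) % q

    # Verifier: one equation checks the whole proof (a, z).
    assert pow(g, z, p) == a * pow(y, c, p) % p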
Bio: Yuval Ishai is presently a professor of Computer Science at the Technion, Israel. He received his PhD at the Technion, served as a postdoctoral researcher at DIMACS Center, AT&T Labs Research and Princeton University, and spent two extended sabbaticals at UCLA. Yuval’s research focuses mostly on cryptography and its interactions with computational complexity theory. His works were recognized by the FOCS 2004, Crypto 2007, and Crypto 2016 Best Paper Awards and the SIAM Outstanding Paper Prize. He chaired the 2011 Theory of Cryptography Conference and will co-chair the Eurocrypt 2019 and Eurocrypt 2020 conferences. He was inducted as an IACR Fellow in 2018.
Zero-Knowledge from MPC
Carmit Hazay
Abstract: In this talk I will discuss zero-knowledge from MPC based on the general transformation of Ishai et al. (STOC 2007). I will start with a discussion of ZKB++ (Chase et al. CCS 2017), an optimized variant of ZKBoo (Giacomelli et al., USENIX 2016). For the remainder of my talk I will focus on the Ligero scheme (Ames et al. CCS 2017), a recent simple zero-knowledge argument protocol for NP whose communication complexity is proportional to the square-root of the verification circuit size. Similarly to ZKB++, this protocol can be based on any collision-resistant hash function. Alternatively, it can be made non-interactive in the random oracle model, yielding concretely efficient zk-SNARKs that do not require a trusted setup or public-key cryptography.
Ligero can be viewed as a simple zero-knowledge interactive PCP based on “interleaved” Reed-Solomon codes and is attractive not only for very large verification Boolean and arithmetic circuits, but also for moderately large circuits that arise in applications. For instance, for verifying a SHA-256 preimage in zero-knowledge with 2^{-40} soundness error, the communication complexity is roughly 34KB, the prover running time is 140 ms, and the verifier running time is 62 ms. This proof is roughly 4 times shorter than a comparable proof produced by ZKB++.
I will conclude my talk with a more recent paper by Katz et al. (CCS 2018) which continues this line of works in the preprocessing model. Finally, I will mention an important application of zero-knowledge protocols to post-quantum signatures.
Bio: Carmit Hazay is presently an associate professor in the Faculty of Engineering at Bar-Ilan University, Israel. She received her PhD from Bar-Ilan University and served as a postdoctoral researcher at the Weizmann Institute of Science and IDC in Israel, and at Aarhus University, Denmark. Carmit’s research lies in the foundations of cryptography, with a special focus on secure computation and zero-knowledge proofs, both in theory and practice. She co-authored the book ‘Efficient Secure Two-Party Protocols — Techniques and Constructions’.
Hyrax: An IP-based Approach to ZK for Blockchains
Abhi Shelat
Abstract: I will first present Hyrax [Wahby, Tzialla, shelat, Thaler, Walfish 2018], a zero-knowledge argument for NP with low communication complexity, low concrete cost for both the prover and the verifier, and no trusted setup, based on standard cryptographic assumptions (DDH) and the random oracle model. Specifically, communication is *sublinear* in the size of the witness plus $d \cdot \log(G)$, where $d$ is the depth and $G$ is the width of the verifying circuit. When applied to batched or data-parallel statements, the prover’s cost is linear and the verifier’s cost is sub-linear in the verifying circuit size, both with good constants. Together, these properties represent a new point in the tradeoffs among setup, complexity assumptions, proof size, and computational cost. We evaluate Hyrax on three benchmarks (SHA-256 Merkle trees, image transformation, and matrix multiplication) and show better scalability and smaller proof sizes than prior work.
Several other interesting zero-knowledge argument systems with sublinear proof sizes have appeared recently: Ligero, STARKs, Aurora, and Bulletproofs. I will present comparisons between these four systems and share some commentary on how these ideas may lead to even better schemes.
Aurora: Transparent zkSNARKs for R1CS
Nick Spooner
Abstract: Pairing-based zkSNARKs, such as those used in the Zcash protocol, have many advantages: they are non-interactive, short (a few hundred bytes), and cheap to verify (a few milliseconds). Unfortunately, they also have major downsides in terms of security: they rely on relatively heavy asymmetric cryptography (and are, in particular, not post-quantum secure) and, more seriously, they rely on trusted parameter generation for security. A major open problem in this area is to design zero knowledge arguments which share the advantages of pairing-based SNARKs but avoid these downsides.
In this work we present Aurora, a zkSNARK for R1CS which is secure in the random oracle model (with no additional assumptions). Since Aurora uses only symmetric cryptography, it is plausibly post-quantum secure and requires no trusted setup (i.e., it is transparent). We design, analyse and implement the protocol, and show that for circuit problems it achieves better proof length and verification time than Ligero and STARK. Underlying the protocol is an interactive oracle proof of linear size and logarithmic query complexity, which relies on a novel protocol for a univariate version of the sumcheck problem.
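The algebraic fact behind a univariate sumcheck is easy to see concretely: over a multiplicative subgroup H of size m in a finite field, the power sum of a^k over H vanishes unless m divides k, so the sum of a polynomial over H depends only on every m-th coefficient. A quick numeric check in Python (tiny parameters chosen for illustration; this is background for the protocol, not the protocol itself):

    p, m = 97, 8                       # field F_97; m divides p - 1 = 96

    # find an element h of multiplicative order exactly m, so H = <h>
    h = next(x for x in range(2, p)
             if pow(x, m, p) == 1 and all(pow(x, d, p) != 1
                                          for d in range(1, m)))
    H = [pow(h, i, p) for i in range(m)]

    # power sums over H vanish unless m divides k
    for k in range(2 * m + 1):
        s = sum(pow(a, k, p) for a in H) % p
        assert s == (m if k % m == 0 else 0)

    # hence the sum of f over H depends only on coefficients f_0, f_m, ...
    f = [3, 1, 4, 1, 5, 9, 2, 6, 5]    # toy polynomial, f[j] is the x^j coeff
    lhs = sum(sum(c * pow(a, j, p) for j, c in enumerate(f)) for a in H) % p
    rhs = m * sum(f[j] for j in range(0, len(f), m)) % p
    assert lhs == rhs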
Bulletproofs (and beyond?): Designing Efficient Zero-Knowledge Proofs
Jonathan Bootle
Abstract: Bulletproofs is a zero-knowledge proof system which can be used to give short proofs that a secret number lies in a given range, without revealing anything about the number. Bulletproofs was recently deployed by Monero and has helped reduce transaction sizes, and transaction fees, dramatically. Bulletproofs does not have a trusted setup phase and uses only the same cryptographic ingredients as an ECDSA signature. But how does Bulletproofs work? This talk will explain the main design ideas behind the Bulletproofs proof system. Time permitting, we may also discuss what happens when we try to use the same ideas to build zero-knowledge proofs which are secure against quantum computers.
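The arithmetic core of such a range proof is simple to state: to show a secret v lies in [0, 2^n), the prover commits to the bits of v and proves that each committed value is a bit and that the bits recompose v. A worked Python check of exactly those constraints (the constraints only; Bulletproofs proves them in zero knowledge inside commitments, via a logarithmic-size inner-product argument):

    v, n = 42, 6                                # claim: 0 <= v < 2**n

    bits = [(v >> i) & 1 for i in range(n)]     # prover's secret bit vector

    # each b_i is a bit: b_i * (b_i - 1) == 0
    assert all(b * (b - 1) == 0 for b in bits)

    # the bits recompose v: sum_i b_i * 2^i == v
    assert sum(b << i for i, b in enumerate(bits)) == v

    # together, the two constraint families force 0 <= v < 2**n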
Bio: I am a postdoctoral researcher in the area of cryptography, working at IBM Research, Zurich. I am currently working on efficient zero-knowledge proofs, especially those based on lattice assumptions. Before joining IBM, I was a PhD student at University College London, supervised by Dr Jens Groth and Dr Sarah Meiklejohn.
Zether: Towards Privacy in a Smart Contract World
Benedikt Bünz
Abstract: Decentralized smart contract platforms such as Ethereum have recently become immensely popular. Unlike Bitcoin’s transaction-based system, smart contracts are arbitrary pieces of code that sit on a blockchain and facilitate fine-grained control over the distribution of assets. Unfortunately, as of today, there is no fully-decentralized way to conduct transactions privately on such a powerful platform. In this paper, we propose Zether, a private payment mechanism that is compatible with Ethereum and other account-based payment systems. Zether can provide both confidentiality (by hiding payment amounts) and anonymity (by hiding the identities of senders and recipients).
We take an account-based approach similar to Ethereum for efficiency and usability. Namely, we implement a new Ethereum smart contract that stores confidential account information for users and provides privacy-preserving functionalities for depositing, transferring, and withdrawing funds to/from these accounts with only a small overhead. We describe techniques to protect Zether against replay attacks and front-running situations, and provide mechanisms to allow Zether to interoperate with arbitrary smart contracts to support applications such as sealed-bid auctions, payment channels, stake voting, and confidential proof-of-stake. As part of our protocol, we propose Sigma-Bullets, an improvement of the existing zero-knowledge proof system Bulletproofs. $\Sigma$-Bullets make Bulletproofs more interoperable with Sigma protocols, which is of general interest.
Bio: Benedikt Bünz is a 3rd year PhD student at Stanford University, advised by Dan Boneh. His research is in applied cryptography, with a special focus on cryptocurrencies. He has worked on zero-knowledge proofs, verifiable delay functions, cryptocurrency light clients and cryptographic accumulators.
Decentralized Mining in Centralized Pools
Jiasun Li
Abstract: A permissionless blockchain’s well-functioning relies on adequate decentralization, yet the rise of mining pools that provide risk sharing leads to centralization, calling into question the viability of such systems. We show that mining pools, as a financial innovation, significantly exacerbate the arms race and thus the energy consumption of proof-of-work-based blockchains. Nevertheless, cross-pool diversification and endogenous pool fees generally sustain decentralization: dominant pools better internalize the mining externality, charge higher fees, attract disproportionately fewer miners, and thus grow more slowly. Consequently, aggregate growth in mining pools may not be accompanied by over-concentration of pools. Empirical evidence from Bitcoin mining supports our model predictions, and the economic insight applies to many other blockchain protocols. I will also introduce a related paper, “Diversification across pools: Optimal mining strategies under PoW”, which implements some results of this paper.
Scaling Nakamoto Consensus to Thousands of Transactions per Second
Fan Long
Abstract: I will present Conflux, a fast, scalable and decentralized blockchain system that optimistically processes concurrent blocks without discarding any as forks. The Conflux consensus protocol represents relationships between blocks as a directed acyclic graph and achieves consensus on a total order of the blocks. From the block order, Conflux then deterministically derives a total order on transactions as the blockchain ledger. We evaluated Conflux on Amazon EC2 clusters with up to 20k full nodes. Conflux achieves a transaction throughput of 5.76 GB/h while confirming transactions in 4.5 to 7.4 minutes. The throughput is equivalent to 6400 transactions per second for typical Bitcoin transactions. Our results also indicate that when running Conflux, the consensus protocol is no longer the throughput bottleneck; the bottleneck is instead the processing capability of individual nodes.
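The core data-structure step, turning a block DAG into one ledger, can be illustrated with a deterministic topological sort: every node that sees the same DAG derives the same total order. A toy Python sketch (a generic tie-broken topological sort for illustration, not Conflux’s actual ordering rule):

    from collections import defaultdict

    # toy block DAG: each block lists the blocks it references
    edges = {"genesis": [], "A": ["genesis"], "B": ["genesis"], "C": ["A", "B"]}

    def total_order(edges):
        # Kahn's algorithm with a deterministic tie-break (sorted by id),
        # so every node derives the same ledger order from the same DAG.
        indeg = {b: len(ps) for b, ps in edges.items()}
        children = defaultdict(list)
        for b, ps in edges.items():
            for parent in ps:
                children[parent].append(b)
        ready = sorted(b for b, d in indeg.items() if d == 0)
        order = []
        while ready:
            b = ready.pop(0)
            order.append(b)
            for c in children[b]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    ready.append(c)
            ready.sort()
        return order

    print(total_order(edges))  # ['genesis', 'A', 'B', 'C']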
Bio: Fan Long is an Assistant Professor at the University of Toronto. Fan graduated from Tsinghua University’s Yao Class and holds a PhD in Computer Science from MIT. His research interests include systems security, programming languages, and blockchain. He is a recipient of the ACM SIGSOFT Outstanding Dissertation Award and the MIT Best Dissertation Award. He has more than 20 publications at top computer science conferences, with more than 1000 citations.
An Economic Analysis of the Bitcoin Payment System
Jacob Leshno
Abstract: We model Bitcoin as a platform that intermediates between users and miners, and use the model to derive closed-form formulas for fees and waiting times and study their properties; to compare the economics of the Bitcoin payment system (BPS) to that of a traditional payment system operated by a profit-maximizing firm; and to suggest protocol design modifications that enhance the platform’s efficiency.
Bio: Jacob Leshno is an Assistant Professor of Economics at the University of Chicago Booth School of Business. Professor Leshno uses game theory, applied mathematics and microeconomic theory to study allocation mechanisms and the design of marketplaces. His research looks into the design of school choice procedures that guide the allocation of students to schools, the assignment of patients to nursing homes, and the design of decentralized cryptocurrencies.
Before joining Chicago Booth, Jacob Leshno was an Assistant Professor in the Decision, Risk & Operations group at the Graduate School of Business, Columbia University. He spent a year as a post-doctoral researcher at Microsoft Research New England, and previously worked at Yahoo! and IBM. Jacob holds a PhD in economics from Harvard University, completed under the supervision of Prof. Alvin Roth, and an M.Sc. and B.Sc. in pure math from Tel Aviv University.
HotStuff: BFT Replication in the Lens of Blockchain
Ted Yin
Abstract: We present HotStuff, a new BFT replication protocol in the partially synchronous model. In HotStuff, a correct leader, once designated, makes progress without waiting for the maximum network delay, while incurring communication that is linear in the number of replicas. To our knowledge, HotStuff is the first partially synchronous BFT replication protocol exhibiting these combined properties. HotStuff is built around a novel framework that forms a bridge between classical BFT foundations and blockchains. It allows the expression of other known protocols (DLS, PBFT, Tendermint, Casper), and ours, in a common framework.
Our deployment of HotStuff over a network with over 100 replicas achieves throughput and latency comparable to that of BFT-SMaRt, while enjoying linear communication footprint during proposer failover (vs. quadratic with BFT-SMaRt).
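A schematic of the vote pattern behind the linear communication claim: in each phase, every replica sends one signed vote to the leader, and the leader aggregates n - f matching votes into a quorum certificate (QC), so message complexity stays linear in the number of replicas. A simplified Python sketch (hypothetical message shapes for illustration, not the real protocol logic):

    n, f = 4, 1                                  # replicas and fault tolerance

    def form_qc(votes, phase, block):
        # Leader-side aggregation: a QC certifies that n - f replicas
        # voted for the same block in the same phase.
        matching = [v for v in votes
                    if v["phase"] == phase and v["block"] == block]
        if len(matching) >= n - f:
            return {"phase": phase, "block": block, "votes": matching}
        return None

    votes = [{"replica": r, "phase": "prepare", "block": "B7"}
             for r in range(3)]
    assert form_qc(votes, "prepare", "B7") is not None   # 3 votes >= n - f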
Bio: Ted is a third-year CS PhD student at Cornell University, advised by Prof. Emin Gün Sirer. He is generally interested in designing and building practical distributed systems. Currently, he focuses on fundamental problems in fault tolerance, consensus protocols, and peer-to-peer systems. In general, he keeps a simple but vital goal: to build systems that are provably correct and practically effective.
Initial Coin Offerings and Platform Building
Jiasun Li
Abstract: In a typical initial coin offering (ICO), an entrepreneur pre-sells digital tokens which will later serve as the medium of exchange on a peer-to-peer platform. We present a model rationalizing ICOs for launching such platforms: By transparently distributing tokens before the platform operation begins, an ICO overcomes later coordination failures during platform operation, induced by a cross-side network effect between transaction counterparties. Furthermore, a critical-mass requirement that arises from an endogenous same-side network effect during the ICO rationalizes several empirical patterns observed in ICO structures. Our model provides guidance for both regulators and practitioners to discern economically valuable ICOs.
Bio: Li has presented his research at many prestigious institutions and conferences, including MIT, NYU, Yale, the National Bureau of Economic Research, the Federal Reserve Bank, and the U.S. Securities and Exchange Commission. He is a winner of the 2016 Yihong Xia Best Paper Award and the 2014 Chicago Quantitative Alliance academic paper competition. He is also a frequent speaker for investment professionals. Dr. Li received his Ph.D. in finance from the UCLA Anderson School of Management and a B.S. in mathematics from Fudan University (Shanghai, China) prior to joining George Mason.
Append-only Authenticated Dictionaries and Their Applications (AADs)
Alin Tomescu
Abstract: Transparency logs allow users to audit a potentially malicious service, paving the way towards a more accountable Internet. For example, Certificate Transparency (CT) enables domain owners to audit Certificate Authorities (CAs) and detect impersonation attacks. Yet to achieve their full potential, transparency logs must be efficiently auditable. Specifically, everyone should be able to verify both (non)membership of log entries and that the log remains append-only. Unfortunately, current transparency logs either provide small-sized (non)membership proofs or small-sized append-only proofs, but never both. In fact, one of the proofs always requires bandwidth linear in the size of the log, making it expensive for everyone to audit the log and resulting in a few “opaque” trusted auditors. In this paper, we address this gap with a new primitive called an append-only authenticated dictionary (AAD). Our construction is the first to achieve (poly)logarithmic size for both proof types.
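For context, the membership half is the part existing logs already handle well: a Merkle tree over the log entries yields logarithmic-size membership proofs. A compact Python sketch of that classical building block (the contribution of AADs is making the append-only proofs similarly small, which this sketch does not address):

    import hashlib

    def H(x):
        return hashlib.sha256(x).digest()

    def build(leaves):
        # levels[0] holds the leaf hashes; levels[-1][0] is the root
        levels = [[H(l) for l in leaves]]
        while len(levels[-1]) > 1:
            cur = levels[-1][:]
            if len(cur) % 2:
                cur.append(cur[-1])          # duplicate an odd last node
            levels.append([H(cur[i] + cur[i + 1])
                           for i in range(0, len(cur), 2)])
        return levels

    def prove(levels, idx):
        # membership proof: one sibling hash per level
        path = []
        for lvl in levels[:-1]:
            lvl = lvl + [lvl[-1]] if len(lvl) % 2 else lvl
            sib = idx ^ 1
            path.append((lvl[sib], sib < idx))
            idx //= 2
        return path

    def verify(root, leaf, path):
        acc = H(leaf)
        for sib, sib_is_left in path:
            acc = H(sib + acc) if sib_is_left else H(acc + sib)
        return acc == root

    levels = build([b"cert-a", b"cert-b", b"cert-c", b"cert-d"])
    assert verify(levels[-1][0], b"cert-c", prove(levels, 2))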
Automatic Exploitation of Ethereum Smart Contracts
Johannes Krupp
Abstract: A distinguishing feature of Ethereum is its powerful scripting capability through Smart Contracts — small programs running on the Ethereum Virtual Machine (EVM). While this enables interesting new applications, it also provides an opportune setting for attacks, as security vulnerabilities in smart contracts are tightly intertwined with financial gain.
This talk presents teEther, our tool for automatic smart contract exploitation. We will derive a generic definition of vulnerable smart contracts, detail the challenges of program analysis in the context of EVM bytecode, and present the results of our large-scale study of 38,000 smart contracts from the Ethereum blockchain.
Bio: Johannes Krupp is a researcher at the CISPA Helmholtz Center i.G. in Saarbrücken, Germany, working with Christian Rossow on system security. He joined CISPA as a Ph.D. candidate after receiving a bachelor’s degree in computer science from Saarland University in 2014. His research interests include threat detection and defenses on both the network and application layers.
Practical Data Usage Control through a TEE-based Protocol
Raymond Gao
Abstract: Data usage control refers to a data owner’s ability to enforce a policy over how data can be used, even after it has been released. Law enforcement would need such a tool to oversee the provenance and proper usage of personal data. Businesses would require such a tool to share sensitive data, for example medical data, to advance their research. Similarly, the creators of music, video, or other artistic works want their intellectual property rights to be respected when others use their creations. For applications ranging from enforcing privacy-preserving computation on personal data to protecting trade secrets and intellectual property rights in creative work, the concept of usage policy enforcement beyond simple access control has been studied in academia for years: researchers such as A. Pretschner and F. Kelbert have constructed theoretical frameworks for both enforcement and policy specification, and have formalized the requirements for a fully decentralized usage control infrastructure. Although these results have long been available, only recent developments in TEE and blockchain technology hold the keys to making practical data usage control a mainstream reality. We introduce a novel data utilization protocol that realizes this vision by marrying TEEs with data usage policies, and discuss the scope and significance of the types of applications it makes possible.
Bio: Raymond Gao is a co-founder of Covalent and a PhD candidate in Mechanical and Aerospace Engineering at Princeton University. His research interests include high-performance computing, computational physics, and numerical simulation. He holds a B.S. in Applied Physics from Peking University.
Automated Vulnerability Detection for Blockchain Ecosystems
James Yang
Abstract: Recent incidents have shown that blockchain software vulnerabilities can lead to significant financial losses. In this talk we present practical techniques that detect vulnerabilities in smart contracts and Ethereum virtual machines (EVMs). The techniques include fuzz testing and symbolic execution, which have been proven effective for traditional software. By optimizing these techniques for blockchain ecosystems, we are able to detect vulnerabilities in both smart contracts and EVMs.
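To make the fuzz testing ingredient concrete, here is the skeleton of a coverage-guided mutation fuzzer; the hypothetical run_target callback stands in for executing an instrumented contract or EVM and reporting whether new code was covered (a generic sketch, not the specific tools presented in the talk):

    import random

    def fuzz(run_target, seed, iterations=1000):
        # classic mutate-execute-keep loop of coverage-guided fuzzing
        corpus, crashes = [bytearray(seed)], []
        for _ in range(iterations):
            data = bytearray(random.choice(corpus))
            for _ in range(random.randint(1, 4)):       # flip a few bytes
                data[random.randrange(len(data))] = random.randrange(256)
            try:
                found_new_coverage = run_target(bytes(data))
            except Exception:
                crashes.append(bytes(data))             # potential vulnerability
                continue
            if found_new_coverage:                      # keep interesting inputs
                corpus.append(data)
        return crashes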
Bio: James Yang is the founder of Netta Labs, which focuses on testing and verification of emerging software systems. He is also a professor at Western Michigan University and Xi’an Jiaotong University. Yang obtained his PhD from the University of Pennsylvania. He has published over 80 papers and received awards such as the ACM SIGSOFT Distinguished Paper Award and the ACM TODAES Best Paper Award.
Code Similarity–based Cryptocurrency Genogram
Ting Liu
Abstract: Since the remarkable success of Bitcoin, thousands of blockchain-based cryptocurrencies have been released in recent years. Many questions surround cryptocurrencies, such as serious plagiarism problems, price prediction, and various security issues. In this work, we collected hundreds of C/C++ cryptocurrency projects and calculated their code similarity to identify their genogram. We demonstrate two interesting results that use the genogram: price prediction and vulnerability detection in cryptocurrencies.
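One standard way to compute code similarity at this scale is Jaccard similarity over token n-grams, sketched below in Python (a generic technique for illustration; the exact metric used in this work may differ):

    def ngrams(tokens, n=3):
        # the set of contiguous token n-grams in one source file
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def jaccard(tokens_a, tokens_b, n=3):
        # similarity in [0, 1]: shared n-grams over all n-grams
        A, B = ngrams(tokens_a, n), ngrams(tokens_b, n)
        return len(A & B) / len(A | B) if A | B else 0.0

    fork   = "int main ( ) { return mine ( blocks ) ; }".split()
    parent = "int main ( ) { return mine ( chain ) ; }".split()
    print(round(jaccard(fork, parent), 2))  # high score suggests shared ancestry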
Bio: Ting Liu is a Professor at Xi’an Jiaotong University, China. He received his B.S. and Ph.D. degrees from Xi’an Jiaotong University in 2003 and 2010, respectively. He was a visiting professor at Cornell University from 2016 to 2017. His research interests include software engineering and smart grids.